AMD is set to close out the year on a high note. As promised, the company will be delivering its latest 16-core Ryzen 9 3950X processor, built with two 7nm TSMC chiplets, to the consumer platform for $749. Not only this, but AMD today has lifted the covers on its next generation Threadripper platform, which includes Zen 2-based chiplets, a new socket, and an astounding 4x increase in CPU-to-chipset bandwidth.
At this point it’s starting to feel like kicking Intel when they’re down.
Intel has been kicking its customers for the past 10 years. Time for a change.
This. And Intel was never shy about kicking AMD when they were down. AMD had its flaws too, of course.
Thom Holwerda,
It sounds like you are prejudiced against Intel. To be fair, the market needs more competition for Intel, no doubt about that, and I get it. Obviously you were put off by Intel’s marketing claims in a previous article, but in light of that it feels like you haven’t learned the lesson: you are still taking inherently biased marketing materials at face value. All companies, including AMD and Intel, want to paint their products in the best light, and they will take advantage of our selective hearing whether it’s representative of the truth or not. This is why it’s always best to wait for independent benchmarks and reviews before passing judgment. This is true of any company’s products.
Traditionally these massively parallel CPUs (including AMD’s) suffer from rather extreme SMP bottlenecks for generic multithreaded workloads, but I’m curious to see how much they can improve this as the benchmarks and reviews come in. In any case, I hope AMD’s new product lineup is awesome and spices up the CPU market.
I didn’t think that statement was based on prejudice. Intel has been forced to cut prices and has been marred by troubles in the move to 10 nm. So bringing 16-core mainstream processors and 32-core HEDT parts to market will only make Intel’s nightmares worse. And the cheap Athlon part announced alongside these will hit at the bottom end. Intel is not expected to come up with any effective response to these.
hdjhfds,
You are making assumptions about Intel’s inability to respond to a yet-unreleased AMD processor based on marketing material, but the whole point is that marketing material is generally not that reliable a source. I’m happy for AMD to get ahead now and then; it’s good for competition that no one player stays permanently on top. I actually wish AMD would do better on the GPU side too, because right now the uncontested top spot is going to Nvidia, and they’re absolutely ripping off that segment of the market.
Erm, how about "No" (with a big side order of "WTF")?
Last year Intel screwed up. They started switching 14 nm fabs to 10 nm, but then 10 nm was delayed, effectively taking those fabs offline. Then there was a major increase in high-end server and data center demand (a record high – something like a 40% increase in data center orders). Intel couldn't fill demand, so they started cutting production of low-end chips to help meet demand for high-end (high-profit) chips.
They're still struggling to meet demand. Last I heard, they're going to keep struggling until 10 nm rolls out properly next year. It annoys the shit out of me because I want to build a nice quiet little server/workstation out of a Xeon E-2288G (with 8 cores, ECC, and integrated graphics) and I can't, because these chips are so rare that nobody in my country is able to sell them (I suspect they all get sold in the USA and there's none left to export, leaving the "4-core" and "no GPU" scraps for everyone else).
Now, what's the one thing you do not do if you can't meet demand? You don't increase demand (and reduce profit) by cutting prices. That's just plain idiotic.
Brendan,
I have not seen price drops either. Last year I was affected by shortages, but I assumed those were a consequence of the trade war, with people and companies stockpiling. I don’t know where Intel sits with regard to Trump’s tariffs (I know they have a fab in China), but many vendors had to absorb at least part of the higher taxes on Chinese imports (10-25%). Apparently the Trump administration is planning a new round of tariffs on December 15.
It looks like the Xeon equivalent of the 9900K, with ECC.
https://ark.intel.com/content/www/us/en/ark/products/193743/intel-xeon-e-2288g-processor-16m-cache-3-70-ghz.html
Hmm, I took a look and I’m having trouble finding it here in the US too, even at enterprise vendors.
I see plenty of other options for 8-core (and higher) Xeons, like the following:
https://ark.intel.com/content/www/us/en/ark/products/193389/intel-xeon-silver-4215-processor-11m-cache-2-50-ghz.html
It’s not clocked as fast and doesn’t have a GPU. It does have more PCIe lanes and memory channels.
In my own experience with the i9-9900K, the limited memory channels can impact parallel performance for extremely memory-heavy workloads; the Xeon E-2288G is probably the same. Although it’s still a beast, I do wish it had more PCIe lanes, since not all PCIe slots have dedicated bandwidth: some of the slots share bandwidth with NVMe, Ethernet, and USB. If you’re using the integrated GPU, though, that frees up a lot.
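To make the memory-channel ceiling concrete, here’s a minimal sketch (my own illustration, nothing from the actual workloads above; the array size and thread counts are arbitrary assumptions) of the kind of memory-bound loop where adding cores stops helping once the channels saturate:

    /* bandwidth_scaling.c -- STREAM-style triad; throughput should plateau
     * once the memory channels saturate, no matter how many cores you add.
     * Build: gcc -O2 -fopenmp bandwidth_scaling.c -o bandwidth_scaling
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <omp.h>

    #define N (64 * 1024 * 1024)  /* three ~512 MiB arrays, far bigger than cache */

    int main(void) {
        double *a = malloc(N * sizeof *a);
        double *b = malloc(N * sizeof *b);
        double *c = malloc(N * sizeof *c);
        if (!a || !b || !c) return 1;
        for (long i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

        int max_threads = omp_get_max_threads();
        for (int t = 1; t <= max_threads; t *= 2) {
            omp_set_num_threads(t);
            double t0 = omp_get_wtime();
            #pragma omp parallel for
            for (long i = 0; i < N; i++)
                a[i] = b[i] + 3.0 * c[i];   /* moves 3 doubles per iteration */
            double dt = omp_get_wtime() - t0;
            printf("%2d threads: %6.2f GB/s\n",
                   t, 3.0 * N * sizeof(double) / dt / 1e9);
        }
        free(a); free(b); free(c);
        return 0;
    }

On a dual-channel part like the 9900K I’d expect the GB/s figure to flatten out after just a few threads, which is exactly the effect being described.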
If you want “6+ cores, with ECC, with integrated graphics”, there’s literally no alternative (from Intel or AMD) to the unavailable chips in Intel’s “Xeon E-2xxxG” range.
Notes: I want 6+ cores because anything less is a performance downgrade from what I’m using now. I want ECC because it’ll be acting as a home server (providing DHCP, NTP, FTP, HTTP, NFS, and Samba to the rest of my home LAN) and never turned off. I want integrated graphics (which is more than adequate for things like web browsing, etc.) to reduce power consumption and fan noise (given that the computer spends a lot of time idle), and to increase the chance that it’ll work properly (I’ve had problems with video drivers for Linux in the past – a combination of open source drivers sucking and “X11 updates” breaking proprietary video drivers).
Of course part of the problem is that I’m trying to make one computer do 2 roles (home server, and software development workstation) to reduce maintenance hassle and because I’ve become used to not needing to wait for a workstation to boot.
Brendan,
Well, of those things, you might have to give up on CPU-integrated graphics, but many Xeon server motherboards have an onboard controller anyway.
Some of my servers use ECC, but IMHO ECC is more of an insurance policy against corruption that is very uncommon in practice. I’d say unreliable/unclean power has been a much greater threat, and IMHO a good power supply is the most important thing for reliable 24/7 operation. Even a UPS isn’t always enough; I’ve found that overprovisioning a power supply by a factor of two makes things a lot more robust. It’ll have more capacitance and therefore a longer hold-up time between the power outage and the UPS coming online.
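Rough numbers, purely my own back-of-the-envelope (the capacitance and voltages below are made up but plausible): what buys you hold-up time is the usable energy in the bulk capacitor between the rectified mains peak and the minimum input the regulator tolerates, roughly

    t_hold ≈ C · (V_peak² − V_min²) / (2 · P_load)

With C = 330 µF, V_peak ≈ 325 V (rectified 230 V mains), and V_min ≈ 250 V, that’s about 7 J of usable energy: roughly 18 ms at a 400 W load but 36 ms at 200 W. So the same box on a PSU rated for twice the draw (which also tends to ship with bigger capacitors) rides through roughly twice as long while the UPS relay clicks over.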
Over the years I’ve had a ton of Linux graphics problems too, and I hate to say it, but it’s still a problem. On my new system I’m using an Nvidia card and drivers, and twice now I’ve had updates break graphics output. On top of that, it’s not the easiest thing to install in the first place. I’d say Nvidia’s installer is unfit for purpose, since it refuses to install while X is running, even in unaccelerated mode. Someone without experience fixing boot and driver issues under the hood would clearly have to bring their computer in for repair. Nvidia deserves a big fat “F” grade here. I heard that Ubuntu was given permission by Nvidia to redistribute and bundle the proprietary driver, which I haven’t given a shot yet.
By contrast, Intel’s drivers have always worked perfectly for me, so I definitely understand why you want an Intel GPU.
For servers, this usually doesn’t matter though. I install the headless versions that don’t even ship a graphical environment in the first place, so text mode is sufficient. I feel it’s better to keep the server and workstation separate so I can dual boot/reboot/reinstall the workstation independently of the server, but that’s just me.
Anyways, let us know what you end up doing.
I’m not worried for AMD. They never had the impossibly high amount of cash Intel has thanks to its historical position, yet they managed to stay afloat through incredible creativity in solving problems and imagining new things that went on to become de facto standards in the industry: AMD64, for instance. As for the SMP bottleneck, considering their multicore and GPU technologies, I’m sure they have something up their sleeves to address the issue.
Kochise,
I understand what you are saying, although I find it kind of regrettable that while Intel was trying to kill off x86, AMD swept in and gave it a brand-new lease on life. That was our best chance at opening up to new architectures; now the x86 monopoly continues on desktops and servers.
Well, most of the performance advantages inside GPUs cannot be directly applied to the CPU, because x86 is not so great for VLIW computation. Generally, the main way of adding more parallelism with x86 is adding more cores, and AMD has gone all in. That’s impressive and all, but SMP inherently starts to suffer around 4-6 cores and just gets worse beyond that. Obviously NUMA configurations are used to eliminate the hardware bottlenecks, but typical SMP software often performs badly on such configurations, and previous Ryzen CPUs suffered so much that AMD introduced “game mode” to artificially limit core counts.
I expect that drivers/OS software will get better at automatically optimizing efficiency by pegging processes to local NUMA regions and sacrificing parallelism, so that at least performance doesn’t degrade with more cores (a sketch of what that pegging looks like is below). However, unless the software industry starts to embrace the NUMA paradigm, it’s extremely difficult to get that many cores to scale well. At the end of the day I’m not even sure it’s worth it outside of niche use cases in the data center. Superscalar architectures put so much effort into parallelizing sequential code, but they fail to take advantage of easy parallelism, to say nothing of the Spectre vulnerabilities that are a byproduct of out-of-order execution. I still think that GPGPU / FPGA are more promising for massively parallel computation.
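For what it’s worth, here’s a minimal sketch (my own illustration, assuming Linux with libnuma installed; the node number and buffer size are arbitrary) of what pegging a process and its memory to one NUMA node looks like:

    /* numa_pin.c -- run the calling thread on NUMA node 0 and allocate
     * its working set there, so every access stays node-local.
     * Build: gcc numa_pin.c -o numa_pin -lnuma
     */
    #include <stdio.h>
    #include <numa.h>

    int main(void) {
        if (numa_available() < 0) {
            fprintf(stderr, "no NUMA support on this system\n");
            return 1;
        }
        numa_run_on_node(0);                    /* pin CPU placement to node 0 */

        size_t len = 64 * 1024 * 1024;
        char *buf = numa_alloc_onnode(len, 0);  /* allocate from node 0's memory */
        if (!buf) return 1;
        for (size_t i = 0; i < len; i += 4096)
            buf[i] = 1;                         /* touch each page so it faults in locally */

        numa_free(buf, len);
        return 0;
    }

The same effect can be had from outside with numactl --cpunodebind=0 --membind=0, which is roughly what I’d expect schedulers to start doing automatically.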
While many of your points are valid, I think a more realistic threshold is 8 cores. Over 8, it starts to get bad.
That said, there are workloads on PCs that can benefit from more cores now. Most people fixate on one app running when they discuss performance issues. A lot of the issues can be addressed with better schedulers that keep apps on the same module, or that use the faster cores when the workload is small. There are already power plans out there for higher-end AMD chips that get up to 200 MHz more boost.
As for the article, it wasn’t biased against Intel. Intel likes to put out bold claims; both companies are bad about this. I haven’t believed marketing material since the AMD FX series launch, but I also see FUD campaigns from Intel about core counts too.
Not common, but a real-world problem I have is building a lot of software packages fast. My package build cluster software can run parallel jobs on the same host, and most ports will use multiple cores for compiling if available as well. It’s typically network or disk bound depending on the setup. I can saturate an 8-core Ryzen 7 2700 now by using a memory disk for the build environment (see the sketch below).
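For anyone wanting to try the memory-disk trick, here’s roughly what it looks like (my own sketch; the sizes and paths are made up). On Linux a tmpfs does the job:

    # create a RAM-backed filesystem and build inside it
    mount -t tmpfs -o size=16g tmpfs /mnt/build
    make -C /mnt/build/work -j8

On FreeBSD-style systems (which is what I’d guess a ports-based build cluster runs) the equivalent is a swap-backed md device created with mdconfig -a -t swap -s 16g, then newfs’d and mounted the same way. Either way the win is that object files and temp junk never touch the disk.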
As much as you think Thom was biased against Intel, your comments feel biased against AMD to some degree. I don’t know if that’s true, but I think you need to evaluate that too. At this point, we’ve already seen AMD Zen 2 chips in the wild, and they do somewhat live up to the hype. My wife upgraded a 5820K to a Ryzen 7 3700X and it’s a blowout. A lot of that is simply improvements to the motherboards, but even single-core performance is better. It’s also significantly faster than my 2700. So if AMD can deliver that much with their low-TDP 8-core chip, I do have some confidence the new chip will be pretty impressive.
The key here is to not get caught up in market segments. AMD has been advertising this as a gaming chip, and it’s clearly not. Intel has been attacking it as only a gaming chip, which it’s not either. Right now gamers can use 8 cores, with streaming + gaming at the same time.
As much as I agree with you about diminishing returns on adding cores, we’ve also hit walls on die shrinks, especially Intel. That’s the only play right now.
laffer1,
Well, the loss of efficiency is measurable even with just 4 threads, but I suppose the added performance of the extra cores may nevertheless be worth the reduction in efficiency for individual threads. Also, it obviously depends on the specifics of the workload: non-local memory-intensive algorithms will fare far worse with more cores than CPU-intensive operations. Once you reach the memory bottleneck for your algorithm, the benefit of adding more cores is totally counteracted by the loss of performance on the other cores.
I think it can be summed up by observing that CPUs are easy to scale, but shared memory bandwidth is not. NUMA adds memory scalability, but it’s far less helpful for shared-memory algorithms, and it requires a lot of work to reengineer software to eliminate costly shared-memory usage (the sketch below shows one small example of that kind of reengineering).
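As one concrete example of NUMA-aware reengineering (my own sketch, not something from the thread): on Linux, pages physically land on the node of the thread that first writes them, so initializing an array in parallel with the same schedule as the later compute loop keeps each thread’s pages local instead of piling everything on node 0:

    /* first_touch.c -- parallel "first touch" initialization so each
     * thread faults in the pages it will later compute on.
     * Build: gcc -O2 -fopenmp first_touch.c -o first_touch
     */
    #include <stdlib.h>
    #include <omp.h>

    #define N (64 * 1024 * 1024)

    int main(void) {
        double *a = malloc(N * sizeof *a);
        if (!a) return 1;

        /* First touch in parallel: pages land on each thread's own node. */
        #pragma omp parallel for schedule(static)
        for (long i = 0; i < N; i++)
            a[i] = 0.0;

        /* Later loops with the same static schedule now hit local memory. */
        #pragma omp parallel for schedule(static)
        for (long i = 0; i < N; i++)
            a[i] += 1.0;

        free(a);
        return 0;
    }

It’s a tiny change, but it’s exactly the kind of thing existing shared-memory code doesn’t do, which is why dropping it onto a NUMA box often disappoints.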
Technically I was referring to Thom’s bias and not necessarily the article. I agree with you that marketing material should always be taken with a grain of salt, that’s essentially the point I was trying to make in the top post.
No doubt. There are good use cases for many cores, particularly when shared memory is not required. For me, it’s hosting.
What evidence do you have for this: “A lot of that is simply improvements to the motherboards but even single core performance is better”?
https://www.techradar.com/reviews/amd-ryzen-7-3700x
http://hwbench.com/cpus/amd-ryzen-7-3700-vs-intel-core-i9-9900k
https://www.forbes.com/sites/antonyleather/2019/09/27/amd-ryzen-7-3800x-versus-intel-core-i9-9900k-whats-the-best-8-core-processor/#244c2b4b38a3
I really try to form my opinion objectively around factual data. I hope that I’m not so stubborn that you couldn’t convince me I’m wrong if you’ve got data to back it up, but in my experience Intel CPUs are faster on single-threaded workloads, and until we get third-party benchmarks that show otherwise, I won’t give AMD the benefit of the doubt. Beating Intel is hard; if third-party benchmarks ultimately show that happening, then seriously, that’s great, but until then it’s just too early to take their claims as fact. I do not agree that reserving judgment until such time represents bias against AMD.
I think that perhaps pointing out AMD bias was interpreted as Intel bias. However, I’m no Intel fanboy, and in fact I really was open to buying an AMD rig last year (same with the graphics card, actually), but the data did not show Ryzen performing as well for my workloads at the time. I will be happy for AMD, and for competition overall, if AMD pulls ahead on single-threaded performance. They certainly have the advantage on the power front until Intel manages to shrink its die process. Also, while I’m happy with the 9900K’s performance, I wish it had more PCIe lanes and a better memory controller like a server-grade processor; those are areas where Ryzen is ahead.
It’s interesting that if I were to build a gaming rig, Intel’s got marginal advantages, but the best CPUs from AMD and Intel are such overkill on both sides of the fence. The GPU does all the heavy lifting. If we were to take a savvy gamer and put them in front of a black box, I doubt they’d be able to guess what’s inside better than chance, haha.
It seems to me that Ryzen is quite an excellent processor, although Intel is good too. At the moment I have a Ryzen 5 and I am very pleased.