Since AMD’s relaunch into high-performance x86 processor design, one of the company’s fundamental targets has been to be a competitive force in the data center. By offering a competitive product that customers can trust, the goal has always been to deliver what the customer wants, and subsequently grow market share and revenue. Since the launch of 3rd Generation EPYC, AMD has been growing its enterprise revenue at a good pace; however, questions always turn to what the roadmap might hold. In the past, AMD disclosed that its 4th Generation EPYC, known as Genoa, would be coming in 2022 with Zen 4 cores built on TSMC 5nm. Today, AMD is expanding the Zen 4 family with another segment of cloud-optimized processors called Bergamo.
As part of AMD’s Data Center event today, the company is showcasing that its 4th Generation EPYC roadmap will consist of two segments: Genoa, with up to 96 Zen 4 cores, and Bergamo, with up to 128 Zen 4c cores. Not only are we getting official confirmation of core counts, but AMD is disclosing that Bergamo will be using a different type of core: the Zen 4c core.
Imagine how much faster I could translate on one of these.
Thom Holwerda,
Question for Thom and everyone else: What would you do if you actually had one of these?
Browse the internet, of course!
Joking aside, I do some development, but most of my needs are single-threaded in nature. While a few extra cores do help, I quickly reach a point of diminishing returns. I cannot fathom having 128 cores at my disposal.
Ever heard of CPU-based mining? If so, you would wish you had 10 times 128 cores.
It would be sufficient to run most of my computing tasks on a single machine with NVMe storage rather than distributing them across four 32-core ones tied together with slower NFS. Assuming it doesn’t burn under load or throttle itself to a crawl.
Running evolutionary simulations.
The only thing I do that would benefit from such a beast would be running multiple concurrent virtual machines. I could also see someone using Qubes OS wanting something like this for better performance and flexibility.
I can see it being useful for running one’s own (very small) hosting company as well, but these days it makes more sense to resell from one of the big providers unless you’re offering a higher level of support than you can manage with someone else’s iron.
I was expecting somebody to say “run Crysis”, haha.
teco.sb,
Indeed, I think this is representative of the majority of users. Such high-core-count CPUs aren’t beneficial to typical user applications and often perform worse than lower-core-count CPUs that push higher single-thread performance. Massive parallelism could be good for rendering and AI, but I think those applications are making the shift to GPGPUs that are more powerful and more efficient.
ndrw,
Yeah, this definitely has the potential to consolidate other machines (assuming its cores are able to deliver the goods). If it’s not too personal to ask, what kind of computing tasks?
satai,
This is very intriguing. Your website talks about the singularity; what exactly are you working on?
Morgan,
VMs and hosting are obvious applications. If you run a business with a dozen servers, maybe this could replace them all. It does open up questions as to where the bottlenecks are. Are the cores going to throttle due to heat or power limits? Is the memory subsystem going to bottleneck them? NUMA architecture helps keep resources local with far less sharing overhead, but ironically that discourages simply spawning more threads and favors applications that limit the number of threads accessing shared resources.
It makes one ponder: what is the core limit? At some point it may make more sense to divide the CPU into additional hosts rather than adding more cores under one host. For example, a system with 256 cores would have more coherency overhead than two systems with 128 cores or four systems with 64 cores, even though they’re all running on the same CPU. Of course, this is all subject to thermal constraints and just how many bus lines we can squeeze into the CPU package.
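To make the “fewer threads sharing resources” point concrete, here is a rough, Linux-only Python sketch (os.sched_setaffinity isn’t available on macOS or Windows): each worker gets a private slice of the data and is pinned to one CPU, so there is no shared state for the coherency machinery to keep in sync. The worker function and the four-way split are invented for illustration, not anything from a real workload.

```python
import os
from multiprocessing import Process

def worker(cpu: int, chunk: list) -> None:
    os.sched_setaffinity(0, {cpu})      # pin this process to a single CPU
    total = sum(x * x for x in chunk)   # touches only its own private chunk
    print(f"cpu={cpu} partial={total}")

if __name__ == "__main__":
    data = list(range(1_000_000))
    n_workers = 4                       # illustrative worker count
    step = len(data) // n_workers
    procs = [
        Process(target=worker,
                args=(i % os.cpu_count(), data[i * step:(i + 1) * step]))
        for i in range(n_workers)
    ]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```

Because each process owns its chunk outright, scaling this out to 128 cores is mostly a scheduling problem rather than a cache-coherency one, which is exactly why shared-nothing designs fare better on big NUMA parts.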
CPU-based cryptocurrency mining. You can easily figure out that 128 cores is not enough.
AER,
Honestly, I’m rather biased against “proof of work” cryptomining. The energy consumption is just staggering and it bugs me that it’s encouraged so many to contribute to carbon emissions for profit. Leaving this opinion aside…
I don’t think this would be a good use case even if you had 1k or even 10k core CPUs; you wouldn’t want to use them for mining because they can’t compete on efficiency and would be very expensive to run. Everything about the CPU, from the memory fabric and caches to the sophisticated superscalar pipelines, would be very wasteful for running hashes all day. A dedicated ASIC is cheaper, uses less power, and has more hashing power than even a massively parallel CPU could offer. So I’d suggest that core count really isn’t the limiting factor for using CPUs in cryptocurrency; no amount of cores will be enough to make CPU cryptomining worth it.
I need to add a caveat: different crypto mining algorithms base their blockchain on solving different problems. Bitcoin obviously uses SHA-2 hashes; others use problems that depend on lots of memory or disk space instead (these can chew through SSDs in a month). So in principle it would be possible to create a cryptocurrency for any arbitrary problem, including those that CPUs are proficient at.
Since there are so many people dedicated to consuming energy for cryptocurrencies, it’s a shame that the problems they’re solving are so pointless, like generating hashes with large numbers of leading zeros. Why not use all that computing power to solve problems with medical applications instead, like protein folding simulations to improve drugs and cure diseases? At least that has something to show for it and can benefit humanity.
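For the curious, the “leading zeros” puzzle is trivial to sketch. Here is a toy proof-of-work loop in Python in the spirit of Bitcoin’s SHA-2 scheme; the block string and difficulty value are made up for illustration, and real miners do this billions of times per second on ASICs.

```python
import hashlib

def mine(block: str, difficulty: int) -> int:
    """Find a nonce whose SHA-256 digest starts with `difficulty` hex zeros."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

# Difficulty 5 takes on the order of a million attempts in pure Python;
# real network difficulty is unimaginably higher, which is where ASICs win.
print("found nonce:", mine("toy-block-data", 5))
```

Note that the loop is pure integer hashing with no memory pressure at all, which is exactly why all the CPU’s caches and out-of-order machinery sit idle while an ASIC does the same work for pennies.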
I no longer do development, so I have modest needs. Any app I use that takes advantage of more cores might zip along faster, but that’s really image and video stuff, unless it’s offloaded to a GPU. Maybe a VM if I have to. 2-4 cores is adequate; I would be hard-pressed to justify eight. The mind boggles at what people do with 128.
If you do any rendering, video, audio, or image processing, more cores are a must, as the app can split the workload. Now, what is the internet today? Videos, a lot of them renders of 3D, that need to be converted, compressed, and streamed. So this kind of processor is helping any time you watch the new Shrek movie on Netflix.
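As a rough illustration of why encoders scale with cores, here is a toy Python sketch that farms independent “frames” out to one worker per core. The encode_frame function is a made-up CPU-bound stand-in, not a real codec; real encoders split work across frames or slices in much the same shape.

```python
import os
from multiprocessing import Pool

def encode_frame(frame_id: int) -> int:
    # Stand-in for per-frame compression work (DCT, motion search, ...)
    acc = 0
    for i in range(200_000):
        acc = (acc + frame_id * i) % 1_000_003
    return acc

if __name__ == "__main__":
    frames = range(256)                    # pretend these are video frames
    with Pool(os.cpu_count()) as pool:     # one worker per core
        results = pool.map(encode_frame, frames)
    print(f"encoded {len(results)} frames on {os.cpu_count()} cores")
```

Since each frame is independent, throughput grows nearly linearly with core count until memory bandwidth or I/O becomes the bottleneck, which is why a 128-core part is attractive for transcoding farms.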
Fair enough. I just don’t do any work anymore which demands it so didn’t think more deeply than that.
I know a snappier response can shorten people’s mental feedback loop, so it will be good for artists and anyone wanting their computer to keep up with their brain. As much as I’m okay with lower-fidelity proxy renders and leaving high fidelity to run overnight, I guess others may be a bit more fussy, especially those running virtual stage sets for movie and television shows like The Mandalorian.
I’m not sure why people who don’t have a use for them get so baffled or amazed by “high” core counts, and question the validity of others who do. It’s not rocket science, though rocket science would be an area to benefit. You don’t need a Formula 1 car to run errands, but you do if you’re in a Formula 1 race. Likewise, you don’t need tons of cores to browse Reddit and Facebook and watch TikTok videos all day, but you do if you’re doing things intensive in math/physics/etc. or need to deal with large amounts of data. But, you know, 640K is all the… I mean, 2-4 cores is all the cores anyone will ever need.
@friedchicken
Speaking for myself, I was giving my POV. It’s what’s good enough for me. It wasn’t a judgment. More cores doesn’t help your comprehension!
That’s nice, thanks for sharing. My post wasn’t a reply to yours but I guess you see yourself as one of those who are “baffled or amazed by high core counts” since that’s who I referenced. Either that or one of us, who isn’t me, does have a comprehension deficiency.
More memory, more gigahertz, and more cores have progressively unlocked things that weren’t possible before. So even if we cannot say what we’d do with *that* many cores, as average users we won’t use them, but others will. So why disparage them just because 2-4 cores are enough to browse Twitter? Is that all the world is about?
I hope nobody mentions how many cores the various supercomputers have. Heads might start exploding. Supercomputers… What a waste.
I’m surprised nobody is talking about Amiga, or bloatware, or how back in the day they could do everything with just 8Mhz and 1MB of RAM and they liked it…
For some odd reason I bought an Amiga even though my C=64 did everything I needed. What a sucker I was!
A C64?! What, a punch card reader was not enough for you?!
You of all people don’t get to call out trolling on this site.