Intel today launched a barrage of new products for the data center, tackling almost every enterprise workload out there. The company’s diverse range of products highlights how today’s data center is more than just processors, with network controllers, customizable FPGAs, and edge device processors all part of the offering.
The star of the show is the new Cascade Lake Xeons. These were first announced last November, and at the time a dual-die chip with 48 cores, 96 threads, and 12 DDR4-2933 memory channels was going to be the top-spec part. But Intel has gone even further than initially planned with the new Xeon Platinum 9200 range: the top-spec part, the Platinum 9282, pairs two 28-core dies for a total of 56 cores and 112 threads. It has a base frequency of 2.6GHz, a 3.8GHz turbo, 77MB of level 3 cache, 40 lanes of PCIe 3.0 expansion, and a 400W power draw.
AnandTech has more information on these technologies, which few of us will ever get to work with.
Thom Holwerda,
Still, for those of us who buy used data-center equipment, these kinds of new developments are good for the secondhand market. A lot of today’s expensive gear becomes more affordable once everyone starts to upgrade and needs to offload their old equipment. I’ve been waiting very patiently for 10gbe prices in particular to come down, since I could really benefit from it, but it’s not exactly affordable for me to upgrade my networks at this point.
I do work with stuff like this, actually. It’s cool and “wow” at first, but it gets mundane VERY fast. So you have several terabytes of RAM and tens of CPU cores in a single server, connected via several 10G cards aggregated into a fat LACP trunk… So what… Cool to look at and brag to friends about, but that’s all. Unless you actually write applications to fully utilize those resources…
spambot,
…well that’s precisely what I do, haha
I will say that 1gbe can be a significant bottleneck for anyone who needs to transfer large datasets. Personally I’ve been making do with 1gbe by running jobs overnight, but it’s definitely been a bottleneck. Given the arrangement here I’d need a minimum of two 10gbe switches and a handful of NICs to connect the workstations to the servers in the garage, so probably around $3K the last time I priced it out. I’m waiting for that price to come down to ~$1K at some point with used equipment…
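To put numbers on it, here’s a back-of-the-envelope estimate (the 2TB dataset and the 70% effective-throughput figure are just illustrative assumptions, not measurements):

```c
#include <stdio.h>

int main(void) {
    /* Hypothetical 2TB dataset; the 70% effective-throughput figure is
     * my own rough allowance for protocol and storage overhead. */
    const double dataset_gb = 2000.0;
    const double efficiency = 0.70;
    const double links_gbps[] = { 1.0, 10.0 };

    for (int i = 0; i < 2; i++) {
        double seconds = (dataset_gb * 8.0) / (links_gbps[i] * efficiency);
        printf("%2.0f GbE: %4.1f hours\n", links_gbps[i], seconds / 3600.0);
    }
    return 0;
}
```

That works out to roughly 6.3 hours versus 0.6 hours for the same dataset, i.e. the difference between an overnight job and a coffee break, which is why the switch pricing stings.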
There are some 10gbe consumer switches entering the market, but I’d really like to use VLANs to run virtual networks over one pair of wires rather than having to run multiple cables.
Regarding other gear, I’d agree with you more: storage, RAM, and CPUs are already relatively plentiful and cheap. Tons and tons of cores sounds neat, but in reality they tend to be badly bottlenecked by shared RAM and inter-core synchronization, which is why I’m more interested in highly parallel clustering than in highly parallel SMP.
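To make the synchronization bottleneck concrete, here’s a toy pthreads sketch (my own illustration; the thread and iteration counts are arbitrary and timings will vary by machine). Eight threads hammering one shared atomic counter serialize on a single cache line, while per-thread counters barely have to talk to each other:

```c
#include <stdio.h>
#include <stdatomic.h>
#include <pthread.h>
#include <time.h>

#define THREADS 8
#define ITERS   10000000L

static atomic_long shared_counter;

/* One counter per thread, padded out to a full cache line so the
 * slots don't end up false-sharing a line anyway. */
static struct { volatile long v; char pad[56]; } private_counters[THREADS];

static double now(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

/* Every core fights over the same cache line. */
static void *bump_shared(void *arg) {
    (void)arg;
    for (long i = 0; i < ITERS; i++)
        atomic_fetch_add(&shared_counter, 1);
    return NULL;
}

/* Each core works on memory it owns; no synchronization at all. */
static void *bump_private(void *arg) {
    long id = (long)arg;
    for (long i = 0; i < ITERS; i++)
        private_counters[id].v++;
    return NULL;
}

static double run(void *(*fn)(void *)) {
    pthread_t t[THREADS];
    double t0 = now();
    for (long i = 0; i < THREADS; i++)
        pthread_create(&t[i], NULL, fn, (void *)i);
    for (int i = 0; i < THREADS; i++)
        pthread_join(t[i], NULL);
    return now() - t0;
}

int main(void) {
    /* Build with: gcc -O2 -pthread counters.c */
    printf("shared atomic counter: %.2fs\n", run(bump_shared));
    printf("per-thread counters:   %.2fs\n", run(bump_private));
    return 0;
}
```

On most multi-core boxes the shared version is several times slower, and that gap is exactly what pushes people toward sharding work across machines instead of piling on cores.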
True – high on my wishlist is still to build my own dual-socket Xeon machine, and the more new Xeons come out, the lower the pricing gets on good, used Xeons. Eventually I’ll pull the trigger.
Ugh, the current WordPress theme (colours… light grey text on light blue background) makes your post quite unreadable…
They say these chips will be hardened against “variant 2, 3, 3a, 4, and L1TF” of the Spectre-class attacks. This is ostensibly good; however, previous hardware mitigations for the speculation attacks did NOT regain the performance lost to software mitigations.
https://www.anandtech.com/show/13659/analyzing-core-i9-9900k-performance-with-spectre-and-meltdown-hardware-mitigations
I think there’s good reason to believe that the performance loss from these mitigations will become permanent and that we’re entering a new era in which instructions-per-clock actually decreases compared to earlier (i.e. pre-Spectre) CPUs. Speculation is a critical element for keeping the pipelines full and maximizing ALU utilization, and as we continue to discover speculative-execution exploits and try to mitigate them, that will directly hurt IPC rates. The result will be a reversal of the so-called “megahertz myth”, in which clock rate was deemed less important than IPC.
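For anyone who wants to see how much speculation actually buys, the classic sorted-vs-unsorted microbenchmark makes the point. This is a standard demo, not something from the linked article, and it’s sensitive to compiler settings:

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 25)

static int cmp_int(const void *a, const void *b) {
    return *(const int *)a - *(const int *)b;
}

/* Sum the "large" elements; wall time depends heavily on whether the
 * branch predictor can guess `data[i] >= 128` ahead of time. */
static double timed_sum(const int *data, long *out) {
    clock_t t0 = clock();
    long sum = 0;
    for (long i = 0; i < N; i++)
        if (data[i] >= 128)
            sum += data[i];
    *out = sum;
    return (double)(clock() - t0) / CLOCKS_PER_SEC;
}

int main(void) {
    /* Build with moderate optimization (e.g. gcc -O1); at -O3 the
     * compiler may emit branch-free code and flatten the difference. */
    int *data = malloc(N * sizeof *data);
    if (!data) return 1;
    for (long i = 0; i < N; i++)
        data[i] = rand() % 256;

    long s1, s2;
    double unsorted = timed_sum(data, &s1); /* ~50/50 branch: mispredicts */
    qsort(data, N, sizeof *data, cmp_int);  /* same data, same work...    */
    double sorted = timed_sum(data, &s2);   /* ...but now predictable     */

    printf("unsorted: %.2fs  sorted: %.2fs  (sums: %ld vs %ld)\n",
           unsorted, sorted, s1, s2);
    free(data);
    return 0;
}
```

The work is identical in both passes; only the predictability of the branch changes, and that alone typically swings the runtime by several times. That’s the headroom that’s at risk as speculation gets fenced off.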
I’ve said this before, but I think Spectre gives us a very good engineering reason to move away from superscalar CPUs with implicit parallelism and to focus on explicit parallelism instead: GPGPU, FPGA, or even just plain SMP multithreading (although MT can be risky for side channels too). We need a major overhaul of how we develop software, IMHO. The free ride, where we expect the hardware to speed up our poorly optimized sequential code, is over, and software developers will have to adapt if we want to evolve past the hardware plateaus on the current path.
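As a sketch of what “explicit parallelism” means in practice, here’s a minimal OpenMP example, where the programmer declares the parallelism rather than relying on the CPU to extract it from a sequential instruction stream (the array size and workload are arbitrary):

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    /* Build with: gcc -O2 -fopenmp parsum.c */
    long n = 10000000;                 /* arbitrary workload size */
    double *x = malloc(n * sizeof *x);
    if (!x) return 1;
    for (long i = 0; i < n; i++)
        x[i] = 1.0 / (double)(i + 1);

    /* The parallelism is stated by the programmer, not inferred by
     * the hardware: each thread sums a chunk, then the partial sums
     * are combined by the reduction. */
    double sum = 0.0;
    #pragma omp parallel for reduction(+:sum)
    for (long i = 0; i < n; i++)
        sum += x[i];

    printf("sum = %f\n", sum);
    free(x);
    return 0;
}
```

Whether it’s OpenMP, GPGPU kernels, or message passing between cluster nodes, the common thread is that the parallelism is named up front instead of being buried in sequential code for the CPU to rediscover.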
I could not agree more!