AnandTech has published its comprehensive benchmarks and tests of the Intel Core i9-9980XE, and while this $2000 processor is unlikely to grace any of our computers, the article has some choice words for Intel. The problem with the 9980XE is that it’s basically a 7980XE with slightly higher frequencies – enabled partly by Intel switching the TIM from paste to solder – and the numbers confirm this: the performance improvement isn’t all that great.
And this is a big problem for Intel.
It all boils down to ‘more of the same, but slightly better’
While Intel is having another crack at Skylake, its competition is trying to innovate, not only by trying new designs that may or may not work, but also by showcasing the next generation several months in advance, with both process node and microarchitectural changes. As much as Intel prides itself on its technological prowess, and it has done well this decade, there’s something stuck in the pipe. At a time when Intel needs evolution, it is stuck doing refresh iterations.
Intel needs a breakthrough, because it can’t keep sucking blood from the 14nm stone forever.
My decision to build a new PC in August 2016 around the 6700K came at the perfect time. The longer Intel continues with its glacial development, the longer my PC stays current. I honestly don’t know why people want to go back to the days when their computers were essentially paperweights after 5 years.
Silly argument, my PC from 2007 still performs like it did on day one (and even better with upgrades), and since it covered my needs at the time, it still covers my needs now.
Same for my laptop from the end of 2011. People just shouldn’t listen to marketing bullshit and shouldn’t expect rainbow glitter to come pouring out of new PCs.
Do those needs include keeping warm for the winter?
2007 : Via C7 @ 2 GHz = 20 W TDP http://www.cpu-world.com/CPUs/C7-D/VIA-C7-D%202000-800.html
2011 : AMD A8 @ 1.5 GHz = 35 W TDP http://www.cpu-world.com/CPUs/K10/AMD-A8-Series%20A8-3500M.html
2013 : Intel Atom 330 @ 1.6 GHz = 8 W TDP http://www.cpu-world.com/CPUs/Atom/Intel-Atom%20330%20AU805…
2013 : AMD E-350 @ 1.6 GHz = 18 W TDP http://www.cpu-world.com/CPUs/Bobcat/AMD-E%20Series%20E-350…
Those thermal figures look rather pathetic next to something as cold as your “humor”.
Kind of disingenuous of you to compare a 4-core processor (with an integrated GPU) to a single-core one. But then again, you do seem awfully defensive over a joke, so I’m not surprised you’d stoop to such tactics.
Not comparing – those are my PCs, and I carefully select them with a TDP < 50 W. Of course I can’t play those nifty AAA games with a graphics card (h)eating as much as 500 W just to display fancy 4K digital blood spills.
Of course also I’m not running a computing farm at home; I just want my computers to be electricity-bill friendly while offering me enough power to cover my needs. It’s mostly a matter of conscientiousness, just like eating less meat to lessen our ecological footprint.
Hence, it’s not that I dislike your “words of wisdom”, it’s just a reminder that not everyone is a techno-freak who “must have” the latest piece of hardware to feel the hype. And those “under-spec” CPUs perform admirably considering their “power” and cover 98% of my use cases.
For the remaining 2%, I just wait a bit and batch-script things to run while I’m at work, so I get the results when I return home without feeling any delay. But again, that’s my way of solving things instead of buying the fancy new steam-powered/cooled CPU that will be made obsolete by the next news report.
I imagine that most people are doing the same things on their computers as they were 10 years ago; their computers run at about the same speed, just with better graphics and lots of other things going on in the background.
But… like smartphone battery life, I think more speed is at the top of most people’s lists.
Better graphics? Considering the number of laptops still being sold with 1366×768 screens, that’s not what I had in mind. The additional power is used to run AV, toolbars, and other background shit, not really to do anything more useful.
But yeah, computers today do feel smoother than before, but at quite an expense. When I upgraded my Athlon XP 1700+ (Palomino) from 256 MB to 1.5 GB, the speedup was already tremendous. It’s not necessarily just the CPU that makes things better.
I also upgraded the RAM of a 1700+ Palomino-based system, first from 192 MB SDR to 256 MB DDR, then from 256 MB DDR to 768 MB DDR, for a nice boost in comfort of use.
Just stepping in to say that I agree with you on power consumption vs. performance. I’ve moved to lower-power systems and got rid of my i5-7400/GTX 1060 workstation/gaming rig. I hadn’t touched an AAA game in over a month and realized I was having more fun playing small games like FTL and Don’t Starve than anything requiring that beast of a system. I already had two Mac minis: one is a 2012 i7 and the other is a 2006 model I’d upgraded to a Core 2 Duo to run Lion.
I’m down to three systems, each under 100 watts at full bore: the 2012 mini is my main workstation now, the 2006 mini dual-boots OpenBSD and Haiku, and the third is a Raspberry Pi 3B+ for Linux and general goofing off. My wife still loves her old faithful HP Stream Mini desktop; it’s more efficient than my Mac minis and suits her needs perfectly.
I buy a local gaming magazine, costing less than €2.50, that usually includes a few-year-old (so with moderately low requirements) A-AAA game and two ~indie games (plus there’s a 100+ page magazine as a bonus). Perhaps I don’t get to choose what I will play, but I get my gaming fix (anyway, I play mostly Tetris DX on my GBA).
I have over 100 games between Steam and GOG that I have yet to even touch, thanks to $0.99 sales and giveaways. Most of those Steam games now work under Linux and macOS thanks to Steam Play/Proton, and GOG has done a ton of work to port several Windows-only games over to those platforms as well. All of GOG’s old DOS-based games already work with their custom DOSBox package.
Besides, I’ve found myself playing old console games almost exclusively lately. I’m even bidding on a DS Lite on eBay so I can get back into that era of gaming; I missed out on the DS when it was first released and have only played a couple of games on the system over the years. As a bonus it can emulate older consoles so I’ll have a nice portable arcade machine.
Aye, those games from the magazine I mentioned also mostly sit on my Steam account waiting their turn, though some A-AAA ones are also on the Windows-only Uplay…
And once I get bored with my GBA (I got it over 5 years ago as quite a good deal on our local eBay-like site, all for less than €9, with 3 games worth ~€12 by themselves… most importantly Tetris DX, which I adore and which is probably one of the best; the GB version of Tetris is the one its creator, Alexey Pajitnov, likes the most), mostly with Game Boy Classic and Color games, I’ll also get a DS… or maybe I’ll jump right into the 3DS/2DS.
I think maybe you’re talking to someone in your head, rather than responding to something I actually said. All I said was a joke.
And not that you have to know me personally, but I’m far from a “techno-freak”. I didn’t even get a smartphone until last year, and I do just fine with integrated graphics.
OP said PC and you are quoting mobile chips. What are you, a politician?
Dealing with servers and PCs all day long at my job, I can tell you Intel is in a rut and needs to get out of it. In our labs, one can hardly tell the difference between a 7-year-old machine and a current machine once you put the same disk/SSD and RAM in them.
I am anxious to test some of AMD’s lineup…
Yes!
That makes me feel so much better about my 7-year-old 12-core Xeon Mac Pro, which gets stomped in media tasks by the latest MacBooks with all their specialised co-processors.
I’d feel even better if it wasn’t using so much electricity though.
Maybe the future is doubling down on task-specific chips; 8K is only a few years away and I’d like to be able to edit videos of my kids playing around in 8K(!)
I don’t know why some delineate between the two – for most people, laptops are PCs, and they’ve been the majority of PCs sold for a good several years now.
I have a 2010-era dual-socket machine. It definitely cuts down on my heating expenses.
I put together my current PC with an eye towards low power consumption while still being able to play AAA games… at least at 2K. You aren’t going to play at 4K on low power. So I put together an FX 6300 (95 W max) with a GTX 1050 Ti (75 W max). I get by with a 350 W power supply, don’t run up my electric bills, and don’t heat the room (that’s what the heat pump is for). Not only is it fairly low on power consumption, but it’s also really low on cost.
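For the curious, here’s a rough power-budget sanity check for a build like that. The CPU and GPU numbers are the TDPs above; the figures for the rest of the system are purely my own rough estimates.

# Rough power-budget check: CPU/GPU TDPs from the post above, plus my own
# hypothetical estimates for the rest of the system.
components = {
    "FX 6300 (CPU TDP)": 95,
    "GTX 1050 Ti (GPU TDP)": 75,
    "Motherboard + RAM (estimate)": 40,
    "Drives + fans (estimate)": 20,
}
psu_watts = 350
budget = psu_watts * 0.8  # keep sustained load under ~80% of the PSU rating

total = sum(components.values())
print(f"Estimated peak draw: {total} W of a {budget:.0f} W comfortable budget")
print("Fits with headroom." if total <= budget else "Cutting it close.")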
Do you remember what it was like around the year 2000? Freshmen I knew took out loans at 10% interest to get a computer. It was obsolete in two years, but they still owed thousands of dollars and were still two years away from starting to pay it off. Honestly, those were probably $1500 setups that ended up costing $3000 and didn’t last more than two years.
Those were the times!
Most people treat computers as just another appliance/commodity and not as fodder for humble-bragging.
It’s something a lot of old computer geeks don’t seem to grasp.
$2000 – and if it’s headed for business or server-type usage running Linux, it will perform like a Pentium 4 once all the Meltdown and Spectre mitigations are enabled!
Well done Intel!
Even though these vulnerabilities aren’t specific to Intel, Intel is hit the hardest.
Even Linus is throwing another hissy fit over the performance drop.
AMD have been handed a clear advantage here; let’s hope it’s utilized and improves the competition.
Sauron,
To be fair though, the 9th-generation Intel processors do include silicon mitigations for some of the Meltdown/Spectre vulnerabilities, which is a partial improvement over earlier processors.
https://www.tomshardware.com/news/intel-9th-generation-coffee-lake-r…
Anyway, it’s true that the software mitigations cause significant slowdowns for some workloads. I predicted at the beginning that performance would take a beating as they continued to unravel all the performance gains from speculative execution. The latest round of mitigations in Linux has caused significant performance loss.
https://www.phoronix.com/scan.php?page=article&item=linux-420-bisect…
So much so that Linus himself was questioning whether the mitigations are even worth enabling at all. His current thinking is that they should be disabled by default so that Linux runs fast by default…
http://lkml.iu.edu/hypermail/linux/kernel/1811.2/01328.html
That certainly gives us a lot to discuss here on osnews!
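For reference, you can at least see which of these mitigations a given kernel reports as active by reading the files under /sys/devices/system/cpu/vulnerabilities/. A quick sketch (which entries exist depends on the kernel version, and older kernels may not expose the directory at all):

#!/usr/bin/env python3
# Print the kernel's own view of its CPU vulnerability mitigations.
# Entries under this sysfs directory vary by kernel version.
from pathlib import Path

vuln_dir = Path("/sys/devices/system/cpu/vulnerabilities")

if not vuln_dir.is_dir():
    print("This kernel does not expose the vulnerabilities sysfs directory.")
else:
    for entry in sorted(vuln_dir.iterdir()):
        print(f"{entry.name:25} {entry.read_text().strip()}")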
Linus is correct, though: the mitigations should be off by default. On a normal home desktop they’re not needed and, in some cases, just bring things to a crawl.
The biggest problem at the moment is that no one seems to know how to turn them all off, only some of them.
I know my Linux machines don’t need them; there’s nothing on them a hacker or anyone else would be interested in, unless they want to know what playlist I’m listening to or which film I watched last.
At least Linus is looking at turning them off by default.
Sauron,
I completely understand that POV. Spectre vulnerabilities are not easy to target anyway. Yet if the official stance is that computers shall remain vulnerable to some exploits by default for long periods (for performance reasons), then it seems likely that hackers will focus their efforts and continue to refine attacks. It’s only a matter of time before it starts happening in the wild and users start becoming victims – then what?
I see no easy answer here, but it’s something to think about.
Hi,
That’s one specific mitigation (“STIBP all the time”) that has a high cost and arguable benefits, and that both AMD and Intel say shouldn’t be used all the time. Linus isn’t saying any other mitigations should be disabled by default.
– Brendan
Brendan,
I know – we’ve already incurred performance losses from the other mitigations, and this is just the latest one for recent exploits on top of everything else.
Has the industry ever recommended remaining permanently vulnerable to known attack surfaces before? Refresh my memory if I’m wrong, but I think this is a first. Now that we’re here, going back is a major dilemma because we don’t want to give up performance. However if these issues had been discovered decades ago, I suspect we might have taken CPU technology in a completely different direction to avoid the situation we find ourselves in.
It is the first, other than recommending people use Windows. /joke
The argument is that the people most concerned with what this fix protects against would have already disabled SMT in the kernel, and those systems would actually perform better than ones with this fix applied.
So it’s not that we should remain vulnerable, but that there’s a better fix.
kwan_e,
I realize many security issues with shared resources go away when we disable SMT, but whether the fix is disabling SMT or STIBP, it’s the same problem: better security -> performance loss.
Also, ~30% overhead for STIBP might still be better than disabling SMT for some workloads. I admit that both solutions make me grimace.
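Just to put rough numbers on that tradeoff: the ~30% STIBP overhead is the figure from the Phoronix testing above, while the SMT throughput gains below are purely hypothetical assumptions that vary a lot by workload.

# Back-of-the-envelope: keep SMT and pay the STIBP penalty, or turn SMT off?
# stibp_overhead comes from the ~30% figure discussed above; the SMT gains
# are hypothetical and differ wildly between workloads.
stibp_overhead = 0.30

for smt_gain in (0.15, 0.30, 0.50):       # hypothetical SMT throughput gains
    with_stibp = 1.0 - stibp_overhead     # SMT on, STIBP always enabled
    without_smt = 1.0 / (1.0 + smt_gain)  # SMT off, so no STIBP needed
    better = "disable SMT" if without_smt > with_stibp else "keep SMT + STIBP"
    print(f"SMT gain {smt_gain:.0%}: STIBP={with_stibp:.2f}, "
          f"no-SMT={without_smt:.2f} -> {better}")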
I’m curious what this will mean for PCI compliance, HIPAA compliance, SOX compliance, etc. going forward. Should we have secure infrastructure on speculative multicore CPUs with “acceptable risk vulnerabilities”? Or does the goal of uncompromising security rule out the use of highly scalable, high-core-count speculative CPUs?
The most likely thing is that most will be lazy and kick the can down the road as long as they can, until actual hacks start taking place. The more things change, the more they stay the same, haha.
No he isn’t, actually. He’s been pretty measured about it.
Or maybe his email filter script is really good.
Thom Holwerda,
I understand people are accustomed to leaps in progress, but physical limits are posing a greater barrier to this type of architecture than ever before, and sequential processors are nearing the end of the line. A slowdown might be inevitable (for air-cooled CPUs).
I think the short-term future is more cores, but we’re already pushing those limits, so beyond that I believe massively parallel architectures like GPGPU/FPGA hold the most promise. Or conceivably quantum computing, with all of its quirks, if that ever becomes viable for the masses.
Intel has for decades had a grip on the market through the Wintel monopoly. That always makes companies complacent, and they yield to the temptation to just milk their – captive – customer base. Just remember IBM in the mainframe days. Then nimbler rivals emerge who have more incentive to innovate and push the envelope. For IBM that was Microsoft; for Intel it is ARM. Microsoft itself has been in complacent mode for a decade or two now.
14nm is fine.
The fact that they’ve been reusing the Skylake uArch for the past three years isn’t.
I get vibes of the Pentium 4 era, when Intel kept increasing clocks until they realized it couldn’t be done forever.
Intel may have the older architecture, but I’m still seeing a lot of benchmarks giving even last-gen Intel CPUs better FPS in games than the new Ryzens, and that’s what sells personal gaming computers.