Intel first launched its 8th-generation branding last year. In the mobile space, we had the U-series Kaby Lake-R: four-core, eight-thread chips running in a 15W power envelope. On the desktop, we had Coffee Lake: six-core, 12-thread chips. In both cases, the processor lineup was limited: six different chips for the desktop, four for mobile.
Those mobile processors were joined earlier this year by Kaby Lake-G: four-core, eight-thread processors with a discrete AMD GPU on the same package as the processor.
Today, Intel has vastly expanded the 8th generation lineup, with 11 new mobile chips and nine new desktop processors, along with new 300-series chipsets.
Intel’s naming scheme is a bit of a mess, isn’t it? At this point I really have no idea what is what without consulting charts and tables. Can all the bright minds at Intel really not devise a more sensible naming scheme?
One small thing the author of this advertisement forgot:
Those chips still contain the bugs that were found about a year ago. Meltdown can cost you 40% performance if the shit hits the fan. The others (Spectre) usually stay below the 5% margin and are therefore barely noticeable.
Didn’t Meltdown and Spectre cause barely noticeable performance differences on 8th-gen chips?
It does seem Intel is getting more aggressive and successful at packing more cores/performance into the same power envelope, effectively competing with AMD.
I don’t see any of that scaling down to the ARM power envelope, though.
“Intel’s own tests on 8th-gen and 7th-gen laptops put the performance drop at 14 percent, while 6th-gen Skylake takes a hard 21-percent fall. Our tests put the 5th-gen Broadwell at 23 percent in the hole.”
It is safe to assume that Intel did everything to make themselves look good here, and it’s still a punch in the stomach. And those weren’t even the worst benchmarks. The impact can be much bigger in other scenarios, including real-world situations:
“The results for the SYSMark 2014 SE Responsiveness test are particularly worrying, showing that, as I expected, the biggest effect that the Spectre/Meltdown patching will have is on web browsing and overall system responsiveness, and that means that many of us will feel that our computers are running more sluggishly after applying the patches.”
Spectre and Meltdown have had 0 impact on regular people. There have been no known exploits and the performance impact has been negligible. In this case the hype and (failed) patching have been far worse than the actual issue.
(for shared/virtual servers in datacenters the story is different, but since this article is about 8th-gen, six-core mobile CPUs we don’t have to discuss that here)
avgalen,
I don’t own any 8th gen chips, so I can’t comment on that.
However, I certainly wouldn’t say that Spectre and Meltdown have had 0 impact on regular people, or that there are no known exploits, or even that the performance impact is negligible. Working exploits have been published, even in JavaScript! These are not theoretical; they work, and anybody can use them to attack unpatched systems on multiple operating systems.
Unfortunately, the performance impact of the mitigations is quite significant for workloads that exhibit a high rate of syscalls. Workloads with high IOPS fare much worse than workloads that spend their time performing computations without syscalls. So the performance loss really depends on the workload.
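To make that concrete, here’s a rough micro-benchmark sketch (my own toy illustration, not from any of the linked papers): with KPTI-style mitigations enabled, every kernel crossing pays extra for the page-table switch, so a syscall-heavy loop like the first one below is the one that slows down after patching, while pure computation is largely unaffected.

```python
import os
import time

def syscall_heavy(n: int) -> float:
    """Issue n cheap syscalls (fstat on fd 1) and return elapsed seconds."""
    start = time.perf_counter()
    for _ in range(n):
        os.fstat(1)          # one kernel crossing per iteration
    return time.perf_counter() - start

def compute_heavy(n: int) -> float:
    """Do n iterations of pure arithmetic; no kernel crossings at all."""
    start = time.perf_counter()
    acc = 0
    for i in range(n):
        acc += i * i         # stays entirely in user space
    return time.perf_counter() - start

if __name__ == "__main__":
    n = 100_000
    print(f"syscall loop: {syscall_heavy(n):.4f}s")
    print(f"compute loop: {compute_heavy(n):.4f}s")
```

The absolute numbers are meaningless; only the relative change matters when you run it with mitigations toggled (on Linux the `pti=on`/`pti=off` boot parameter controls KPTI).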
https://spectreattack.com/spectre.pdf
All eyes are looking at the OS, which makes sense, but individual applications can also be vulnerable to the indirect jumping code patterns even on a patched OS. All critical application code should be carefully audited too, but that’s a huge undertaking. So IMHO we’re going to be haunted by spectre for a long time, there’s no trivial fix short of disabling speculative execution, which is fundamentally responsible for these exploits.
Do you have any link to a non-theoretical exploit?
All I could find was things like this: https://www.virusbulletin.com/blog/2018/02/there-no-evidence-wild-ma… or https://meltdownattack.com/ (“Has Meltdown or Spectre been abused in the wild? We don’t know.”)
And yes, for professionals working with special software on old OSes with old CPUs on shared virtual servers, Spectre and Meltdown are a real concern.
But for people using their computer at home for editing, games, browsing, taxes, hobby, video/audio consuming … they never noticed anything
avgalen,
Well, the most dangerous vulnerabilities are those that succeed without the users ever noticing, wouldn’t you say? It’s not easy to detect because the vulnerability is using totally legitimate functionality to snoop remote address space leaked by the hardware.
We might say that computers are already fast enough for most user needs, so the performance lost to mitigation is irrelevant. If this is the way a user feels, then that’s fine.
This is exactly what seems to have happened. Most people just won’t notice something like a 10% drop in CPU performance. I would consider myself an experienced power user, and of course I want “the latest i7 instead of an old i5”… but if you gave me a machine and asked me whether it feels like “the latest i7 or an old i5”, I would very likely fail to identify it without running some test/benchmark.
avgalen,
AV software vendors have been working at it for years. You may be able to detect an off-the-shelf attack, but as soon as original attacks come into play, AV scanners fail. There are too many possible attack patterns, and we don’t have a magic function F(x) that tells us whether x is an attack.
If a user asks you “is this app safe to run”, you can scan it for known malware patterns, but short of conducting a full code review, you can’t know for sure. Even with a code review, some clever attacks might sneak by a code reviewer.
The second part is true, because programs can obfuscate what their code is going to do. However, at runtime this particular functionality should be detectable, because there are only a few very particular calls that a Meltdown/Spectre-enabled program can make, and those calls are normally not used. I don’t have a direct link for you anymore, but I got this information from Steve Gibson (https://www.grc.com/inspectre.htm) during a summary on either “This Week in Tech” or “The New Screen Savers”. He had several podcasts almost dedicated to this topic, so “how to detect a Spectre/Meltdown attack” is likely described in detail in https://twit.tv/shows/security-now/episodes/646?autostart=false or one of the previous/next episodes.
avgalen,
This statement is not accurate, any discrepancy in the timing of a CPU’s branch predictor can be statistically measured regardless of the function. Any syscall that speculatively changes the branch predictor’s state in protected memory is potentially vulnerable.
With Spectre, we’ve been focusing on speculative execution of indirect addressing because those cause the most detrimental leaks. However, weaker forms of this attack can exist in almost all code, because the code was never designed to be resilient to speculative timing attacks. The side effects of CPU speculation were something developers never thought about. We could fix this pretty easily by disabling CPU speculation entirely, but that comes at the expense of decades’ worth of performance gains.
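As a loose analogy (ordinary software timing, not the branch predictor itself — this is entirely my own toy example, nothing from the papers): the statistical approach is the same for any timing discrepancy. You collect many samples and compare medians to smooth out scheduler noise. Here it’s applied to a naive early-exit comparison:

```python
import statistics
import time

def naive_equals(a: bytes, b: bytes) -> bool:
    """Early-exit comparison: elapsed time leaks where the first mismatch is."""
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def sample(f, a, b, reps=2000):
    """Time f(a, b) many times; the median smooths out one-off noise spikes."""
    times = []
    for _ in range(reps):
        t0 = time.perf_counter_ns()
        f(a, b)
        times.append(time.perf_counter_ns() - t0)
    return statistics.median(times)

secret = b"A" * 64
early = b"Z" + b"A" * 63   # mismatch at byte 0: the loop exits immediately
late  = b"A" * 63 + b"Z"   # mismatch at byte 63: the loop runs to the end

print("early mismatch:", sample(naive_equals, secret, early), "ns (median)")
print("late mismatch: ", sample(naive_equals, secret, late), "ns (median)")
```

A real attacker needs far more samples and careful noise filtering, but the principle — tiny per-operation differences becoming measurable in aggregate — is the same one Spectre exploits against the cache and branch predictor.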
You keep trying to downplay this, but IMHO this is one of the most profound attacks we’ve ever seen against computer systems and it will have long lasting implications for computer performance versus security.
Sir Alfman, we meet again for an interesting discussion where you send me interesting links and in-depth information that broadens my horizon. I say this specifically because of my previous interactions with _LC_ on this topic. I really appreciated how the moderation downvoted him and upvoted both you and me, even when we went several threads deep!
Let me make it clear that I think Meltdown and Spectre have had enormous impact inside the tech labs of all chipmakers and kernel programmers. A wall that was considered a fundamental element of security was completely broken apart at the beginning of January, and the first patches were unreliable junk that probably caused more harm than good. Because of all the attention, everyone went into overdrive, most attack vectors were quickly closed one way or another, and the big performance penalties didn’t turn out as bad as we thought they would.
Fundamentally, nothing has changed since January, though. For most people the wall is still at least partly broken and they are theoretically vulnerable. Luckily we now live in a time where there are many layers of security, so before a piece of software on our PC/server can really do harm, it has to break through a couple of walls. The AnC issue that you mentioned and Meltdown/Spectre are all sledgehammers that cause big cracks in our walls, but they aren’t universal keys that unlock every secret either.
The AnC article that you mentioned has a great “how practical is it to exploit the vulnerability we found” section, which was mostly missing from the whole Meltdown/Spectre discussion. I have tried to add that perspective in this thread, which was about adding cores to a mobile CPU. If we were having this discussion in a thread on a kernel-hacking site, I think you and I would be pretty much next to each other, discussing all the ways these vulnerabilities could be used together to hammer down our security walls. I also completely agree with your statement “IMHO this is one of the most profound attacks we’ve ever seen against computer systems and it will have long lasting implications for computer performance versus security.”
Almost all major attacks in the last few years have been aimed either at users or at misconfigured devices (empty/known root passwords). Meltdown/Spectre (just like Rowhammer before them) turn this system upside down and require hardware manufacturers and kernel programmers to rethink one of their security pillars.
I am not downplaying the long-term implications of hardware-based attacks. I have been downplaying the immediate dangers for normal users over the last few months. I never heard anyone complain about lower performance, and I haven’t heard of any attacks based on these vulnerabilities. Meltdown and Spectre were presented as wildfires, but in reality they are a slow-burning fire licking at our foundations.
avgalen,
Maybe you are right. When these sorts of things are first published, the news is all over them, and then all the “excitement” dies and the news moves on to the next episode regardless of closure. The more things change, the more they stay the same. Snowden comes to mind.
“Spectre and Meltdown have had 0 impact on regular people. There have been no known exploits and performance impact has been Negligible.”
Yes, of course! And the monkeys who were breathing in the exhausts for Volkswagen & friends were really only enjoying their trip to this health resort…
Apparently Intel’s advertising department is taking us for idiots.
I am happy you got downvoted. You were basically calling me part of Intel’s advertising department which is ridiculous (just look at my post history) and you back up your wild talk with 0 facts or links.
I support 50-100 of my own users at our company, and we do support for about 50 other companies as well. Outside of our support department and “the tech media”, I have never heard anyone talk about Spectre/Meltdown, and I haven’t heard anyone say “suddenly my PC feels a lot slower” when the patches started to roll out.
https://www.virusbulletin.com/blog/2018/02/there-no-evidence-wild-ma…
There is some sample code, that got copied and incorporated into even more Proof-of-Concept code, but there hasn’t been anything dangerous going around in the wild.
Remove yourself from the tech-bubble, look around, and you will see that normal people never noticed
It’s annoying to reply to a guy who is blatantly lying:
http://www.tomshardware.com/news/meltdown-spectre-malware-found-for…
“February 1:
… Security company Fortinet announced that it has found dozens of malware samples that have started taking advantage of the proof-of-concept (PoC) code for the Meltdown and Spectre CPU flaws released earlier last month.
…
Malware Makers Are Adapting Quickly
The security research team at AV-test uncovered 119 malware samples between January 7 and January 22 that were associated with the Meltdown and Spectre flaws. Fortinet analyzed these samples and discovered that all of them were based on the previously released PoC.
…
Riskware/POC_Spectre
W64/Spectre.B!exploit
Riskware/SpectrePOC
Riskware/MeltdownPOC
W32/Meltdown.7345!tr
W32/Meltdown.3C56!tr
W32/Spectre.2157!tr
W32/Spectre.4337!tr
W32/Spectre.3D5A!tr
W32/Spectre.82CE!tr
W32/MeltdownPOC
…”
And this hasn’t even started yet.
Of course, users get to feel it. In a multitude of ways:
“Root Cause of Reboot Issue Identified: Microsoft issues emergency Windows update to disable Intel’s buggy Spectre fixes” …
That is not a multitude of ways. Also, it is exactly what I said in my second post: “In this case the hype and (failed) patching have been far worse than the actual issue.”
Again, deliberately treating people as dumb:
You are citing from ‘virusbulletin.com’. They live entirely off Microsoft – and Intel.
I can give you quotes from ‘companies’ claiming that the Volkswagen exhausts are beneficial to your health.
“The article you linked to and the article I linked to are based on the same facts. However 1 of them is spreading panic and the other is analyzing the situation with a calmer mind and takes similar historic facts into account.”
That’s the shareholder’s perspective you’re talking about…
“As you should have noticed by now, 2.5 months later there hasn’t been any known outbreak in the wild.”
Meltdown and Spectre allow an attacker to read a victim’s memory contents ‘secretly’. For example, Meltdown allows them to pick bank account details from memory via a browser (JavaScript).
What kind of “outbreak” do you expect? Do you expect them to inform their victims, that they have managed to acquire sensitive data and how they managed to do so?!?
I would have expected more news about Spectre/Meltdown after January. I would expect a “site X suspected of using Spectre/Meltdown to attack users”.
Those bright minds don’t get to come up with public naming schemes, marketing teams do…
I think, in fairness, it’s not a completely straightforward ask; but doing so ‘successfully’ depends rather on what you consider the goal. The goal is assuredly not clarity – it’s shovelling more, higher-margin parts.
Those that want or need to know will still find out what they need and then choose accordingly.
The rest get bamboozlement; to degrade the choice into the bigger number their wallet can handle. I won’t call that ‘brighter minds’ but it’s definitely marketing.
It’s a well-known fact that marketing people are as bright as a 40W bulb in a brownout.
It’s good to see that Intel is finally realizing that there actually is practical demand outside of the server and workstation markets for high levels of parallelization. A bit late for me to actually consider them for a DIY build for the time being, but still good to see.
Also, regarding the naming: last I checked, for the 8th-generation CPUs, Core i3 means 4 cores with no HT, Core i5 is 6 cores with no HT, and Core i7 is 6 cores with HT (so 12 threads), with Pentium and Celeron being cheap-arse crap that’s not even worth what they sell for.
Just curious, what’s your workload for high core count on mobile?
Well, multimedia work for one. Most audio and video processing stuff can benefit from parallelization very well (either by processing multiple channels in parallel, or running multiple effects passes simultaneously). There’s a lot of laptops out there that have very good displays and audio hardware, but still are inferior to a real multimedia workstation because they have a sub-par CPU.
There’s still a pretty big market for gaming laptops, which quite often will benefit from higher core counts.
Increased core counts are also a good thing for software developers: building anything beyond a trivial piece of software parallelizes very well.
Put differently, pretty much anything that a normal workstation system would run that benefits from increased core counts. A lot of that type of stuff is not done much on mobile not because there’s some issue with doing it on a laptop or similar device, but because it’s far less efficient due to the reduced core counts.
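For what it’s worth, the per-channel effects pass described above maps almost directly onto a worker pool. A minimal sketch (apply_effect is a made-up stand-in for a real CPU-bound DSP pass):

```python
from multiprocessing import Pool

def apply_effect(chunk):
    # Stand-in for a CPU-bound audio effect; here just a simple gain reduction.
    return [x * 0.8 for x in chunk]

def process_channels(channels):
    """Run the effect on each channel in its own worker process."""
    with Pool() as pool:
        return pool.map(apply_effect, channels)

if __name__ == "__main__":
    # Eight fake channels of 1000 samples each.
    channels = [[float(i) for i in range(1000)] for _ in range(8)]
    out = process_channels(channels)
    print(len(out), len(out[0]))  # 8 1000
```

Each channel lands on its own core, so a six-core part can chew through six channels at once; the win only disappears when the per-chunk work is too small to cover the process-pool overhead.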
That’s the whole point of the i3/5/7 naming scheme: so you can feel like you made a good purchase and bought a quality chip. I’m typing this from my “ultrabook” i5, which is really just a dual-core CPU that gives it a 6-hour battery life. Without the i5 in the name, not many people would justify the purchase of an ultrabook.
The fastest laptop CPU one could buy in the winter of 2010-2011 was the Core i7-2820QM, a quad-core CPU running at 2.3GHz with a 3.4GHz Turbo Boost. The fastest laptop CPU available 7 years later, at the end of the 2017-2018 winter, is the Core i7-7920HQ, a quad-core CPU running at 3.1GHz with a 4.1GHz Turbo Boost.
They used to double CPU performance every 1.5 years. It’s been 7 years and they are close to doubling it, but not there yet. Moore’s Law has been dead and buried for 4 cycles already.
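A quick back-of-the-envelope on that claim, using the boost clocks quoted above as one crude proxy (clock alone ignores IPC improvements, which is where most of the remaining gain came from):

```python
# "Doubling every 1.5 years" compounded over 7 years:
years = 7
doubling_period = 1.5
expected_factor = 2 ** (years / doubling_period)
print(f"expected after {years} years: {expected_factor:.0f}x")   # ~25x

# Observed: i7-2820QM (3.4 GHz boost) -> i7-7920HQ (4.1 GHz boost),
# same core count, so the clock contribution is tiny:
base_boost, new_boost = 3.4, 4.1
print(f"observed boost-clock ratio: {new_boost / base_boost:.2f}x")  # ~1.21x
```

So the old cadence would have predicted roughly a 25x improvement, while the clocks moved about 1.2x; even with IPC gains stacked on top, that is how far the curve has flattened.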
Meanwhile, Apple’s A11 2W CPU is already at 50% of the performance of Intel’s 45W CPU. I can’t wait for benchmarks of the A11X or A12.
Intel has been downhill since they got beat by AMD with the K7 and Hammer architectures. Here’s how I remember it:
* When the Pentium 4 came out it got beat seriously by the K7 and the RAMBUS memory.
* They decided to patch the Pentium 4 with the EM64T architecture (AMD64, to be more precise).
* They moved to LGA775 which they promised would be the last socket you’ll need for a while, with Core and Core2 around the corner. That sort of screwed some people that bought Pentium 4 Laptops with 1hr battery life (like the ZD8000) only to discover that the Core 1 is not available on LGA775 and that Core2 or the dual-core Pentium 4 require small updates to the motherboard (profile 5a and 6 vrm, etc.). That was low even for their standards. Keep in mind, that up until then people were actually used to upgrading the CPUs and Intel even offered OverDrive CPUs.
* The Core 2 architecture proved to be successful, it moved from dual-core to quad core easily, it was a lot less power hungry, and it finally beat AMD. AMD was getting dangerous since they also bought ATi.
* Then came the Intel Core iX architecture that screwed everything. Server-wise it scaled from quad-core to 18-core, but for low TDP and high volume it was a complete mess. 8 Generations of Core iX with over 300 cpu models and they only doubled the performance.
* It was even worse for clients. You were guaranteed not to be able to upgrade your CPU, since newer CPUs depended on newer chipsets. So you had to buy a new computer for that 20% performance improvement instead of just upgrading the CPU.
Upgrades mattered, they would allow you to extend the life of a computer from 3 years to 5 years:
* You would start with a 33MHz 486 on Socket 3 and upgrade that to a 100MHz Pentium
* Buy a 66MHz Pentium and be able to upgrade it to 233MHz or even the crazy 550MHz K6-2?
* Buy a 180MHz Pentium Pro and upgrade it to a 333MHz Pentium 2
* Buy a 233MHz Pentium 2 and upgrade it to a 600MHz Pentium III (not more since the power reqs changed).
* Buy a 500MHz Pentium III and extend it to a 1GHz Pentium III.
Companies like Evergreen gave you even bigger performance jumps: you could go from a 16MHz 286 to a 48MHz 486, or from a 25MHz 386sx to a 75MHz 486. Those were 4-6 fold performance improvements. They don’t exist anymore since the sockets and the buses are proprietary. Back in the day, the bus designs were implemented in chipsets by Chips&Technologies, ALi, VIA, SiS, AMD, Intel, and NVidia, so you had competition.
VIA Technologies produced one of the most important chipsets in history, the MVP3, used for Super Socket 7. It was revolutionary: it supported DDR about two years before the first DDR products appeared on the market. VIA included the capability, but no motherboards implemented it. The AMD K6-3+ and K6-2+ could have been paired with up to 768MB of DDR200 memory in the era of 32-64MB PC100. Only in later K7 motherboard designs and late P4 designs did we see DDR.
I can’t understand why Intel still exists. They produce shit CPUs that are full of bugs, they innovate at a snail’s pace, and they engage in dubious practices to keep selling new boards without actually delivering an improvement.
“I can’t understand why Intel still exists.”…that one is easy, they got away with rigging the market for half a decade and only got slapped on the wrist!
Look at the transcripts or the coverage from the Intel vs. AMD trial some time and you’ll see that the 2-billion-dollar payout to AMD was a sick joke. They had: 1. bribed OEMs not to sell AMD, 2. paid benchmark companies to use their compiler, because 3. they rigged their compiler to detect non-Intel chips and bork their performance… and they got away with it scot-free. They paid less than what they made in 9 months of the big early-00s PC boom, while keeping all the profits they made from the rigging.
It would be like robbing a bank of a million bucks and being told you have to hand 10k back, and then you can go on your way… who wouldn’t take that deal? That’s why I was amazed when people were shocked that Intel is refusing to patch its own bugs in a lot of its older chips. I mean, why should they? They have already seen the law isn’t gonna do anything about them no matter what they do; hell, they could refuse to patch anything that isn’t under their 3-year warranty and I doubt they would even get a warning from the EU or USA… it’s a joke. Once companies get that size they are “too big to fail” and too big to bust.
I will give this to you, make of it what you will:
Intel has been pushed by/is intimately intertwined with secret services.
I know for a fact that Motorola was brought down by “an inside job/rogue management” (they had them drive into a wall – just Google ‘Iridium Motorola’).
In return, Intel has been providing “their friends” with not only back-doors, but undetectable surveillance capabilities (Google ‘Intel ME backdoor’ and read between the lines).
You have a similar constellation with Volkswagen and the German secret services (which is a bit harder to explain as you likely lack a lot of background information). Likewise, Volkswagen got away easily. In Europe, they barely touched them. Instead, they “retro-actively” changed the laws in favor of VW.
So now, don’t expect Intel to get hurt by “the authorities” (or even courts for that matter). They have powerful “guardian angels”, therefore they get away behaving like those mobsters.
Conspirationist! Heretic! Unpatriotic behavior! Anti-democratic thoughts! How dare you? Blah-blah-blah, you know the rest of the litany…
It was to be expected that this got censored.
You can tell as many lies as you want (see “Intel’s advertisement department” further above), but the truth cuts deep and brings up censorship reliably.
Because your competence in the computer architecture field seems to be stuck at the level of old 90s ‘puter ad-magazine fluff articles.
You are absolutely right. My expectations from Intel are completely unrealistic. I can understand that lithography has stalled. I can understand that the original quad core i7 from 2010 had 700M transistors and while Moore’s law dictated that we should have chips with 22B transistors by now, we are TDP limited to about 3B in a laptop friendly 45W.
But Intel should publicly acknowledge that the x86 architecture has reached its limits and adopt another CPU architecture that can scale beyond what they are offering currently.
They barely doubled the performance in the past 90 months and I haven’t seen them providing alternatives and it doesn’t look like they have any medium or long term solutions. They crushed competition with anti-competitive practices. That competition could have provided the innovation needed to take us through this slump.
AArch64 can buy us at most another 3 years.
You should understand the problem before you rush with an uninformed critique of the solution.
I recommend you start by learning about the differences between micro-architecture and ISA.
A LOT has changed in this field since the early 90s. Trust me.
There still exist quite a few architectures competing with x86. In raw performance, they aren’t faster…
And don’t assume it’s even likely “to take us through this slump” …all technologies eventually plateau (be happy it happens now when computers are fast enough for most needs and not in, say, Pentium 100 times)
An early Pentium 2 like the 233 would likely be on the LX chipset, which is limited to a 66MHz FSB, so no 600MHz Pentium III…
And if you loathe Intel for bugs, you should absolutely hate Via…
I’m sort of glad about what happened to them, after all the buggy chipsets we suffered.
K7 hit Intel very hard and their decision to go with Rambus didn’t help either. I was unclear, my bad.
Well, the naming scheme has to be confusing, because they have to somehow hide the fact that most advances over the last couple of years have been glacial. Also, to hide the fact that they artificially restrict chips by fusing off features.
It’s how they make money… you sound like you’d prefer for Intel to not try to extract as much as they can from buyers, and to give all the features they can. That seems kinda… ~communist
(ekhem
http://www.osnews.com/comments/30130 )
Thank you dear moderators. Kicking osnews.com off my bookmark list.