The disclosure of the Meltdown and Spectre vulnerabilities has brought a new level of attention to the security bugs that can lurk at the hardware level. Massive amounts of work have gone into improving the (still poor) security of our software, but all of that is in vain if the hardware gives away the game. The CPUs that we run in our systems are highly proprietary and have been shown to contain unpleasant surprises (the Intel management engine, for example). It is thus natural to wonder whether it is time to make a move to open-source hardware, much like we have done with our software. Such a move may well be possible, and it would certainly offer some benefits, but it would be no panacea.
Given the complexity of modern CPUs and the fierceness of the market in which they are sold, it might be surprising to think that they could be developed in an open manner. But there are serious initiatives working in this area; the idea of an open CPU design is not pure fantasy. A quick look around turns up several efforts; the following list is necessarily incomplete.
Absolutely. The thing is that companies will not do this, for a multitude of reasons, including probably some serious “security by obscurity” decisions they’ve made in the past, or just plain fear of competition from what others could gather.
Gotta keep that monopoly tight or the investors get nervous.
No one stops you from using some freely available processor, physical or virtual. There are several available today. Making a new one that runs in an emulator or on an FPGA is trivial unless you’re trying to do something innovative. Many people design processors for fun, from 8-bit CISC cores to 64-bit VLIWs.
(It is of course not trivial to make a realistic system for the masses)
But software is more expensive than the hardware in almost all cases. So you want the processor to run the software you need (or crave).
Open source software helps a bit. But you still have to conform to the expectations baked into the source code – commonly Unix systems (or close enough) with a lot of external software. That limits the practicality of changing anything, from how the processor works to what instruction set it should have.
This also limits what types of protection are reasonable. AMD64 dropped the (admittedly flawed) segmentation-based protection for a purely virtual-memory-based one – the one Unix and Windows NT (and most other systems) expect. Why? Nobody used it.
X86 isn’t a problem as such. The monopoly isn’t really one. Emulation of the instruction set is unlikely to violate any patents, and many parts of the ISA are already outside patent protection.
But what makes x86 still viable isn’t a form of monopoly, it’s software availability. X86 is supported by the majority of existing, still-used software. Alternatives aren’t – but again, if the limitations of current open source software are acceptable, it helps a bit. Not completely, though, and you’ll not hear that from OSS fanatics.
I think it’s somewhat irrelevant, since the vast majority of software is written in high-level programming languages and does not directly depend on a specific ISA. If somebody started selling ARM or SPARC 10x cheaper and 10x more performant than anything x86 could offer, then most software vendors would quickly rebuild and qualify their products for those platforms. It is all about market uptake and how many sales can be made.
I agree in theory, but in practice it isn’t as simple as that. High-performance software is likely to be designed around the SIMD vector support of x86, for instance.
Except that a very large number of programs written in high-level languages still use (either directly or indirectly) a not-insignificant amount of code written in assembly language, which is pretty much by definition not portable, and which quite often ends up being the performance-critical section of the program (that is, it’s in assembly because it’s performance critical). Go take a look at OpenSSL, FFmpeg, or pretty much anything else that does high-performance crypto or DSP stuff if you don’t believe me.
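To make that concrete, here is a minimal sketch (my own illustration, not taken from either project) of the kind of ISA-specific code such programs carry: the same byte-wise addition written once portably and once with x86 SSE2 intrinsics. The intrinsic version processes 16 bytes per instruction but simply won’t build for ARM or RISC-V; the function names are hypothetical.

```c
#include <stddef.h>
#include <stdint.h>
#include <emmintrin.h>  /* SSE2 intrinsics: x86-only, won't compile elsewhere */

/* Portable C: builds for any ISA, leaves vectorization to the compiler. */
void add_portable(uint8_t *dst, const uint8_t *a, const uint8_t *b, size_t n)
{
    for (size_t i = 0; i < n; i++)
        dst[i] = a[i] + b[i];
}

/* x86-specific: 16 bytes per add instruction, but tied to the ISA. */
void add_sse2(uint8_t *dst, const uint8_t *a, const uint8_t *b, size_t n)
{
    size_t i = 0;
    for (; i + 16 <= n; i += 16) {
        __m128i va = _mm_loadu_si128((const __m128i *)(a + i));
        __m128i vb = _mm_loadu_si128((const __m128i *)(b + i));
        _mm_storeu_si128((__m128i *)(dst + i), _mm_add_epi8(va, vb));
    }
    for (; i < n; i++)  /* scalar tail for the remaining bytes */
        dst[i] = a[i] + b[i];
}
```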
As an example, Windows doesn’t run on anything but x86 and ARM (today at least; back in the NT4 days it ran on almost everything), and there’s still shitty support on ARM from most vendors, despite the fact that ARM CPUs are already significantly cheaper than x86 and are generally higher performance-per-watt (which is what really matters for most of the big-name users out there who drive the industry).
“No one stops you from using some freely available processor, physical or virtual. There are several available today. Making a new one running in an emulator or FPGA is trivial unless trying to do something innovative. ”
My view is that, just like the CPU industry, the FPGA industry was driven into a strong oligopoly, with strong codependencies on the military-industrial complex.
The status quo of the Western electronics industry is not a coincidence – it never has been, from memory. Our fragility is deliberate, out of the ambition of very small, very deep-pocketed interest groups.
Diamond dust tomography existed well before the microelectronics industry.
Security through obscurity is only for the average Jane and Joe.
Only difference here is that Jane and Joe won’t know what’s wrong with your design.
Obscurity and secrecy are part of the big problem that has the West close to saying good-bye right now.
People won’t swallow pills, red or blue, if trust is DEAD.
I’d love to buy open source CPUs!! As the article mentions, this is not a new thing; in fact, vendors like IBM or Sun/Oracle have been promoting the open source approach to CPU development for at least 10 years (google OpenPOWER or OpenSPARC). Obviously nobody cares about POWER or SPARC anymore… and that’s why IBM/Oracle open-sourced them in the first place. There’s no value in them.
The problem is, x86 CPUs aren’t a commodity yet; there’s a lot of value in them, and people are willing to pay PREMIUM prices for new Intel or AMD “innovations”… so forget about Intel/AMD open-sourcing CPUs. xD
What about RISC-V? You have almost everything you can dream of to start with here:
https://opencores.org/projects
FreeBSD has supported RISC-V for some time now:
https://wiki.freebsd.org/riscv
Linux probably supports RISC-V as well.
But, AFAIK, there is no application-class RISC-V CPU in production yet.
Yes, Linux has preliminary support for RISC-V (it’s enough that you can boot a system and do some stuff from userspace, but it’s still not really feature-complete compared to x86, ARM, POWER, SPARC, or MIPS).
As far as not having any physical implementations, that’s not really an issue; RISC-V was designed to be essentially trivial to synthesize as a soft core on an FPGA or similar device. Despite that, I’m not entirely sure there aren’t any physical implementations: Western Digital has been talking about using it for their disk controllers (that is, the logic on the disk itself), and I very much doubt that they wouldn’t use an ASIC implementation of a RISC-V core for that.
I’m sure it has been done on a small scale, but there is another question: anybody feeling clever enough to release an open source x86 CPU design, verify it end-to-end, and then have it manufactured so that it can compete with Intel/AMD? There is nothing stopping you, really, apart from a huge amount of time, money and resources; then you could just give it away into the public domain and feel really smug (or stupid) about yourself. I’m sure Intel/AMD would love to manufacture this incredible CPU without having to spend millions on R&D. Thom, you can start the revolution and we’ll see how you get on.
Ever heard of patents, NDAs, royalties? The x86 ISA is under embargo by Intel; only AMD can use it, in exchange for 64-bit support in return. Why do you think Cyrix, VIA and others have quit the x86 market?
There’s a better chance with ARM, especially ARMv8.
Btw, Qualcomm will buy NXP, which bought Freescale some time ago. There’s a big semiconductor portfolio here; why not ask them to improve the 68k architecture and open it up?
Assuming there were no patents around x86, it would be difficult to come up with a brand new, open source design that could compete with Intel/AMD designs. They put a lot of investment into R&D and have very smart hardware engineers working full time. This is part of the reason other commercial CPUs (e.g. ARM) are not so prominent in the data centres. There is a lot of software that still needs good single thread performance and Intel/AMD CPUs tend to offer better performance at a better price. I really hope ARM will close the gap and we will see more ARM servers from various vendors. Now if you start looking at things like RISC-V, there is a lot of hype, but not many viable hardware platforms that can be used by enterprises.
Yeah, that will not happen. And while the 68k was a very nice design, I don’t see it as suitable as a new open ISA.
We already have open source UltraSPARC designs that are quite capable. The patents on the SH-2 have also run out.
And there is an open implementation of SH2: J-Core.org
Right. Why don’t we see effort put into these designs, which already have proven and mature hardware, OS, and compiler support?
Because until recently there hasn’t been much interest in open silicon outside of the hobbyist and research space. The angel funding behind J-Core and the commercial adoption of RISC-V are examples of this changing.
Sorry, this ain’t gonna happen. This is as smart as trying to make an open source space shuttle or Death Star… There are so many resources and so much knowledge needed to design, verify and manufacture a processor that can compete with anything from ARM, AMD or Intel that some random cave dwellers with the idea of making open source processors can never do it. In the big companies which design CPUs there are specialists even for “simple” things like transistor layout, packaging, integrated circuit routing, simulation design, verification design, verification of verification, etc. – this is not something that you can learn from Stack Overflow. This shit is done by multiple PhD-level folks.
There is also a lot of multi-million-dollar software required to simulate and design such a processor – and this software does phone home, so no company is stupid enough to release it to the public.
The current open source hardware “state of the art” is 50 years behind everything else.
People can always try to redo an open 6502 to start with, then improve the design iteratively over the next 200 years to reach something like the 286.
You should really check out OpenCores.org, because those 200 years went by pretty fast (https://opencores.org/project,lattice6502).
But you are right that it isn’t a 286. I couldn’t find an implementation of that there, so I had to settle for a 486 (https://opencores.org/project,ao486)…
I was the one providing the ‘opencores’ link above, so I know what is available or not. It was mostly to point out that if an open source core implementation has to be found, it is there already. The problem is not to have it and improve it; the problem is that there is no simple ‘make build’ command for that kind of thing.
OSS does not always pass conformance testing before release; I suspect there is more to the QC of a chip than just ‘release half-baked and patch later’. That’s probably why engineers get paid, rather than working from a teenager’s room in their parents’ flat. Real and ambitious fantasies require more commitment than just posting them on the internet.
Availability of 6502 is crucial if we want Bender and Terminators!
An RTL level design and a physical design are two different things. People have used open cores to run Linux on an FPGA. There’s nothing special in that. The problem is to make an ASIC that actually runs faster than 100 MHz.
I would argue that there are three thresholds when you are talking about fabricating CPUs: below 100 MHz, below 1 GHz, and above 1 GHz.
I agree that below 100 MHz is relatively easy, but the pool of people with sub-GHz experience is growing, and it isn’t that bad either. The real trick is GHz and higher.
But this is also where the open source analogy comes into play. Open source started fairly modestly, with compilers and other tools, but it grew until we felt confident using it for key infrastructure. I very much think open silicon is going to follow a similar trajectory.
It makes sense that in any field, as tools and experience increase, the difficulty and barriers to entry decrease. The larger an open source hardware community gets, the larger the projects it should be able to target.
I have no problem with the table and knife in the kitchen. And they’re quite old. I TRUST this pair.
The subject line says it all. We would be better off if one of AMD or Intel would open source their existing CPUs. But that is not likely.
The time and cost to develop the fabrication masks, plus the cost to set up labs to do the testing (testing prototype CPUs and sample CPUs), would likely run into the hundreds of millions.
And like every design, software or hardware, there are thousands of states in which the CPU can be found. A state is where the CPU is now after executing an instruction, plus what the next instruction to be executed is. Think of the states as a set of points in a graph, where a transition from one state to another is a connecting line. Some lines just can’t exist, and some lines are yet undiscovered.
Back to rolling your own design: you must find funding, and you can’t be in a hurry. The x86 started with the simple 8088 back in 1980. Almost 40 years later we still find bugs in CPUs.
A physical CPU is a finite machine, Isatenstein. If bugs remain in the 8088 DESIGN, it’s a matter of not enough testing.
Think of it as a very long, staircased signal processor.
Sounds like a great idea. Anyone have tens of millions of dollars sitting around to donate to fund this effort, including hiring people with expertise to accomplish this? I’m not aware of any chip fabs that are capable of 10-20nm production that are interested in volunteering their services, equipment, expertise, and materials for this venture.
Why are people downvoting your comment and others? Do they not believe in freedom of speech and giving everybody a chance to express their opinion without being labeled “Inaccurate” or “Troll”? Why does OSAlert (and other forums) provide tools to do such things? None of the comments in this thread are awful or offensive. When people attach negative labels/scores to other people’s views just because they disagree with them, they act in a very undignified way and should be ashamed of themselves.
Maybe some adults find juvenile sarcasm to be trollish enough for a downvote.
People who look down on other people don’t end up being looked up to.
It seems you live in a fantasy…
Oh well, good bye OSAlert. Apologies if I ever offended anyone here with my juvenile views. I should have done this much earlier.
I am not saying it is an identical argument, but it is similar to the argument from the early days of open source. You don’t need volunteers; money can come from a variety of sources, including commercial ones.
The J-Core project (http://j-core.org/) is being angel-funded to produce an implementation of the SH-2 processor (why? because all of its patents have expired), and now the SH-4 (since those expire this year). This provides a firm foundation for adding extensions for bit depth, coprocessors, etc.
Open CPUs will probably first find their foothold in the embedded space. For example, Western Digital – which ships over a billion ARM cores in its devices – announced its plans to transition to RISC-V late last year (https://riscv.org/2017/12/designnews-article-western-digital-transit…).
Eventually someone will see the opportunity to make a high end RISC-V, or J Core, or whatever and raise the money to produce the silicon.
fmaxwell,
That’s the problem. I think it would be possible to successfully design an open CPU in an advanced university setting, but many research universities will claim ownership over the results for themselves.
Maybe someone could do it at home if people were willing & able to donate a lot of time to the cause, but x86 would be extremely tedious since it has so much legacy baggage these days. Going for another architecture is possible, but then you’d need to compete on two fronts for both hardware AND software. Incompatibility with popular software has killed many alternative architectures in the past (including intel’s own itanium).
In any case, I think it’s feasible for an open CPU to reach its *design* objectives without too much investment, but then comes the problem of fabricating it. Nobody has fabs that can hold a candle to Intel, who have spent billions to build their mass-production facilities. It seems to me that on this level, an open CPU would be at a large technological disadvantage regardless of a successful design.
Edit: It would be nice if Intel itself would take an interest in an open CPU, but would that be in their business interests? Hypothetically it could happen if some governments started mandating it out of security concerns. We know that the security concerns are well-founded, but it’s not clear the government would be on the same page with regards to fixing them (i.e. NSA, GCHQ).
Don’t think that would ever happen unless the world forces them.
And in the end, neither Meltdown nor Spectre is x86-specific – Meltdown will get fixed, and Spectre solutions will apply as much to x86 as to any other architecture. So there’s no need to replace their winning horse any time soon.
Megol,
Meltdown is arguably an Intel-specific bug, not something other CPUs would necessarily be prone to (i.e. AMD doesn’t have that bug). Spectre is a more fundamental symptom of speculative execution. Eliminating those leaks is difficult because the mere fact that the speculation engine succeeded or failed to improve branching performance can be measured, and therefore leaks information about its state.
In theory, the only way a speculative engine could leak ZERO information is if it had the exact same side effects every time it ran, INCLUDING the time it took to run, but herein lies the dilemma! How can it take the same amount of time to speculate branches successfully versus mispredicting? What good is a speculative engine if it is not allowed to return a result early?
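To see why “can be measured” is the whole problem, here is a minimal sketch (my own illustration, assuming x86 with GCC/Clang builtins; the helper names are hypothetical) of the flush+reload timing probe these attacks are built on. Nothing privileged happens here; it just times one load.

```c
#include <stdint.h>
#include <x86intrin.h>  /* _mm_clflush, __rdtscp (GCC/Clang on x86) */

/* Evict a cache line before the victim code runs. */
static void flush_line(const void *addr)
{
    _mm_clflush(addr);
}

/* Time a single load. A fast result means the line was already cached,
   e.g. because speculative execution touched it. */
static uint64_t time_load(const volatile uint8_t *addr)
{
    unsigned aux;
    uint64_t t0 = __rdtscp(&aux);  /* timestamp counter read */
    (void)*addr;                   /* the load being timed */
    uint64_t t1 = __rdtscp(&aux);
    return t1 - t0;
}
```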
While true, you don’t state the general case: the only way no information leakage can occur is for an algorithm processing some data to always take the same time, always require the same amount of power, always load each part of the computer in the same way, and always leak the same electromagnetic signals.
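For the timing channel at least, that requirement can be met in software. Here is a minimal sketch (mine, illustrative only; real crypto libraries ship hardened versions of this) of a data-independent comparison whose run time does not depend on where, or whether, the inputs differ:

```c
#include <stddef.h>
#include <stdint.h>

/* Constant-time equality check: no early exit, no data-dependent branch,
   so the loop takes the same time for any pair of n-byte inputs. */
int ct_equal(const uint8_t *a, const uint8_t *b, size_t n)
{
    uint8_t diff = 0;
    for (size_t i = 0; i < n; i++)
        diff |= (uint8_t)(a[i] ^ b[i]);  /* accumulate differences */
    return diff == 0;                    /* 1 if equal, 0 otherwise */
}
```

(This says nothing about the power and electromagnetic channels you mention; those need hardware-level countermeasures.)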
The problem with speculative exploits isn’t so much that information leaks, but how fast and how exactly it leaks. If it took 100 years to leak a byte of data with an 80% chance of being correct, most wouldn’t have a problem.
Spectre can be solved by making sure speculative information isn’t leaked. That one can do timing attacks via caches isn’t a problem then, as the information bandwidth is severely decreased; it isn’t a problem in most cases.
One way to do this would be to ensure that speculative data reads can’t influence non-speculative data: a cache fill due to a speculative read would not cause non-speculative data to be evicted.
Dedicated resources for speculative data, preferably on each cache level, would mean there is no visible flow of data due to speculation.
One could still point out that even with such a design in place, one could in theory detect that data has flowed into the L3 cache, e.g. because a known fill by another processor sharing the L3 cache happens to be slower than it theoretically should be.
(Sorry for the wall of text!)
Megol,
Not at all, this topic deserves to be discussed at length. But I’m afraid there may not be a satisfying resolution this time.
If ever. :<
Megol,
You know what, Google would be in an excellent position to execute an attack. Their ads and beacons are so prevalent across the web that the odds are extremely high that a target’s machine is running Google’s javascript code.
It would be highly illegal of course, but just think of the possibilities if Google exploited their access. They could spy on prosecutors or obtain secrets from business competitors and even politicians. The attack leaves no traces on the target, and the network connections are routine outbound connections initiated by the user’s own machine. With traffic over HTTPS, it’d be hard even for qualified IT staff to notice anything is amiss.
Now you’re getting paranoid… (though at least you didn’t note how Microsoft could easily run a similar evil masterplan via Windows Update connections) Oh well, maybe that’s the times we live in. :/
zima,
Yes, it’s true, it takes a level of paranoia to believe they would do it… and I was this close to bringing up the device-update backdoors, haha! (It annoys me that Tim Cook and other executives sometimes go on the record saying things that directly contradict this fact; it proves they’re either misinformed or deceitful.)
Obviously the pushback would be brutal if a corporation actually got caught breaking into computers systematically. But there would be very little evidence if a small team operating in secret were conducting highly selective, targeted operations. Heck, a rogue employee with high-level access could pull it off.
I’m totally in paranoid territory here, but even an entry level employee or spy might be able to use a zero-day attack from inside the corporate firewalls to obtain the HTTPS keys needed to perform fully authenticated man in the middle attacks. The odds are pretty high that all major corporations have spies, right?
What’s needed is quantum encryption to prove nobody’s in the middle. I don’t believe there’s a way to copy entangled atoms, is there?
…but enough paranoia for today
I just read this: regarding the recent exploits, Intel is recommending partners stop applying the CPU updates until further testing, due to stability issues.
https://newsroom.intel.com/news/root-cause-of-reboot-issue-identifie…
There may be some choices:
1. Russia (after the incident with the USA over software privacy issues, they revived the Elbrus line). Last I checked, the recent 8S release was showcased last year in a functioning state.
2. China (they’re already working on their own CPU architecture after the incident with Microsoft and Intel products no longer being deemed “safe” by the Chinese government).
I’m not saying that any of these will be open source, but there are alternatives (from a hardware perspective).
The main problem remains software availability and compatibility.
Tens of millions is off by a couple orders of magnitude.
Tens of millions of dollars is not the case.
RISC-V implementations range from 120 nm down to 7 nm. TSMC 16 nm was used a few times with RISC-V in 2017.
https://cseweb.ucsd.edu/~mbtaylor/papers/Celerity_CARRV_2017_paper.p…
The above example is a small-batch 16 nm production run in 2017.
So at 16 nm, small-volume cost is not horribly bad; we are talking well under the one-million-dollar figure.
The reason for jumping over 10 nm straight to 7 nm is that TSMC is after items to produce to test out their new production lines.
The reality was that RISC-V production was at 16 nm in 2017. RISC-V production will be at 7 nm in 2018, at least for some projects.
The reality is that, with the automation, turning out BOOMv2 on 28 nm took only two people, one technical and one management. Even the 16 nm chip took only a team of four.
One of the shocking effects is that, due to RISC-V being open hardware and using a newer tool called Chisel, the personnel cost of doing a production chip is way lower.
https://riscv.org/wp-content/uploads/2015/01/riscv-chisel-tutorial-b…
Basically, a job that would require hundreds of staff only requires a handful with the automation Chisel provides. It’s quite surprising how much of the cost of silicon design was humans doing tasks that could be automated.
How does open source address the root cause of Spectre and Meltdown? The same performance strategies (speculative/out-of-order execution) could just as likely have been employed by engineers working on open source. Linux uses a monolithic kernel that is probably inherently less secure than a microkernel, and did so for feasibility/performance reasons. The same mission-goal trade-offs will happen no matter what the governing IP license is. Agreed, the ME’s secret Minix running at a “-1” security level would not have happened in secret. So I am all for open source, but let’s be realistic about what it fixes and does not fix.
As far as I’m aware, open source x86 will be viable in two years’ time when the AMD64 patents expire. I’m pretty sure all the patented features since then are for CPU extensions, and anyone can develop a vector-maths co-processor unit to handle a lot of that lifting. Things like alternative memory-interconnect types shouldn’t be that important. Can someone who actually knows about CPU design jump in? In terms of getting that magic 99% of software to work, the base AMD64 extensions should be enough. Heck, 32-bit x86 is already patent-free.
It could be problematic to create non-compatible extensions for several reasons, including encoding space. The VEX encoding and similar are still patented and will be for a while.
X86 is also a complicated target. Intel, AMD and VIA have learned how to avoid corner cases and handle quirks without paying too much in performance/efficiency. Any newcomer will need several years before they begin to understand how to implement things properly.
The best bet would be a translation-based design that decodes x86 instructions into an internal instruction set. That would still be harder than doing something else. Transmeta didn’t succeed in the market, partially because things were more complicated than they thought at first.
RISC-V has succeeded in getting support from more groups than any other open processor design; in fact, that is one of the problems with it IMHO: the extensibility and willingness to extend the ISA can lead to a family of only partially compatible ISAs.
I still think that RISC-V is the only real alternative in the near future. It’s a boring design though.
No, it isn’t time, and for a couple of reasons:
1) The many-eyes fallacy – we know from recent experience that many eyes don’t prevent major bugs – Heartbleed, Shellshock, etc., all major bugs that many eyes didn’t discover. The fact that virtually all architectures with this type of speculative execution are vulnerable means a significant share of the people actually capable of understanding the tech were involved, and all independently designed a variety of different processors that are susceptible to the same vulnerability.
2) Open source software means an intrepid developer can release his own patch, and everybody can patch their software against vulnerabilities whether or not the patch is accepted upstream. This isn’t something that applies to processors. You can’t patch a processor in the same way. Even engineering a patch would take an exorbitant amount of work from a large team, as the whole processor design would likely have to be reconsidered.
Now, processor design can benefit from open source – Intel’s AMT stuff and similar technologies should definitely have open specs, and the software running on them should at least be user-replaceable, if not open source in its own right. But actual silicon design just won’t see the same benefits from open source that software does.
That’s not a problem with open source; that’s just a chicken-and-egg problem. You can’t patch a processor in the same way because those processors weren’t designed in an open way that would allow people interested in the problem to work on it.
It would behoove us to have an open source architecture more widely in use, if nothing else to mix things up in the embedded space. RISC-V seems well underway there, and the real benefit would be to have a useful, power-efficient SoC for all sorts of purposes. Can you imagine that running Contiki OS?
Obviously that would make a nice alternative to the firmware currently running our x86 systems. If one needed a reasonably fast x86 processor, I’d think an instruction translation unit for x86-64 melded to an OpenSPARC core would be interesting.
I’d like to see a solid RISC-V core with an ample FPGA setup! Hello hardware acceleration!
OK, so aside from simply being open, what would be the technical advantages of RISC-V compared to other established ISAs? If we ignore licensing costs, are you saying RISC-V would be faster and more power-efficient than ARM CPUs? I’m not a hardware engineer, but I very much doubt it. I suspect there is interest in RISC-V, but mainly from companies that want to save money on ARM licensing. They will contribute just enough to get a CPU design for their specific needs, but they won’t drive the technology forward, since it wouldn’t make them any money. If you ship this CPU only in your disk drives, why spend money on R&D for use cases in data centers? And because RISC-V uses a BSD license, how many of these companies will release their designs/modifications?
The advantage to RISC-V is that it’s a very cleanly designed architecture, with a capable base and several optional extensions, that lends itself to highly power-efficient, small, simple designs. It’s not going to compete with x86 or POWER anytime soon, but I’m thinking there are plenty of niches for it to dominate.
Dasher42,
For my needs (fast servers), I’d love to have cleaner architectures like RISC-V, but I need the high performance and commodity pricing that generally only come with economies of scale. I believe most of the market won’t budge until new architectures can beat the support, price and performance of incumbent technology. The problem is that it’s difficult to deliver any of these up front: widespread support is non-existent before popularity, prices are high before economies of scale, and performance will suffer without access to the best fabs.
Maybe you are right that RISC-V can come in and fill a niche that is being ignored; you’ve got to start somewhere, after all. What would some of those niches be? It may not be fair, but even if RISC-V is the better architecture, there are large economic challenges to overcome before the world may be ready to fund it significantly.
Edit: bear in mind that I actually want alternatives to succeed. But if we cannot overcome these challenges somehow, then RISC-V could end up another market failure. How do we solve this?
Fair enough! You’re in the domain of the most performance per watt, at scale, with server farms. Here’s where I see RISC-V getting its start:
Embedded microcontrollers and sensors. This is the kind of market where tiny versions of 68k processors are still prolific. I’m thinking smart grid, automation, that sort of thing.
Firmware. We need an open-source version. Further, some emerging architectures build processing units into other parts of the system, like inline with the RAM, and decentralize it. Why not RISC-V?
From there, one can imagine smartphones coming into play. ARM is alright, but I think the Meltdown/Spectre moment is coming for these highly integrated phone/broadband chipsets. RISC-V could have a role to play here.
By this point in evolution, more performant and parallelized RISC-V implementations could crack the server farms, starting small and working up.
Just my forecast.
IoT is waiting for this…
and in the meantime I have fun playing with my R10000 MIPS64 SGI Octane: https://www.youtube.com/watch?v=AU_RV8uoTIo
and here is the MIPS64, R10k SGI Octane video I was working on: https://www.youtube.com/watch?v=AU_RV8uoTIo
Look at the little spinning top! Look at the little spinning top! Guille! Guille!
Like the article said, there are already some that exist…
The big problem is the money needed to make a CPU.
It’s like Western Digital switching to RISC-V for a lot of things; this saves Western Digital a lot of licensing payments.
RISC-V has already been taped out at 7 nm.
We know the absolute smallest is 0.1 nm, and that is a single carbon atom. A single silicon atom is 0.2 nm.
Even with carbon, it is a question whether electrically based chips can get below 1 nm.
The thing to wake up to is that when we hit the limit, there will no longer need to be massive ongoing investment in new fabs. Instead it will come down to optimising and cost cutting.
The problem manufacturers are facing is crosstalk.
When lines are at 7 nm, there is a problem of induction causing crosstalk.
As the cells drop in size, so must the CPU voltage, and silicon or germanium conductivity problems arise.
Perhaps it will be a reality, but I think it will not be for several years, perhaps even a decade.
And at half that size, electron tunneling starts to become significant. Tunneling is already an issue at current sizes for current leakage, leading to higher power usage.
Hi,
For CPUs there are two very different categories. There are high-performance CPUs (e.g. with out-of-order, speculative execution, etc.) which have become so complex that making them open source wouldn’t make any difference at all (because almost nobody would be able to fully understand them or find the “security vulnerability needle in the haystack” even if they bothered to look). For these CPUs the only thing open source would do is increase the cost of hardware (e.g. manufacturers like Intel going nuts with extensions and patents to protect their ability to fund the R&D needed to continue improving performance).
Then there are low-performance CPUs (typically embedded in things like microwave ovens, hard disk controllers, etc.). For these CPUs it’d make no difference (for security) whether they’re open source or not, because they’re so simple that there wasn’t a security problem in the first place. The only thing “open source” does in this case is save the manufacturer a small amount of $$ (e.g. not having to pay someone like ARM a small licencing fee). This is exactly what we’re seeing with RISC-V: Nvidia embedding it into proprietary GPUs (and using it for job control, with proprietary firmware) to save themselves a little $$, and Western Digital embedding it into some hard disks (with proprietary firmware) to save themselves a little $$.
Mostly the article is marketing hype – open source advocates using fear (created by spectre/meltdown) to peddle snake oil to fools.
– Brendan
Brendan,
I agree with you that the chip manufacturers may have trouble finding ways to make open hardware fit their business models. However, your last point went downhill. Open hardware advocates are most certainly not “peddling snake oil to fools”; that’s an insult. You should be fair and admit the call for open hardware existed long before Spectre & Meltdown; these recent incidents merely brought new media attention to the problem.
For me personally, I’ve long wanted control over my CPU’s proprietary management processor. It was revealed in the past year that Intel CPUs with AMT/vPro had some pretty serious vulnerabilities that remained open for about a decade. Openness is not just about security, though; there’s a lot of potential for owners to make better use of their processors than Intel offers. For instance, I’d prefer the AMT to be accessed through a VPN. I would have implemented this feature myself if Intel didn’t block me from doing so on my machines. This would add considerable security over Intel’s stock software, but of course Intel’s CPUs are cryptographically locked to its closed & proprietary software.
Going beyond Intel, another issue with proprietary hardware is being dependent upon vendors for any updates (i.e. drivers/firmware). I’ve encountered this problem repeatedly, and it infuriates me knowing that the manufacturers will neither fix it themselves nor allow others to fix it for them. Oh, how I wish open hardware advocates were in a much stronger position to demand openness from all our hardware vendors.
“The only thing “open source” does in this case is save the manufacturer a small amount of $$ ”
Every coder likes to build on firm foundations; think of energy plants, electric grids, hospitals, banks, etc.
Whatever needs massive performance is not going to choose an open RISC architecture, but an open MISC parallel-computing architecture.
Whenever security flaws are detected there is a common cry for Open Source.
But there is nothing “hidden” in the latest security flaws detected. The behavior of the CPUs (x86, ARM, Power) is well described and could be exploited.
The ARM manual even states that speculative cache filling is not considered a security problem!
Maybe the ARM architecture is not “Open Source”, but there are at least three companies with an Architecture License of ARMv8-A (AFAIK: Apple, Qualcomm, Nvidia).
All of these companies have massive CPU know-how and did not add protection against these exploits.
So how would “Open Source” help?
What a stupid question.
“It wasn’t open, and that did not help. How would opening it up help?”
How “open” is open? Even though the Linux kernel is “open”, there are only a few people in the world who understand parts of it, and even fewer who understand all of it.
The ARM architecture is closed to the wider public, but at least a few engineers have a full understanding of it.
Even though it was open to them, they did not see the problems.
Or, as I cited from ARM documentation, did not consider it a security problem.
This is the problem: with RISC-V there are more people who fully understand its cores than there are with a full understanding of ARM cores.
Interestingly enough, the RISC-V BOOM cores were designed without the defects, as were Qualcomm’s out-of-order ARM64 chips. So not everyone making ARM chips agreed with ARM that it was not a problem. Qualcomm redid their out-of-order design starting from an A55, very much like how BOOM is derived from the RISC-V Rocket.
The Linux kernel is an insanely complex beast due to the number of platforms it in fact supports.
We are starting to see RISC-V designs with parts that address issues that could not be worked around when using ARM or x86 in multi-core.
Maybe there is an advantage to opening up your ISA and CPU design to universities, whose silicon design courses will dissect it over and over again, producing a mammoth number of people who really know how it works.
RISC-V, being open hardware, can’t really be compared to the Linux kernel. You don’t have universities dissecting the Linux kernel over and over again as coursework the way they do RISC-V.
I agree (for the moment). If RISC-V gets as complex as a Cortex-A75, then that might stop.
Thanks, oiaohm. I knew the problem was known, or at least suspected, even at Intel.
Stakeholder pampering, maybe :/ A probable cause of INACTION.
Massive kernels are not the future. Hardware will evolve around this.
DeepThought,
If I recall, they said cache timing was out of scope in the context of an ASLR address leak. But that’s really a different beast from Meltdown and Spectre. I’m not aware of any documentation that would even imply the Meltdown behavior; I’d honestly be surprised if it were explicitly documented. If it is, though, can you cite exactly what you are referring to so that we can read it?
https://www.youtube.com/watch?v=f-b4QOzMyfU
The video above shows a planned 7 nm RISC-V chip with 16 primary cores and 4096 other cores.
Do watch it at the 19:15 mark and note that their plan includes doing the GPU in the 4096 cores. So it’s the RISC-V ISA for everything. All those cores are to be BOOMv2-based, i.e. out of order.
So this is going to be a very interesting chip in 2018. This could also explain why Intel is paying AMD for a GPU to embed with their CPUs.
https://www.youtube.com/watch?v=toc2GxL4RyA
This is the BOOMv2 at 28 nm: two people, two months, and chips produced. Also note that the Rocket chip, the in-order design, shares its core design with the BOOMv2.
https://www.youtube.com/watch?v=ZOGpNTn1tpw
This label system is also very interesting: it fixes the problem of why you cannot do dependable real-time on ARM or x86 multi-core, and it has already been tested with RISC-V.
Also, it gets interesting when you realize that the person designing the RISC-V vector extensions also designed AVX for Intel.
Of course, none of this has the current Intel, AMD, or ARM CPU design issues.
Interesting, regarding “doing the GPU in the 4096 cores”… so it seems that RISC-V will successfully do what Intel tried, and failed, with Larrabee.
The whole discussion has led me to presentations on the Mill architecture, which made my weekend.
They support software speculation and found out that their compiler (or rather, the last-stage program loader/specializer) was indeed affected by variant 2 of Spectre. As a result, they fixed the problem in the compiler.
How cool is that?
Mill is really interesting on paper, and the people behind it have impressive pedigrees, but to my knowledge there are no implementations (hard or soft) that independent people can test. As far as I can see, the promised 2017 demo didn’t happen, though I would love to be proved wrong on that.
I keep an eye on them but until there is some independent analysis I wouldn’t get too excited.
Still, reading their papers and watching the videos is very intellectually refreshing.
Open processors are a wonderful idea!
Open source hardware is way worse than proprietary… the Linux community has no idea they have hackers working with them.
You do know that open source projects such as Linux do have source control, right? So even if they may have hackers working with them (and your use of the word in this context implies you’re a bit of a numpty), we can see the source they contribute and figure out what it’s trying to do.
There are several examples that illustrate that the many eyes -> shallow bugs idea is not true.
OpenSSH is a good example that most technical people remember. But it isn’t the only one.
Properly executed, a weakness can be introduced by an undercover operator “accidentally” doing something that can be used later.
E.g. if someone in a TLA has detected the possibility of a Spectre-type attack, just write a piece of code so that GCC will compile it to a known weak spot -> success!
And nobody would blame the operative, so they can keep contributing, together with the other aliases of that person or group.
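To make the “known weak spot” idea concrete, this is roughly the shape of the bounds-check gadget described in the Spectre paper (variant 1); the array names are illustrative, and the point is how innocent it looks in review:

```c
#include <stddef.h>
#include <stdint.h>

uint8_t array1[16];
size_t  array1_size = 16;
uint8_t array2[256 * 512];
volatile uint8_t sink;

void victim(size_t x)  /* x is attacker-controlled */
{
    if (x < array1_size) {        /* looks like a correct bounds check */
        uint8_t v = array1[x];    /* runs speculatively even when the
                                     branch is mispredicted, so x can
                                     effectively be out of bounds */
        sink = array2[v * 512];   /* which cache line gets loaded now
                                     depends on the secret byte v */
    }
}
```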
Care to provide some proof?
Sure, you can open source a design on paper, but that’s only half the battle. The other half, which Intel develops, is the means and methods of manufacturing that chip: thousands and thousands of hours of testing and validation, and millions of dollars of equipment tooling and process testing. Who is going to do that for a design that they don’t own as proprietary?
Semiconductor manufacturing is a massive undertaking. It’s not easy. It’s amazing that they have the control and quality over their products that they do. Intel and AMD deserve to be well paid for their efforts, regardless of this Spectre issue.
Not really.
https://en.wikipedia.org/wiki/Product_binning