Microsoft has ported Windows 10 and Linux to E2, its homegrown processor architecture, which it has spent years developing mostly in secret.
As well as the two operating systems, the US giant’s researchers say they have also ported Busybox and FreeRTOS, plus a collection of toolkits for developing and building applications for the processor: the standard C/C++ and .NET Core runtime libraries, the Windows kernel debugger, Visual C++ 2017’s command line tools, and .NET’s just-in-time compiler RyuJIT.
Microsoft has also ported the widely used LLVM C/C++ compiler and debugger, and related C/C++ runtime libraries. The team wanted to demonstrate that programmers do not need to rewrite their software for the experimental chipset, and that instead programs just need to be recompiled – then they are ready to roll on the new technology.
I had no idea Microsoft was working on its own instruction set – even if only for research purposes. The Register has some more information on what E2 is like.
The Register understands from people familiar with its development that prototype E2 processors exist in the form of FPGAs – chips with reprogrammable circuitry that are typically used during the development of chips. For example, a dual-core implementation on Xilinx FPGAs exists, clocked at 50MHz. The team has also developed a cycle-accurate simulator capable of booting Windows and Linux, and running applications.
Qualcomm researchers were evaluating two EDGE chip designs with Microsoft: a small R0 core, and an R1 core running up to 2GHz fabricated using a 10nm process. The project, we must stress, is very much a work in progress.
It seems to be a radical departure from the norm, and I’m very interested to see where this will lead.
It’s not that much of a departure – ARM have a 2016 whitepaper on using an FPGA to prototype Cortex-A SOCs.
I have a vague recollection that the M0+ was originally designed to be put into FPGAs – but I can’t find any reference to that specifically. (The Wikipedia page on the ARM Cortex-M suggests it’s the M1 that targets FPGAs, and that Altera, Microsemi and Xilinx have soft cores available for putting into their FPGAs.)
There is also the j-core open processor project, which has implemented the SH2 (with a few extra bells and whistles) in VHDL.
Their plan is to do a version of the SH4 at some point.
Using an FPGA is quicker, easier and cheaper for prototyping than purpose-designed silicon (but will run slower), and far more representative than writing a simulator.
FPGA is an irrelevant detail.
The interesting part is the EDGE architecture and the fact that MS can boot Windows and Linux on it.
BTW there is a commercially available EDGE/TRIPS-like Russian processor Multiclet – http://multiclet.com
EDGE seems like something that works best on ultra-parallel code at low power levels, but is likely to have trouble achieving competitive single-thread performance. Though we don’t know anything specific about MS’s implementation, so they might have ways of working around that.
Yeah, I fail.
A few have run Linux on their own ISA – but barring emulating an existing ISA, nobody other than Microsoft is going to get Windows to run on them.
SH4? Hooraaaay!
But… there was also a 64-bit SH5 down the pipe – where is it?
The last patent on the MMU of the SH4 ran out last year, which was the reason they hadn’t done it… (Although the last I saw, they were planning on implementing something simpler.)
And just having a standard, reconfigurable core that can run Linux is useful.
The use of FPGAs isn’t the point of the article. FPGAs have been used for decades to prototype chip designs. The salient points, and the departure from the norm, are the design of the chip and the compilers, and how they do scheduling and branch prediction differently.
Sadly j-core has been very silent of late, making me worry whether they are still active. There has been no activity on the mailing list since February.
Their SH2 work is out there in the open, but I am not sure if we will see their SH4 implementation.
“Your booting image is freeeezing”. Nooo.
Feel free to fling as much sh*t my way as you want, but this is a really REALLY bad trend we are seeing. Why? Because x86 is about as bog-standard and open as you can possibly get, and it looks like APPL and MSFT want to replace that with locked-down ARM chips that run only on approved hardware with their approved OS that can only get programs from an approved store they control.
Say what you want about x86, but we have 3 CPU vendors (for those that do not know, Centaur is still alive and doing fine, they are just focused on embedded and car computing) and plenty of vendors making everything from mobos to RAM sticks to GPUs and laptops, and by and large it’s all compatible. If I buy a new Windows 10 box and think that Win 10 sucks? I can run literally dozens of different OSes on that hardware with ease, and I can also mix and match parts from dozens of vendors and it will work just fine.
But what we are seeing with APPL and MSFT is a trip back to the 1980s, where everybody ran their own custom CPUs, nothing was compatible, nothing was upgradeable, and it was all a proprietary nightmare. Most of the younglings here don’t remember those days but I do, and I can tell you they SUUUCKKKEEEDDD: the amount of e-waste was truly insane, as switching brands meant replacing your entire setup, from external floppies to printers, because NOTHING worked cross-platform, and the vendor lock-in meant that just switching OSes could cost you thousands of dollars.
So before you ARM guys cheer about “Yay another ARM conversion”, ask yourself this: is the switch to ARM we are seeing now giving consumers more choice or less? More control or less? Because it sure doesn’t look like it’s doing the world any favors from where I’m sitting, not when I’m sitting next to two 2008 desktops both running 2018 OSes while half the phones my carrier is selling RIGHT NOW can’t even run the latest release of Android and most likely never will, thus making them insecure before they are even sold.
And remember, just because MSFT has multiple OSes running on their test bench doesn’t mean they will let YOU run any of those; see the difference in privacy controls between Windows 10 Home and Windows 10 Enterprise to see how that works.
Completely agree. Couldn’t have said it better. I’ve watched this trend appearing for the last couple of years and it doesn’t instill any confidence whatsoever. It’s obviously about control and lock-in, two things that shouldn’t be allowed in general computing.
Microsoft are heading towards a subscription setup, making folk pay every year to rent their OS. No thanks!
Go ahead and try making a modern x86 chip right now and see how “open” it is. You will get sued into oblivion by Intel in a heartbeat. The good patents, the ones that actually matter in a modern core, are nowhere near expiring.
Licensing ARM is at least possible, licensing x86? Not in a million years.
x86 has 2 vendors (no, Centaur really doesn’t count, they basically make Pentium IIs) and that is literally all that there will ever be. So we are eternally 1 bankruptcy/buyout away from a full on monopoly.
ARM has Samsung, Apple, Broadcom, NVidia, Qualcomm, just to name the ones that have full architectural licenses. If you count the ones doing bog standard ARM, its more like 50 companies.
I don’t know, ARM sounds a lot more open to me…
ps. – Of course none of this has anything to do with this Microsoft EDGE stuff (it’s not even remotely related to ARM), but you did bring ARM into this…
ps.ps. – As far as Microsoft and Apple locking down their platforms to only run approved stuff, that literally has nothing whatsoever to do with ARM or x86, and is equally possible to do on either (and has already been done on both)…
Uhhhh News Flash: the ARM corp’s ENTIRE BUSINESS MODEL is to sell designs that companies can lock down, be it APPL, MSFT, GOOG, etc etc etc.
That is their entire model, man; it doesn’t matter how “open” you think the ARM CPU is, its entire model is based on selling to corps that lock it down with proprietary drivers and binary blobs. It’s nothing but a black box!
Meanwhile with x86 I can run BSD, OSX, Windows, Linux, ReactOS; nobody can lock my bog-standard x86 hardware to a single vendor, so I as a consumer have OPTIONS. The same goes for hardware: I can buy RAM from multiple vendors, GPUs, PSUs, HDDs or SSDs, and mix and match as I please. Hell, even with laptops it’s not hard to get RAM, HDDs, SSDs, and replacement parts like screens, and plenty of shops will do it for you if watching a YouTube “how to” vid is too difficult for ya.
Compare this to ARM: what are we getting? Other than those toy maker boards, nothing but more and more locked-down, locked-in, vendor-locked bullshit, that’s what! We are getting tablets sealed with epoxy and cellphones you can’t even get a single OS upgrade on, neither of which lets you change the battery. It’s literally EXACTLY what Stallman talked about in his “black box computing”, and I’m the Windows guy that always considered Stallman a whackjob, so when even I notice this crap? It’s pretty damned bad!
What we are creating is an e-waste nightmare, systems literally designed for the garbage dump because you can’t even get patches for units being sold TODAY, and instead of people going “okay this is REALLY stupid and needs to stop”, what we have is GOOG, APPL, and MSFT coming out with their own vendor-locked units to make an even bigger proprietary mess!
I personally don’t give a flying flipping f*ck if ARM is the second coming of Christ. What I DO care about is that with the old system, if something broke or a vendor refused to support it? Well, who cared, I could go somewhere else, as the former Vista box running Win 10 and the former Win 7 box running Ubuntu under my desk can attest. But with ARM devices? I already have a mound of e-waste in my work drawer ready for the dump, NOT because the hardware is busted but because no current OS will run on them and replacing the soldered-on proprietary battery would cost more than they’re worth… that isn’t good for our planet, man; it’s only good for corporate profits.
Stop buying shit hardware aimed at consumers, then. I have an ARMv8 server with a standard boot process (PCIe+UEFI), standard console (USB-serial 115200 8n1), real gigE, SATA II, and USB 3, all with open drivers, and it even takes standard ECC DDR3 memory. There is a standard “OS platform” that software devs can target for 64-bit ARM much like for x86. Windows (apparently internal builds for now), Linux, and FreeBSD all work, and other BSDs are coming along. I can run Firefox, albeit a bit slowly, over x2go, plain X11 remoting, or XRDP. At this point all that’s missing is CPU cores with a bit more performance and at least 16 lanes of PCIe to drive a graphics card, and you have an open workstation platform.
I have no idea why this is voted down. Does the fact that OS/Vendor neutral ARM boards exist offend someone? Because they do in fact exist…
Yeah, they have a way to go no doubt, but all the infrastructure for them exists now. Even Linus has said on a few occasions lately he is happy with how ARM is progressing on this front (and he wasn’t very happy about it previously).
The downvoters are probably also ignoring the fact that all those companies making standardized hardware *really* don’t want to lose market share, and that any of their hardware supported in the mainline Linux kernel *already* works on ARMv8 if the silicon’s there. That covers everything from SD/MMC chips to high end GPUs and 40GbE NICs.
The keyword here is can. The point is you can lock down x86 just as effectively, and the tools to do so have been in place for years. If you want to be the next APPL or MSFT and create a locked-down software ecosystem, x86 will do the job just fine. Those companies just haven’t fully committed to doing it yet, for business reasons, not technical ones.
Look at the iMac Pro for instance… That is an x86 machine, and as far as the hardware goes, it is capable of being locked down just as much as any ARM device Apple builds. The only difference is they still allow users some toggles to disable specific protections, because of OSX’s pesky legacy software distribution model.
Look at any modern x86 motherboard… If you could not disable SecureBoot in the BIOS, how open would it actually be??? Microsoft can (and does) restrict certain x86 versions of Windows to only running signed executables. They have chosen not to do that on all versions yet. That isn’t openness, that is just a matter of self-interest in placating their existing userbase. It’s just a matter of time, really.
Both platforms have binary blobs for lots of their drivers (wireless, GPUs, etc.). Both platforms currently have no 3rd-party chipset support (it’s almost all first party, with ARM being SOCs for the most part). Both platforms have security enclave chips specifically designed to facilitate locking them down; all it takes is an OS that chooses to use those facilities…
This idea that x86 is “open” is a complete illusion. It’s not open; it has all the bits required to lock it down just as hard as (or even harder than) ARM. The only reason it isn’t locked down already is legacy software inertia. That is not a property of the hardware platform, it’s just historical baggage.
There are also lots of “open” (in your terms) ARM boards out there. There is just not much market for most of them yet, but there are at least 20 I can think of, some of them pretty performant (nVidia Jetson TX2). The point is it’s easy to make an “open” ARM board: just don’t lock it down!
The only difference between the 2 platforms from an “openness” point of view is that anyone can license ARM, while x86 has only 2 manufacturers who maintain a duopoly over it (VIA/Centaur really doesn’t matter unless you want to buy a CPU for a toaster)…
It won’t be long before the SSE2 patents expire, at which point you’d only have to license x86-64 from AMD to make a chip. (SSE2 was first released in 2001, but I believe the patents were filed in 1999 – and US patents run 20 years from filing.)
Granted, you wouldn’t have AVX or any of the other extensions, but if you do something nifty (and popular enough), you might convince Intel to share.
It’s possible. Not likely, but possible.
You’re leaving out SSE3, SSSE3, SSE4, SSE4.2, SSE5, AES-NI, CLMUL, RdRand, SHA, MPX, SGX, XOP, F16C, ADX, BMI, FMA, AVX, AVX2, AVX512, VT-x, AMD-V, TSX, ASF….
But fair enough, you don’t actually need ALL of those to make a competitive x86 product (hell, you probably don’t even want a few of them), but the first few are getting pretty damn important.
However, at some point the lack of some of these ISA extensions is going to become very problematic for any new chip, unless it has some serious market momentum to strongarm its own replacements into the market (and the majority of the market is resistant to such extensions – just ask VIA).
As for AMD being able to license ANY of their x86-related stuff themselves, the rumor is that this is forbidden by their cross-patent agreement with Intel (a rumor – no one really knows for sure because it’s secret). In other words, getting an x86 license from AMD more than likely requires a blessing from Intel too (and the x64 patents are pretty essential, with about 5 years left before any of those expire). VIA has an x64 license because of an existing licensing agreement; how long that will go on is anyone’s guess – probably as long as they pose no real threat to Intel or AMD.
In short, most of the x86 patents might have expired already, but they are rapidly getting replaced by new ones that will serve as hurdles to any serious effort by a 3rd party…
There are two aspects.
– The architecture licence and the new instruction sets. In exchange for the 64-bit mode, AMD got free access to Intel’s instruction set extensions. This agreement is non-transferable and would need to be renegotiated if AMD is absorbed (by Samsung or Apple, for example).
One of Itanium’s goals was to screw AMD and make an ISA no one else could implement. Luckily it failed miserably.
– The microarchitecture patents. All the crazy complex stuff about prediction, prefetching, and all the special optimisations, some of them specific to x86, some needed for all high-performance CPUs.
I am afraid that making any modern CPU will step on a few patents from Intel, AMD, IBM and ARM, just like GPUs were totally dependent on patents from nVidia, Silicon Graphics, 3Dfx, ATI and ImgTech (and FPGAs depended on Xilinx patents).
I wonder if Apple’s secrecy – never bragging about their achievements in publications or at Hot Chips conferences – is a way to silently infringe on others’ patents when they design their CPUs.
I wonder what allowed quite a few companies to license x86 in the past? There seem to have been around a dozen making x86 chips in the 80s and 90s…
But the margins were great, and that is what matters.
Are you truly implying that we should never explore alternatives in computing? You’re stating that the x86 instruction set is all we will ever need. That’s, to put it simply, idiotic.
Nothing in this article states that Microsoft is going to force a replacement of Intel chips. Nothing in the article states that what they are doing is anything other than (at this stage) pure research.
If the article stated that Bell Labs had been magically resurrected and was working with Linus on the same exact chip design, instruction set, and compiler changes, I’m sure your commentary on the situation would be completely in support of it. You saw “Microsoft” and flew into a frenzy.
Can you blame him? Looking at the history, isn’t such a reaction well deserved by Microsoft’s own actions? You wouldn’t expect a murderous drug addict with numerous repeat offences to suddenly start doing things for the good of humanity, would you? It’s only natural to distrust Microsoft. It would be very naive to think otherwise.
Bell Labs is still alive, owned by Nokia.
https://www.bell-labs.com/
GRR!!! x86 and ARM have only a little to do with it. It’s about the other standards in place around them: how to boot and how to find peripherals. In your standard x86 setup, those are BIOS/EFI and ACPI.
There is a similar ARM standard that uses EFI and ACPI, which has very little hardware adoption at the moment, for reasons that confuse me. But RHEL and Windows are supposedly on board for OS support.
So it’s not the x86 instruction set that’s magical; it’s the standards and the industry that surround it. ARM could have that, and so could this EDGE thing.
If Microsoft move the state of the art, then they are entitled to a lock-down so they can recover the investment. Otherwise why would they (or anyone) bother, and we would all be stuck in the age of the abacus. This is IP principle 1.01.
Now if they achieve a lock down by abusing a monopoly position it is an entirely different matter.
In any case, I don’t think the Microsoft business model is to write software for Microsoft-only hardware. They would be more likely to adopt the ARM model and charge a license for the ISA. They might call it Edge too, given they are so creative at naming… not!
lapx432,
What do you think a patent is? A patent is a government granted monopoly. Despite the fact that it is endorsed by the government, it’s still a monopoly and causes similar harms to other kinds of monopoly abuse.
You joined osnews late last year, which probably means you weren’t around for any of the “IP” debates we’ve had in the past. Usually the consensus for the tech industry is that copyrights are good, but patents are detrimental.
There was so much innovation in the years prior to the rise of “Intellectual Property” laws.
Yeah, the 1780s were a hotbed of innovation. Then the first US Congress (1790) up and ruined it for everyone.
Odwalla,
Hmm, I don’t think you’re talking about computer & software technology like I was.
Anyways, as indicated earlier, copyrights are generally good for software because they protect expressions without making the ideas behind the expression exclusive to the person expressing them. For example, many people can implement a multimedia codec using their own code that they work on themselves.
Contrast this to patents, which grant a monopoly to one party who can use the patent to block competing services/products regardless of whether the competing products use the owner’s code or implement it independently.
The idea behind the patent system is to encourage more participation, which can work in a small market where no one is choosing to innovate in the first place. However, the bigger an industry gets and the more developers there are, the worse innovation fares under the patent system. Not only does administrative overhead grow exponentially, but it encourages business models based on litigation rather than innovation. This was not the intention behind patents, but it is what we’re seeing, unfortunately.
Not to mention that patents were conceptualized in a time where, if someone patented an automatic loom, it was probably possible to come up with a different (if possibly less efficient) design.
If someone patents a codec, any implementation capable of exchanging data with it will violate that patent.
I think the U.S. founding fathers would have erred on the side of the first amendment (“Congress shall make no law […] or abridging the freedom of speech […]”) in this overlap between the two domains that has emerged as a result of digital communication technology.
After all, U.S. copyright is supposed to enrich the public domain by promoting “science and the useful arts” (i.e. science and mapmaking), and patents are a mechanism intended to move trade secrets into the public record.
Alfman,
Appreciate the thoughtful reply. I agree IP is a double-edged sword and may be past due an overhaul, especially in software. My thought was simply that if someone (even someone as mistrusted as Microsoft) is prepared to invest in creating a 1-teraflop microprocessor, then I am prepared to give them a “regulated monopoly” for a reasonable period. This promotes investment and disclosure of inventions. “Reasonable” in tech should probably be 7 years max, as 25 is an eternity. So there is work to do, for sure. However, within 7 years I bet any tech patent can be improved upon or broken.
The other way works too – people inventing for the love of it – but it did not work as well. In those days it was ironically only the independently wealthy who could carry out research, i.e. the 1%. Even more ironically, the invention of IP has created another 1%, and that is the motivation for a lot of people. Check out “The Most Powerful Idea in the World”.
https://www.amazon.com/Most-Powerful-Idea-World-Invention/dp/0226726…
There is a picture of a steam engine on the front, but that is not the powerful idea; it is IP. But it is an old idea and is now part of a system that is taking us back to feudal times (at least in the US) in terms of wealth distribution and an impenetrable social strata.
lapx432,
The thing is, ideas aren’t naturally mutually exclusive: having billions of people thinking the same ideas isn’t a problem whatsoever. However, the moment we start to grant ownership of these ideas, it creates an artificial “land grab” with haves and have-nots. The patent system is supposed to reward rare ideas, but the vast majority of patents (at least in my field) are filed to deliberately grab up the land so that other people can’t benefit from the same ideas that they also came up with. This is why, in my opinion, the patent system is far more harmful than useful.
People often talk about reforming it, but I think it’s reasonable to ask if we need it for software at all. When you have so many people thinking up the same ideas naturally, then where’s the benefit in granting exclusive ownership to ideas in the first place?
Alfman,
The reforms you are talking about go beyond just patents, IMO. My personal theory is we need to move beyond money as the measure of value. People’s value needs to be what they have done for society or the planet, not what they have done for themselves. Open markets and competition have this effect already, but money is too base and creates too much of the wrong motivation. I bet the solution will involve software, but not bitcoin – something else that evaluates using multiple factors. Maybe all this social media with its “likes” is actually going to form the basis of a different societal feedback loop. This news feed is one of my favorite reads, and I don’t think anyone is getting paid to entertain me. How can this kind of mechanism drive more than just chat?
lapx432,
Wow, you just expanded the scope of this discussion a hundred-fold, haha. If you stick around, I’m sure we’ll have tons of interesting discussions covering all of these things.
Are you interested in doing more than just chat? If so, maybe we can find something to do. The trouble always seems to be persuading others to help.
Alfman,
Happy to explore. PM seems to be disabled here. Any ideas?
lapx432,
It’s not the first time somebody has asked; I wish osnews had a feature to exchange contact details. In the past I’ve asked staff to pass along my personal email, but they never seem to do it, and I don’t really want to publish it here.
Technically we could exchange information via a Diffie-Hellman key exchange, haha.
Maybe I’ll setup a little website that we can use.
Alfman – try this. I bet there is hardly anyone looking at this stage, and I can ditch the account any time.
[email protected]
Do you have an answer for why China doesn’t clearly lead in tech innovation? (They ignore IP laws / patents… but the alternative source of innovation didn’t really materialise, other than making inexpensive knock-offs.)
zima,
Interesting question, I’ve got a few theories for you:
Obviously the US has benefited from a much stronger domestic economy, which helped. This is changing though, and I wouldn’t take this lead for granted much longer. Chinese factories and developers are becoming both more sophisticated and more self-sufficient.
A second reason may, ironically, be that western products like MS Windows being “free” over there eliminated the scarcity of operating systems and productivity software in China, so developing them domestically wasn’t a big priority. It’s often said that Microsoft benefited enormously from copyright infringement in China, and this is the reason why. If copyrights had been strictly enforced, it would have created a scarcity, opening the door to many more Chinese competitors much earlier on.
Hm, though instead of developing alternatives domestically, the Chinese mostly still base them on imported ~western tech such as Linux/Android, Libre Office, MIPS…
That’s a load of bullshit.
locked down ARM chips
The device manufacturer can decide to leave it open. ARM chips can do both.
Today ARM can already cover almost all markets:
embedded, mobile, desktop and high-end server.
The interesting part here (to me) is the experimental CPU architecture. They are not simply inventing a competitor to x86 and ARM, but trying out an entirely new architecture model, called “Explicit Data Graph Execution”. The article doesn’t describe it terribly well, but this Wikipedia page does a little better:
https://en.wikipedia.org/wiki/Explicit_Data_Graph_Execution
TLDR version – they break the code into small “atomic” blocks that can execute in an on-demand manner on massively parallel cores.
This seems like an intriguing approach. But from my perspective, the real problem is writing the code in a way that takes advantage of it. In order to have any benefit, the code has to be designed with these “atomic” code blocks in mind, and the inputs/outputs/etc. for each block must be defined precisely.
Personally, I think it is a good direction for software development to move in, but I fear that changing the current mindset (and programming languages) is a far larger challenge than designing the hardware. They don’t give any information on the performance of the ported software, so we’re in the dark on whether their compilers can make something efficient from the unmodified source. Stay tuned, I guess…
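To make the block idea concrete, here is a toy, hedged model of dataflow firing inside a single EDGE-style block. The encoding, block size, and all the names here are invented for illustration – real TRIPS/E2 instruction formats differ – but the core idea is the same: instructions name their consumers instead of destination registers, and each fires the moment its operands arrive.

```c
/* Toy model of one EDGE "atomic" block computing (a+b)*(a-b).
 * Instructions forward results directly to consumer operand slots
 * instead of writing named registers; an instruction fires as soon
 * as all of its operands have arrived. */
#include <stdio.h>

#define NONE (-1)   /* no consumer */
#define OUT  (-2)   /* block output */

typedef enum { ADD, SUB, MUL } Op;

typedef struct {
    Op  op;
    int operand[2];   /* operand slots, filled by producers */
    int arrived;      /* operands received so far */
    int target[2];    /* consumer instruction index, NONE or OUT */
    int slot[2];      /* which operand slot of each consumer */
} Instr;

static Instr block[] = {
    { ADD, {0, 0}, 0, {2, NONE},   {0, 0} },  /* 0: a+b -> instr 2, slot 0 */
    { SUB, {0, 0}, 0, {2, NONE},   {1, 0} },  /* 1: a-b -> instr 2, slot 1 */
    { MUL, {0, 0}, 0, {OUT, NONE}, {0, 0} },  /* 2: product -> block output */
};

static int result;

/* Deliver a value to an operand slot; fire the instruction when full. */
static void deliver(int i, int slot, int value)
{
    if (i == OUT) { result = value; return; }
    Instr *ins = &block[i];
    ins->operand[slot] = value;
    if (++ins->arrived < 2)
        return;                       /* still waiting for an operand */
    int v = 0;
    switch (ins->op) {
    case ADD: v = ins->operand[0] + ins->operand[1]; break;
    case SUB: v = ins->operand[0] - ins->operand[1]; break;
    case MUL: v = ins->operand[0] * ins->operand[1]; break;
    }
    for (int t = 0; t < 2; t++)       /* forward to all consumers */
        if (ins->target[t] != NONE)
            deliver(ins->target[t], ins->slot[t], v);
}

int main(void)
{
    int a = 7, b = 3;
    /* Block inputs come from the register file; everything after this
     * is direct producer-to-consumer communication inside the block. */
    deliver(0, 0, a); deliver(0, 1, b);
    deliver(1, 0, a); deliver(1, 1, b);
    printf("(a+b)*(a-b) = %d\n", result);   /* prints 40 */
    return 0;
}
```

The compiler’s job in an EDGE machine is essentially to emit those producer-to-consumer wirings per block; the hardware then needs no register renaming or dependency tracking inside a block.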
P.S. As much as I hate Microsoft, I do like a lot of what their Research department does. Unfortunately (or otherwise for my schadenfreude), very little of the most interesting stuff makes it into products. But whatever information they release helps advance the state of the art.
It’s conceptually similar to the Mill (https://millcomputing.com/), but how similar I don’t know. The Mill isn’t going for massive parallelism though, just a very wide execution engine. It’s not exactly the same idea, but they are closely related.
There are a bunch of really interesting videos on their site too, if you’re interested in that kind of stuff. It sounds to me like Microsoft is a bit further along than Mill, but that isn’t surprising given the difference in funding…
Pro-Competition,
Yes, I agree. Software engineers can be extremely stubborn and often reject modernization. It’s not always our fault though; many of our employers are vested in legacy platforms/technology and don’t have much interest in rewriting their core applications, for better or for worse.
Another awesome feature would be CPU transactions with atomic commit semantics. This could be very powerful, but so long as we keep the emphasis on porting existing software, new features would have rather limited appeal.
That would be a powerful feature. They don’t mention it, but it should be possible, since they are using private scratchpad registers until the block completes. Interesting…
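For comparison, here is a hedged sketch of what explicit hardware transactions look like on today’s x86, using Intel’s TSX/RTM intrinsics. It needs a TSX-capable CPU and compilation with -mrtm, and it is simplified: a production version would also have to read the fallback lock inside the transaction so the two paths stay mutually exclusive.

```c
/* Hedged sketch: hardware transactional memory via Intel TSX/RTM.
 * Transactions can always abort (conflicts, capacity, interrupts),
 * so a conventional lock-based fallback path is mandatory. */
#include <immintrin.h>
#include <pthread.h>

static pthread_mutex_t fallback = PTHREAD_MUTEX_INITIALIZER;
static long counter_a, counter_b;

void atomic_transfer(long amount)
{
    unsigned status = _xbegin();
    if (status == _XBEGIN_STARTED) {
        /* Both stores become visible atomically at _xend(),
         * or not at all if the transaction aborts. */
        counter_a -= amount;
        counter_b += amount;
        _xend();
    } else {
        /* Aborted: retry under a conventional lock. */
        pthread_mutex_lock(&fallback);
        counter_a -= amount;
        counter_b += amount;
        pthread_mutex_unlock(&fallback);
    }
}
```

An EDGE block that buffers all its writes until commit could, in principle, expose the same all-or-nothing semantics without the explicit begin/end markers.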
Remember the Symbolics/Genera Lisp machines? Their CPU was able to run Lisp natively, which meant atomic operations and natural scalability.
Kochise,
Did they run lisp source code or a compiled version of it? Personally I don’t see a need to run source code directly on a CPU. Source code is for humans; even if it is a 1:1 representation, there are likely better machine representations for execution.
Anyways, I’m a little familiar with lisp as a purely functional language, although I’m not sure what you mean by atomic operations in lisp. I searched online and didn’t find much in the way of answers. Does it support transactions?
Another language I think maps down well to the hardware domain is Forth. They took the notion of RPN (reverse Polish notation) and built a whole programming language with it.
https://forth.hcc.nl/html/arrayForth_Cursus_v2.1NL/arrayForth_Cursus…
They ran a compiled version of LISP that was essentially a LISP equivalent of JVM, CLR, Lua, or Python bytecode. If you’re interested, Wikipedia actually has a good article on the history and design of lisp machines.
That said, there actually have been chips designed that run ‘source code’ natively. At least a few companies have made small MCU designs that use brainf*ck as their native machine code, and there have been a couple of similar projects I’ve heard of over the years for other densely coded esolangs.
Here are some Lisp Machine related links
https://www.youtube.com/watch?v=o4-YnLpLgtk
https://www.ifis.uni-luebeck.de/~moeller/symbolics-info/index.html
http://www.symbolics-dks.com/Genera-why-1.htm
And Interlisp-D at Xerox PARC, which was a precursor to Lisp Machines and CLOS.
http://www.softwarepreservation.org/projects/LISP/interlisp_family/
https://www.ics.uci.edu/~andre/ics228s2006/teitelmanmasinter.pdf
That’s another good point. Stack-based languages don’t get much respect, but they might be a better fit in these small-block, massively parallel situations. Food for thought!
Only if you don’t care about performance. You need extra hardware to work around the single dependency chain (to convert everything back to registers) just to match the performance of a register-based ISA.
viton,
I think that’s mostly because stack-based languages incur overhead on CPUs that aren’t designed to handle them. They have to copy values to and from registers in order to do computations on them, which is slow. However, this wouldn’t be a problem with a CPU designed for stack-based languages in the first place.
Most high-level languages are already stack based, just at the function level rather than the instruction level. Obviously we use the stack for everything except the very last stack frame, where values get loaded into registers instead. Compilers are able to remap stack-based logic onto the registers implemented by hardware, but this results in overhead of its own, especially for small functions. Consequently, compilers resort to function inlining to improve performance on register-based CPUs, which wouldn’t be necessary on stack-based CPUs.
x86 is known to be extremely fast compared to competing architectures, but I feel a lot of that stems from disproportionate R&D advantages rather than an intrinsically superior architecture. For better or worse, this helps keep x86 ahead. However, if we want to judge architectures on pure merit, somehow we’d have to account for this.
That depends. ThunderX2 is one step away from high-frequency x86 and faster in memory bandwidth. At lower frequencies (2GHz), Apple cores are even faster.
viton,
I’ve yet to see benchmarks where x86 isn’t the front-runner in terms of speed. But in any case, I agree that x86 isn’t intrinsically faster; it just benefited from a huge amount of R&D funding.
In case you missed it: you might be interested in the stack-based Forth CPUs I linked in a nearby reply to thulfram:
http://www.osnews.com/permalink?658937
Yes, Forth would be a great candidate for a built-in language for a CPU. I believe Forth, Inc. was trying to do this, but they were so busy defending their trademark that they never got very far.
Forth is a Threaded Interpreted Language (TIL), but there were many other TILs in the 70s. A TIL is stack-based: you throw something on a stack, throw something else on the stack, and combine them into a word. Your program consists of a set of words.
In the 70s I wrote a 200-byte TIL in assembler that worked quite well on a 6502. In fact, many Basic interpreters (Sinclair Z80 Basic) were actually written in a Forth-like TIL. Threaded Interpreted Languages aren’t easy to work with, but they make good bootstraps to something else – the inner interpreter is tiny, as the sketch below shows.
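As a hedged illustration of how little machinery that inner interpreter needs – in C rather than 6502 assembler, with names and layout of my own invention (real Forths typically use indirect threading and a dictionary):

```c
/* Minimal direct-threaded inner interpreter. Each "word" is a C
 * function; a program is an array of word pointers walked by an
 * instruction pointer. (The function<->object pointer casts are
 * non-ISO but standard practice in threaded interpreters.) */
#include <stdio.h>
#include <stdint.h>

typedef void (*prim_t)(void);

static intptr_t stack[32];
static int sp;                        /* data stack pointer */
static void *const *ip;              /* threaded-code instruction pointer */

static void push(intptr_t v) { stack[sp++] = v; }
static intptr_t pop(void)    { return stack[--sp]; }

/* Primitive words */
static void lit(void) { push((intptr_t)*ip++); } /* next cell is a literal */
static void add(void) { intptr_t b = pop(); push(pop() + b); }
static void mul(void) { intptr_t b = pop(); push(pop() * b); }
static void dot(void) { printf("%ld\n", (long)pop()); }
static void bye(void) { ip = 0; }

int main(void)
{
    /* Threaded code for the RPN program: 2 3 + 4 * .  (prints 20) */
    static void *const program[] = {
        (void *)lit, (void *)2, (void *)lit, (void *)3, (void *)add,
        (void *)lit, (void *)4, (void *)mul, (void *)dot, (void *)bye,
    };
    ip = program;
    while (ip) ((prim_t)*ip++)();     /* the entire inner interpreter */
    return 0;
}
```

The `while` loop at the bottom is the whole engine; everything else is just words, which is why a 6502 version could fit in a couple hundred bytes.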
The last Forth-like code I saw in the wild was a set of device drivers written for the One Laptop Per Child project. I think modern phones and computers don’t need something like this, but maybe something small like a watch or cufflink does. 200 bytes for a complete OS and interpreter isn’t bad, though.
OLPC “bios”, Open Firmware, is written in Forth.
https://en.wikipedia.org/wiki/Open_Firmware
http://wiki.laptop.org/go/Open_Firmware
zima,
You’ve got to use a pretty loose definition of the word “interactively” in order to make that statement true.
Oh, it’s just that in our Universe, all interactivity must obey the speed limit of light.
Considering that Linux is highly tied to GCC, they would have had to port it too… but it’s not mentioned at all…
Not on Android.
Google has done the work to compile it under Clang and ejected yet another piece of GPL software (GCC) from their source tree.
“Building the kernel with Clang”
https://lwn.net/Articles/734071/
“NDK Revision History”
https://developer.android.com/ndk/downloads/revision_history
Reading that, it’s still very experimental – at least it was when that was posted back in Sept 2017. Seems they’re working towards support for LLVM/Clang…
That said, the main impediment is that the Linux kernel uses a lot of GCC-internal functionality for things like branch prediction hints, where coders can tell the compiler which branch is more likely to be taken (likely() – https://kernelnewbies.org/FAQ/LikelyUnlikely) and other things of that nature. Maybe LLVM/Clang supports the same names; I don’t know. But it’s one reason Linux and GCC have maintained a close relationship over the years.
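For the curious, those kernel hints boil down to a GCC builtin that Clang implements as well – which is part of why a Clang-built kernel is feasible. A simplified version of the macros from include/linux/compiler.h, plus an invented usage example:

```c
/* Simplified from the Linux kernel's include/linux/compiler.h.
 * __builtin_expect tells the compiler which way a branch usually
 * goes, so it can lay out the hot path as the fall-through case. */
#define likely(x)   __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)

/* Invented example: mark the error path as cold. */
int process(const char *buf)
{
    if (unlikely(buf == 0))
        return -1;          /* rarely taken error path */
    return 0;               /* hot path, laid out as fall-through */
}
```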
It clearly says in the article that they were working on it in secret. Do you know what “secret” means?..
Where corporations are concerned, not a lot usually!
Well, apparently this time it did mean something. If even _he_ had no idea… Can you imagine?
Sounds interesting but small alarm bells start ringing when people talk about the compiler flagging the data flows.
This is where the Itanium failed. Intel could never make a smart enough compiler to take full advantage of the processor’s long-word design. E2 seems to take this concept to the extreme.
It will be interesting to see if they can/have solved the smart compiler problem with their approach.
ChrisOz,
First let me say that I agree with you about the lack of software and compiler support for Itanium. Additionally, what Intel has done to extract parallelism from single-threaded code on x86 is really amazing.
However, that doesn’t mean there isn’t merit in shifting parallel optimization out of hardware and into advanced software compilers. A significant proportion of transistors is required to make this magic happen, and while transistors are cheap, they are not free: they consume power and produce heat. Also, with Spectre, we’ve learned that these superscalar designs can leak critical state information and pose a threat to security.
So while software optimization in the 2000s wasn’t up to the task, software optimization today may be. AI has come a long way. I do think it’s good to revisit this topic periodically to see if our assumptions still hold. x86 dominance has been mostly due to software compatibility, but the rise of platforms other than Windows in mobile and the data center could open up opportunities for new architectures.
More than just compiler optimization, I think we need to design the software in a new way; i.e. it will be a lot more efficient to write code in the desired “atomic” blocks than for the compiler to somehow infer the blocks from linear code. Maybe functional programming techniques can help here.
I don’t have the answers, but the questions are fascinating!
Pro-Competition,
Yes it is fascinating. While fast FPGAs are kind of expensive, the technology is more accessible than ever. Maybe some day I’ll give it a shot myself!
I keep thinking about it myself (but not doing anything about it). OT to this article, but I would love to try out RISC-V.