As computer chips continue to shrink in fabrication size, manufacturers such as AMD and Intel are researching new ways to overcome physical barriers. Die size, performance, operating frequency and heat are all major obstacles in the semiconductor industry. Sun Microsystems announced that, in partnership with Luxtera, Kotura and Stanford University, it is working on an ambitious project to move data transmission from electrical signals over copper wires to pulses of light using lasers.
Who else but Sun could produce something valuable from photons?
I would expect such risky and expensive research campaigns from Intel rather than from Sun. OK, Sun has also built its own chips for the server market, but was not really successful, as you can see from the upcoming Xeon series from Intel.
At least they have innovative ideas like Rock and Niagara instead of beating the old x86 donkey for years.
Would there be any technical advantage in dumping the x86 legacy instead of maintaining ongoing compatibility? If so, could you provide some more explanation and/or a link to an informative web page?
Thanks heaps,
Detlef
Why would you have to dump the compatibility? Sun didn’t when it introduced the new chips – the apps which worked on UltraSPARC work just fine on Niagara (and will on Rock).
Dmitri
Which is sort of his point: Sun did not have to drop SPARC, so why would Intel have to drop x86?
Sometimes it is good to try something different; even Intel has tried several times to dump the x86 – i432, i860, Itanium, … – with mixed results.
The SPARC is an interesting RISC design, with an original register windowing system.
Well, x86 may be fine for most applications, but virtualisation has always been one of x86’s trouble spots. Although the introduction of hardware support helps a lot, it is never going to be perfect, since the architecture was not designed with the Popek and Goldberg virtualisation requirements in mind.
http://en.wikipedia.org/wiki/Popek_and_Goldberg_virtualization_requ…
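To make that concrete (a quick sketch of my own, not something from the article or the wiki page): the classic counter-example is an instruction like SMSW, which reads privileged CR0 state yet does not trap in user mode, so a plain trap-and-emulate VMM never gets a chance to intercept it. Assuming an x86/x86-64 machine without CR4.UMIP enabled, something like this runs fine at ring 3:

/* Rough illustration: SMSW exposes privileged machine state (the low
 * word of CR0) but executes in user mode without faulting, so a
 * trap-and-emulate hypervisor never sees it.  Build with gcc on x86. */
#include <stdio.h>

int main(void)
{
    unsigned long msw = 0;

    /* Sensitive but unprivileged: no #GP, no trap, nothing to catch. */
    __asm__ volatile ("smsw %0" : "=r" (msw));

    printf("CR0 bits visible from user mode: 0x%lx\n", msw);
    printf("PE=%lu MP=%lu EM=%lu TS=%lu\n",
           msw & 1, (msw >> 1) & 1, (msw >> 2) & 1, (msw >> 3) & 1);
    return 0;
}

That kind of instruction is exactly what the hardware virtualisation extensions had to work around.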
Another fact is that the x86 is now a RISC core emulating a CISC chip. Why not just expose the RISC design directly, instead of wasting CPU cycles on translating instructions?
Moreover, the sheer size of the x86 mindshare is distorting the market. A few years ago, I heard of peripheral devices using MIPS as the intermediate processor between the raw device/data and the system; the reasons cited for that decision were cost and ease of development. Personally, having tried a bit of both MIPS and x86 assembly, I find the MIPS architecture easier to work with. If more resources had been diverted away from x86 to MIPS/SPARC/POWER, Apple would not have had to move away from PPC.
Last but not least, there is the trouble of legacy. Although backwards compatibility is touted as a feature, it can be a horrible curse. Much old code contains bugs that are next to impossible to eradicate. It also helps keep binary blobs alive – one reason the Windows 64-bit editions sell so poorly is their reliance on 32-bit binary blobs, which means far fewer drivers. And while Linux has had a much easier transition to 64-bit, the nVidia drivers posed exactly that problem. In fact, the lack of nVidia drivers for PPC hindered Mac-to-Linux migrations in the past.
ps: I’ve been editing but simply cannot make links that show only the title of the document instead of the full URL
Although backwards compatibility is touted as a feature, it can be a horrible curse.
Good point.
I haven’t checked, but I bet current Pentiums still implement the SAHF and LAHF instructions. I believe these were included in the Intel 8088 to make it easier to port existing 8080 and Z80 software 30 years ago. If they haven’t already been dropped, they should be; any OS that’s inclined to support them could implement an exception handler.
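For the curious, a toy snippet of my own (nothing official): LAHF simply copies the low flag byte (SF, ZF, AF, PF, CF) into AH, and SAHF writes it back. Note that early x86-64 CPUs actually dropped the pair, so either build 32-bit or check the CPUID LAHF/SAHF bit first:

/* Toy example: LAHF copies the SF/ZF/AF/PF/CF byte of EFLAGS into AH.
 * Assumes a CPU that still supports it in this mode; "gcc -m32" is the
 * safe bet. */
#include <stdio.h>

int main(void)
{
    unsigned int ax = 0;

    __asm__ volatile (
        "cmpl $1, %[zero]\n\t"   /* 0 - 1 borrows, so CF gets set */
        "lahf"                   /* flag byte lands in AH         */
        : "=a" (ax)
        : [zero] "r" (0)
        : "cc");

    printf("flag byte via LAHF: 0x%02x (CF=%u)\n",
           (ax >> 8) & 0xff, (ax >> 8) & 1);
    return 0;
}

So it is a tiny convenience instruction, and trapping an undefined-opcode exception and emulating exactly this would be trivial for an OS that cared.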
As far as I know, x86-based processors were beaten by MIPS processors (usually built into fine SGI machines) in iteration speed, which is important for scientific applications. In most cases when the x86 industry came up with something “new”, “fast” or “revolutionary”, I could only laugh: “Hmmm, well, we’ve had this in the non-x86 world for years already.” So I welcome everything that beats x86 in any regard.
The true application is for shark tank exhibits: these chips will be implanted into the sharks’ foreheads. That was my suggestion for the next true killer application.
Originally, the US government limited HTTPS secure web access (at least in exported software) to 40-bit encryption so it could eavesdrop by brute-force attack. The current 128-bit keys are supposed to be strong enough to ward off brute-force attacks.
But if computers become readily available that are thousands of times faster than current computers, we may need longer keys to keep access secure.
But if computers become readily available that are thousands of times faster than current computers, we may need longer keys to keep access secure.
It has always been like that: when technology advances, so must the security methods used to protect it. Sooner or later we’ll start using several thousand bits just for the encryption keys, and even that will probably be too little when quantum computing (or any similarly fast and highly parallel technology) becomes the norm. I remember having read somewhere that some people are even trying to create a protection scheme that would hold up at least somewhat better against such hugely parallel ways of breaking encryption, but I just can’t remember where. :/
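To put rough numbers on it (my own back-of-envelope, not from anything above): every doubling of attacker speed costs about one bit of effective key strength, so a machine “thousands of times faster” only shaves around ten bits off a 128-bit key, whereas a Grover-style quantum search is usually modelled as halving the effective bits – which is the scarier scenario:

/* Back-of-envelope sketch (assumptions are mine): a classical speedup of N
 * removes about log2(N) bits of effective strength; an idealised Grover
 * search halves the effective bits of a symmetric key.  Compile with -lm. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double key_bits = 128.0;
    const double speedup  = 1000.0;          /* "thousands of times faster" */

    printf("128-bit key vs a %gx faster classical attacker: ~%.1f effective bits\n",
           speedup, key_bits - log2(speedup));
    printf("128-bit key vs an ideal Grover attacker:        ~%.0f effective bits\n",
           key_bits / 2.0);
    return 0;
}

In other words, faster classical hardware alone is not what forces longer keys; the parallel and quantum attacks are.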