And yet another item on the iPad? Are we serious? Yes, we are, since this one is about something that even geeks who aren’t interested in the iPad itself should find intriguing. Steve Jobs said yesterday that the iPad is powered by an Apple A4 processor, but contrary to what many seem to think – it wasn’t designed in-house at all.
Bright Side of News has unveiled what the Apple A4 really is, and it has been able to do so thanks to a little chat it had with Warren East, CEO of ARM, during the GlobalFoundries event in Las Vegas. Warren East talked about a new member of the ARM family – this new member was Apple, Bright Side of News can now confirm.
So, what, exactly, is the Apple A4? Technically, it isn’t a CPU. It might be semantics (but hey, where would the internet be without semantics?), but the A4 is actually a system-on-chip, a piece of silicon that contains not only the CPU, but also the graphics core, memory controller, and so on.
The Apple A4 consists of an ARM Cortex-A9 MPCore, the same processor that powers the NVIDIA Tegra and Qualcomm Snapdragon. The graphics unit is an ARM Mali 50-Series. The key thing to note here is that this is all mostly ARM IP; Apple and P.A. Semi have little to do with it. Since Apple doesn’t have its own chip factory, this thing is produced by Samsung.
Because of Apple’s apparent disinterest in divulging specifications, we have to rely on pieces of information all over the web. What is pretty clear, though, is that the Apple A4 is pretty much a relatively standard ARM SoC, similar to what’s powering the Zune HD. It doesn’t give the iPad any specific industry advantage, as there are numerous similar chips that deliver the same kind of performance (Tegra, Snapdragon).
That it was ARM was known, wasn’t it? (At least to me… how else would iPhone apps work?)
So to me it seems as if it’s just as much an Apple SoC as the Snapdragon is Qualcomm’s, etc.
ARM offers basic designs that you can customize .. Samsung, Qualcomm and Apple are pretty equal in that regard.
Very much so. Although since Apple’s website describes it as a “1GHz Apple A4 custom-designed, high-performance, low-power system-on-a-chip”, I can see how people might be under the impression it was done completely in-house.
I’m guessing that the “custom designed” part refers to the integration work done by Apple: the “system on a chip” part.
Edited 2010-01-28 21:54 UTC
Designing a new architecture is incredibly difficult, so this shouldn’t come as a surprise. New chip architectures are going to be increasingly rare due to the insanely high startup costs.
http://www.theonion.com/content/news_briefs/frantic_steve_jobs_stay…
I don’t mean to get too nit-picky here, but assuming the A4 has a Cortex A9 MP as its processor, it isn’t the same as Tegra or Snapdragon.
Tegra has an ARM11 CPU, and Snapdragon has a Cortex A8.
Besides the core count (2 for A9 MP, 1 for the others), the fundamental microarchitectures are significantly different from one another. ARM11 and A8 are both in-order pipelines, while A9 is out-of-order.
The performance difference, even of a single-core A9 at the same frequency as the others, is consequently quite substantial.
That said, you can quite closely compare the reported A4 specs with Nvidia’s Tegra *2* design. That does sport an A9 MP, but with a GPU of Nvidia’s own design (of course). But overall these two are quite similar.
the tegra 600 series had the ARM11 cpu. Tegra 2 has a dual-core ARM Cortex – A9 MPCore
You’re just repeating what he said.
Yeah, but he moved his arm like this _/ __
FYI: Snapdragon isn’t actually Cortex-A8. It’s built on Qualcomm’s own Scorpion core.
I wonder if Microsoft has Windows 7 running on ARM deep inside its labs?
It probably has.
Windows NT was originally written on a custom architecture as to ensure no platform-specific code would seep into the codebase.
I’m sure Microsoft’s major products (Office, Windows) have at least some variants running on ARM.
Yeah, it will probably stay in those labs until Intel runs out of money from lawsuits.
It would not surprise me at all if Intel was paying Microsoft somehow to keep Windows x86-only, like it was paying HP (well, offering discounts actually, but isn’t that the same thing?) to stop offering AMD-based workstations.
Why is AT&T allowed to pay Apple to keep iPhone only on their network but Intel is not allowed to offer OEMs money to keep only their processors?
Either way… Microsoft is pretty incompetent as far as coding for security (Internet Explorer) or even coding an operating system goes (isn’t task management the main function of an operating system? Yet you have to “end process” 30 times before it actually stops, and that’s if you can even get the task manager open). But I don’t think they’re so stupid as to not have Windows 7 running on ARM yet.
I’m sure their Windows 7 Mobile is running on ARM.
Because Intel is a monopoly. If AT&T was a monopoly, then they would likely not be allowed to do that.
Their mobile OS has been available on SuperH, ARM, and MIPS since it was first released ten years ago, as it’s based on WinCE, which runs on the same three architectures plus x86. With more ARM netbooks coming out, I can see Microsoft wanting to make Win7 available for those platforms, but only if it will sell a lot of copies. As it is, they’re not exactly hurting for market share or money.
-Gary
Edited 2010-01-29 17:41 UTC
Quite right. I believe that NT was first written on the Intel i860, with which almost nobody ever made a general purpose computer, but which was used in UNIX workstation graphics boards (SGI, Sun, etc.) in the early 90s, plus for the NeXTdimension board for NeXT cubes.
We know they have the newest kernel running on Itanium (via 2008 R2). So a big fraction of Windows must still be portable.
Microsoft probably does have versions of Windows running on ARM. This will not help them.
There is a large corpus of x86 binary executable Windows software out there in people’s possession. This consists not only of things like drivers for their printers, cameras, phones and other miscellaneous pieces, but also of unused licenses for desktop applications such as Office (for example, having so far installed only one copy of Office from a three-license pack).
In addition, there is all kinds of specialist software, distributed as x86 binary executable only, from all kinds of sources other than Microsoft, which people expect to be able to use. An example might be a Windows utility, designed to run on a laptop, to set parameters on a high-end audio mixer console.
Finally, much of Microsoft’s historical lock-in attempts revolve around tie-in to the x86 platform. A good example of this is ActiveX:
http://blogs.msdn.com/iemobile/archive/2007/06/20/ie-mobile-support…
When they opt to purchase any Windows machine, people expect to be able to use all this software.
People won’t be able to use any of this type of software if they purchase a new machine which runs Windows on ARM. So people will have to switch to a new set of applications if they are going to buy an ARM-based device.
If they are switching to a new set of applications anyway, this represents an ideal time for people to just ditch Windows and finally be rid of all the problems and encumbrances it brings.
Edited 2010-01-29 00:13 UTC
This is true for other OSes as well. For example, most commercial software available for Linux is x86 only.
So unless you have very general use cases, like most home users do, you are not going to migrate to other processor architectures.
It is true to some extent for Linux. Of all current desktop OSes, however, only on Linux can you get a comprehensive set of applications for any architecture. There are upwards of 25,000 packages in Linux open source repositories.
One of the few applications that is commercial ONLY for Linux would be Varicad:
http://www.varicad.com/en/home/
Although there are other Linux CAD packages, Varicad is AFAIK the only one which can process .dwg files.
Having noted that, however, it should also be pointed out that high-end commercial-only applications such as these aren’t really in the picture when it comes to tablets and netbooks. One just isn’t realistically going to be running Autocad or Varicad under Windows 7 on ARM, either.
Everything else … every lightweight application class that one might actually WANT to run on a low-power ARM machine … already exists as open source and is already ported to ARM for Linux.
Edited 2010-01-29 09:08 UTC
Do recall that, some years ago, most major Linux distributions were available in x86 and PPC versions – Fedora, Ubuntu and Debian were, and I think Fedora and Debian still are. The simple fact is, Linux (and open-source OSes and software in general) is much more amenable to being shifted from one platform to another than closed-source ones are – and this can be shown to be the case by simple example.
But unlike on Windows systems, on Linux closed source apps are a rarity rather than the norm. I have a number of Linux systems which don’t have any closed source applications on them at all.
While NT’s HAL allows it to be more portable it’s not quite true that NO platform specific code has seeped into the codebase. Some calls bypass the HAL and directly access hardware.
“Windows NT was originally written on a custom architecture as to ensure no platform-specific code would seep into the codebase. ”
Huh? No, not even close… NT was originally developed for a very specific processor: the Intel 80860. The fact that the processor flopped in all sorts of ways as a possible x86 replacement in the late 80s is what forced Microsoft to move NT to a different platform: MIPS. Which also flopped on the desktop.
NT became relatively platform “agnostic” (even though it was not the case since there were some serious portability issues across different versions of NT for different ISAs) as a side effect, not because it was part of the original design.
At the time of its introduction in the early 90s, NT had to be buzzword compliant. And at that time the prevailing wisdom was RISC=good, CISC=baaad. Do not confuse marketing with design goals…
They would be idiots if they didn’t…
Well, Windows Mobile versions run on non-x86, so I’d assume the regular desktop versions at least have the capability, if not the driver support…
There was an article a while back where some MS exec basically admitted that Vista and then Windows 7 were all about cleaning up the mess they made with Windows XP – and that they would be moving on to porting the now clean and stable Windows 7 core: first to servers (finishing the removal of GUI items from core DLLs), and then on to replacing/augmenting/integrating the Windows Mobile kernel.
So I actually doubt that they have a full working version of Windows 7 on ARM, just yet, but they seem to have said that that’s the plan.
First thing they *need* to do is get back to using a properly designed Ring 0, Ring 1, Ring 2 and Ring 3 setup. Rather than putting drivers and applications into Ring 0 (for performance and “no security”), keep them in Rings 2 and 3 where they should be. ActiveX should have *never* happened the way it did.
Indeed.
Microsoft may have a version of the NT kernel with a basic sub-set of the executive running on ARM, but I highly, highly doubt that they’ve got the whole of the executive and/or a significant chunk of the GUI environment ported over and running – even an incomplete test/scratch version.
It doesn’t matter if they have, or they have not.
Its the applications that matter.
There are only a very, very few Windows applications that are available as ARM executables.
Ergo, Windows on ARM is a complete fizzer.
The only thing we know is that the CPU is a Cortex-A9 MP and the GPU is an ARM Mali 50. However, there may be more co-processors inside the SoC… things like a DSP?
And even if everything is external IP, integrating it all together is not trivial (unless they followed a reference design from ARM). Everyone else calls the SoCs their own (Tegra, OMAP, Snapdragon).
Personally I never expected them to develop something from the ground up. It is generally Apple’s “thing” to print its label on some integrated circuit, or to make/demand alterations to the reference design, as was the case with the Intel CPU in the MacBook Air.
Edited 2010-01-28 21:00 UTC
ARM was once part owned by Apple..
The A4 is based on an ARM reference design. The whole article is sensationalism dressed up as news. We truly know nothing. But the design will be custom, so Apple have every right to call it their own processor.
It’s really marketing. How much does it really freaking matter? Unless you are writing low-level code for it, none. If you really care whether it should be called Apple’s CPU, you have problems that go beyond technology.
otherwise I could care less, I’d rather have a netbook than an iPad
I’m looking forward to one of the dozens of Android based tablets (some with Nvidia Tegra2 and Pixel Qi tech) coming out soon to compete with the iPad. They all look better on paper than the iPad to me.
(Apple could win me back with some kind of unlock, but I really doubt they’ll play that card, until they lose the market.)
“Couldn’t”, you tool. If you’re gonna say something at least don’t say the opposite of what you mean to say!
Edited 2010-01-29 16:20 UTC
sounds like we have an Apple fanboy here, folks.
No, actually he could care less, but he chooses not to
Edited 2010-01-30 02:40 UTC
The article is misleading.
You are stating that Apple had nothing to do with the design of A4. In fact, Apple/PA Semi designed the architecture of the chip based on the licensed ARM cores.
This is no different than, say, the Snapdragon, which is also an ARM-core-based architecture.
To argue that people who architect chips based on licensed technology have nothing to do with the chips is just silly.
]{
Microsoft is a big company; I would be very surprised if they did not have multiple architectures supported deep within the confines of their R&D labs. The same goes for Apple. The bottom line is that if and when x86 becomes less viable (now?), both companies will be ready to jump ship from x86.
ARM is much more efficient than x86 in general, and as ARM power increases, x86 is sure to suffer a slow and painful death in the low-end computing market.
I would also argue that gamers and folks running high end workstations will likely stick with x86 for quite a while.
The problem for Microsoft and Apple is not that they would be unable to port their software to another architecture, but rather that their paradigm for distribution of software (not only their software, but also software from other vendors intended for their platforms) is in binary executable form only. This leaves them with a large corpus of existing software that won’t run on any new architecture.
I’m not so sure that the decline of x86 in the low end market will be as slow as you imagine. Already more ARM CPUs are sold than x86 CPUs.
http://en.wikipedia.org/wiki/ARM_architecture
Even at the high end … larger machines these days are often merely arrays of tightly-interconnected smaller processors. Google are a good example of this. A large array of x86 machines draws a lot of power, whereas an even larger array (numerically) of ARM devices might be able to achieve the same performance (for applications such as Google) at lower cost and much lower power consumption.
Edited 2010-01-29 00:33 UTC
Speaking again of the high end and Intel versus ARM, here is a high end CPU family from Intel:
http://www.phoronix.com/scan.php?page=article&item=intel_core_i7&nu…
http://everythingapple.blogspot.com/2009/11/imac-core-i7-power-util…
Now that is a considerable amount of grunt for a microprocessor CPU, but it eats 95 Watts!
OK, now consider a modest 32-bit ARM Cortex-A9 CPU @ 2 GHz. Apparently it uses under 2 watts. One could run over 40 ARM Cortex-A9 CPUs with 95 watts. That comes to 80 cores. That is ten times as many cores … admittedly only 32-bit cores, and clocked slower, but still.
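The arithmetic above is easy to check. A minimal sketch, using the wattage figures quoted in this thread (95 W for the Core i7, under 2 W for a dual-core Cortex-A9 at 2 GHz) rather than measured values:

```python
# Back-of-the-envelope check of the power comparison above.
# The wattage figures are the ones quoted in the thread, not measurements.
core_i7_tdp_w = 95       # quoted TDP of the Core i7
cortex_a9_w = 2.0        # quoted upper bound for a dual-core Cortex-A9 @ 2 GHz
cores_per_a9 = 2

a9_chips = int(core_i7_tdp_w // cortex_a9_w)  # A9 chips fitting in the same power budget
a9_cores = a9_chips * cores_per_a9

print(f"{a9_chips} Cortex-A9 chips, {a9_cores} cores, in a {core_i7_tdp_w} W budget")
```

That lands comfortably above the “over 40 CPUs / 80 cores” figure in the comment, since 2 W is itself an upper bound.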
For some applications, those amenable to load sharing, perhaps this discussion might indicate where the future is …
Edited 2010-01-29 09:37 UTC
Didn’t Apple handle this with universal binaries and Rosetta for the architecture shift from PPC to x86?
By “universal” binaries, they actually only mean PPC and x86 binaries, not ARM. They aren’t really “universal” at all.
They also mean only binaries from Apple, as I’m pretty sure that only Apple bothered with it. Possibly even less “universal” than at first glance.
Huh?
Apple has shipped Mac OS X computers using various PPC and Intel chips. They have not yet shipped one running on ARM. Why would their universal binaries contain ARM executables?
It’s not just Apple that ships universal binaries. Virtually every Mac OS X application shipped in the past few years has been a universal binary – even before the Intel transition, a binary would typically contain multiple executables, each optimised for a different type of PPC processor. As a developer, Xcode essentially handles all this for you, making it harder to not build universal binaries.
Should Apple decide to ship ARM-based Macs creating native apps would mostly just be a case of rebuilding them (so you’d end up with an ARM executable in addition to the existing ones). The toolchain already supports this. Of course should this happen Apple would most likely also include a Rosetta layer to let those ARM-based Macs run existing Intel/PPC Mac binaries as they did with the PPC to Intel transition.
That is exactly the point … Apple’s “universal” binaries support, as you say, only “Mac OS X computers using various PPC and Intel chips”. Therefore, Apple’s so-called “universal” binaries won’t actually support ARM.
The Apple iPad uses an ARM CPU.
So … no OSX applications for the iPad. The iPad is just a big smartphone that isn’t even a phone.
Huh?
Who said anything about “ARM-based Macs”? This thread is about the iPad.
Edited 2010-01-30 14:08 UTC
Just because a binary doesn’t include a specific arch doesn’t mean it’s not a UB (yes, I know that sounds weird, but keep reading).
Apple *DO* create ARM-code ‘universal binaries’ – UBs are merely a container for code for various different architectures (it’s what the magic number at the start of ALL binaries compiled by Xcode is for).
Also, every single iPhone application can be compiled to run on the iPhone Simulator. The key thing to know is that the iPhone Simulator is exactly what it says it is – a simulator which runs x86 code, not ARM – it is NOT an emulator.
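The “magic” mentioned above is the magic number of the Mach-O fat (universal) container: the file starts with 0xCAFEBABE followed by a table of per-architecture entries. A minimal sketch of that layout in Python – the cputype name table and the dummy offsets are simplifications for illustration, not a full Mach-O parser:

```python
import struct

FAT_MAGIC = 0xCAFEBABE          # magic number at the start of a fat ("universal") Mach-O file
CPU_NAMES = {7: "i386", 12: "arm", 18: "ppc"}  # a few Mach-O cputype values

def build_fat_header(cputypes):
    """Build a minimal fat header listing the given architectures.
    Offsets/sizes are dummies; a real file would append the per-arch images."""
    out = struct.pack(">II", FAT_MAGIC, len(cputypes))
    for ct in cputypes:
        # cputype, cpusubtype, offset, size, align (all big-endian, one fat_arch entry)
        out += struct.pack(">IIIII", ct, 0, 0, 0, 12)
    return out

def list_architectures(data):
    magic, narch = struct.unpack_from(">II", data, 0)
    if magic != FAT_MAGIC:
        return None                       # not a fat binary (could be a thin Mach-O)
    archs = []
    for i in range(narch):
        cputype, = struct.unpack_from(">I", data, 8 + i * 20)
        archs.append(CPU_NAMES.get(cputype, hex(cputype)))
    return archs

blob = build_fat_header([7, 18])          # a hypothetical i386 + ppc universal binary
print(list_architectures(blob))           # ['i386', 'ppc']
```

Nothing in the container format itself limits which architectures can appear in the table, which is the poster’s point: an ARM slice is just another entry.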
It’s also highly likely there is a full OS X Snow Leopard build running on ARM deep in the bowels of Apple, just as there were builds of 10.0 running on x86 for years before his Jobsness admitted it during a keynote.
Thingi
This is a direct quote from Steve Jobs from the keynote:
“iPad is powered by our own custom silicon. We have an incredible group that does custom silicon at Apple,” company co-founder Steve Jobs said during Wednesday’s keynote. “We have a chip called A4, which is our most advanced chip we’ve ever done that powers the iPad. It’s got the processor, the graphics, the I/O, the memory controller — everything in this one chip, and it screams.”
He did NOT state they designed a completely custom CPU: he very clearly stated what it was. It is standard practice, from the lowest level (digital logic gates as well as analog circuitry) to the highest, to use pre-existing bits of known-good circuit design, just as it is standard practice in software to reuse existing known-good parts (except for the too many that end up with their code on thedailywtf.com). There is still true design going on; it’s not as if Apple took a pre-existing single chip and relabeled it, as you’d seem to think from your thread’s title. Does Apple design every chip they use? Of course not, and they never claimed that! Does Apple design custom silicon that determines how it all works together? They’ve been doing this since before you were born!
I find it difficult to believe that they just ditched their new GPUs that arrived in the iPhone 3G S and third generation iPod touch, especially when they have money invested in the company that provides them.
Wouldn’t it make more sense to use the license they already have to create a SoC with a non-ARM-based GPU to continue what they were doing?
I think Bright Side of the News were looking for their 15 minutes of fame, like so many others who jump on an Apple event. After all, there is an “expert” in every crowd.
PA Semi architects ARM devices…this isn’t much of a stretch considering the A4 is indeed an ARM chipset.
It isn’t much of a stretch and doesn’t take much thought, which is what most people are doing – not thinking much. It makes almost no sense that a company as good at money as Apple is would throw away money by abandoning that investment.
The company makes purchases and investments for good reason. They have some of the best financial people in any business. There is no good reason to believe that there isn’t a PowerVR-derived GPU portion in the SoC.
There isn’t, you’re right. If motorola can use the PowerVR GPU on a cortex A8, why can’t Apple use one on a modified Cortex A9MP?
The ARM Mali 5x series are fixed-point OpenGL ES 1.x GPUs, but the SDK supports GL ES 2.0. So if this chip uses a Mali GPU, it should be the Mali 200 or Mali 400.
http://www.arm.com/products/multimedia/graphics/mali_hardware.html
Umm… How is it that the PA Semi/Arm/Apple history was overlooked before this article was posted? Of course Apple didn’t design the chip in-house! That’s like saying that Asus designs their entire products in-house.
I’m going to throw out there that running a “news” site (and I use heavy quotes here) implies current knowledge of the events being reported.
Or is OSAlert a blog? Then you’re ok.
The truth right now is that we simply don’t know, but there are a couple things that seem inconsistent with other information.
First, having a Cortex-A9 MPCore doesn’t at all mean that they’ve implemented dual cores – it simply means that the IP core supports multi-processor configurations. Granted, a single process might spin multiple threads, but going whole-hog on another CPU core wouldn’t benefit the majority of applications, and seeing as how Apple is so conscious of their thick margins, I can’t imagine them going for a dual-core processor in a single-task device.
Second, it would be odd for them to have gone with ARM’s Mali GPU IP in light of the fact that they own a nearly 10% stake in Imagination Technologies – who make the PowerVR series of mobile GPUs used in the iPhone and other devices. PowerVR’s tech is highly scalable as well: they offer IP that is 2, 4, 16 and 64 times as powerful as what’s in the iPhone 3GS, and the 4x offering would be essentially tailor-made to scale the 3GS’s prowess to the iPad’s large display.
It’s most likely that they had their PA Semi folks optimize a licensed A9 MP core for low-power operation (their specialty) and integrate any additional silicon needed. They’re familiar enough with ARM, given that that’s where most of them came from before PA Semi.
The guess I made on a tech forum was right: it’s basically the same thing as a TI OMAP4-series chip.
Of course, am I the only one laughing at a single-tasking OS sitting atop a multi-core A9?
I’ve been wondering who REALLY makes it – Apple pulling fabs out of its backside seems fishy, so who is it REALLY? TI? Samsung?
No real reason for them to waste their in-house art ****’s on chip design for something TI can already provide out of existing stock.
Edited 2010-01-29 04:31 UTC
Apple is an ARM licensee, and they bought the whole PA Semi team not for the PPC tech but for their SoC design experience (it was cheaper to just get the whole team than to re-hire and start an in-house SoC team).
Chances are that they went through Samsung, since Apple already sources their ARM chips for the iPhone through them.
BTW, TI went fabless, so technically they don’t make chips either. So it makes little sense for Apple to have their own SoC team, and then go through TI en route to the actual fabs.
If Apple is not using Samsung, then there are basically three major third-party fabs they may be using: TSMC, Chartered/GlobalFoundries, or UMC. As I said, since they already have a fabbing partnership going with Samsung, chances are they do the A4 through them too. My guess is that the A4 is being “tested” in the iPad first, and then, once the process is fine-tuned (or moved to a smaller feature size), it makes its way into the iPhone. That approach allows Apple to get the design into a bigger form factor (read: fewer heat and power issues due to larger size and larger battery) and then fine-tune it for the more constrained iPhone design space.
It is a fairly good approach from a business/engineering perspective.
Reminds me of the Commodore days, though they went on to totally squander what they had.
No need to look all the way back to 1980. Fabless semiconductor design houses are the norm not the exception, and that has been the case for over a decade at least.
The only companies left that do their own processor design and fabbing are Intel and IBM. And IBM will most likely get rid of their fabs (and maybe their processor design teams too) some time in the near future.
Edited 2010-01-29 22:13 UTC
I really don’t understand why Thom presents Bright Side Of News’s findings as facts, even though there are three clear hints that they may actually be false.
1.) They write the Cortex A9 was “identical to ones used in nVidia Tegra and Qualcomm Snapdragon”, even though neither Tegra nor Snapdragon use a Cortex A9, which implies they are not as technically adept as they pretend to be.
2.) They say Apple uses an ARM Mali graphics processor and fail to realize that it would be at least remarkable if Apple really used a slower GPU than what they could buy from PowerVR, considering they own 10% of that company (Imagination Technologies).
3.) They really compare Steve Jobs to Joseph Goebbels(!!) This alone makes them untrustworthy.
So while it may still be true what Bright Side Of News claims, it should be taken with a grain of salt.
Update #1, January 28, 2010 22:22 GMT/UTC – Following the request for comments, we were incline to update the story. First of all, we do not have concrete information about the number of cores inside the Apple A4 “CPU that it isn’t” i.e. A4 SOC. We were told that the ARM licensed its CPU and GPU technology to Apple. That’s it. Out of that technology, Cortex-A9 is intended for manufacturing in advanced manufacturing process such as 45nm, 40nm, 28nm and so on, while Cortex-A8 doesn’t have advanced video processing capabilities that Cortex-A9 has. As the time progresses, we’ll know more about what LEGO brick components did Apple use to create the A4. One thing is certain – it uses ARM IP throughout the silicon. Don’t shoot the messenger.
yes – instead we should shoot the people who are unable to distinguish the boring message (a4 = arm soc) from utterly baseless but interesting speculation (cortex a9 + mali).
my own speculation: i wouldn’t be surprised if this is just an arm cortex a8 + powervr sgx. same as in the iphone 3gs, just with a higher clock rate and maybe more cache and an apple-designed memory controller. but please don’t report this as fact.
(I quoted the original article, btw – there was an update posted at the bottom of the article with this text).
There is this thing called Time-To-Market. Since they did not go with an existing chip, they must have cut corners somewhere.
You have a problem with Goebbels abilities in the field of propaganda? They had at least half of Germany brainwashed for 12 years! If Jobs had even half of Goebbels talent, we would all use Macs.
Your point is just as invalid as stating that since Hitler was that bad, his artistic talents were negated. (He was a very talented painter, BTW.)
Is this “Mali” GPU open-specced at the HW level, or closed like Imagination’s SGX silicon?
Edited 2010-01-29 07:28 UTC
Honestly, Apple’s claims are overblown and hyperbolic, but they did manage to get A9MP designs into the wild before their competitors, (Tegra and Snapdragon use A8 cores) so they do have reason to be proud of the SoC they’ve created. Further, contrary to what you seem to believe there must be *some* Apple designed silicon in there… how do you propose they connected the discrete units together? solder? string? shoelaces?
I thought this would be how many companies would operate: instead of reinventing the wheel, they would start with a platform and then continue to extend and expand it until it doesn’t look anything like the original.
Apple did, indeed, own a substantial part of ARM when ARM was spun off from Acorn Computers. This was when Apple used ARM in its Newton (which is the spiritual ancestor of iPad). Selling these shares again must be one of Apple’s worst business decisions (unless they did so at the height of the Internet bubble).
As for the A4 design, I also believe more in PowerVR than Mali — it is a more natural development from iPhone. The core is most likely Cortex A9, as stated, though.
As for Windows on ARM. WinCE already runs on ARM, but I doubt Win7 ever will. As others have said, Windows is nothing without all the applications running on it, and they are (nearly) all exclusively x86. In the long run, Microsoft is likely to host Windows on .NET to gain hardware independence, but they would need to convince a large number of third party software producers to make that move. And that will not be easy. In an interim period, non-x86 versions of Windows could run x86 in emulation, just like Apple emulated M68000 on PPC or like Digital emulated x86 in the Alpha version of NT. If a JIT-like strategy is used (similar to Digital’s Fx!32) and the most common shared libraries are ported natively, the speed penalty doesn’t need to be that large.
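The Fx!32-style approach described above – interpret or translate foreign application code, but let calls into commonly used shared libraries land in natively ported versions – can be sketched roughly like this. All names and the dispatch scheme here are invented for illustration; no real ABI is modelled:

```python
# Toy sketch of the Fx!32-style idea: foreign code is emulated (slow),
# but calls into libraries that have native ports bypass the emulator.
# Every name below is hypothetical.

NATIVE_PORTS = {
    # libraries already rebuilt for the new architecture
    "libc.memcpy": lambda dst, src: dst.extend(src),
}

def emulate(symbol, *args):
    # Stand-in for a slow software interpreter of foreign binary code.
    return ("emulated", symbol)

def call(symbol, *args):
    native = NATIVE_PORTS.get(symbol)
    if native is not None:
        return native(*args)       # full native speed
    return emulate(symbol, *args)  # fall back to the interpreter

buf = bytearray()
call("libc.memcpy", buf, b"abc")   # dispatched to the native port
print(bytes(buf))                  # b'abc'
print(call("app.mainloop"))        # no native port -> falls back to emulation
```

The more time an application spends inside ported libraries, the smaller the share of execution that pays the emulation penalty – which is the poster’s argument for why the overall slowdown need not be large.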
Similarly, I expect Apple to move their entire line, from iPod to iMac, to use LLVM for code distribution. As yet, LLVM isn’t mature enough for this, but it would make it a lot easier for Apple to switch processor (yet again). The most likely move would be to switch from x86 to ARM for MacBooks and iMacs, but in the long run they will need a 64-bit processor. I would be surprised if ARM didn’t have something like that up its development sleeve, though.
The speed penalty of running x86 applications in emulation on a low-power ARM CPU, such as could be found in a tablet device, would indeed be very large.
The emulated x86 applications would be completely unusable.
Not necessarily. Remember Transmeta Crusoe?
An ARM is a RISC chip. x86 is CISC, or more accurately these days (Core architecture) a RISC chip with CISC macros. Somebody could create a hardware x86 translator to convert the CISC commands to RISC commands. Transmeta did the same thing, but in software – that was why it was slow – but the principle was sound. x86 would not be difficult to emulate; you just have to get over the behemoth that is Intel and give people a reason why they’d want x86 compatibility as a secondary function.
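As a toy illustration of that CISC-to-RISC translation idea (the instruction names and the three-tuple “encoding” are invented for the example; no real ISA is being decoded), a single x86-style read-modify-write instruction splits into load/op/store micro-ops:

```python
# Toy CISC -> RISC translation: an instruction that operates directly on
# memory is broken into RISC-style micro-ops (load, ALU op, store).
def translate(instr):
    op, dst, src = instr                 # e.g. ("add", "[0x1000]", "r1")
    if dst.startswith("["):              # memory operand: needs load + store
        addr = dst[1:-1]
        return [
            ("load",  "tmp", addr),      # tmp <- mem[addr]
            (op,      "tmp", src),       # tmp <- tmp OP src
            ("store", addr,  "tmp"),     # mem[addr] <- tmp
        ]
    return [instr]                       # register-only: already RISC-like

print(translate(("add", "[0x1000]", "r1")))
```

Doing this mapping in dedicated hardware is the “translator” idea above; Transmeta’s Crusoe did the equivalent in a software layer, which is where the slowdown came from.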
A similar type of thing is actually embedded in the Loongson CPU, isn’t it?
http://en.wikipedia.org/wiki/Dragon_chip
http://en.wikipedia.org/wiki/Dragon_chip#Hardware-assisted_x86_emul…
Even with 200 extra specialist macro-instructions (hardly RISC-like any more, and coming at a cost of 5% extra die area), Loongson-3 achieves only 70% of native performance when emulating x86.
Without all that extra help embedded in the CPU die, ARM would get nowhere near that. It would be a dog performance-wise when trying to run (emulate) x86 binaries.
Exactly why would anyone want to run x86 binary executables non-natively on ARM at dog-slow performance when there are 20,000+ native ARM packages (which would all work at full CPU performance) for malware-free, zero-cost, full-driver-and-peripheral-support Linux, which cover every conceivable use case for a tablet or a netbook class ARM machine?
A lot of people do not seem to understand that the “reduced” in RISC refers to the cycles per instruction, not the number of instructions.
Some of the early RISC designs included more instructions in their ISA than their CISC counterparts.
This has nothing to do with the point made in the post to which you responded.
The Loongson CPU includes a lot of support in terms of extra macro-instructions in order to assist with x86 emulation, and still it achieves only 70% of native performance.
ARM does not include any such support at all, so x86 must be entirely emulated in software.
Hence, the performance of a low-power ARM chip running x86 binaries under emulation will be dismal, and copies of x86 binary executable applications will be utterly useless under Windows on ARM.
Hence, Windows on ARM will not have the benefit of the existing corpus of x86 Windows applications.
That was the point.
You were using an arbitrary definition of RISC, and I was simply letting you know that the “reduced” in RISC does not refer to the number of instructions in a specific ISA/microarchitecture.
I wasn’t refuting the gist of your point, I was simply making a small correction. Reading and comprehension and all that jazz…
I understand completely that a “Reduced Instruction Set Computer” involves instructions that “do less”, as opposed to a lesser number of instructions.
http://en.wikipedia.org/wiki/Reduced_Instruction_Set_Computer
Hence the need to have additional macro-instructions (that is, more-complex instructions built up from a short sequence of simpler instructions) in the Loongson CPU in order to better emulate a Complex Instruction Set Computer (CISC) which is an x86 machine.
http://en.wikipedia.org/wiki/Complex_Instruction_Set_Computer
So? What did I say that in any way contradicted this, causing you to interject your know-it-all post?
None of this in any way, one way or another, has any bearing whatsoever on the point of discussion … which was that Windows on ARM makes no sense because the attraction of Windows lies only in the large corpus of existing binary-only x86 programs for Windows that people want to run.
People still won’t be able to run x86 binary applications on an ARM machine, even if they were duped into buying a “Windows on ARM” OS machine.
Counting the number of different instructions on a CPU is not as easy as it sounds. For example:
CPU A has an ADD instruction, and ADC (add with carry) instruction, ADDI and ADCI (immediate versions of the above) and several shift/rotate instructions.
CPU B has a single ADD instruction with a modification bit that specifies whether the carry flag is used and a bit that specifies if the next 12 bits is read as an immediate constant or as a register that can optionally be shifted or rotated.
So which CPU has more instructions? You could choose to read every combination of bits that do not specify registers or constants as a separate instruction, or treat all the combinations as a single parameterised instruction. So you very quickly end up comparing apples to oranges when you try to count instructions.
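A toy decoder for the hypothetical “CPU B” above makes the ambiguity concrete (the encoding is invented for illustration, not a real ISA): depending on how you count, the one parameterised ADD word is either 1 instruction or 4.

```python
# Toy decoder for the hypothetical "CPU B" (invented encoding, not a
# real ISA): a single 16-bit ADD word, where bit 15 selects carry-in
# and bit 14 selects an immediate vs. register operand.

def decode(word):
    use_carry = bool(word & 0x8000)   # bit 15: ADD vs ADC
    immediate = bool(word & 0x4000)   # bit 14: operand kind
    operand = word & 0x0FFF           # low 12 bits: immediate or reg number
    name = ("ADC" if use_carry else "ADD") + ("I" if immediate else "")
    return name, operand

# The four bit combinations map onto CPU A's four separate mnemonics:
print(decode(0x0003))  # ('ADD', 3)   add register r3
print(decode(0x4007))  # ('ADDI', 7)  add immediate 7
print(decode(0x8003))  # ('ADC', 3)   add-with-carry register r3
print(decode(0xC007))  # ('ADCI', 7)  add-with-carry immediate 7
```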
RISC is usually read as meaning “reduced complexity instruction set”. This often implied a lower cycle count per instruction, but the main motivation was not lowering cycle counts; it was providing the most performance for a given transistor budget. For comparison, the 80386 (1985) used 275,000 transistors, while the ARM1 (1985) used 24,000 and the ARM2 (1987) used 25,000. Granted, the 80386 had on-chip address translation, but so did the later ARM600, which used 33,500 transistors.
In any case, given the transistor budgets these days, the RISC vs. CISC distinction is getting increasingly muddied, and the challenge these days is not so much getting the most performance for a given transistor budget (as transistors are cheap), but rather for a given power budget. These are not entirely unrelated, though.
It is Apple’s chip, and they are completely accurate to say it is.
It’s an out-of-order dual-core design, so it’s not comparable to Snapdragon etc. It’s the Core 2 Duo of the ARM world, with Snapdragon being more like the Atom.
I’m surprised if Apple’s using Mali instead of SGX.
Apple (and everyone else) use (M|S)GX in their other ARM products.
Apple bought a big chunk of Imagination a while back too.
Has anyone actually used a Mali?
Out of order? I haven’t heard that. The current claim is that it is a Cortex-A9 with two cores. This is largely equivalent to having two Cortex-A8s, though there is some flexibility in execution unit choices I believe. Since the Snapdragon is a Cortex-A8 which has been tuned a bit and clocked to 1GHz I would say that the comparison between the Apple A4 and a hypothetical dual-core Snapdragon is very fair.
The Cortex-A8 is an in-order design; the Cortex-A9 is an out-of-order design. ARM offers a single-core design called the Cortex-A9 and a multi-core design called the Cortex-A9 MPCore.
Ah, right you are, I had misread the datasheets.
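The in-order vs. out-of-order distinction above can be shown with a minimal single-issue timing sketch (a toy model with made-up latencies, not a real A8/A9 pipeline): an in-order core stalls on a long-latency load even when later, independent work is ready.

```python
# Toy single-issue timing model (invented latencies, not a real
# Cortex-A8/A9 pipeline): in-order issue must wait on the oldest
# instruction; out-of-order issue may pick any ready instruction.

# Each entry: (dest_reg, source_regs, latency_in_cycles).
# r2 depends on the slow load into r1; r3 and r4 are independent.
PROG = [("r1", [],     4),   # slow load into r1
        ("r2", ["r1"], 1),   # add needing r1 -> must wait for the load
        ("r3", [],     1),   # independent work
        ("r4", [],     1)]   # independent work

def run(prog, out_of_order):
    ready_at = {}                  # register -> cycle its value is ready
    pending = list(range(len(prog)))
    cycle = 0
    while pending:
        # An in-order core may only consider the oldest instruction;
        # an out-of-order core may issue any instruction whose
        # source operands are ready.
        window = pending if out_of_order else pending[:1]
        for i in window:
            dest, srcs, lat = prog[i]
            if all(ready_at.get(s, 0) <= cycle for s in srcs):
                ready_at[dest] = cycle + lat   # issue it this cycle
                pending.remove(i)
                break
        cycle += 1                             # one issue slot per cycle
    return max(ready_at.values())              # cycle the last result lands

print(run(PROG, out_of_order=False))  # in-order: stalls behind the load -> 7
print(run(PROG, out_of_order=True))   # out-of-order: hides the load -> 5
```

In the in-order run the two independent instructions sit behind the stalled add; the out-of-order run slips them into the load’s shadow, which is the whole point of the A9’s design.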
No, what is pretty clear is that nobody knows anything about this processor and that you are spreading bullshit on the subject…
Apple created nothing. Apple only takes what already exists, puts a beautiful face on it, and ships it at an extortionate price. Nothing more than that.
Apple sux. A LOT.