“The name Snapdragon is fast becoming well-known among consumers as the chip to have inside your smartphone. Offering speeds of up to 1.5GHz at the moment, it’s certainly one of the fastest mobile chips out there. Qualcomm doesn’t want the reputation of Snapdragon to falter, though, so the chip manufacturer has just announced an update that will have smartphone and tablet users drooling. The next iteration of the Snapdragon processor line is codenamed Krait and uses 28nm manufacturing technology. It will be offered in single, dual, and quad-core versions with clock speeds up to 2.5GHz. If the huge increase in performance wasn’t enough for you, Qualcomm also boast a 65% reduction in power use over existing mobile ARM chips.”
Please world, stop going so fast.
Let us please understand the full capability of one and a half billion instructions per second before doubling it. It's 2011 and people are still eking more performance out of a 1 MHz Commodore 64.
All I think about is power usage and how long it lasts on a full charge. If it were intended for server use, it would make a lot more sense. Maybe that is what ARM intended it for; what the manufacturer does with it is their problem, of course.
The news item reads "If the huge increase in performance wasn't enough for you, Qualcomm also boast a 65% reduction in power use over existing mobile ARM chips"… so I guess they've taken the energy concern into consideration.
And I’m still browsing with a 1.6GHz laptop.
But in the end they might end up being able to run Vista on an x86 emulator on a phone :p
Yes, but… this one goes to 11.
I don't know whether I need a mobile with this much power. But I do need a PC with this much power that consumes less energy.
I think these mobile processors will be the future of the data center.
32-bit addressing, though… hmmm…
No, PAE et al don’t fix the problem.
The ARM Cortex A15 MPCore technology features LPAE addressing up to 1TB of main memory.
http://www.arm.com/products/processors/cortex-a/cortex-a15.php
As I understand it, there is a built-in memory management unit. In order to take full advantage of the 1TB addressing, I believe all that is required (at least for Linux) is for the kernel to be built to take advantage of the hardware MMU.
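To put the 1TB figure in perspective: LPAE widens physical addresses from 32 to 40 bits, and 2^40 bytes is exactly 1TB. A quick back-of-the-envelope check (plain arithmetic, nothing ARM-specific):

```python
# LPAE widens physical addresses from 32 to 40 bits.
ADDR_BITS_CLASSIC = 32
ADDR_BITS_LPAE = 40

classic_limit = 2 ** ADDR_BITS_CLASSIC   # bytes addressable with 32-bit physical addresses
lpae_limit = 2 ** ADDR_BITS_LPAE         # bytes addressable with LPAE

print(classic_limit // 2 ** 30, "GB")    # 4 GB
print(lpae_limit // 2 ** 40, "TB")       # 1 TB
print(lpae_limit // classic_limit)       # 256x more addressable physical memory
```

Note that individual processes still see a 32-bit virtual address space; LPAE only raises the ceiling on total physical memory.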
Yes, but look at the Motorola Atrix lapdock and HD Multimedia Dock. You could find reasons for a quad-core 2.5GHz CPU in a smartphone once that smartphone becomes your laptop or desktop.
But I think that, as you said, ARM is pushing for netbook/tablet and low energy servers.
And there are a lot of other appliances where ARM is strong but extra speed is useful (TV, media players, car infotainment…)
Are there any demo boards with this or any other Qualcomm chips for hobbyists to play around with, à la BeagleBoard and PandaBoard? The potential of such a piece of kit for mobile computing is breathtaking.
Four cores at 2.5GHz: depending on how fast the prices fall, that could spark the next cheap netbook/sub-notebook price war.
This would be particularly dangerous for Microsoft: while porting Windows would be trivial, the third-party stuff would take some time. This could, of course, open the door for Linux to make another attempt to push into the consumer computing space.
I await my cheap Snapdragon Linux sub-notebook.
To become a competitor to the PC platform, ARM still has to define a standardized desktop architecture, though.
So far, the only standard thing in the ARM ecosystem is the instruction set… Err… wait a minute. Which of the available instruction sets, by the way? If you only count the A-series, there are already four of them.
That’s one of the plus points of VM based environments.
Keep the bytecode as the executable file format and let the VM/JIT take care of the proper instruction set.
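The idea above can be seen in any bytecode-based language; Python is just a convenient illustration, not something ARM-specific. CPython compiles a function to architecture-independent bytecode, and the interpreter (the "VM") maps that onto whatever instruction set the host CPU actually speaks:

```python
import dis

# A trivial function; CPython compiles it once to portable bytecode.
# The same bytecode runs unchanged on x86, ARM, or anything else the
# interpreter has been ported to.
def add(a, b):
    return a + b

bytecode = dis.Bytecode(add)
ops = [instr.opname for instr in bytecode]
print(ops)  # opcode names (exact names vary between Python versions)
```

The JVM and CLR work the same way: one distributed binary format, with the per-architecture work pushed into the VM.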
You can just as well rebuild code that is not based on a VM, if it's designed with portability in mind to start with.
But this is more of a duct-taped workaround than a real solution. You move the complexity of adapting to multiple nonstandard hardware platforms into the VM or the compiler, which means making it much more complicated, and thus potentially slower, buggier, and, in the case of a VM, less secure.
Exactly the opposite. With a VM you can do code validation, as the verifiers present in the JVM and CLR attest.
You cannot do code validation so easily with assembly, because of the lower-level operations it allows.
That is the main reason why Google is forced to restrict the NaCl instruction set.
Plus, thanks to dynamic code profiling, a JIT can generate better code for the actual running processor than a static compiler could.
The opposite of what?
Well, I think I’ll try to reword this post differently.
This means that, provided multiplatform frameworks and device-independent coding practices are used, most programming languages allow code to be recompiled so that it may run on a new architecture. This has nothing to do with the performance discussion, and I'm not talking about Assembly.
This also means that using VMs or framework/compiler tricks (which one you use doesn't matter for this part of the comment) doesn't solve the problem of not having a single standard architecture. Why? Because the myriad of workarounds this brings still has to live somewhere. You just put them in the VM or the compiler, close the black box, put the screws back, and pray that it will work.
This is deceptive reasoning, because you’ve still added code somewhere. Adding code in these areas generally means adding bugs and reducing generated code speed, in a classic example of bloat.
In some areas (mobile devices especially), VMs are also responsible for enforcing system security. Making them more complicated may also mean that this part is more prone to failure, making the VMs, and thus the OSs, less secure.
In short, no matter what development tools you use, you do want to have a standard hardware architecture. Because it simplifies the whole stack, and thus results in code that’s more maintainable, which in turn means less bugs, better speed, and more secure VMs.
Ah, ok.
Then please ignore my previous reply, I do agree with you.
A-series are ARMv7.
Cortex-M microcontrollers feature Thumb2, but not the ARM instruction set. But these need not be compatible with the A-series, because they target an entirely different market.
I don't think ARM for the desktop will be available before ARMv8. It would be stupid to change the ISA right at the beginning of a possible "desktop intervention".
Well, from what I read when I had a look at the ARM manual some time ago, inside ARMv7 there are four separate instruction sets: ARM, Thumb, Jazelle, and a fourth one whose name I have forgotten.
Support for each of those was apparently optional, as it could be probed via CPUID.
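On Linux/ARM, the optional features a given core supports show up in the "Features" line of /proc/cpuinfo, so user space can probe them without touching the ID registers directly. A minimal sketch of such a probe; the sample cpuinfo text below is illustrative, not taken from real hardware:

```python
# Illustrative /proc/cpuinfo excerpt; "thumb" = Thumb, "java" would be
# Jazelle, "neon" = the NEON SIMD extension.
SAMPLE_CPUINFO = """\
Processor       : ARMv7 Processor rev 2 (v7l)
Features        : swp half thumb fastmult vfp edsp thumbee neon vfpv3
CPU implementer : 0x51
"""

def parse_features(cpuinfo_text):
    """Return the set of feature flags from a cpuinfo-style text."""
    for line in cpuinfo_text.splitlines():
        if line.lower().startswith("features"):
            return set(line.split(":", 1)[1].split())
    return set()

features = parse_features(SAMPLE_CPUINFO)
print("neon" in features)   # True for this sample
print("java" in features)   # False here: this sample core lacks Jazelle
```

On a real device you would read the text from open("/proc/cpuinfo") instead of the sample string.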
Maybe Jazelle RCT? After introducing Jazelle, ARM made some more improvements to the VM execution environment.
Now there is Jazelle DBX and Jazelle RCT, with different types of functionality.
You’ve got it all wrong.
ARM and Thumb(2) are the same instruction set in different encodings; both encodings can be used within the same program to save space. Thumb(2) is size-optimized, ARM is performance-optimized.
Jazelle and NEON are ISA extensions.
You don't think of SSE as a separate ISA, do you?
Jazelle support is mandatory in the A-series.
It is optional in the R-series: DSPs hidden deep inside peripheral chips, like a WiFi chip, etc.
Running Java is not what you'll do with those.
The end user can't run anything on them.
Your x86 PC has plenty of ARM cores in it. Did you ever notice? Have you ever run into their incompatibility with x86? =)
NEON SIMD extension is optional in A-series, because it is a pretty large piece of hardware.
I think the shape of a Qualcomm/NVidia/OMAP war destined to slowly fragment the platform is starting to reveal itself.
All of them support different extensions for doing basically the same things.
I wonder… We’ve already seen this very thing happening on x86 with accelerated graphics and network cards.
After years of crappy drivers and an endless amount of duplicated work, we have only recently started to recover from this mess, with vendors taking the first steps towards a potential future standardization by opening up their proprietary specs.
Aren't we supposed to learn lessons from the past or something?
Oh, speaking of ARM vs x86, just discovered this while looking around on youtube…
http://www.youtube.com/watch?v=UiQ0AnlfBu4
If I didn't know how crappy Atoms are at running recent games, I'd be quite impressed. Nice concept, Razer guys!
Although Microsoft has announced Windows for ARM, so far it is just an announcement. Vapourware.
If you want a 2.5GHz quad-core Snapdragon processor in your next netbook/sub-notebook, right now you will have to run Linux in some form or another, most likely either Ubuntu or Android.
Not quite. They have demoed a working system on stage, too. See for example http://www.youtube.com/watch?v=xKc_XGuvNIk (it gets interesting after 1:00)
Sure, that’s not a shipping product, but it’s much more than a mere announcement.
They have demoed it running on a system on a chip …
Pro Linux, Anti Microsoft crap again.
Actually, Windows on ARM has been demoed, but this quad-core 2.5GHz chip is just a paper announcement. I think you are misrepresenting which product is the vapourware.
Bet you’ll be able to dial and text really fast on them…
Still gimped with embedded RAM… in all likelihood that's a massive handicap. Even though this will surely run circles around Atom, it probably won't compete even with slow laptop processors.
Qualcomm did what Nokia should have done: create a next-generation ARM SoC with wireless/cellular capabilities. I believe Qualcomm settled its patent suit with Nokia around 2008, and it also purchased a mobile GPU solution from AMD.
What Nokia needed was a complete hardware platform it could customize to serve both its low-end and high-end aspirations. By trying to ally with Intel, an enemy of most of the US cellular providers because of WiMAX, Nokia, through the MeeGo alliance, guaranteed it would have no platform at all by 2011 that would work for higher-end smartphones.
I am aware there are other ARM SoCs available to Nokia, such as TI's OMAP, which Nokia has used in some products; but since Windows Phone products so far all seem to use Qualcomm's Snapdragons, I wonder if Nokia is in for an even rougher ride than expected, having to negotiate yet more concessions from a recent enemy.
I am convinced that an honest historical account, which may never be written, would conclude that it would have been far cheaper for Nokia to choose the harder path: spend many billions developing its own ARM SoC platform, then customize its software by extending Symbian's capabilities on a platform it controlled.
Uber-powerful CPUs in a phone or wristwatch are either a waste or an opportunity.
They could decipher a voice and facial expression as a means of inputting text messages, for example.
(The entry of text messages into cell/mobile phones has demonstrated the general public's eager ability to survive truly awful user interfaces.)
All these new mobile devices are making my last computer look like a TI-83 calculator by comparison! I’m now able to fit more computing power in my pocket on a reasonable budget than I could have fit on my desk just four years ago!
Many people have more computing power in their pocket than most private PCs I know of, because nearly all the private PCs, and even the private notebooks, I know of are older than four years.
Once upon a time, I wanted the power of a Cray 1 on my desktop (and yes, I’m old enough to remember when the Cray 1 was introduced). Now I want the power of a desktop PC on my phone (with a dock to make it usable as my desktop/notebook PC).
With the evolution of picoprojectors and camera-based gesture recognition, you won't need any dock.