“Although the Tegra 2 chip with its integrated dual-core processor has only recently been released, NVIDIA is already poised to announce its successor at the Mobile World Congress next month. According to Mike Rayfield of NVIDIA, Tegra 3 may incorporate a quad-core processor, with the main focus on supporting Android smartphones and tablet PCs.”
Since ARM announced the Cortex A15 spec a few months back (http://www.arm.com/products/processors/cortex-a/cortex-a15.php)
and Nvidia announced Project Denver based around the A15 at CES (http://pressroom.nvidia.com/easyir/customrel.do?easyirid=A0D622CE9F…),
it’s good that they’ll be able to compete with x86 hardware. I’m still not overjoyed, though, since they still refuse to release docs on their hardware and may thus see slower uptake in the already established ARM server market.
Project Denver is not based on the A15. Nvidia has an architecture license, and has additionally licensed the A15 core.
They keep their GPU locked up. If Qualcomm and ATI were to open up their GPU parts, or the SGX were to become more open, I’d be happier.
The big positive here is that Nvidia is helping push the envelope with ARM. The other players out there don’t seem as motivated to grab a chunk of the market with active marketing.
Marvell already has tri-core and quad-core chips.
For the past two years some companies have shown interesting products: Pegatron, Nufront… but nothing has come out, since no OEM wants to sell products from these companies. I just hope that will change soon.
Check out the video of the Nufront laptop:
http://armdevices.net/2011/01/07/nufront-arm-powered-laptops/
Imagine a quad-core ARM chip in a desktop or laptop, powerful enough for the majority of users.
I don’t think going quad-core benefits the majority of users at all. What really matters is single-core performance. Most apps are still single-threaded. You only benefit from more cores if you run more apps at a time or do something really specific, like video transcoding, that can make use of them. Did you notice how long it took for the About Ubuntu window to appear? OK, that may be due to slow I/O, but it’s things like these that matter for the average user. Most of them, unfortunately, don’t even know what a core is.
Don’t get me wrong: I’m all for quad (and more) cores. But I’m a geek. I can tune my system to get the best out of it, and I take pleasure in doing that. And what they say in the video, that it’s a 2 GHz chip, is certainly encouraging. So thanks for the video.
The “About Ubuntu” window has always been dog slow for me, even on fast computers where everything else is snappy. I don’t know what the hell that help app is doing that takes so much time.
Not a lot of applications benefit from all the cores… and anyway, not every task can be split across all cores.
So more cores will mostly allow you to run more applications at once.
Please note, Ubuntu is not really the best example when we want to talk about performance or usability…
“I don’t think going quad-core benefits the majority of users at all. What really matters is single-core performance. Most apps are still single-threaded.”
I dare to disagree. What apps does a regular home user use? Mostly a web browser, a video player, perhaps Office for budgeting or whatnot, and maybe a few games. Well, all major web browsers these days run in several threads, video players run in several threads, Office isn’t CPU-bound anyway (it’s more memory-bound), and any modern game also runs in several threads.
Now, of those, only games are really CPU-bound anyway, so a quad-core would indeed be a waste. A dual-core would still be somewhat beneficial, though; how much can be debated.
You are free to disagree. But in my experience you benefit from more cores only when the apps are heavily multithreaded and the workload is not I/O-bound. Firefox, I think, is not a good example: I often Ctrl-click a bookmarks folder to open all the bookmarks inside it in different tabs, and it then freezes for several seconds. It behaves the same under Linux and XP. It’s my biggest gripe with Firefox. And for most games, if you haven’t invested in a high-end GPU, the workload is GPU-bound. And, BTW, where are those multithreaded ARM games?
Again, I’m all for multi-core. It’s just that I think that 2 cores ought to be enough for anybody
The issue isn’t an overkill of CPU cores. The issue is the underuse of cores by the OS and the applications themselves.
We’re at a 4-, 6-, or 8-core baseline in CPUs today, and most applications are still only taking advantage of 2.
Corporations are like Ford: just give them a few new features and call it a release. Do that for 10 years and people tend to stop buying a Ford.
“It’s just that I think that 2 cores ought to be enough for anybody”
No, 640K *IS* enough for anybody!
Kochise
Mozilla is working hard to minimize and optimize I/O for Firefox 4, and slow I/O is probably the main reason why it freezes for several seconds. _But_ having the GUI in a separate thread will unfortunately have to wait for Firefox 4+. They do have a separate thread for Mobile Firefox 4, though.
There is no major web browser capable of using more than one thread per web page. The best you can do is start multiple browser processes and sometimes embed those processes into the same application, but that is similar to just starting multiple browsers, and multiple applications in general.
Sure! But with Firefox, the UI doesn’t seem to be in a thread of its own. And this really spoils the experience.
I did notice that you used the word “major”, but since WebPositive runs under Haiku I believe it is required to use two or more threads per web page.
On other systems, which web browsers use more than one thread per web page? Has anyone had experience with them? Do they run faster with a fast pipe? Asking because I don’t know what real gains there are to be had.
In my experience most games use 1.25 cores. Some games go all out and use 2.00 cores. But that is still only 50% of my quad-core.
I still don’t get why they can’t put a full core to work on the physics and a full core to work on 3D sound.
It only uses as much as is required. At one point the game uses more CPU time, at another point much less.
If you assign specific tasks to specific cores, as naive programmers do, you end up with a highly asymmetrical load across the cores.
Multicore programming in games/consumer software is not hard. Just split your data into chunks and feed them to an asynchronous, data-parallel job system that effectively utilizes all the cores. It will be efficient if you can break or relax time dependencies and manage your input/output streams right. That means synchronous is bad; async is good.
With a big enough dataset you can utilize 10 or 100 cores without any extra work on the coding side.
Even a simple JPEG image can be decoded as a set of tiny jobs and get a real speedup, a glyph-rendering system could process different symbols on multiple cores, etc. (see the sketch below).
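To make the chunk-and-feed idea concrete, here is a minimal C++ sketch of the pattern (my own illustration, not code from any particular engine; process_chunk and its brightness tweak are made-up placeholder work). It splits a pixel buffer into roughly one chunk per hardware thread and hands each chunk to std::async, so the independent jobs spread across whatever cores are available:

```cpp
// Minimal data-parallel sketch (hypothetical example, not from a real engine):
// split a buffer into chunks and hand each chunk to an asynchronous task.
#include <algorithm>
#include <cstddef>
#include <future>
#include <thread>
#include <vector>

// Placeholder per-chunk work: brighten a block of 8-bit pixels.
static void process_chunk(unsigned char* data, std::size_t count) {
    for (std::size_t i = 0; i < count; ++i)
        data[i] = static_cast<unsigned char>(std::min<int>(data[i] + 16, 255));
}

void process_image(std::vector<unsigned char>& pixels) {
    // Roughly one chunk per hardware thread; fall back to 1 if unknown.
    const std::size_t workers =
        std::max(1u, std::thread::hardware_concurrency());
    const std::size_t chunk = (pixels.size() + workers - 1) / workers;

    std::vector<std::future<void>> jobs;
    for (std::size_t start = 0; start < pixels.size(); start += chunk) {
        const std::size_t count = std::min(chunk, pixels.size() - start);
        // Chunks are independent, so no locking is needed until we join.
        jobs.push_back(std::async(std::launch::async,
                                  process_chunk, pixels.data() + start, count));
    }
    for (auto& j : jobs) j.get();  // wait for all chunks to finish
}
```

The same reasoning applies to the JPEG and glyph examples above: as long as the chunks don’t depend on each other, more cores simply means more chunks in flight at once.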
So do you have an idea why most games use between 1.25 and 2.00 cores? I understand that you can have a game which could use 12 cores but only shows up as using 30% of a quad-core, but I’m underwhelmed by the sound, AI, and physics in games. Either game developers don’t care about sound, AI, and physics, or they are hardware-bound, right?
The physics engines aren’t that advanced yet; there’s little reason to add more complexity to them, since having more stuff flying around on screen increases the graphics load.
The vast majority of PC gamers don’t have cards better than lower mid-range. Sure, there’s no end of e-peen-waggling guys with their $1000 multi-card setups, but they make up less than 1% of the market.
So, as before, for games most people have way more than enough CPU; the limiting factor is still the GPU.
If the rumors are true and AMD’s Llano APU really is a quad-core with the equivalent of an HD 57*0-grade GPU, then maybe we’ll start seeing some changes, if that becomes the minimum expected spec.
I disagree. Nvidia’s PhysX is mostly used for stuff flying around on the screen, but physics that is integrated into the gameplay mostly changes how objects in the game world behave. There are videos of Crysis where they get less than 1 FPS because of all the physics calculations. I find games like World of Goo fascinating because the physics is part of the gameplay.
What Crysis demo would that be? Just a random expo tech demo of the Crytek engine, like everyone else’s, where they show a ridiculous number of simple objects on screen, like an unending downpour of rubber ducks into a tub of water?
In real games, if you want more complexity it’ll cost a lot more dev time (read: money and delays) than shining up the graphics yet again, and they still won’t do anything that’ll max out the CPU, since they aren’t going to fine-grain it to the point of, say, a military-grade flight sim.
This is why there is so little change in what’s going on in games, yet every 6 months there’s a new game that polishes the turd until it turns $500 GPUs into a slideshow.
Yes, it’s shitty, but for some reason the kids want the games to look shiny even if the game is as boring as solitaire.
Most people are going to hit the limitations of their GPU before the limitations of the CPU. To make the CPU the limiting factor in, say, an FPS game, you’d take a GTX 580 and run the game at minimum settings at 1024×768; you’d get hundreds of frames per second of low-quality graphics, which are ultimately useless since your LCD can only output 60 fps.
Like what most users do when they run a modern DE?
Yes! However, most apps don’t really do much most of the time. Usually, you focus on one app, do something there, and expect an immediate answer. It’s good to have another core if some other app decides it needs CPU love at the same time, but this only helps a little. These days, it’s mostly I/O that screws up the desktop experience.
It depends which OS you use. Like you said, most users have setups where they see little benefit from more than 2 cores, but for a number of Linux-heads who tune their systems, or people like me using Haiku, the gains are there.
I paid more for my Toshiba netbook than for a number of other models out there because I get 8-10 hours of use on battery, and even that can be too short at times.
An easy-to-carry machine that can run 24+ hours on a single charge is worth it to me and to others.
I would kill for a reasonably powerful *nix box that’ll run 24 hours on a charge.
When will these ARM-based laptops running Ubuntu actually be available?
We’ve been hearing about them for years, but all I’ve found actually available is extremely low-end stuff running WinCE, and one from Toshiba which runs an old version of Android (which just isn’t designed to be used with a mouse and keyboard)…
Seriously, show me where to buy these Nufront boxes and I’ll order one right now.
Tegra 1 has 1 core
Tegra 2 has 2 cores
Tegra 3 has 4 cores
…
So will Tegra 11 have 1K cores?
pica
Yeap:
http://www.osnews.com/story/24062/What_Will_Power_Computing_for_the…
Kochise
I would think that at some point the CUDA cores and ARM cores would kind of merge. That way you would either:
– have an ARM core and multiple CUDA cores in one unit, with Tegra 11 having maybe 100 such units, or
– have special cores that do the work of both ARM cores and CUDA cores.