Over the past few years, there have been persistent rumours that NVIDIA, the graphics chip maker, was working on an x86 chip to compete with Intel and AMD. Recently, these rumours gained some traction, but NVIDIA’s CEO just shot them down, and denied the company will enter the x86 processor market.
Both Intel and AMD are working on integrating the graphics chip with the processor, which could seriously hurt NVIDIA. Intel makes its own graphics chips, and AMD of course has ATI to draw from. A logical conclusion would be for NVIDIA to start offering its own x86 processor.
Further fuelling the rumours of NVIDIA entering the x86 processor market is the news that the company has been hiring engineers from Transmeta, a company which used to make power-efficient x86 processors. It’s also been suggested a number of times that NVIDIA might buy VIA, the other x86 processor maker.
When asked about the rumours, NVIDIA CEO Jen-Hsun Huang was clear. “No,” he said when asked if the rumours had any basis in reality, “NVIDIA’s strategy is very, very clear. I’m very straightforward about it. Right now, more than ever, we have to focus on visual and parallel computing.”
Instead of releasing its own x86 processor, NVIDIA will focus on getting its GPUs into as many different device types as possible. “GPUs in servers for parallel computing, for supercomputing – and cloud computing with our GPU is a fabulous growth opportunity – and streaming video,” Huang said, “And also getting our GPUs into the lowest power platforms we can imagine and driving mobile computing with it.”
On November 5, NVIDIA posted strong financial results for the third quarter of its fiscal year.
I bet they will be dead soon.
GPUs are not that interesting for computing; they only help with very specific tasks, and even there nothing is so great that it would give them a solid advantage.
@Phocean
“GPUs are not that interesting for computing; they only help with very specific tasks, and even there nothing is so great that it would give them a solid advantage.”
Maybe on Windows PCs.
For OS X and some instances of Linux, the OS takes full advantage of the GPU, leaving the processor free for more demanding tasks. In the end, it can and DOES make a system faster when properly integrated into the OS.
Windows has damn good GUI acceleration. The Desktop Window Manager, which is responsible for accelerating the Windows 7 GUI, is taking up 26 MB of RAM according to Task Manager; Skype is using 28 MB.
Windows 7 has pretty good GUI acceleration which doesn’t eat up my memory and is damn responsive on this integrated graphics card.
Anyhow, where does this myth come from that you need a graphics card accelerating the GUI or performance degrades horribly? Unless your app is using software 3D rendering or relies heavily on the graphics card for additional processing (Adobe CS4), any CPU from the last 6 years can drive at least a 1280×1024 desktop without any application degradation that you are likely to notice.
Windows, OS X, KDE4 and recently, I believe, also GNOME desktops use 2D graphics acceleration via the GPU.
I have an ATI HD2400 graphics card (a very low end card), but even this very modest and inexpensive graphics card speeds up the KDE4 desktop quite a bit through 2D acceleration. I am using the radeon open source graphics driver, which doesn’t have 3D functionality as yet, and it won’t have until Linux kernel 2.6.32 comes out.
PS: When kernel 2.6.32 comes out, since ATI GPUs are far faster than Intel GPUs, after a little while ATI GPUs will become the best option for Linux. They will have open source drivers integrated with the kernel, like Intel GPUs, but unlike Intel they will also have performance on par with nVidia GPUs.
Anyway … the speed of desktop graphics is nicely enhanced by 2D hardware GPU acceleration when it comes to operations such as scrolling or resizing/moving windows, and also for “bling” enhancements like pop-up notifications, highlights, widget animations, transparency, fade-ins and fade-outs, and shadows.
Doesn’t Desktop Effects/Compiz require 3D acceleration, considering it uses Compositing support?
Compiz requires 3D acceleration. Compositing, however, does not necessarily. The compositing that comes with Metacity is 2D only. The compositing that comes with KDE4 can either be 2D only or use the 3D drivers.
Adam
You’re wrong. Vista (w/ Aero), OS X (Quartz Extreme) and KDE4 use 3D acceleration to enhance performance with the GUI and reduce CPU load. OS X was first in this regard starting with OS X 10.2 Jaguar. 2D acceleration has been used since the 80’s.
I dare you to find me a card made in the last 15 years that is a simple dumb framebuffer with no 2D acceleration. 2D acceleration helps, but without 3D acceleration, your card is crippled in my book. I will not use a card if I can’t get 3D acceleration and no, I am NOT a gamer.
I don’t know, even though the NVIDIA drivers are closed source, they perform pretty damn well. Every open source 3D driver I’ve used has been underperforming buggy crap compared to the closed source drivers. Even the older Radeon 9200 drivers sucked compared to the official drivers. I just wish they’d hurry up with the 64-bit FreeBSD drivers.
And they have been since the dawn of personal computing. Nothing new there.
Don’t think GUI acceleration, think OpenCL and parallel processing. A bad ass video card coupled with a nice quad-core CPU is like having a Cray on your desk.
The GPU ain’t going anywhere. Even my integrated shared memory 9400M in my Macbook is useful with OpenCL.
Hi,
While I mostly agree with what you’re saying, the GPU *is* going somewhere. If you look at both AMD and Intel roadmaps, the GPU is going in the same chip as the CPU. To start with it’ll be just low-end and/or low-power systems, but that’s just a start.
With the memory controller built into Intel and AMD’s CPUs now, there’s a major performance advantage to putting the GPU “on-chip” too; and if NVidia isn’t careful it could end up with no way to compete – too slow to compete with “on-chip GPUs” in high-end systems, too expensive for budget systems and taking up too much space on ultra-portable motherboards.
-Brendan
“… they only help with very specific tasks, and even there nothing is so great that it would give them a solid advantage”
Next time, please check the Nvidia CUDA website or gpgpu.org before making such a comment.
From my experience, for many problems a CUDA program running on a simple GTX 260 gives you a speedup factor of 80–100x over a 2x quad-core 5020 (using OpenMP), and that is even before doing the substantial tuning of coalesced memory access that makes huge differences in performance. If two orders of magnitude in performance does not count as a solid advantage, please tell Intel they can throw Larrabee in the garbage bin, and that they don’t need to compete with Nvidia in the GPGPU arena.
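To make the coalescing point above concrete, here is a minimal CUDA sketch (hypothetical kernels written for illustration, not the poster’s actual code) contrasting a coalesced access pattern, where consecutive threads touch consecutive elements, with a strided pattern that scatters each warp’s loads across memory and wastes bandwidth:

```cuda
#include <cuda_runtime.h>

// Coalesced: thread i reads element i, so each 32-thread warp loads one
// contiguous block of memory per instruction.
__global__ void scale_coalesced(const float* in, float* out, int n, float s) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i] * s;
}

// Strided: thread i reads element (i * stride) % n, so a warp's loads are
// scattered across many memory segments and effective bandwidth collapses.
__global__ void scale_strided(const float* in, float* out, int n, int stride, float s) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[(i * stride) % n] * s;
}

int main() {
    const int n = 1 << 24;                       // 16M floats
    float *in, *out;
    cudaMalloc(&in,  n * sizeof(float));
    cudaMalloc(&out, n * sizeof(float));
    cudaMemset(in, 0, n * sizeof(float));        // dummy data for the sketch

    dim3 block(256);
    dim3 grid((n + block.x - 1) / block.x);
    scale_coalesced<<<grid, block>>>(in, out, n, 2.0f);       // fast
    scale_strided  <<<grid, block>>>(in, out, n, 32, 2.0f);   // much slower
    cudaDeviceSynchronize();

    cudaFree(in);
    cudaFree(out);
    return 0;
}
```

On hardware of that era the strided version can easily run several times slower, which is why the tuning the poster mentions pays off so dramatically.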
In more than 20 years doing scientific computing I haven’t witnessed such a jump in performance from just plugging in a card and doing some recoding. Even if the learning curve is significant, you very quickly get programs running much faster than before. We have ported around 20 programs in the last 3 months and the impact has been dramatic. Many programs that required 4 to 8 2x quad-core nodes (MPI+OpenMP) in a cluster to get results promptly are being replaced by their GPGPU counterparts, and this frees the cluster for the very long simulations.
So Nvidia effectively has a substantial advantage here, at least until Larrabee comes along to reshape the market.
The big missing feature in Nvidia’s line is support for ECC memory, but that is going to be addressed in the next generation of GPGPUs coming next year.
Read up on NVidia Tegra and then let’s talk about “dead soon” again. The PC gaming market is not the one with the highest revenue; there are markets where you can literally sell billions of devices per year.
nvidia has to do something to compete in the PC gaming market. AMD and Intel will soon have graphics integrated into their chips, and I suspect it’ll be a hard sell for nvidia to convince consumers to purchase a dedicated card, a bit like the dedicated physics cards.
It’s not as though ATI graphics aren’t competitive with nvidia’s either.
I’m not all that confident of nvidia’s future.
I don’t know… I’m not really a fan of onboard-anything. I’d rather get specialized, replaceable PCI cards for added functionality than put yet *another* task on the CPU and eat into my main memory. I’m more into a piece of hardware doing one thing, not being weighed down by unnecessary burdens. Not to mention that I always liked nVidia products, and their drivers were always excellent (though it would be nice if they were open source for Linux and the BSDs).
And yes, I realize I’m just one person… but I’m one person who would not hesitate to get an nVidia GPU in my next machine (along with dedicated sound, ethernet, wi-fi, etc. cards) instead of going with the default Intel integrated video. I’ve done it before, I’ll do it again.
The first time you see an AMD or Intel CPU with integrated graphics beat the pants off of an add-in card, I’d bet you’ll be convinced.
If not, you are probably a member of a very tiny (and shrinking) minority of the enthusiast market that still upgrades its computer part by part over time.
I used to be that guy, but gave it up a while ago, because of the staggering pace of new tech. And the fact that if you time your upgrades to about a year and a half or so after the consoles refresh, you don’t really need to stay on the treadmill.
Only under very specific circumstances. And I can’t see this happening for a hell of a long time. As integrated graphics are now, they’re still too weak… but the biggest problem is that I’m just not a fan of the whole “shared system memory” thing as I said. Plus we’re only at dual-cores so far, and that’s not near enough for me to start embracing onboard components. [Don’t bother mentioning quad-core… yes, I know they’re out, but those are still priced high last I checked.]
Somewhat, but not quite. I just believe in the philosophy of a hardware component being designed to do one thing well, and other components (with their own dedicated memory and onboard processors) to do their own specific tasks well, offloading the extra burden on each other. Maybe eight cores will solve this (maybe…), but at the same time I expect operating systems and software in general to start making better use of multiple cores as well, which will–again–make it better to have separate, distinct, specialized processors for video and audio processing.
As it is, I will probably hold this opinion for several years to come. Having the absolute most powerful GPU is not my focus; at least not any more. I realized raw graphical power was a pointless waste of money years ago, and instead go for something that has plenty of power for what I need, and then some (but not quite top-of-the-line). My focus is getting the best use out of my processing power… which means not offloading everything under the sun onto the CPU, so that I can have it actually process what I want: the programs. I don’t want graphics and audio to get in the way of that.
You should look at the i5 CPUs. They’re very good value and pack a performance punch above some i7 CPUs.
They start at $100, these days.
http://www.newegg.com/Product/ProductList.aspx?Submit=ENE&N=2010340…
(the a tag works for preview, but not post, so…)
When it comes to dedicated graphics, I’ve favored nVidia for quite a while. OTOH, I opted to go with integrated graphics with my last laptop. The reason is that the integrated card is better than my old dedicated card from 4-5 years prior, and can do anything but gaming. My priorities have also changed over time, since I now rarely play new games, and prefer open source drivers and better battery life. For me, a discrete graphics card was actually undesirable, although my reasoning only really applies to laptops.
That said, I think nVidia has some potential in the x86 market for gamers. With most games, there is no benefit for having a CPU beyond the recommended specification, but the visual experience scales up almost linearly with graphics card power. So I see a market for PCs built completely around graphics performance, with the CPU being basically an afterthought. Kinda the opposite of having a fast CPU and integrated graphics, which is better/cheaper for general computing.
You’re one person indeed. But 95% of the market will make do with anything that’s cheap and works. You just gave the best examples yourself: sound (and you can add LAN, IDE/SATA/USB/other port controllers, etc.). There isn’t a need for dedicated HW expansion cards for 98% of the people. Integrated functionality is way cheaper, consumes much less space and power than an expansion card, and most importantly, it just works.
Currently NVidia is in the enthusiast/specialized markets only, and almost by definition that’s a relatively small share of the market. Integrated HW sells to the vast majority of the market.
As a relative expert, gamer and developer myself, I can say that I’m quite happy with all my recent systems, which had everything integrated except the GPU. Had there been a decent integrated GPU, I’d probably have bought it instead. It’s cheaper and it works.
So integrated GPUs have a huge current and potential market, IMO. Possibly less profitable per system, but overall it presents a very good growth opportunity for a company like NVidia. I wouldn’t just dump the concept or the potential market because some experts prefer discrete HW.
I think the trend is reversing for nvidia. Previously (currently) having an add-in card is a requirement for decent gaming performance, but going forward, once the shift to massive numbers of cores happens, having it all on the same chip will make more and more sense.
The add-in card I think will be just for the hard(er)core gamers.
imho AMD is totally irrelevant.
If Intel keeps doing crappy GPUs, there’s a (big) place for NVIDIA.
Until NVIDIA hardware has good open source drivers, there is a large market for AMD (and Intel). I try my best to stay away from anything NVIDIA, especially GPUs.
Right, Linux/BSDs have what 2% of the desktop market. Yeah, real huge numbers there…
The costs of developing a top-of-the-line CPU or GPU are staggering. A very low single-digit share of the overall market is certainly not the main hope for survival of any vendor trying to stay in business.
Really? Closed source or not, nvidia has the best drivers in Linux land.
And the people who check whether the driver is open before buying are a minority.
Consoles and piracy are taking down the high end.
With more games being designed as multiplats there is less of a need for pc gamers to upgrade.
There’s a degree of truth in that, I think, but it’s not the whole story. It feels as though the hardware is lasting longer, but that really depends on your usage profile.
I’m only just now upgrading from a Socket 754 Athlon64 3200+ and have been playing Crysis on lower settings, of course, and have been more or less “happy”, as I still got to play the game, just with less eye candy.
The game came out in 2007 and yet it is still used for benchmarks.
High-end games that push PC hardware sales used to come out more often.
No, because onboard graphics (integrated into a CPU) will never be as powerful as GPU cards, due to power distribution and heat dissipation problems. By placing the GPU on a card they can cool it better than if it were stuck inside the CPU die.
Of course things could be way better if there were a middle ground, because placing the GPU on an external card also introduces many problems.
What you say about physics was true because the prospective buyers of physics cards had very few titles to induce them to buy one. The graphics card has worked its way up into the PC ecosystem such that almost every game uses it.
I am sure Nvidia would never get into the CPU market. There isn’t any room; they’d have to license a socket or release motherboard/CPU combos (VIA has done both), etc.
An interesting question to ask Nvidia would be if their GPUs will become x86 compatible. That would go a long way to helping ease parallel computing development. I would love to hear that they are using some code morphing technology to make this a reality.
AMD recently released an OpenCL driver that runs on x86 processors. This allows any OpenCL code to run across GPUs and CPUs. An x86 driver for a GPU would be a game changer.
That’s not going to happen. In GPUs, if you have for instance a 16-wide SIMD unit, it will execute 16 identical scalar programs at the same time, with identical program flow but operating on different data. There can be hundreds of these units, and the programmer treats them just as if they were a single scalar unit. This does not map to x86 very well. Even Larrabee, which Intel touts as x86-based, does the heavy computing with such SIMD units, which have little in common with x86.
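As a concrete illustration of that “same program, different data” model, here is a minimal CUDA sketch (hypothetical, written for illustration rather than taken from any vendor material): every thread runs the same scalar-looking kernel, and the hardware groups threads into wide SIMD units (warps of 32 on NVIDIA) behind the programmer’s back.

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Each thread executes this same scalar-looking function on its own element;
// the hardware runs the threads in lockstep SIMD groups.
__global__ void saxpy(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // unique index per thread
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Host data
    float* hx = (float*)malloc(bytes);
    float* hy = (float*)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    // Device copies
    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    // One thread per element: the programmer writes scalar code,
    // the GPU maps it onto its wide SIMD units.
    saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, dx, dy);
    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);

    printf("y[0] = %f\n", hy[0]);   // expect 5.0

    cudaFree(dx); cudaFree(dy);
    free(hx); free(hy);
    return 0;
}
```

The point is that this only pays off when thousands of such threads exist at once, which is exactly why the model does not translate back onto a handful of general-purpose x86 cores.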
Running GPU code on CPUs is nothing new. OpenGL has always had software (CPU) implementations, and they were always painfully slow – the kind of code that can run on a GPU should run on a GPU.
The fusion of GPU and CPU will not happen anytime soon, but NVidia will have a very hard time competing with the upcoming integrated GPU+CPU chips in the mass x86 market. They will remain competitive in the high-end gaming, CAD and scientific niche markets.
GPU code “running” on a CPU has existed since the dawn of time. As for the GPU-to-x86 part, this just won’t, or can’t, happen. The architectures are just way too different.
I’m not sure about the “all integrated” vision of the future. It could be okay for laptops (which are pretty much in that situation anyway) and low-end desktops, but what’s the real point for mid-to-top-end desktops? The price will probably be the same, the performance will most likely be worse, and it will create new issues with heat dissipation and the like, and if it doesn’t, that will probably be because they cheaped out on the thing (hence worse performance).
This will just not cut it. The Nvidia plan, making every piece of software and its mother CUDA-enabled, could be a far more “intelligent” business plan. I know I wouldn’t take anything other than nvidia now, for both Linux integration and overall good drivers on every platform, AND CUDA (also on every platform).
By the by, you also have to consider that, seeing how the CPU/GPU market is warfare, you can bet Nvidia wouldn’t announce anything of the sort unless they could back it up hours later with a strong RTM plan. Any such endeavor would be denied and kept under the radar, to be thrown in the faces of AMD and Intel without warning. And since they already have good results, they don’t need it to keep shareholders on their toes.
…I don’t know anyone else that has genlock and framelock facilities in their GPUs. They are still the only option for multi-screen, multi-projector setups, and those are high-margin Quadro cards. Seems nVidia knows where its profits are.
The discrete PC graphics market is going to dwindle more and more. It’s just a matter of time before there won’t be enough high-end customers to be worth it. That, or they will need to scale back, and only focus on those.
However, mobile is only growing. Parallel computing, also, is only growing. Not only that, but there is a great deal of work that would be ideal for such processors if they were easier to program for and not so float-centric. nVidia happens to have some of the best support for that kind of work right now (Intel will, with Larrabee, but I won’t hold my breath). You don’t really need a 32-way 10 GHz x86 CPU to keep up that forum website of yours; what you need is for its database server to be able to utilize that highly parallel chip you’ve got there in that slot.
The thing with Transmeta is not that they had x86 CPUs. It’s that they had CPUs that ran x86 through software (in the sense that your CPU ‘runs’ Java), and on a much bigger process they had a chip about as good as an Atom a couple of years prior, with a fairly light transistor count – and that was with an integrated north bridge. They had bad luck and made bad decisions (going x86-only, when they weren’t x86 CPUs and had shown off how they could run other stuff, was probably the worst), but I think it’s safe to say that they had highly creative people, way ahead of the curve, designing those chips.
Competing with Intel, IBM, AMD, and/or ARM, on CPUs: bad idea.
Making GPGPUs better at things they normally suck at: good idea.
Making GPGPUs better at operations that aren’t traditionally number-crunching: good idea.
Making GPGPUs more efficient: good idea.
Making GPGPUs easier to program for, and creating better compilers: good idea.
…etc..
I’m sure nVidia could come out with a decent CPU design, but then they would need enough people to want to buy it, and they would have to avoid stepping on the wrong toes in the process. Intel (and IBM, too) have shown in the past that they can afford more lawyer-hours than you can.
But if they do make a CPU, and it works well, I’d get one.
Intel and AMD will come out with CPU-integrated GPUs, so nvidia should have an “alliance” with the CPU vendor which won’t: ARM.
I guess they’ve already noticed that the desktop PC market is shrinking, so they’ve made their choice and will start to produce GPUs for mobile computers, handheld devices, nettops, netbooks, etc. This is a growing market despite the financial crisis. And this is what the guy said.
Actually NVidia got their priorities right. The big market nowadays is handhelds and telephones; we’re talking volumes of billions of devices sold there. Even ARM, which just wants a few cents per phone, earns a fortune (I don’t see Intel, with their much higher costs and energy consumption, going anywhere there).
NVidia has its own ace in this area, the Tegra chipset; it stomps literally all existing graphics chipsets for handhelds. In a year’s time we will see telephones that can do full HD output thanks to it.
NVidia themselves say they expect Tegra to be their long-run cash cow, and the indicators are very good, while Intel still does not get it: they recently licensed the PowerVR-based Qualcomm core to get something in this area, while their own chipsets still smell like ***** . Heck, even Nintendo is jumping onto Tegra for their next handheld. We have yet to see the Tegra phones, but that is only because NVidia is still working on the drivers for WinCE and Android, and phone makers usually need about 6–12 months to integrate a new chipset into their designs.
(Things are slower than on the PC side with its plug and pray system)
Are they even allowed to enter the market? I thought Intel licensed the x86 to AMD and Cyrix only?