If there was any doubt that CPU maker AMD’s principal reason for acquiring graphics chip powerhouse ATI was to build a mobile computing platform that would rival Intel’s Centrino, it was erased this morning with AMD’s announcement of a platform project it’s currently calling ‘Fusion’.
Is this not what we had before the GPU? A regular CPU that was the GPU.
Fashion/history repeats itself.
But I guess you always had some extra chip to do the low-level signalling and synchronization on the output port – DVI / HDMI these days, NTSC / PAL previously.
No, the CPU used to render the graphics; now there's going to be a graphics processor on board with the CPU, meaning specialised hardware for specialised tasks, connected to the CPU. This can be applied to more than just graphics as well.
> Is this not what we had before the GPU? A regular CPU that was the GPU.
Not quite.
Even with SIMD instructions, the CPU still can't match the GPU's power at doing the same calculations on large amounts of data. However, GPUs handle branching very poorly. The combination will allow better communication between the two (most likely in the form of coprocessors) and allow each to handle the type of processing it was optimized for.
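To make the CPU-vs-GPU split concrete, here's a minimal C sketch (my own illustration, not anything from AMD's announcement): the first loop is uniform, branch-free work that maps well onto SIMD units or a GPU, while the second is data-dependent branching that a conventional CPU core handles far better.

```c
#include <stdio.h>

#define N 8

/* Data-parallel work: the same operation on every element, no branching.
 * This is the kind of loop a GPU (or the CPU's SIMD units) chews through
 * efficiently, because every lane executes the same instruction. */
static void scale_add(const float *a, const float *b, float *out, float k)
{
    for (int i = 0; i < N; i++)
        out[i] = k * a[i] + b[i];
}

/* Branch-heavy work: the path taken depends on each value, so lanes would
 * diverge on a wide SIMD/GPU-style machine and much of the hardware would
 * sit idle. A conventional CPU with branch prediction handles this better. */
static int count_special(const int *v)
{
    int count = 0;
    for (int i = 0; i < N; i++) {
        if (v[i] % 3 == 0)
            count += v[i];
        else if (v[i] < 0)
            count -= 1;
    }
    return count;
}

int main(void)
{
    float a[N] = {1, 2, 3, 4, 5, 6, 7, 8}, b[N] = {0}, out[N];
    int   v[N] = {3, -1, 7, 9, -4, 12, 5, 6};

    scale_add(a, b, out, 2.0f);
    printf("out[3] = %.1f, special = %d\n", out[3], count_special(v));
    return 0;
}
```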
No, this is what we had back when Cyrix released the MediaGX chip: a CPU, a GPU, a soundcard, all on one die, in one socket on the motherboard.
Only now, we're going to have a modern, fast CPU with a modern, fast GPU on one die, in one socket on the motherboard. Something that hopefully won't suck nearly as much as the MediaGX did when it was released.
The AMD-ATI merger will force Intel to improve its graphics solution, and Nvidia making a CPU will force ATI and Intel into some much-needed competition.
I just wonder why Nvidia doesn't go after VIA (CPU + chipset).
Let's hope they will be willing to fight on price too.
Nvidia do appear to be set to make their own CPU already. There was an article somewhere, Slashdot maybe, about how they would most likely make an x86 processor, but they may have some hurdles in front of them called Intel and AMD.
“they may have some hurdles in front of them called Intel and AMD.”
That's why I mentioned VIA; it's not so much the technology that will be the biggest hurdle IMO, but the patents.
Where AMD-ATI will have an edge is with the Imageon chipset and Radeon Xpress; a mix of all of those with a CPU could bring a Centrino killer.
Imagine a chipset that does videoconferencing, picture taking, and TV reception, with decent PC-level graphics and a fast, efficient, power-saving CPU, all integrated.
Can't wait to see what Fusion will be.
That’s what it’s all about, one chip.
I seem to recall a HORRID Cyrix chip that did this several years ago, the MediaGX. The system performance dropped to its knees under any sort of video load whatsoever. Will this be any better?
Cyrix just sucked. Not much more commentary is necessary than that.
Its descendant, AMD’s Geode, is still with us today doing its thing in the embedded space.
Combining the two could be interesting for laptops, but I don't think it will catch on for performance. The graphics portion would presumably share system memory, instead of using GDDR3/GDDR4 like today's discrete cards.
Well, for some obscure reason Cyrix chose to use the 5x86 core (a downsized 6x86) for the MediaGX.
It wasn't especially powerful, but not as bad as you describe compared to its competitors, which were the cheap i486s and later the WinChip (funnily enough, VIA later bought the WinChip design, and Centaur still develops it under the C5/C7 name).
National Semiconductor sold most of Cyrix to VIA, but they held on to the MediaGX for the WebPAD (what Microsoft later reinvented under the Tablet name). Later the MediaGX was sold to AMD, which used it for the Geode line.
But most of the Geode products today use a modified K7 core.
Yeah, but having a powerful GPU and CPU integrated into one die concentrates more heat. I bet these chips will need liquid cooling units or massive fans with radiator heat sinks.
> Yeah but having a powerful GPU and CPU integrated into
> one die centralizes more heat. I bet these chips will
> need liquid cooling units or massive fans with radiator
> heat sinks.
AMD will probably not use the fastest generation of both CPU and GPU together. My guess is that they will build low-power versions of both and use the combo for laptops.
It seems most of the readers here misunderstood the point of the article: they are talking about taking parallelism technology used in GPUs as an example and trying to apply it to CPUs. As far as I understood, they don’t actually mean to create a CPU with integrated GPU.
If you read carefully, it clearly states that instead of multiplying logical cores inside the CPU, they will put in a GPU unit: “In other words, AMD’s future “Fusion” platform, won’t just be an Athlon and a Radeon sharing the same silicon.”
If the article is right, this could be very interesting. CPUs and GPUs are both very fast in their respective fields of expertise, and any chip that includes both techniques and is able to determine at run time which code segments to pipe into which processing unit, while maintaining code integrity, will probably benefit hugely.
There are problems, however: heat has already been mentioned here; the potential switch to more advanced programming techniques, e.g. functional programming, could require a huge financial investment; and finally, any chip that is able to determine what to do with a given code segment probably has to look at the code before executing it, which sounds like double work. Nonetheless, it is interesting.
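As a thought experiment (purely hypothetical names and heuristics, not anything AMD has described), a run-time dispatcher for such a chip might look roughly like this in C: it inspects a small descriptor of a code segment and routes branch-free, data-heavy work to the GPU-like unit and everything else to the conventional CPU cores.

```c
#include <stdio.h>
#include <stddef.h>

/* Hypothetical descriptor for a piece of work; nothing here reflects an
 * actual AMD interface, it just illustrates the run-time decision the
 * comment above speculates about. */
struct kernel_desc {
    const char *name;
    size_t      elements;      /* how much data the segment touches       */
    int         branch_heavy;  /* 1 if its control flow is data-dependent */
};

enum unit { UNIT_CPU, UNIT_GPU };

/* Toy heuristic: large, branch-free, data-parallel work goes to the
 * GPU-like unit; small or branchy work stays on the CPU cores. */
static enum unit pick_unit(const struct kernel_desc *k)
{
    if (!k->branch_heavy && k->elements >= 4096)
        return UNIT_GPU;
    return UNIT_CPU;
}

int main(void)
{
    struct kernel_desc jobs[] = {
        { "vertex_transform", 1 << 20, 0 },
        { "parse_input",      512,     1 },
    };

    for (size_t i = 0; i < sizeof jobs / sizeof jobs[0]; i++)
        printf("%-16s -> %s\n", jobs[i].name,
               pick_unit(&jobs[i]) == UNIT_GPU ? "GPU unit" : "CPU cores");
    return 0;
}
```

Even this toy version shows the "double work" worry: something has to characterise each code segment (here, the branch_heavy flag and element count) before any of it actually runs.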
One thing which I am concerned about is whether or not AMD will make *nix drivers a priority. I’ve had so many problems with ATI graphics chips that I am very wary of building or buying a machine with one. If they don’t, then I’ll simply have to steer clear of AMD CPUs. And that kind of sucks.
With Intel's Conroe and Woodcrest leading the spec.org pack, the quad-core QX6700 out now, and K8L still a long way off, it seems that Intel simply has to put the memory controller on-chip and AMD is more or less obsolete.
Intel had the money and the marketing engine to stay profitable and alive while AMD had the lead. The reverse will NOT be true.
Also, ATI’s video drivers are bad, so I don’t use their products either. Seems that AMD and ATI are destined to screw up.
AMD: Focus on beating Conroe. The E8000 is a 1333 MHz FSB, 3.33 GHz dual-core due out soon. Beat it. Or die.
I mean, a long time ago (the Pentium MMX) CPUs got SIMD instructions, and they were designed for graphics operations (like saturating add, e.g. 200 + 200 = 255).
Having 8 cores with only 2 threads running won't be any faster than only having 2 cores. BUT if somebody can figure out how to effectively determine that some code is a loop that can be SIMD-ed, having 8 cores would be a lot faster.
However, I think that only the compiler is able to “effectively” determine what can be SIMD-ed, not the CPU.
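For reference, that "add + clamp" operation really does exist as a single SIMD instruction. Here's a minimal, self-contained C sketch using the SSE2 intrinsic _mm_adds_epu8 (MMX has the equivalent PADDUSB), showing 200 + 200 saturating to 255 across sixteen byte lanes; compile with something like gcc -msse2 on x86.

```c
#include <stdio.h>
#include <emmintrin.h>   /* SSE2 intrinsics */

int main(void)
{
    unsigned char out[16];

    __m128i a = _mm_set1_epi8((char)200);   /* sixteen lanes of 200 */
    __m128i b = _mm_set1_epi8((char)200);

    /* Saturating unsigned add: 200 + 200 clamps to 255 in every lane,
     * exactly the behaviour that is useful for pixel arithmetic. */
    __m128i sum = _mm_adds_epu8(a, b);

    _mm_storeu_si128((__m128i *)out, sum);
    printf("200 + 200 (saturated) = %u\n", (unsigned)out[0]);   /* 255 */
    return 0;
}
```

Whether the compiler or the hardware spots loops that can use this automatically is exactly the open question above; today it is mostly the compiler's auto-vectoriser, or the programmer via intrinsics like these.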
OK, VIA and Nvidia have totally different goals in mind. VIA is for embedded stuff or small, low-power, low-heat computers, not powerhouse gaming machines. I know one of my little toys runs off a C7; it's nice, but without customizing Windows down to minimal RAM usage it would be no good for real games. Also, VIA has everything it needs to be a complete platform: its own network stuff, sound, and video (S3), among its other components.