AMD claims that the microarchitectural improvements in Jaguar will yield 15% higher IPC and 10% higher clock frequency, versus the previous generation Bobcat. Given the comprehensive changes to the microarchitecture, shown in Figure 7, and the additional pipeline stages, this is a plausible claim. Jaguar is a much better fit than Bobcat for SoCs, given the shift to AVX compatibility, wider data paths, and the inclusive L2 cache, which simplifies system architecture considerably.
Some impressive in-depth reporting on the architecture which powers, among other things, the current generation of game consoles.
So the current gen consoles run on AMD chips? No wonder they’re so underpowered
I think you need to “check yo self fool!”
But seriously, the current generation of systems is not underpowered at all, judging by some of the benchmarks. In fact, in spite of the limitation on RAM, the systems seem to be pacing pretty well. I'm mainly talking about the PS4, which has a much better graphics system than the Xbox One; the extra stream processors help quite a bit compared to the Xbox One's count.
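As a rough back-of-the-envelope illustration of that gap (using the commonly reported shader counts and clocks for each console; treat the exact figures as approximate, not official):

# Rough single-precision throughput estimate for the two console GPUs,
# using commonly cited shader counts and clocks (approximate figures).
def gflops(shaders, clock_ghz, ops_per_cycle=2):  # 2 ops = fused multiply-add
    return shaders * clock_ghz * ops_per_cycle

ps4 = gflops(1152, 0.800)   # ~1843 GFLOPS
xbo = gflops(768, 0.853)    # ~1310 GFLOPS
print(f"PS4 ~{ps4:.0f} GFLOPS, Xbox One ~{xbo:.0f} GFLOPS ({ps4 / xbo:.2f}x)")

The exact numbers matter less than the ratio, which is where those extra stream processors show up.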
Why the downvotes? You're right. Compared to an average gaming PC, the specs of the "next-gen" consoles are nothing to write home about. They can't even do full 1080p in the more graphics-intensive games (BF4, for example). It's the first generation we can mostly compare to normal PCs, since they're just x86 chips with mid-range AMD graphics and a few nifty tricks thrown in (shared memory, or the SRAM in the Xbox One).
The insinuation wasn't that the hardware is underpowered relative to an 'average gaming PC' but that AMD being the chip maker was the reason.
Exactly, which is ironic, as the benchmarks have been rigged for quite a while now. Look up "Intel cripple compiler" to see how any benchmark that doesn't list what both it and its components were compiled with is about as trustworthy as the old GeForce FX benchmarks. Remember Quack.exe?
As for the Jaguar? If it's even better than the Bobcat, it's a real winner. I have been building low-power media tanks with the E-350 Bobcat boards and it does 1080p over HDMI, has no problem doing basic web and office work (it's a GREAT way to upgrade a Pentium 4 system on the cheap: less than $100 USD can turn an old power-piggie P4 into a power-sipping dual core), and as far as gaming? Just look up "gaming on E350" on YouTube for videos of everything from Crysis 2 to the Portal series running just fine on the Bobcat, with lowered graphical detail of course; it was made for low-power HD, not gaming.
As for comparing it to a modern PC? No contest, I'd take my AMD hexacore any day over a Jaguar, but you have to remember that with consoles it's a trade-off: most home users would balk at a console that used 95 W on the CPU and 105 W on the GPU like my desktop does, not to mention that fitting enough cooling into a small console case would be problematic. But any way you slice it, that Jag looks to be a great chip, and I can't wait until the Jaguar duals and quads hit the channel in earnest. So far the only one we've seen here in the States is a Jaguar quad motherboard for $150, but it sells out as fast as they can get 'em.
Plus, gaming nowadays seems to be mostly GPU-intensive, not CPU-intensive(?)
Here is a Wikipedia article on the PS4 specs: http://en.wikipedia.org/wiki/PlayStation_4_technical_specifications
From that article it looks like the PS4 uses a combination of a Jaguar APU (combined CPU/GPU) and a separate dedicated GPU that is a custom Radeon 7870. The APU isn't on the market, but it would be a $150+ chip, and the GPU is probably $250, so that's $400 just from those two components.
Given that's roughly how much the PS4 costs, I can see why it wouldn't keep up with a $1,500 gaming rig, but because it's so specialized it would probably outperform mostly the same hardware in a PC.
It still runs circles around the PS3 and we are getting to a point of diminishing returns of just pushing more polygons anyway: http://i.imgur.com/GE9vfR6.jpg
When I look at screenshots from a game like Crysis 1 that came out in 2007 they still look decent to me.
A complete 7870 card at retail costs only €150.
I'd guess the chip costs at most €30.
That GDDR5 RAM kills that assessment.
And we still have no reliable API that is equivalent to the PS4's low-level API. (Mantle isn't finished yet.)
So not only does the PS4 have better RAM than is available to a PC build (AMD APUs love RAM!), it should also be able to push more data, compute more, and render more than under Win/Lin/OSX.
PS: ... but I will go Lin + AMD APU next time I go shopping anyway.
Then add another €80 for the RAM (GDDR5 is expensive, but it's not that expensive).
You know, GDDR5 is not superior to DDR3.
They both have their weaknesses and strengths.
There is a reason why we use DDR for the CPU and GDDR for the GPU.
Yes, I know. Latency vs. bandwidth (to oversimplify).
And APUs love bandwidth.
However, we cannot test any other AMD APU with GDDR5... so the PS4 may out- or under-perform compared to the exact same APU chained to DDR3.
(If I'm wrong and such tests exist, by the way, I would kill for the link.)
Also, since it's GDDR5 RAM, AMD could build its controller from the ground up, offering features not present in current-gen APUs on PC (like full hardware memory coherency between the CPU and GPU).
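To put rough numbers on that latency-vs-bandwidth trade-off, here's a back-of-the-envelope peak-bandwidth comparison; it assumes the commonly quoted PS4 configuration (256-bit GDDR5 at 5500 MT/s) against a typical dual-channel DDR3-1866 desktop, so take it as an approximation, not a measurement:

# Theoretical peak memory bandwidth: bus width (bits) x transfer rate (MT/s).
def peak_gb_per_s(bus_width_bits, transfer_rate_mts):
    return bus_width_bits / 8 * transfer_rate_mts / 1000  # bytes/transfer -> GB/s

ps4_gddr5 = peak_gb_per_s(256, 5500)     # ~176 GB/s (assumed PS4 config)
pc_ddr3 = peak_gb_per_s(2 * 64, 1866)    # ~30 GB/s (dual-channel DDR3-1866)
print(f"PS4 GDDR5 ~{ps4_gddr5:.0f} GB/s vs dual-channel DDR3-1866 ~{pc_ddr3:.0f} GB/s")

That roughly 6x gap in peak bandwidth is why no DDR3-fed desktop APU is a fair stand-in for the PS4 chip.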
“we are getting to a point of diminishing returns of just pushing more polygons anyway”
Yes, if all you’re rendering is a single character then you are right. But who says the extra polygons have to be used on a single object?
Ultimatebadass,
Regardless of the number of objects you’d still get diminishing returns. Increasing polygon count results in smaller and smaller polygons that become ever-less visually significant. To what end should this go? At the extreme, one polygon per pixel would be pointless. Also, polygons become more expensive as their computational overhead becomes divided across fewer pixels.
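To make the diminishing-returns point concrete, here is a toy calculation of average screen coverage per triangle at 1080p; it assumes (unrealistically) that every triangle is visible and that coverage splits evenly:

# Average screen pixels covered per visible triangle at 1080p,
# for increasing triangle counts (purely illustrative).
frame_pixels = 1920 * 1080  # ~2.07 million pixels

for triangles in (10_000, 100_000, 1_000_000, 2_000_000, 10_000_000):
    print(f"{triangles:>10,} triangles -> ~{frame_pixels / triangles:6.2f} pixels each")

Somewhere around two million visible triangles you're already down to about one pixel per triangle, and beyond that the extra geometry simply can't show up on screen.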
I believe we are reaching the point where we might as well be ray tracing individual pixels instead of worrying about polygons at all. The problem with this is that *everything* in our existing toolchain is based on polygons, from hardware to software to the editors that produce the content. Nevertheless, I think the futility of pushing ever more polygons will help provide momentum towards raytracing.
Interesting blog post on this topic; apparently the Amiga had a ray tracing demo in 1986 (the animation was prerendered rather than realtime):
http://blog.codinghorror.com/real-time-raytracing/
http://home.comcast.net/~erniew/juggler.html
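For anyone wondering what "ray tracing individual pixels instead of worrying about polygons" looks like in practice, here's a minimal toy sketch of my own in the spirit of those old sphere demos (not the juggler's code, just an illustration): one sphere, one light, diffuse shading, and no triangle mesh anywhere.

# Toy ray tracer: one sphere, one light, Lambert shading, output as a PGM image.
import math

WIDTH, HEIGHT = 320, 240
SPHERE_CENTER, SPHERE_RADIUS = (0.0, 0.0, -3.0), 1.0
LIGHT_DIR = (0.577, 0.577, 0.577)  # unit vector from the surface toward the light

def ray_sphere(origin, direction):
    """Return the nearest positive hit distance along a unit-length ray, or None."""
    ox, oy, oz = (origin[i] - SPHERE_CENTER[i] for i in range(3))
    b = 2.0 * (ox * direction[0] + oy * direction[1] + oz * direction[2])
    c = ox * ox + oy * oy + oz * oz - SPHERE_RADIUS ** 2
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0.0 else None

with open("sphere.pgm", "w") as f:
    f.write(f"P2\n{WIDTH} {HEIGHT}\n255\n")
    for y in range(HEIGHT):
        for x in range(WIDTH):
            # Map the pixel to a ray through a simple pinhole camera at the origin.
            px = (2.0 * (x + 0.5) / WIDTH - 1.0) * WIDTH / HEIGHT
            py = 1.0 - 2.0 * (y + 0.5) / HEIGHT
            norm = math.sqrt(px * px + py * py + 1.0)
            d = (px / norm, py / norm, -1.0 / norm)
            t = ray_sphere((0.0, 0.0, 0.0), d)
            if t is None:
                f.write("20\n")  # background grey
                continue
            # Diffuse (Lambert) shading from the surface normal at the hit point.
            hit = tuple(t * d[i] for i in range(3))
            n = tuple((hit[i] - SPHERE_CENTER[i]) / SPHERE_RADIUS for i in range(3))
            lambert = max(0.0, sum(n[i] * LIGHT_DIR[i] for i in range(3)))
            f.write(f"{int(30 + 225 * lambert)}\n")

The whole scene is described analytically and every pixel is answered by an intersection test, which is exactly the part that doesn't map onto today's triangle-oriented toolchain.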
Well, polygon count was the '90s way of making wonders.
Now you need compute capabilities, little CPU involvement (it is still needed for things that require sequential computing), multitasking, virtualization, and so on.
In other words: yes, getting more polygons is not so important anymore.
Doing more with those polygons, though, is.
At the extreme end of things, yes, but I think we're still quite far from that point. I'm not saying polycount is the end-all of graphics, just that the more detail you can push into your scene the better. Take any current video game: you can easily spot objects where the artists had to limit their poly count drastically to get acceptable performance. Granted, the "up front" stuff will usually look great (like a gun in an FPS game), but other non-interactive props will (generally) be simplified. We have a long way to go before we reach that 1 pixel = 1 poly limit.
I do agree that realtime raytracing is the way graphics engines should progress (related: https://www.youtube.com/watch?v=BpT6MkCeP7Y pretty impressive real-time raytracing). However, even if we replace current lighting/rendering trickery with realtime raytracing engines, surely you will still need some way of describing objects in your scene? Handling more polys can go hand in hand with those awesome lighting techniques.
PS: I'm aware of NURBS-based objects, but from my Maya days I remember them being a royal PITA to use ("patch modeling" *shudder*), and on current hardware they still get converted to triangles internally anyway.
OT: In 1986 I was still shitting in diapers, but I remember playing with some version of Real3D on my Amiga in the 1990s. The iconic 3D artist's "hello world" – shiny spheres on a checkerboard – took hours and hours to render just a single frame.
Ok, but to that I might quote Priest: “we are getting to a point of diminishing returns of just pushing more polygons anyway”
I guess it’s somewhat subjective. I think we’re close to the point where it hardly shows. Many games intentionally target older GPUs where polycount was still a factor, but with bleeding edge hardware and appropriate models to go along with it, I suspect many people wouldn’t notice another factor of 2 or 10 on top of that.
I believe GPUs are reaching a plateau with the current approach of rendering polygons, which is why I think we should move to raytracing as the next logical step.
I agree; given that today's models are mostly polygon-based, that's going to be very difficult to move away from.
Well, keep in mind they get converted to triangles because that's what polygon-based GPUs require. This triangular mesh would actually hinder a raytracer, whose output would be both better and faster without converting to triangles.
I've never been good at 3D modeling myself, but in the past I envisioned a sort of 3D VR system where you could model digital clay with your hands. The concept of polygons would be completely abstracted out. You could get all the benefits of 3D CAD software (zooming, rotating, showing/hiding layers, etc.), but the modeling would be done intuitively, the way a sculptor would do it in real life.
For that matter, the computer could scan in objects from real life such that you could get an arbitrarily high-quality representation without having to CAD-model it at all. It's been years since I've brushed up on this sort of technology; we may already have it.
Ironically, your link proves that more polygons is better. And you only reach the point of diminishing returns once you've passed a person's ability to discern between rendered images and real life.
I'll be pleased if AMD steps up their game. I haven't bought an AMD-based system for a while now, and I hate having a near Intel monopoly.
I have the Jaguar/Kabini E2-2100, which is a dual-core at 1 GHz with 128 GCN shaders @ 300 MHz. It's not a bad Linux setup on the OSS drivers for $30 for the chip and mobo. Video acceleration works pretty well; I haven't tried the VCE engine for on-GPU transcoding yet, though that is apparently also working on these chips in the OSS drivers.
I'll still probably end up selling it soon, though, to get a Socket AM1 ITX system and the Kabini-based Athlon 5350, as it's a 2 GHz quad version of the same with the GPU clocked at 600 MHz.
My only question, though, is how that will stack up against the Kaveri-based A4-7300, which will be on Socket FM2+, will have a dual-channel memory controller, and will have the option to upgrade to the A10 series chips.
To me it's the up-to-four cores and the socket decision that bring market appeal to those of us who live in developing countries.
This is a stamp-sized socket. A pretty one.
This kind of architecture could greatly benefit from external (socketed, too) L3 memory, speed-matched.