Released a year before its main competitor, the Xbox 360 was already claiming technological superiority against the yet-to-be-seen PlayStation 3. But while the Xbox 360 might be the flagship of the 7th generation, it will need to fight strongly once Nintendo and Sony take up retail space.
This new entry in the console architecture series will give you an additional perspective on how technology was envisioned during the early noughties, with emphasis on the emerging ‘multi-core’ processor and the unorthodox symbiosis between components, all of which enabled engineers to tackle unsolvable challenges with cost-effective solutions.
As with the other entries into the series, this is great weekend reading. Incredibly detailed, covering both hardware and software, the games, the development tools, and so much more. Excellent work.
Xbox 360 had some unique features that made it a big success. Honestly, I don’t remember any other period when Microsoft was that innovative. And for most of the console generation Sony had to play catch up, which was really interesting.
Not only did they start cheaper out of the gate, they did not actually make too many compromises. If I recall correctly, the 360 was the first console with unified shaders, and had interesting choices like FIFO data transfer from the CPU cache directly into the GPU, unified memory, eDRAM for anti-aliasing and post-processing, and of course constant innovation.
That generation was different. First console to have downloadable games, first one to have an indies platform (XBLIG) where almost anyone could program the device, first console to have multimedia streaming (Netflix), first console to accept generic USB drives as storage, first one to include “gamification” of the meta-game (achievements), all of which were transformative upgrades.
There was a huge difference between the early console, early UI, and early games, and what the platform had become by the end of the generation.
Unfortunately the story does not have a happy ending. They got greedy and started the Xbox One generation with many questionable choices and almost universal backlash. That erased a lot of goodwill, which they are still trying to rebuild.
And… here is an excerpt from their documentary on the “RROD” saga: https://www.youtube.com/watch?v=z2d6IMBS8oY
Microsoft went KISS and it paid off: they had a console ready a year earlier than Sony, and the simpler architecture meant it was easier to develop for. All the complexity in the PS3’s programming model didn’t add that much in terms of performance, so that design complexity turned out to be a really bad choice.
Microsoft made some bad decisions with the 360 too, like backing the wrong HD disc video format. And the reliability issues burned a lot of goodwill before the Xbox One came out.
javiercero1,
I think the Cell processor was very innovative and its raw performance absolutely dominated the Intel CPU used by the Xbox. Yet this novel design was also its downfall: it was too different, and most game development studios didn’t want to reinvent the wheel, they just wanted to hit rebuild and go. The Cell required major software adaptations to take advantage of the much higher specs, and when developers did the work, boy was it amazing. Running games designed for conventional hardware, though, was a weak point.
https://www.gtplanet.net/playstation-3-cell-more-powerful-modern-chips/
https://www.ign.com/articles/2010/08/26/xbox-360-vs-playstation-3-the-hardware-throwdown
Even today we still see tons of titles leaving CPU performance on the table because they don’t want to refactor their software loops to take advantage of more cores. Consequently we typically see a couple of cores pegged under load while the remaining cores sit almost idle. This may be changing slowly, but the industry has definitely dragged its feet on multithreading, despite the fact that adding more cores is the easiest and most effective way to scale up hardware.
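To make that concrete, here is a minimal sketch of the kind of refactor being described (my own illustration, not code from any engine or from the article; the Entity struct, the update logic, and the chunking scheme are all invented): a serial per-entity update loop split across the available hardware threads. Build with a C++11 compiler and -pthread:

```cpp
// Sketch only: split an embarrassingly parallel update loop across cores.
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <cstdio>
#include <thread>
#include <vector>

struct Entity { float x = 0.f, y = 0.f, vx = 1.f, vy = 1.f; };

// Per-entity work; independent of other entities, so it parallelizes cleanly.
void update_entity(Entity& e, float dt) {
    e.x += e.vx * dt;
    e.y += e.vy * dt;
    e.vx = static_cast<float>(std::cos(e.x)) * 0.99f;  // stand-in for real game logic
}

// Serial version: one core pegged, the rest idle.
void update_serial(std::vector<Entity>& ents, float dt) {
    for (auto& e : ents) update_entity(e, dt);
}

// Parallel version: carve the array into contiguous chunks, one per worker thread.
void update_parallel(std::vector<Entity>& ents, float dt, unsigned workers) {
    std::vector<std::thread> pool;
    const std::size_t chunk = (ents.size() + workers - 1) / workers;
    for (unsigned w = 0; w < workers; ++w) {
        const std::size_t begin = w * chunk;
        const std::size_t end   = std::min(ents.size(), begin + chunk);
        if (begin >= end) break;
        pool.emplace_back([&ents, dt, begin, end] {
            for (std::size_t i = begin; i < end; ++i) update_entity(ents[i], dt);
        });
    }
    for (auto& t : pool) t.join();
}

int main() {
    std::vector<Entity> ents(1000000);
    const unsigned hw = std::max(1u, std::thread::hardware_concurrency());
    update_serial(ents, 0.016f);        // the "couple of cores pegged" pattern
    update_parallel(ents, 0.016f, hw);  // the refactored, scalable pattern
    std::printf("updated %zu entities on %u threads\n", ents.size(), hw);
    return 0;
}
```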
The Xbox 360’s Xenon used the same PowerPC core as the Cell.
javiercero1,
Yes, but the original “xbox” was using an Intel P3. Did Sony know the Xbox 360 was going PPE when they were engineering the PS3? If not, their goal was to beat Intel’s CPUs. If the Cell processor is not the essence of your earlier complaint about the PS3, then what is?
@ Alfman
I doubt their goal was to “beat” Intel’s CPUs; the PPC core in both the Xbox 360 and the PS3 was an in-order, dual-issue part with significantly lower performance than a lot of the desktop Intel/AMD cores of the time. It was a tiny core. The goal was to target a specific price/performance envelope for the consoles of that era (around $500). Using an embedded core was the cheaper route, and neither AMD nor Intel were interested in the low-margin console business.
And yes, Sony knew that Microsoft was going to use a PPC part.
javiercero1,
What makes you say that? Granted, it would take work (i.e. money) to optimize around its unique design, but going by gigaflops it seems the PS3 had more potential to unlock than Intel’s CPUs. The issue is of course that publishers didn’t want to invest tons of money to make PS3-specific game engines, so most titles would never exploit the PS3’s real potential.
Could be, but do you have documentation to back that?
According to this chronological breakdown, the work on the Xbox 360’s chips started in 2002, and it was publicly announced & released in 2005.
https://web.archive.org/web/20080922212025/http://blogs.mercurynews.com/aei/2006/04/24/the_xbox_360_un/
But the PS3 began development in 2001, so if anything I’m skeptical that Sony knew what Microsoft was going to do when Microsoft themselves probably didn’t know. If there was any copying, the timeline suggests it would have been the other way around.
https://venturebeat.com/2013/06/30/a-frank-recounting-of-the-mistakes-sony-made-with-the-playstation-3/
Incidentally, the article does express that the PS3 was too complicated & different for the industry, which is something we all agree on. The performance was there, but it required a lot of PS3-specific optimization & design that most studios weren’t willing to commit to. Maybe Sony should have known better and foreseen that developers would not come around, but I still feel it was an innovative & powerful architecture nevertheless.
Everybody knows everybody in this industry, so Sony and Microsoft were each aware the other was using PPC. They were both using basically the same core from IBM, and each added a few customizations. It’s impossible to hide this when you’re working with the same supplier for your CPU.
I worked with the RSX team within NVIDIA. Sony screwed up with Cell because they expected to do most shader work on the Cell itself. The SPUs ended up being mainly a solution looking for a problem, so they were underutilized. Theoretical GFLOPS are useless if they don’t align with the common use case for the chip.
Microsoft didn’t waste time with exotic IP blocks in their silicon and trusted their GPU supplier to know better than them when it came to shaders.
Even if Sony found out eventually after Microsoft started, the timelines still suggest Sony started first and would not have known at the time what Microsoft was going to do in the future.
Those SPUs were strong for graphics, and the PS3 exclusives that used them tended to have better graphics than Xbox 360 exclusives. The problem, though, was that most developers were not using the PS3’s special processors at all because they were developing conventional multiplatform titles. Unfortunately that squandered the PS3’s sizable GFLOP advantage.
https://www.youtube.com/watch?v=6nG4YgtIYNA
I don’t blame the developers for their choices though, because from their perspective it was easier and more practical to build a multi-platform title by keeping the code generic and avoiding custom processors when possible. I think this is the main reason both Microsoft and Sony shifted to x86. It wasn’t so much because of technical advantages, but because it had become a pseudo-standard for practically all software developers, with much lower barriers to entry.
I have no idea what your point regarding the PPC really is.
The PS3 had better graphics regardless of Cell, by virtue of the RSX being a much stronger GPU than the Xbox’s Xenos.
I think by the end of its life developers were finally able to tap into the PS3’s potential. But perhaps it was too late. People love weird architectures and odd programming models… until they have to develop on them.
javiercero1,
That is the point: the PS3 delivered when software developers put in the work, but most studios would not put in the work because they were more interested in developing for portable engines. Those inevitably didn’t use the special processors, and that took away the PS3’s computational advantages.
Is x86 really any different? Haha. I think this is one of those cases where x86 got a lot of attention simply due to the MS & Intel monopolies rather than because it was pleasant to code for. Honestly, my owning an IBM clone was down to the monopoly of the platform more than the architecture’s merits. Heck, I had to learn about Atari and Commodore in hindsight, because they had virtually zero presence in my town unlike Windows, which was everywhere.
TeraScale had much higher potential, for example: the top TeraScale card is around 418% faster than the fastest NV47 in just about all tasks.
The last great console generation that had anything truly unique to set consoles apart. You had Xenon versus Cell, and each had its strengths and weaknesses compared to a PC.
These days, with the exception of the Switch (which is really just a juiced-up Nvidia Shield), you have “consoles” that are just DRM-locked PCs. Hell, you can even buy PS5 hardware from China in prebuilt PCs: they bought the chips whose GPU didn’t pass QA, bundle them with an RX 550, and sell the result as a media box.
bassbeast,
That is actually not entirely true. They are still innovating, but things are not as visible as before.
Yes, Cell and Xenon were ahead of their time. However, all those techniques became commonplace within about a generation.
The SPUs’ role is now filled by “compute shaders” and large vector units (AVX), and the 360’s unified memory model is now very common in tablets, phones, the Apple M1, and of course the Xbox Series as well.
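As a small, purely illustrative sketch of that mapping (my own example, not from the comment; the scale-and-add kernel and the array sizes are arbitrary assumptions), the kind of streaming vector math the SPUs were designed for looks like this today with AVX intrinsics on x86 (compile with -mavx):

```cpp
// Sketch only: SPU-style stream processing expressed as AVX SIMD on x86.
#include <cstddef>
#include <cstdio>
#include <immintrin.h>

// out[i] = a[i] * k + b[i], processed 8 float lanes at a time.
void scale_add(const float* a, const float* b, float* out, std::size_t n, float k) {
    const __m256 vk = _mm256_set1_ps(k);
    std::size_t i = 0;
    for (; i + 8 <= n; i += 8) {
        __m256 va = _mm256_loadu_ps(a + i);
        __m256 vb = _mm256_loadu_ps(b + i);
        _mm256_storeu_ps(out + i, _mm256_add_ps(_mm256_mul_ps(va, vk), vb));
    }
    for (; i < n; ++i) out[i] = a[i] * k + b[i];  // scalar tail for leftovers
}

int main() {
    float a[16], b[16], out[16];
    for (int i = 0; i < 16; ++i) { a[i] = static_cast<float>(i); b[i] = 1.0f; }
    scale_add(a, b, out, 16, 2.0f);
    std::printf("out[5] = %.1f\n", out[5]);  // 5*2 + 1 = 11.0
    return 0;
}
```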
This time around, they designed more for the cloud. The Xbox Scarlett processor is actually used in their datacenters for this purpose, and it has custom silicon for streaming and virtualization (it can virtualize four One S consoles at the same time):
https://www.eurogamer.net/digitalfoundry-2020-xbox-series-x-silicon-hot-chips-analysis
There are of course other small things, like the audio engines. Xbox has Dolby Atmos built in, while the PS5 has its own solution for simulated 3D audio on commodity stereo systems. Both use advanced NVMe setups to improve IO pipelines (which will come to Windows as DirectStorage), and Xbox has a “Quick Resume” feature to suspend up to 8(?) games onto disk and resume them in a very short time. They also have additional cores for “free” AutoHDR and other post-processing (which seems to be at least partially ML based).
Virtualization and cloud will probably be the next frontier of gaming. And the current consoles are leading the way.
sukru,
Virtualization and cloud are typically associated with data center applications. I immediately thought of Stadia when you said this… is that what you were thinking of? Or something else?
Alfman,
Yes, I would like to speak more about Stadia, but unfortunately I cannot speculate. And I could be biased in saying it offers the best technology in terms of cloud. But the business strategy clearly did not work.
(Ignoring that)…
Xbox this time really seems to have invested in a cloud-first design, and they famously mentioned Google and Amazon as their main competitors:
https://www.theverge.com/2020/2/5/21123956/microsoft-xbox-competitors-phil-spencer-cloud-gaming-amazon-google
(Again, ignoring the business side)…
It is really wasteful to have highly capable devices sit idle most of the time. This is why “renting” a gaming machine makes sense, especially now that we can get gigabit fiber connections at home. Couple that with “edge” servers hosted at your ISP, not on the other side of the world, and cloud gaming becomes really viable.
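A rough latency budget shows why edge placement matters; every number below is an assumption I picked for illustration, not a measurement from this thread:

```cpp
// Back-of-envelope sketch: input-to-photon latency for a streamed frame.
#include <cstdio>

int main() {
    // Assumed fixed costs per frame, in milliseconds.
    const double render_ms = 16.7;   // rendering one frame at 60 fps
    const double encode_ms = 5.0;    // hardware video encode on the server
    const double decode_ms = 5.0;    // decode + display on the client

    // Assumed network round-trip times.
    const double edge_rtt_ms    = 5.0;    // node hosted at the local ISP
    const double distant_rtt_ms = 60.0;   // node in a far-away region

    std::printf("edge node:    ~%.1f ms input-to-photon\n",
                render_ms + encode_ms + decode_ms + edge_rtt_ms);
    std::printf("distant node: ~%.1f ms input-to-photon\n",
                render_ms + encode_ms + decode_ms + distant_rtt_ms);
    return 0;
}
```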
A modern 3080 will use 116W at idle, and costs $700 at retail. It is a huge upfront cost, and about $200 in additional electricity costs over a year. (This is assuming you already have an otherwise capable PC in the first place).
Or, you can rent the same card from Nvidia for $100 per 6 months:
https://www.nvidia.com/en-us/geforce-now/memberships/ . Which means you get the card essentially for free.
sukru,
I don’t even have a tenth of a gigabit at home! You’re right that usage factors are pretty low, although I think most people would be running them at similar times, and you have to provision capacity for that fact.
At least if the graphics servers were highly local, the service would not displace other valuable internet backbone capacity, but that would also require ISPs to buy new property and/or rent new buildings, since it’s unlikely their existing operations center(s) would have enough facilities already. I think it would be a very expensive proposition. It would probably work best in urban/city centers with high population density, but in other areas it’s hard enough to get good internet service period, much less good service plus a high-end field operations center, haha.
Are you talking about the whole system or just the 3080? I have a 3080ti and according to the UPS my whole system is running at 126W right now including my monitor. I don’t have a rig to measure the GPU alone, but this review suggests it’s at 16W idle and even streaming it stays below 30W.
https://www.techpowerup.com/review/nvidia-geforce-rtx-3080-ti-founders-edition/34.html
At $1200/year I’d personally prefer to own rather than rent; ownership pays for itself quickly. It would have to be much less than $100/mo for me to consider renting a good value proposition. Also, you may have to deal with session queues/limits depending on the resources available on their end. If no GPUs are available in your class then you have to wait in line.
https://www.reddit.com/r/GeForceNOW/comments/dww1in/any_reason_why_queuing_times_are_so_long_now/
To be fair I haven’t tried their service, and they may over-provision it beyond expected demand for any given day so that, at least ideally, you never have to wait to use it, but I would expect increased wait times around holidays especially. The bigger problem with rental services for me is that you can’t just run your software normally; you are constrained to a curated catalog. I looked for productivity apps like Blender and Unreal Studio, for example, and those are not available. I even tried looking up game & tech demos that someone might want to try at home, and those aren’t available either. These things are arbitrary and may not be a deal breaker, but it does highlight how rental services are limited compared to the real deal.
I can respect that you have a different opinion though and just because it’s not for me doesn’t mean it doesn’t work for other people.
Alfman,
It is $100/6 months, or $200/year.
I am not sure what the public terminology is for this, but there are some kinds of “edge” nodes that are used in “peering”. Meaning, some servers are placed by, say, Google inside ISPs to improve operation speeds:
https://peering.google.com/#/options/google-global-cache
I am not 100% sure, but I believe no money changes hands; don’t quote me on it.
Microsoft has something similar called “Connected Cache Program”:
https://peering.azurewebsites.net/peering/Caching
And I am also not sure which kind of servers they are going to host there. It could be just static content and load balancers, while the more proprietary hardware sits in a “mini data center” not owned by the ISP (speculating).
I looked that number up on Google; it was possibly a misreference to total system load instead of the GPU alone. Then it could be comparable to a low-power device like the Nvidia Shield. Yes, I know they are not the same.
Anyway, the major issue is third parties blocking games on the streaming platforms. Say you bought Sony’s “God of War” on Steam and use the Edge browser to stream it from Nvidia’s cloud, and it works. But as soon as you do it on an Xbox console, Sony has Nvidia block that option.
sukru,
I totally misread the price, $200/year is much better and more interesting.
Colocating servers at ISPs is a thing, although the facilities I’ve seen were quite limited. Technically I agree it makes sense to distribute content from within the ISP’s network, but many of the local operation centers don’t have space for a large data center for others to rent, and GPUs require much more infrastructure to dedicate per active user. I suspect it would require most ISPs to invest heavily in new facilities to really accommodate it at scale.
A company that I was working for wanted to colocate at the ISP and the prices were outrageously expensive, but I concede we had no bargaining power. A large corporation naturally has more carrots and sticks to use as leverage, but even so the new facility and infrastructure costs would require a large financial outlay for the ISP, that money would have to come from somewhere.
While I don’t think it’s fair, the monopoly corporations often get local governments to subsidize costs over many decades at taxpayer expense when they open up new data centers, so financially that strategy may be one of their best paths forward.
I think one has to accept the idea of walled gardens when subscribing to these kinds of streaming services; it’s the nature of the beast. Perhaps one could get around that by renting a generic Windows desktop on powerful hardware to use however you wish, just remotely. That would be more useful than GeForce Now is today, and you could purchase and run any games (or productivity apps) from any store at all. You could even have the same hardware time-share queuing system that GeForce Now has.
Alfman,
That bargaining power might be related to mutual benefits, and the asymmetric nature of the network.
If the ISP spends 10% of their uplink on Netflix streams, it might make sense for them to host that locally, and reduce the traffic costs:
https://openconnect.netflix.com/en/. The same goes for other high-traffic destinations.
It is a win-win for both parties. But that of course undermines the peer to peer nature of the Internet.
ISP customers are assumed to primarily consume, and any attempt to run servers (even something as simple as email) is hindered by technical means.
Or rather, the ISP is now acting as your “data mainframe”, while your home PC is a terminal.
Anyway, we have drifted a long way from the 360 technical discussion.
If you believe “cloud anything” is the future of gaming? I have shares in Stadia I’d be happy to sell you. In case you haven’t noticed, we are in the midst of a global recession (I’d argue it’s the start of another great depression, but that is an argument for another time), so the ISPs aren’t gonna spend jack squat upgrading their lines, and the USA simply doesn’t have the backbone to support cloud gaming.
As for the rest? Let’s see… supported on PC, supported on PC, supported on PC and… yeah, it’s the exact same stuff as on PC. In fact, with DDR5 making 64GB the soon-to-be standard (heck, everyone I know is already running 32GB because DDR4 is so cheap), all Ryzen 7xxx series parts coming with RDNA 2 baked in, thus making practically every new system from both Intel and AMD a dual-GPU system, not to mention the crazy bandwidth increases with PCIe 5.0? What few gains the consoles made are gonna be quickly blown away. Say what you will about the older consoles, but thanks to the exotic arches and bare-metal design there were long stretches where anything less than a top-of-the-line PC couldn’t compete; ever since the Xbone and PS4, that really hasn’t been the case.
The point of a console is to play games, no? What does the underlying microarchitecture matter?
It’s not like that generation was that “unique” either. Both the PS3 and Xbox 360 basically used the same PowerPC core for their CPUs. The SPEs in the Cell turned out not to be a good idea, and both Cell and Xenon ran too hot in their first release. The graphics heavy lifting was done by off-the-shelf PC GPUs. The current generation is fairly similar in that regard; the level of integration has increased significantly, and both Sony and Microsoft just swapped the same PPC core for the same x86 cores.
sukru,
I agree, but there’s still a big difference. One rack of CDN servers can serve prerendered streams to tens or hundreds of thousands of customers, whereas a game-streaming service requires each active customer to have dedicated CPU & GPU resources on the rack. Between streaming video services and game streaming, the bandwidth might be similar, but beyond that, streaming games requires a lot more CPU & GPU hardware, a lot more data center space, and a lot more electricity. So I think it’s inherently far more expensive for an ISP to host.
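To put rough numbers on that asymmetry (all figures below are assumptions chosen for illustration, not anything cited in this thread), here is a back-of-envelope comparison of how many concurrent users one box can serve in each model:

```cpp
// Back-of-envelope sketch: CDN video node vs. per-session GPU game-streaming node.
#include <cstdio>

int main() {
    // Assumed CDN node: bandwidth-bound, fanning out prerendered video.
    const double cdn_uplink_gbps   = 100.0;  // assumed NIC capacity
    const double stream_mbps       = 15.0;   // assumed per-viewer bitrate
    const double cdn_users_per_box = cdn_uplink_gbps * 1000.0 / stream_mbps;

    // Assumed game-streaming node: GPU-bound, one GPU per active session.
    const double gpus_per_box       = 8.0;   // assumed GPUs in one server
    const double game_users_per_box = gpus_per_box;

    std::printf("CDN node:  ~%.0f concurrent viewers per box\n", cdn_users_per_box);
    std::printf("Game node: ~%.0f concurrent players per box\n", game_users_per_box);
    std::printf("Ratio:     ~%.0fx more boxes for the same audience\n",
                cdn_users_per_box / game_users_per_box);
    return 0;
}
```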
You’re right, that is how things have evolved. I think there was a lot of innovation to be had on the P2P evolutionary track, but the advertising business models strongly preferred centralized control over content so that’s what won.