Over the past decade, Intel and AMD have both tried to ship iGPUs fast enough to compete with low-end discrete cards, with mixed results. Recently though, powerful iGPUs have been thrown back into the spotlight. Handhelds like Valve’s Steam Deck and ASUS’s ROG Ally demonstrated that consumers are willing to accept compromises to play games on the go. AMD has dominated that market so far: Valve’s Steam Deck uses AMD’s Van Gogh APU, and the ROG Ally uses the newer Phoenix APU. Unlike Van Gogh, Phoenix is a general-purpose mobile chip with both a powerful CPU and a powerful GPU. Phoenix doesn’t stop at targeting the handheld segment; it threatens Intel’s laptop market share too.
In response, Meteor Lake brings a powerful iGPU to the party. It has the equivalent of 128 EUs and clocks up to 2.25 GHz, making it modestly wider and much faster than Raptor Lake’s 96 EU, 1.5 GHz iGPU. Raptor Lake’s Xe-LP graphics architecture gets replaced by Xe-LPG, a close relative of the Xe-HPG architecture used in Intel’s A770 discrete GPU. At the system level, Meteor Lake moves to a GPU integration scheme that better suits a chiplet configuration where the iGPU gets significant transistor and area budget.
I’ll be testing Meteor Lake’s iGPU with the Core Ultra 7 155H, as implemented in the ASUS Zenbook 14. I purchased the device myself in late February.
Chips and Cheese
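To put the quoted figures in perspective, here is a back-of-envelope peak FP32 estimate. It assumes each Xe EU executes 8 FP32 lanes per clock with an FMA counted as two FLOPs; those per-EU numbers are assumptions for illustration, not official Intel figures.

```python
# Rough peak FP32 throughput from EU count and clock. The 8 lanes/EU and
# 2 FLOPs per lane (FMA) are assumptions for illustration only.
def peak_fp32_tflops(eu_count: int, clock_ghz: float,
                     lanes_per_eu: int = 8, flops_per_lane: int = 2) -> float:
    return eu_count * lanes_per_eu * flops_per_lane * clock_ghz / 1000

print(f"Meteor Lake (128 EU @ 2.25 GHz): {peak_fp32_tflops(128, 2.25):.1f} TFLOPS")
print(f"Raptor Lake  (96 EU @ 1.50 GHz): {peak_fp32_tflops(96, 1.50):.1f} TFLOPS")
```

Under those assumptions it works out to roughly 4.6 TFLOPS versus 2.3 TFLOPS, which lines up with “modestly wider and much faster”.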
I’m absolutely here for the resurgence in capable integrated GPUs, both for PC gaming on the go and for better graphics performance even in thinner, smaller laptops. I would love to have just a bit more graphics power on my thin and small laptop so I can do some basic gaming with it.
This R&D is always driven by the retail carrot: the bulk production of low-cost, gaming-capable hardware so kids can play CoD at school or uni. But historically, much of the fundamentals that built the gaming segment came out of performance systems for business, CAD/CAM, ray tracing, media production and so on, which seems to have been largely forgotten in recent iterations. I think they need to get back to those fundamentals, and they’ll find there is a market that can subsidise the expansion of early-adopter hardware into mainstream retail.
At the moment I feel the segment is too hit and miss to sustain multiple developers, and the losers lose big, yet I’m sure it needs multiple developers to push the R&D and prevent development stalling like it has in the past. Of course some segments are moving away from high-performance individual devices to the cloud, like Autodesk Fusion, but lately a bit of a backlash is appearing as people bemoan a cloud-based experience that probably hasn’t lived up to expectations.
So I’m all for dedicated and powerful portable systems with integrated GPUs for business and engineering applications: let me have the kit to do what I want to do the way I want to do it, and not in some standardised way some anonymous dev or system admin thinks it should be done.
I would think that machine learning and AI (generative AI) are the big GPU drivers these days. Some people are even suggesting that NVIDIA should just leave retail (gaming) behind.
The main issue with iGPUs is that they not only share RAM, but also the power envelope.
That requires a lot of custom optimization in games. Both the Steam Deck and the ROG Ally have made some progress there with their system tools, but games would also need to know that raising CPU clocks comes at the expense of the GPU, and vice versa (a minimal sketch of reading both clocks on Linux follows below).
Though I do like the new paradigm shift. We have depended on invalid and inefficient assumptions for too long.
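To make that shared budget concrete, here is a minimal Linux sketch that polls the current CPU and iGPU clocks from sysfs. It assumes an Intel iGPU exposed by the i915 driver as card0; the paths and indices vary by machine (AMD APUs report the GPU clock through pp_dpm_sclk instead), so treat it as a sketch rather than a portable tool.

```python
from pathlib import Path

def read_int(path: str) -> int | None:
    """Return the integer stored in a sysfs file, or None if it's absent."""
    p = Path(path)
    return int(p.read_text().strip()) if p.exists() else None

# cpufreq reports the CPU clock in kHz; the i915 driver reports the iGPU
# clock in MHz. cpu0 and card0 are assumptions; indices differ per machine.
cpu_khz = read_int("/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq")
gpu_mhz = read_int("/sys/class/drm/card0/gt_cur_freq_mhz")

if cpu_khz is not None:
    print(f"CPU core 0: {cpu_khz / 1000:.0f} MHz")
if gpu_mhz is not None:
    print(f"iGPU:       {gpu_mhz} MHz")
```

Polling those two values while a game runs makes the trade-off visible: on a power-limited APU, as the CPU side ramps up, the iGPU clock tends to come down, and vice versa.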
Indeed. Heat and power are problems, and they’re why I still have a larger desktop with more cooling. My daily-driver laptop is fast, but it doesn’t have the ability to move heat like the desktop does. I’m pretty sure my laptop can run at full tilt for about two seconds before it turns into a molten pile of goo.
sukru,
Indeed, all iGPUs face similar bottlenecks. Even a low-end discrete GPU typically outperforms “high-end” iGPUs. Personally I don’t see much value in them beyond the lowest tier of computing, and I’d rather see the resources go towards better CPU specs. However, Thom’s commentary does paint a use case for iGPUs that are powerful enough to run games on systems that have no dedicated GPU. Just because someone buys a low-end computer doesn’t mean they don’t want to do gaming, and I’m sure they’re going to try.
This is cool. I don’t stress my iGPU, but I’m all for more power. Now if they start supporting GPU sharing for VMs, I will be very happy.
I think Intel supports this with their iGPUs, but I don’t think AMD does.
https://www.intel.com/content/www/us/en/support/articles/000093216/graphics/processor-graphics.html
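For anyone wanting to try it on Linux, Intel’s iGPU sharing has historically gone through the mediated-device (vfio-mdev) interface, with GVT-g on older generations and SR-IOV on newer Xe parts. The sketch below only shows the generic mdev sysfs flow; the PCI address 0000:00:02.0 and the availability of mdev types at all are assumptions that depend on the platform and kernel.

```python
import uuid
from pathlib import Path

# The iGPU usually sits at PCI address 0000:00:02.0, but that, and whether
# the kernel exposes any mdev types for it, varies per platform and kernel.
igpu = Path("/sys/bus/pci/devices/0000:00:02.0/mdev_supported_types")

if not igpu.exists():
    print("No mdev types exposed; GPU sharing needs GVT-g or SR-IOV support.")
else:
    for mdev_type in sorted(igpu.iterdir()):
        desc = mdev_type / "description"
        print(mdev_type.name, "-", desc.read_text().strip() if desc.exists() else "")

    # Creating a virtual GPU instance means writing a fresh UUID into the
    # chosen type's 'create' node (as root); the resulting device can then
    # be handed to QEMU/libvirt as a vfio-mdev device. Type name is an example.
    # (igpu / "i915-GVTg_V5_4" / "create").write_text(str(uuid.uuid4()))
```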
My analysis.
Intel, figure your stuff out. P-core + E-core? No. If you don’t understand what I’m talking about, there’s a reason their server CPUs don’t do this. Battlemage needs to take out the knees of AMD’s GPU business. Intel, who we thought might struggle, seems to be taking this seriously (or is it all marketing?). But the whole E-core thing was crappy (again, if that makes no sense to you, you might need to look into it more).
AMD, you’re going to have to price at a loss, sacrifice the 6xxx to the bin, and start working on something to compete with your new competition, Intel. I just don’t see this happening. I expect Intel to be #2 in a short period of time, GPU-wise. I have no idea why AMD can’t develop effective battle(mage) plans. With that said, for now, you have a reasonable iGPU on a reasonable CPU (one that doesn’t cause E-core grief). But I think you’ll lose the iGPU advantage completely to Intel in the next year or so (if you can get a good Intel CPU combined with a discrete Arc/Battlemage GPU for the same cost as any Ryzen APU-based system, well, you’re really in trouble).
Nvidia. Well, you’ve given up on gaming, mostly because anything you throw out there will be purchased, and since it’s not really a part of your revenue stream anymore… you’re sitting pretty. You have a card nobody is even going to attempt to touch, cards that are well supported for just about everything. Yeah, it’s all proprietary, but the world doesn’t care. Your downside is not caring about the “little guys” anymore. That, and Jensen’s usual “insanity”. Nvidia can afford to “cruise” for probably two years (if they wanted to).
chriscox,
I can see why some would say this; many prefer having performance cores. But Intel’s motive there was to address the notoriously bad battery life of x86 laptops. While Intel/x86 manages to hold its own on the performance front, it has been losing on efficiency for a long time against ARM Chromebooks and Apple’s ARM computers. The silicon gates they use to make x86 perform fast need a lot of power, so they developed E-cores that trade away all the P-core complexity for more battery life. Whether you appreciate E-cores or not will depend on how important battery life is to you.
Actually, Intel’s E-cores aren’t more efficient for the same work; they are mainly space-efficient. This allows Intel to offer CPUs with high core counts, which helps in certain applications. See https://www.techpowerup.com/review/intel-core-i9-12900k-e-cores-only-performance/7.html
ChrisOz,
Is there a specific reason you chose the very first E-core CPU generation? That generation was known to be especially power-hungry, Intel was coming out of a very rough patch, and OS schedulers hadn’t had much time to optimize for those hybrid CPUs. Can you make your point using more current processors?
The i9-14900K scores a lot better than the i9-12900K on their efficiency charts.
https://www.techpowerup.com/review/intel-core-i9-14900k-raptor-lake-tested-at-power-limits-down-to-35-w/8.html
Unfortunately I couldn’t find current benchmark data comparing E-cores to P-cores directly. I would point out that you wouldn’t do this for typical applications; you’d let the scheduler pick cores in real time based on current conditions. Artificially constraining intensive jobs to E-cores doesn’t necessarily represent real-world efficiency.
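For anyone wanting to reproduce an “E-cores only” run like the one above, the usual trick is plain CPU affinity. A minimal Linux sketch follows; the logical CPU numbering is an assumption (check lscpu on your own machine), and as noted, pinning like this measures the cores in isolation rather than what the scheduler would actually do.

```python
import os

# Hypothetical core layout for a 12900K-style part: 8 hyper-threaded P-cores
# on logical CPUs 0-15 and 8 E-cores on 16-23. Verify with lscpu first.
P_CORES = set(range(0, 16))
E_CORES = set(range(16, 24))

def run_pinned(cores: set[int]) -> None:
    """Constrain this process to the given logical CPUs, then run the work."""
    os.sched_setaffinity(0, cores)
    print(f"Running on CPUs {sorted(cores)}")
    # ... launch the benchmark here and record wall time and package power

run_pinned(E_CORES)   # E-core-only run
run_pinned(P_CORES)   # P-core-only run for comparison
```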
One good thing that came out of the E-core phenomenon is the N100 and similar Alder Lake SoCs. I have a MeLe Quieter 4C with the N100, which has four E-cores and no P-cores, and it’s an extremely powerful computer for its size and power draw. It’s not a gaming machine or workstation replacement of course, but for a general-purpose PC it’s just fine and much faster than the Atom-based SoCs Intel had been putting out in years past. Unfortunately you have to run Windows 11 on it to take full advantage of it, since right now under Linux and other OSes the iGPU is less than stellar and the CPU underperforms as a whole.
I think there are a few different ways to look at this issue. For me, the focus on integrated GPUs is primarily driven by compatibility: put the GPU in the processor and it gets supported almost by default by any software vendor, because they all have access to it.
It’s easy to claim discrete is better, but in terms of support by third-party packages it just isn’t. Devs are always limited to the hardware they have access to, and often that’s far from the hardware used by or available to consumers.
cpcf,
That’s an interesting point; I’d like to see the data on it. I tend to treat the OpenGL APIs as generic and assume the same shaders will work everywhere, but I don’t do much with graphics as a developer, so I’d like to hear from someone who does: how much needs to be customized for specific hardware? Obviously some features won’t work everywhere, like DLSS, but it seems to me those can just be turned off without causing issues. Do game studios target iGPUs routinely during development, or are they more of an afterthought?
I’d imagine the main benefit for most users would be lower power & costs, although personally I feel it’s worth buying a more performant discrete GPU.
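I’m not a graphics developer either, but a common first step for an engine is to query the driver for the renderer name and its limits and pick default settings from that. Below is a minimal sketch assuming the glfw and PyOpenGL Python packages; the renderer substrings used in the heuristic are illustrative guesses, not an authoritative list.

```python
import glfw  # assumes the 'glfw' and 'PyOpenGL' packages are installed
from OpenGL.GL import (glGetString, glGetIntegerv,
                       GL_RENDERER, GL_VENDOR, GL_MAX_TEXTURE_SIZE)

glfw.init()
glfw.window_hint(glfw.VISIBLE, glfw.FALSE)   # hidden window, just for a GL context
window = glfw.create_window(64, 64, "probe", None, None)
glfw.make_context_current(window)

vendor   = glGetString(GL_VENDOR).decode()
renderer = glGetString(GL_RENDERER).decode()
max_tex  = glGetIntegerv(GL_MAX_TEXTURE_SIZE)
print(vendor, renderer, max_tex)

# Crude heuristic an engine might use to default to a lower preset on an
# iGPU; the substrings are examples only, and real engines key off far more.
low_preset = any(tag in renderer for tag in ("Iris", "UHD Graphics", "Radeon(TM) Graphics"))
print("Default to low preset:", low_preset)

glfw.terminate()
```

A capability probe like this mostly shows why the same shaders usually do run everywhere; whether they run efficiently, which is the point cpcf raises next, is a separate question.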
Sorry, I might have misled when I used the term “compatibility”, because I’m not talking about things that do not work; by compatibility I primarily mean performance. I suspect things work almost everywhere, but claiming to work and working efficiently might not be the same.
cpcf,
Haha, yeah. There have been times when I thought software could have been better for users if the developers writing it hadn’t been spoiled by high-end hardware. In CS circles this is often justified by saying hardware costs are cheaper than paying developers to optimize code. The problem with this view is that it doesn’t factor in “external costs”, which can end up multiplying the costs of suboptimal software across thousands or millions of customers. In the grand scheme of things, what’s a few thousand more spent on developer time compared to tons and tons of e-waste and CO2 emissions? These are very different perspectives.
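To put rough numbers on that external-cost argument, here is a toy calculation; every figure in it is hypothetical and chosen only to show the scale involved.

```python
# All numbers below are hypothetical, purely to illustrate scale.
users          = 1_000_000   # installed base running the software
extra_watts    = 5           # extra draw from an unoptimized code path
hours_per_day  = 1
days_per_year  = 365
kg_co2_per_kwh = 0.4         # assumed rough grid average

extra_kwh = users * extra_watts * hours_per_day * days_per_year / 1000
print(f"Wasted energy: {extra_kwh:,.0f} kWh per year")
print(f"Roughly {extra_kwh * kg_co2_per_kwh / 1000:,.0f} tonnes of CO2 per year")
```

Even with modest assumptions, the aggregate waste dwarfs a few extra developer-weeks of optimization, which is exactly the point about external costs.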