All of that makes Arc a lot more serious than Larrabee, Intel’s last effort to break into the dedicated graphics market. Larrabee was canceled late in its development because of delays and disappointing performance, and Arc GPUs are actual things that you can buy (if only in a limited way, for now). But the challenges of entering the GPU market haven’t changed since the late 2000s. Breaking into a mature market is difficult, and experience with integrated GPUs isn’t always applicable to dedicated GPUs with more complex hardware and their own pool of memory.
Regardless of the company’s plans for future architectures, Arc’s launch has been messy. And while the company is making some efforts to own those problems, a combination of performance issues, timing, and financial pressures could threaten Arc’s future.
There’s a lot of chatter that Intel might axe Arc completely before it’s even truly out of the gate. I really hope those rumours are wrong or overblown, since the GPU market desperately needs a third serious competitor. I hope Intel takes a breather and lets the Arc team stay in it for the long haul, so that we as consumers can benefit from more choice in the near future.
I hope the best for them. We need more competition in the discrete GPU market.
Nobody should expect a stellar product from the get-go. It takes time, they face massive challenges, and it’s notoriously difficult to break into a mature market. Most small companies wouldn’t have a chance, but with Intel’s billions and connections they may be able to weather the market and come out with a sustainable product… hopefully.
Many people will vote against chipzilla, which I get, but I still think consumers stand to benefit more from having more competition rather than less of it.
Quite frankly, AMD APUs are already excellent enough that I hardly care about the competition.
Kochise,
You could be happy with AMD, but even so more competition would probably help you too. That’s the thing about competition: it doesn’t necessarily mean I have to switch in order to benefit from its market effects.
More competition can also lead to different standards. Vulkan is a universal standard for nvidia and AMD to adhere to; Intel, not so much. Intel wants their own, much like 3dfx did with glide. And to be fair, directx is almost dead at this point, as all game developers want portability and just about any console, PC or device supports vulkan. Intel does NOT support vulkan at this point with the A380, and this card has around the performance of an nvidia 710gt or an amd 6400, as shown in many youtube videos. The flagship 780 has around the performance of an old 680ti or a 6970 in some games and much worse in all others (and those are ten-plus-year-old cards).
If intel can get their vulkan certification and drivers ready, it might stand a chance in the low-end market. (Try to get SteamVR working on an A380, I dare you.)
NaGERST,
Maybe, but I actually think it might go the other way around. When there’s too little competition, proprietary APIs become much more viable; it’s what allowed cuda to become dominant. Healthy competition creates more pressure on everyone in the market to adopt cross-device standards. As more GPUs entered the market, both consumers and developers were compelled to use opengl and directx, which worked across manufacturers. Meanwhile glide quickly fell out of use.
I would also go for cross platform portability over directx (and metal for that matter), but I think directx is far from dead and most gamers do expect their cards to support it. I think the same logic I used earlier applies here: the more competition there is to windows & xbox for gaming, the less desirable windows specific APIs become.
I couldn’t confirm your statement; where did you learn this information? The official specs suggest that intel intends to support vulkan on the A380, if they don’t already.
https://www.intel.com/content/www/us/en/products/sku/227959/intel-arc-a380-graphics/specifications.html
Their previous integrated & discrete GPUs support vulkan too.
https://www.intel.com/content/www/us/en/support/articles/000005524/graphics.html
I couldn’t find information about steamVR specifically, but according to this review the flaws with this card have more to do with low performance than incompatibilities. They have some vulkan benchmarks too.
https://www.techspot.com/review/2510-intel-arc-a380/
Uhh, did you forget your history? The problem with that theory is that we have a VERY long record of Intel putting out an inferior product and then simply using bribes and threats to muscle their way in. From Netburst to the “Vista capable” debacle that was foisted on consumers so Intel could dump a bunch of horrible iGPUs, we have seen time and time again that if Intel can’t win fair and square, they really have no problem cheating.
bassbeast,
It is my hope that we get more competition, but make no mistake: if they don’t deliver a compelling discrete GPU product, it will fail. In the interests of having more competition, we should all be hoping new entrants can succeed.
The APUs aren’t as ubiquitous as Intel iGPUs. The best Ryzen procs aren’t the Ryzen G procs.
Overall AMD cards are great for graphics, but they’re not great at GPU compute or accelerating ffmpeg.
The MB OEMs also aren’t great at picking a reasonable assortment of graphics ports.
Which is a shame, because traditionally AMD cards have had a very nice shader/compute density. Sadly their software stack to leverage the compute resources for GPGPU has been embarrassing.
There are gazillions of unmet niches in the GPU market. For instance, I’d pay good money for a discrete GPU with twice or thrice the power of an integrated one, but with passive cooling. I grabbed a 460x back in the day to meet that need. But since then, and with the pandemic’s curse, no one is making low-to-mid range GPUs anymore.
In the same vein, a discrete mobile gpu which isn’t a formula 1 racer but which doesn’t kill battery time either will also make a lot of people happy.
But the tech press and geek community are always going after the top benchmarks. If you cannot make it to the top and beat a 3080 Ti or whatever holds the top spot right now, you are simply ignored.
Very few people need 38939 execution units, but the ones who use them are the loudest in the audience. The rest of us, who just play Kingdom Rush or reencode on Handbrake only four or five times a year, are being force-fed GPUs with cooling systems matching those of the Mercury orbiter.
Yet, Intel will be expected to fight for the top spot, rather than to meet the needs of 98% of users. And Intel will be deemed to fail, for it is hard to enter into a mature market and reach the top spot in just three years, if you’re not a hustler who engages in insider trading with crypto currencies.
Seriously. I would like a single-width GPU with 3-4x DP outputs, powered off the PCIe slot (<=75W), that fits into a low-profile slot. Passive cooling would be a bonus. There are some recent AMD workstation cards which fit that description, but those are expensive for what they are ($180-500).
Flatland_Spider,
I agree. There are tons of people who would benefit from products optimized for lower end systems: low profiles, better cooling, more power efficiency, etc. This is extremely important, especially for SFF cases. I think this market has been overlooked because the profits are higher at the high end. But assuming sales of high end cards finally start to decline (because crypto & other demand for high end GPUs is finally cooling off), it might make sense to focus on low & mid market GPUs again.
It’s possible that intel could fill this need above and beyond their integrated graphics. A cheap but meaningful PCIe upgrade might prove valuable. But it has to be cheap enough to provide value over AMD & nvidia. Another problem is what sukru mentioned: software not being optimized for intel’s GPUs.
SFF case is exactly why I would like one. 3x monitors and 2x outputs from my SFF desktop.
I’m not sure the low or mid market GPUs are coming back though. People can buy used N-4 cards if they don’t need the latest and greatest. That doesn’t solve the heat or form factor problem, but people who care about that are rounding errors.
Keyword: Drivers
The problem with modern games is that they expect handholding from driver software. In fact, a lot of handholding.
That is why Nvidia will release “Game Ready” drivers. Just as Microsoft used to release patches to make sure games run smoothly under Windows releases, Nvidia makes sure the games run okay with many different GPU editions on the market.
Instead of doing the right thing and testing the games on available consumer GPUs, devs take shortcuts. That causes significant performance issues on “non-standard setups” (basically almost anything except their own workstation configurations). Hence, with some optimizations and hacks released along with the actual driver code, Nvidia and AMD can announce new driver versions with increased frame rates, expanded compatibility, SLI fixes, and such.
Unfortunately Intel has to start from scratch.
In an ideal world, the game devs would target the API correctly, and scale perfectly with each configuration. In reality, they are usually lazy and depend on the driver writers to actually optimize their games.
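To make “target the API correctly” a bit more concrete, here is a rough sketch in C of what asking the installed GPU about its actual capabilities looks like through Vulkan. The program name and the particular limits printed are just examples I picked for illustration, not anything a specific engine does:

```c
/* Rough sketch: ask Vulkan what the installed GPU can actually do,
 * instead of hard-coding assumptions about the dev workstation.
 * Build against the Vulkan SDK, e.g.: cc query_gpu.c -lvulkan */
#include <stdio.h>
#include <vulkan/vulkan.h>

int main(void) {
    VkApplicationInfo app = {
        .sType = VK_STRUCTURE_TYPE_APPLICATION_INFO,
        .pApplicationName = "capability-probe",   /* illustrative name */
        .apiVersion = VK_API_VERSION_1_1,
    };
    VkInstanceCreateInfo ici = {
        .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
        .pApplicationInfo = &app,
    };
    VkInstance instance;
    if (vkCreateInstance(&ici, NULL, &instance) != VK_SUCCESS) {
        fprintf(stderr, "no Vulkan driver available\n");
        return 1;
    }

    /* Enumerate the GPUs that are actually present. */
    uint32_t count = 0;
    vkEnumeratePhysicalDevices(instance, &count, NULL);
    VkPhysicalDevice devs[8];
    if (count > 8) count = 8;
    vkEnumeratePhysicalDevices(instance, &count, devs);

    for (uint32_t i = 0; i < count; i++) {
        VkPhysicalDeviceProperties p;
        vkGetPhysicalDeviceProperties(devs[i], &p);
        /* A well-behaved engine would scale its render settings from
         * numbers like these rather than from a hard-coded profile. */
        printf("%s: type=%d, maxImageDimension2D=%u, maxComputeWorkGroupInvocations=%u\n",
               p.deviceName, p.deviceType,
               p.limits.maxImageDimension2D,
               p.limits.maxComputeWorkGroupInvocations);
    }

    vkDestroyInstance(instance, NULL);
    return 0;
}
```

The point is simply that the information is right there in the API; deriving quality presets from it is more work than shipping what ran fine on the developers’ own machines, which is where the driver teams end up picking up the slack.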
This is true. Nvidia infamously adds lots of hacks to their driver to get games playable.
AMD has the same problem. They don’t cheat as much as Nvidia does.
Yes,
AMD also had issues with Intel C++ Compiler.
Don’t know the latest outcome, but Intel had a “cripple AMD feature” in their compiler back in the day: https://www.agner.org/optimize/blog/read.php?i=49#49
Given it was free, and actually quite good, a lot of software used Intel’s compilers. This was of course a problem for games, too.
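For anyone wondering what “cripple AMD” means mechanically, the gist (a simplified sketch, not Intel’s actual dispatcher code) is that the generated program reads the CPUID vendor string and picks a code path based on the vendor name rather than on the instruction sets the CPU actually reports:

```c
/* Illustrative sketch of vendor-based dispatch: branch on the CPUID
 * vendor string instead of on the feature flags (SSE2, AVX, ...)
 * that the CPU really supports. GCC/Clang on x86. */
#include <stdio.h>
#include <string.h>
#include <cpuid.h>

static int get_vendor(char vendor[13]) {
    unsigned int eax, ebx, ecx, edx;
    if (!__get_cpuid(0, &eax, &ebx, &ecx, &edx))
        return 0;
    /* The 12-byte vendor string is spread across EBX, EDX, ECX in that order. */
    memcpy(vendor + 0, &ebx, 4);
    memcpy(vendor + 4, &edx, 4);
    memcpy(vendor + 8, &ecx, 4);
    vendor[12] = '\0';
    return 1;
}

int main(void) {
    char vendor[13];
    if (!get_vendor(vendor))
        return 1;

    if (strcmp(vendor, "GenuineIntel") == 0) {
        puts("dispatch: fast SIMD code path");
    } else {
        /* An AMD CPU supporting the very same SSE/AVX extensions lands here. */
        puts("dispatch: generic fallback path");
    }
    return 0;
}
```

The fair way to dispatch is to test the feature flags directly, which is essentially what the agner.org article linked above complains the Intel runtime did not do for non-Intel CPUs.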
How does NVIDIA “cheat” exactly?
Whenever I see this, I think it is just a shame where everything has headed. I miss Matrox the most, but remember when ATI was separate from AMD? And when we had S3, SiS, Trident, etc.? Sadly they all slowly died off / were bought up. Nvidia seems to be the last one standing. Matrox is still around, but they focused more on the multimonitor / professional side of things.
Granted, it is nice we have standardized APIs now, instead of having to find specific patches for, say, Voodoo’s glide, or Rendition, etc.
I bought the Matrox G550; it was enough to play Unreal or Battlezone 98, but it struggled with more demanding games (Far Cry, Unreal 2, …) because it had hardly any shader support. Switched to an ATI Radeon 9800 Pro and never looked back. The supposed “better visual quality” of Matrox was a hoax, marketing BS. The 9800 could do multimonitor just as well and also offered way better 3D performance. So I guess sometimes it’s pretty normal that some dies die (pun intended). Btw, I still have my VIA C7 with embedded S3 UniChrome Pro: basically the same 3D performance as the G550, no T&L, no UVD, impossible to play FullHD content without lagging. But nice overall for office use. So it all depends on what you’re looking for. The point of the C7 / S3 was low power, yet I added a PCI Nvidia FX5200 to get multimonitor and better 3D performance.
I had a Matrox GPU back in the day too, in my first computer. I had no complaints with it. It played the games I wanted, but the feature that sold me on it was having both NTSC video capture and output. That was pretty cool since I wanted to do video editing. We had more choices back then; I miss that.
@Kochise Matrox definitely had superior quality on a CRT. Ghosting on the TNT cards was awful. Radeons had terrible drivers. Matrox cards got to a point where they couldn’t compete on performance, but they did come up with the bump mapping feature and practically invented multimonitor. And at the time the Parhelia came out, UT2004 on 3 screens was insanely good…
They are still around, just not for gamers. But for older builds, you can’t really beat a Millennium II with a Voodoo card…
Yeah, Parhelia was their last attempt at a “performance” GPU. Quite frankly, I never saw ghosting on the G550 (supposedly for the aforementioned reason) or the 9800 (hence not looking back ever since). They probably had a head start on picture quality, but nothing the likes of Nvidia or ATI couldn’t figure out. On the performance side, though, Matrox… Hence when you expect “competition”, you have to have the right contenders, not just wannabes. Where are the FOSH GPUs with FOSS drivers?
The Matrox had better image quality compared to the GeForce 4000 series. I owned both a 4200 and a 4600, as well as later on the 4800. They all had the screen fuzz issue when working with graphics. They worked rather well in games, but were easily outclassed even by an ATi 9500, and even more so by the 9700 and the 9800.
A 3rd competitor with the same performance/price ratio won’t change anything. And that’s the path we’re on.
It especially won’t change anything when the GPUs are made at the same fab as AMD’s… AFAIK Intel still isn’t making these at their own fabs.
There has never been a GPU maker that had their own fabs.
ultrabill,
I know people are impatient and have been suffering the duopoly for a long time, but the law of supply and demand works in the long run – provided competition succeeds. I concede their success is not a given at all, but one can hope.
cb88,
That’s a great point. We need fabs to be competitive too, and they aren’t because of years of consolidation. It has created industry-wide bottlenecks with insufficient supply chain competition. In theory Intel’s independent fabs should help, but their missteps in recent years mean they have a lot of catching up to do on both process and volume. I do hope & expect intel’s fab operations to rebound eventually, but if they cannot, it would be bad to have one less viable fab.
I bet the bottom having fallen out of the GPU market from the Crypto bust (finally!) has something to do with the lack of enthusiasm over these GPUs inside Intel.
They bet heavily on Doge, and Elon didn’t deliver. XD
This is exactly why I want these to exist. I don’t want to buy a bunch of Intel CPUs with iGPUs to get a rendering and transcoding farm when I could get better density with discrete cards in servers.
On Linux, AMD’s transcoding and rendering support is non-existent, and I’d rather not deal with the Nvidia blob if I don’t have to.
Imagine if Apple decided to sell their GPUs that are bundled with their ARM chips… then again, they are optimized for metal, not sure how well they would work with Vulkan / DirectX.
leech,
AFAIK none of the M1/M2 macs support eGPU. What would be the market? MacPro?
Granted it is measuring “compute” rather than “graphics”, but there’s a lot of data available over at geekbench.
https://browser.geekbench.com/metal-benchmarks
Even when sticking to GPUs that support the metal APIs, apple’s top score (94,583) has the same performance as AMD Radeon Pro VII (94,556). The highest performing metal GPU is AMD’s Radeon Pro W6900X (166,946) built for the mac pro. It outperforms apple’s iGPU by 77%.
All those CPUs also have opencl scores too if you are interested…
https://browser.geekbench.com/opencl-benchmarks
Apple’s GPU rank there is 135th.
There would need to be a substantial bump in specs in order to make it competitive against other discrete GPUs.
“All those CPUs also have opencl scores…” should be “GPUs”
Probably MacPro. 4x discrete cards and the 1x iGPU for 5x GPUs and accelerators.
The discrete cards probably would take a perf hit since they wouldn’t share the RAM with the CPU.
Flatland_Spider,
4x m1-max would go toe to toe with an RTX 3090 Ti on the geekbench benchmarks from earlier (assuming it scales linearly).
4x m1-ultra could make for a good contest with the next-generation rtx 4xxx series.
I guess it would have to be for the macpro or an eGPU for macbooks…I cannot see apple designing discrete graphics cards for windows and linux users, haha.
I’d say the opposite, actually: a discrete GPU would have a number of advantages over apple’s iGPU approach. Scheduling a kernel is naturally slower on a discrete card, but actually running it there with dedicated memory is faster. By and large, execution time is what matters most. Even traditional GPU IO bottlenecks are disappearing with things like DirectStorage.
I don’t think the differences are marginal either: shared memory can cause severe performance degradation due to memory contention. I’m pretty sure this is why so many games end up doing badly on M1 – apple’s design means that the CPU and GPU running simultaneously compete for the same resources. Discrete GPUs minimize contention with their own dedicated memory. This also makes it possible to scale up without worrying about the toll it takes on a shared memory architecture. On top of this you also get more thermal headroom with discrete GPUs.
Whether any of this matters or not really depends on the application. Many mac content creators are using the M1’s dedicated media accelerators such that GPGPU performance probably doesn’t matter to them and special purpose application accelerators are good enough. Me personally… I wouldn’t want to give up the GPGPU performance I have today (10k cuda cores is mindblowing, and I have two of them). I just love how I can get realtime raytracing in blender with an RTX card! On an M1 using the metal API the same render takes a minute. They may optimize it more, but there’s only so much you can squeeze out before you need to throw more horsepower at it. So ultimately I think apple offering discrete cards would be a compelling upgrade over their iGPUs for certain types of applications & users. Honestly though, I don’t know who would buy a macpro for this privilege.
Yeah. People rendering video or something with graphics would be the market.
I don’t see Apple writing drivers for Linux. XD
They could sell whole systems as discrete cards for the datacenter crowd. That would be interesting.
That’s the thing. It’s not so much the GPU as Apple silicon has a lot of custom accelerators to offload various tasks. This is also the way Intel is going, by the way. Lots of accelerators in the CPU package.
GPU compute doesn’t matter to Apple. GPU compute is kind of broken for everyone besides Nvidia, on the Linux side, and I wish this wasn’t the case.
True.
I thought that was one of the things Apple solved by putting RAM on die. I may be incredibly optimistic and assuming things.
Flatland_Spider,
It’s a matter of bandwidth. Every time you add another active “processor” (whether it’s a CPU core, shader unit, compute core, NN tensor core, multimedia accelerators, etc) they all create more load and require deeper connectivity fabric. It’s not a question of if they’ll hit the shared memory bottleneck, but when.
Obviously apple’s design is very low latency and for CPUs low latency is hugely advantageous. They’re sequential and most instructions would block without data. To go faster, you need IO to complete faster. We mitigate latency with caches and speculative execution, but there’s no denying the benefit of faster memory.
But the situation with a GPU is extremely different due to the way they work & how they scale. Everything is designed to maximize the concurrent use of execution units. Unlike with a CPU, a GPU is designed to always have loads of data ready to be processed in parallel. So even if a memory request took a long time to process, it’s almost irrelevant because there’s loads of data being processed concurrently and the GPU is capable of doing zero cost task switching on every single cycle. Bandwidth tends to be the limiting factor rather than latency.
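To put a back-of-the-envelope number on that (the figures are purely illustrative, not measurements of any specific chip): by Little’s law, the amount of memory traffic that has to be in flight to hide latency is roughly bandwidth times latency,

$$\text{data in flight} \approx B \times L \approx 400\ \text{GB/s} \times 400\ \text{ns} \approx 160\ \text{KB}.$$

A GPU with thousands of resident threads keeps that much outstanding without breaking a sweat, so extra latency barely dents its throughput, while a CPU core with only a handful of outstanding misses feels every nanosecond.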
IMHO it could be interesting to see a discrete GPU with on-die ram and I would like to see someone try, but there would be tradeoffs. Since both the GPU and memory tend to get very hot, combining them into the same die space could end up decreasing latency but also leave less energy/heat budget for execution units.
In theory maybe we could solve the heat problem with active cooling (meaning below ambient), but assuming we’re willing to consider active cooling, lowering latency with on-die memory may not be the best use of it. It could be better just to use the new headroom to add even more execution units, bandwidth, and parallelism.