While AMD seems to have made up with Slightly Mad Studios, at least if this tweet from Taylor is anything to go by, the company is facing yet another supposedly GameWorks-related struggle with CD Projekt Red’s freshly released RPG The Witcher 3. The game makes use of several GameWorks technologies, most notably HBAO+ and HairWorks. The latter, which adds tens of thousands of tessellated hair strands to characters, dramatically decreases frame rate performance on AMD graphics cards, sometimes by as much as 50 percent.
I got bitten by this just the other day. I’m currently enjoying my time with The Witcher III – go out and buy it, it’s worth your money – but the first few hours of the game were plagued by lots of stutter and sudden framerate drops. I was stumped, because the drops didn’t occur out in the open world, but only when the head of the player character – a guy named Geralt – came close to the camera or was in focus in a cutscene. It didn’t make any sense, since I have one of the fancier Radeon R9 270X models, which should handle the game at the highest settings just fine.
It wasn’t until a friend asked “uh, you’ve got NVIDIA HairWorks turned off, right?” that I thought to check. Turns out it was set to “Geralt only”. Turning it off completely solved all performance problems. It simply hadn’t registered with me that this feature is pretty much entirely tied to NVIDIA cards.
While I would prefer all these technologies to be open, the cold and harsh truth is that in this case they give NVIDIA an edge, and I don’t blame them for keeping them closed – we’re not talking crucial communication protocols or internet standards, but an API to render hair. I do blame the developers of The Witcher for not warning me about this. Better yet: automatically disable and/or hide NVIDIA-specific options for Radeon owners altogether. It seems like a no-brainer way to avoid disgruntled customers. Not a big deal – but still.
There was Batman: AA for nVidia, then Deus Ex: HR for AMD, then Tomb Raider for AMD, and all the others; now it’s TW3 for nVidia. Yeah, oh, life is so sad for poor old AMD. How is that TressFX working out on my nVidia card? Oh wa-
Game tech lock-ins, it takes one to know one.
I have a similar opinion. At the end of the day, Nvidia provides a feature which is better on their cards than on AMD’s. This is a feature that ADDS things to the games that use it. If these guys thought it was worth implementing this feature rather than leaving it out just because AMD cards aren’t as good at it, then good for them. Yes, it does suck a bit from a consumer perspective, but I personally would rather have a game that works spectacularly on certain hardware than mediocre on everything. Additionally, it is an optional feature, so it’s not like it’s always going to affect performance on AMD cards.
Does it work better on NVIDIA’s cards because it’s just tuned that way, or does it intentionally cripple performance on AMD GPUs? That’s what I always keep wondering about; it certainly wouldn’t surprise me at all if NVIDIA did intentionally cripple performance way more than necessary.
When it comes to the tech world, I am tempted to flip around the old adage about incompetence and malice.
Damn it, Intel was caught red-handed some years back violating the feature detection spec for CPUs.
Rather than looking at the actual register that was supposed to say what x86 extensions a CPU supported, binaries compiled with their compiler would check what was supposed to be a purely descriptive vendor string.
Change that string and the same binary would perform just as well on an AMD or VIA CPU as it did on an actual Intel.
But without said string, the binaries would drop back to a code path more suited to 386 CPUs than a modern core.
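To make the distinction concrete, here is a minimal sketch in C (using GCC’s <cpuid.h>) of the two dispatch approaches being described – keying off the purely descriptive vendor string versus checking the actual feature-flag bits. The structure and messages are illustrative only, not code taken from Intel’s compiler.

#include <cpuid.h>
#include <stdio.h>
#include <string.h>

static void get_vendor_string(char vendor[13])
{
    unsigned int eax, ebx, ecx, edx;
    /* CPUID leaf 0: the vendor string ("GenuineIntel", "AuthenticAMD",
     * "CentaurHauls" for VIA) comes back in EBX, EDX, ECX, in that order. */
    __get_cpuid(0, &eax, &ebx, &ecx, &edx);
    memcpy(vendor + 0, &ebx, 4);
    memcpy(vendor + 4, &edx, 4);
    memcpy(vendor + 8, &ecx, 4);
    vendor[12] = '\0';
}

static int has_sse2(void)
{
    unsigned int eax, ebx, ecx, edx;
    /* CPUID leaf 1: the feature flags the spec says you should check;
     * SSE2 support is bit 26 of EDX, regardless of vendor. */
    __get_cpuid(1, &eax, &ebx, &ecx, &edx);
    return (edx >> 26) & 1;
}

int main(void)
{
    char vendor[13];
    get_vendor_string(vendor);

    /* The vendor-string dispatch described above: */
    if (strcmp(vendor, "GenuineIntel") == 0 && has_sse2())
        puts("fast vectorized path");
    else
        puts("generic fallback path");  /* AMD/VIA land here even with SSE2 */

    /* The spec-compliant dispatch would simply branch on has_sse2() alone. */
    return 0;
}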
The whole market is rotten to the core.
Indeed, I do remember that debacle, and that is exactly why I can’t shake the feeling that NVIDIA is doing a similar thing on purpose. It would be nice if someone could actually prove that they aren’t, but I’m going to assume they are until someone more knowledgeable proves the assumption wrong.
What is sad is that companies like Intel pull that crap and don’t get penalized for it, even when they admit they are rigging! And how much difference does this kind of rigging make? A hell of a lot, actually: as the Tek Syndicate link I’ll provide below shows, when you use non-rigged programs, those AMD CPUs the benches say are so far behind suddenly trade blows with Intel chips three times the price!
I’ll say the same thing about this as I said when Intel admitted they were rigging: if you win fair and square? Congrats, you deserve all the increased sales and accolades you have earned. But when they rig the market, they need to get busted and be shunned by everyone. Rigging the market benefits only the one doing the rigging; it damages the market by hampering competition, causes the consumer to pay higher prices than they would under fair competition, gives the rigger both the funds and the motivation (due to lack of consequences) to pull more nasty moves (see Intel killing the Nvidia chipset business, for example), and leaves the market as a whole much worse off.
https://www.youtube.com/watch?v=eu8Sekdb-IE&list=TLRt7C-qG9964
This is the test between Intel and AMD, and below is where Intel admits to rigging benchmarks…
https://www.youtube.com/watch?v=ZRBzqoniRMg&index=16&list=PL662F4191…
Sorry I had to use video links, but Tek Syndicate is one of the few places that doesn’t accept money or favors from either chip company. For an example of just how much Intel money affects reviews, look at Tom’s Hardware and their “Best Gaming CPUs” list, where the writer even admits that most new games require quad cores… and then recommends a Pentium dual-core over an FX-6 that is cheaper! But this is not surprising; look at their site without ABP and wadda ya know, it’s wallpapered in Intel ads.
Intel still does that. Anything compiled with the Intel compiler (such as many benchmarks and games) will deliberately cripple any non-Intel CPUs.
Can you name one AAA game for Windows which was compiled using icc?
AAA studios usually know better, but they sometimes end up using precompiled third-party libraries that were built with icc, because it produces faster results on Intel chips, and Intel is directly involved in and supports many performance-oriented libraries.
It is hard to notice though, since AMD is usually slower even when not crippled, and it takes some effort to tell from a disassembled release binary which compiler produced it.
Also, in response to software that removed the Intel-only check from binaries, Intel has changed the code several times, breaking these tools, so there is now no single easy signature for it and no automatic way to patch it out.
I think you are kind of right. Both AMD and Nvidia have their own competing sets of closed additions – in this case, I guess, AMD’s TressFX, which uses OpenCL I believe, and OpenCL generally performs better on AMD. Nvidia, however, has both OpenCL and their own CUDA.
Nvidia is not naturally open; they like to keep everything in their control. AMD likes to play the open card when they are at a disadvantage. Whether that is their natural state is up for debate.
In fairness, AMD also powers the console releases. So I guess no super magic floppy hair for them either.
I have seen Nvidia specifically lock out mixed AMD+Nvidia setups for hardware PhysX. That also might be the reason.
For the record folks I have an Nvidia GTX 970 right now, previously AMD 7970 and Nvidia 560ti before that (and so on).
It apparently works on AMD if you override the game settings in Catalyst and cap the tessellation level.
The problem is that NVidia makes GameWorks and they don’t give a rat’s ass about AMD.
Game companies know that hiring a _really_ good rendering engineer is difficult because you can count them on your fingers, so they go with the canned solution and use GameWorks instead.
AMD could counter this by publishing open implementations of the algorithms used by NVidia, but AMD never gives a fuck about anything.
reduz,
Counting good engineers on “fingers” is an exaggeration. I was very good at this sort of thing; it’s exactly the kind of job I wanted but never managed to land during the tech recession, so I got into web development instead (ugh). That’s the thing: there’s enough talent in this world to develop thousands of rendering engines, but just because we are willing and able to build them doesn’t mean the market can sustain it. The word that best characterizes the changes of the past decade is “outsourcing”, with the aim of reducing business costs.
I suspect it’s probably still the case today that game shops are being overwhelmed by applications, and they could likely find multiple candidates who’d be able and eager to build them a new engine, but it’s really hard to make a business case for it. Seriously, a company’s options are:
1) Just outsource this to a platform that’s already free, supported, and available today.
2) Increase costs by hiring engineers to design and build it internally, increase time to market, and possibly contend with reverse engineering because nvidia decided not to divulge the technical programming information needed.
So it’s no surprise most companies just go with canned solutions.
It’s brain-dead simple to check what graphics processor the system is using in either OpenGL or DirectX.
Seriously, how freaking hard could it have been to create a list of defaults that works for each type of card? It would have taken me no more than an hour or so to set up such a system, then maybe a few days with QA trying it out on different types of machines to establish what those defaults should be.
It’s just not that hard, guys.
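For illustration, a minimal sketch of that kind of vendor check via OpenGL’s glGetString could look like the snippet below. The preset names and the split between them are purely hypothetical; a real game would map the detected vendor onto whatever defaults QA settled on.

#include <string.h>
#include <GL/gl.h>   /* assumes an OpenGL context has already been created */

/* Hypothetical preset levels; the actual defaults would come from QA testing. */
enum preset { PRESET_CONSERVATIVE, PRESET_NO_HAIRWORKS, PRESET_FULL };

static enum preset pick_default_preset(void)
{
    /* GL_VENDOR returns a string such as "NVIDIA Corporation",
     * "ATI Technologies Inc." or "Intel". */
    const char *vendor = (const char *)glGetString(GL_VENDOR);
    if (vendor == NULL)
        return PRESET_CONSERVATIVE;          /* no context or query failed */

    if (strstr(vendor, "NVIDIA") != NULL)
        return PRESET_FULL;                  /* vendor-specific extras on by default */
    if (strstr(vendor, "ATI") != NULL || strstr(vendor, "AMD") != NULL)
        return PRESET_NO_HAIRWORKS;          /* hide/disable NVIDIA-only options */

    return PRESET_CONSERVATIVE;
}

On the DirectX side, the vendor ID from the DXGI adapter description serves the same purpose; either way it is one query and a string or ID compare at startup.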
TressFX has source code that’s open to developers, which allows them to tinker with it to fit both AMD and Nvidia. HairWorks, on the other hand, is Nvidia-only. Not only that, but the performance hit is definitely another area of concern for AMD. Here’s a quote from wccftech.com:
In fact AMD states that the performance hit of TressFX is identical on both Nvidia and AMD hardware. All the while HairWorks clearly favors Nvidia’s hardware by a significant margin yet it is still slower.
AMD attributes this performance lead to the open nature of the source code, allowing both game developers and Nvidia to optimize it to their needs.
Read more: http://wccftech.com/tressfx-hair-20-detailed-improved-visuals-perfo…
This just goes to show that AMD is and always will be the leader that pushes innovation for the industry as a whole, whereas nvidia is only concerned with its own brand.
On launch, TressFX had the same FPS impact on nVidia cards; yeah, they sorted it out after a few patches.
Just like 3D stereo (HD3D) on Deus Ex: HR – it took them a few patches to get it working on nVidia, and funny story: after launch the main stereo guy from nVidia, Andrew Fear, was pleading with Eidos/Nixxes on their forum to get in contact so they could help them open up 3D stereo for nVidia too.
It was pretty much known to those who followed the development of the game; they spoke about it more than once. Still, they could have used portable technologies.
I wonder what the Linux version will use, since HairWorks uses DirectCompute and doesn’t exist on Linux, even for Nvidia cards.
What’s the difference between AMD screaming about Witcher 3 performance and NVIDIA screaming about Lara’s hair in Tomb Raider (the Square Enix reboot), where it was clearly stated not to use the realistic hair setting with NVIDIA?
This is truly a hairy problem…
“…all these technologies to be open”
You may prefer whatever you want, but the cold harsh truth is that NVIDIA technologies (HairWorks in this instance) are based on a semi-open standard (DirectX), thus NVIDIA doesn’t really cheat or deceive anyone.
NVIDIA knows perfectly well that tessellation on their GPUs is a lot more powerful than on AMD GPUs, so they used it as a basis for the things that give NVIDIA a competitive edge.
AMD did the same when they introduced TressFX.
You could argue that it is a case of AMD not optimising their drivers for certain games…
http://wccftech.com/witcher-3-run-hairworks-amd-gpus-crippling-perf…
Try this: http://www.guru3d.com/news-story/the-witcher-3-hairworks-on-amd-gpu…
How does it give them an edge? Having the feature rather than not having it is definitely an edge, but how does being closed or open matter to how much of an edge it is?
If a competitor sees value in the feature, they would just implement their own, whether or not nVidia opened their tech. And AMD apparently sees enough value in the feature to have their own counterpart.
The Witcher 3 is actually one of the more mild annoyances, because there you can turn off the problematic features.
Much worse is the situation with Project Cars, which is built entirely around proprietary NVidia technology that can never be made to work properly on AMD hardware.
http://www.reddit.com/r/pcmasterrace/comments/367qav/mark_my_word_i…
chithanh,
Yeah, it would make the most sense to fix these issues inside the GameWorks code. But so long as NVidia refuses to do it, the onus gets shifted to the developers, who have to re-implement the suboptimal GameWorks features for AMD. No matter which way you cut this, the situation of having games dependent on NVidia binary blobs is pretty bad for AMD and its customers. Here’s hoping this gets rectified somehow, because competition really should be based on actual merit rather than on vendor-locked software.
NVidia might have the lead anyway, but the binary blobs are still a disservice to consumers in general.