The open-source Mesa/X.Org developers have been working on LLVMpipe, a Gallium3D driver that runs OpenGL and other state trackers on the CPU rather than on a GPU, using LLVM’s runtime code generation to provide a faster software rasterizer. Unfortunately, it’s still slow and can barely keep up with games.
I believe the setup is wrong. A six-core Phenom would be cheaper and more appropriate. Hyper-Threading is not suitable for this kind of load: two hyperthreads on the same CPU should run uncorrelated loads, not identical ones. Please try with the Phenom; then the discussion makes sense. This pitfall in the measurements is well known.
This new blog entry from the LLVM project seems promising:
TCE project: Co-design of application-specific processors with LLVM-based compilation support
http://blog.llvm.org/2010/06/tce-project-co-design-of-application.h…
The title of the article is messed up. LLVMpipe is “still slow”?
It’s a software renderer! Is it supposed to beat the NVIDIA GPU used in the test?
It’s already quite optimized, and several times faster than Mesa’s old software renderer. I’d say it’s very impressive given how new it is.
This is why I hate Phoronix…
Well, I’d like a comparison with Haiku’s drawing engine in terms of 2D performance myself ^^
By the way, can someone tell me how changing the compiler (whether or not the new one is blessed by Apple’s touch) is supposed to lead to a tremendous improvement in software performance? I mean, GCC has been here for a long time, so the remaining performance improvements to be made in the compiler area are probably either of the nitpicking kind (a 1% improvement in a very specific use case) or the kind that makes compilation 10 times slower…
The driver itself was probably compiled with GCC; however, LLVM is being used internally to JIT-compile rendering instructions (probably in the form of OpenGL Shading Language) into some internal form (or native code, I don’t know). Using GCC for that would probably be rather awkward.
Anyway, software rendering is slow compared to dedicated hardware? Mindblowing!!! [/irony]
LLVM is only used for code generation – the driver is most likely still compiled with GCC.
GCC generally produces slightly faster code than LLVM, but it takes at least 10 times as many man-hours to use GCC’s backend as it does to use LLVM’s (and I’m really not exaggerating there).
Also, while GCC might produce slightly faster code, LLVM can do its optimizations and code generation several times faster, which is important for startup speed when you have to generate an entire OpenGL software renderer plus shaders.
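To make that concrete, here is a rough sketch of the kind of thing a driver like LLVMpipe does at runtime, written against LLVM’s C API. This is my own illustration, not actual Mesa code: it JIT-compiles a tiny shader-like function (a multiply-add) and calls it. Exact entry points vary between LLVM releases; this assumes the 2.8-era C API.

    /* Build and JIT "float madd(float a, float b, float c)
       { return a * b + c; }" at runtime. */
    #include <stdio.h>
    #include <llvm-c/Core.h>
    #include <llvm-c/ExecutionEngine.h>
    #include <llvm-c/Target.h>

    int main(void) {
        LLVMModuleRef mod = LLVMModuleCreateWithName("shader");
        LLVMTypeRef f32 = LLVMFloatType();
        LLVMTypeRef params[] = { f32, f32, f32 };
        LLVMValueRef fn = LLVMAddFunction(mod, "madd",
                                          LLVMFunctionType(f32, params, 3, 0));

        /* Emit IR: one basic block computing a * b + c. */
        LLVMBuilderRef b = LLVMCreateBuilder();
        LLVMPositionBuilderAtEnd(b, LLVMAppendBasicBlock(fn, "entry"));
        LLVMValueRef mul = LLVMBuildFMul(b, LLVMGetParam(fn, 0),
                                         LLVMGetParam(fn, 1), "mul");
        LLVMBuildRet(b, LLVMBuildFAdd(b, mul, LLVMGetParam(fn, 2), "add"));
        LLVMDisposeBuilder(b);

        /* JIT-compile the module to native code and fetch a callable pointer. */
        LLVMLinkInJIT();
        LLVMInitializeNativeTarget();
        LLVMExecutionEngineRef ee;
        char *err = NULL;
        if (LLVMCreateJITCompilerForModule(&ee, mod, 2, &err)) {
            fprintf(stderr, "JIT error: %s\n", err);
            return 1;
        }
        float (*madd)(float, float, float) =
            (float (*)(float, float, float))LLVMGetPointerToGlobal(ee, fn);
        printf("madd(2, 3, 4) = %f\n", madd(2.0f, 3.0f, 4.0f)); /* 10.0 */
        return 0;
    }

The real driver does this for whole vertex/fragment pipelines rather than a single function, which is exactly where LLVM’s fast code generation pays off.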
There is no mention of LLVMpipe’s image quality anywhere, but I assume it produces very nearly the same output as a real 3D accelerator does. As such, pushing 25+ fps is already damn good IMHO. That’s a truckload of stuff to do on a processor that isn’t optimized for such tasks, and so I at least find it pretty darn impressive to churn out such numbers.
If the developers can still push some more performance out of it and continue to add the missing bits, LLVMpipe will become quite a useful piece of software.
If any of the devs are reading this: congrats on the great work so far, and good luck!
Unless you are using some unsupported feature, or there is a bug, the image should be pixel-perfect. Rendering is a very precise thing; there is really no room for “optimization” at the expense of quality.
Also, I suspect that LLVMpipe is more complete than any of the real Gallium drivers. Programming the CPU is a lot easier than programming a GPU… (and I know because I’ve tried it).
So yeah, 25 FPS in any 3D game is very impressive.
I have seen some software rasterizers that went the easy route of rendering things poorly, doing horrible mipmapping and all that just to squeeze out some extra speed. The results usually happened to be so ugly that those rasterizers never got very popular. Anyway, that’s why I mentioned that I didn’t see Phoronix mention LLVMpipe’s image quality: I assume they would have mentioned it if it wasn’t at least as good as that produced by the 3D hardware.
Now the real question is: how useful will this be? What are the applications for a software rasterizer in an age where almost everything, including damn small mobile devices, packs some sort of 3D accelerator?
A rendering farm, perhaps? Remote 3D? That I could see being mildly useful: normally you have to render the image, then download it from the card, and only then encode and transmit it, but with this you can just render directly into a buffer and encode, or even render directly into a hardware buffer that handles the encoding and transmitting.
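Something along these lines, using Mesa’s off-screen rendering interface (OSMesa) as a stand-in since I don’t know LLVMpipe’s exact entry points; the buffer size and the encode_and_send() step are hypothetical placeholders:

    /* Render into a plain memory buffer with no GPU and no readback,
       then hand the pixels straight to an encoder. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <GL/osmesa.h>
    #include <GL/gl.h>

    int main(void) {
        const int w = 640, h = 480;
        unsigned char *pixels = malloc(w * h * 4);   /* RGBA framebuffer */

        OSMesaContext ctx = OSMesaCreateContext(OSMESA_RGBA, NULL);
        if (!ctx || !OSMesaMakeCurrent(ctx, pixels, GL_UNSIGNED_BYTE, w, h)) {
            fprintf(stderr, "OSMesa setup failed\n");
            return 1;
        }

        /* Ordinary GL calls now draw straight into `pixels`. */
        glClearColor(0.2f, 0.4f, 0.6f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);
        glFinish();

        /* encode_and_send(pixels, w, h);  hypothetical: feed the buffer
           to a video encoder / network stream */

        OSMesaDestroyContext(ctx);
        free(pixels);
        return 0;
    }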
Oh, and once CPUs have enough beef and LLVMpipe matures and gains some more speed, it’ll be handy not to have to upgrade your graphics hardware when some features are missing: LLVMpipe will keep pushing out the latest features, and you never have to play the catch-up game with 3D features anymore.
Just to open up some discussion, or at least TRY to: would anyone else have some interesting insight or possible applications for this to share? It would be lovely to see what others can come up with. (And god damn, we need some proper discussion here on OSAlert and not just the usual flaming!)
I believe that the main use will be running GNOME Shell or Compiz (which require compositing) on hardware that doesn’t support OpenGL or doesn’t have drivers installed (e.g. NVIDIA GPUs without good Nouveau support, since the blob can’t be installed by default).
I didn’t think all rendering was the same. Isn’t rendering accuracy the difference between the consumer and pro versions of cards, like Radeon vs. FirePro 3D?
http://en.wikipedia.org/wiki/ATI_FireGL#Differences_with_the_Radeon…