Apple announced that they are using the LLVM optimizer and JIT within their Mac OS X 10.5 ‘Leopard’ OpenGL stack. LLVM will help deliver better performance in the GL stack.
In other news, we just had a new release, get it here:
http://llvm.org/releases/
http://lists.cs.uiuc.edu/pipermail/llvm-announce/2006-August/000019…
http://llvm.org/releases/1.8/docs/ReleaseNotes.html
-Chris
This is probably good only for Intel integrated graphics (and the software renderer). I think other cards from ATI and NVIDIA have TCL or vertex shaders and will not use this.
Both the Mac Mini and MacBook use Intel’s own graphics chips, and I’m guessing they will for quite some time. Intel will keep releasing integrated graphics, which will be perfect (cost-wise) to put in the low-end Macs, and “fixing” the software so that graphics run faster on these machines will be a godsend for Apple.
Besides, maybe this is just the first step when it comes to LLVM, maybe it will improve other things down the line. Heck, it could improve even more things with Leopard that we don’t know of.
When will Mac OS X itself be compiled with LLVM? It sure sounds like a better solution than fat binaries and GCC or ICC. The one thing LLVM needs before being useful on OS X, though, is support for Objective-C.
Are there any systems out there using LLVM instead of GCC? Like some odd Linux distribution or something?
LLVM 1.8 already supports Objective-C. It does not currently include the ObjC 2.0 features, though.
That’s great news. It wasn’t terribly obvious from looking at the web page, though, so I hope you forgive me :)
Is the ObjC support experimental, or a full-blown version that I could compile with from Xcode and expect to work just as well as GCC?
> It sure sounds like [LLVM is] a better solution than fat binaries and GCC or ICC.
LLVM a better solution?
More elegant, OK, but what is best is whichever has the best runtime performance, which in that order means ICC > GCC > LLVM, and whichever suits your needs: there is no Objective-C on ICC, so GCC > LLVM.
Now, apparently Apple is set on LLVM, so they will probably work on improving its performance. But just because OpenGL on LLVM is faster than regular OpenGL (they hand-tuned many parts) does not mean generic code is faster under LLVM than under GCC (currently I think it is quite the opposite).
You, sir, have that completely out of order.
LLVM is an aggressively-optimizing compiler that rivals GCC for speed in almost everything, and crushes it in situations where its most advanced features (LTO and inter-procedural optimization) come into play. There’s a reason that GCC was considering incorporating LLVM’s optimization framework rather than continuing to extend their own.
For comparison, look at the detailed build results on http://llvm.org/nightlytest/ , particularly at the GCC/LLVM column. That’s the ratio of execution time of the test compiled with GCC vs. compiled with LLVM. Notice how the majority are 0.9 or greater, and a significant number are >1?
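To make the column concrete, here is a tiny sketch (with made-up timings, not numbers from the nightly tester) of how that ratio reads: it divides GCC run time by LLVM run time, so anything above 1 means the LLVM-compiled binary was faster.

```python
# Made-up example timings in seconds; the real data lives on the
# nightly-test pages. ratio = GCC time / LLVM time, so >1 favors LLVM.
gcc_seconds = {"fib": 1.20, "sieve": 0.95, "matmul": 2.10}
llvm_seconds = {"fib": 1.10, "sieve": 1.00, "matmul": 1.90}

for name in gcc_seconds:
    ratio = gcc_seconds[name] / llvm_seconds[name]
    verdict = "LLVM faster" if ratio > 1 else "GCC faster or tied"
    print(f"{name}: GCC/LLVM = {ratio:.2f} ({verdict})")
```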
Thanks for the information, I’m glad to be incorrect.
Could you give me the proper URL for the benchmarks?
I couldn’t find the graph you’re talking about at the URL you provided; I tried clicking around, but frankly that website is a mess.
On the comment page of the announcement ( http://arstechnica.com/staff/fatbits.ars/2006/8/17/5024 ), percivall says that a Python compiled with LLVM is twice as slow as a Python compiled with GCC, so either he made a mistake or there are still some problems with LLVM.
Just wondering about LLVM and compiling an operating system like FreeBSD with it: does it work? Does it yield ‘nicer’ code that performs better? OpenSolaris is gradually being ported so that GNU GCC can compile it; has anyone tried compiling it with LLVM-GCC?
Being curious after the announcement, I looked at their mailing list and saw that Apple will also submit an x86-64 target to LLVM.
Interesting..
That said, I agree with pzad: Apple’s use of LLVM for OpenGL seems mostly useful for laptops with low-end graphics boards.
It would be interesting to see performance figures..
Actually, the announcement email that was linked indicates that this will be used sometimes even with high-end graphics cards when advanced capabilities are used:
LLVM is currently used when hardware support is disabled or when the current hardware does not support a feature requested by the user app. This happens most often on low-end graphics chips (e.g. integrated graphics), but can happen even with the high-end graphics when advanced capabilities are used.
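A toy sketch of the fallback logic that quote describes (all names here are hypothetical, not Apple’s actual code): dispatch a shader to the GPU when the hardware supports every feature it needs, and otherwise fall back to a CPU path, which is where a JIT like LLVM comes in.

```python
# Hypothetical illustration of capability-based fallback, not Apple's API.
def run_shader(shader, gpu_features, gpu_exec, cpu_exec):
    """Run `shader` on the GPU if it supports all required features,
    otherwise fall back to the CPU implementation (e.g. JIT-compiled)."""
    if shader["needs"] <= gpu_features:  # set containment check
        return gpu_exec(shader)
    return cpu_exec(shader)

# Toy usage: an integrated chip without fragment-shader support.
integrated = {"fixed_function", "vertex_shader"}
shader = {"name": "bloom", "needs": {"fragment_shader"}}
result = run_shader(shader, integrated,
                    gpu_exec=lambda s: ("gpu", s["name"]),
                    cpu_exec=lambda s: ("cpu", s["name"]))
print(result)  # the unsupported feature forces the CPU path
```

The same check explains the quote: even a high-end card takes the fallback path when an app requests a capability the hardware lacks.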
> On the comment page of the announcement http://arstechnica.com/staff/
> fatbits.ars/2006/8/17/5024 percivall says that a Python compiled
> with LLVM is twice as slow as a Python compiled with GCC, so either
> he made a mistake or there are still some problems with LLVM.
It’s hard to know what’s going on without more details. LLVM currently has two GCC-based front-ends (one based on 3.4, one based on 4). The latter doesn’t have optimizations hooked up by default, which means that, by default, you get almost completely unoptimized code.
In practice this means that users who download it and try it out without reading the documentation often get really slow code and leave with a bad impression. Those who know what they are doing have no problem getting good performance out of the compiler.
This is obviously a suboptimal situation, and has resulted primarily because we have been focusing on other short-term priorities. I expect that LLVM 1.9 will have good performance “out of the box”, along with other new features (e.g. x86-64 support).
-Chris
Chris,
I don’t know if you know exactly what Apple is doing with OpenGL & LLVM, but from what I have read, it sounds like, besides optimizing the GL branch code at run time (which sounds directly applicable to a lot of general OS code), they are also optimizing the shader programs themselves that are sent to the GPU… ???
Are you familiar with any other projects (KDE, …?) that are looking into linking against the LLVM compiler?
The claimed results from Apple re: OpenGL really seem to speak loudly about what LLVM can do…
I’m sorry, but I have no intention of saying anything about Apple’s plans for LLVM other than what has already been publicly announced. If you have questions about LLVM itself, though, I’d be happy to answer them.
I don’t know of any other large projects (e.g. KDE) that are using LLVM. I do know it is used by several commercial companies for a small variety of different projects though.
-Chris
No, they compile shaders to be executed on the CPU when the GPU doesn’t support those shaders.