This release makes great progress in C++20 language support, on both the compiler and library sides; adds some C2X enhancements; includes various optimization improvements and bug fixes; brings several new hardware-enablement changes and enhancements to the compiler back ends; and much more. There is even a new experimental static analysis pass.
GCC is already 33 years old. That’s one heck of a legacy.
It has been a while since I followed GCC (everyone had already moved on to LLVM). However, the question remains: are they still opposed to providing APIs for integration?
If I recall correctly, their rationale was preventing closed-source tool vendors from taking GCC piecemeal and using it in their IDEs. But that also prevented open source IDEs from deeper integration. Now Eclipse, VS Code and others use LLVM “language services” to keep an up-to-date model of the code in memory, perform better refactoring and code fixes, and overall have tighter integration between the code’s text representation and the tools.
GCC used to be the cool kid on the block, and it still supports more platforms than any competitor. It has excellent cross-compile support. That being said, it is sad to see the project losing popularity.
sukru,
That’s interesting. I didn’t realize this about GCC. I know LLVM is popular for integration and a lot of tools use LLVM’s intermediate code representation too.
https://hub.packtpub.com/introducing-llvm-intermediate-representation/
I still use it by default for C/C++ and more recently dlang.
There is inertia, and gcc still comes as default on Linux distributions. I too compile basic projects with gcc, since it is easier.
However, many larger projects have moved to LLVM. I saw Firefox, Chrome, and (part of) LibreOffice there. The LLVM website is a bit archaic and does not provide a proper list of clients. There is, however, a list of commercial contributors, which is quite large:
https://llvm.org/foundation/relicensing/
It is now faster for some programs: https://www.phoronix.com/scan.php?page=news_item&px=GCC-LLVM-Clang-Icelake-Tests
At work many projects evaluate both, and decide on a case-by-case basis. (Even if the IDE uses clang, it sometimes makes sense to compile the production binaries with gcc).
The Linux kernel is a big holdout, but it can now be compiled with Clang:
https://archive.fosdem.org/2019/schedule/event/llvm_kernel/attachments/slides/3330/export/events/attachments/llvm_kernel/slides/3330/clang_linux_fosdem_19.pdf
There are major advantages to using LLVM for faster development:
1) It gives proper, human-readable error messages for C++. It actually tells you what is wrong with your template usage.
2) It also gives solutions (did you mean this misspelled variable? did you forget to dereference? etc.). With IDE integration these become quick fixes.
3) What your tools think your code means and what it actually means stay the same. No need for a “lightweight parser” that might not share the same dialect.
I know GCC is going to be there, at least for a long while. Linux and “configure” based systems use it heavily. And given its important role as the main driver of open source movement, I want it to stay relevant.
GCC also still produces better machine code.
The answer is “it depends”. It used to be that GCC produced much better optimized code, but it is now a case-by-case decision.
For example, this one from LibreOffice:
https://www.phoronix.com/scan.php?page=news_item&px=LibreOffice-7.0-Prefers-Clang
“Skia is optimized to be built with Clang(-cl) and in CPU-based raster mode some operations are several times slower if built with something else”
GCC vs LLVM benchmarks go one way or another depending on the version, optimization level, and of course the code itself.
https://stackoverflow.com/questions/3187414/clang-vs-gcc-which-produces-better-binaries
If one day Red Hat or another large vendor realizes the Linux kernel runs 1% faster on LLVM than on GCC, I don’t think they would even blink before deciding to switch.
Having two decent compilers competing is actually a very nice thing to have at this moment.
From one of those posts it seems that Skia is faster with LLVM not because it generates better code, but because Skia uses inline assembly and they made it use LLVM's vector extensions but not GCC's.
jgfenix,
I don’t know if this is still true today, but several years ago I tested GCC’s auto-vectorization. I really wanted to avoid inline assembly, but the results were quite conclusive: GCC’s auto-vectorization often fell short of hand-optimized assembly. Intel’s ICC compiler did better, but licensing is obviously a con for their compiler.
It would be very interesting to test all this again today in a compiler shootout.
I see a lot of comparisons between clang and gcc, but I haven’t found any that include microsoft & intel compilers too.
LLVM's inline assembler and most of its vector extensions are just those of GCC, so it should do just as well with gcc. It just seems some projects are gung-ho about returning to the '80s and getting a compiler monoculture going, and will spare no excuse to crap up their code.
Woo hoo, this is the patch that adds static analysis! clang’s got it too, and it’s always good to get multiple opinions about your code.
I write C code. In all cases, my gcc binaries are smaller in bytes than Clang's. In the best of situations, clang comes within 8 bytes of gcc's final executable size.
Both do good jobs at flow control analysis and warnings about type mismatches.
One important area where LLVM falls way behind:
Architectures Supported by GCC:
Alpha, ARM, AVR, Blackfin, Epiphany (GCC 4.8), H8/300, HC12, IA-32 (x86), IA-64 (Intel Itanium), MIPS, Motorola 68000, PA-RISC, PDP-11, PowerPC, R8C/M16C/M32C, SPARC, SPU, SuperH, System/390/zSeries, VAX, x86-64, 68HC11, A29K, CR16, C6x, D30V, DSP16xx, ETRAX CRIS, FR-30, FR-V, Intel i960, IP2000, M32R, MCORE, MIL-STD-1750A, MMIX, MN10200, MN10300, Motorola 88000, NS32K, IBM ROMP, RL78, Stormy16, V850, Xtensa, Cortus APS3, ARC, AVR32, C166, D10V, EISC, eSi-RISC, Hexagon, LatticeMico32, LatticeMico8, MeP, MicroBlaze, Motorola 6809, MSP430, NEC SX architecture, Nios II and Nios, OpenRISC, PDP-10, PIC24/dsPIC, PIC32, Propeller, RISC-V, Saturn (HP48XGCC), System/370, TIGCC (m68k variant), TriCore, Z8000 and ZPU.
Architectures Supported by LLVM:
X86, X86-64, PowerPC, PowerPC-64, ARM, Thumb, SPARC, Alpha, CellSPU, MIPS, MSP430, SystemZ, WebAssembly, and XCore.
I guess it will always be behind, because of the license.
LLVM has actually dropped some back ends. They see more value in having more integration with a handful of key systems than in supporting more systems. And it’s a valid strategy – it would be very hard to add support for as many platforms as gcc has, but for the few they keep, they can easily extend their features. That also makes it harder for anyone working with LLVM to move back to gcc.
For me, cross-compiling is a major feature, and LLVM simply doesn’t support most of the processors I target, so gcc is my compiler of choice. And when gcc isn’t right for something, sdcc is. I’ve yet to actually use LLVM for anything at all.