The LLVM Project recently released a new version of their compiler, optimizer and code generators. LLVM includes a drop-in GCC-compatible C/C++ and ObjC compiler, mature optimization technology (including cross file/whole program optimization), and a highly optimizing code generator. For people who enjoy hacking on compilers and runtimes, LLVM provides libraries for implementing custom optimizers and code generators including JIT compiler support. This release is the first to provide beta GCC 4.2 compatibility as well as the new “clang” C/ObjC front-end, which provides capabilities to build source-to-source translators and many other tools.
It would be interesting to see another (non-GNU) compiler take on GCC. OSes like the *BSDs would benefit from such work.
There already is PCC. I think it's the design of LLVM that makes it newsworthy, rather than its being a possible replacement for GCC in the BSD systems. If you haven't already, I'd recommend looking at it more closely.
..or we could look at how LLVM *complements* GCC. Or we could focus on how open-source compilers rival *commercial proprietary* compilers, or we could look at how, for 20 years, a project started by Richard Stallman has been an integral component of BSD kernel distributions, and is set to continue for some time.
Oh, I forgot: it's much better to have another smackdown over the benefits of copyleft vs. permissive licenses.
I agree with your comment but I was talking about “CLANG” not the “LLVM-GCC” part.
* added a missing word.
Edited 2007-09-30 02:06
“or we could look at how, for 20 years, a project started by Richard Stallman has been an integral component of BSD kernel distributions”
There’s not a *single* line of GNU/GPL code in ANY of the *BSD kernels!
GCC simply *compiles* the BSD sources and produces binary code.
Thanks.
BSD kernel distributions. In other words, the various BSD OSes, which (as you mention) use GCC.
No, thank you. You misread me; if you *re-read the quote* you will see that it says “BSD kernel distributions”, which all contain GCC and GNOME and…well, you get my point, and all are *compiled* with GCC.
I make the point with no smackdown intended. I use a kernel that Thom regularly advertised as *using* code from the BSD kernels, and every Linux-based distribution comes with X (OK, not quite BSD).
Although I thank you again for *stressing* a different but supporting point as to why this should not be used in any smackdown over different kernel licenses.
I personally think there is more to celebrate in successful open-source applications that *compete* with proprietary ones, like Firefox (under the Mozilla license), or when *binary proprietary blobs* that damage all open platforms are finally replaced, as with Gnash from the FSF.
…It's not that I don't care about the license. I think there are better points to be made about LLVM as a replacement for GCC (it isn't; check the slides) on BSD distributions. In fact, if you click the link and look at the slides you will see quite a few *real* advantages to both Clang and LLVM over the corresponding parts of GCC, both technical *and* even some related to the license…and some disadvantages.
Edited 2007-09-30 02:45
“No, thank you. You misread me; if you *re-read the quote* you will see that it says “BSD kernel distributions”, which all contain GCC and GNOME”
Actually, GNOME is not part of the FreeBSD core; it is part of the ports collection. I cannot comment on the other BSDs, but with FreeBSD it is not part of the distribution. Similar to how Debian and Ubuntu have non-free repositories: it's easy to install, but not installed by default.
Don’t use the word “actually”.
I got into an interesting flame war over what was native vs. available to a distribution, and that was on a meta-distribution where *nothing* is the default install. I'm not having another one over what constitutes a port vs. a default install, because the bottom line is I don't care. I personally am more interested in the component parts of what makes up my own mythical meta-distribution, and in selecting the parts that fit my needs. If you want to run without a full GPL desktop solution of KDE, GNOME, or Xfce, that's your choice.
If you want to play some elitist distro rubbish with me, you have picked the wrong person. Go away.
Don’t you think you are overreacting a bit to bluenosejake’s very reasonable correction to your claim? Especially since he is verifiably correct. There is nothing elitist about making a factual correction regarding an erroneous claim that was central to your post.
I will repeat myself: I have begged and pleaded. Leave me alone. I *never* understood the reasoning behind your relentless attacks on the FSF and Richard Stallman. Now that I am experiencing them first hand, I know your reasons well enough.
This low level intimidation needs to stop.
Cyclops,
I can’t believe I am having to say this again. But there is no intimidation campaign going on. I’ve explained that to you many times before. Please relax and stop being so paranoid. Besides, this is a news site and message board. How in the nine worlds could I intimidate you here, low level or otherwise? Throw deadly words at you?
Edited 2007-10-02 00:22
“I’m not having another one over what constitutes a port vs. a default install, because the bottom line is I don’t care”
It’s easy to ignore the truth of something if you don’t care.
“If you want to play some elitist Distro rubbish with me you have picked the wrong person. Go away.”
I wasn’t playing any elitist games, I was correcting an error you made. If you can’t take getting corrected or being disagreed with, better cancel your internet account and turn your computer off.
Actually I wrote “I don’t care” because I don’t.
I wrote “go away” because I didn’t think it was either a good or strong point, and I didn’t want to get into a flame war about something I consider trivial…and not even relevant to the point my original comment made.
If you have a point, it's that BSD does not have a “fully functional BSD-only desktop environment” or that “BSD is not ready for the desktop”; you have made it. If you are trying to differentiate between a port…and a native application, it's lost on me. All I see is “work has been done so it works on that platform”…but beyond that I don't care. This is open source, whether it's BSD or GPL. Doom runs on every platform under the sun; it has been “ported to those platforms”…lovely. Isn't open source great? I would say that Doom runs “natively” on these platforms *once* it has been ported. I would even say that Doom is *cross-platform*. I would never say that Doom is a “port” unless I was trying to make a *point*.
So let's get back to your point: what is it!? It's not just you trying to make a point; I think the BSD distributions are. I know it's a port(sic) because when I explained my mythical meta-distribution I used BSD+X+Xfce+Firefox+OpenOffice as an example and looked up the software that runs natively on these platforms, and GNOME has a massive section on the FreeBSD site. I think the mentality of “this is my stuff and this is your stuff…but to function we need some of your stuff, so we will call that stuff a port, and just have a minimal install we can label an OS” is just desperate. I personally think it's backward thinking. I think the goal should be the 95% market share occupied by Microsoft rather than arguing about the 0.1% or 0.01% occupied by GNU/Linux, BSD, and Solaris based distributions. Think about my comment on “elitist games”.
If you have a truth(sic), it's not *MY* truth. If you think *that* your truth is worth your internet account…or even switching on your computer, more power to you, but as I said, *I Don't Care*; have your kernel+CLI-tools elitist pissing contest away from me. I'll just continue using my end-to-end, fully functioning desktop solution that comes under at least 50 different licenses.
Now you don’t have to “Go away” because I am. I don’t care for this pettiness.
Edited 2007-10-02 18:47
If you’ll recall, he was not pushing an agenda, but pointing out a significant and verifiable factual error you had made while pushing yours.
His advice about dealing well with being corrected and disagreed with is on the mark, and we could all benefit from it at times.
Edited 2007-10-02 19:02
“If you have a point, it's that BSD does not have a “fully functional BSD-only desktop environment” or that “BSD is not ready for the desktop”; you have made it. If you are trying to differentiate between a port…and a native application.”
Uh, anything in the ports system is a native application, or one that runs under Linux compatibility; the difference is that FreeBSD.org does not maintain it, just like MS does not maintain Firefox and makes no warranty or support offers for it. It’s maintained BY OTHER PEOPLE.
That’s the difference, just like GCC is maintained by other people. That doesn’t make it not native, it makes it somebody else’s responsibility.
I think the division between FreeBSD and Ports makes sense. It allows them to work on what makes FreeBSD great: the kernel and the userland utilities. Let other people supply the apps. That’s the way it works on other OSes.
“Now you don’t have to “Go away” because I am. I don’t care for this pettiness. ”
I’m not sure why I am being petty, I think I am being reasonable. I think you better read back through this discussion and see who has been acting petty.
lol. I take it all back. I thought I was in a flame war, but you can officially say that I have learned something. I did like this post. I don’t think it distracts from anything I’ve posted. I actually like my last comment a lot apart from a random “a”.
“FreeBSD and Ports make sense”: the way you describe it, it *does* make sense to separate those two, and I understand your second comment. But the word “Ports” doesn't say that; “Unmaintained”, “Unsupported”, or “Not invented here” does. For your argument to work, *every single piece of software at shareware.com* is a port to the Windows platform; in fact *anything* not on the Recovery(sic) Disk is a port. It just doesn't fit in today's open-source world.
You're right, it is *me*, not you, because I think *differently*. Your post has actually raised two points with me, and one for you.
1) I use the word GNU, which is *contentious*. I actually use it for the *freedom* (I really hate that word sometimes) that I'm given, although I should say control. It made me think how sad today's Linux-based distributions would be if Hurd had succeeded. The FSF having total control of *everything* is a bad idea. The fact that companies, organizations, and individuals with different viewpoints enrich *my* computing experience is wonderful. A situation like that of BSD, as you describe it, would be unbearable.
2) My other point is that *you* are the norm; I am not. I don't know if it's because my first experience of GNU was Slackware, with that awful, overwhelming installation where I had to *choose* what I wanted to install…when I recognized nothing. I do not differentiate between distributions, because the individual packages that make up a distribution are greater than the distribution as a whole, and the time and effort put into creating a distribution is trivial compared to that of the software it contains. The reason I was curt with you, apart from the word “actually” (which I think is a nonsense), is that I once ended up in a similar flamewar that mirrors this one, where an individual tried to convince me that doing an “emerge package” is vastly different from “./configure; make; make install”, or that adding an overlay and doing an “emerge package” on a *source-based distribution* is vastly different, all going on while I had my usual stalkers with me. The difference between me and you is that you have some *mythical* idea of ownership through maintenance, when the only thing resembling *ownership* is those individuals, organizations, and companies that have copyright, and even then on a very loose idea of copyright.
3) Look at DistroWatch and the lists of packages that the CDs come with. There are specialist distros tailored to particular functionality or ideals, but the reality is the differences come from a mix of various packages; I mean specifically kernel+CLI, desktop, package manager, applications, proprietary garbage, and the patches and fixes to make it all work as a coherent whole. If you look down the list of packages you notice that they are *all* the same; one might have 7.2 instead of 7.1, or 1.0.22 instead of 1.0.23, or 2.6.22 instead of 2.6.21, and they all feel and look and work the same…in fact the biggest difference is the desktop…and I don't think that's a big deal. Glance around at the various screenshots of different wallpapers/icon sets running Quake; it's kind of sad that they look the same on every distribution's web site. I actually cringe when I see articles like “Linux is rough around the edges”, because you see one individual trying 20 different distributions and having *exactly* the same problems with *all* of them. I'm actually at a loss when people attack other distros when in reality the difference is that the distribution they have chosen is probably the one suitable for them; everything else is defaults, wallpaper, and compatibility of software from various sources, although I suspect the reality is that for a well-*maintained* distribution from a major player you are choosing between KDE and GNOME.
My final point, which is in direct contrast to what I was taught and to the definition in the handbook: Microsoft, continually expanding its monopolist boundaries, has redefined what an OS is, extending it beyond kernel + CLI + desktop to media center + media player + virus checker + firewall + IM + various APIs, plus de facto Office, look and feel, Internet defaults, a development kit, and a move towards Internet applications. The “stuff that's always in memory” definition has gone. You don't see Vista users on here comparing kernels beyond statements that should start with IMO. They compare Vista Ultimate + Microsoft Office 2007 + Adobe products + whatever other monopolistic product you like + worse-than-console gaming: a set-up costing thousands of pounds, which people can only afford when *someone else is paying(sic)*. They compare a full desktop experience at top dollar.
Now, I spent a lot of time on that part of the post, which I am now going to spell/grammar check. Some of it is long-winded because I have stated how I *think*, and to get stuff(sic) organized in my own head. I am not advocating that you think my way; on this one topic I am well aware I am in the minority…it even clashes with this site's *name*. It's also long because you informed me, whether it was intentional or not, and these days that is rare; I have replied in the spirit I hope it was intended, and have intentionally ignored your emotive language.
The bottom line, the summary, is that there are *only* three OSes(sic) in my eyes, although a better description would be “stuff installed on one's PC”: Microsoft + monopolistic applications (90%+ market share) + short-shelf-life applications + free/share/adware; Apple (3%+ share) + monopolistic applications + less free/share/adware; and packaged-together, loosely collected visible code from a variety of sources (I call it GNU; call it BSD if it makes you feel better, but I'm not explaining why here), coming in at 0.8%–4% share.
I hope you appreciate this post. It should explain my posts, why I post like I do, and why my truth(sic) is perhaps very different to yours. My truth is certainly flawed due to the portability of FOSS, and even more so with BSD-licensed code. The only real truth, the one I keep mentioning, is that the divide is Microsoft vs. the little people, and if you have a market worth over 100 million, expect them to spend billions to replace you.
Edited 2007-10-03 01:30
I’m going to focus on this interesting bit about names.
I do find the practice of calling all of FOSS, in general, GNU, contentious because it:
1) Over-assigns the credit to one party.
2) Is a terrible name. (In my opinion, of course.)
Calling it all Linux also over-assigns the credit, of course. But the kernel does seem to be a piece of software that is particularly difficult to get right, has a catchy name, and that name is already well established, which imbues it with immeasurable marketing value.
I’m a tech, not a salesman. But as the consulting company I work for has become Windows-Focused over the years, I have to do my own “sales” these days. I have to sell existing Windows-using businesses on Linux/FOSS/GNU, whatever you want to call it, without official support from the company.
I, too, sometimes dislike the word “freedom”. Not because I don’t believe in freedom. But because while in politics using the term profusely will get you votes, in business using it profusely will get you pegged as a kook. So I use “Linux”.
I firmly believe that the more people who are exposed to “Linux” at work, the better off we, Linux/GNU/FOSS are.
I suppose that if I were in a different situation, I might use the term GNU. But that little “marketing” section of my brain which has been turned on through practical necessity keeps saying “Yuk! What an awful name!”.
One thing’s for sure. The dominance of the English language, while facilitating collaboration, has also thrown us a nasty curve by associating the word “free” with “cheap”.
I’ve thought a lot about it. And I just can’t come up with anything that I like better than “Linux” for the OS and “Open Source” for the class of software. I use “FOSS” here at OSAlert for convenience, and include the “F” because, well, it has a more “inclusive” feel to it than just OSS. Though I sometimes use that, too.
I’m interested in what your concerns are regarding the word “freedom”.
Edited 2007-10-03 02:17
I will repeat myself: I have begged and pleaded. Leave me alone. I *never* understood the reasoning behind your relentless attacks on the FSF and Richard Stallman. Now that I am experiencing them first hand, I know your reasons well enough.
This low level intimidation/harassment needs to stop.
I wouldn’t really say I am the norm. I prefer FreeBSD over Linux. I wasn’t talking at all about how you install packages/apps/whatever on a particular OS; I was talking about what is part of FreeBSD, and what isn’t. It’s all about what works best for you. BSDs give you tons of choices, just like GNU/Linux. The only difference is that FreeBSD makes a distinction between what is its responsibility and what is somebody else’s. This allows everyone to focus on what they do best, and I like it. You may not, but you don’t have to.
I don’t expect you to like it. I expect you to choose what works best for you. I wasn’t at any point trying to say FreeBSD is better than GNU/Linux, or any other thing other than to correct your original error, which has now grown horribly off-topic.
BSD IS NOT GNU. BSD uses some GNU software, and uses GCC to compile its code, but it is from a different source. It also uses X.Org (from, well, xorg), as well as mountains of software from other sources.
BSD is derived from 386BSD. The FreeBSD project began in 1993 from code originally written by Bill Jolitz (and others). Linux is based on the Linux kernel, originally written by Linus, and uses the GNU utilities and userland, as well as X and almost all the same software as BSD (more, actually, as GNU/Linux is better supported by corporations as well as developers).
Linux is distributed under GPL. BSDs are distributed under a BSD license. These are distinct licenses with different goals.
Both are Free Software, and I don’t know enough to comment on which license is better, and in fact don’t care. Because of both these licenses, I have a multitude of choice in how I build a system, and I am grateful.
Don’t try to turn a simple correction into an ideological war, as I don’t want one, and don’t need to defend MY choices. I use what I want; right now I use Debian, FreeBSD and Windows XP. That could change. I make my decisions on what software to use based on what capabilities I need at any given time. That’s what Free Software allows us to do.
I understand where you’re coming from but nobody ever said that there’s GPL’d code in BSD. In fact, if there was any GPL’d code used in say the NetBSD kernel, it would no longer be BSD.
On the other hand, it is with the help of a fellow open-source project, namely the GNU C compiler, that other open-source projects, in this case the BSDs, can build their systems without having to buy expensive compilers. As far as I’m concerned, that’s a win-win situation.
I’m happy to hear that there are compilers being released that are more in line with the BSD philosophy, but that hardly means that GCC should not be valued for its prior usefulness.
Pardon me, but it is time to stop propagating this absurd myth.
If there were any GPL code in the NetBSD code, the kernel would still be BSD, but it would be infringing the GPL license until that GPL code was excised. Legal measures could be taken, fines may apply, lawyers would get rich(er), but the BSD license would continue to apply to BSD-licensed code just the same.
Mate, it’s not that.
Talk to Sun, Apple and many other vendors who have tried to get patches merged which fix bone-headed, stupid bugs – the GCC maintainers refuse to merge them. There isn’t a thing they can do but either absorb the extra costs of manually patching and compiling, or simply stop providing GCC for their platforms – then we all suffer because of it.
The GCC maintainers don’t want to accept that there could be some issues that need to be addressed; GCC to them is like a sacred cow. Perish the thought that there are bugs, and that patches which actually help develop GCC might get submitted by those outside the “Illuminati”.
I just hope that if the GCC developers keep messing companies around, the likes of Sun turn around and throw their weight behind LLVM.
Talk to Sun, Apple and many other vendors who have tried to get patches merged which fix bone-headed, stupid bugs – the GCC maintainers refuse to merge them.
Can you provide an example of this happening?
Look through Sun and Apple mailing lists.
Those are rather huge, and I think the burden of proof here rests with you. Can you really not point to an instance of it yourself?
I didn’t want to use any people’s names, as they might not want their names used in such a debate; but here is one link:
http://www.opensolaris.org/jive/thread.jspa?threadID=35812&tstart=0
Then look through http://bugs.opensolaris.org at the variety of GCC bugs.
It has nothing to do with ‘burden of proof’.
So I see wesolows saying the same thing that you say, also without providing any proof. Other than that, there doesn’t seem to be anything there.
As I said, I don’t think I should have to search through OpenSolaris’s bug database just to prove your point for you, since I don’t really buy it to begin with.
Show me an instance of a patch that exists, that fixes a bonehead mistake, that wasn’t rejected for a good reason by the GCC folks.
If you already found an example, why not just post the links?
Edited 2007-09-30 19:56
Read through the comments on the Undeadly.org story regarding integrating PCC into OpenBSD. There are a handful of very long posts on there that go into lots of detail on patches that people have made that have been ignored. Patches that fix issues and long-standing bugs in GCC, patches that fix regressions. And patches that have been summarily ignored.
I’d post the link to the story, but for some reason, we can’t access undeadly.org from work (most likely the provincial network folks are blocking it).
It is my opinion that there should be at least two major FOSS players (with a strong commitment to standards, interoperability, and compatibility) in any given area. GCC needs some competition. Sure, multiple projects result in a division of resources, and some people don’t like that. But look at how dividing the resources between XFree86 and X.Org worked out for us.
I agree, GCC really needs some competition. At the worst, a few resources are wasted but I think history has shown that it almost always leads to better products.
I’m not a programmer (I never was any good at it), but this seems to me to be a very interesting and welcome project, not least considering its license.
This bit is a tad OT, but I’d like to know what you programmers out there think about these next questions.
Are commercial compilers going the way of the dodo? It seems to me that any OS company that wishes to charge for their compiler has some stiff competition on their hands. Case in point: see SCO with UnixWare and OpenServer.
As for proprietary solutions, i.e. free but closed source, are they on the way out? I know that MS has some fantastic solutions that are closed source, but otherwise, is there anything else out there that is of interest?
Cheers.
Yeah. There’s ICC (Intel’s compiler) and the Sun Studio Compiler series. Both of these are better for their preferred targets (x86 or SPARC) than the MS compiler or GCC. The MS compiler until recently produced slightly better code than GCC. By now it’s probably neck and neck, but MSFT is doing a pretty major revision of their compiler backend right now that may (or may not) bring it back into the lead.
A real measure of how good a group is at producing compilers is what they do with Itanium. Itanium is heavily dependent on the compiler to generate explicitly parallel code, so compiler quality matters a lot. The first compiler for such EPIC processors was the Multiflow compiler (currently owned by Reservoir Labs and called Blackbird).
I’m not sure that codegen in the compiler is going to improve a whole lot in the future. The next direction will likely be theorem-proving compilers that output information about the purpose of the code they’re generating and verify the type-safety, thread-safety, and other properties of the code. You can see a rudimentary form of this already with the PreFAST /analyze extensions in MS’s compilers. If LLVM can bring this to open-source, then it will be a worthwhile replacement for GCC (and a good way to unshackle open-source from the FSF).
“If LLVM can bring this to open-source, then it will be a worthwhile replacement for GCC (and a good way to unshackle open-source from the FSF).”
I find this comment from a user like yourself, a *strong* advocate of the Microsoft *proprietary* platform, somewhat ironic. I am pleased that you have again reinforced the fact that Richard Stallman created GCC and that the FSF has steered the project for *20 years*, but I suspect it is another off-topic slight against the FSF.
I would love to see justification as to why a project that involves mutual collaboration between *companies* is better controlled, in whole or in part, by a single company instead of an *organization*. Especially since, by your own point, GCC is a cross-platform compiler, unlike the proprietary one you advocate, which is unavailable on any platform but Microsoft's own; yet you're advocating control of the compiler by companies who have an interest in only *their* platform.
Edited 2007-09-30 15:39
I believe that allowing proprietary compiler backends can allow hardware firms to produce new chips with a smaller investment. GCC could have been the compiler that would plug into chip-specific codegen modules, but it was specifically designed to disallow this, because the FSF people are so small-minded and ungenerous (hint: even if a commercial company takes your code and tries to sell it to people, you haven’t lost the code you’ve written, and with sufficient patent-lefting, you can always catch up to their improvements).
LLVM is designed in a better way to allow plugins to be made. It could become the backbone around which people bring new kinds of processors with differing codegen requirements to market. Even without that, as sbergman says, competition is always a good thing.
“I believe that allowing proprietary compiler backends can allow hardware firms to produce new chips with a smaller investment.”
I think the cost of writing compiler backends would be insignificant compared to the cost of developing new chips. Do you have reasons to believe the contrary?
Yeah…academic groups produce new chip ideas now and then. See Sun’s Niagara, for example. It was developed by Sun, but the initial version was done by a professor and his grad students.
Intel’s Terrascale was also made by a small group. Look up Cavium’s OCTEON as well.
Smaller experimental chips are more likely to need compiler support to test out their design ideas before going into mass production. If the researcher has a backend IR with some non-chip-specific optimizations, she has a great basis to work with for designing a specialized processor.
I think the cost of writing compiler backends would be insignificant compared to the cost of developing new chips. Do you have reasons to believe the contrary?
It depends on what you’re trying to do.
If you’re trying to create a clockless chip, I’m sure the chip development is far more expensive than the compiler development.
But look at the Itanium. It’s a chip with a lot of theoretical performance. Getting that improvement requires vastly more work from compilers than on other architectures. No one has really managed to make the chip shine yet.
“I believe that allowing proprietary compiler backends can allow hardware firms to produce new chips with a smaller investment. GCC could have been the compiler that would plug into chip-specific codegen modules, but it was specifically designed to disallow this, because the FSF people are so small-minded and ungenerous (hint: even if a commercial company takes your code and tries to sell it to people, you haven’t lost the code you’ve written, and with sufficient patent-lefting, you can always catch up to their improvements).”
I’m shocked and appalled at this comment; do you seriously believe these words? You openly(sic) advocate Vista, an OS that abuses smaller companies’ ability to compete in many areas: browser, chat, media center, media player, parental controls, virus protection, firewall. That’s ignoring all the proprietary formats, protocols, etc. involved. That’s ignoring the costs to smaller firms wanting to bundle Windows, the marketing pressure on OEMs to remove alternatives to Microsoft’s own bundled products, small OEMs not being able to compete with large ones…or even, for hardware producers, the cost and complexity of conforming with Vista’s overreaching DRM. I could go on. I wouldn’t label it “small-minded and ungenerous”; I won’t even label it business(sic). It’s *criminal*.
Richard Stallman *wrote* GCC 20 years ago, and it was faster than *commercial proprietary compilers* costing thousands of pounds. What did this “small-minded and ungenerous” individual do? He open-sourced it and put it under a license that ensures it will *always* be open for *everyone*. Anyone with the means to help out with GCC can do so…it can even be forked. I can even use it as the basis of my own compiler…or I can study it to see how a compiler works…and, well, I just think you are being a little silly.
I’m glad that you’re focusing on sbergman’s usual anti-Linux propaganda: “competition is good.” It’s a shame this only applies to software you don’t use/advocate; I say advocate, but I mean attacking individuals who have *already* created competition in the compiler market with an open-source alternative.
The sad thing is, I wish you really understood the phrase “competition is always a good thing”; then perhaps you wouldn’t spend your time defending every monopolistic abuse that Microsoft commits. Then, and only then, would your computing experience be on a par with mine.
Why is it always Vista with you? This article is about COMPILERS, not OSes. It’s not even about proprietary compilers.
GCC is a fine compiler for its purpose. Its architecture is not as inherently customizable as LLVM’s, and its license is not as free, so it’s good to see a competitor arising.
You must really enjoy your computing experience a lot to invent problems that don’t really exist in other people’s computing environments.
“””
Why is it always Vista with you?
“””
I can’t help but notice that Cyclops has a few more points of obsession than just Vista. Also “Vista Users”, which he uses as a sort of catch-all derogatory term for anyone who doesn’t agree with him. (I’ve asked him why it’s always “Vista Users” and whether using XP was OK. But I never got a response.)
He’s also obsessed with accusing people of playing something called “smackdown” with him when they point out factual errors in his arguments. (A search of OSAlert user comments for “smackdown” and “by cyclops” yields 87 results.)
He also has an odd obsession with yours truly, as evidenced by the post you were responding to. Interestingly, he does not include me in the “Vista User” catchall category of evil astroturfers who are out to smack him down. It would appear that in his eyes I am a Linux-Hater. Which, if you are at all familiar with my posting history, you can see is a claim which is simply too bizarre for words.
All in all, and speaking as an avid fan and advocate of Linux, I find it easier and more pleasant to communicate with some of those here who prefer Vista, even though I personally dislike Microsoft, Vista, and XP for that matter, than I do with that nut case who claims to be a Linux advocate. (Albeit a very poor one.) He seems to have too many personal vendettas going on to allow rational discourse.
Edited 2007-10-02 00:26
Yeah.. Cyclops is pretty funny. I try to avoid responding to him (but I’m pretty bored right now since I’m stuck waiting between graduation from school and starting a new job).
It’s always good to meet a reasonable and knowledgeable person with shared interests, even if he’s on the opposite side of a stupid ideological divide. I guess you can start calling cyclops a “Gentoo User” (or “ricer” for short) and see how he responds to that. (I looked him up on google a while ago and it seems like he was making a feature request for some “game panel” in gentoo).
It would be really nice if Adam could add an “ignore user” feature to the forum software that would just automatically collapse posts by certain people. It would be good for the NotParker, Moulinneuf sorts who never really seemed to have much to contribute beyond emotion. It’s too bad you can’t have an internet forum without seeing instances of John Gabriel’s Greater Internet F–wad Theory.
Thanks for your positive contributions to the discussion.
Hi PlatformAgnostic,
I would stop short of declaring the ideological divide to be “stupid”. The odd analogy of a cell membrane comes to mind. It should be pliable, not rigid. And it should let various things pass in and out when such is beneficial, while still acting as a barrier when that is appropriate and beneficial.
I feel that I am doing good in this world through my advocacy. But one has to keep things in perspective. A cell whose membrane is completely impermeable dies.
Nice to see the work Apple has done.
In my opinion, GCC has a near monopoly on the compiler market, and that’s not healthy.
Case in point: the Linux kernel makes liberal use of GCC extensions, so other compilers are out of luck. Therefore the Intel C Compiler implements all kinds of GCC extensions and defines __GNUC__! Exactly like Internet Explorer advertising itself as “Mozilla” in its user-agent string. Same goes for glibc: glibc headers are broken if __GNUC__ is not defined. Just look at the Tiny C Compiler development list for the workarounds they had to use to consume glibc headers. (TCC refuses to define __GNUC__, but what a pain it causes.)
Other free software projects too often depend on non-standard features of GCC, effectively disadvantaging alternative compilers. But the kernel and libc are the most serious offenders.
I predict a rough and difficult road ahead for clang, not only because of the difficulty of implementing the C language standard, but also because of fighting free software that won’t compile with anything other than GCC.
The great thing about open source software is that anyone can patch the software so that other compilers can compile it. If other compilers start gaining users I’m sure that will happen, but for now I think it is understandable given almost everyone uses GCC. I know that the KDE project is currently fixing its software to work with Sun Studio since they’re officially supporting Solaris with KDE4.
Edited 2007-09-30 04:17
If the Linux kernel developers had any interest in supporting compilers other than GCC, GCC-specific code would never have been accepted into the code repository in the first place.
It can also lead gcc users into bad habits. I taught myself C using gcc; whenever I had a problem I’d search Usenet and the web and find solutions, often posted by people who also used gcc. As such, things were fine and I learned many things and started getting a good grasp of C (or so I thought).
Then, later, I ended up writing C code at a place that didn’t use gcc, but a different and much stricter compiler. All of a sudden many of the things that had worked fine for me all along gave a bunch of strange compiler errors. So I ended up having to re-learn much about C which I thought I already knew.
So use ‘-Wall -Werror -std=c99 -pedantic’. You *can* make GCC stricter.
I know now that I can, but it isn’t the default, and when I was learning C no tutorial or book told me to do so, or explicitly mentioned that if I didn’t do that I wouldn’t be learning ‘correct’ C but a highly non-standard GNU C.
I’m not really blaming the gcc team as such though, since I actually prefer the non-standard gnu C because it makes many things easier. It’s just that one has to keep in mind that it is non standard and perhaps things could be done to make that more clear.
I write a lot of software, mostly in Java. What is the benefit of LLVM for C/C++ developers? Is it to run your code in a closed environment that can be more easily analyzed than running the program straight on the CPU?
The homepage states:
LLVM is “a compilation strategy designed to enable effective program optimization across the entire lifetime of a program.”
What does that REALLY mean for programmers without any of that fancy jargon?
“What is the benefit of LLVM for C/C++ developers?”
It’s dead simple. LLVM generates faster code (better optimization) in less time (better optimization architecture).
There are interfaces to JIT and optimization passes, but they aren’t interesting unless you are a compiler developer.
An analogous question would be: what benefit does the JVM have for a Java developer? In the same way one can natively compile Java programs, one can run C/C++ on a VM.
It means it can use the “classic” optimizations (at the source level), link-time optimizations, and profile-guided optimizations (at run-time).
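As a sketch of what optimizing across “the entire lifetime of a program” looks like in practice, here is the general shape of the LLVM 2.x-era pipeline. The tool names come from the LLVM toolchain of the time; exact flags are from memory and may differ by version, so treat this as an illustration rather than a recipe:

```shell
# Compile each translation unit to LLVM bitcode instead of a native object.
llvm-gcc -O2 -emit-llvm -c a.c -o a.bc
llvm-gcc -O2 -emit-llvm -c b.c -o b.bc

# Link the bitcode files; because the optimizer still sees the full IR,
# it can inline and optimize across a.c and b.c (link-time /
# whole-program optimization), not just within one file.
llvm-ld a.bc b.bc -o prog
```

Profile-guided and run-time optimization fit the same model: as long as the program is carried around as bitcode, the optimizer can keep working on it after the traditional “compile” step is over.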
Clang is an interesting project but it probably has problems in some cases. Translation from C++ to ObjC is not as trivial as it seems. ObjC is a superset of C but C++ is not really. Going from C to both languages is trivial but going back presents a challenge.
Seeing how they solved the problem would be pretty interesting.
Oh and long live competition. Another OSS compiler is more than welcome.
“””
Clang is an interesting project but it probably has problems in some cases. Translation from C++ to ObjC is not as trivial as it seems. ObjC is a superset of C but C++ is not really. Going from C to both languages is trivial but going back presents a challenge.
“””
Nobody is trying to translate from C++ to ObjC. Clang “translates” C++, ObjC and C for LLVM. Maybe you’re thinking of ObjC++, which allows mixing of ObjC and C++?
http://en.wikipedia.org/wiki/Objective-C#Objective-C.2B.2B
I wonder how long it will take until someone writes a gentoo ebuild and tries to build their whole system with LLVM to get more optimizations.
(I would’ve done it myself but I don’t run gentoo anymore.)
I suspect it would take a little bit more than one ebuild: almost certainly a new profile would be needed, new eclasses, not to mention rewrites of all the gcc and autotools wrappers Portage uses. Then of course they’d need to craft their own install medium so they can bootstrap the whole system with LLVM. And that’s ignoring the issue mentioned above with the kernel and other key bits of infrastructure being gcc-dependent…
Ricers may enjoy tweaking but I doubt there are many willing to take on a job of that scale. Not saying the Gentoo team themselves might not try it — they’ve always been up for a challenge, just look at the FreeBSD and MacOS ports — but don’t expect it to happen in a hurry.
Not necessarily. LLVM has a front end that is based on gcc and works just like it, except that it uses LLVM as the back end. So theoretically it would be as easy as replacing gcc with llvm-gcc. Of course a lot of stuff would probably not build, but it won’t be as difficult as switching to, for example, icc or another completely different compiler.
I’d say it creates different code. Sometimes it’s faster, sometimes it’s slower (take some language shootout benchmarks and you’ll see: LLVM is about 20% faster for nbody and almost the same factor slower for spectralnorm on my system).
Does anyone have a clue why it is used so little in practice?
Why doesn’t Mono use it as its JIT, or some Java or Ruby implementation? Somehow it would seem very natural and clever to use a good optimizing JIT instead of inventing and maintaining your own less optimizing one (this is IMO particularly true for Mono). But I don’t know any OSS project (except PyPy) that uses it, and the list on http://www.llvm.org/Users.html isn’t too impressive.
The reason it is used so little in practice is that it is relatively new technology. I expect to see a growing number of usages in the future.
Ask that question again, after XCode 3.0 and Leopard are released.
I don’t know about other projects (I guess if a new project was started right now that needed an execution engine, they should consider llvm), but I can tell you why Mono doesn’t use llvm.
First, when Mono started writing its own JIT, llvm was very immature (we’re talking about six years ago): it is now mature as a compiler, but still very much unproven as a JIT.
Second, Mono supports more architectures than llvm and has done so for several years, so unless llvm catches on, we’re not going to lose supported architectures.
Third, llvm is HUGE: just the x86 specific code (in an optimized build) is as large as the whole Mono runtime (which includes 500 KB of locale data). This makes it unsuitable for the environments where Mono is already used, like embedded devices. The bloat factor translates also to runtime memory usage and JIT compilation speed. While llvm is definitely faster than gcc, that doesn’t say anything when you need to JIT code as fast as possible (and when you need to load 2 MB of code in the cache just to jit a single method it starts to drag everything down, not including the data structures needed by the JIT process itself).
Then there are various other reasons, like the tight control the Mono runtime needs over some generated code (the llvm exception handling mechanisms are inadequate for a full-featured runtime like Mono), the integration with the GC (llvm has some embryonic support for this), and the need to control runtime behaviour precisely with respect to memory and stack usage (the amount of stack space the JIT can use is small and it needs to be deterministically fixed: using llvm here would mean having a black box with no control over it), plus some low-level details that I explained some time ago in a posting to Lambda the Ultimate that you might want to check.
Overall, for Mono to switch to llvm, a lot of work would be needed: implementing all the missing features in llvm, de-bloating it, writing the ports for the missing architectures and finally writing the backend to target llvm inside Mono. This is easily 2 years of work just to get something working correctly (and some things would be still suboptimal, it’s hard to debloat the x86 support code from 1.5MB as it is in llvm to the 100KB it is in Mono). In the same amount of time we can add more architectures to the supported list or write more optimizations that are suitable for a JIT compiler.
The reason that Python and Ruby don’t use LLVM currently is that their runtime requires additional capabilities that LLVM doesn’t supply as a low-level platform. See http://hlvm.org/ for details about its sister project High-Level Virtual Machine.
“Designed to enable effective program optimization across the entire lifetime of a program”, and not the other way around; that sounds promising. I will not greet Bitter Melon for not accepting my code
Edited 2007-09-30 09:13
I don’t understand the point of claiming that the FSF isn’t generous enough after developing a near state-of-the-art compiler for a huge number of platforms and giving it away for free. Honestly, gift horse and whatnot!
Agree 110%. The work by all the folks is monumental.
LLVM provides the necessary infrastructure for adding garbage collectors to languages, but what about multithreading support? It’s important nowadays that CPUs have multiple cores and most apps need to exchange data over the network…