As we all know, Mac OS X has support for what are called ‘fat binaries’. These are binaries that carry code for multiple architectures – in the case of the Mac, PowerPC and x86. Ryan Gordon was working on an implementation of fat binaries for Linux – but due to the conduct of the Linux maintainers, Gordon has halted the effort.
The project was called FatELF, and was already in quite an advanced state; in fact, a full FatELF version of Ubuntu 9.04 can be downloaded from the project’s website.
The format is very simple: it adds some accounting info at the start of the file, and then appends all the ELF binaries after it, adding padding for alignment. The end of the file isn’t touched, so you can still do things like self-extracting .zip files for multiple architectures with FatELF. FatELF lets you pack binaries into one file, separated by OS ABI, OS ABI version, byte order, word size, and most importantly, CPU architecture.
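As a rough sketch of the intended workflow – the tool names below are recalled from the FatELF project’s utilities and should be treated as an assumption rather than a reference – combining two per-architecture builds into one file might look like this:

$ gcc -o hello.x86_64 hello.c                  # native build
$ arm-linux-gnueabi-gcc -o hello.arm hello.c   # cross build
$ fatelf-glue hello hello.x86_64 hello.arm     # tool name assumed: writes the FatELF header, then both ELFs, padded for alignment
$ fatelf-info hello                            # tool name assumed: lists the embedded records (arch, ABI, word size, offset)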
This approach to binaries has a lot of benefits. For instance, distributors could ship a single DVD which would work on all architectures the distribution in question supports, making it easier for users to pick the right .iso to download. It can even go a lot further; you could ship a binary which could work on both Linux and FreeBSD out of the box.
It all sounds pretty darn impressive and a major step forward, but the Linux kernel maintainers “frowned upon” Gordon’s patches. “It looks like the Linux kernel maintainers are frowning on the FatELF patches,” he writes, “Some got the idea and disagreed, some didn’t seem to hear what I was saying, and some showed up just to be rude.”
He further detailed:
I didn’t really expect to be walking into the buzzsaw that I did. I imagined people would discuss the merits and flaws of the idea and we’d work towards an agreeable solution that improves Linux for everyone. It sure seemed to be going that way at first. Ultimately, I got hit over the head with package management, the bane of third-party development, as a panacea for everything. After a while it sort of felt like no one was actually understanding a single thing I said. A lot of it felt like willful ignorance, but I suppose I’m biased.
Even if Gordon did get the kernel guys to agree to include his patches, the next hurdle would be glibc. Ulrich Drepper, glibc’s maintainer, isn’t particularly keen on the idea either. “It is a ‘solution’ which adds costs in many, many places for a problem that doesn’t exist,” Drepper states, “I don’t see why people even spend a second thinking about this.”
I’m actually quite sad Gordon is halting the FatELF project. It seems like a very worthwhile addition to the Linux world, and it could make the lives of distributors and users alike a lot easier.
OS X binaries are called universal binaries, not fat binaries.
Universal binaries are an implementation of the concept of fat binaries.
That might be correct for NeXT, but Mac OS has also had FAT binaries since the days of the 68000 to PPC conversion, and they were implemented as this guy did – resources within a single file. If you look at how NeXT solved this problem, it is a separate physical binary within the .app folder hierarchy that represents the application. This is obviously conceptually similar, but not the same. From what I read, the guy had actually created a single binary file with a method for loading the correct binary section for the architecture/ABI being used. This is more like what Classic MacOS did – though people might wave the “resource fork” card at me, I guess. If you play the resource fork card, then this isn’t the same at all and this guy probably was barking up an odd tree. To me it seems like the same idea though.
Fat binaries à la legacy MacOS have always been part of NeXT’s multi-architecture strategy; applications that shipped with support for NextStep X86 and NextStep 68K would generally have used fat binaries, rather than separate binaries in the package.
Presumably they did this to allow interoperability between NS variants at the “Unix level” as well as the more generic “frameworks level”.
Having the ability to package multiple copies of binaries and resources was important for allowing the NextStep (later OpenStep) frameworks to run on any OS, so that for example they could deliver executables in ELF format, or Windows PE format as well as their own Mach-O binaries in one package.
68k/ppc fat binaries were implemented by storing the 68k code in the resource fork (as had always been done) and the ppc code in the data fork.
In OS X (and OpenStep), a mach-o file can contain multiple architectures. One binary, multiple architectures. eg:
$ file /Applications/SubEthaEdit.app/Contents/MacOS/SubEthaEdit (Bundle executable)
/Applications/SubEthaEdit.app/Contents/MacOS/SubEthaEdit: Mach-O universal binary with 2 architectures
/Applications/SubEthaEdit.app/Contents/MacOS/SubEthaEdit (for architecture ppc): Mach-O executable ppc
/Applications/SubEthaEdit.app/Contents/MacOS/SubEthaEdit (for architecture i386): Mach-O executable i386
$ file `which ls` (unix executable)
/bin/ls: Mach-O universal binary with 2 architectures
/bin/ls (for architecture x86_64): Mach-O 64-bit executable x86_64
/bin/ls (for architecture i386): Mach-O executable i386
OS X binaries were fat before they became universal. PPC 32-bit and PPC 64-bit code, and even in some cases targeting different code for G4 and G5. “Universal” binaries just added Intel to the fat mix, that was all.
New name. Old History with NeXT.
Quad FAT from NeXTSTEP: m68 NeXTStation, x86, SPARC 5/10, HP PA-RISC 712/60 & 712/80i Gecko systems.
Win32 too, if you used the Yellowbox stuff. But they were still separate executables within the .app folder hierarchy, right? Mac OS 7 up till 9 also had FAT binaries, and these included 68000 and PowerPC code in the same executable “file” via code resources.
Yellowbox: that was mach-o, Portable Distributed Objects [PDO], the orb, and other parts of Openstep used to provide the Openstep for NT development environment on top of Windows. There was the D’OLE [Distributed Object Link and Embed] idea when EOF 2 and WinNT 3.51 were out, à la Openstep Enterprise.
The FAT binaries were specific to the entire OS. Yes, similar to Mac’s but from a different view of the computer model.
The original NeXT FAT solution was designed around the network model. As has been stated on other sites, you store a single copy across the network [NFS/NIS-managed NetInfo master servers] – up to 4 architectures in one binary on a NeXTSTEP or later Openstep version of the OS (NS3.3 binaries ran under Openstep 4.x); the Inspector would list the architectures available and gray out all but your CPU architecture. Since there was no Carbon, all applications were either NS3.0 – NS3.3 [non-Openstep apps] or NS3.3 – OS4.2 [accessible to Openstep], all relying on the NeXT APIs now known as Cocoa.
NeXTStep and Openstep’s /NextLibrary included the libraries for every architecture for you to leverage, and your build environment was set up so that you could recompile your code for each specific architecture, ultimately producing architecture-specific local libraries (dylibs) on demand.
If your machine was SPARC, the SPARC NeXT libraries were linked against the executable, along with that application’s SPARC-specific local libraries. Ditto for the other architectures. All distributed object code was shared with NeXT PDO.
Each installed copy of the OS was either customized for the client and deployed via images, or individually installed [with your choice of which localizations to include besides your native language], to name but two install options. Third-party apps had a clean understanding of where the help system was and what /NextLibrary, /NextApps, /NetworkApps, /LocalApps, etc. were for, and they stuck to them.
Corporations built site licensing levels and they would keep track of how many network copies were running simultaneously.
Normally, PPDs were installed by default, unlike now with on-demand.
With a clean, standard set of APIs, applications were very lean, and the business logic specific to the application was included in the bundle/.app folder, including the usual localization plists, etc.
The documentation was in RTFD before it was later moved to HTML. NeXT RTFD was nice.
NeXT Corporate’s network was accessible from around the globe and everyone had the mandatory ~/Public folder.
Network branches were by Country and then by State/Region/Province, etc.
A lot of customizations specific to NeXT Corporate that made a large network system a joy to work under were never released. I was only ever a single ~username shortcut away from quickly moving to that branch of the network in Workspace Manager, to give or copy info staff needed from one another, outside of custom database-driven applications.
NeXTMail was a must. I’m still waiting to see a demo just once where Steve and people around the World demonstrate an OS X Server Network and show group collaboration. The iChat video between Paris and San Fran is not impressive.
on and on and on.
It’s a shame to see Ryan Gordon’s talent no longer being put to good use on the project; I can understand why he wouldn’t want to pursue it anymore. While it might not be “needed”, it sure seems like it would have been appreciated and useful. c’est la vi.
OOT, but it’s “c’est la vie“. I could not find private messaging so here it is… public.
But then some people prefer “c’est la emacs”.
Mon dieu!
This is debatable: if everyone’s toy feature makes it into the kernel then the kernel is called ‘bloated’..
Not really a big loss. There appears to be basically no real benefit to doing this, certainly nothing that compares to the downsides. This appears to be one of the cases where the kernel devs and glibc people etc. actually made a sane decision for once.
That’s because you’re simple-minded and lack the ability to see the ‘bigger’ picture.
This would make it a lot easier for devs to release a single binary that can actually reach quite a few different users – not just x86 and PPC, but also 64-bit versions.
This is also pretty big with ARM making a nice push.
The problem can be solved more easily by other means: simply include multiple binaries, one for each architecture, in the package, and have a script or some other mechanism pick the right binary at launch time. Everything else – libs, text files, etc. – can be more easily shared. Furthermore, these mechanisms are simple to understand and implement, and don’t require massive changes to glibc and the kernel.
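A minimal sketch of that launch-time dispatch, assuming the per-architecture binaries ship next to the wrapper (all names here are illustrative):

$ cat myapp
#!/bin/sh
# Run the binary matching the current machine.
here=$(dirname "$0")
case "$(uname -m)" in
  x86_64)     exec "$here/myapp.x86_64" "$@" ;;
  i?86)       exec "$here/myapp.i686" "$@" ;;
  ppc|ppc64)  exec "$here/myapp.ppc" "$@" ;;
  arm*)       exec "$here/myapp.arm" "$@" ;;
  *)          echo "unsupported architecture: $(uname -m)" >&2; exit 1 ;;
esac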
Far from it. There’s no real reason to want to distribute one big fat binary; it’s stupid, and serves no real purpose.
You want to support all ARM architectures?
I don’t want tons of dead code living in my filesystem and gigabytes of updates. I don’t know how the fat binaries and libraries work, but I assume they complicate the loading process even more and could require more disk accesses.
It is a stupid idea that only makes sense to Apple and third party developers. It may make the life of distributors simpler (or not), but also increase network usage with each update.
In the end, you are not solving a problem, just passing the ball to the end user.
Christ, cry me a river about “disk accesses.” The average Walmart PC comes with a terabyte of storage, and a staggering majority of the world has access to at minimum a four-megabit broadband connection. Go check out the size of your average executable. I’m on a Windows machine, so heading over to my System32 folder, I see the following:
* 9MB – mshtml.dll – The Trident rendering engine, the largest and most complicated part of Internet Explorer.
* 2MB – explorer.exe – Your start menu and file manager.
* 13MB – shell32.dll – All of the Windows UI control implementation
* 5MB – ntoskrnl.exe (!!!) – The Windows kernel, including all process and memory management, is five megabytes.
I can’t believe anyone would spend the better part of an hour compiling from source to save 5 megabytes, not to mention that the source of any given application tends to be substantially larger than the binary it produces anyway!
Live and let live, though, I guess. I don’t use Linux, so it doesn’t affect me either way.
You can’t get any of those for PPC or ARM.
If you could, why would it hurt anyone to have to download it from a different URL than the URL for the x86 versions?
If there was Windows available for PPC and ARM, would you be happy to download the PPC and ARM binaries for every item in a Windows update, only to discard them when they arrived on your machine?
It is far saner to have separate binaries for each architecture supported.
True, disk usage isn’t really an issue for most people anymore. “Disk access” is not the same as disk usage though. When loading a fat binary, the system is probably forced to perform one extra disk access, to load the header which tells it where the correct binary data is located. That adds an extra few milliseconds per loaded file, which can really kill performance when loading many files. Booting a computer is one of those “loading a bunch of files” scenarios.
http://en.wikipedia.org/wiki/Seek_time
I’m not fond of wasting any resource just because we think we have plenty. History will tell you that you have to be conservative.
I don’t see *any* single reason to bundle useless code to benefit almost no one. Just because some guy came up with an idiotic idea and felt innovative, do we have to agree with him?
The argument that we all have broadband is only all right if you generalize from what Western countries have. I live right now on a 128 kbps (16 kB/s) connection. It’s not because there is no broadband in my country (Spain), but because I’m not yet covered by the networks. Anyway, I really think it is not sane to download the “all-architectures” binary if I only use one architecture. What’s the point of it? Is the idea that I will copy my game/app/whatever from PC to phone and it will work? The libraries will probably differ too much anyway, and the phone’s internal 256 MB card will not be able to hold all the binaries. I would much rather see compressed binaries (using a scheme that saves some size, like UPX) supported in the kernel. That is the reverse of fat binaries: it would simply let distros get some hard drive space back, and make me happy. I would be able to put that Linux on a machine with a smaller drive, which doesn’t sound bad.
As commented already, the problem is that you need to compile first for different architectures and for different distributions. That means that you:
a) Have several computers with, say, Ubuntu x86, Ubuntu AMD64 and Ubuntu ARM, compile the source on all three, and then what do you do with the resulting binaries to mix them into one fat binary? Too complicated.
b) Have one computer with cross compilers, then you compile the source with each compiler and then magically link all of them together to finally make dist in one single package.
c) Either a or b with single make $WHATEVER for every architecture.
In the end, the way it is now is simpler and works without doing anything. Distributing packages for different architectures is not a problem. Distribution sites usually check your browser ident to see which architecture to offer you by default. It works.
Except when your distribution of choice has not packaged a piece of software that you need.
Except it doesn’t mean anything, as the developer still needs to compile for all the architectures he wants to support. Being able to put all architectures in one only file contrasted to releasing multiple packages means shit if you still have to go through the compile process. If you are already supporting several architectures, then distributing a couple more packages doesn’t make your life harder than it already is.
Keep that argument hammering on…
The packaging still has to be done; it doesn’t matter if it’s done in 5 separate packages or in a single FatELF, so what’s your point there?
If the distribution doesn’t package it, then it’s not packaged; the format is irrelevant. It’s not like FatELF is a robot that compiles for 20 architectures and then packages for you.
5 versus 1. Hmmmmmmmm. I wonder. How many installer packages do you normally see from a software vendor for any given platform? One or a maximum of two.
Compiling is not the cost. Deployment is. Maintaining separate packages is a very large cost but that doesn’t seem to stop the nutcases from still believing that it actually works.
I totally agree with you. The idea of Linux is mainly source compatibility.
I knew you would. This kind of resistance was always going to be the case, and it reminds me slightly of the demise of AutoPackage.
While package management in distributions is a good way of maintaining a Linux distribution itself, virtually all developers working on various Linux distribution components simply can’t accept that their little baby concept is nowhere near being the panacea for wider software deployment and installation.
Unless deployment, installation and configuration of software, and especially third-party software, gets way, way easier then desktop Linux in particular will go nowhere. I can understand why somebody from Red Hat might rail against it, as specific Red Hat packages for some proprietary software give them a nice form of lock-in.
This is the sort of pack mentality amongst even developers, and not just rabid users, that hacks me off about the open source ‘community’.
What’s complicated about Linux deployment of binaries? Companies interested only have to provide RPMs and DEBs for RedHat and Ubuntu, and a statically compiled “catch-all” version for the rest. Given that homogeneity is not the word that defines Linux, there are no solutions for a nonexistent problem. This is the way Linux works, like it or not.
It’s a community, a free community where individuals share a common goal (one way or the other), and that goal is to have a free and working OS, the best possible. Where you have freedom, you have choice, and that leads people to offer different choices for different purposes: that’s what makes Linux so great, and what most companies can’t get their heads around.
Obviously, you know nothing about the subject, yet you feel compelled to comment anyways….
If you did know something about this, you would know that for an application beyond the most trivial you would have to provide RPMs for SuSE, Mandriva and RedHat, and DEBs for Debian and Ubuntu, each of these in 32-bit and 64-bit versions (that is a total of 10).
Then let’s say it has a UI, so you want to provide a Gnome and a KDE version, so now you are at 20.
Then your app may have sound, so you provide one for ALSA and one for PA, and now you are at 40.
So 40 binaries to cover just the basics. If your app is just a little bit more complicated and has hardware discovery and a daemon that gets activated at install time, then you will have to make your binaries not only distribution-dependent but also dependent on the version of the distribution.
This is the point where most ISVs say “F**K THAT” and decide not to support Linux.
But the binary is just the last part in a long chain of actions. If you are an ISV that gives a S**T about quality, you have to test your app before you release, and that of course means testing on all different versions, including updated versions of the different distros. So you do not only have to provide 40 binaries, you have to test about 200 to 300 different distribution scenarios.
Linux has unfortunately been conquered by some businesses that have a strong commercial interest (RedHat, etc.) in keeping this mess going for as long as possible.
Even open source projects suffer from this, because the first thing to go when faced with this many possible scenarios is quality control. It really is unfortunate that many open source projects work better and have higher quality on Windows than on Linux (whatever flavor you prefer).
Going back to the subject, how would fat binaries solve any of the above problems? If anyone really wants a single binary for different archs for whatever reason, there’s always “#!”, and if you play it smart, you can pack everything into a single file and unpack and execute the right binary on demand.
The fat/universal binary thing is just some weird thing Apple did. Maybe they didn’t have a common scripting mechanism across old and new systems. Maybe it just fitted better with their multi-data-stream file system. There’s no reason to do it the same way when about the same thing can be achieved in a much simpler way, and it’s not even clear whether it’s a good idea to begin with.
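For what it’s worth, the “pack everything into a single file” trick described above can be sketched in the makeself style; everything here is illustrative, and a build step would append the archive data after the marker:

$ cat myapp.run
#!/bin/sh
# A tarball of per-arch binaries is appended after the __ARCHIVE__ marker line.
skip=$(awk '/^__ARCHIVE__$/ { print NR + 1; exit }' "$0")
tmp=$(mktemp -d)
tail -n +"$skip" "$0" | tar -xf - -C "$tmp"
exec "$tmp/myapp-$(uname -m)" "$@"
__ARCHIVE__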
Sure you can brew together all kinds of custom solutions that could / should / would maybe work, so now you have just doubled the number of problem areas:
1. The program itself.
2. The install system for the above program.
It is so sad that today it is much, much easier to make a new distro (an opensuse or ubuntu respin) that contains your program as an addition, than it is to make an installer that will work for even a single distro across versions ….
A fat binary is not a complete solution, it is not even a partial solution but it is perhaps a beginning to a solution.
Maybe they just wanted the install process to be easier for their users …. and ISV’s …
Modifying the binary format, updating the kernel and then all the toolchains, debuggers and system utilities is gonna be much harder than #2.
Sure, agreed. Linux is facing different problems compared to proprietary environments.
But here I completely fail to see the connection. Really, fat binary support in kernel or system layers doesn’t make whole lot of difference for the end users and we’re talking about the end user experience, right?
The real problem is not in the frigging binary file format, it’s in the differences in libraries, configurations and dependencies, and fat binaries contribute nothing to solving that. Also, it’s a solution with limited scalability. It sure works fine for Apple, but it will be extremely painful to use as the number of binaries to support increases.
If 3rd parties don’t know how to make a good cross-distro package, it’s either their own fault or a limitation of Linux that FatELF would not solve.
A good post from another site:
Because that’s a crap hodge-podge ‘solution’ for the fact that Linux distributions have no way to handle the installation of third-party software. There’s no standardised way for any distribution to actually handle that or know what’s installed, which is extremely essential when you are dealing with third-party software. Conflicts can and do happen, and the fact that anyone is having to do this is so amateurish it isn’t believable.
It doesn’t handle ABI versions, doesn’t handle slightly different architectures like i686 that could handle the binaries….. It’s such a stupid thing to suggest ISVs do it’s just not funny.
It’s the sort of ‘solution’ that makes the majority of ISVs think that packaging for Linux distributions is just like the wild west. Telling those ISVs that they are silly and wrong is also stupid and amateurish in the extreme but still people believe they are going to change things by saying it.
Linux distributions should handle only their own packages. Everything else should go in /usr/local. There are installers (like those for Windows) that companies can use to make the install process more friendly. And just as you said ISVs are not distributing Linux versions of their software, I remember having installed a lot of closed-source software for testing. Most of it had an install script that did the job and installed successfully in /usr/local.
Why does it matter? Can the package manager update the closed source software itself? No. In Windows, if you want to update an application, you have to buy a new version and reinstall. You are really thinking that ISVs aren’t distributing Linux versions because the package managers don’t integrate well with them? Silly.
As amateurish as in the Windows world, where most applications provide their own copy of libraries which are already available in a normal Windows installation. That doesn’t stop them from delivering.
If anything, the LSB should really be enforced.
Oh, but you have the solution, right? Because last time I checked, ISVs are not distributing Linux solutions because they are not developing them in the first place. And then you have the Flash plugin shit that only recently got 64-bit support for Linux. Distribution was never a problem, development was.
But you seem to think that ISVs have Linux developers and have ported their software to work on Linux, but they are unable to package their software and that’s why they don’t deliver, right? Sweet Jesus, open your eyes. ISVs don’t distribute Linux versions because they develop their software with MFC, .Net and even rarer APIs that are not available on multiple architectures, and they use 32-bit-specific integers, etc.
Jesus, open your eyes. ISVs don’t distribute Linux versions because they develop their software with MFC, .Net and even rarer APIs that are not available on multiple architectures, and they use 32-bit-specific integers, etc.
You open your eyes.
I can list one-man micro-ISVs that port to both OSX and Windows.
The iPhone was getting better support from developers when it had less than 10% of Linux’s share.
Linux is a total clusterf*** for proprietary developers. Not only is there a lack of standards between distros, but individual distros are designed around open source. It’s a mess, and denial isn’t helping the situation.
Sure there is!
Install:
tar -xf package_static.tar -C /opt
Uninstall:
rm -r /opt/package_static
The (statically linked) future is now!
Here is a potential maintenance problem with that script:
Let’s say that AMD creates a new architecture in a couple of years, and let’s call it x86_64_2. x86_64_2 is to x86_64 what i686 is to i386, so it is perfectly capable of running legacy x86_64 code; it just runs a little better if given x86_64_2 code.
Now let’s install World of Goo on this machine, it may be a few years old by this point, but it is a very cool and addictive game so we still want it.
The script checks the machine architecture. Is it “x86_64”? No. Therefore it runs using the 32-bit architecture. Sure, it will work, but it should be running the 64-bit version, not the 32-bit version.
Now what if it was compiled as a single set of FatELF binaries?
WorldOfGoo.bin gets run. The OS “knows” that it is running x86_64_2 and looks at the index to see if there is any x86_64_2 code in the binary; there isn’t, but it also “knows” that x86_64 is the second-best type of code to run (followed probably by i686, i586, i486, i386). It finds the x86_64 binary and executes it. That binary looks in ./libs for the library files and, for each one, performs the same little check.
Sure, it will take a few milliseconds extra, but it will run the right version of the code on future, unknown platforms.
To my mind, FatELF is an elegant and simple solution to this sort of problem.
A better solution is just to have a better detection script. Or have a system-detection script that is part of the LSB, and thus of LSB-compliant distros, that you can call into to determine which arch to use. This does not require changes to the kernel, glibc, ld-linux.so and the ELF format. The latter is assuredly not an elegant solution.
Honestly, though, arches aren’t added that frequently, and the core name of the arch can be detected easily. So until 128-bit comes out, we should be fine.
Sure, you’d be running a slightly slower version of the code. But how often does that happen, really? There are very few architecture changes that maintain backwards compatibility like that which are worth distributing a completely new version of your code for. x64 is one, that probably won’t happen again for decades. Maybe you have a 386 version with a Core2 Duo optimized version as well, but I just don’t see that being used at all.
Anyway, it’s easy to workaround. You just add a native app to the distro which the script would call, something like xdg-get-architecture, which would return that list for you instead of sticking that functionality within the ELF specification itself.
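A sketch of what such a helper could print – an ordered preference list rather than a single name, so future arches degrade gracefully (xdg-get-architecture is the hypothetical name suggested above, and everything below is illustrative):

$ cat xdg-get-architecture
#!/bin/sh
# Print the architectures this machine can run, best first.
case "$(uname -m)" in
  x86_64)  echo "x86_64 i686 i586 i486 i386" ;;
  i686)    echo "i686 i586 i486 i386" ;;
  ppc64)   echo "ppc64 ppc" ;;
  *)       uname -m ;;
esac

An installer or launcher then walks that list and uses the first binary it actually shipped.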
Lol, funny world this is… the script is bound to be stupid and one additional if is impossible? A script can’t deduce anything more than a basic condition with one basic response?
Based on your words, I seriously hope none of your scripts will ever be in Linux, as they would be bound to be limited and prone to be stupid.
Stick to toy lightsabers, Luke ;) a real one might cut off your hand.
Complete nonsense. You still need to compile all that stuff you put in the binary, how is that gonna help you? The only practical solution is static linking, which is what most should really do. For example, you can download and install Opera qt4 statically compiled and it will work regardless of distribution. The same with skype, and some other closed software. So it’s not impossible to do it, and yes, you need testing.
Spot on.
I would like to add that in this discussion, two separate problem fields got mixed up.
The first one is concerned with providing an – ideally – unified binary for a range of “hardware architectures”; e.g. the ARM notebook / scrabble game example from the comment above comes to mind.
As others have already pointed out, it is difficult to sell the advantages of a “fatELF” while all the costs and problems inherent with such a solution could be avoided if you solve the problem where it actually occurs, namely at the point of software distribution.
If you download, for example, Acrobat reader, the script operating the download form tries to guess the right system parameters and automatically provides the “correct” / “most likely” binary to match your needs. Additionally, advanced users can manually select the right binary for their environment from a – given the number of Linux distros out there – surprisingly small number of binaries and be done with it.
This, plus the possibility of choosing an approach similar to the one used by the World of Goo developers, is less invasive to the target system and, if done well, more convenient for the end user.
The second, somewhat orthogonal, problem set is the large level of dispersion when it comes to what a Linux-based operating system actually is (e.g. what is sometimes referred to as “distro” or “dependency hell”). It is imho crucial to get rid of the idea that, from the point of view of an ISV, there is something like an “operating system”. What the application developer relies on is a set of “sandboxes” that provide the necessary interface for the particular program to run.
In the case of MS Windows or Mac OSX, there is – in essence – a rather small number of “blessed” sandboxes provided by the vendor that allow ISVs to easily target these operating systems. The reason for the small number of these sandboxes is imho related to the fact that there is, essentially, only one vendor deciding what is proper and what is not; i.e. it’s more a cultural difference and less a technical one. Failing to address the fact that – for example – Linux-based operating systems live and operate in an environment with multiple vendors of comparable influence and importance, and multiple, equally valid and reasonable answers to the question of “which components ‘exactly’ should go into our operating system and how do we combine them”, misses the mark a little bit. Diversity between the different distros is not some kind of nasty bug that crept into our ecosystem; it is a valid and working answer to the large number of different needs and usage patterns of end users. Moreover, it is kind of inevitable to face this problem sooner or later in an ecosystem where players rely on compatibility at the source code level and the (legal and technical) ability to mix and mash different code bases to get their job done.
If an ISV is unwilling or unable to participate in this ecosystem by providing source-level compatibility (e.g. open sourcing the necessary parts that are needed to interface with the surrounding environment), they have a number of options at hand:
– Target a cross-platform toolkit like for example Qt4, thus replacing a large number of small sandboxes with one rather large one, which is either provided by the operating system (e.g. distro) or statically linked.
– Use a JIT compiled / interpreted / bytecoded run-time environment, again abstracting large numbers of sandboxes into the “interpreter” and again rely on the operating system to provide a compatible rte or additionally ship your own for a number of interesting platforms.
– Use something like for example libwine
– Rely on a reasonable and conservative base set of sandboxes (e.g. think LSB) and carry the remainder of your dependencies in the form of statically linked libraries with you. There are problematic fields, like for example audio – Skype and Flash are notoriously problematic in this department – but the binary format of an executable strikes me as a rather bad place to fix this problem.
That’s because you have no idea what the problem actually is, as most people or even developers fannying about on forums like this don’t.
The problem is not compilation and I don’t know why various idiots around here keep repeating that. It never has been. The cost in time, resources and money has always been in the actual deployment. Packaging for a specific environment, testing it and supporting it for its lifetime is a damn big commitment. If you’re not sure what is going to happen once it’s deployed then you’re not going to do it.
You lose all the obvious benefits of any kind of package, architecture or installation management system, which ISVs effectively have to start writing themselves, at least in part. We’re no further forward than what Loki had to cobble together years ago, and for vendors whose business does not depend on Linux it is something they will never do. Why would they, when other more popular platforms provide what they want?
In addition, it’s never entirely clear what it is that you need to statically link and include in your package. You might detect installed system packages manually and dynamically load them in, and then fall back to whatever you have bundled statically with your package, but the potential for divergence there from a support point of view should be very obvious.
Hmmmm. I thought you were complaining about the disk space that FatELF would consume at some point………
Anyway, just because some can do it it doesn’t make it any less crap. It is hardly the road to the automated installation approach that is required.
Oh no, sure it isn’t. You can compile for any architecture and every set of libraries for every single distro out there within your own Ubuntu distro with just one click.. oh wait…
Of course. Not only with Linux, but also with Windows and OS X and their different versions. On Linux it is even more difficult because you don’t know which libraries are available and in which versions, etc.
There is another option, like providing your own .so files in the same package, as a catch all solution for the not so common distros.
How about statically compiling those rare libraries the app may be using?
On a totally different topic: FatELF for all installed binaries is not the same as installing one or two closed-source applications that are statically compiled and may only add a couple more megabytes to your install.
There is no automatic installation for non-homogeneous systems. This is not an Apple-developed OS. The heterogeneity of Linux systems makes things difficult. There’s no need to make them even harder by implementing cruft that doesn’t solve the problem at hand, the problem at hand being: all distros behave differently.
I don’t see how fat binaries would solve any of this (testing, support, …).
Because they support one installation platform that has wide distribution support. It’s an absolute no-brainer. They’re not writing their own scripts now, nor are they unsure about what their dependencies are when they are troubleshooting an issue.
I thought fat binaries only deal with shipping multiple versions of the executable (as opposed to static vs. dynamic bundling of libraries). For closed source software, static linking is a bit of a non-starter anyway, because of licensing issues.
It deals with multiple platform versions of executables (should you actually choose to support different platforms) and also deals with dynamic linking with help from the operating system. It takes at least some of the guesswork out of what is available in the system to dynamically link against, but then again, in some distributions some things might be there and in others some might not. However, at least FatELF provides a mechanism for finding that out and possibly doing something about it.
That is definitely a very important part of why static linking is not a particularly great solution for everything, and one that I hadn’t pointed out.
I disagree.
Make a completely static binary and you are good for all distros today and the next 5 years to come.
Should be good enough.
Obviously you have never tried to make a completely static, non-trivial binary. Some libraries (e.g. freetype, I think) explicitly do not support static linking. Static linking with libstdc++ is also a nightmare.
Links:
http://www.baus.net/statically-linking-libstdc++
http://www.trilithium.com/johan/2005/06/static-libstdc/
http://gcc.gnu.org/ml/gcc/2002-08/msg00288.html
and so on…
That’s why few ISVs are motivated to package for Linux distributions even now. No one wants to do it. Deployment is a PITA. It is on any platform. It’s a massive cost in time, effort and resources for testing and supporting multiple scenarios that ISVs just can’t do it.
To suggest packaging for umpteen different package managers, multiplied by umpteen different distributions, multiplied by umpteen different distribution versions, and then suggesting they have a statically linked catch-all, is so f–king stupid it isn’t even funny. No ISV is doing that now and no one will do it ever.
To suggest that isn’t complicated………well, you’ve never done serious deployment in your life.
I don’t see any choice here………..
You don’t have to support all of them. There’s no point in supporting something like Arch, for example. But Ubuntu, SuSE, Red Hat and Debian. The fact that we use 64-bit computers today, and will use ARM processors in the near future, complicates things. We are not in a 32-bit-only Wintel world anymore. What do you expect ISVs to do about that?
You seem to have a crystal ball.
It isn’t complicated for a company to devote one person to each platform to do the packaging. Unless they are cheap. Or do you really think that all that magic must be done by just one single guy? Come on, distro packagers package thousands of binaries for different architectures, even for free. Red Hat guys get paid to do it on a mass scale; how come Adobe cannot devote some resources to providing its “quality” software for Linux? You think they are not interested because it’s difficult to package and support. They are not interested because they see no money in doing it, and of course they are not doing it for free.
So narrow minded.
You have to support enough of them – and then support multiple versions of one distribution on top. Not going to happen.
Many ISVs like Adobe have said so, and you don’t need a crystal ball to know this is a problem.
Deployment is a massive and unknown cost. It always has been. You can’t just assign one person to every environment and think you have it covered. You have to be able to find out if a problem is with your software or whether it is something else. It entails a support commitment. Linux distributions currently make that cost high, even if you support just one distribution.
If you don’t know this then I’m afraid you’re not qualified to comment.
So inexperienced. You’re not going to get a choice of software with that approach.
Good thing that never happens with closed-source development, because what you can’t see doesn’t exist.
Considering how utterly abysmal much closed-source software is, this kind of behavior and thinking is obviously not specific to open source.
I quietly dreaded OS News getting ahold of this one; about all I can say is, “in before hate-fest.”
Honestly, I agree with the Kernel team, it sounds like a lot of complexity was being added to solve something that wasn’t really a problem. Exactly as detailed, it’s a cute idea, and it’d be great if it carried no cost, but it touches a lot of very basic, fundamental things in the system. Not to mention the minor quibble that it means that many Linux binaries could potentially have lots of different binaries stuffed into them, which kinda seems like a frivolous waste of space — even if not very much.
And, heh, just a minor point, we’re not talking about a closed-source project here. If a lot of users want this included in the kernel, you can actually start writing mails directly to the kernel maintainers — or try to submit your own patches, if you’re so inclined. If a large groundswell of demand shows up, the kernel team may relent.
I thought this was a good idea. It may have led people who, for maintenance reasons, decided in the past not to release a Linux version of an app to do so.
Who wants to have to maintain a binary for each distro’s packaging system, etc. etc.?
Stop posting about things you know nothing of…. NOTHING prevents vendors from distributing a package that works on all distributions _AND_ architectures today… and for that matter, 10 years ago either; it’s simply blatant incompetence on the part of “ISVs”.
In theory ISVs could also build their own live cd in asm and distribute their program that way, but would it be worth the effort?
Until there is an msi equivalent that can be installed across Linux distros without worry of some random library update breaking your program expect the current situation of Linux getting the shaft from ISVs to continue.
Telling ISVs that they’re stoopid heads for not wanting to write a bunch of scripts for not only multiple distros but also multiple versions of those distros is insane. Don’t forget they also have to support all those multiple distros and versions as well. It isn’t like open source, where you can just dump the program and then have all the distros figure it out. When customers ask why program whizbang doesn’t work in distro/version combination xyz, you can’t tell them to fix it themselves if they don’t like it (which really means f off, since very few can actually fix such problems).
You win ISVs over by creating an appealing and stable platform for them, not by telling them to figure out a chaotic mess on their own.
The real problem is at the end of the day Linux is designed around open source applications. Most Linux advocates have no idea as to how many headaches you run into when you try to distribute proprietary GUI applications in Linux, especially compared to Windows or OSX. Until the distros get together and acknowledge that more needs to be done to bring in ISVs expect the Linux desktop to continue to flatline.
I don’t expect much to change however, given the entrenched FOSS ideology, the ongoing window manager war and failure of the LSB.
This just further proves you know NOTHING of what you speak.. you CAN very much create a single downloadable application bundle that will work for all distributions and architectures, and it doesn’t take nearly as much scripting and messing around as creating Microsoft installers (which I can only assume you know nothing of as well)..
There’s really no great wisdom in it, no great thing missing before this is doable, nothing much to do to make it work on different distributions. It’s all MYTHS created by stupid people, which other stupid people happily eat, and I’m sorry, you fell into that mousetrap…
I must fall back on my previous statement: stop talking about shit which you know NOTHING of..
And btw, if ISVs are idiots, then they f–king deserve to be called idiots. I see no reason to cater to their idiotic, non-existent needs for stuff that isn’t really needed.
During the Slashdot discussion of the announcement of the project, a point was brought up that this makes it easier for small programmers to distribute programs for multiple architectures.
The problem still arises, that the programmers must still compile their program for every architecture they want to release on.
The only improvement that a fat binary provides for a programmer is that their personal project website can list one raw package for their various programs, with a 3x bigger executable file sharing the same data, instead of releasing 3 separate packages with their own copy of the data. That’s it.
Programmers still need to package their program for all of the different distributions, because fat binaries don’t fix that at all. If a programmer wants to distribute a DEB and an RPM, they still need to create those manually. Fat binaries would only help by making one DEB and one RPM for all supported architectures, instead of one DEB and one RPM per architecture.
Big packages ship the data separately, so that’d be one binary package for each arch, and one data package. That’s why there are noarch RPMs and such as well.
That’s assuming that the data is not arch-specific, which is almost always the case, but not always. In such rare cases FatELF would be even fatter, hehe.
I can see a substantial benefit if we left the package manager behind… no longer would users be required to know whether they have an 386, 686, x86_64, ARM, or PPC installation: download and it’ll work… at least, provided it was distributed as one. Right now, we have the package manager handling all of those distinctions; this would remove the vital dependencies on a specific package manager… at least, provided you have all the right versions of the libraries installed.
Not really. The package manager works the same way (and therefore has the same source code) for any installation or architecture.
Different distributions and/or different targetted architectures simply “point” the package managers at different URLs.
The end user doesn’t have to know the right URL. The URLs for the package manager repositories are normally set up when the OS is installed on the machine in the first place. If an OS is installed and the machine runs at all, it will already be set correctly.
Package managers/repositories work. They are far better than anything that is available for Mac or Windows. Why mess with them?
I have to agree with the kernel guys and Ulrich here. There is a much better way to do it:
$ ls foo/bin
foo
foo-arm
foo-ppc32
foo-ppc64
foo-i686
foo-x86_64
$ cat foo
#!/bin/sh
# run the binary matching this machine's architecture, forwarding any arguments
exec "$(dirname "$0")/foo-$(arch)" "$@"
This way, you can very easily and simply delete the binaries you don’t need, no patches are needed, and it is still user-friendly.
I don’t know about FatELF but deleting unwanted binary code is no problem at all with universal binaries on OS X (implementing similar functionality for FatELF should be easy):
The lipo command creates or operates on “universal” (multi-architecture) files. It only ever produces one output file, and never alters the input file. The operations that lipo performs are: listing the architecture types in a universal file; creating a single universal file from one or more input files; thinning out a single universal file to one specified architecture type; and extracting, replacing, and/or removing architectures types from the input file to create a single new universal output file.
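In practice that boils down to a couple of standard lipo invocations, e.g.:

$ lipo -info /bin/ls                                    # list the architectures in a universal binary
$ lipo /bin/ls -thin i386 -output ls.i386               # strip it down to one architecture
$ lipo -create ls.i386 ls.x86_64 -output ls.universal   # glue thin binaries back into a fat one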
Since the binary code part is normally small compared with other resources (sound files, pictures, translations, etc.), thinning out a binary is normally not worth the time (except maybe if you run under very restricted conditions, like on a mobile phone).
However, Linux fanboys simply forget a common use case under Mac OS: copying an application from one system to another. E.g. you copy a universal binary app from a 64-bit PPC system to a 32-bit x86 system – a simple drag-and-drop operation (or use cp if you prefer that). With universal binaries, no problem at all. Everything works. From the user’s perspective, the system behaves as expected (why should there be any difference between copying a data file and an application binary?).
Compare this with the sad situation under Linux ..
As long as Linux developers care more about HD space and ‘the one true way’ wars that end up in a mess under /usr/<whatever>, the year of Linux on the desktop will occur the same year commercial fusion power arrives (whenever you ask, it’s always another 50 years in the future).
The system administrator’s approach most Linux fanboys adopt may work for server environments or their own tiny world, but not for the masses ..
Yes, it is certainly possible and easy to make a command to remove unneeded binaries, but why would you do that when there is a much simpler solution? Power users can selectively delete binaries (e.g. delete arm and powerpc, but keep i686 and x86_64). And there could still be an easy command or menu option for those who don’t know or care what architecture they have.
Um… couldn’t you do this with the shell script solution too? As long as you don’t delete any binaries, it will act exactly the same as universal binaries on OSX.
The important word in your sentence is ‘could’. And that’s why Linux is still so far behind when it comes to desktop solutions for the masses: no user friendly interface standards (I don’t talk about the size of icons but the fundamental mechanisms) and an inability to think outside the box (i. e. administrated server environment situations).
E.g. under Mac OS X, grandpa Jo can copy an app from his PPC64 machine to grandma Mary’s x86 notebook simply by dragging and dropping a single file, without even thinking about different CPU architectures: grandpa Jo (rightfully!) expects this to work, and it does – now, how many apps under Linux right now would support the same use case? Approximately none! Even if it works for some apps, he won’t use such a feature, because from his view it does not work reliably if it’s not supported by all the apps.
So instead of adopting something that might ease the use of Linux, you’d rather depend on every single app creator implementing his or her own scripting solution (that might or might not work). And instead of making something like installing (and deleting) an app a simple drag-and-drop operation, you force everybody to use the “magic” (i.e. incomprehensible) package manager which normally scatters files around /usr and /opt subdirs and makes it impossible to handle things manually ..
…looking for a problem…
Most people who criticize package managers don’t know how gracefully they work and instead rely on old myths that don’t make any sense anymore (if you got into the “RPM hell” any time in the last 5 years, you obviously screwed up).
Now, sure, there are alternatives to package managers. But coming up with a patch that implements something that would be useless to all current distributions and getting upset about not getting it integrated into serious projects is a bit of a stretch, isn’t it?
Here’s an idea. Create a Linux-based system which doesn’t rely on a package manager with a central repository. Make sure the ABI is 100% stable among all your releases (provide compatibility kludges for when that isn’t the case). And now you’ve got your problem!
Isn’t it an elegant system? Instead of relying on binaries provided by your distribution, you have to download from third parties directly (now that’s a great idea, isn’t it). Not only you waste more bandwidth and disk space, but you’ll also depend on apps to nag you about updates or silently update themselves.
I miss the days when tech-inclined people were smart.
As opposed to relying on your distribution to provide the latest packages for you, which may or may not happen. Wonderful.
And please, disk space? We have 1TB HDDs today. While I prefer my OS and apps to have a small footprint, several more megabytes don’t matter all that much.
That’s all fine and dandy when you are using 1 TB HDDs. But what happens when you need to put the system on a ROM chip? Or PXE boot it? I don’t want the extra cruft in my system.
Small programs fit on small chips.
But more importantly a software distribution system that is used 99.9% of the time on x86 boxes for programs like Firefox and OO shouldn’t be designed around embedded development.
I don’t see anything smart about defending a system that was designed to save space in an era before gigabyte drives even existed. Trying to save bandwidth is also a joke in the age of broadband. The repository system solves problems from the disco age.
As for updates a central repository system also has to nag or update silently.
So what is left? What is the big advantage of the shared library system? Software can be installed automagically? You don’t need shared libraries to do that. Software is kept in a secure place? No need for shared libraries there either.
There’s nothing elegant about the shared library system. It’s more of a hack that creates unneeded interdependencies. Programs still get broken and it requires additional labor to maintain. If you want to talk about wasting resources just think about how much work has gone into people fixing/maintaining the repository system in comparison to OSX/Windows where the users simply run the program without dealing with a middle man.
ISVs hate it because it gives them less independence, among other reasons.
The Apple engineers ditched the shared library system when they made OSX. Were they not smart either?
If you want to defend 70’s tech then go ahead, but I’m sick of this attitude by Linux advocates who believe that people who criticize Unix/Linux are stupid. The original Unix design is not the omega of operating systems. Even the people that created it wanted to reform it years later (plan 9).
– Reduces memory use
– Bugfixes. If a security vulnerability is fixed in a library, every app benefits without having to be updated.
– Yes, drive space. Many mobile devices still have root filesystem on small fast flash drive.
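A quick way to see that sharing in practice (a standard check; exact paths vary per distro):

$ ldd /bin/ls /bin/cp | grep libc.so    # both programs resolve libc.so.6 to the same single file

One copy on disk (and one copy mapped into memory) serves every dynamically linked program, so a single libc update fixes them all at once.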
Not really stupid – rather, it’s about a knee jerk reaction when leaving their comfort zone. They mostly have experience with click-and-run installers, and want the same on Linux too. And these days, we have many click-and-run installers for Linux available. There is nothing in Linux that prevents you from making them (or makes it exceedingly hard, either).
Insignificant when the typical laptop comes with 2gb of ram.
And if the update breaks another app? Shared library systems have their own risk, in that they sometimes can’t patch a file without breaking another app. This can result in a much longer delay for a patch than you would have with a program that updates directly from the developer.
As I said before small devices take small files. The shared library system isn’t needed for embedded development. There are plenty of cell phone operating systems that don’t use shared libraries.
There is no click-n-run installer that works across all distros and makes adjustments for all version differences. There isn’t even a standard “program files” directory among distros. It’s a big mess and there is no installer that reconciles it.
The burden of making such an installer shouldn’t be on ISVs. They shouldn’t have to mess with scripts to determine which distro you are using, which version, which window manager, etc. They should be able to dump the program files and libraries into an isolated directory and not have to worry about dependency issues. Users should be able to install or uninstall proprietary applications with a control menu.
Distros simply aren’t designed to do that. They are designed around shared library repositories. ISVs run into endless problems when working outside the repository system. Many end up just treating all the distros like individual operating systems.
This is what the end result looks like:
http://www.opera.com/download/index.dml?platform=linux
Most ISVs don’t have the resources to support a dozen operating systems that only make up 1% of the market. Even those that do probably decide that porting isn’t worth the effort.
Is this supposed to be a joke?
Meanwhile, every app using that lib is vulnerable. Congrats!
What a killer feature! Meanwhile, other systems like MacOSX can exist on small mobile devices quite well despite universal binaries. Of course, you can strip unwanted architectures from universal binaries.
Fully agree with the first poster. Somehow Linux people always come up with good reasons related to servers or mobile devices that make it impossible to adopt something that would be user-friendly for a desktop situation – and then they wonder why “the year of Linux on the desktop” still has not arrived ..
I don’t see the logic here. If there is a bug in frobbo-1.1.0 that is fixed by frobbo-1.1.1, your apps will be vulnerable until the shared library gets updated. If all your apps bundle their own copy of frobbo-1.1.0, all of them will remain vulnerable until each one ships a patch of its own.
We’re not talking about universal binaries here, but about bundling shared libs with the apps vs. providing them centrally. The bloat caused by universal binaries is minuscule in comparison.
Universal binaries don’t really make things better for desktop users – just supporting i386 is enough for that segment.
This approach to binaries has a lot of benefits. For instance, distributors could ship a single DVD which would work on all architectures the distribution in question supports, making it easier for users to pick the right .iso to download.
I take that as a joke? Why the hell would a user with a non-x86 CPU be confused as to which version of a distro to pick?
Even if you consider that ARM is about to make a leap into the mainstream through “netbooks”, don’t you think someone who bought an ARM netbook is either a) not aware that you can install another operating system on it (or doesn’t care enough to), or b) skilled enough to know which version to pick?
The minute an ARM-based netbook lands in Wal-Mart, yes you’ll have people who had no idea it was a different architecture. And you’ll have complaints, bad reviews, and returned hardware. This will lead to bad press, news and blog stories about ‘how Linux is failing in the marketplace’, and ‘ARM notebooks: the future, or a failed experiment?’
I’ve been following the whole FatELF story on the kernel mailing list and I haven’t seen any rudeness, unless you consider it rude when someone doesn’t agree with your ideas.
One thing the author didn’t do is post a link to the conversation itself, so that readers here could actually see for themselves how “rude” it was. Please read the whole thread, then come back.
http://lkml.org/lkml/2009/10/29/357
I might have skipped one mail or two, so maybe I missed something, but in general the discussion was as cordial as it could be.
I’ve followed it, and each point the author made was rebutted with reasoning by the kernel developers. The opposite rarely happened, so I really wonder whether he’s joking when he says people were not hearing him. To me it seemed the other way around, and I am not involved with the kernel developers in any way at all.
Even if I didn’t see the point of FatELF (which I think I do), I would still be inclined to think there would be very little or no gain at all in including FatELF in the kernel. There are lots of ways to preserve old libs so the ABI doesn’t break, and I certainly don’t want to install code for every imaginable architecture on my box.
Besides that, Ryan Gordon clearly stated that there’s another reason for him to leave: patent issues he can’t resolve. That has not been mentioned here, and I think it’s important to know the reasons why he’s leaving before demonizing the Linux developers.
Please accept that variety of opinions is actually a good thing. You can always maintain the patch separately yourself. Lots of projects attached to the kernel in one way or another do so. There’s no need to stop working on something you have put so much work into if you truly think it can be useful.
Thank you.
It’s sad we had to get 2/3 of the way through this discussion before the *real* problem was finally mentioned. This is why Ryan withdrew his proposal; he specifically mentioned the ‘patent’ problem in his withdrawal post as the thing he didn’t have an answer for (and nor does anyone else, for that matter).
In fact, what really struck me as odd (if not disingenuous) was the contrast between what Ryan said in that last post on lkml and what he said later (and how others interpreted what was said). But heck, it’s not like Linux haters have ever needed much of an excuse to go off on a rant anyway…
The article’s writeup was unfortunately (but not surprisingly) a little one-sided.
I get the feeling Thom thinks Drepper is representative of all OS devs. Drepper’s response about why somebody would spend even one second thinking about FatELF was harsh.
I don’t understand why the FatELF dev spent all that time and effort before asking the kernel/glibc teams whether it was actually a good idea or needed.
Drepper has always been that way; I am not going into the “is he right or not” debate because I really don’t care.
But the kernel team certainly didn’t respond in a harsh way at all.
As for why he went that far without consensus, well, that’s certainly an interesting point. But even then, if Ryan Gordon really cares about this, there’s nothing stopping him from continuing to work on the project. There are *lots* of projects that relate to the kernel in many ways, and they continue on their way without being in the Linus tree. If he can really show how useful this is, and he gains supporters, things might change. But it’s ridiculous to assume that, because your code is useful to you and two other people, it is going to get included in the kernel source tree.
If you look at the mailing list you will also see very valid points about some basic tools needing porting to work with this. It’s not just the kernel and glibc (as if that weren’t enough); it’s the kernel-level tools to debug and monitor, the debuggers, the profilers, and lots of tools that in turn rely on these and might also need some tuning.
This definitely needs to prove its usefulness, and it also has to prove that it is bug-free, because it touches basically every foundation of a Linux system you can think of.
A solution looking for a problem. Having a distribution for each architecture is good enough for almost all cases – and certainly for enough of them that the demand for this feature just isn’t there.
I can see one use for fat binaries that multiple distributions don’t solve: hosting /usr on NFS in a heterogeneous hardware environment.
You can have as many usr directories as you want, and remote machines can mount the one that fits the architecture at hand.
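To make that concrete, here is a rough sketch (not anything from the thread; the server name and the /exports/usr-<arch> layout are made up for illustration) of a boot-time script on each client that mounts the export matching its own architecture:

#!/usr/bin/env python3
# Hypothetical sketch: mount the per-architecture /usr export over NFS.
# The server hostname and export naming scheme are assumptions, not a
# description of any real setup mentioned in this thread.
import platform
import subprocess

NFS_SERVER = "fileserver.example.com"      # assumed hostname
EXPORT_TEMPLATE = "/exports/usr-{arch}"    # assumed export layout

def mount_usr():
    arch = platform.machine()              # e.g. "x86_64", "armv7l", "ppc64"
    export = EXPORT_TEMPLATE.format(arch=arch)
    # Mount the architecture-specific export read-only on /usr.
    subprocess.run(
        ["mount", "-t", "nfs", "-o", "ro",
         NFS_SERVER + ":" + export, "/usr"],
        check=True,
    )

if __name__ == "__main__":
    mount_usr()

The same effect can be had with one static fstab line per machine; the point is only that a heterogeneous NFS setup can be served by several thin /usr trees instead of one fat one.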
I appreciate the premise and intention behind this idea and believe that it merits further (and calmer) discussion. But the sheer number of architectures supported by Linux might make fat binaries impractical.
Apple maintains one MacOSX distribution. They implemented fat binaries because it made business sense — facilitating a product line conversion from PPC to x86 in a manner least confusing to the customer. The universal binary (adding x64) is a logical and consistent extension to their prior approach, as it assists in their x86-to-x64 push. Previously supported architectures will likely be phased out over time (Apple’s PPC won’t be around forever).
Linux supports far more architectures than OS X, and when was the last time support was phased out for a specific architecture? I can’t imagine how they’d consolidate all base-install packages on one CD or DVD considering how little room is left at present without fat binaries. Even if you could squeeze them all on there, you have to consider the impact that fat binaries would have on distribution update servers. The file sizes would be doubled (or more).
It seems there’s one or two angry people voting down perfectly valid comments. If you don’t think what’s being said is right, please take the time to respond and enlighten us.
And finally, a plea for OSAlert maintainers: it would be nice to see who is voting down just for the sake of it.
Don’t be bothered by the voting system so much, a higher score won’t benefit you at all. At best it tells how many people think the same as you, thus how unoriginal you are.
Unfortunately a lower score makes your post “invisible”. Well, not completely, but you (the reader) need to take an extra step to see a modded-down post. I’ve modded up posts that people have modded down simply because (to me) it appeared they were doing so out of some sort of bias.
Fat binaries are the least of Linux’s problems. What about ABI compatibility and shared libraries (different library versions with different patches on different distributions)?
On Linux, it’s too hard to make a binary (only one, for one architecture) that will work on many distributions.
Btw, whoever is thinking about a fat Linux distribution is crazy; that is totally useless.
Only 3rd-party developers would want to make fat binaries (I would do it for netpanzer on Linux, but it needs compiler support, which Macs have); the operating system only needs to support the currently running architecture.
I for one am glad this got dropped. It does absolutely nothing that a tarball with a script to start the correct executable based on the environment can’t already do, at the cost of many invasive changes to the very bottom of the stack.
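For illustration, a minimal sketch of such a launcher (in practice this would usually be a small shell script; the bin-<arch>/ directory layout and the program name “myapp” are invented for the example):

#!/usr/bin/env python3
# Hypothetical launcher: the tarball ships one sub-directory per architecture
# (bin-x86_64/, bin-armv7l/, ...) and this wrapper execs the binary that
# matches the running machine. Layout and names are assumptions.
import os
import platform
import sys

def main():
    here = os.path.dirname(os.path.abspath(__file__))
    arch = platform.machine()                  # e.g. "x86_64", "aarch64"
    candidate = os.path.join(here, "bin-" + arch, "myapp")
    if not os.access(candidate, os.X_OK):
        sys.exit("no binary bundled for architecture " + repr(arch))
    # Replace this process with the architecture-specific executable,
    # passing any command-line arguments straight through.
    os.execv(candidate, [candidate] + sys.argv[1:])

if __name__ == "__main__":
    main()

That is the whole trick: everything lives in user space, and nothing at the kernel or libc level has to change.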
Now if there were huge numbers of proprietary apps out there running into some problem detecting what system they were running on, then this might make sense. But there aren’t. The hard part is actually fixing the code to know that pointers aren’t necessarily the same size as an int, and that byte order might be swapped. Packaging has always been dead simple.
> Packaging has always been dead simple
I take it you haven’t done much of that, have you?
I am in agreement with the “solution in search of a problem” crowd. The only benefit I can see for Linux users is that closed-source software would be easier to compile for differing architectures. Just have the compiler do it as standard. The problem with this is that closed-source software represents a minority of what is run on a Linux box.
For all the open source software options, your platform of choice is just a recompile away. Package management is quite well done these days for those distributions that want it.
Don’t get me wrong, I applaud this gentleman’s effort. I just fail to see this as a real benefit to the Linux community.
…he should just branch sources and work on it by himself or with a team of like-minded developers and push it out into the public to see what happens.
IMHO Java or .NET bytecode will solve this problem. The JIT can compile that code for any processor architecture.
I’m surprised to see so many Linux-folk up in arms over “what a stupid idea” FATbinaries are – at how it’s “a solution finding a problem”, and how package managers are clever as hell.
There are a few things I’d like to express about what appears to be the general opinion. First up, I thought it was a clever idea. The guy admitted it was more work, but only because the tools weren’t mature yet. I can imagine hitting ‘compile’ (or ‘make’ or whatever) and getting a fully matured FATbinary out of it, compiled for the various architectures – as long as your code was up to snuff.
If you go to a Linux distro site, you have to identify what platform you’re on. For techy folks, that’s fine. But if Uncle Steve is sick of Windows always being infected with viruses, it means he has to call someone to help figure out what he’s supposed to download – or what to ask to have shipped his way. Admittedly, in the Windows world you have to know if you’re 32-bit or 64-bit, but the good news is Mom and Pop don’t really care. It runs on x86 or x64, and you’re done. Not so in Linux-world, where it runs credit card terminals and, I’m sure, some toaster somewhere.
Being able to hit up, say, Mandriva and download a DVD ISO that “just works” would be fantastic. Like how Mac OS X does. Or Windows 7, to a point.
—
As for package management, yes, it’s neat. I like it, I do. But they’re only as good as your repos – and say my Grandma sees a new Scrabble game she wants. She’s on Ubuntu and starts up Synaptic to search for Scrabble, and the one she wants isn’t there. Now what? Well, now she has to go find it on Google, and hope they have a .deb package for her version of Ubuntu, for her architecture. She doesn’t know how to add repos, but she remembers from her Windows experience how to click “download”, then go back and double-click the installer.
If there’s an error, like, “No, stupid, you’re on ARM and you downloaded something for x86”, guess who gets a call at 10:30 at night? It would simplify things for the average user.
Yes, I know, proper instruction is important and everyone’s responsible for their own knowledge of hardware and software. But that sort of thinking won’t win you market share or take over the world. It’ll keep Linux a niche operating system with a small, but vocal user base — while the rest of the world continues to run Windows, Mac OS X, or (in the future) Android. Sorry gang.
— Not to make it sound like FATbinaries are the cure-all for all of that. I don’t mean to imply that — just that the whole “you can run this script” or “stick to the repos, stupid!” mentality is harmful to the future of Linux and its possible marketshare.
Then again in a year or two we’ll probably all be on Android or ChromeOS having similar arguments over… I don’t know… icon spacing.
I agree, it’s a problem with Debian packaging that isn’t properly solved by setting the package architecture to “all”.
The cross-architecture problem can be solved without fat binaries. Have the source code in the package, bringing in all the dependencies required to build it, and then build it on the fly. Or have the package install several architectures’ binaries into a folder, like previously proposed (with an invisible script that selects the correct one). Or have the package download the correct binary from the web. Or write the program in an interpreted language.
At first I thought “FatELF sounds cool; it’s shocking that Ryan was given a whole lot of ‘meh’ and discouraged from developing it”, and then I thought about it and realised that fat binaries are not actually necessary. As long as you can cross-compile (and you can), you can have basically the same thing as fat binaries – and the user or Computer Janitor can remove the ones that are unnecessary to save disk space.
Fat binaries are the wrong solution. They’re inferior to package management because they don’t take into account dependencies, updates, etc. It’s basically an install.exe with a single step toward management in terms of architecture. Silly.
So grandmas don’t need to learn about computers; all that is really required is to take “apturl” and add the ability to seamlessly add repositories. Maybe with a trust system, warning users if they are installing from an untrusted repository. This direction seems to be in progress. The problem might be finding a solution that meets both grandmas’ and techies’ needs; in that situation, grandma loses, and rightly so for the sake of the system. But in this case, I think a small friendly wrapper can automate adding a repository and installing an app from it for grandma, without interfering with the techies.
Looks like it exists in some form already.
http://www.linuxloop.com/2009/03/10/repository-adding-via-apt-url-a…
And I am going to suggest it to the Gentoo developers straight away.
Sorry, but give me repositories with all possible thin binaries. Fat binaries only make sense if you ignore package management. Never liked the idea; glad to see it gone.
I don’t see a need for all binaries to be distributed in this manner, but I would definitely like to see support for fat binaries added to the vanilla kernel. It would make the distribution of commercial applications a lot simpler for both the companies and users. Imagine just being able to download “Flash” without having to choose the correct distribution or architecture.
Or you use the package manager and it’s easy…..
Would anyone be able to contact him and ask if he’d consider helping out Haiku? Perhaps a member of the Haiku team?
I’m sure they’d be much more receptive to his ideas and to solutions that don’t involve package management.
As Haiku matures, and continues to transition from GCC2 to 4, with new APIs introduced in R2 and different architectures, I’m sure his experience would be invaluable. Having seen how ingrained package management is in the Linux community, I don’t see any reason to assume that he’s a difficult person to deal with or anything like that. I can’t think of a more exciting OS project to work on either.
FatELF is optional, so I see no problem.
Kernels and packages could still release non-FatELF binaries…
and the people that need multiple architectures could use FatELF.
What’s the problem here?
In the end people would see that fat binaries are not all that useful (because creating them is a pain, and they’re not supported by build/packaging toolchains and buildbots). And there we are, having complicated the core components (kernel, libc, packaging) with a feature nobody uses.
This is a classic example of bloat waiting to happen.
This is an idea that looks good on paper, but seems worse and worse when you start looking at the cost of the feature. It gives a warm fuzzy feeling of having that checkbox on the feature list, but not much more.
Well done Thom. As I read the article it became clear you wrote this. Completely uninformed and one-sided.
Bravo.
… doing cross-distro packages, or packages for multiple distros, is a pain in the arse. ESPECIALLY when the package is forced to have a dependency (say, Avahi’s daemon).
It IS also desirable in a variety of use cases. Say, distributing software to unknown users regardless of their distro. Just like every other platform under the sun does. (The Mac demonstrates you can do this with flair, even while transitioning an entire platform from one architecture to SIX – ppc, ppc64, i386, x86_64, armv6, armv7 – with libraries built for all of them and applications for the applicable subsets of those architectures.)
A vocal part of the Linux community spits on those use cases. (See Autopackage’s demise.)
Discuss.
Totally agree. The Linux crowd cannot live and think outside their small and very restricted distro/package-management world. The way software is installed in the Linux world is so restrictive that they always depend on tools, distros, or the source code. Why exactly is it not possible for grandpa Jo to copy an application from his PPC64 desktop to grandma’s ARM netbook and run it (like copying any other file)? Obviously, the Linux way is too restricted to handle such a simple use case. Freedom for the user has a lot to do with user-friendliness and a clever system architecture that can handle simple everyday use cases. But, alas, with Linux you depend on a distro and package-management software like a mobility-impaired person depends on a wheelchair…
… that he got as far as he did with it! The mere NOTION of binary compatibility is anathema to what the majority of your free****ots are always ranting and raving about.
It’s WHY hardware vendors have been slow to support Linux, it’s WHY things like APM still don’t work right in the 2.6 kernel… It’s why your real die-hard zealots use Gentoo.
The only thing surprising me was that Ryan didn’t see the shitstorm coming… Hell, doesn’t anyone else remember the shitstorm the concept of ELF itself had at the start? Or how many of your more radical free***’s still get their panties in a twist every time you mention ndiswrapper?
The negative response he encountered mirrors most every experience REAL software developers encounter trying to do things in linsux in the first place, which is why the number of viable commercial Linux endeavors can be counted on one hand.
… and FSF zealots: Commercial is NOT a dirty word. Go back to ranting and raving about how the man is keeping you down, how everything is the fault of the evil corporations, and protesting the idea of a unified world government.
It’s the standard Linux developer attitude, especially from Ulrich Drepper.
Solutions from others are always problematic. Problems the developers don’t have are no problems at all.
ReiserFS
realtime audio
schedulers for snappy desktops
etc…
All the stuff that matters, they are not really interested in.