Developer Frans Pop, author of debtree, posted an article showing the evolution in size of the GNOME desktop environment in recent Debian releases. The picture he paints isn't particularly pretty: the default GNOME install has increased drastically in size over the years.
Pop writes and maintains debtree, a tool which graphs the dependencies of any given package. After realising how many megabytes a default GNOME install takes up in Debian, he created dependency graphs of a few important packages, such as hal, which produces a truly scary graph. In any case, this table shows the development of the default GNOME install in Debian.
Pop is quite harsh about these developments. “Sure, some of that is real functionality, but a lot is also (IMO) redundant visual effects that only serve to slow the desktop down and junk needed to do stuff automagically. And a heck of a lot is duplicated functionality,” Pop writes, “One of the main reasons I switched to Linux was because it gave me back control over my systems, but with KDE4 and pervasive stuff like hal and all the various “kits” Linux is on a fast track that’s giving priority to flashiness over real functionality and eroding that control.”
Of course, all these extra megabytes make GNOME (and KDE) easier to use and prettier to look at, but is that really important, required material for a distribution like Debian? Shouldn't KDE and GNOME be engineered in a way that doesn't require all these strictly end-user oriented features? Windows has shown similar size increases, and only Mac OS X has recently managed to shed some weight by dumping (among other things) PowerPC code. Maybe it's time for the Linux world to focus on shedding some weight too?
It’s an interesting discussion, and I’m sure you guys have something to add too.
If you read his post he sounds like an old fuddy-duddy user who doesn't want his Linux install to actually recognize his hardware when he plugs it in, or his desktop to look nice. He should be using TWM and Motif and shut the f–k up.
I actually want my desktop to do something. I don't want it to be a whole bunch of disparate applications like it used to be in 1998. I want my desktop to look nice. That seems to be the general consensus among desktop users, if current trends are any indication. I wouldn't be defending GNOME growing in size if no functionality had been added, but that is not the case.
That's not to say that GNOME and KDE couldn't use more optimization and a better grasp of dependencies.
So, no sir, you get off my lawn, and take that old man smell with you.
You are going to extremes. If one is not happy with the size of GNOME, should one go straight to twm? There are so many window managers and desktops out there: Xfce, Enlightenment (E16 and E17), FVWM, Window Maker, AfterStep and on and on. Xfce is quite usable with enough eye candy, I reckon. And what about E17? So slim, and lots of eye candy.
BTW, please get off his lawn!
I recommend giving LXDE a try if you want a leaner, faster desktop; it works well for me.
http://www.lxde.org/lxde
Agreed. I used to love Window Maker and still do to some degree, but I switched to GNOME years ago because I found myself writing all kinds of scripts, patches, and other hacks to automate my desktop. It gets old after a while and I'd rather have a nice desktop that just works. GNOME fits that bill for me. If they stripped out the functionality that the author seems to be complaining about, I might as well go back to Window Maker.
Agreed. Now, I’m a big believer in making things as simple as possible… but no simpler. And I imagine there is room for simplification in places. But I learned a long time ago that when people prepare dependency graphs to make a point about complexity, it’s time to take the same skeptical attitude as when a professional magician walks out on stage. Depending upon exactly how the graphs are laid out, they can make the complex look simple, or the simple look complex… without being particularly obvious about it.
What the author needs to show is *specifically* how things could be simplified without sacrificing functionality.
Uhm … okay, but most of the recent tools have started to DEPEND on useless, utter crap like HAL and udev, and you can't really do much about it UNLESS you compile your own kernel. Is that fair? No. Is it against the KISS philosophy? Well, surely!
Well, there are a few types of users and I'm certainly not like you. I, for example, don't want the desktop to get in my way … and I hate automation. BTW, that's why I don't use Windows or Mac …
It's not about code optimization when the desktop environment loads a @#%@load of never-to-be-used crap …
Just what exactly is your problem with HAL and udev?
If you hate automation, you should be using pen and paper, not a computer.
HAL is ‘Hardware Abstraction Layer’. There are all sorts of ‘HAL’ in all sorts of different operating systems. It’s a generic term that could mean anything.
In the case of Windows NT the 'HAL' is an abstraction layer that sits between the kernel and the hardware. This allows the kernel to be portable.
In the case of the Linux desktop, HAL is a way for applications running on your desktop to access information related to hardware in a standardized way. It's currently being phased out in favour of 'DeviceKit', which is a better design but provides similar functionality.
The Linux kernel detects and configures all your hardware on the fly. Each time it boots up it does it. It then presents lots and lots of information on hardware and configurations (as well as other stuff) via /sys and /proc virtual file systems.
HAL/DeviceKit is designed to work with that and make things easier for userspace to deal with changing hardware configurations.
——————————-
Here are the various things and what they do:
UDEV = Configures your /dev directory and contains configuration information on hardware stuff. For example if your touchscreen is not showing up as a touchscreen, then you need to edit udev’s rules so that it gets properly configured as a touch screen.
HAL/DeviceKit = Collects system information and presents it in a nice way to userspace. Deals with notifications when hardware configurations change. Also performs some actions on behalf of user applications.
DBUS = IPC (Inter-Process Communication) for applications. There are two major 'busses': 1. System Bus, 2. User Bus. This is the protocol that is used to send notifications and other information back and forth between the system and user applications.
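If you are curious what that traffic actually looks like, dbus-monitor (shipped with dbus itself) can print it live. A minimal sketch, assuming HAL is running under its usual bus name org.freedesktop.Hal:

  # watch HAL's DeviceAdded/DeviceRemoved signals on the System Bus
  dbus-monitor --system "type='signal',sender='org.freedesktop.Hal',interface='org.freedesktop.Hal.Manager'"

Plug in a thumb drive while that runs and you will see the notifications described below scroll past.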
——————————————–
This is an example of how it works when you insert a USB flash drive:
* You insert thumb drive
* Linux’s USB system notices this and begins communication with the thumb drive, configuring it, analyzing it. Each major configuration detail gets sent out to /sys.
(for example, you have a USB device that gets configured, and the Linux SCSI protocol stack is used to abstract the communication with the USB Mass Storage protocol, so you get a SCSI device that shows up, etc. etc.)
* The HAL daemon is watching things change in /sys. For each new device that gets configured (some virtual, some real) it sends a notification over the DBUS System Bus.
* Gnome-Volume-Manager is listening on the DBUS System Bus for devices that have a 'volume' property indicating a storage volume that contains a mountable file system.
* Eventually after the device ‘settles’ the Linux kernel does the final configuration and sends the partition volume information to /sys
* HAL picks up on this and sends a notification to DBUS System Bus that a device has been detected and configured that contains a volume with a mountable file system.
* Gnome-Volume-Manager (running as your user) receives the notification and sends a request to HAL that the volume be mounted under /media
* HAL checks with PolicyKit to make sure that your user has the correct permissions to be able to mount ‘removable’ volumes, which you do.
* HAL mounts the volume at /media/disk and sends a notification of the new file system over DBUS
* Nautilus receives the notification and then creates the icon on your desktop.
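You don't have to take my word for any of this; you can watch each stage happen. A rough sketch, assuming a udev recent enough to ship the udevadm tool and a running HAL daemon:

  # terminal 1: kernel uevents and udev rule processing as the device appears
  udevadm monitor --kernel --udev
  # terminal 2: HAL's device list, updated live as it broadcasts over D-Bus
  lshal --monitor

Insert the thumb drive and you will see the kernel events first, then the HAL device objects (including the volume with its mountable file system) a moment later.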
————————-
The following is the ‘old’ ‘non-bloated’ way that you had to do stuff prior to things like udev and dbus:
1. You insert the USB Drive.
2. You open up the terminal.
3. You run ‘dmesg’ several times to find out how the device is detected.
4. You run ‘su’ or ‘sudo su’ to become root in a terminal.
5. You check to make sure that the correct /dev node exists. If it does not, then you have to look up the Major and Minor device numbers in the Linux documentation corresponding to the device and its partitions.
6. Then you run the ‘mknod’ command to create the device node if it does not exist.
7. Then if you want you can run the ‘chmod’ command to configure the permissions for the device.
8. Then you mount the device to /mnt or something else.
Now you can exit out of your root shell or stop using sudo and begin accessing that file system.
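For the skeptical, here is roughly what steps 3 through 8 look like as actual commands. The device name and major/minor pair are examples only (8,1 is the first partition of the first SCSI disk; check your dmesg output for the real values):

  dmesg | tail              # see how the kernel detected the drive
  su -                      # become root
  mknod /dev/sda1 b 8 1     # create the block device node if it is missing
  chmod 640 /dev/sda1       # optionally adjust its permissions
  mount /dev/sda1 /mnt      # mount the file system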
DISADVANTAGES TO OLD WAY:
* You have to use root permissions or use sudo. This is very insecure. Modern approaches have small system daemons that check user permissions with PolicyKit before performing functions that would otherwise require root permissions. This allows administrators to set group policies and configure things in a sane manner for lots of desktops and lots of users. For single-user systems it still allows users to perform common desktop functions without resorting to root permissions.
* The number of Major and Minor device node numbers that Linux supports is limited. This places artificial restrictions on the number of disks and different types of devices that Linux can support. Udev allows recycling and using arbitrary Major and Minor numbers to enumerate devices, thus eliminating this restriction.
* There is no standard system-wide notification. There is no way for ‘the system’ to notify userspace of system hardware configuration changes. This makes adding things like:
– USB Thumb drives
– Mice
– Cameras
– SD Cards
– Wireless and Wired Network devices
– Audio devices
– Joysticks
– WebCams
– USB Cdroms
– USB Floppies
etc. etc. etc.
It makes all that a completely manual process requiring the root account (through su or sudo, the effect is the same). This requires that users either have intimate knowledge of how Linux deals with configuring hardware devices manually through the command line, or that a system administrator be on call to handle this sort of thing.
* Any application that actually needs to watch and listen for hardware configuration changes has to implement the entire functionality of 'HAL' in itself, each and every application. So no shared functionality, code, or library. And it needs to run with a much higher level of permissions to access and examine the /sys and /proc file systems than is otherwise necessary.
So people can keep their 'bloat free' systems and unusable junk like that. Just because THEY (and I am not deriding the person I am replying to… not knowing how something works is one thing; not knowing how something works and then claiming it's nothing but shit and bloat is an entirely different matter) have no clue how things work does not mean other people don't.
Is it THAT HARD to understand that not everyone needs device automounting, media autostart, notifications, automatic recognition of file formats and all of these 'wonderful enhancements'?
As I said before, there are MANY types of users and not everyone needs or likes, say, the Ubuntu way of doing things. And as the guy above already said, this whole lot of enhancements brings a whole lot of new security issues.
Some people prefer to have minimal installations. Not because they're masochists, retards, gurus, geekies or whatever. It's just the minimal nature of some of us; certainly and thankfully not all of us.
@dragSidious: I think that everyone here knows exactly (or at least has an idea of) what udev and HAL are, but thanks for an exhaustive explanation. I'm sure someone will find it useful.
Lastly, I don't really need automation, and that's the point. I don't want some future malicious code to mount my devices and access my private information with a stored enc session, and so on, and so forth.
BTW, this is why I hate udev and HAL:
UDEV: it always MESSES up my whole HW config. It used to swap my ether interfaces, and I don't want to be forced to use the goddamn udev rules. It's unnecessary! I don't need my /dev dir trimmed in size. It looks just fine on my OBSD box, so why the hell should I need udev on Linux or FreeBSD? It's useless and it creates new troubles. I love the OLD and REASONABLE way of doing things.
HAL: that's a real pain in the ass. It used to f43ck up the whole system, making it unusable. The keyboard used to generate a whole bunch of doubled inputs, the mouse behaved oddly, and everything was just a mess. Again, I don't want to edit some goddamn policies! I don't need that. Everything worked GREAT without HAL, so why the hell should I sacrifice simplicity? That makes NO sense. Throw it into Ubuntu, but just don't make it the default for every damn distro and kernel version on earth. Not everyone uses Ubuntu.
ugh …
Sorry for the typos. I don't have time to correct them.
They would be better off switching to NetBSD instead of fighting windmills on the Linux side.
The real problem is that most of these enhancements [UDEV, HAL] are starting to invade some of the *BSDs … and that’s why I’m so mad about the idea itself.
FreeBSD has a devfs which is quite a different beast from udev on Linux. With udev you have a tmpfs mounted on /dev, and udev, a userland daemon, populates the /dev directory with whatever info the kernel sends it. It's fat, it slows down boot time, and there are the problems you mention with the ethernet cards, amongst others. Devfs, on the other hand, is in the kernel, slimmer, and orders of magnitude faster.

Linux used to have a devfs (I used it full time, never had problems with it) but it was deprecated because the code was said to be hard to understand and no one wanted to fix the races devfs was said to have. Kroah-Hartman's udev was chosen instead. And this guy said that devfs was flawed by design. Never mind that devfs has zero problems on FreeBSD and Haiku, for example.

But udev does have one advantage: it no longer relies on the minor and major device number scheme. That makes it possible, according to Kroah-Hartman, to plug in many more devices, which is needed because of USB etc… Bottom line: let's wait and see when FreeBSD and Haiku run out of minor/major device numbers ^^
Hey, did you forget to mention FreeBSD's HAL? It's there and it's as useless as the Linux one.
Rule one in Linux code land: don't believe anything Kroah-Hartman says. I'm in full agreement with you on devfs. I used it, I liked it, I adapted a few drivers to use it, and followed the saga around it closely; essentially Greg "Arsehole" KH drove the devfs maintainer away by being a complete ass. Udev is the dumbest idea ever: why have a userspace daemon manage something that originates in kernel space in the first place? Oh, and as for your major/minor numbers, have an ls -l on your /dev… yep, device nodes still have them. At least most distros are smart enough to put udev's tree on a ram disk, but this is a kludgy workaround that shouldn't be necessary. Solaris, FreeBSD, OS X… and just about every other UNIX out there has used a devfs for years. Only in Linux land could an inferior idea displace a superior one because of loud voices.
I’m sure you’ve read about Greg defending udev, but here it is anyway:
http://kernel.org/pub/linux/utils/kernel/hotplug/udev_vs_devfs
Sounds convincing enough for me.
Hmmm. I think you're right. It wasn't a crazy, off-the-wall argument he posted; he actually posted real, valid answers as to why udev was a better choice. Is it? I don't know. OSX seems to be fine without it, though I'm sure OSX has some form of userland daemon doing the same thing udev does now. Maybe OSX doesn't need to do it because of its limited device compatibility? Either way, an interesting read. Thanks!
It probably has. But anyway, before udev there was hotplug, which handled what I guess you are referring to, in conjunction with devfs.
The thing I remember being a big deal about udev, and I think it was mentioned in that link, was that devfs used to create all these devices in the /dev directory that just sat there. I think udev creates its devices based on what is connected, and only those devices. I could be wrong on that, but I remember they got rid of hotplug for a reason and it sounded like a good one at the time. Either way it's a moot point, because udev isn't going anywhere anytime soon.
That was indeed a major FAIL. I knew that major and minor numbers still exist. What I meant is that now they don't mean anything anymore; they're just a pool of numbers that are not bound to any specific driver.
These days it looks like distros are trying to make their boot times shorter. I wonder if they're going to open up the devfs bag again.
I think what shocked me most when I had to switch to udev was how much it slowed down booting, even with a tmpfs. I told myself: "oh well, maybe it'll improve over time". It didn't.
My ethernet card jumps to eth2 instead of eth0 for no good reason once in a while, which pisses me off because I've got to go edit /etc/udev/rules.d/blabla and then modprobe -r driver ; modprobe driver. No such problem with devfs. Fortunately I don't reboot often.
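Concretely, the edit looks something like this (the rules file name varies by distro, and the MAC address and driver name here are made up): pin each card to a name by its hardware address, then reload the driver.

  # /etc/udev/rules.d/70-persistent-net.rules
  SUBSYSTEM=="net", ATTR{address}=="00:11:22:33:44:55", NAME="eth0"

  modprobe -r e1000 && modprobe e1000

Not pretty, but it beats rebooting.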
The Return of DevFS:
http://lwn.net/Articles/331818/
If you just really don’t want to use HAL, then just don’t use it. Choose a distro that caters to your needs and stop whining. None of your complaints have any technical merit anyway.
perhaps if you used an OS that made your life a little easier, and took some of the drudgery out of your day, you’d have time for editing your spelling and grammer errors.
Moust peepol start eech sentance with a capitol lettur.
Few peepol seam to nou how to use kommas correctlie.
Missspelling grammar in a sentance about improving it is pryesless!
Cheers,
Emil
I'd rather say that not everyone was born in a country that makes any use of English, so I don't think the typos and grammar errors in my posts are anything weird.
I also think that native English speakers should focus on themselves; most of them don't know how to use their own language, and that is weird.
I was responding to a poster who, in defense of his hatred of the GUIs, automation, and lack of annoyances that newer OSes and desktop environments offer, said he'd rather spend his time doing all that stuff manually, then had the nerve to complain he didn't have the time to check his spelling.
I personally couldn’t care less about my spelling. Try reading the preceding posts instead of just responding when you have no idea what the topic is.
Those things make people's workflows faster. Anyway, that doesn't describe the full benefit of the 'Utopia' approach.
They solve a lot more serious security issues as well.
That is one of the reasons I like Debian. When I want a desktop OS I can have one. But when I don’t then I don’t have to have one.
I try.
Think about this:
Using the old-fashioned way of managing and mounting external drives, you have to provide the root password, quite often multiple times. Or you have to use sudo, which is just as bad.
Using things like HAL, you have a privileged daemon that performs the action on your behalf based on rules (configurable via PolicyKit). As long as your user has permissions for the device, it will mount it for you. You don't even have to use 'automount'; I like to use 'pmount', which is the command line version of things.
This way you can mount and access removable media without providing the root password (or sudo, which is largely equivalent), without granting yourself permissions on the device, and without editing fstab or anything like that.
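For example (the device name is a placeholder; pmount defaults to mounting under /media):

  pmount /dev/sdb1     # mounted at /media/sdb1; no root password, no fstab entry
  pumount /dev/sdb1    # and unmounted again

No su, no sudo, and the policy of who may do this stays centrally configurable.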
Suppose you're in a situation where a Firefox vulnerability has been exploited and you have malicious software running in your account without being aware of it.
Which is more likely to cause problems: you continuously using the root account every time you want to switch networks or mount a flash drive, or using HAL and related daemons to perform that action for you outside of the account?
You have similar issues with tools like gksudo and whatnot. When you allow sudo access to a GUI application, you're effectively giving full root access to that entire application. GUI applications are hugely complicated and very easy to exploit… whereas with dbus all you have to do is make sure that the input privileged applications receive over dbus is validated before it's acted on, and you avoid almost any possibility of exploitation.
So while having dbus and that sort of thing running on your system does open new possibilities for exploitation, it also provides a much, much smaller attack profile compared to running an application using sudo or setuid root or whatever.
Udev provides a way for Linux to enumerate hardware devices consistently (as well as other benefits, like defeating the limitation on the number of major and minor device nodes).
Prior to udev rules, Linux enumerated devices on a first come, first served basis. That is, when the system boots, the first device that gets detected is eth0, the second is eth1, and so on and so forth.
This is fragile because if you upgrade your kernel or change your hardware configuration in some trivial manner then this can affect the order things are detected. So while the old way worked ‘ok’ for static hardware configurations it had a hard time coping with modern systems that are much more flexible.
So Udev on many systems tries to store static configurations.
With OpenBSD they hand out device names based on what drivers you use. So if you have an onboard VIA ethernet and then a PCI Realtek ethernet, they show up as completely different network devices. In Linux it's customary to have all ethernet devices show up as 'eth0', 'eth1', 'eth2', etc., regardless of the actual driver. This means that OpenBSD is able to avoid the enumeration problems that often caused trouble with Linux routers in the past… but I expect you run into similar issues with OpenBSD when you use multiple devices that use the same driver.
Oh, and if you prefer the OpenBSD way of naming devices, you can replicate it using udev. You can use udev rules to give ethernet devices arbitrary names, although this may cause problems with some system scripts. For example, you can set up a router with 'external0' and 'internal0' ethernet devices. So in fact udev gives you _more_ control than is otherwise possible.
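A sketch of what such a rule set can look like (the file name and MAC addresses are placeholders):

  # /etc/udev/rules.d/70-router-net.rules
  SUBSYSTEM=="net", ATTR{address}=="00:16:3e:aa:bb:cc", NAME="external0"
  SUBSYSTEM=="net", ATTR{address}=="00:16:3e:dd:ee:ff", NAME="internal0"

The caveat mentioned above applies: some init and firewall scripts assume the eth* pattern, so test before deploying this on a remote box.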
Except it did not work. For example in X.org there was no such thing as mouse hotplug support.
How it worked prior to X.org Hal/Devicekit support was as follows:
X.org has _zero_ hotplug support for anything. No mice, no keyboards, no nothing. The configuration is static and is set at start-up time through /etc/X11/xorg.conf
Linux has _very_good_ hotplug support. Starting with things like Knoppix, Linux gained the ability to detect and configure all supported hardware on the fly with good success.
So, to make mouse hotplugging work, Linux set up an emulated ExplorerPS/2 mouse at /dev/input/mice. If you configured X.org to use that, then you could use a static configuration for a standard 2-button PS/2 mouse with scroll wheel. (Eventually X.org just started using this as the default, whether configured or not.)
So every pointer device you plugged into Linux would end up being used as a generic PS/2 mouse… even if it was not a mouse.
In order to make something other than a mouse work (Wacom tablets, touch screens, joystick pointers, trackballs, touchpads, mice with more than 3 buttons, etc.) you had to go in and manually configure and list every device in xorg.conf. Any device added or removed required a change to the configuration file, and required you to log out, restart X, and log back in.
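For those who never had the pleasure, a typical hand-written stanza of that era looked something like this, one per input device, all in /etc/X11/xorg.conf (the identifier and option values are illustrative):

  Section "InputDevice"
      Identifier "Mouse0"
      Driver     "mouse"
      Option     "Protocol"     "ExplorerPS/2"
      Option     "Device"       "/dev/input/mice"
      Option     "ZAxisMapping" "4 5"   # scroll wheel mapped to buttons 4 and 5
  EndSection

Multiply that by every tablet, trackball and touchpad you own, and remember that a mistake could leave X unable to start at all.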
For me personally, I used a Wacom tablet and a complex trackball, and it would often take numerous tweaks to get everything working correctly. Doing the necessary tweaking would take several _hours_ of making a small change, logging out, restarting, testing; small change, log out, restart, test, etc. etc.
Nowadays I just plug in my tablet and it ‘just works’.
Like everything else, it worked fine if you had a _very_specific_ configuration and made _no_changes_ to your hardware configuration or setup. It was 'ok' for the most common configurations, but it quickly broke down if you did anything remotely advanced.
Yes, in fact, it is that hard. I absolutely can’t understand why anyone would want their desktop to be unusable and why they would care to micro-manage things like device nodes. I want my computer to Just Work so that I can get things done, instead of fiddling all day with configuration files and obscure commands.
People like you are exactly what’s keeping Linux behind on the desktop.
“Developer> hm, XXX sucks, why do I have to do this? let’s make it easier
You> NOOO! BLOAT! DO NOT WANT!
Developer> okay, since users are protesting so much I’ll leave it be.”
[…nothing ever changes and the system remains in its 1960s state…]
I’m glad that a lot of developers are sensible enough to ignore people like you.
Mind you, a lot of us who have that “old man smell” are exactly the people who are writing software for you.
Oh, the ungrateful Linux youth.
Writing the software for me? Ungrateful youth? How funny, because I'm a programmer myself and I strive to make my software as easy to use as possible for my users, even if that means I have to spend several more hours to make something just a little easier to use. In fact, my attempts to contribute to Linux desktop usability are usually undermined by overly conservative people like you.
Old man.
<joke>Of course not. You are too busy typing in mount commands each time you plug in your USB key or insert a CD-ROM!</joke>
“I want my desktop to look nice. That seems to be the general consensus among desktop users, if current trends are any indication.”
That's because you are not a professional whose primary focus is finishing his job, and because you have probably never seen an expert's workstation and draw your conclusions from flashing your weeny at your best friends. There's simply no other explanation for your attitude.
On the other hand, the state of modern computing *is* embarrassing; not because of lack of flashiness but because of the chaotic state of the industry.
It's not about looking nice, it's about things working when you plug them in, which HAL helps with. It is about automation. I don't want to have to worry about crap not working unless I write a script or configure something.
If I plug in a printer, I want the damn thing to be able to print. If I plug in a USB drive, I don’t want to have to mount the thing myself, I want it automounted.
I am a professional, and I don’t have time to be screwing around with scripts and the command line to get basic functionality.
Maybe your time isn’t worth very much, but my time is. I’d rather spend it working.
Right. I find it weird that people here are lumping essentials like udev in with "unnecessary bloat".
Also, people forget that "fat" dependencies reduce another kind of bloat, the kind caused by duplication of functionality (which leads to buggy code). The disk space bloat is harmless, in that none of the superfluous applications are running unless you launch them (unlike the bloat caused by, say, Akonadi launching a MySQL server on boot).
I believe what we have here is a lot of noise over a non-problem. It's Linux; you can strip it down as much as you want.
Unfortunately, you'd rather spend your time FIXING the problems caused by automation, which is far more complicated than a hand-driven OS. People like you tend to act like you don't care what's under the mask. If so, then just use a Mac and don't say that "everything needs to be auto". You can also use Ubuntu and waste your time fixing auto crap.
Don’t presume to tell me what I like doing, or what OS I use, or what I should use. I prefer to make my own decisions, thank you very much. You don’t know me, so just sit down.
I use Debian because I like control. I do care what is under the mask, but I don't care to mess with the OS when I have work to do.
I use Debian because it is the perfect balance. I don't spend my time fixing anything. I plug in a monitor, it autodetects the display; I don't have time to be editing modelines and xorg.conf. I am fully capable of building my own xorg.conf, but choose not to, because my time is too important.
I plug in a USB drive, it detects the drive and automounts it. I am fully capable of mounting drives, but choose not to, because my time is too important.
I plug in a printer, it's autodetected and configured. I am capable of setting up a printer, but choose not to, because my time is too important.
I don’t know where you get this assertion that you have to spend time fixing things due to automation. I don’t think I ever have, at least since I switched to Debian. When I use Windows, it’s the same thing. I don’t spend time fixing issues, I spend time working. I have code to write and servers to run, and it’s 2009, not 1998.
This is not a server I’m talking about, it’s (say it with me now) A DESKTOP.
When I want lean and simple I use FreeBSD. If it's a server, there is no GUI, regardless of whether it's BSD or Debian; I want maximum control. If it's a desktop, I want to get to work, and not have to cobble things together with scripts and config files to accomplish what lowly Windows has been able to do since Windows 98.
I think you hit the nail on the head here, where most are missing it. Hardware and software are changing, and so is how we interact with them. A lot of people are not able to look far into the future, or even like the future, because it brings changes. Changes that force them to learn and to start thinking again.
It makes people uncertain and puts people who can't adjust at a disadvantage. I have seen it in the last couple of years with the migration from Solaris 8 to 10. As some sysadmins did not spend time learning, they are missing the edge and getting further behind every day. And to be honest, we are leaving them behind.
In the changing Linux world there will also be casualties. Most likely real users will not be affected, only the wannabes, the diehards, the ones that use Linux or BSD to be c00l and 1337. But as you said multiple times, it should work, and I can't agree more. If people have problems seeing how in-flight migrations happen, as with HAL and udev for example, then let it be.
I think I'm an old man who wants things to work without too much overhead. HAL brings some overhead, but it gives more in return. The same goes for DBUS, for example, and people who want to know what or why should take a look at GNU/Hurd. Or they should think about a way to make one implementation that directly solves all their problems. The waterfall model doesn't work; steady progress does.
Just my 2c, and time to do a weekly update of Squeeze.
Ok… suddenly this discussion went from GNOME to something broader… But just for the sake of the argument:
As I stated before, I'm not a power user, nor a noob… but I'm all in favor of automation. I've been using GNU/Linux for almost 4 years (with an occasional jump to Windows, something I'm ashamed of), starting with Slackware (a great learning experience) and then running Mandriva, Fedora, SuSE, Ubuntu… What's the point of this ridiculous résumé? Well: Slackware is really a great distro, similar to any BSD, but in order to get things working you needed to get your hands dirty. I was learning a lot, so I'm not complaining, but now all I want is my OS working, with tweaking as an option for me. As I said in a previous reply, I use Linux Mint (derived from Ubuntu and all its auto-crap) and, believe me, I have not found any trouble at all with all its automation, even considering that I use the command line for a lot of things regarding configuration…
Not everything needs to be auto… but that's a choice the end user has to be able to make. For some users, the really experienced ones, doing things manually is the way to go; for others, automation is the right thing. That is the greatest thing about GNU/Linux, or Free Software/Open Source in general: you can choose what better suits you, not being forced to accept things just because someone considers them better for you.
“I am a professional, and I don’t have time to be screwing around with scripts and the command line to get basic functionality.”
How do you know what needs to be scripted, and whether it does, unless you're doing it yourself? And what kind of professional wants to wait for his flashy windows to do their stuff and then drag them around with a mousey and click on candy-like buttons? Or are you a hard-core professional and have it all in black?
The idea of a GUI is laughable. It's not a natural way to communicate with the machine but a greased copy of an ancient trick taken out of context. I do not use GUIs, or try to avoid them, for the reasons you mentioned and others.
Who was talking about eye candy? I'm talking about the sort of automation provided by udev and HAL. I'm talking about CUPS compared to LPD. I'm talking about accessing SMB and NFS shares from Nautilus and Konqueror.
GUIs are a fact of life. Get over it.
I don't want eye candy, I want functionality. If you can't separate the two in your mind, then that's your loss.
Funny you say that, because I happen to be a professional audio engineer and graphic designer who, you know, actually makes money doing what I do.
But you know us artistic folks: we do nothing but waste time on little things like usability and ease of use and having things actually look and sound nice. In fact, those who use other OSes where these things are important seem to get more work done. No one here is going to argue that there aren't more working professionals using OSX and Windows than Linux.
It's human nature: people want nice things, it makes them feel better. Case in point: I've owned a copy of Logic since version 6 but didn't really start to use it until version 8. Why? The UI sucked, it was a mess, it made no sense and it looked like pure ass. 8 changed that: the UI is nice to look at now and the functionality has been revamped to coincide with the new look. 9 has taken it even further.
Nice to know, man. I was a professional graphics programmer and made MIDI gear out of Amigas back in the '90s. It was easy, not because of a GUI, but because every aspect of that machine was thoroughly documented and the OS consistently crafted by a single jolly team.
You do not consider a mouse + display to be audio equipment? It has nothing to do with making music; maybe you use it to kickstart your fancy virtual studio, but you need real gear from there on. If you do click-make everything, then congratulations, you are a prodigy.
Well, if this were 1988 then yeah, what you described is how music is made, but that is not the case in 2009. Pro Tools, Logic, Cubase and Live are all graphical programs which in some instances require a mouse and keyboard to manipulate. Kontakt is an extremely popular software sampler that requires the use of a mouse; Maschine has an external controller but still requires the use of a mouse and a display. I don't know what era of music making you are stuck in, but real pros use Pro Tools and Logic, no questions asked, especially for audio engineering, and guess what: they happen to be graphical applications. Pro Tools has fancy interfaces and hardware where you can control the UI without a mouse and keyboard some of the time, but you really aren't getting much done without them, especially when it comes to editing audio.
Live is extremely popular among DJs and is primarily used (gasp) live. It requires a mouse and keyboard as well, at least for the setup; in a performance you can trigger loops using whatever controller you wish. Most just tap a button on their keyboard to trigger loops. DJing almost requires a computer now, as most DJs opt for the convenience of something like Serato or Traktor over plain old vinyl or CDJs. It's rare nowadays to see a DJ walk around without a laptop as part of his gear, and they will use the mouse and keyboard throughout their set. Think I'm wrong? Watch the DJ next time you are in a club. He will have a laptop, maybe some turntables, an audio interface, and possibly some kind of triggering device (a small 25-key keyboard, a pad controller), and he will still use the mouse and keyboard throughout his set to load songs, set cue points, loop, trigger effects, etc. There is no way around it. The mouse and keyboard are part of musical culture now, as much as the computer itself; they're a natural extension of it.
Dude, I can be here all day schooling you on music gear and software. It's what I do. I take what I do seriously and go out of my way to know what the f–k I'm talking about. Like I said, I'm a professional. The trend amongst all music-related software is flashy, good-looking interfaces. Does it add functionality? No. Does it make the software feel more professional? Yes. I pointed out Logic as an example; even Just Blaze has said that once Logic was revamped it became "logical" to use. I know, I was there at Remix Hotel hearing the words flow from his mouth.
Thank you for your lengthy reply. Our opinions differ, but that's all right. I've lived with GUIs for 20 years and I always try to avoid them; can't help it. I've also made music on 0.5 MHz/64k RAM computers, so I can tell bloat when I see it.
Well, if functionality had grown by as many percent as size has, that would be magnificent. The sad thing is, none of this is a unique "feature" of GNOME; a lot of Linux apps follow this same path of growing more in size than in features.
I pushed a machine from Lenny to Squeeze. Sure, Squeeze is in testing status and things are expected to break, but there was enough difference between KDE 3.5 and KDE 4 that the system ended up a broken mess. Ye ol' dist-upgrade went badly and I ended up scrapping the entire system for a clean Squeeze install.
Now, having both Lenny and Squeeze machines running, I can tell you that KDE 3 is a heck of a lot more efficient. The stuff that supports hardware is all well and good, but KDE 4 was forced on users rather than allowing a smooth transition to Squeeze with KDE 3.5. I like the graphic bling on the small box, but the big box with KDE 3 runs a heck of a lot better.
I do think he goes over the top in some regards, although the current situation is absolutely stupid. There is gvfs, which is supposed to replace gnome-vfs; by 2.28 there should be no reason why gnome-vfs is included anymore. FAM has been replaced with Gamin, so why hasn't GNOME standardised on one piece of technology instead of offering numerous variations? The spiderweb of dependencies has been known about for ages; there has been a plan to simplify it, but no one is willing to put the hard word on the maintainers by saying, "no, if you don't upgrade your application, your application will be booted from the GNOME desktop".
I mean, Christ almighty, Bonobo (plus CORBA and numerous other components) has been deprecated for how long, and Evolution *STILL* uses it?! WebKit has become the de facto standard open source rendering engine and yet GNOME is still hobbling around with gtkhtml and Gecko? All of this should have been sorted out by now; 2.28 should have heralded the shedding of all this crap.
Don't get me started on HAL. Bless the cotton socks of the programmer who originally came up with something to address the original aim of trying to get Linux to "just work" and improve the desktop experience, but things should have moved well beyond HAL by now, to a system where detection is based on the hardware notifying the kernel of its presence, which then sends a signal via DBUS to a hardware "management" dispatcher that decides, based on a set of rules, how to handle a given device. HAL right now is still using the retarded, CPU-hogging, unscalable polling that even a person with Computing 101 knowledge can see is retarded.
There are problems and they're not being addressed. You need leadership from the top down, where a target can be set, a roadmap laid down, and contributing programmers know, "OK, this is the target we're aiming for", instead of a project wandering around the wilderness trying to work out what it is supposed to be moving towards.
Actually one should use inotify instead of FAM/Gamin…
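For what it's worth, you can poke at inotify straight from the shell with inotifywait, from the inotify-tools package (the directory being watched is just an example):

  inotifywait -m -e create,delete,modify /media

That prints one line per event as files appear and disappear, with no polling daemon required.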
And add more Linux dependencies?
The tunnel vision. Monoculture there, monoculture here.
>And add more Linux dependencies?
>
>The tunnel vision. Monoculture there, monoculture here.
Why not? Most people don’t give a shit that Kraken Desktop Environment (KDE) runs with [insert your favorite irrelevant kernel here].
Really?
How many platforms out there support FAM? Gamin?
Gnome already uses Inotify. It’s been like that for a while. Debian sometimes likes to install famd, but it does not have anything to do with Gnome.
It’s not Gnome’s fault that other Unix systems make shitty desktop platforms and it’s not Gnome’s problem to fix their deficiencies.
Gamin on FreeBSD, for example, has to open files to monitor them, leading to performance issues, and it runs into limits related to the maximum number of open files.
There are similar issues with graphics. It's good to be portable, but you actually need something on the other platforms to support the features. You need something to be portable _to_. If the other platforms lack the basic functionality, then you're kinda screwed, aren't you?
The solution isn't even that. What is required is a grand unified HAL which abstracts everything the desktop needs via an abstraction layer, so that all one needs to do is port the HAL and the desktop will compile on top.
The HAL needs to be operating system agnostic and the GNOME programmers unconcerned about the underlying operating system; if a feature is missing in HAL then it needs to be added; if the underlying operating system is deficient then the maintainers of that operating system need to be notified of such a deficiency and put the responsibility of implementing it on their shoulders.
There was talk of such an idea but it never got off the ground, yet it is needed; evidence alone is how horrible some of the GNOME code is, trying to accommodate every operating system in the software itself rather than making all calls through one consistent underlying API which abstracts all the operating-system-dependent calls. Oh, and it would not be dependent on GNU extensions; it would only use what is uniform across all platforms. For example, if you need to use grep, don't use the GNU extensions to standard grep. Many times these libraries can achieve what is required without the GNU extensions; you just need to build several components to reach that end result.
It's not going to happen, because it would require 6-12 months of solid design to ensure that it covers everything which needs to be covered, as well as designing it in such a way that features can be progressively added without breaking compatibility. I'm willing to work on such a project, but like previous attempts to make a contribution, it has fallen on deaf ears. So rather than persevere, I roll up my knapsack and mosey on back to Mac OS X, with the occasional glance over my candy-coated iMac to see how things are going over there. Once I see nothing has happened, I go back to enjoying what I have.
To me the solution is less abstraction, less portability cruft, and less operating system agnosticism: a Linux userland which is native to Linux. Fast, lean and mean…
I don’t care about running the Kraken Desktop Environment on Windows/MacOS X/whatever. And most people don’t either. Most people don’t care about multi-booting OSes. They just want one OS that fits all their needs. Currently all OSes just aren’t good enough in the desktop department.
I don’t use GNOME, KDE, Windows, MacOS X for this reason. I don’t mind that they exist, but the world deserves something better than that.
Desktop app development on Linux? No way. I don’t want to have to learn 10,000 ways to do the same thing. I want a consistent API.
To sum up things desktop-wise:
Windows: bloated, insecure
GNU/Linux: bloated, a rat’s nest
*BSD: less bloated, and less a rat’s nest than Linux, but the kernel is not as advanced as Linux, and it’s server oriented.
MacOS X: bloated
/me goes back to look at Haiku
The solution does already exist, and has been largely deployed already. It continues to evolve and improve to cover more and more hardware classes. Its name is Solid.
But the problem is that Solid has been written from the ground up with Linux as its only consideration; if you want to make it available on a non-Linux system, you're required either to change your operating system to behave more like Linux or to make major changes to Solid.
Add to that the fact that Solid is a KDE project, and I don't see it making its way over to GNOME anytime soon. I thought DeviceKit might be the silver bullet, but as I read more into it, it is just as bound to Linux as HAL was. I swear there is a subculture within the open source world that seems hell-bent on a Linux monoculture rather than on developing solutions that are agnostic and promote diversity.
Wrong, it was written as an abstraction layer above the OS, designed to handle different OS mechanisms. It abstracts away the OS-specific parts for application developers.
No you don't; you only have to write Solid backends to talk to your OS-specific functions. Major changes to Solid are not needed, since this is exactly what it was designed for. Think of it as Phonon for hardware. Besides, Solid already has functionality for *BSD, Windows and OSX.
There you are right; the famed GNOME NIH syndrome will make that a certainty. More likely they will implement their own version in a couple of years.
Well having KDE, GNOME, X.org littered with piles of abstraction layers to be able to run on every OS out there is “monoculture” too. It would be more “diverse” if all OSes had a different desktop solution tailored to the OS. Anyway “monoculture” and “diversity” are buzzwords.
You are wrong. What we clearly need is for the people who use this software to contribute by coding. Surely anyone can code; it's not much harder than learning a new language. We don't need leaders and heavy organizations; the community-driven model is much better, since a person who codes whatever he likes writes the best code. Did you know that in the time it took you to write this complaint you could have improved those things by contributing code, and could have saved a baby seal … Wait a minute, this isn't my MOSS cup, FO…
Wow, what planet are you living on, exactly? Or is it a parallel universe where most people work for the computer instead of having the computer help them get their work done? Most users do not want to code, do not ever want to know how the computer does what it does, and shouldn't have to care. This is a growing attitude problem in the foss community of late; next thing we know, if people like you had your say, we'd all have to contribute X amount of coding hours before the software would even work for us. Ridiculous.
Which is why the community-driven model has resulted in such a rat's nest? We really do need someone to take the reins and say to all the developers: "This is where we are going; if you don't like it, go code somewhere else or start your own project." A thousand people pulling in a thousand little directions is precisely the problem. They need to lay it down and stick to it. That's how Microsoft and Apple got where they are: they know what they want and that's the end they code for. It doesn't please everyone but, as the massive ubiquity of Windows should tell you, it's good enough for the average Joe. That's also the reason Apple is on the rise, at least in the U.S.: they're delivering an attractive alternative to a good majority of people (myself included). Is it perfect? No. Do they clearly have a design in mind? You bet your ass.
Agreed, but there are low-level things that almost no one ever likes to work on, and those get neglected under the current community-driven model. While the person who codes what they enjoy certainly writes the best code, what about the parts that are necessary but that no one enjoys (low-level abstraction, debugging, adding features that are demanded but aren't wanted by the coder)? There are a lot of projects that become abandonware or otherwise don't really improve, because once the coder is done and it "works for me", as most of them seem to say, it still needs to be tested and debugged in a variety of different setups. It's not fun, it's not glamorous, but it is necessary… and almost no one in the foss world really wants to sit down with their project and iron out all the bugs that are reported by users. Do you know how many legitimate bugs get triaged away by developers who either don't think it's an issue or just don't want to deal with it? And that's if the users are lucky and don't get a response back saying something egotistical to the effect of "Well, if you can't fix it, shut the f**k up". Most of those responses come from people with your view, I would guess… the egotistical children of foss.
Oh, sure. Provided upstream likes it of course and allows the fix in, as long as it doesn’t stomp on some other idiot’s ego that someone else dare modify his feature or add something new. And don’t tell me it never happens (Glibc, anyone?).
Note also that Linux kernel development doesn’t work this way either. Patches that don’t adhere to the (high) standards of the Linux kernel get rejected, and there’s a leader namely Linus.
Distros should do the same. Reject apps/libraries written in exotic languages, with exotic dependencies, not modular or fork them.
Exactly. The community development of the Linux kernel works well because there is a leader who is willing to stomp on a few egos to get the results he wants. Distros really aren’t the problem though, it’s the upstream software in general that has the issues for the most part. Sure, distros can patch the heck out of it and maybe apply a Linux kernel-like model to such patch submissions, but that doesn’t get to the root of the problem: the fact that most of the software is simply not tested the way it should be and the developers having little interest in fixing bugs that they don’t find important. It’s all well and good if someone in distro x starts patching it, but then distro y may do so in an incompatible way and before you know it we’re even more fragmented than we were.
I don’t think it would create de facto compatibility problems if standards were imposed upon userland software.
Off the top of my head, right now the following libraries on my system are duplicated (I spent some time minimizing the number of dependencies on my system):
libid3tag
id3lib
taglib
gnutls
openssl
libmcs
xfconf
libmpg123
libmad
cdparanoia
libcdio
mac
ffmpeg
audiofile
libsndfile
libarchive
libzip
libexif
exiv2
libdownload
curl
libsoup
libgsasl
cyrus-sasl
gstreamer
xine
* other misc. cruft:
boost
* some more are being worked on as we speak
thunarvfs
libsexy
libglade
gtk+
It's just crazy. I bet I'd be using less than 64 MB of RAM at bootup if one of each of these were settled on.
Wow, finally someone has realized something dead obvious! I was always shocked by the fact that people are actually letting the bloat grow. That's completely against the KISS philosophy … and that's why I always avoided Debian and RH*, and chose Slackware and Arch for their lightness and speed. I prefer to build my own desktop from the ground up rather than use a "one size fits all" distro.
BTW, I also thought it was dead obvious that HAL and udev in particular are unneeded crap, but most people think the opposite. That's really weird. Fortunately we all have options to choose from, and that's very good.
“Shouldn’t KDE and GNOME be engineered in a way not to require all these strictly end-user oriented features?”
I thought the point of KDE and GNOME was to have end-user oriented things?
The limits of the ‘bazaar’ development model heh?
I can't wait until some .NET/Java/Perl/Python/[insert your favorite toy language here] scripter says that hard drives are so cheap that 3 gigs doesn't really matter
/me goes back to looking at Haiku…
You’re not seriously suggesting that .NET & Java are toy technologies!? Perl and Python too are certainly widely used by many people to get real work done.
And you know what? Sometimes 3 gigs DOESN’T matter. You can sacrifice storage/dependency footprint for features, reliability, shorter development time or other things and sometimes that’s the best choice to make.
There are four main reasons why this shit exists:
* For people who can’t code to not feel left out.
* Increase “productivity” of corporations (i.e. do a bad job fast to make more $).
* For hardware vendors to make more $ because then you need to pay the hardware tax every year to run this shit at an acceptable speed.
* To lay off good programmers because they demand a better wage than some scripter.
Every single bit matters.
No, it's never the better choice. You should hire more programmers to code and audit if you want shorter development times and higher reliability. It takes a certain number of man-hours to do a good job, and you can't compress that losslessly.
No, you can’t compress it losslessly. That’s the whole point. So you can get things done quicker and to a higher quality, but at the cost of a bigger disk/dependency footprint and maybe a slightly slower runtime too (as long as it’s fast enough for the user to be happy).
Writing good software is about understanding and satisfying the needs of the customer, and not bankrupting the customer in the process. If the customer doesn’t need the software to run acceptably on a computer more than 5 years old, don’t spend time and money focusing on providing that. Same goes for disk space. If you would focus on these things to the detriment of anything else, then you’ll go out of business because competitors will undercut you by doing what the customer actually wants, instead of what your ego wants, and the competitor will probably do it cheaper to boot.
Even in Free Software land, the best projects focus on providing useful features and reliable software. The worst ones are perpetually crippled by the lead dev’s bizarre obsession with some stupid minor facet of the software that nobody else cares about.
What you say is true. The only thing is, it shouldn't be that way, because that's how inferior software becomes dominant. Think BeOS compared to Windows.
The behavior of competitors doing what you describe amounts to nothing other than swindling and some measures should be taken against that.
Besides, we do have time; there are still a few years left before planet earth is uninhabitable.
That’s called “competition” and it’s far from swindling. If you don’t do what the customer wants and another company does, charges a fortune for it, but the customer believes it’s worth it, then it’s not swindling. The customer willingly paid for service rendered by them that you did not provide. That’s called competition, and it’s one of the things that keeps the market going. Next time, if you don’t want to get undercut, do what the customer asks.
The important word here is “believes”. Belief is not fact. If you induce someone into believing something that is false then I call it a swindle.
Software that takes more RAM/CPU cycles/HD space than required is bad software.
You can call it a swindle all you want, but in the end it is not what you believe about the software that counts. Most company managers are satisfied with “good enough” and are willing to pay more to get it done quicker. Most of them are not computer geeks, they do not care about disk cycles and CPU execution times (a few ms here and there makes no difference to them at all). You can rail against it all you want, but in the end you’re still out the money they gave to someone else because you wouldn’t do it in the way or the timeframe they wanted. If you’re writing software for someone else, your beliefs are irrelevant. Your objective is to satisfy them, not yourself. If you don’t get it done fast enough, your beliefs aren’t going to get them to give you more time.
What you say is true, but it doesn’t have to be that way.
If everyone starts to think that nothing can be done about it and that one has to bow to the system, then it won’t change.
Well, these days we have a handful of DEs and WMs to choose from. I believe GNOME and KDE are trying hard to fulfill their goals of user-friendliness, so if one thinks that all that is just bloat… well, use another desktop. I’ve read about many power users who prefer FluxBox, IceWM or the like, so why not stick to those and let GNOME and KDE be the great things they are for the rest.
I feel it’s kind of selfish to want something to be different just because you don’t like it, when you have so many choices to suit your tastes.
Good and reasonable. No one can object to an argument like that.
But you seem to ignore the big trend towards a situation where one can administer the system only from badly designed GUIs, which mostly require the whole of GNOME to function at all. In some respects the system and X actually require these desktop environments to function properly. NetworkManager, DeviceKit, HAL… Take your pick.
I repeat myself, but the so-called “user friendliness” is a Trojan horse that is wreaking havoc all over the place, regardless of whether you prefer TWM instead.
+3
@strcpy:
Mmmm… I do not (repeat: do not) consider myself a power user. I use Linux Mint, and many people here can say it is one of the easiest distros around… But, well, I prefer configuring it from the command line rather than with the GUI tools provided. There’s mintupdate, mint* (put whatever other name you like) to manage almost anything a basic user can ask for, but I, not a power user, feel more comfortable on the command line.
What I’m trying to say is, perhaps many distros provide a lot of GUI tools to configure the behavior of the OS, but we can always resort to the command line to manage anything… and see everything working as it should… (well, at least I haven’t broken my system in a very long time…)
Of course… HAL is something beyond my reasoning… But it really does simplify my life. (I remember a time when I tried FreeBSD 4.x, and you actually had to add the device nodes manually in order to get some HW stuff working… Correct me if I’m wrong, but HAL is the thing that gets rid of that annoyance for me, isn’t it?)
Well, if certain devs didn’t have that type of selfishness, we wouldn’t have that many options to choose from, would we?
@l3v1
You’ve got that right. But, following that trend, wouldn’t it be easier for him to start a new project of his own?
Of course, it’s easy for me to say that not being a developer… But I stick to my point: everything is a matter of taste…
CSS? Do they know the meaning of it?
I believe this demonstrates the main reason why Linux has been a failure on the desktop.
It is a huge clusterf***.
I am not arguing that users actually care about dependencies, but the picture clearly shows something that cannot be called good engineering, something that continually leads to many other problems as well.
As if that were not insane enough, Debian is also abandoning /usr as a standalone file system and putting glib into /lib, because someone decided that libc is only usable when called through wrappers.
One step farther away from UNIX.
… and one step closer to Windows.
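For anyone who wants to reproduce the article’s graphs locally: debtree writes Graphviz dot output to stdout, so (assuming the graphviz package is installed) a pipeline along these lines should do it:

    debtree hal | dot -Tpng -o hal.png

The resulting image makes the cascade of dependencies visible at a glance.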
I put in GNOME for other people, take out the top menu bar, give them all the app icons they need in the bottom one, along with four virtual desktop icons, and show them how to use the virtual desktops and the icons. They seem very happy, and in no time they have all the files they regularly use spread all over their desktops.
Well, it’s how they work normally; their own physical desks have all kinds of stuff all over them, in no particular order except whatever they happen to have got used to.
Me, at the moment I’m using Fluxbox with some kind of olive green backdrop, nothing on the desktop at all, right click to launch apps from a modified menu, and it’s very calming and uncluttered. I’m thinking about flwm…
One size doesn’t fit all: that’s the answer.
Let’s finish the transition to DeviceKit and the similar transitions, so we can remove HAL and the other things that have a replacement. That would clean things up considerably, I’m guessing.
OK, GNOME is getting bigger and bigger, and the same applies to Debian. But has anyone heard of Gentoo? USE=”-cups” and you won’t depend on CUPS. Avahi? USE=”-avahi” and it’s done. And one can apply this rule to every optional dependency that Debian installs by default.
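Roughly what that looks like in practice (the package atom below is only an example):

    # /etc/portage/make.conf: system-wide USE flags
    USE="-cups -avahi -bluetooth"

    # /etc/portage/package.use: per-package override
    media-gfx/inkscape -cups

    # rebuild whatever is affected by the changed flags
    emerge --ask --update --deep --newuse world

The point is that the decision is made at build time, so the unused library never lands on disk at all.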
Oh great, one month has just passed while recompiling everything… Ever heard of dlopen()? Most free software developers haven’t heard of it either…
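To illustrate what dlopen() buys you (only a sketch; the library and function names here are hypothetical, while dlopen()/dlsym()/dlclose() are the real POSIX calls): instead of linking against an optional backend at build time, a program can probe for it at runtime and simply switch the feature off if it is absent, so the package manager never has to drag the backend in.

    /* optional.c: optional dependency via dlopen(); build with: cc optional.c -ldl */
    #include <dlfcn.h>
    #include <stdio.h>

    int main(void)
    {
        /* probe for a hypothetical optional spell-checking backend */
        void *lib = dlopen("libspell.so.1", RTLD_LAZY);
        if (!lib) {
            /* backend absent: disable the feature instead of refusing to run */
            fprintf(stderr, "spell checking disabled: %s\n", dlerror());
            return 0;
        }
        /* resolve a (hypothetical) entry point by name; the cast is the
           usual POSIX idiom for turning void * into a function pointer */
        int (*check)(const char *) = (int (*)(const char *))dlsym(lib, "spell_check");
        if (check)
            printf("result: %d\n", check("dependancy"));
        dlclose(lib);
        return 0;
    }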
C’mon, any modern machine can build a full Gentoo system in a few hours, updating/maintaining it doesn’t cost that much time, and you can cherry-pick the updates you want. Try to install GNOME on Debian or Ubuntu and you’ll get Dia, or even gnome-bluetooth with the whole BlueZ stack.
I don’t want a “modern” machine, and I don’t want to have to recompile anything. I want to actually use the computer to do things, FAST! and securely, not to do system administration. I want the computer to make my life easier, not the opposite. GNU/Linux and Windows both FAIL at this (I don’t like to say Linux, because the kernel itself is really good, but the userland is junk).
If you want things done FAST, your choice would be something like DOS.
It has no desktop, consists of just a few kB of files, starts up in a few seconds and gets out of the way whenever you start an application, so the application itself is as fast as it can possibly be on that hardware.
If you want more FEATURES from your operating system, you will have to accept some compromise regarding FAST.
Finding the balance between FEATURES and FAST that is right for you is your task. No KDE, Gnome or XFCE developer can help you there.
A friend of mine did exactly that. He compiled a kernel with all his hardware drivers built statically into the kernel instead of as modules. He was even able to switch off module support entirely, which makes the thing slightly more secure. This gave him startup times of 3 seconds (from bootloader to login), because he did not run through a whole bunch of init scripts.
There was no automounting or other automagical stuff working on his machine. He then also compiled only the things he needed himself. It was completely self-built from scratch. Beautiful, but lots of work.
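For the record, that setup corresponds to a kernel .config along these lines (the driver options are just examples; you would enable the ones matching your own hardware):

    # Loadable module support switched off entirely
    # CONFIG_MODULES is not set

    # Required drivers compiled statically (=y) instead of as modules (=m)
    CONFIG_EXT4_FS=y
    CONFIG_E1000E=y

With no module loader compiled in, nothing can be inserted into the running kernel, which is where the small security gain comes from.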
What this shows is that things are not separated out enough. He points this out in the article.
I agree. Each dependency may bring in a bit more than used, and that just cascades. The way to fight this is the same as it has always been.
“Do one thing, do it well.”
If your application adheres to that, and the libs it uses do too, the dependency tree shouldn’t cascade. If libs are included to add functionality most users won’t use, that starts a cascade effect. If the libs themselves also do this, you end up with tools that have (indirect) dependencies completely unrelated to the tool.
All OSes and software suffer from this; it’s unavoidable and part of the nature of the beast. But in the open source world this isn’t only more visible (no black boxes) but also controllable: we have all of the source.
I don’t want to stop the GNOME desktop being featureful and easy to use, but I want to be able to strip it down, which you can’t do with cascading interdependencies.
Of course there is a tradeoff between the work required to do this and the gain from doing so. The hope is that supporting resource-starved environments (netbooks, mobiles, etc.) will be a slimming force on bloat like this.
Those who say bloat doesn’t matter need to get a clue. If you ignore it, it gets to a point where you can’t (Vista, cough), and you have to slim it down (Win7, cough), especially if the hardware isn’t what you thought it would be (netbooks, cough).
Nice timing; he probably knows that a lot of those things are being phased out in the coming 2.28 and 3.0 series of GNOME. Yes, CORBA and Bonobo are still in Evolution: it was built around them, they need to go, and it is a huge task. But D-Bus will replace them and will stay. Same with gvfs, which will be phased out, but things will depend on gio.
And that is fine. I want a desktop which recognizes my external storage on plugin, which will update a folder view when something is created, and which has a uniform printing system.
GNOME is in transition, but that doesn’t mean it will become smaller once the old things are finally gone; for everything it removes, new things will come, and most of them I will use with much joy, like Clutter, Seed and so on. I think in the end it will be a great toolbox for making great applications.
“I want a desktop which recognizes my external storage on plugin…”
You mean like: “*each* desktop should recognize my external storage”, or “I want to see files on my external storage on plugin.”?
End users are not programmers. There is an important difference between business logic and programming logic. The end user should say what he needs; the engineer will tell him how it works.
Fluxbox for me, but for the last couple of years I’ve found myself installing everything with the --without-recommends switch; Debian is becoming a joke.
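(For reference, you can make that permanent rather than typing the switch every time; the file name below is arbitrary, but APT::Install-Recommends is the real apt option, and apt-get install --no-install-recommends <package> is the one-off equivalent:)

    // /etc/apt/apt.conf.d/99norecommends
    APT::Install-Recommends "false";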
Why do Inkscape and just about every other package require Avahi? And HAL and PolicyKit in squeeze are just a freaking nightmare.
I don’t really like distro hopping, but the time has come, after 7 long years, to move on from Debian, and as far away from Ubuntu as possible! Arch perhaps?
And isn’t it about time GRUB stopped spraying my screen with useless gibberish every time I boot? I mean, come on, it’s 2010 already!
Debian’s packages + deps have got bigger, so GNOME is too big? Has anyone definitively proved that GNOME is to blame in the first place, as opposed to changing packaging policies?
Debian just tries to package every possible feature, which is not a bad thing in itself. The problem is that too many GNOME people (or free-software people, for that matter) seem to be oblivious to modularity (plug-ins and dlopen()). Then there’s also all that shit that pulls in Mono, Python and whatever is the latest popular toy language these days.
Actually, I should have included in my original comment some of the other possibilities for “blame” in this instance…
* Gnome is getting bigger?
* Gnome’s dependencies are getting bigger – not Gnome’s fault but should they replace them?
* Gnome is acquiring more dependencies?
* Debian’s Gnome is acquiring more package functionality?
* The packages Debian’s Gnome depends on have got larger or acquired more deps themselves?
* The underlying software (GNOME’s deps, the kernel, etc.) requires more complexity to interface with, forcing the code to get bigger?
* The underlying hardware (given hotplug, buses that may take a while to enumerate, variety of devices to support) is requiring more complexity in Gnome and deps to support, thus forcing larger code?
* The compilers Debian is using are generating larger code? Has this increased performance? Is the compiler being used to best effect? Is there a compiler bug?
* Perhaps more dependencies are added, but are they also used for other things on a typical system? For instance, are libraries pulled in that would now often be there anyhow as part of the base system? Scripting languages people would want to install anyhow, or that other standard Debian packages would already have pulled in?
The numbers are interesting on their own but they are not very informative without us knowing the nature of the change, what caused the change and what the alternatives are. It would also be useful to know if this is the typical *increase* in space usage if I install Gnome, or if most of the deps would already be there in my existing install – this has a pretty massive effect on “is software X going to use all my disk space”.
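One quick way to probe that last question on a given machine is a simulated install (apt-get’s --simulate flag; the metapackage name is whatever you care about):

    apt-get --simulate install gnome

The output lists exactly which packages would be newly pulled in on top of the existing system, which is the part of the total that actually costs you disk space.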
I am very new to Linux, and have been testing a few distros with virtual machine. I have noticed that people ask on forums, “what is the best distro?”
Well, yes, that is a hard question and can’t really be answered. But maybe the Linux community could focus more on what the “worst” desktop Linux distros are. That can’t really be done either, as it is all subjective.
In fact, without a site like DistroWatch the Linux community would be even more confused.
The office/multimedia applications that come with most of the Linux distributions I have tried should be considered mostly bloat, as they suck (AbiWord, KOffice, Rhythmbox). Some don’t come with Firefox, but with some “relatively” unknown browser. However, applications improve over time (well, some degrade) and maybe should be offered a chance. (I don’t agree with this way of thinking, but it seems to engulf the entire open source movement: “Just give xyz a chance”. Err, no thank you.)
Arch Linux has been the only distro I have tried that was not filled with crap (unless I put it there), although it was more difficult to install initially. I like Mint too, for the easy aspect (yes, it is bloated, but it works right out of the box and it is good to have as a live CD… some distros should only be live CDs, lol).
KDE4: I don’t like how I am expected to select my applications (I prefer how XFCE does it). Perhaps I could change a setting.
GNOME: Uses a lot of RAM for me, but less than KDE.
XFCE: Fewer crappy default apps. Accessing the application I want is more logical and convenient (right click). Noticeably less memory required.
You’re wrong on some points. Even Arch Linux, which is basically a great distro, is filled with crap, but it’s the kind of crap that forces you to recompile the whole kernel to remove udev. CRUX Linux is the only one today that can actually work full-time without this internal bloat.
The overall situation of FORCING people to use some crappy tech that lives in the kernel is BAD. Leave it in userland, leave it as a module, and everything will be OK. Just don’t bloat the kernel, for @#$@ sake…!
Why this obsession with size? If GNOME needed 200 GB of HDD and 8 GB of RAM but made my breakfast, I wouldn’t complain.
It’s the ratio of size to usability (or end-user-oriented features) that matters, and if it takes lots of RAM and disk space to do it, so be it.