A new version of the real-time Linux scheduler called SCHED_DEADLINE has been released on the Linux Kernel Mailing List. For people who missed previous submissions, it consists of a new deadline-based CPU scheduler for the Linux kernel with bandwidth isolation (resource reservation) capabilities. It supports global/clustered multiprocessor scheduling through dynamic task migrations. This new version takes into account previous comments/suggestions and is aligned to the latest mainline kernel. A video about SCHED_DEADLINE is also available on YouTube.
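For a flavour of the model: each SCHED_DEADLINE task is given a CPU budget (runtime) every period, and the scheduler guarantees that bandwidth. A hedged sketch of such a reservation follows; note that the `chrt --deadline` interface shown is the one that later landed in mainline util-linux (2.27+), an assumption relative to this particular patch set.

```shell
# A deadline reservation: 10 ms of CPU time guaranteed every 30 ms
# (the values are in nanoseconds).
runtime=10000000    # worst-case execution budget, ns
period=30000000     # activation period (= relative deadline here), ns
echo "bandwidth: $((100 * runtime / period))% of one CPU"
# Attaching the reservation to a task needs root, a SCHED_DEADLINE
# kernel and util-linux >= 2.27, so it is shown but not executed:
#   chrt --deadline --sched-runtime $runtime \
#        --sched-deadline $period --sched-period $period 0 yes
```

The bandwidth isolation mentioned above means the kernel performs an admission test: it refuses reservations whose summed bandwidth exceeds what the CPUs can supply.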
From the YouTube description: Unfortunately, however, the movie has been filmed in sequence, and then assembled to let people understand that all these activities were concurrent (we were not able of making three video simultaneously).
Well, that undermines the demo…
ericxjo,
Well, it probably boiled down to something as simple as them not having three cameras. I don’t have any trouble believing it could do all three at the same time. Although they should have panned from one to the next.
Even a non-realtime kernel should have been able to handle those three tasks simultaneously without any trouble at all on an old 486. I’d be more impressed if the tasks demanded much harder real-time constraints. And then executing them while compiling Linux and browsing with Firefox!
Hi all.
I’ve been the supervisor of the whole project since the first submission (which was called SCHED_EDF). If you check, I have always posted news on OSAlert and Slashdot whenever we released a new version.
As I have written in the description of the project on YouTube, the project has been realized as part of a 3M-euro project called ACTORS, financed by the European Commission. When you get funded by the EU, you have to pass annual reviews with Commission members. In particular, the movie was filmed near the final review meeting, which 3 EU reviewers attended. These 3 members (selected by the EU) had the chance to see the full system working. If you can’t trust the movie, trust at least the EU Commission.
Unfortunately, we started the ball-and-beams first, and then we started the robotic arm. For this reason, in the ball-and-beams shot you can notice the arm stopped in the background: we simply had not started it yet…
The project started in 2008, and we had several submissions. During these years, the code has been reviewed by the Linux kernel community several times, and it is getting ready for inclusion in mainline.
Please, take a look at this:
http://jeelabs.org/2012/09/24/linux-real-time-hickups/
hello all,
it seems to me that this sooper-dooper “sched_deadline” scheduler simply is partition scheduling which was made mainstream by the best control program ( os ) presently qnx 6.
even though i use mint linux executing off a usb drive… linux is simply a badly written program with big claims. it is too complicated ( libraries, many commands, slow, crashy… ).
lie-nux…
And because it’s such a useless, big lie that it took over most of the computing world. You’re a funny guy.
hello nuxro,
if only numbers were to be taken as credible, even capitalism is good and india is a super-power.
believable ??
——————-
i know about computing… i built the first control program ( os ) of south asia, in 2002. it was based on microkernel architecture but ran in x86 “real mode”. just a demonstrator. had message passing and unix-like “signals”.
i called it “dragunovos”… from the soviet sniper rifle dragonov.
For some reason I do not believe a thing you’re saying.
A badly written program with many commands and libraries? If you were an OS-developer you’d know the difference between a kernel and userland.
you have answered your own doubts. the good os is simple in architecture, which makes it reliable and easy to add to ( modules ) or remove.
how much ever one “hardens” the linux kernel, how much ever one shifts whatever to “user-land”… by architecture linux is not microkernel. so this talk about “user-land” is shouting out that in linux there is no natural separation between kernel and “what ever one might call it”. it is the entire architecture of linux at fault… or is there an architecture at all ??
i have 512 megabyte in my desktop ( mint linux ). i have to restart most days. it just hangs. i only use mint linux because the windows xp machine was not allowing me to access the win-xp boot partition.
don’t forget the aspect of these complicated “dependencies” when one has to “install” some program.
what happened to the unix method of copy some program to a directory and just use it ??
You’re not making any sense here.
No, a good OS is one that fits its intended purpose. There is no single definition of a “good os.”
And? No one claimed it was.
Oh, really? Why are there so many different operating systems which use Linux-kernel but an entirely different userland? Oh, that’s right: you have no idea what you’re talking about.
Ahahaha. Fail. Next time learn what you’re talking about.
sorry it is you not making sense.
you said…
———–> “Oh, really? Why are there so many different operating systems which use Linux-kernel”
firstly such programs cannot be anything other than being called “distros”.
the second part after that…
———–> “but an entirely different userland”
i did not understand. is there some great development in the linux parallel universe that i don’t know about. if one uses the “system v ipc” in linux, that would not be linux at all, yes ??
so you are now questioning the existence of qnx.
when you said fail… please elaborate.
Android is a good example of a Linux-kernel with non-GNU userland. There are also plenty of different kinds of embedded systems that use Linux-kernel without GNU-userland, like e.g. HDTVs and several BluRay-players.
A package manager has nothing to do with the application itself. You CAN just copy an application and its dependencies to another directory and run it from there just fine. You’ve clearly never heard of shared libraries and the like, and you just expect all applications to be statically compiled, and that says enough about your level of technical abilities.
———–> “package manager”.
1. does package manager never say “this or that .so file is not present”.
2. microkernel does not remove shared libs, but it does not make the libs to become a headache for the downloader.
3. cannot a “linux fan-boy system” be used at all without package manager. ??
———–> “and that says enough about your level of technical abilities.”
will you agree to contribute to a clock-less multi-core microprocessor and control program development if you are sure about your abilities. i will tell you the site to visit and see the documents.
You not having set LD_LIBRARY_PATH environment variable properly does. Read http://tldp.org/HOWTO/Program-Library-HOWTO/shared-libraries.html
Basically, the LD_LIBRARY_PATH environment variable must include the path to the location where the libraries are. This is a security feature. You, on the other hand, seem to expect the system to just automatically use whatever files that happen to be in the same directory as the executable, something that is fine in small, embedded systems, but Linux is meant for multi-user systems and on those it’s a bad idea to do that.
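A concrete, self-contained sketch of that (the directory name is made up for illustration):

```shell
# Make a private library directory and put it ahead of the system
# search path:
mkdir -p /tmp/demo/lib
export LD_LIBRARY_PATH=/tmp/demo/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
# The dynamic linker now looks in /tmp/demo/lib first when resolving
# a shared object; the first search-path entry is:
echo "${LD_LIBRARY_PATH%%:*}"    # prints /tmp/demo/lib
```

Any program started from this shell will then resolve shared objects from the private directory before the standard locations, which is exactly how you side-load a library without touching the system.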
If you do not have the libraries installed at all, well, that’s your own issue. Package managers exist exactly for the reason that people don’t have to hunt for the dependencies themselves. If you insist on not using a package manager then you must manually hunt all the dependencies and install them.
Yes, it can. You just have to install all the dependencies by hand. See e.g. LSB http://en.wikipedia.org/wiki/Linux_Standard_Base
It’s not worth it, though. Why do you even want to not have a package manager? Is there some good, specific reason for that, or is it just that you do not understand how package managers and shared libraries work under Linux?
No, thank you. I have no need or interest in such.
a “package manager” denotes a hierarchical file system.
if you take a walk and re-consider linux again… boring, complicated, un-reliable, pretentious.
try running lie-nux on your next airplane. good luck not “crashing”.
That’s your opinion.
I have several Linux-servers running 24/7, no crashes.
ok. i am signing off.
Which is more than I can say for Win-sux.
> a “package manager” denotes a hierarchical file system.
Nope, a package manager doesn’t denote that. It’s a way of resolving package dependencies when one or more parent packages are installed, removed or updated. It also handles one or more software repositories and can be used for entire OS upgrades. In other words, far superior to Windows Update or Apple’s Software Update, both of which only handle MS or Apple products respectively and aren’t used for OS updates either.
The fact that almost every Linux system has its partitions formatted with a hierarchical file system that packages sit on top of is a full two levels of abstraction away from a package manager.
> if you take a walk and re-consider linux again… boring, complicated, un-reliable, pretentious.
Not boring at all – particularly with massive choices you get for your desktop environment (instead of one for Mac OS X or maybe two if you’re suffering Windows 8) and how you can customise it, which is usually where Linux desktops shine (think Compiz with all its special effects). I bet most Windows and Mac desktops look identically boring, with just the background colour/image changed and a stupidly large number of document icons on the Desktop.
Now if by “boring” you mean not many commercial games for it, then I might agree, but there is a Linux client for Steam in beta right now, plus more and more games are working under WINE, never mind stuff like indie game houses supporting Linux (Humble Indie Bundle anyone?).
As for “complicated”, you can pretty much do most things by the GUI in Linux now and with things like Ubuntu’s Software Centre, installing new packages is actually easier than on Windows (I hate Windows’ myriad of package installers and updaters – it’s nasty when every app updates in radically different ways).
Linux is one of the most reliable OSes on the planet – only some of the BSD variants (UNIX again!) can beat it for uptimes. It’s pretty rare to get a whole kernel crash nowadays that isn’t triggered by some sort of hardware fault. It’s why Linux is the one OS that spans everything from tiny embedded systems (watches, phones etc.), through larger consumer products (hard disk recorders, TVs, wireless routers), right through to servers and mainframes (the most popular OS on the Top 500 supercomputers? Linux).
I’m not sure how “pretentious” is an accusation you can level at Linux – I think Apple have that moniker well and truly sewn up. I’ve found Mac OS X to be a rather inferior UNIX to Linux with a shiny (pretentious!) desktop layer on top that is no more functional than the average Linux desktop.
> try running lie-nux on your next airplane. good luck not “crashing”.
I believe many airlines use Linux for their in-flight entertainment systems, but I couldn’t speak about the more critical flight systems. I wouldn’t be surprised if some of those run a hardened/real-time Linux variant to be honest.
You’re picking nice features from a huge range of Linux distributions.
Yes, of course Linux can run on much more limited hardware than, let’s say, Windows and it has a number of nice DE/GUIs, but can it run those graphical wonders on limited hardware? IIRC you could run Linux on a 386 with 1 MB, but that rules out KDE, GNOME, XFCE. The Ubuntu tool you mentioned may be very nifty, but what use is it for Slackware users? Or anyone not using a GUI.
Package managers may be nice, but not everything is in a repository and even when everything is you can still break the system. Windows and OS X users can’t even spell the word ‘dependency’. For some reason these “inferior” systems don’t have this problem.
Sure, Linux can make a great server operating system or a mediocre desktop system that’s great for the more technical minded, but often when the positive points of Linux are presented they tend to be picked from various distributions.
Of course Windows and OS X have dependencies. What the frak do you think DirectX, .NET, JRE and OpenGL are? To name but a small obvious few.
And only last week I ran into an issue where one client couldn’t install a Blackberry manager on his Mac because he never upgraded the OS from Tiger. That’s one hell of a massive dependency right there.
Yeah you could argue that only idiots would try to run Java applications without installing the JRE, or expect all the latest Mac applications to run on a 7-year-old version of OS X, but the same could be said for Linux users who choose to run a distro that doesn’t properly manage dependencies when they’re only a novice. The *buntus, however, rarely have such issues.
So while I do agree that the Linux shared object method kind of sucks at times, these days it’s pretty well managed so conflicts only happen about as often as they do on other platforms.
Probably as often as finding users who still use OS X Tiger (and use a BlackBerry) or Windows installs without Direct X or .NET I guess.
This in no way compares to the stuff Debian wants to download if you install something. On a server, without GUI, it downloaded X11 + its dependencies because I wanted to install some graphic conversion utility (CLI based!). Removing that tool removes the tool, but not X11. With apt-get dist-upgrade I managed to create non-working or unstable systems. Or wicked situations where X can’t be installed, because Y needs to be installed first, but Z depends on it and Z doesn’t exist.
Windows and OS X don’t have package systems that involve dependencies, because it’s no big deal.
That’s hardly a fair example given that Windows Server and OS X Server both come with a GUI preinstalled. Plus even with X installed, Debian has a lower footprint. So if you want to bitch about having to install a GUI on Debian, then at least appreciate the irony that you’re comparing it to two other OSes where a GUI is mandatory.
With the greatest of respect, this boils back to my comment about people using a more advanced distro before they’re ready for it. If you want to be spoon-fed, then use an appropriately dumbed down distro and leave the enterprise solutions to the sys admins.
To be honest I get sick and tired of hearing people say Linux needs to be more like X, Y or Z. The reality is, if I wanted something more like Windows, then I’d be running Windows. The reason I choose to run Linux is because I don’t want to be spoon-fed. I want the power to build the system I want, even if that means I could potentially fuck it up completely. And if I do need something quick and painless, then I’m more than happy to throw on a copy of Ubuntu (I’ve done this in the past when I just needed a working laptop that evening).
So if you don’t like the way how your enterprise distro let you hang yourself, then I think the bigger issue is why you considered an enterprise level distro to be suitable for your specific requirements.
You’ve never used Windows update then
I only complain that some tools require a lot of dependencies, a lot in number and in bytes. Some of which aren’t always logical at first (or second) sight. Sometimes you want to check something out, a utility of 23 KB, then 31 MB gets downloaded, the utility doesn’t work, you deinstall it and free up 23 KB.
But at least we can agree that the GUI isn’t really a dependency for Windows and OS X, as they already have one.
I’m not complaining that Linux needs to be more like Windows or OS X, I’m fine with it like it is. My only point is that “dependencies” are much more of an issue than they are on Windows and OS X. No doubt you can find issues with Windows and OS X that aren’t so on Linux.
When using Windows or OS X I have no package manager that downloads dependencies, nor do I have to do this manually. Apparently I was using a new iMac for some time before finding out it didn’t have the JRE. I only noticed because someone wanted to show a webpage with a rotating mobile phone. OS X said I didn’t have Java and asked if I wanted to install it; I said yes and saw a rotating phone.
Well, I have, and the only thing I don’t like about it is that on a new install it says that there are X updates and after installing them and rebooting it turns out there are more updates and after another reboot it again offers some updates.
Or sometimes it takes a long time searching for updates only to prompt me that I need to update Windows Update first.
Most Windows software just installs, and if it needs something extra it will download/install it automatically.
I’ve not used Debian extensively, so I can’t comment on apt-get specifically, but other distros do remove dependancies like you’d prefer.
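For what it’s worth, apt-get itself can sweep orphaned dependencies. A hedged sketch follows: it assumes apt-get’s autoremove subcommand, and -s keeps it a side-effect-free dry run.

```shell
# Debian/Ubuntu track which packages were installed automatically as
# dependencies (the ones aptitude flags with {a}); 'autoremove' sweeps
# out those that nothing depends on any more.  -s simulates only, and
# the sketch degrades gracefully where apt-get is absent:
if apt-get -s autoremove >/dev/null 2>&1; then
    status=simulated
else
    status=unavailable
fi
echo "dry-run autoremove: $status"
```

In a real session you would run `apt-get remove <tool>` followed by `apt-get autoremove` (as root, without -s) to drop both the tool and its now-orphaned dependencies.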
However you keep making comparisons to Windows, yet Windows doesn’t do any dependency management whatsoever. If an application says it requires a specific framework, then the user is required to download and install that themselves and subsequently uninstall it themselves when they’re done.
Sorry but that comment is just retarded. You’re bitching that Debian asked you to install a GUI then complimenting Windows and OS X for having one pre-installed. That kind of backwards logic helps no one.
Well yeah, like I said before, shared objects in Linux suck at times. I’m not going to dispute that. But the examples you’ve given have been pretty poor:
* You complain about having to install a GUI on Linux despite it being mandatory on Windows and OS X.
* You complain about the size of dependencies on Debian, despite it still having a significantly lower footprint than Windows and OS X
* You complain about how you’re not spoon-fed, despite running an enterprise distro instead of a user-centric one
* and you comment about how good OS X is for auto-downloading a Java plug-in, yet bitch about Debian for doing the same thing with its dependencies.
Nearly every single example you’ve given has bordered on hypocrisy. Which, sadly, undermines the whole point you’re trying to raise. But I suspect (and please feel free to shoot me down if I’m wrong here) that’s because you haven’t spent a massive amount of time on Linux or Windows platforms, so rather than offering a solid technical understanding, you’re having to explain your points with anecdotal evidence?
If that’s the case then that’s fair enough and I’ll happily chat in more detail about how Linux shared libraries differ from Windows shared libraries and why that makes Windows more user friendly. But just be aware that the evidence you’ve cited in this thread doesn’t really support the points you’re trying to raise.
> Well, I have and the only thing I don’t like about it that on a new install it says that there are X updates and after installing them and rebooting it turns out there are more updates and after another reboot it again offers some updates.
Yup, that’s because Windows can’t do any dependency resolution so it has to do each update in sequence rather than just nabbing all the latest patches.
We may never agree on this, but I do genuinely think installing applications on Linux is easier. If it wasn’t, Apple and MS wouldn’t be mimicking Linux repositories with their own centralised software repositories.
Just like OS X, but the point is that the need for installing dependencies is very rare and if it occurs the installer will do it for you. Hence there is no need for a package system with dependency checking. On Linux this need is much greater, explaining the number of package systems.
Perhaps it’s better I leave it at that, because you keep getting the impression I’m complaining about Linux while glorifying Windows and OS X: I’m not. I happily use Linux on servers and prefer it to Windows and OS X. On the desktop I prefer OS X. The 2nd best choice would be Windows 7, but I’d still pick Linux.
But that’s exactly what happens in Linux as well!
I repeat, Windows Update. I’ve been polite but I sense we’re now going round in circles.
No, I’m getting the impression that you don’t really know what you’re talking about, so you’re grasping onto some anecdotal evidence as your reasoning. This is going to be a lengthy post, but I’ll explain things properly.
Firstly, Windows /DOES/ have dependency issues, but they’re hidden. However, to explain why, I’ll first discuss the way Windows dependencies work.
In Linux, you can have more than one version of a shared object running concurrently, but that requires manual configuration. The reason is that all the shared libraries are stored by version number (eg ssl.so.0.9.5) then symlinked to the object’s name (eg ssl.so). Applications are then run against the symlinks. So applications expect to run against the latest versions of all the dependencies installed because that’s typically what’s symlinked – which then causes issues if the application is either newer than the dependencies (also an issue on OS X and Windows – obviously) or if the application is older than the dependencies. This can be worked around with additional directories with sets of symlinks to earlier libraries and then adding those directories to the environment variables (LD_LIBRARY_PATH IIRC). However that’s way more technical than any desktop user would be expected to go, so package managers instead keep a central repository of all applications and ensure that everything is compiled against the latest libraries to avoid any conflicts.
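The symlink scheme described above can be reproduced by hand. A hedged sketch, with an invented library name (real systems use names like libssl.so.1.0.0):

```shell
# A versioned library file plus the two conventional symlinks.
mkdir -p /tmp/libdemo
touch /tmp/libdemo/libfoo.so.1.2.3               # the actual library file
ln -sf libfoo.so.1.2.3 /tmp/libdemo/libfoo.so.1  # run-time (soname) link
ln -sf libfoo.so.1     /tmp/libdemo/libfoo.so    # build-time linker link
ls -l /tmp/libdemo/libfoo.so*                    # shows the symlink chain
```

Upgrading then amounts to dropping in libfoo.so.1.2.4 and repointing the links, which is exactly the step a package manager automates.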
Windows, on the other hand, has a system where applications are compiled against version numbers rather than library names. This means that you end up with a significantly bloated install as instead of one copy for each library, you need several (eg for .NET version 3, you also need every prior version installed as well). This is great for usability though as it means that:
a) previous dependencies can be shipped with the OS as you know they’ll be required
b) users need only version 2 of .NET in order to run any .NET v2 application.
However, and to reiterate what I said above, that also means that the dependency disk footprint is massively greater than on Linux. eg a .NET v3 application will depend upon version 3 and every prior version as well. Because without versions 1 and 2, version 3 cannot install. So a “standalone” .NET 3 application could quickly run into a dependency footprint of several hundred megabytes instead of a few dozen megs on Linux. However on Linux, it may be a much more significant job getting an older application to run outside of installing it from the repos, but the dependency footprint would be significantly lower.
To further confuse issues, Windows does also have some statically named dependencies as well (typically core Windows libraries). And that means that any application that depends upon newer versions of those libraries will force the users into installing a Windows Service Pack – another few hundred megabytes of dependencies on top of .NET et al.
So yes, I do agree with you that the Linux shared object method is a pain at times, but every single example you’ve given thus far has been a grossly inaccurate representation of why Linux shared objects cause problems. The actual reason Linux shared objects suck is that, by default, Linux doesn’t run multiple versions of libraries concurrently and thus provides no automated / spoon-feeding tools for users to perform such tasks. So it’s down to a seasoned sysadmin to manually side-load libraries and create sandboxed shells for them to run from. But quite honestly, distros like Ubuntu do a decent job of eliminating the need to have multiple concurrent libraries anyway. It’s more of an issue when running enterprise level distros on servers (which is why you’ve run into problems), but in those cases sysadmins are expected to at least understand the basics of what I’ve just explained.
Which goes back to my earlier point; if you want to be spoon-fed then don’t bloody run an enterprise level distro. They’re powerful OSs but designed to be handled by experienced users rather than noobs who read that Linux is good on servers and so then jump in with both feet. Much like how OS X is a great desktop version of Unix, but that doesn’t mean that your average OS X user would even have the slightest idea how to set up a Solaris server.
Choosing the right Linux/Unix for a job is as much choosing the right *nix for the user as it is choosing the right OS for the servers role.
So yeah, you were kind of right that Linux shared objects can be an utter bind at times. But not for any of the reasons you’d given.
Yes, if the product you are installing is in the defined repositories you have a good chance it will do that.
Anyway, if you install stuff on Linux you encounter dependencies. I don’t mean this as a bad thing, it’s just something that pops into view.
When I install stuff on Windows or OS X it’s not something that pops into view or is any issue. Download a random piece of Windows software and chances are very low it will fail because you are missing something.
Windows Update updates Windows and some Microsoft products, it’s not comparable to apt, yum, rpm or any other package manager.
I’ve been working with Linux since 1998 or so. At work I have a number of Linux servers, physical and virtual. I’m not looking for a spoon feeding Linux distribution to use on a desktop. It’s my job to install Linux and Windows servers and to maintain them. That’s how I got the money to buy all that Apple stuff. I’ve even got a fluffy Tux and a Geeko.
Sure I’ve had my share of ‘challenges’ with dependencies, but it wouldn’t be a reason I’d use to tell someone about the downsides of Linux.
My technical Linux knowledge exceeds my OS X one by far, because I have no idea how OS X works as I never had to bother with under-the-hood stuff. That’s why I switched from Linux to OS X, because I was wasting too much time getting things to work and jumping distributions to find something that did get its act together.
The only point I’m trying to make is that dependencies are a far bigger issue on Linux systems than they are on Windows and OS X ones for users.
If a Linux user starts talking about some piece of software and you ask about its dependencies he may or may not know them, but he would know what you mean. Ask a Windows or OS X user about them and they wouldn’t know what you are talking about.
I’m not saying dependencies are a bad or nasty thing, or that Windows/OS X are better. From a standpoint of having the most control the Linux way is probably much better, but that’s not my point. And that point is: Windows and OS X users don’t know about dependencies, because they are never an issue.
This is not directly a reply to you, but this is a good place to explain this to the other people here:
Under Windows and OSX there is already a common base to target so it’s easy enough to target that. Under Linux there is no such common base, so often you do end up needing to install some extra packages.
Also, under Windows and OSX there is no system service that would handle installation of external dependencies, so any application you install under those operating systems has to package all of its dependencies with it. Compare this to Linux where there is such a system service and therefore applications do not need to package all of the dependencies along — this results in much less system bloat and much better sharing of resources. If you have a Windows installation, just go and check the size of your WinSxS folder…
It may seem complicated and annoying, but there are clear reasons for why the system is implemented like this. Sure, it could be improved to automatically remove unneeded dependencies if one removes a package, and to handle multiple versions of the same package with differing dependencies — like e.g. package Y could have two different versions, one with support for X11 and one without, and it’d install the one with less dependencies by default and switch over to the one with X11 as dependency if the user installs X11 at a later date. Alas, no one seems to care enough about this.
But you’re not supposed to install stuff outside of the repos in Linux. Do you even have the slightest idea how to use Linux? Either way, that’s an insanely weak point to make.
It’s not the same, but it is comparable. They both manage security patches and API dependencies.
Yet you’ve failed to grasp every single point about Windows and Linux raised?
That I can sympathise with
And I agreed with you on that. However every other point you’ve made to back up that assertion has been wrong, often bordering on complete ignorance.
I have no idea what software has what dependencies, and I run one of the more hands-on distros.
And to say Windows users aren’t aware of .NET, DirectX and such like is just laughable.
I certainly know how to use Linux, that’s why I can install stuff that isn’t in the default repos. I can change them, add to it, convert RPMs, build from source. Isn’t that what Linux is all about, the freedom to do stuff without restrictions?
There is software about that isn’t in any repo; how can you install that if you aren’t allowed to?
Slackware doesn’t even use repos, not counting 3rd party slapt-get. Does that mean you can’t install anything on it?
They can both install stuff, yes, but that’s where WU stops. It doesn’t de-install anything, doesn’t install anything else, and doesn’t do any checks when you run a software installer. It’s just a Microsoft software updater.
I do grasp, but I made one single point: Windows and OS X users don’t know about dependencies, because they aren’t issues on those systems. .NET and DirectX perhaps qualify as such, but install Windows 7 and you have them. Nor does WU even come near APT or RPM in possibilities, software, management, number of software packages.
I know very well how it all works, but I’m not going there. My point is very simple. You’re trying to prove Windows does have dependencies. Yes, a few (that are already installed), but this doesn’t compare at all to Linux (where they aren’t all already installed).
But you do know they are there and needed. Ask a Windows dude about dependencies and they’ll look at you funny. Just download any piece of Windows freeware and see how many dependencies it will try to install, probably zero.
I didn’t say that, I said “dependencies”. .NET and DirectX are known amongst Windows users, but they are installed by default. Their total number is 2. Install a default Debian and add some of your favorite stuff, and the number of dependencies that get included is far, far greater.
Let’s install nmap:
(ceres) 2 7:39PM /root> aptitude install nmap
The following NEW packages will be installed:
liblua5.1-0{a} libpcap0.8{a} nmap
Or SLRN:
(ceres) 3 7:40PM /root> aptitude install slrn
The following NEW packages will be installed:
libcanlock2{a} libuu0{a} slrn
Ghostscript:
(ceres) 4 7:40PM /root> aptitude install ghostscript
The following NEW packages will be installed:
fontconfig-config{a} ghostscript gsfonts{a} libcupsimage2{a}
libfontconfig1{a} libgs8{a} libjasper1{a} libjbig2dec0{a} libjpeg62{a}
libpaper-utils{a} libpaper1{a} libpng12-0{a} libtiff4{a}
ttf-dejavu-core{a}
(and that’s just the deps they are missing, they require more)
Download a network scanner or Usenet reader for Windows and it will come with a single installer, OS X just comes with a single icon you drag ‘n’ drop. All the dependencies the Windows application needs are included in the installer, the OS X ones are included within the .app. The user won’t be aware of them. Hence my point: Windows and OS X users don’t know about them.
And even better: they are allowed to install stuff that isn’t in a repo!
Again: I’m not criticizing Linux, package managers or dependencies. It’s just something that comes with the territory. Drive letters are something that comes with Windows and Apple users have the Dock. Hate it or love it, it comes with the product. When you use Linux, any Linux, you’ll soon learn about dependencies.
MOS6510,
You guys are both right and are fussing over details
Linux users who stick to the repos generally have dependencies managed for them. Sometimes we want to or have to install outside of the repositories. External packages can be trivial to handle when they’re compatible with managed dependencies; other times they can be a nightmare.
On windows, the convention is for the installer to include/install all binaries and dependencies, so duplication is rampant, but it usually works.
It’s not always the case though, this past month I couldn’t get the new .net version of xlite running on windows 7 on someone’s machine, so I reverted to the non .net version. Also, skype 5 has broken dependencies on windows xp, although that’s likely deliberate.
Aye, keeping the OSAlert tradition alive!
And I’m a little more right anyway. :-p
This is the problem though. His conclusion was right, he just reached that conclusion with completely incorrect details.
Which is really irritating, as I want to talk about the reasons why Linux dependencies suck and how to better handle them. But instead he keeps derailing the discussion over trivialities that don’t exist.
So I’ve now given up being polite and having any productive debate.
The differences between Slackware and Ubuntu are as big as the differences between FreeBSD and OS X. So you’re not really comparing like for like.
And as I’ve repeatedly said already, if you’re the kind of user that will get stumped by dependencies then you shouldn’t be running a more technical distro. So this whole argument about how “new users would struggle with dependencies on Slackware” is stupid, as new users shouldn’t be using Slackware in the first place.
It does checks when you run WU and it does add items to the registry for de-installation via Add/Remove Programs (even if it doesn’t de-install anything itself).
However you’re still missing the point. You stated that Windows doesn’t do dependency checks, I cited a Windows utility that does.
that wasn’t your original point and I’m sorry to be rude but you’ve made it perfectly clear that you haven’t completely understood how this stuff works by the way you’ve blindly stabbed at arguments. You’ve demonstrated complete ignorance about recommended deps in apt-get and how to clean up dependencies when uninstalling via the CLI (there was a post about these two items which you’ve ignored). You’ve also demonstrated complete ignorance about the size of Windows packages and basically just fudged every single point you’ve made.
But rather than listen to others and learn a little information (after all, your root point was never disputed, only the crazy arguments you made to uphold your issue with Linux dependencies), you’ve been too proud to admit that you could have gaps in your understanding and have proceeded to reiterate the same retarded arguments over and over.
In short, you haven’t a fucking clue what you’re talking about.
Another example of retarded logic; I know Win32 APIs are there on Windows and needed, but that doesn’t mean that Windows is broken because of it. I know a car engine is there and needed, but that doesn’t mean car engines are broken by design. I know our sun is there and needed for life on Earth, but that doesn’t mean I have to worry about whether it would rise and fall each day.
I mean, if we’re just listing off stuff we know exists as proof that things don’t work, then we might as well just give up now.
True. But the same people also complain about the 16GB footprint Windows has and how that footprint always grows with time, eventually forcing a complete system rebuild.
Having every possible dependency preinstalled by default isn’t the golden egg. Personally I wish Windows managed its dependencies a little more.
Another example of not knowing what you’re talking about.
.NET isn’t a single dependency, it’s about five (v1, 2, 3, 3.5 and 4 IIRC). Each is near enough 100MB in size.
DirectX isn’t a single dependency, it’s several:
Direct3D, DirectSound, DirectInput, etc. then you have Silverlight extensions, DRM extensions, XNA extensions, and such like. Granted (bar Silverlight) they’re all wrapped up in one re-distributable, but that’s another 100MB re-distributable. Linux, however, would have them all as separate libraries. So if an application only made use of graphics libraries, you wouldn’t have every other library in a forced download as well. Thus reducing the downloading and disk footprint required.
You’re also ignoring VB runtimes, Visual C++ re-distributables, MFC, Win32s, Winsock APIs, Trident objects, Office plugins, PDF plugins, Silverlight, Flash, hardware device libraries, ASIO, OpenGL, JRE, ActiveX controls, misc 3rd party shared DLLs, and so many more I’ve forgotten.
You keep arguing that Windows doesn’t have dependencies, that’s just complete and utter drivel.
Just Google “Windows missing DLL” and you’ll see that users are frequently greeted with a missing dependency. As frequently as on Linux? Possibly not; depending on the distro. But to say that it doesn’t happen on Windows is just ignorant.
Again, complete ignorance. Win8 ARM locks you entirely into MS’s repos. Both Windows and OS X are moving away from allowing software to be installed outside of their ecosystem. At least Linux gives you the chance to add new repos or install via a standalone .deb / RPM or source tarball, even if it’s not the preferred method.
I appreciate that. I’m genuinely not objecting to any perceived criticism against Linux / package managers. As I’ve said myself, I don’t think Linux shared objects are the best solution to the problem. However my issue with you isn’t the conclusion (that I agree with), it’s the reasoning you used to deduce your conclusion. All the reasons you’ve cited have been complete garbage, even if you were absolutely correct in stating that the Windows method is easier for new users.
I guess it’s a bit like arguing that life on earth depends on the sun because it’s a mystical God that breathes life into all plants. Conclusion (life depends on sun): 100% correct. Reasoning (sun is God): idiotic. From a Linux and Windows system administrator and developer, that’s how your arguments are coming across, even though I do actually agree with your final conclusion.
You display an amazing ability to keep ignoring my single and simple point. A quick demonstration of it in practice:
[29-10-12 12:40:09] Etienne Wettingfeld: Heb jij wel eens problemen met dependencies onder Windows?
[29-10-12 12:40:33] Remco Ketting: Ik weet niet eens wat het is
Google translate:
[29/10/12 12:40:09] Etienne Wettingfeld: Did you ever have problems with dependencies on Windows?
[29/10/12 12:40:33] Remco Chain: I do not even know what it is
[29-10-12 12:49:20] Etienne Wettingfeld: Heb je wel eens last van Windows dependencies?
[29-10-12 12:49:45] Judith Lievaart: wat zijn dependencies?
Google translate:
[29/10/12 12:49:20] Etienne Wettingfeld: Have you ever experienced Windows dependencies?
[29/10/12 12:49:45] Judith Lievaart: what are dependencies?
And this is my point, my only one, and it’s very simple. Dependencies are not an issue for Windows and OS X users. If there are any, they are very rare.
For some strange reason you try to prove there are dependencies under Windows and I don’t dispute that, but it’s something that is not an issue for users.
I guess part of the confusion is that you don’t use Windows. Comparing Windows Update to a Linux package manager and pointing to DLL hell makes it seem you’re not up-to-date with Windows. Comparing FreeBSD to OS X is also a very far stretch, much further than my comparison of Debian and Slackware.
You said:
My original claim, to which you responded, was:
“Windows and OS X users can’t even spell the word ‘dependency'”
And then followed it up with:
“Windows and OS X don’t have package systems that involve dependencies, because it’s no big deal.”
Windows Update does not qualify as a package system; it’s a Microsoft product updater. The new Apple app store isn’t one either: the apps it downloads and updates have their dependencies included, and the app store doesn’t do any checking.
If you download an app on Windows or OS X and it needs something extra, the app won’t install it for you, and neither Windows (Update) nor OS X will do it either. Unlike apt, for example.
Not knowing what dependencies are isn’t the same thing as not having problems with dependencies:
https://www.google.co.uk/search?q=dll+not+found&aq=0&oq=dll+not+foun…
Agreed
It’s rarely an issue for Windows users.
You’re right, I magically develop Windows applications without ever touching Windows. Excellent deduction there mate
I never said anything about DLL hell. I was listing off Windows dependencies; citing them as evidence that there are in fact more than just the two you listed.
Read what I posted again; I made the Ubuntu / Slackware comparison in that post. I never cited Debian was like OS X.
In fact if anything, I’d argue that FreeBSD is more akin to OS X than Slackware is akin to Ubuntu.
You’re nitpicking there. I will grant you that WU isn’t a complete package manager, but it does handle dependencies (which was the point you were raising when I cited WU). And I do agree that neither Windows Marketplace nor the App Store are complete package managers either, but they are repos – which was in direct response to when you discussed Linux repos specifically.
You made claims that X, Y and Z doesn’t happen on Windows / OS X, I gave examples of where it does, and then you disregard them because they’re not an exact replica of the Linux metaphor, despite them being a direct implementation of a function you said didn’t exist in the first place.
True, it does something even more evil; it just fails. Personally I’d rather have my additional dependencies auto-resolved than be left with an unworkable application that I’m expected to debug myself, so arguing that auto-downloading dependencies is a bad thing is just retarded. But I doubt you’d ever back down from that position given how deep into the debate we’ve now become.
And before you cite that most Windows installers ship the required dependencies within them: that’s still worse than having a managed solution because it means that every game’s installation medium must include the DirectX (et al) re-distributable regardless of whether it’s already installed on the target machine. Hardly an optimal method really, is it?
And to close, while you may nitpick at the definition of “package management system” to suit your failing arguments, take note that the rest of the world has saner definitions:
http://en.wikipedia.org/wiki/List_of_software_package_management_sy…
*awaits yet another reply from you as you desperately try to disprove the weight of evidence stacked against you.
Finally!
Yeah, just a pity you’re too ignorant to grasp why :p
I do, but if you want me to explain it I’ll just copy ‘n’ paste your bit just to make it extra annoying. :-p
lol
I have been an arse. I agree
It’s your right to be one, just don’t expect me to kiss you when it’s your birthday.
“On a server, without GUI, it downloaded X11 + its dependencies because I wanted to install some graphic conversion utility (CLI based!). Removing that tool removes the tool, but not X11.”
apt-get install --no-install-recommends foo
I suspect would have been the way to avoid installing X.
apt-get remove --auto-remove foo
will get rid of all unused dependencies.
You could clean up all unused dependencies with
apt-get autoremove
or use deborphan for those who prefer a gui.
Also
apt-get --simulate …
will allow you to see what the results will be before you mess things up.
I recommend you have a look at
man apt-get
Do you understand how much of a troll you are acting like? WereCatf is being very nice in putting up with your nonsensical non sequiturs. It really sounds as if you are projecting some other hurt in your life onto this pseudo-technical argument (no offense to WereCatf, who is being very nice and technical). But seriously, relax. Play some crazy violent video games, have a beer and chill.
I don’t think sameer is trolling – he’s just exhibiting some social attitudes one can find in south Asia. WereCatf challenged him, thereby challenging his masculinity and his place in society; he tried to assert his superiority by spouting some half-baked ideas and failed.
You are generalizing the attitudes of Indian Hindus to cover all south Asians. What you are talking about is the attitude of the typical “Indian” who is in the west for “jobs” or “higher studies”.
Please look up the history of south Asia when you have time. And please read up on the heroes of socialism to know what a “man” is supposed to sound like.
So the QNX company is going out of business, I hear, because of the Linux fanboys on this site.
Good bye forever.
Edit: to admins, please delete my profile or tell me how to do so.
Edited 2012-10-27 18:36 UTC
QNX is owned by RIM, the makers of the BlackBerry.
My guess: he read somewhere about OSX all-in-one, drag & drop “install” files …and connected it with the official OSX UNIX certification.
That method never existed. Even in the old days programs had to be compiled for each Unix variant and architecture, and these days UNIX dependencies are almost as bad as Linux.
Even on Windows, so called “stand alone” applications have pre-requisites, be that a specific version of the .NET framework, the latest DirectX drivers or even just Win32 libraries.
Have you never considered that your problem might be running one of the most resource heavy distributions of Linux on a 10 year old PC?
You’d be better off with Puppy than Mint.
Please don’t insult our intelligence with such blatant lies. If you want to appear to understand this subject, then you’re much better off learning the subject (and keeping quiet on the subject until you do) rather than pretending to then covering your tracks with fictitious boasts because everyone has voted you down for posting nonsense.
Scroll to the end:
http://www.openqnx.com/phpbbforum/viewtopic.php?t=8261
Still, it’s the one and only reference to this OS I can find.
That’s the one I googled, too. He says he started working on it in 2002 and by 2006 he still didn’t have any sort of a GUI or task-switching, ie. it was merely a single-process OS. And there is no actual indication that he had done any of the coding himself — with his clear misunderstanding of basic concepts I feel it’s highly likely he has just borrowed code from others.
An interesting fellow.
You’re confusing your own opinion with facts.
Say what? The fact that one is kernel space and the other is user land says exactly that: that there is a separation.
It’s not 2002 anymore, use the right tool for the job. Mint is obviously not the right choice for such a resource starved system.
How is this relevant? What does it even mean?
It’s 2012, we have reliable tools to handle these things now.
I think you’re confusing Unix and DOS.
I think you’re confusing Unix and DOS.
Well, to be honest, that’s the way Unix works: just copy the files to a directory and it works.
From the context I think he means you don’t need to worry about libraries, dependencies and such.
Perhaps Linux is badly written, complicated and certainly its coders aren’t the most pleasant people, but it’s hard to associate Linux with slowness (perhaps it is when run from a USB flash drive) or crashing.
Linux (the kernel) and GNU wonderland are, in my experience, just fine. The GUI stuff is often buggy and crashes, but it won’t take down the system itself. A Linux server will go on and on for months and years.
If you experience crashes it’s probably defective hardware or some rare buggy driver.
You’ve been on osnews too long. “Perhaps linux is…” Actually, if you have had all three mainstream OSes installed, say Windows XP (which can be made to run quite smooth), OS X (actually slow, sometimes even taking 5s for keyboard response here) and for instance Ubuntu (many would call it a bloated Linux, but still), you would actually prefer Ubuntu. So how can Linux be badly written? Indeed it seems to be the better of them all. If you wanna talk about badly written, think about the product MS sells. That is all: no enthusiasm, just a dollarmonkey, a product sold, just as CP/M once was. Also junk. I think most enthusiasts agree that Windows is POORLY written. And OS X shows that even original Unix code can turn into a Windows-like annoyance. Linux though, and Ubuntu: lots of choice, a modular mindset, and the best of code. You wanna run a window manager from the time before overobfuscated high-level concepts, try IceWM with a good theme. And you don’t have to worry about all the desktop soap opera either: “no, the desktop is dead”, “no, the desktop is alive”, “no, Linus killed the desktop with evil mentalrays”. And here I was, running IceWM and not noticing a thing. And Wayland is coming in a big way.
“Poorly written” – no. And it has a lot of innovation, and seems to be incorporating more and more realtime as well. Have you ever played an OpenGL game with ACCURATE fps? It is just so much more enjoyable. Not to speak of how low latency/low jitter improves the responsiveness of the desktop, making activity show already on the next frame after input.
No “lie”, no evil coders. But as in many places, Linux has been associated with several things. For instance, something many people “know” is that Gentoo is for “performance”. However it’s mostly a myth, and in their forums you will get some really bizarre answers from time to time.
What I suggest is really just trying out the most popular distributions like Ubuntu/Suse/etc.
If you’re into realtime, or low jitter, you might want to build yourself a PC just for that purpose.
I am doing one, and it looks like this currently: http://paradoxuncreated.com/Blog/wordpress/?p=4176
It’s gonna be great.
Peace Be With You.
I’m a quite able Linux user and I have to fiddle around with Windows XP/7 on a regular basis while I personally use OS X.
Sadly I have reached a stage in my life where I don’t have the time or motivation to check out Linux distributions, testdrive different GUIs or build my own PC. There was a time when I did this and I’m happy it’s over.
Linux desktops are fast, even heavy ones like KDE, but my gripe is with the application software. IMO it’s just not good, certainly when compared to its Windows and OS X counterparts. There is some good software, but not much of it.
When I use Linux I prefer CLI only. That’s fun to use and all the CLI commands and programs are very useful, powerful and just work.
For the usual stuff that a typical user likely needs, the software is fine I’d say. Browsers, Open/Libre Office, bittorrent clients, image viewers/organisers are perfectly OK, plus throw in some media player and IM – those last two tend to be somewhat better actually: multi-communicators and plays-everything seem to be more the rule on Linux than on Windows & OSX.
(sorry for a late reply, again …I think I went to sleep when I had this reply window opened )
There’s a little truth to this. Try running
$ dd if=/dev/zero of=~/dumpfile bs=4G count=1
on a system with 2 GB of RAM and any amount of swap space; the OS will hang for a long, long time.
(If you don’t have swap space, the command will fail because you don’t have enough memory. But it’s not safe to run without swap space… right?)
Mind you, Windows is just as bad about this – it just doesn’t have tools like dd preinstalled that can easily crash your computer.
You sure? I get
dd: memory exhausted by input buffer of size 4294967296 bytes (4,0 GiB)
Lowering to bs=1G, dd will complete without much noticeable slowdown.
You’re right, my mistake. For the Bad Things to happen, bs has to be set to something between physical RAM and total (physical + virtual) memory.
That said, I have never seen large writes fail to produce a noticeable slowdown. Not on an HDD anyway, I’m not sure about SSDs. I suspect that slowdowns during big writes are unavoidable on normal-spec desktops.
Well, that is actually the expected behaviour on an average desktop-oriented distro. Of course allocating 4 gigabytes of contiguous memory on a system that does not have that much is going to slow down or fail; you can perfectly well try that on Windows and OS X and get exactly the same thing.
Now, before you go ahead and try to say this is a fault in Linux I have to enlighten you that it’s actually a perfectly solvable problem. Forced pre-emption enabled in the kernel, a proper I/O scheduler and limiting either I/O or memory-usage per process or per user will solve that thing in a nice, clean way, without breaking anything in userland, and provide for a functional, responsive system even with such a dd going in the background. If you’re interested, peruse the kernel documentation or Google around; there’s plenty of documentation on exactly this topic.
These are, however, not used on desktop systems because usually desktop systems are only utilized by one person at a time and they do not have the need for such and therefore it’s rather misguided to even complain about that — these are features that are aimed at enterprise servers and require some tuning for your specific needs.
EDIT: Some reading for those who are interested:
http://en.wikipedia.org/wiki/Cgroups
http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=…
http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=…
Edited 2012-10-27 21:56 UTC
Gullible Jones,
The OP’s clearly trolling, but you post an interesting question.
“$ dd if=/dev/zero of=~/dumpfile bs=4G count=1”
I don’t get your result, it says “invalid number” for any value over 2G, probably because it’s using a 32bit signed int to represent the size (on a 32 bit system).
“Mind you, Windows is just as bad about this – it just doesn’t have tools like dd preinstalled that can easily crash your computer.”
My own opinion is that this is a case of garbage in, garbage out. dd is a powerful tool and was not designed to second guess what the user wanted to do. You’ve asked it to allocate a huge 4GB buffer, fill that buffer with data from one file, and then write it out to another. If it has enough ram (including swap?) to do that it *will* execute your request as commanded. If it does not have enough ram, it will fail, just as expected. It’s not particularly efficient, but it is doing exactly what you asked it to do. Windows behaves the exact same way, which is the correct way.
You could use smaller buffers, or use a truncate command to create sparse files. Maybe we could argue that GNU tools are too complicated for normal people to use, but let’s not forget that the unix command line is in the domain of power users; most of us don’t really want our commands to be dumbed down.
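To make the sparse-file suggestion concrete, here’s a minimal sketch (the path and size are just examples, not anything from the thread):

```shell
# Create a 4 GiB sparse file instantly: no 4 GiB buffer in RAM, and
# no real disk blocks allocated until something actually writes data.
truncate -s 4G /tmp/dumpfile

stat -c %s /tmp/dumpfile   # apparent size in bytes (4 GiB)
du -k /tmp/dumpfile        # blocks actually used: close to 0

rm /tmp/dumpfile
```

Compare that to the dd invocation above, which materializes every byte through a userspace buffer.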
“(If you don’t have swap space, the command will fail because you don’t have enough memory. But it’s not safe to run without swap space… right?)”
I don’t believe in swap
Look at it this way, if a system with 2GB ram + 2GB swap is good enough, then a system with 4GB ram + 0 swap should also be good enough. I get that swap space is so cheap that one might as well use it “just in case” or to extend the life of an older system, but personally I prefer to upgrade the ram than rely on swap.
Edited 2012-10-27 18:38 UTC
I realize the above is correct behavior… What bothers me is that (by default anyway) it can be used by a limited user to create an effective denial-of-service attack. Stalling or crashing a multiuser system should IMO (ideally) be something that root, and only root, can do.
OTOH, the presence of tools like dd is why I much prefer Linux to Windows. Experienced users shouldn’t have to jump through hoops to do simple things.
Edit: re swap, I wish there were a way of hibernating without it. In my experience it is not very helpful, even on low-memory systems.
Edited 2012-10-27 18:44 UTC
Gullible Jones,
“What bothers me is that (by default anyway) is that it can be used by a limited user to create an an effective denial-of-service attack.”
I see your point. You can put hard limits on a user’s disk/cpu/ram consumption, but that can easily interfere with what users want to do. I’m not sure any system can distinguish between legitimate resource usage and accidental or malicious usage?
At university some ten years ago, we were using networked sun workstations, I’m sure they’d know something about distributing resources fairly to thousands of users. I don’t remember the RAM capacity/quotas, but I do remember the disk quota because I ran into it all the time – soft limits were like 15MB, uck!
mv `which dd` /sbin/
problem solved.
Edited 2012-10-27 20:18 UTC
That just takes care of one tool which can bring the system to its knees, limiting access to dd is a bandage. The issue is that any application on Linux can cause the system a great deal of stress or bring it down. (I do this a couple of times a year by accident.)
There are ways to protect against this kind of attack (accidental or not) such as setting resource limits on user accounts. Most distributions do not appear to ship with these in place by default, but if your system requires lots of uninterrupted uptime, the sysadmin should consider locking down resource usage.
It’s the same case for all OSes though. Trying to open a 200MB Excel spreadsheet that some office idiot decided to build a database in will easily bring Windows to its knees.
The moment you put an idiot in front of a computer, that machine is as good as dead. Regardless of the OS. There’s a saying that goes something like “the more you dumb something down, the bigger the idiot that comes along”.
jessesmith,
“The issue is that any application on Linux can cause the system a great deal of stress or bring it down. ”
Agree with your post, however let’s expand that to ANY multiuser OS, be it UNIX (freebsd, linux, osx, etc), windows terminal server, citrix, etc.
man sh
Scroll down to ulimit and read about all things you can put limits on. dd is not a problem by itself.
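As a quick illustration of that point, here’s a sketch of ulimit defusing the earlier dd example (the 256 MiB cap is an arbitrary figure chosen for illustration):

```shell
# Cap the virtual memory of everything run in this subshell to 256 MiB
# (ulimit -v takes KiB). dd's attempt to allocate a 1 GiB buffer now
# fails immediately instead of dragging the whole machine into swap.
(
  ulimit -v 262144
  dd if=/dev/zero of=/dev/null bs=1G count=1 2>/dev/null \
    && echo "dd succeeded" \
    || echo "dd failed fast, system unharmed"
)
```

The limit dies with the subshell, so the rest of your session is unaffected.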
Laurence,
“If Linux gets exhausted of RAM, then the requesting application is killed and an OOE (out of memory exception) raised in the event logs.”
Isn’t the default behaviour under linux to call the out of memory killer? It takes over and heuristically decides which process to kill. I’m opposed to the OOM killer on the grounds that it randomly kills well behaved processes, even when they handle out of memory conditions in a well defined way.
Playing devil’s advocate, OOM killer gives the user a chance to specify weight factors for each process to give the kernel a hint about which processes to kill first (/proc/1000/oom_adj /proc/1000/oom_score etc). This increases the likelihood that the kernel will kill a process that is responsible for consuming the most ram. Without the OOM killer, a small process (ie ssh) can be forced to terminate when another process (dd bs=4G) is responsible for hoarding all the memory. Killing the large “guilty” process is better than killing small processes that happen to need more memory.
I am interested in what others think about the linux OOM killer.
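For the curious, those weight factors are just files under /proc; on current kernels the tunable is oom_score_adj rather than the older oom_adj mentioned above. A small sketch:

```shell
# Volunteer the current shell (and anything it spawns) as the OOM
# killer's preferred victim. The range is -1000 (never kill) to
# 1000 (kill first); raising the value needs no special privileges,
# lowering it requires CAP_SYS_RESOURCE.
echo 500 > /proc/self/oom_score_adj

# Children inherit the value across fork, so this prints 500:
cat /proc/self/oom_score_adj
```

Daemons like sshd commonly do the opposite, writing a negative value so they survive a memory crunch.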
“mv `which dd` /sbin/ problem solved.”
I don’t think that addresses the root concern, which is that userspace processes can abuse system resources to the point of grinding the system to a halt. dd was a simple example, but there are infinitely more ways to do similar things. If our goal was to deny access to all the commands with potential to overload system resources, we’d be left with a virtually empty set. Obviously you’d have to deny access to php, perl, gcc, even shell scripts. The following does an excellent job of consuming both CPU and RAM on my system until I run out of memory and it aborts:
cat /dev/zero | gzip -c | gzip -d | gzip -c | gzip -d | gzip -c | gzip -d | sort > /dev/null
It’s not likely to happen accidentally, but if a user is determined to abuse resources, he’ll find a way!
Well yeah, that’s what i just said.
Yeah, i’ve often wondered if there was a better way of handling such exceptions. OOM doesn’t sit nicely with me either.
Interesting concept. A little tricky to implement I think, but it has potential.
But that’s true for any OS. If a user has access to a machine then it would only take a determined halfwit to bring it to its knees.
The only ‘safe’ option would be to set everyone up with thin clients which only have a web browser installed and bookmarked link to cloud services like Google Docs.
Laurence,
“Well yeah, that’s what i just said.”
“Interesting concept. A little tricky to impliment I think, but it has potential.”
Maybe we’re misunderstanding each other, but the OOM killer I described above *is* what linux has implemented. When it’s enabled (I think by default), it does not necessarily kill the requesting application, it heuristically selects a process to kill.
“The only ‘safe’ option would be to set everyone up with thin clients which only have a web browser installed and bookmarked link to cloud services like Google Docs.”
Haha, I hear you there, but ironically I consider firefox to be one of the guilty apps. I often have to kill it as it reaches 500MB after a week of fairly routine use. I’m the only one on this computer, but if there were 4 or 5 of us it’d be a problem.
This is probably hopeless, but here is what top prints out now:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
27407 lou 20 0 1106m 403m 24m R 4 10.0 50:27.51 firefox
21276 lou 20 0 441m 129m 5420 S 3 3.2 869:47.14 skype
I didn’t realise skype was such a hog!
Ahh yes sorry. Thanks for the correction
“But that’s true for any OS. If a user has access to a machine then it would only take a determined halfwit to bring it to its knees.”
Have to disagree; IMO the entire goal and purpose of a multiuser OS is to prevent users from stepping on each other’s toes. Obviously some of this is the sysadmin’s responsibility; but I do think it’s good to have default setups that are more fool-proof in multiuser environments, since that’s probably where Linux sees the most use. (I think?)
That said, operating systems are imperfect, like the humans that create them.
Re handling of OOM conditions. IIRC the BSDs handle this by making malloc() fail if there’s not enough memory for it. From what I recall of C, this will probably cause the calling program to crash, which I think is what you want in most cases – unless the calling program is something like top or kill! But I doubt you’d easily get conditions where $bloatyapp would keep running while kill would get terminated.
(Linux has a couple options like this. The OOM killer can be set to kill the first application that exceeds available memory; or you can set the kernel to make malloc() fail if more than a percentage of RAM + total swap would be filled. Sadly, there is as of yet no “fail malloc() when physical RAM is exceeded and never mind the swap” setting.)
Gullible Jones,
“Have to disagree; IMO the entire goal and purpose of a multiuser OS is to prevent users from stepping on each other’s toes. Obviously some of this is the sysadmin’s responsibility; but I do think it’s good to have default setups that are more fool-proof in multiuser environments, since that’s probably where Linux sees the most use.”
I agree with the goal and that there may be better defaults to that end, however sometimes there isn’t anything an OS can do to solve the problem if resources are oversubscribed in the first place.
For instance, on a server with 10K hosting accounts, one cannot simply divide the resources by 10K since obviously most accounts aren’t busy at any given time and those that are may be seriously underpowered. As a compromise, the policy on shared hosts is to over-subscribe the available resources to make service good enough for active users most of the time. Unfortunately an over subscribed service will necessarily suffer if too many users try to make use of it at the same time. Every single shared hosting provider I’ve used has had performance problems at one time or another.
I confess, one time I was guilty of bringing down the websites of everyone on a shared server. Apparently apache/mod_php has an open file limit, and I was running an application that consumed too many of them. It was neither memory nor CPU intensive, but the system was nevertheless unable to serve any further requests. Using any ulimit quotas at all on shared daemons like apache can result in this kind of denial of service. The issue gets even more complicated because it’s impossible to determine which domain (and therefore user account) an incoming request is for until the HTTP headers are read. It means there’s nothing the OS can do about this case and the shared daemon itself has to be responsible; it’s quite the dilemma!
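To make the failure mode above concrete, here is a quick sketch of how one might inspect the limits a shared daemon inherits (standard shell builtins and proc files, nothing specific to any one distribution):

```shell
# Per-process soft limit on open file descriptors inherited by
# anything started from this shell (e.g. a worker process)
ulimit -n

# System-wide ceiling on file handles; once either limit is hit,
# accept()/open() start failing even with idle CPU and free RAM
cat /proc/sys/fs/file-max
```

When the per-process limit is shared by one apache instance serving every account, a single descriptor-hungry application exhausts it for everyone, which is the denial of service described above.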
“Re handling of OOM conditions. IIRC the BSDs handle this by making malloc() fail if there’s not enough memory for it. From what I recall of C, this will probably cause the calling program to crash…”
I agree that malloc should fail if there isn’t enough RAM. To me the deferred allocation scheme under Linux is like a broken promise that memory is available when it isn’t. It should instead return NULL, which the caller can handle without crashing.
The reason Linux has the OOM killer is because Linux allows you to over-commit memory. I’m a bit hazy on the exact reason for this, but it had something to do with making fork() of processes that use a lot of memory efficient: with copy-on-write, the child rarely needs its own copy of the parent’s pages, so the kernel doesn’t insist on reserving that memory up front.
The BSDs do not allow over-commit and instead malloc() (or whatever allocation call) will fail.
Which approach is better can be discussed at great length but hopefully not here.
Soulbender,
“The reason Linux has OOM is because Linux allows you to over-commit memory.”
It’s not strictly the only reason to have the OOM killer: even if memory hasn’t been “overcommitted”, requests for more memory can still fail, and Linux will apply heuristics to kill a process other than the one requesting memory.
“Which approach is better can be discussed at great length but hopefully not here.”
I was hoping to hear more opinions about it, but maybe most people aren’t familiar enough to have an opinion.
If Linux gets exhausted of RAM, then the requesting application is killed and an OOE (out of memory exception) raised in the event logs.
Sadly this is something I’ve had to deal with a few times, when one idiot web developer decided not to do any input sanitising, which effectively ended up with us getting DoS attacked when legitimate users were making innocent page requests. <_<
“If Linux gets exhausted of RAM, then the requesting application is killed and an OOE (out of memory exception) raised in the event logs.”
Not quite true, that’s what Linux should do. What Linux usually does (unless vm.oom_kill_allocating_task is set to 1) is attempt to kill programs that look like memory hogs, using some kind of heuristic.
In my experience, that heuristic knocks out the offending program about half the time… The other half the time, it knocks out X.
I stand corrected. Thank you
Linux has lots of problems. GNOME and KDE create, from my point of view, a madness of library dependencies. Sometimes installing a simple program requires 30 libraries.
Thankfully there are lots of options in Linux, and for some reason I keep choosing Slackware as a base installation (but without KDE).
And for a real-time OS, QNX is (sadly) no longer a valid option. It belongs to RIM and you can guess what will happen.
Linux desktops have (IMHO) taken a turn for the worse lately, but don’t mistake that for what’s happening under the hood. Newer kernels have some really cool features (and IMO perform better on ancient hardware than the old 2.6 series).
(And fortunately there are still Mate and Xfce on the desktop front. Also Trinity, though that doesn’t seem to be as functional right now.)
Was really looking forward to an in-depth discussion on the merits of different schedulers … and all that’s posted is the same tired trolls.
Is this really what the Internet has come down to?
I would have liked such a discussion, too, but I simply do not know enough about schedulers to say much or have an informed opinion on the various approaches. Sorry.