Ubuntu 13.04 has been released, with the Linux 3.8.8 kernel, a faster and less resource-hungry Unity desktop, LibreOffice 4.0, and much more. Ubuntu users will know where to get it, and if you’re looking for a new installation, have fun. Also fun: UbuntuKylin.
Good, but the bad part is that Ubuntu GNOME 13.04 ships with GNOME 3.6 and not 3.8.
Edited 2013-04-25 15:06 UTC
Has GNOME 3 even become usable yet, as of 3.6 and 3.8? Last I checked, it wasn’t even worth using, but 3.4 was slightly “better” (note: that’s not saying much) than 3.0 and 3.2. Last I checked, even if you didn’t mind the interface (I didn’t like it), the desktop still required lots of memory and some strong, compatible 3D graphics hardware if you wanted any chance of running it.
Have there been any major performance/system requirement improvements, and noteworthy GUI improvements since then? Or is it the same old load of crap all around?
It needs good drivers more than it needs good hardware. The current Fedora release runs just fine on my crappy five-year-old netbook, with a first-gen Atom processor, 1GB of RAM, and Intel graphics. I mean, it’s not *fast*, but it’s adequate, and no worse than any other desktop I’ve tried on that machine.
It also works just fine on the onboard ATI chip on my current machine… I forget what it is, but it’s about three or four years old, a 4550 or something like that. Again, not an especially powerful or memory-heavy machine…
Why don’t you try it yourself?
Judging by the past comments by this user I doubt he/she wants to try gnome.
Actually, I would consider giving it a try in a virtual machine, *if* the interface and overall environment really has improved/evolved, *and* if it doesn’t require fancy hardware acceleration (which makes trying it in a virtual machine… pretty much impossible). If it is the same old desktop with no worthwhile user or performance improvements, then no, I do not want to waste my time trying it.
I gave KDE4 the benefit of the doubt and tried it later on after it evolved… and right now KDE4 just happens to be what I am running (and has been for quite a while).
Edited 2013-04-26 18:05 UTC
And another bad part is that Ubuntu Unity comes with Gnome…
No no, the bad part is that they don’t ship it with Gnome 2.32!
You can always use the gnome3 ppa. It has some pre-release packages currently but I expect it to be updated to stable packages soon.
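For anyone who hasn’t added a PPA before, it’s roughly this (the gnome3-team PPA is the usual one, but check its Launchpad page first to confirm it actually has raring packages before adding it):
sudo add-apt-repository ppa:gnome3-team/gnome3
sudo apt-get update
sudo apt-get dist-upgrade
# to back out later, ppa-purge can downgrade everything the PPA replaced
sudo apt-get install ppa-purge && sudo ppa-purge ppa:gnome3-team/gnome3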
Really?!? Do you really think a PPA won’t appear (if it hasn’t already) to update Gnome to 3.8?
There is one PPA already, but it is not complete and has drawbacks.
There’s always something breaking with each ubuntu upgrade. Last time it was VMware. What will it be this time? VMware again?
Yeah, I have a grudge because I lost a day’s worth of work to the lack of care on their part.
And that’s when I switched to OSX.
progrormre,
Ubuntu upgrades have failed me in the past, but in your case, without more details, is it possible that you are judging Ubuntu for what may be a third-party problem? VMware is, after all, a third-party proprietary app you had to install from outside the repos.
The last time I installed VMware on Linux, it needed to compile some kernel-specific extensions before it could run, and those would very likely break during an OS upgrade. Of course I can understand why you want it to just work, but did you try to reinstall it? Just curious.
The first thing I did was to install a stable version of CentOS (or maybe it was some other stable distro with VMware support, I can’t remember), only to find that it did not support the filesystem I had used for the disk where all my files were stored… By that time I had already spent the morning and stress was going through the roof so I ran to the nearest apple shop.
Sure, it was a Linux kernel dev who changed an interface, perhaps not knowing it would break apps using that interface… Then all the testers at Ubuntu forgot to test using VMware. That, or they all knew VMware was broken but couldn’t care less… I don’t know, it just ended up on my desk being broken.
progormre,
“The first thing I did was to install a stable version of CentOS (or maybe it was some other stable distro with VMware support, I can’t remember), only to find that it did not support the filesystem I had used for the disk where all my files were stored…”
I’m missing something; it’s not clear what this has to do with a VMware instance that stopped working after an Ubuntu upgrade. Or, if you didn’t have an existing VMware instance, how do you know the Ubuntu upgrade broke VMware?
“By that time I had already spent the morning and stress was going through the roof so I ran to the nearest apple shop.”
I’m glad it’s working out for you now. Isn’t it great to have a choice! To paraphrase Voltaire: I may not agree with your choice of software, but I’ll fight to the death for your right to choose it.
Although VMware does work extensively with Canonical, it is not their job, or the Ubuntu volunteers’ job, to ensure that our product works. It’s VMware’s job. As I stated above, we do endeavour to support Linux and try to keep WS and Player compatibility as up to date as possible while at the same time adhering to our own release schedule. We do have an experimental setting in our installer that will attempt to automatically rebuild new modules on kernel upgrade, but it does depend on having a sane build environment. Hope this is helpful.
Edited 2013-04-25 17:56 UTC
Or he could do “the right thing” and stick to an LTS version of Ubuntu.
I was hoping someone would point that out. Anyone who uses the bleeding edge of a distro is begging for problems and failures. Anyone who does that in a critical production environment is insane. The only exception is when the person is a developer on that specific distro or a derivative.
Even then, keep a second system with the LTS release handy!
Gosh!!! Only one day and you gave up that quickly… I’m almost laughing myself off my chair.
Seriously, that is nothing…
You must not have worked with MS Windows long enough then.
Aaah… I should have just given up on Microsoft already and saved myself all that lost time, mental anguish and pain… Sigh… all those lost years… Waiting… Sigh…
Which VMware?
I’ve had VMware break with a kernel update (I’m not sure if it was also an Ubuntu upgrade) but that was back with VMware Server 1.x, where you had to go patching the kernel modules.
VMware Player on the other hand has always run fine, not as featureful but enough for the odd times I need to boot a Windows machine on my laptop at work.
Edited 2013-04-25 17:32 UTC
It is expected behaviour for VMware Workstation to break when you move to a new kernel or a new build of a distro, but it is not fatal. WS ships with precompiled binary kernel modules for the most popular distros available at time of release, however they can’t take into account every Linux kernel update in the repos. In the case of Linux hosting Workstation, the linux-headers are required to re-compile the kernel modules; in this case, the kernel that ships with 12.10 is supported with a precompiled binary in WS 9.0.2 and its sister product Player, but for the updated kernel, we need to make new modules.
You should be able to accomplish this as long as you have a sane build environment on your Ubuntu: sudo apt-get install linux-kernel-devel fakeroot kernel-wedge build-essential should do it. That will give you enough of a build environment to rebuild the kernel modules.
This is not a VMware-specific issue. You would encounter the same issue if you had manually installed video driver blobs (ATI or Nvidia drivers installed via .sh script instead of a .deb, for example, would also break). Disclosure: I work at VMware and spent 3 years providing support for Workstation.
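For what it’s worth, a rough sketch of the usual recovery steps after a kernel update (package names vary a bit between releases, and vmware-modconfig is the helper bundled with recent Workstation/Player builds, so treat this as an outline rather than gospel):
# make sure headers matching the running kernel plus a compiler are present
sudo apt-get install build-essential linux-headers-$(uname -r)
# then let the bundled helper rebuild and load the VMware kernel modules
sudo vmware-modconfig --console --install-all
Launching Workstation or Player from the menu should also offer to rebuild the modules once the headers are installed.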
Also: do a sudo apt-get install libgtkmm-2.4-1c2a to get WS to use your GTK theme. http://kb.vmware.com/kb/2012664
What, it somehow screwed around with your VMware images, causing a day’s worth of changes to the images to be lost? A likely story…
More likely is that vmware or, even more likely, you yourself screwed up.
Edited 2013-04-26 01:33 UTC
It’s not a walled garden, honest. It’s not a walled garden, honest.
But if you want software the repo doorkeepers don’t like (proprietary), you have to go “at your own risk”. Riight. Same for GPU drivers.
(here come the downvotes from linuxeros)
Edited 2013-04-25 16:56 UTC
kurkosdr,
That’s just trolling. He exercised his right to install 3rd party software, which he wouldn’t have gotten at all in a walled garden. The point was that the responsibility for 3rd party software quality assurance lies with you, the end user, or the 3rd party software vendor. There’s nothing wrong with it, it’s just the simple fact of the matter.
+1 from a linuxero who thinks that over-reliance on the repository system is one of the main issues of the Linux world.
In my dream OS, the repository system would only serve to store a collection of well-trusted, “Editor’s pick” software in a single centralized place. Packages would be perfectly standalone, relying only on system libraries, and of course secure: nothing like Windows installers, demanding to run arbitrary code with admin privilege just to copy a bunch of files to a given place on disk.
Mac OS X actually got pretty close to that with the bundle system. That’s one of the things which I used to like about that OS, before Gatekeeper came around in 10.8, announcing the start of a gradual deprecation of decentralized software distribution.
Edited 2013-04-25 17:26 UTC
You mean like it used to be in MS-DOS, Amiga and Atari days?
Might be, I haven’t used an MS-DOS computer for a long time now and have never used the other two. Besides, decentralized software distribution was the only choice at a time when the infrastructure for centralized distribution wasn’t in place.
A difference, though, is that I’m not hostile to centralized software distribution per se. It’s normal for an OS vendor to want to put what it thinks is quality third-party software on display, and to provide a set of trusted mirrors for it. What I’m against is the scenario where centralized software distribution becomes the only way to get software on a given platform.
First because it ruins the purpose of having a centralized store at all, since the worst crap ends up getting in, while some gems are left out forever for e.g. licensing reasons. Second because it puts a very large amount of power in the hands of the OS vendor, which no one should feel comfortable with.
Edited 2013-04-25 20:20 UTC
This isn’t really a problem with the repo system though… it’s a problem with trying to run 3rd-party software that hasn’t been tested/certified on the new version of the OS you’re running.
Pretty much any big commercial software provider would be laughing at you if you ask for help running their applications on an unsupported OS. At best, they’ll tell you to wait until they’ve certified it themselves, or they’ll charge you big money for doing the certification ahead of schedule. Just try dealing with Oracle on something like this…
You speak of a walled garden but I don’t think you know what it really means.
I am bringing up this quote from the EA dev team mailing list:
“How EA killed their own market:
They tried for years with DRM to protect our multiplayer games, and lost heavily. The new approach is alwasy on DRM and even though we had 4 million pre-orders of sim city 5, we are still losing money ont that turd. Sim city 4 was awesome and is still one of the best selling EA games. EA made the NHL 2003 which is still the best sports game ever made.
Always on is equaling sadness for the consumer, death to free gaming and origin allthough i thought it would just be a steam competitor it has turned in to the worst place to get games, as we can not guarantee our uptime.
Steam DRM might be a teeny bit invasive sometimes, but atleast they are not assholes about their games demanding 100% uptime on the net.
For all the devs, make sure your game is rendering through opengl properly, or you WILL lose all the powervr users (iphone and android stuff) and also all the macs and linux. Linux as a gaming plattform has allready surpassed the mac in steam sales and if your game sells 100 copies it might not be worth it in the first place. But if you are interested in the Linux market: Sell your games through steam until we can get a origin client for linux, 10 cutoff in price is easily worth it. Ask Crotech who sold serious sam 3, a hard title for the PC as a linux game. They have earned more than half of their revenue from the game-starved linux community whilst their competitors squabble over the decreasing windows sales.
“
The poor spelling is in the quote, not my doing!
Edit: The mail can no longer be viewed; reason: gone, lost or protected. I cannot read the responses.
Where is it? I just tried the LiveCD and spotted a radeon driver issue with Unity.
Theoretically, this is true. Practically, Linux Desktop doesn’t care about API stability and backcompat, so, for the average user who doesn’t want to do things “at his own risk” (aka see things getting broken), it has the effect of a walled garden.
I will bring Windows and OS X as examples again. You can run software outside their stores, and it works with remarkable back compat. For most users, such stability for software installed outside the repo (or store) is perceived as preferable to seeing some source code.
Just look at Android. Nobody mentions the ability to see the source as an advantage. They will mention the openness of the store compared to Apple’s (emulators etc), the ability to sideload apps to buy directly from the dev, the hardware variety etc but not the source code. Even if they have an AOSP phone (Xperia Z, Nexus 4 etc)
Browser: Mozilla/5.0 (Linux; U; Android 2.3.4; el-gr; LG-P990 Build/GRJ23) AppleWebKit/533.1 (KHTML, like Gecko) Version/4.0 Mobile Safari/533.1 MMS/LG-Android-MMS-V1.0/1.2
It’s worse than that. Even things in the “stable” repositories will often break, e.g. GPU drivers and audio support. The latter is, in fact, something Ubuntu is rather famous for doing when they modify and customize Pulseaudio all to hell, breaking many of the ALSA drivers in the process even when they are not proprietary. One no sooner gets it working than a “stability update” breaks it again. I think the real problem is backward compatibility as opposed to the repository system. Repos can work well, as Android has demonstrated, but it has to be done correctly and backward compatibility must be maintained.
Android doesn’t really have a repo system (no dependency management)
yum --nodeps install gimp
and still pulling from a repo.
kurkosdr,
“Theoretically, this is true. Practically, Linux Desktop doesn’t care about API stability and backcompat, so, for the average user who doesn’t want to do things ‘at his own risk’ (aka see things getting broken), it has the effect of a walled garden.”
The Linux kernel has kept userspace backwards compatibility strong for ages. It’s remotely possible an API regression occurred here, but it’s far more likely that Wemgadge’s explanation was the case, since he indicates breakages are normal for VMware after OS upgrades. VMware is very atypical in that it bypasses the *stable* userspace APIs altogether. It is completely fair to criticize the lack of Linux kernel-space API & ABI stability for other reasons. However, equating that to a walled garden is very… imaginative
Edit: Look, I have no problem criticizing genuine linux distro breakages, they bother me too, but they all lack the most important attribute needed to call something a walled garden – the walls that block you from installing outside software.
Edited 2013-04-25 19:48 UTC
Well, the people behind Linux Desktop won’t directly tell you “we don’t want your proprietary app”, they do it in a more subtle way. They will simply make it a PITA for you to ship your proprietary app on Linux. (same for proprietary GPU drivers)
Let’s see: Ubuntu has like, 3 major proprietary apps, and during the last couple of releases, Ubuntu broke all 3 of them. VMWare, Steam ( http://tinyurl.com/bpbf6cv ) and Skype ( http://tinyurl.com/b8tnjmv ).
Am I the only one who doesn’t get the dream of Linux Desktop? I mean, we are supposed to give up our Windows and OS X boxes, which are binary compatible with apps released in 2006 (to say the least), start using Linux, and confine ourselves to software found in the repos? And anyone who dares to dip his pinky toe outside the repos will have to do things “at his own risk”?
I will get downvoted for this, but I will say it. There are lots of people in key positions in the Linux Desktop community with a downright hostile sentiment for proprietary software, who consider API and ABI instability a feature. This makes it impossible for proprietary ISVs and IHVs to target Ubuntu. If this hostile sentiment towards proprietary software didn’t exist, I can easily imagine Linux Desktop having anywhere from 5% to 15% marketshare.
There are of course bright exceptions to this rule (Linus Torvalds and his insistence on API stability) but they are exceptions.
Edited 2013-04-26 09:02 UTC
Wow, can you please tell me an OS where there are no issues with any software when upgrading between releases? Maybe you can also point me to an OS that has no bugs in the initial release?
Because I still haven’t found any of those two. Ever.
kurkosdr,
“Am I the only one who doesn’t get the dream of Linux Desktop?”
No, and I’ve never said it’s for everyone. It’s up to you to decide for yourself, you are free to use whatever works best for you. There are plenty of reasons people might not want to run linux. For what it’s worth, I’m not criticizing your opinion, I’m criticizing your misuse of facts.
Aaah, Fedora, the most advanced unstable distro. It shouldn’t be called Fedora Core but Fedora pre-alpha.
Kochise
You’re just jealous, because you still didn’t switch from Ubuntu. >:)
It must be something like that, especially since Fedora 13’s installer, Anaconda, aggregated all my ext3 partitions together by default, scrubbing out my Ubuntu and my swap partitions and fucking up my partition table.
Sure, I should have read more carefully, but I never thought that Fedora would behave destructively by default. Even Microsoft Windows’ installer is by far the most careful and data-preserving installer out there.
Kochise
Can you give an example of that? Because while the *kernel* doesn’t care much for backward compatibility, the desktops are pretty good at it.
Prior to the Gnome 3 release, those guys had been maintaining not only API but also ABI compatibility going back many years, and even with Gnome 3, they made sure the old libraries could be parallel-installed. So if by some chance you have some ten-year-old Gnome app you want to run, odds are you still can.
Better performance but not by much. Needs a bunch of stuff adjusted in order to be usable.
Transparent hugepage support -> OFF. Memory compaction makes desktops lag like mad during I/O.
zram swap -> ON. Much better than swapping (or using up all your RAM and running out of FS cache space).
“laptop mode” support in pm-utils -> OFF, because it screws up important sysctl settings.
Sysctl variables: vm.dirty_ratio -> 2, vm.dirty_background_ratio -> 1. Modern computers have lots of fast RAM and lots of slow disk storage; you don’t wait until pending writes occupy 50+ MB of RAM before starting to flush, that’s stupid.
Also, zcache -> ON. Not as helpful as zram swap, but can’t hurt.
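For anyone who wants to try the same adjustments, roughly this does it (sysfs paths and the zram setup vary between kernels and distros, so treat it as a sketch rather than a recipe):
# transparent hugepages off
echo never | sudo tee /sys/kernel/mm/transparent_hugepage/enabled
# writeback thresholds
sudo sysctl vm.dirty_ratio=2 vm.dirty_background_ratio=1
# minimal zram swap, sized at 512M as an example; many distros ship a zram-config/zramswap package that does this for you
sudo modprobe zram
echo 512M | sudo tee /sys/block/zram0/disksize
sudo mkswap /dev/zram0
sudo swapon -p 100 /dev/zram0
Put the two sysctl lines in /etc/sysctl.conf if you want them to survive a reboot.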
Overall it’s not bad. But the enforced 3D desktop still hurts, as does waiting 5 seconds for the alt-tab window to pop up.
BTW, a note on performance. Pretty much every major serious performance problem I’ve run into on Linux has involved disk I/O. Whenever you have
– Thrashing due to low memory conditions
– Large programs being read into memory
– Big disk writes
things slow way down. Especially with the latter.
Current “solutions” on Linux seem to revolve around delaying writes for as long as possible. IMO this is doubly stupid, because
a) Eventually the data must be written, and when it is you want the write to be of manageable size.
b) If your desktop freezes, your kernel panics, or your power fails, you do NOT want saved data to be stuck in RAM.
However, I’ll readily admit I don’t actually have a clue what would constitute a sane, broadly applicable solution; or if such a thing could even exist. If anyone has ideas on that, I’m all ears.
Edited 2013-04-25 20:27 UTC
Same for Windows: having 8 GB of RAM, I disabled virtual memory: 1) faster application startup, 2) a snappier computer, 3) 12 GB saved on my hard disk.
Kochise
LOL, you know that a modern OS expects virtual memory.
These hacks from the Windows 2000 days that people used to do are laughable.
lucas_maximus,
Swap was always a bottleneck; a modern system shouldn’t really need it given how cheap RAM is. Most RAM already goes towards disk caching anyway, so there’s usually a very large safety margin on systems with 4GB+.
There’s the “just in case” factor, but consider this: a user with 2GB ram might be recommended to have an additional 2GB swap, yet Kochise’s 8GB of real ram without swap still has a safety margin 3x greater than the 2GB swap. Any set of applications that can run on the 2GB system (which is most of them) should easily be able to run with 8GB. I’d say the need for swap is practically obsolete on performance systems.
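An easy sanity check before deciding either way (on the Linux side at least) is to just watch whether your workload ever touches swap, for example:
free -m            # how much RAM is actually in use versus sitting in the disk cache
swapon -s          # which swap devices exist and how much of each is used
vmstat 5           # the si/so columns show pages swapped in/out over time
If si/so stay at zero through a normal workday, the swap space is doing nothing for you beyond taking up disk.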
The OS still expects swap to be there. Windows 7 and 8 pre-fetch regularly used programs. Which is why it appears snappier.
Turning it off doesn’t make the system any faster when there is plenty of memory and makes the system more likely to fail when running out of memory.
Turning it off has absolutely no benefit.
Edited 2013-04-27 12:15 UTC
lucas_maximus,
“The OS still expects swap to be there. Windows 7 and 8 pre-fetch regularly used programs. Which is why it appears snappier.”
I’m not sure how MS’s implementation works, but swap isn’t fundamentally needed for this feature. Pre-loading to ram could work without swap too.
“Turning it off doesn’t make the system any faster when there is plenty of memory and makes the system more likely to fail when running out of memory.”
Sure, if swap gets zero use, then the performance should be identical. The “benefit” of having swap is having it “just in case” as both you and I agree. But you have to recognize the implicit truth in that if 4GB total ram+swap is enough for your work “just in case”, then 8GB total ram without swap would clearly be as well.
The whole reason for swap is to make software work in ram constrained systems. I haven’t seen the “your system is critically low on resources” warning on systems with even less total ram than my new ones. You can make fun of Kochise for not running swap, but frankly he doesn’t need it.
“Turning it off has absolutely no benefit.”
It increases disk space. Haha, just kidding, just be more careful with “absolutely”
I also disabled swap on my 1 GB Windows XP machine without any problem so far, for 5 years now, running up to 3 or 4 applications at the same time. I used to work on Atari machines with 1, 4 or 14 MB and things worked pretty fine. We now have 1000x more power and still complain?
Kochise
Edited 2013-04-27 20:26 UTC
There is some swapping going on there. Otherwise the OS would just fall over.
http://lifehacker.com/5426041/understanding-the-windows-pagefile-an…
It has also been known for a long time that a lot of the XP tweaks such as this are bullshit.
Here is a list of some of the other ones:
http://lifehacker.com/5033518/debunking-common-windows-performance-…
TBH when people claim you don’t need swap etc. I don’t think they really understand how most Operating Systems work … THE KERNEL EXPECTS IT TO BE THERE … regardless of whether it is needed.
Edited 2013-04-27 20:46 UTC
When there is enough RAM, the kernel SHOULDN’T use swap, period. I save the size of the swap file and several accesses to it, saving not only time but power consumption and hard disk lifetime. No “hack” there, just common sense. Windows is still allowed to create temp files, but an 8 GB swap file? Come on…
Kochise
IF you aren’t using it then you aren’t writing to swap anyway.
The only advantage to doing what you are doing is saving a few gigs of hard drive space that is allocated … when a terabyte hard drive is dirt cheap.
At the end of the day it shows how little you understand about an Operating System.
lucas_maximus,
“At the end of the day it shows how little you understand about an Operating System.”
This is such a silly thing to say. If it works it works, no need to accuse the man of not understanding an operating system.
It isn’t being rude.
http://www.ics.uci.edu/~bic/os/Pajarola/Ch08.pdf
The operating system expects it to exist. I did this stuff in my first or second year of uni.
Edited 2013-04-28 09:06 UTC
If it was really THAT necessary, why can you disable it in the first place? Without breaking the operating system? RAM is more important than virtual memory; the days when you had only 16 MB and needed virtual memory are long gone.
It’s also not just a terabyte issue: when I defragment my hard disk or make a disk image of my OS, I’m fed up with such a waste of space. If it were of real benefit, fine, but that’s not even the case. I preferred to invest in 8 GB of RAM instead.
Kochise
Be ignorant then.
I read your links, and none, NONE of them told me that during average usage (not peak memory usage, without loads of applications running at the same time) virtual memory is of any need.
They only “debunk” the “myths” by picturing the operating system stressed under heavy load. Not my case, at all. And if the operating system and/or the application cannot properly manage the available RAM without a virtual memory handler as a fallback, then that’s not my problem.
Ignorance is bliss, sometimes, because it keeps the KISS principle alive.
Kochise
Well you didn’t understand it.
The OS maps pages to frames; it expects the page file to be there. There is zero benefit other than saving a few gigabytes on modern drives, which are massive.
If you disable swap, the mapping is set accordingly, in a 1:1 fashion. When something crashes, I don’t give a fuck about the crash report or memory dump, so it’s not an issue. When applications request memory, they should allocate it when necessary, not reserve a big chunk for later use. Just because we’ve got loads of GB doesn’t mean I’ll lend them to lazy developers who will waste resources.
I don’t open tens of applications side by side because I don’t need to. After all, I only have one brain and two hands on one keyboard. Opening two or three apps, plus some background apps (torrent, music), is enough, and my 2011 quad-core computer can handle that. Only Opera wastes a load of memory; otherwise things are pretty good.
I will NOT enable the swap file for your convenience. You just failed to convince me; neither your links nor your claims provide me with enough evidence. Even Microsoft says that virtual memory was introduced when memory was expensive and limited, but with hardware featuring so many GB of RAM, swap is of no use anymore (starting even at 2 GB of RAM, if not less).
Keeping swap active is useless; it would just make Windows happy to have another thing to chew on (after all, the code is there, let’s make it run). I have other plans for my hard drive and my RAM usage. After all, Windows can work without a swap file, especially when enough RAM is fitted in the computer (my case), so why should I care about wasting my hard disk space?
Kochise
Whatever.
Kochise,
“After all, Windows can work without a swap file, especially when enough RAM is fitted in the computer (my case), so why should I care about wasting my hard disk space?”
That’s the thing I think he’s (deliberately) overlooking: a system with 8GB of RAM is better than one with 2GB RAM + 2GB swap or 4GB RAM + 4GB swap. So unless there are artificial OS-based restrictions (and tons of anecdotal evidence suggests there aren’t), that configuration is still superior to all officially condoned configurations with less total RAM + swap.
*Theoretically* 8GB RAM + X GB swap shouldn’t ever perform worse than RAM alone if there’s enough free memory to avoid swapping. However, the link I posted earlier showed that, according to stopwatch time, swap does add some overhead. It’s a surprising result and it brings into question what MS is doing with swap. An educated guess is that Windows is preemptively swapping at the expense of some performance, which, if true, gives some credibility to those who claim that simply having swap can cause more system activity than having none (even when there’s plenty of free RAM).
The configuration gives you a less stable system if, I dunno, you are actually using the resources that you have. I have similar specs on my Xeon rig at work… I run at almost 6GB of memory in use. I actually use the hardware I buy; otherwise I would still be using my 2005 laptop.
This is pretty much the same inane chest-puffing that the Gentoo ricer-flags crowd et al get into while bragging about how they are only using 10MB of memory and the CPU idles at less than a percent; their install is basically openbox and urxvt open with a uname echoed to the console.
It’s not worth messing with because of all the potential problems you might be exposing yourself to.
Edited 2013-04-29 03:06 UTC
There is no “potential” risk. Microsoft admitted that even if there is STILL plenty of RAM left, it WILL swap out less-used pages for the sake of making real RAM available, JUST IN CASE. This is the point I choke on. In case of what? Have they gathered stats to figure out RAM consumption according to the applications loaded? Nope, they just PREVENTIVELY swap out less-used things, JUST IN CASE.
While I can understand the motive (make RAM available for more serious usage), I would turn the problem inside out: what the fuck are applications doing allocating memory for nothing, memory that then needs to be swapped out? Seriously? So I should raise defensive barriers around my system to skirt the flaws in the third-party applications I paid for?
No way. I’ve got enough RAM for the system to feel at ease, so it doesn’t have to swap around endlessly. If, whatever quantity of RAM is provided (64 GB?), the system still swaps, I disable swap, pure and simple.
Come on, guy, 8 fucking GB of RAM! Do you even realize HOW MUCH space that is? And Windows or applications can waste this? It’s not even a server or anything like that, just a normal laptop (Dell Vostro 3555 A8). If applications cannot manage this much free space and I’m forced to turn swap back on, I’ll just get rid of the poorly coded application.
Kochise
You just proved my little rant about computer ricers, who don’t know what they are doing and are precious about their resources as if we were computing in the last century.
You can get as angry as you like; it doesn’t change the fact that you are running your OS in an unsupported configuration for dubious benefit.
Unsupported configuration? UNsupported? Please elaborate…
Kochise
No I won’t. I am quite bored of this.
lucas_maximus,
“It’s not worth messing with because of all the potential problems you might be exposing yourself to.”
You are still no worse off disabling swap than using another computer having less total ram+swap. The underlying need for swap decreases as ram increases. On a modern high performance machine, where swapping is completely undesirable, it’s justifiable to add more ram instead of swap. Yes it’s more expensive, but for a performance system, so what?
If you could provide a real, concrete reason that one needs swap to have a functioning system, in theory or in practice, that’d be one thing, but the problem with that argument is that Windows actually runs fine without it.
At the end of the day it just shows how little you know about memory allocators and such. And no, by default, go figure, Windows does use the swap, for some reason still unknown to me. Removing the swap forces Windows to use real RAM.
Kochise
lucas_maximus,
“TBH when people claim you don’t need swap etc. I don’t think they really understand how most Operating Systems work … THE KERNEL EXPECTS IT TO BE THERE … regardless of whether it is needed.”
You are simply wrong; many of us have done it with no ill effects whatsoever (I did it on XP, but I’ll try it again on Win7 when I get the chance). I’ve experienced one single instance where an application stupidly *demanded* swap (Adobe Photoshop). Even when there was plenty of free RAM, it’d error out. This was most certainly due to a bug or poor logic on Adobe’s part, but in general the OS doesn’t care as long as there’s enough RAM.
Edited 2013-04-28 01:42 UTC
lucas_maximus,
This thread is full of anecdotal evidence of people permanently running Windows 7 without swap and experiencing no problems. And it’s also full of people preaching “oh but you can’t do that”, quoting some (old) Microsoft material… haha, it’s funny for such a simple issue to be a hot button.
http://www.overclock.net/t/448818/pagefile-on-windows-7-do-you-even…
I found someone who researched the issue in detail. Believe it or not he had marginally better bootup and shutdown performance on win7 with swap disabled! Some games also showed improvement. Nothing huge, as he acknowledges, but it still shows that the windows 7 pagefile adds -some- system overhead. Evidently windows does a bit more work when the pagefile is there.
http://www.tweakhound.com/2011/10/10/the-windows-7-pagefile-and-run…
In any case neither he nor I are saying it’s worth disabling swap, but hopefully this information convinces you that running without swap is perfectly viable and not worth criticizing someone over.
Gullible Jones,
“Whenever you have
– Thrashing due to low memory conditions
– Large programs being read into memory
– Big disk writes
things slow way down. Especially with the latter.”
Actually it depends greatly on whether the reads/writes are sequential or random. Just now I conducted a short test:
I timed how long it takes to read 2048 sectors (10MiB) individually, using a sequential access pattern from random starting disk positions. On my system this took 0.03s, or 333MiB/s.
I read another 2048 sectors, but this time with a random access pattern. This took 27.71s or 0.36MiB/s.
As you can see, the slowness is almost entirely caused by disk seeking; raw speed is plenty fast, and this is on equipment that’s already 5 years old. Oh, I would have tested writes as well, but I don’t have a sacrificial disk to write to in this system.
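In case anyone wants to reproduce the comparison, it was nothing fancy; something along these lines gives the same picture (device name and block size are just examples, and iflag=direct keeps the page cache from skewing the numbers):
# sequential: one contiguous read starting at an arbitrary offset
time sudo dd if=/dev/sda of=/dev/null bs=4096 count=2048 skip=$RANDOM iflag=direct
# random: the same number of blocks, each from a different offset
time for i in $(seq 2048); do
  sudo dd if=/dev/sda of=/dev/null bs=4096 count=1 skip=$((RANDOM * 64)) iflag=direct 2>/dev/null
done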
“Current ‘solutions’ on Linux seem to revolve around delaying writes for as long as possible. IMO this is doubly stupid, because a) Eventually the data must be written, and when it is you want the write to be of manageable size.”
You can disable delayed writes, but think about what that would do in terms of total seeks. I could write each sector immediately to disk as soon as I get it (and cause lots of seeks), or I could try to bundle as many sectors as I can together, and then write them all at once.
“b) If your desktop freezes, your kernel panics, or your power fails, you do NOT want saved data to be stuck in RAM.”
Fair point.
“However, I’ll readily admit I don’t actually have a clue what would constitute a sane, broadly applicable solution; or if such a thing could even exist. If anyone has ideas on that, I’m all ears.”
There are some answers to the problem. Servers have battery-backed disk caches, but those aren’t found on consumer equipment. It seems like it should be feasible on a laptop, since it already has the battery. Also, solid state disks don’t have nearly as much seek latency as the spinny ones. As for software FS solutions, we’d have to research ways to consolidate reads. If we are reading 2000 files at bootup, that’s almost 30s wasted on seeks in my little experiment. The FS should really be able to keep them ordered sequentially; that way the OS could read all 100MB it needs within 2s.
So true.
I don’t know if it counts as a solution, but ionice’ing every big “offender” while giving priority to the UI could help. Unfortunately, nothing like this is being done: whenever e.g. the apt cache kicks in in the background, everything just becomes unresponsive (it took me a while to find out why I was experiencing sporadic lags). Or we’d need a saner scheduler…
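For the curious, this is the kind of thing I mean; whether the idle class actually helps depends on the I/O scheduler in use (CFQ honours it, as far as I know):
# start a heavy background job at idle I/O priority and low CPU priority (apt-get is just an example victim)
sudo ionice -c 3 nice -n 19 apt-get update
# or demote something that is already running, by PID (1234 is a placeholder)
sudo ionice -c 3 -p 1234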
They should have changed their slogan from:
“Ubuntu – Linux for human beings!”
to:
“Ubuntu – Linux for the advertisers!”
Also, they are actually making their users’ lives so much harder and less secure/safe/private. And last but not least, they break Free/Libre software rules intentionally. They seem to be transforming into a pure OSS model. They don’t need human beings anymore, at least not in the process of deciding where this will all eventually go. They need ad clickers and app buyers.
It’s all actually a pretty sad sight.
Maybe it’s you who don’t understand what users want.
Really. What rules are those, exactly?
I’m sure you meant to say something else, because this would actually be a good thing.
… have you tried looking out of, and not into, your rectum?
Edited 2013-04-26 06:14 UTC
What the downvoters don’t understand is that for most people, a computer is a means, not an end. They buy a computer to run programs on, like they buy a DVD player to watch movies on. They don’t just buy Windows or OS X itself (the UI, the kernel etc), *they also buy the means to run the awesome apps ISVs have written for Windows or OS X* (“they buy the ecosystem” in marketdroid-speak).
So, there you have Linux Desktop, killing a significant part of its “ecosystem” by making it hard to impossible for proprietary ISVs to ship software on Ubuntu.
But hey, the people in the key positions in Linux Desktop who made the decision to have an unstable API and ABI weren’t planning to use that proprietary software anyway, so it “works for them”.
Browser: Mozilla/5.0 (Linux; U; Android 2.3.4; el-gr; LG-P990 Build/GRJ23) AppleWebKit/533.1 (KHTML, like Gecko) Version/4.0 Mobile Safari/533.1 MMS/LG-Android-MMS-V1.0/1.2
kurkosdr,
“So, there you have Linux Desktop, killing a significant part of its ‘ecosystem’ by making it hard to impossible for proprietary ISVs to ship software on Ubuntu.”
“Hard to impossible”? Seriously?? And what evidence do you have of that? This isn’t the iPad, it’s Ubuntu. You can install outside software by downloading the appropriate tarball from a webpage, extracting it, and then running it. Maybe the installation process could be improved for software outside the repo, but face it, it’s the same as Windows. Heck, if you have Wine installed you can even run a ton of compatible *Windows* software too.
appdb.winehq.org/objectManager.php?sClass=application
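To illustrate the tarball route mentioned above, it’s nothing more exotic than this (names and URL are placeholders, obviously):
wget http://example.com/someapp-1.0-linux-x86_64.tar.gz
tar xzf someapp-1.0-linux-x86_64.tar.gz
cd someapp-1.0 && ./someapp
The annoying part is integration (menu entries, updates), not the installing itself.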
“But hey, the people in the key positions in Linux Desktop who made the decision to have an unstable API and ABI weren’t planning to use that proprietary software anyway, so it ‘works for them’.”
None of this affects userspace apps, which is most often what the “walled garden” refers to.
However even if you wanted to apply it to the kernel, the linux kernel is still more open than windows. Not only is windows source closed, but MS requires developers to buy code certificates to install their own drivers. That policy basically killed the open source scene in windows kernel development since the certificates cannot be shared and all individual would-be contributors need to buy their own certificates to build kernel modules for their own machines.
You are completely exaggerating the difficulty of running proprietary software on linux. If anything, proprietary software doesn’t do well on linux because linux users are wary of closed software, not because there’s anything preventing users from installing it.
It’s ironic, because your need to disclose the inconsequential details of your browser on your posts seems to indicate you view your computer as an end.
In any case, there is plenty of commercial software being developed and deployed on Linux. Also, you keep using that term “ISV”, which does not mean what you want it to mean. Linux has plenty of traction among ISVs, especially in the middle and back ends of computing infrastructures.
Yes, Photoshop may not support Linux anytime soon (if ever). But then again, none of the commercial EDA packages we use on linux are going to be ported over to OSX either. And neither are some of the commercial databases, accounting/personnel/financial middleware, industrial control suites, etc, which run on Linux and not on OSX (or even Windows). Applying your own logic, that must mean OSX is hostile towards ISVs then?
Edited 2013-04-26 17:45 UTC
Excuse me? The (retarded) mobile version of osnews inserts this junk at the bottom by default when I post from my phone.
Browser: Mozilla/5.0 (Linux; U; Android 2.3.4; el-gr; LG-P990 Build/GRJ23) AppleWebKit/533.1 (KHTML, like Gecko) Version/4.0 Mobile Safari/533.1 MMS/LG-Android-MMS-V1.0/1.2
OS X? Windows when upgrading to SPs? Of course there are always chances something somewhere might break, but Linux Desktop is just ridiculous. Sometimes it’s *designed* to break, like when PulseAudio got released, or every time X.org breaks Intel and Nvidia proprietary GPU drivers, and it’s practically *designed* to break the upgrade and drop you to a blank screen or CLI.
Anyway, Ubuntu’s problem is that even on a clean installation, sometimes old app binaries just don’t work, because the API is not stable.
And don’t get me started on how most distros get upgraded at a breakneck pace. I can do a format and clean install every time a new Windows version rolls out, but every 6 months?
Browser: Mozilla/5.0 (Linux; U; Android 2.3.4; el-gr; LG-P990 Build/GRJ23) AppleWebKit/533.1 (KHTML, like Gecko) Version/4.0 Mobile Safari/533.1 MMS/LG-Android-MMS-V1.0/1.2
No.
No.
Ah. Let’s look at the products you mentioned.
Steam has only been in Ubuntu for a few months so it hasn’t even broken “in the last couple of releases”.
Steam is also incredibly buggy even on Windows so it’s not surprising that Valve would screw something up at some point in the Linux version. Yes, screwing up a dependency is Valve’s fault, not Ubuntu’s.
As for VMWare, well, the guy who works there already said it’s not Ubuntu’s fault if/when it breaks. That’s not even mentioning how suspect your described problem is.
Finally we have Skype. Skype is, and always has been, badly engineered to begin with, so I’m not really surprised it breaks now and then. This is the first time it has been broken in a new release though, and I’m sure a fix will be out for that from Skype or Ubuntu soon, and there’s a workaround, so it’s not like the sky is falling.
Sometimes old software doesn’t work in a new/clean Windows or OSX either. Also, there’s no Ubuntu API so I don’t know what API you’re talking about that’s supposedly not stable.
So, uh, don’t use something that’s obviously not suitable for you? Just because it doesn’t suit you doesn’t mean there’s something wrong with it.
Steam isn’t that buggy on Windows; the problem was that Steam was expecting certain dependencies that happened to be present in an older version of Ubuntu but not the newer one.
Ubuntu broke Steam.
It probably broke because it was compiled against a certain library that is now either in a different place or a different version. The problem is that, unlike on Windows (or OSX, to a lesser degree), the libraries in the system change.
Getting Spotify working on Fedora required me to symlink libraries, and it still wasn’t stable.
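To give an idea of the kind of workaround involved (the library names below are just an example of the pattern: an old build linking against a library version the distro no longer ships):
# pretend the old soname still exists by pointing it at the current library
sudo ln -s /usr/lib64/libssl.so.1.0.0 /usr/lib64/libssl.so.0.9.8
sudo ln -s /usr/lib64/libcrypto.so.1.0.0 /usr/lib64/libcrypto.so.0.9.8
It usually works until the next library bump, which is exactly the fragility being complained about.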
It doesn’t happen nearly as often, which is the core of the argument. Yes, there are workarounds, but you just don’t have to do that on Windows.
TheDailyWTF tells a different story.
No. Things changed and Valve wasn’t on the ball. Ubuntu is their only officially supported Linux distro so we should really expect them to handle stuff like this.
I can’t remember when a program I use broke between Ubuntu versions and I’ve been using Ubuntu a long time.
Of course, I can’t remember the last time it happened in Windows either.
Oh, come on… most of those screenshots were actually content update errors from the Content Management System on their web store. So their web store had errors, not the application itself.
There were 2 actual application errors.
Well, we’ll have to differ in opinion on that one. I don’t have to worry about Steam breaking with Windows Updates.
If it is open source I doubt it will break, but for anything outside the supported repos, you are on your own.
Edited 2013-04-28 09:24 UTC
Perhaps, but let’s say, for argument’s sake, that a Service Pack changed some functionality and broke Steam. I’m sure you would expect Valve to handle that and not Microsoft.
Depends whether it was documented functionality or not.
Only if you choose a distro with such an upgrade cycle.
Slackware is yearly, CentOS and Scientific Linux are far longer.
Also, I have upgraded Fedora quite a few times between versions and most of it has been okay. While it isn’t Windows levels of support… it isn’t quite as terrible as you are making out.
Hi,
I wrote a little step-by-step guide with screenshots to illustrate the upgrade process from Ubuntu 12.10 Quantal Quetzal to 13.04 Raring Ringtail very precisely.
I hope this screenshot-guide is of use for those looking to upgrade to the latest version of Ubuntu.
I am looking forward to your feedback!
http://www.maknesium.de/upgrade-to-ubuntu-13-04-raring-ringtail-in-…
You mean, it’s not as straightforward as “click next, click next, accept EULA, click next” like in Windows?
Kochise