“To be clear about this article’s intent, it’s not to bash Microsoft, or Windows. Because to be fair, despite using Linux 95% of the time while I’m on the PC, I can find more faults with it than Windows. So, this article’s goal is to highlight some of the major pluses of Linux, and also showcase where Windows could improve in the future, should Microsoft take heed of the suggestions.”
Disclaimer: Unlike the article author, I am not a full-time Linux user; my complaints therefore come from the opposite perspective, one that the better-informed Linux user (that is, you, dear readers) may see as an attempt at trolling. I can't fool you, but I tell it how I see it, and I hope it actually helps the discussion of the topic at hand: what Linux does that Windows could do better. The extra info you provide to make Linux better for me is always very useful.
1. Partitioning
Rob Williams (the article author) gives GParted as an example. Almost anything is better than diskmgmt.msc. However, the author should have mentioned that the command-line tool DISKPART provides much more power, though still without the ability to handle non-Microsoft filesystems.
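For anyone curious, DISKPART is scriptable too. A minimal sketch of a script that wipes and formats a second disk (the disk number and drive letter here are invented; check "list disk" on your own system first):

  rem wipe.txt -- run with: diskpart /s wipe.txt
  select disk 1
  clean
  create partition primary
  format fs=ntfs quick
  assign letter=E

The "clean" step destroys the partition table on the selected disk, so double-check the disk number before running it.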
2. Activation
Agreed; just a few weeks ago I had to deal with a false positive on a Vista computer. The customers had no idea how to deal with it. The activation warning was unhelpful, flatly declaring the machine illegitimate, and no link was provided to activate it; I had to use search to find Vista's OOBE and phone Microsoft.
When false positives hit, they hurt only legitimate customers. How Microsoft can continue to do this to users is beyond me. The news may be full of all the scary new ways Apple are restricting the iPhone and iPad, but OS X has no anti-piracy measures, no activation, no licence keys; and Linux has no concept of a pirate copy in the first place.
3. Customisation
That double-edged sword. It all comes down to how customisation is presented and the effort that has gone into making it difficult for the user to shoot themselves in the foot. KDE versions prior to 4 are known for being particularly extreme when it comes to customisation options, with the user dumped into a badly chosen set of defaults and expected to sort it out themselves. Things have improved, and KDE 4 shows a side of customisation that is less MySpace and more interior decorating.
4. Automatic User Login
Is control userpasswords2 actually documented? Where are you supposed to find this out? In Linux, the option is easier to find than arcane run-box magic.
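For reference, and hedged as being from memory: on Windows the same thing can also be set through the well-known Winlogon registry values (the user name and password below are placeholders), while on a GDM-based Linux desktop it's backed by a readable config file:

  rem Windows: the classic AutoAdminLogon values
  reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v AutoAdminLogon /t REG_SZ /d 1
  reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v DefaultUserName /t REG_SZ /d alice
  reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v DefaultPassword /t REG_SZ /d secret

  # GDM equivalent, in /etc/gdm/custom.conf (path varies by distro):
  [daemon]
  AutomaticLoginEnable=true
  AutomaticLogin=alice

Neither registry value is exactly discoverable, which rather proves the point.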
But I have this memory niggling at the back of my mind; oh yes! For a long time, GNOME's network manager would ask you for your keyring password every time if you had autologin enabled, and this bug persisted for two or more releases of Ubuntu before getting fixed. I'm still angry about that. How could a bug that made autologin unusable for two years be such a low priority?!
5. Troubleshooting
Definitely a tough one to quantify, especially given that I fix PCs for a living and generally know how to get out of a bind in Windows. Windows always fails, though, when it comes to data. In Linux, you can back up the home directory and know that you should be able to transplant it to another system. In Windows, your data is scattered everywhere and, since Vista, is basically untransplantable (USMT is useless) unless you were wise enough to have a backup. You do make backups, right?
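That transplantable home directory really is a one-liner; a sketch, with the user name and backup path invented:

  rsync -a --delete /home/alice/ /mnt/backup/alice/

Run it the other way round on a fresh install and your settings and data come back with it.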
6. No-Nonsense OS Updates
Except for that time when I upgraded Ubuntu from 8.10 to 9.04 and the upgrade process called for key-presses to continue, which was impossible to know about unless you expanded the details view to show the command buffer and realised it had paused waiting for a key. Not forgetting that the upgrade hosed the system afterwards anyway.
Anyway, that's just one bad experience. Windows is certainly no better (ever had a crash halfway through a service-pack install?). Linux has the benefit that packages of all kinds are managed universally across the install, and that updates encompass all software on the machine. Windows software has scattered update mechanisms, and even though OS X has no Apple-provided third-party update mechanism, developers have more or less all agreed upon using Sparkle, so it's still better than what Windows users have to endure (would you like the Yahoo toolbar with that Java update?).
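On a Debian-family system, for instance, the OS and every installed application are brought up to date in one step:

  sudo apt-get update && sudo apt-get upgrade

There is no single Windows equivalent that also covers third-party software.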
7. Easy Installation of Common Applications
The most refreshing thing about Linux software is that you know installing it is very unlikely to invite along a whole ton of stuff you didn't ask for that appears every time you start your computer. Being a Windows user is a chore, keeping the startup clean; there is just something about the Windows environment, unlike both Mac OS X and Linux, whereby vendors feel the irresistible urge to abuse your machine. It's like screwing over users is the culture on Windows. I have to deal with this every day, with customers innocently trying to get the basics onto their computer (because Microsoft don't provide the basics with the OS anymore) and getting a whole ton of crap they didn't ask for.
If anything, getting only what you asked for is the most refreshing part of Linux for an end-user.
8. Interoperability
Linux has to work extra hard here to operate alongside Microsoft, who have erected roadblocks at every stage. Dual booting is a pain because NTLDR/BCD choose to be ignorant of anything but Windows, and neither has Microsoft been exactly forthcoming with SMB interoperability. The fact that interoperability works at all in Linux is testament to the hard graft done on their part.
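To illustrate who does the accommodating: a GRUB (legacy) stanza that chainloads Windows from the first partition looks like this (GRUB 2 syntax differs, and the partition numbers are examples):

  title Windows
  rootnoverify (hd0,0)
  chainloader +1

Windows' own boot manager offers nothing comparable for booting a Linux partition out of the box.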
9. Command Line
The history of command line functionality in Windows is long, boring and uneventful. The only interesting thing to happen in twenty years ended up sidelined because Microsoft didn’t ship Monad (PowerShell) with the OS. I suppose it all helps sell copies of Visual Studio, or something.
10. Performance & Stability
Whereby two whole podcast episodes and a few hundred comments were devoted to X alone. Not being a full-time Linux user, I find this one hard to quantify. Every experience I've had with Linux has been unstable. The contributing variables are many, though (hardware support for one); so given Microsoft's massively advantageous position in this regard, Linux does surprisingly well. In fact, all major OSes are pretty stable now, and stability debates feel a lot like taking a trip back to 2001.
Windows has had to deal with unstable third parties forever, and the OS wraps functionality around providing stability when things go wrong. Linux, on the other hand, seems to treat stability as Somebody Else's Problem. If X crashes, blame the driver. Blame anybody. It doesn't matter whose fault it is; you should be mitigating the damaging effects to the user. Crashes will happen. Throwing blame around when the user just lost their data is no good, because the user will blame you. In this regard, I have to say that Windows 7 is simply the leader in being both stable and able to deal with crap when it happens.
Final Thoughts
Rob has chosen the ten things that matter to him:
So there they are… my ten picks of things that Linux does better than Windows. In truth, I could have made this “top list” much larger, but these ten are those that have been stuck in my mind for quite some time. There are of course many other benefits of Linux over Windows, such as security, TCO and so forth, but those are rather boring to talk about. Since I deal with both Linux and Windows everyday, I’ve developed some rather fierce opinions about what Linux can do that Windows should do.
For myself, the list would be different. The complaints I've added to his ten points are examples where I believe Linux should be doing better for itself, but beyond these user interactions there are numerous benefits to being part of the Linux ecosystem. Freedom is simply not a tick on a feature chart. It encompasses a culture that grants the user a greater level of respect as the owner and operator of the computer, rather than as a consumer to be force-fed the interests of companies. It is that which I value beyond the nit-picking (the nit-picks are just the polish).
Just nitpicking
9 – PowerShell does indeed ship with the latest Windows versions. (From Wikipedia: Windows PowerShell 2.0 was released with Windows 7 and Windows Server 2008 R2.)
7 – Linux is unfortunately going backwards in this department. I seem to be unable to get rid of "experimental" stuff distributed with the latest versions of Fedora (after v10). I had crippled sound functionality due to PulseAudio, and accelerated ATI graphics still do not work after three iterations. I'm no longer able to choose my own sound system, or easily compile my own (older) kernel.
Oh, sorry! It had slipped past my radar that Monad had shipped with Windows 7; thanks very much for the tip-off!
Your second point is mostly BS. If you care about not having PulseAudio, or about compiling a kernel, then there are distributions that cater to your needs. I know of no distribution that prevents you from installing your own kernel, or makes it MORE difficult than before (though some are better at this than others).
Unfortunately, ATI Catalyst under Linux is completely backward (i.e. crap); for the free driver you'll just have to wait, since graphics are hard. Or you could go to NVIDIA with a good binary blob, or nouveau (you'll have to wait there too), or Intel, which is free but has rather crappy hardware. Linux isn't going backward in this respect, just moving a lot more slowly than the other OSes… blame NVIDIA, ATI, or Intel.
You're missing the point. Fedora is the distribution I'm accustomed to. Switching to Ubuntu did not work for me in the past, and I'm not adventurous enough to try again.
What you mean is that your distribution acted irresponsibly. I assure you that in Debian land I still haven't installed PA, nor been required to install it to fulfil a dependency. Will it happen? Probably, but not until the next major release (at least), and likely not even then if it's still not stable enough.
It's great that PowerShell is shipping by default now, but why didn't they ship it with Vista? The plan was always to send it out with Longhorn. Then, for no good reason, they stripped it out, even though they ended up releasing it before Vista.
The first version was released as an optional update for XP/Vista/Server. Maybe it wasn't completed on time; who knows.
If they should be bashed for anything related to PowerShell, it should be for not including SSH. Sure, you can install it yourself, but… that is just plain lame.
Here’s the history.
PowerShell was released on November 14, 2006.
Vista was released to manufacturing on November 8, 2006,
while its public retail release was January 30, 2007.
Still ridiculous, IMHO, to not include it in Vista.
Here’s a period article.
http://www.microsoft-watch.com/content/operating_systems/monad_scri…
They thought PowerShell would take longer, so they took it out, but it ended up only requiring six more days. MS was making lots of bad decisions back then, so I guess it's sort of like beating a dead horse to complain about it.
But, yeah, the lack of SSH is also a silly drawback, though SSH isn't really a Microsoft-ish technology. Is there a more Microsoft way to do it? Like Remote Desktop for servers?
For a lot of tasks you can use MMC to connect, but this does nothing if you need to do even the most basic data management (move, copy, delete, etc.). 90% of the tasks I connect to Linux servers via SSH for are tasks I'd have to use RDP for on Windows.
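For a flavour of the kind of one-off task I mean (the host and paths are invented):

  ssh admin@fileserver 'tar czf /tmp/logs.tar.gz /var/log/myapp'
  scp admin@fileserver:/tmp/logs.tar.gz .

Doing the equivalent over RDP means launching a full desktop session for a ten-second job.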
This is actually somewhat of a relevant issue for me with the new breed of mobile OSes and their lack of multitasking. I run more Linux servers than Windows ones. I can use my mobile phone to VPN to the office (helpful, since I am often out of the office). With SSH I can easily do most tasks while switching to other applications (e-mail/web). Without the ability to multitask I cannot keep a persistent connection to the servers. Now, when I SSH to the server to copy large amounts of files to a web-enabled directory that I can download onto my phone… multitasking would be a rather nice thing.
Yeah, they want you to use RDP, but that isn't very useful when you want to SSH into a *nix server from Windows. A lot of people have complained about it, so maybe they will include it at some point.
They do include it; you just have to install the Subsystem for UNIX-based Applications if you have a version older than Vista or 7, otherwise it is included in the Enterprise and Ultimate editions.
http://en.wikipedia.org/wiki/Microsoft_Windows_Services_for_UNIX
Being included means being able to open PowerShell and type ssh. I already noted that it can be installed separately.
Anyway, from your own link:
SFU does not contain the following (but binaries are available for a separate installation[1]):
bash, OpenSSH
The guy who wrote the article says: "I still do believe it to be the most stable OS out there" … I can tell you from this sentence alone that this guy doesn't know a lot about computers!!!
He has probably never used something like OpenVMS or NSK (NonStop Kernel), because if he had, he would not declare that Linux is the most stable OS!
Knowing that, you take his writings with some salt… and conclude that he's just some young kid from elementary school or something!
Just this: in Unix/Linux you just don't talk about a "command line", dude, it's called a "shell"!!!
Come back to writing opinionated reviews in, let's say, 10 to 15 years, when you will have had the opportunity to gain experience with OSes by using them daily for long periods of time!
Currently you say so many dumb things that you are not credible at all!
Because of big differences between Windows and Linux networking, Linux not only manages the network better, it can use several connections simultaneously. You can assign the cable connection to be used only to download torrents and use wi-fi for browsing; in Windows you can load-balance two connections, but you cannot use both at full speed, because Windows can use only one default gateway at a time.
Can you please provide me with some pointers on how to do that?
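The usual answer is iproute2 policy routing. A minimal sketch, assuming eth0 is the wired link; the gateway address and port range are made up:

  # second routing table with the wired link as its default route
  ip route add default via 192.168.1.1 dev eth0 table 100
  # packets marked '1' consult that table
  ip rule add fwmark 1 table 100
  # mark outgoing BitTorrent traffic
  iptables -t mangle -A OUTPUT -p tcp --dport 6881:6999 -j MARK --set-mark 1

Everything unmarked keeps using the wi-fi default gateway.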
You’re right.
I discovered this quite by accident, because I put a wireless card in my desktop several Ubuntu releases ago and couldn’t get it to work, so ended up with a wired solution.
I noticed some time after installing 9.10 that the wireless card was not only working, but trading off transfers seamlessly with the wired connection. I could never get XP to do that; haven’t tried with 7, but it couldn’t be easier than with Ubuntu.
Pretty darned amazing.
That has to be one of my favorites in the networking area; bonding two networks into the same MAC/IP is a natural ability on Linux-based systems. With Windows, I either have to disable one of my NICs or end up with two separate IPs for the same machine; booo!
I’d love to get true NIC bonding in Windows what with two NICs on the motherboard being pretty standard these days.
You can bond two NICs in Windows and have a single MAC/IP address without having an IP assigned to both adapters. Just select both adapters and choose Bridge; it will create a virtual interface, and you can then assign IP addressing or use DHCP.
I'll look at that tonight on my XP box, though I didn't think bridging was how it was done; always good to learn something new, though. If I can bond the two NICs under a virtual device and then assign that a MAC/IP, I'm all set.
Forget about all the nifty things you can do with NICs (bonding, vlan, fail-over, bridging, etc). The greatest strength of Linux over Windows is the network transparency.
For example, being able to run programs on machine A with the graphical output showing on machine B (thin-clients). Or, being able to boot off a network server, without any harddrive in the local machine (diskless mode). Or, being able to combine the two (diskless thin-clients). Or, being able to access remote resources in GUI applications via ssh://, fish://, sftp://, http:// etc, as if they were local disk resources.
Windows NT may have started as a client/server OS with integrated networking. But it’s non-Windows OSes that truly excel at network-related tasks and setups.
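The thin-client case really is a one-liner. A sketch, with the host and program invented:

  ssh -X alice@machineA gimp

runs GIMP on machineA with its window displayed on the local machine's screen.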
What are these things that Windows can do better than Linux?
Well for starters, getting billions of people to use it as opposed to a few million nerds.
How does that make Windows a better platform than Linux?
By the same logic, is McDonald's better than other hamburgers just because more people go to McDonald's?
All the things the article claims Linux does better could equally be applied in an article called "things Windows does better than Linux." It all depends on your perspective.
I mean, come on, how many "normal"* people know about point one (partitioning), or need to know?
Point seven is a joke. He has obviously never actually used Windows if he thinks Linux does that better.
People don't use Windows just because it comes with their computer. They use it because they couldn't give a 5h!t about partitioning, command lines, etc., or OSes.
* By normal I don't mean the six-year-old kid brothers and grandmothers that, according to sites like this, all have no trouble using Linux. I mean the hundreds of millions who buy a computer and never have any problems with it or the version of Windows it comes with.
Well, if we’re talking “normal” non-tech consumers then they buy Windows because it’s what they see on all the store shelves and have no idea there is any other option outside of Apple. Consider how many people think Windows == computer and couldn’t tell the difference between hardware/OS/application.
In terms of partitioning: users who get a preinstalled OS other than Windows also don't worry about partitioning. Users who install Windows themselves face a partitioning step just as they would with any other OS. Accepting the default partition setup is no more complicated on any of the OSes aimed at average users, either. Mandriva, Debian, Windows: the default partitioning is something even a "normal user" could manage.
Re: point 7. You seriously think hunting the web and downloading separate exe and zip files of questionable origin, each taking minutes to install, is easier than just selecting packages in a searchable package manager and having them automatically downloaded and installed with all necessary dependencies? You're doing something wrong.
Uninstalling Windows apps is even worse. Just waiting for the list of installed programs to be populated takes more time than the whole process does on Linux. And is that list even searchable?
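For comparison, the whole search/install/remove cycle on a Debian-style system is roughly this (the package is just an example):

  apt-cache search hex editor
  sudo apt-get install ghex
  sudo apt-get remove ghex

Search, install, and clean removal, all from the same signed repository.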
What hunt are you talking about? I never had to hunt anything down. There are no separate zip and exe files; they are always exe. They are of no more questionable origin than your Linux apps. What kind of apps do you install? Ones from porn/warez sites?
Uninstalling is a piece of p!55. It must be your system if it takes a long time for your list of apps to appear, and I have never had the need of a search function.
How can you *not* complain about the remove-software screen? By far one of the worst areas of Windows.
Have you ever had a need for a program that did not come with your operating system? Ever needed a good image-editing program, zip program, GUI FTP program, BitTorrent program, encryption program, video editor, hex editor, chat program, or any other program that didn't come with Windows and that you can't find at Best Buy? Then you've had to search for it on the web on sites like Tucows, Download.com, etc. That's just not as trustworthy, fluid, and searchable as a Linux repo, where a level of QA has been done so that all the programs play nicely with each other and won't destroy your system with hidden malware.
From your comments, it really does seem as if you aren't that familiar with the pains of Windows. You don't seem to have much experience installing/removing software from Windows. I'm guessing you aren't much of a power user. Which isn't a crime, but you may want to sit out a discussion on the benefits/drawbacks of an area you don't have much experience in.
Packages in the Debian repository (and Ubuntu, and no doubt Red Hat and Suse as well) are cryptographically signed. That makes them less questionable than anything you download manually off the web (especially virus prone exe files) or get from removable media.
And the fact that you never needed a search function is irrelevant: the world doesn’t revolve around you.
You've never done any development on Windows, I can tell.
Whenever I want to compile someone's project in Visual Studio, it is a big hunt for every component that is needed, and there is no method to make it easy or to find updates to components.
I admit that commercial Windows applications make themselves easy to install. But they do it by being huge downloads.
How much of the 70 MB ATI driver package is really needed? Most of it is probably stuff your Windows machine already has, but in order to make it easy you have to download the whole pile every time.
While the big downloads are a problem, and yes, not all of it may be needed, the reason it is packaged that way is that it is a binary distribution: as a publisher of software you don't want to rely on the PC it is installed on having all the binaries needed. Package management tries to solve this, but in my experience doesn't always work, so to ensure successful delivery of working software, that is what we have resorted to.
And um… we have broadband… a 70 MB download is nothing.
“Whenever I want to compile someone’s project in Visual Studio it is a big hunt for every component that is needed and there is no method to make it easy or find updates to components”
Those structuring the VS projects are doing it incorrectly.
You might have it. I don’t.
At home I am seriously considering $350/mo for a T1. That’s over $4,000 a year. I’m already paying $100/mo for IDSL which is 1/10th of a T1.
A 70 MB ATI driver download is a big deal for me.
But if most people go to McDonalds even when your own hamburgers are free then you have problems that can’t be blamed purely on taste.
It's not my (i.e. Linux's) problem, it's theirs.
1) If people can’t be bothered to look around for better alternatives and just take what’s in front of them or what everybody else uses, they deserve what they get.
2) Linux is free and the people who make it largely don’t know and don’t care about who is using it. So I don’t see how they can have a problem about people who don’t use it.
But that is not what happened.
Microsoft managed to make deals such that stores won’t sell anything but Windows, and OEMs won’t install anything but Windows (unless you go to a Mac specialist store, which is even more expensive).
The only way I can buy a Linux machine is to either: (1) go to some very obscure on-line supplier, and hunt through many levels of options on their web page, and if I’m lucky, I can find a selection to opt for a Linux OS, or (2) I just buy parts, including a blank hard disk, and assemble my own systems.
I personally follow the second option. In this way, I can get a nice Linux machine for about a quarter of the (total hardware+software) cost of a store-bought Windows machine of equivalent performance and functionality.
Unfortunately, that second option is way beyond most people, and so most people are simply unable to choose Linux.
However, getting back on topic: One thing that Linux does infinitely better than Windows is that it is way, way easier to “roll your own” system on Linux. You can start with a bunch of computer parts in separate packaging, plus a Linux LiveCD, and assemble, install and configure a fully-functional Linux machine in a couple of hours. The hardest bit is often getting some of the parts out of the plastic packaging.
I have just finished using UNetbootin on my desktop machine to prepare a bootable USB stick for Kubuntu 10.04 beta 2. I had downloaded the .ISO file earlier today. Excuse me while I go now to put this new OS on my netbook (which has no CD drive). Back in 10 minutes or so…
(You just can’t do that with Windows).
re the bit in bold:
Last time I checked, Dell and ASUS weren’t obscure manufacturers.
However you’re right that little to no effort is put on Linux-preinstalls, which is a shame.
Dell Australia doesn’t sell Linux machines.
If I was going to buy a Linux machine online, I’d go here:
http://www.vgcomputing.com.au/
For a desktop:
http://www.vgcomputing.com.au/linuxinfo.html
or for a laptop:
http://www.vgcomputing.com.au/nsintro.html
However, I ordered my netbook from here:
http://www.kogan.com.au/shop/kogan-agora-netbook-pro/
It is important to me that any website I buy stuff from ends in .au
Fair point.
If you're after a desktop, then you're better off building one anyway (regardless of OS), as it's cheaper and you get exactly the spec you want rather than whatever approximation is on offer.
Laptops are a different story obviously, but some OEMs do refund you the cost of Windows if you return the CD and provide photos proving you’ve declined the EULA. Emphasis on the “some” though. Plus, from what I’ve heard, the whole process can be a PITA.
But going back to your point: you are absolutely right that Linux should be an option that's more widely available (or, at the very least, blank systems).
Yes, but then the problem becomes "which Linux?" There are hundreds of distros, with 4, 5, or 6 BIG ones that tend to be more user-friendly (more support, for example).
The manufacturers could offer an option when you buy, like: which distro of Linux do you want preinstalled? That would mean they would have to keep a bank of disks to image from, and also make sure they maintain the versions of Linux on those disks… then there is the issue of what apps to preinstall, if any… etc.
With Windows it's "Windows 7" in Home, Business, or Ultimate, and the versions rarely change, maybe once every few years. Most Linux distros are on some sort of six-month to one-year upgrade cycle.
It just isn't a feasible option for most PC manufacturers/retailers.
No reason why a manufacturer would carry anything more than one or two distros, and no need for them to keep updates; that's done at the distro's repo site. As for preinstalled apps, that's handled by the distro's install discs.
But how would that satiate the people who actually know and use linux?
I suppose it would at least offer the possibility to introduce linux to people who are unfamiliar with it.
They only need to support one distro. So long as one distro is supported, the hardware will support every distro, so users can install whatever they want (and as Linux is free, there's no financial restraint like there is with Windows).
So they'd only need to support something user-friendly enough that the average user wouldn't need to reinstall, but the experienced user could, with confidence in the compatibility.
A Linux pre-install, maybe not, but there's no particular reason not to offer a "no OS" option. Or, well, there shouldn't be.
It's not "Windows" but Microsoft that has got those billions of Windows users, and few of them were obtained through product quality or functionality.
Popularity is not an indication of product quality either; it only indicates retail market success, which involves many variables beyond the product's attributes.
– Work with projectors.
- Audio. Linux audio is currently more broken than ever.
– Gaming
RandR works just fine.
Nope. Works beautifully out of the box (KDE). Can’t say the same for Windows (you will often have to find a 3rd party driver for your audio).
Fair enough.
I’ll see your “gaming” and raise you “formats supported”, “interoperability” and “cross-platform support”.
As a DJ and producer, Linux audio is rubbish compared to Windows.
Sure, for basic desktop uses, Linux audio is mostly ok. But move into the professional spaces and it’s a complete mess.
For example: it has taken me longer to get one external sound-processing unit recognised as the primary sound card in OSS than to install an entire XP system from scratch, including studio apps such as Sound Forge, Ableton, FL Studio, and more than a dozen VST(i)s.
And that was /JUST/ setting up /ONE/ device in OSS. It still doesn’t work in ALSA et al. I still haven’t got Jack working either.
So as much as I love Linux and use it as my primary desktop, I’m not going to waste my entire time setting up a Linux studio workstation when I should be writing music on it. For me, Linux audio isn’t even at Windows 2000’s level – and that’s just unacceptable.
The worst thing is, it's not even as if Linux couldn't work as a functional professional audio workstation. If FSF/GNU/whoever just sat down, agreed on a concise standard, and then spent some time giving it a little love, they could bring Linux audio into the 21st century in no time (comparatively speaking). But as always, there are too many cooks in the kitchen and nobody serving the food (excuse the analogy).
It's also an area where hardware manufacturers cripple the end user. Your gear probably includes a vendor-provided Windows driver. Even Creative is guilty of this, though in their defense they at least released the X-Fi driver source to the ALSA project when they decided not to continue its development. Driver source or, at minimum, driver interface specs would go a long way toward fixing audio, along with other hardware issues.
Heck, Creative's guilt isn't even limited to Linux-based platforms; the Windows drivers are horrible to manage. Sure, the base driver is just an EXE, but go to the update applet and you'll find a confusing list of extra crap regardless of whether it's installed or not. And printers: why can't any printer company deliver a simple driver without the added crap bundled into it? I really don't need a quarter of my screen taken up by a stupid graph GUI of ink levels while my document prints. And it's not just consumer-grade printers; our beasts in the office have a nice useless status popup which provides no benefit to the business users, who have nothing to do with maintaining the printers.
There’s all kinds of guilt to spread around for hardware related grief in both OS categories.
At least ALSA's beta drivers with X-Fi support have consistently been easier than any audio-related Windows driver install I've done.
With the emergence of CUPS and the use of PPD files in Unix, printing has become so simple, easy, and useful … especially compared to Windows.
After using CUPS for the past few years, at work and at home, trying to get multiple Windows systems to print to a single shared printer … is a hassle. It’s actually easier to install a Linux distro, install CUPS, install Samba, and use that as a print server for Windows stations, than to install printer drivers on each of the stations.
Yup, if your printer is recognized then you're golden. CUPS is great at what it does.
On the other side, you can map shared printers through login scripting or AD easily enough if you've got that kind of setup (not likely for a home user). Also, with business-oriented network printers, they'll actually host their own drivers and feed them to the Windows machine the first time it tries to send a job. I just hate the crapware that Windows printer drivers always seem to include now.
Lexmark takes the prize here.
I was at my parents' house and all of a sudden I heard a man shout NOW PRINTING, followed 5 minutes later by PRINTING COMPLETE.
The freaking Lexmark crapware had installed a voice notification.
What are you talking about? Sharing printers is easy in Windows. Sharing between XP and 7 can require a little extra tweaking but nothing that would justify installing Linux and Samba.
The secret to quick printer installation in Windows is to not mess with those automagic cds. Find the driver on the company website and install the printer manually. For Vista and 7 you can just plug the printer in with a USB cable and Windows will install the driver.
If you want to build a dedicated print server then Samba is useful but as a general rule you really don’t want to mess with cross-system printing unless it is necessary.
Not for an office of 30+ Windows XP stations, with 8 networked printers that need to be shared. Going around to all those machines, installing drivers, configuring the printers, etc. is not fun. Using Samba+CUPS, with the processing done on the server, turned it from a multi-day job for multiple people into a 2-day job for 1 person.
Perhaps if we spent the tens of thousands of dollars on a dedicated Windows Server box to manage this, it would be simpler … but that money is better spent on having productive people than boxes on a shelf.
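For anyone wanting to replicate this, the Samba side is a short smb.conf fragment; a minimal sketch, where the spool path and guest policy are site choices:

  [global]
     printing = cups
     printcap name = cups

  [printers]
     comment = All Printers
     path = /var/spool/samba
     printable = yes
     guest ok = yes
     browseable = no

CUPS owns the drivers and the queues; Samba just exposes them to the Windows stations.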
There is also something called "Print Services". I just installed it, and it will automatically install the drivers on the client. This would have been a 4-hour job, I guess. And if you are too stupid to do this, you could install the printer on the server and share it from there. That way you have the same setup as Samba+CUPS.
The mention of Samba+CUPS already shows you are not into Windows; please don't comment if you don't know what you are talking about.
I think you missed the part where they did not want to buy a dedicated Windows server just to host printing. In our office, we have a shared-services box which manages printer hosting, but I wouldn't assume our decision to mix jobs on the same box is right for them. Maybe it's not simply that the person is stupid, as you suggest.
“Print Services” won’t install anything on my machines if they were the client machines in question … my machines run Kubuntu or Arch Linux.
If I install a samba-cups server, it can become a print server for any mix of client machines. You don’t have to use Windows machines exclusively as the clients.
I can back this up further by using Alfresco and Openchange (or Zimbra) rather than Sharepoint and Exchange respectively. In this way I can set up servers that are client-agnostic. As a bonus, there are no CALs to pay. Zero. No matter how many client machines I have to serve to.
With cups as the print server, I can turn any cheap printer (even a USB inkjet printer) into a networked Postscript printer, that any client machine can print to, without having to install any new printer drivers on any of the client machines.
You can’t do that with Windows.
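A hedged sketch of that setup (the device URI and PPD path are examples; your printer's will differ):

  lpadmin -p deskjet -E -v usb://HP/Deskjet -P /usr/share/ppd/hp/deskjet.ppd
  cupsctl --share-printers

After that, clients print to a plain network queue and never touch the vendor driver.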
Yeah, how I love that! At my work I unfortunately have to use a Vista laptop for running one of my experiments (it controls a device which only has Windows software, although not for much longer). Apart from being the god-awful slowest machine I've ever encountered (5-minute boot time, WTF?!), it's not running every day. So when I turn it on I get stupid messages about installing new printer drivers for all the printers on the network (~20). If the windows just popped up and I could click them away, fine; but no, they come up at 10-20 second intervals and are always on top. And no, ticking "don't ask me again" does not work.
And the idiots have come out of the woodwork. Congrats on entering the sunlight. Perhaps now we can all have fun calling each other names?
Tens of thousands of dollars for a dedicated Windows server? Who gave you the quote on that? Your imagination?
$5,000 for a good slab of hardware
$5,000 for Windows Server license
Tens of thousands is not an outlandish estimate, though it would be a closer estimate for a mail server once you add in the Exchange license costs (more if you're updating the workstations' Outlook version with it).
I can’t speak for everyone, but I’ve not found this an issue.
Creative sound cards are terrible for laptop DJs and music production, so I've never used them. Of the audio hardware I do have, some has Linux drivers, but everything (thus far) has worked out of the box.
Ironically I’ve had more trouble getting consumer hardware running than niche professional gear.
But that's just my personal experience. I may have been lucky.
The Asus Striker 2's included daughterboard is terrible. For such a high-end gaming motherboard, it's truly disappointing. The hardware does appear to be OK, and you get your 5.1 sound out of it, but I think I had one game that it worked with properly. Assassin's Creed was barely playable, and I never did get the full story until I replaced the sound card, due to muted cut scenes. My first sound card was a Gravis Ultrasound, which was rocking for sound and professional work at the time but required an SB16 emulator for gaming. By contrast, I had never had support issues with a Creative card until the X-Fi. It's supported perfectly across my games, and adequately for my needs through ALSA's beta driver. Granted, I'm in the realm of consumer audio. Good to know the professional gear includes non-Windows drivers more often.
http://distrowatch.com/?newsid=05992
http://puredyne.goto10.org/
See http://ubuntustudio.org/
I have some really obscure sound hardware and Windows 7 found it out of the box. Typical that Linux users cite XP-or-earlier problems as problems with Windows… when really those problems are in the past.
This is both true and not true. At the moment things are *so close* to being perfect that I can smell the finish line, if I may mix my metaphors.
If distributions would set up everything to just use jack by default then 90% of everything would work correctly automatically. The other 10% is mostly the same 10% that fails to work with PulseAudio, too.
Perfection is on its way. I am not a PA fan but I understand that it has some advantages that users apparently want. As such my proposed ideal audio stack in Linux is
PulseAudio -> Jack -> ALSA
And stack everything else on top as PA recommends. Each application targets jack if it can, PA if it must, or a higher level library (e.g. libao). Anything targeting ALSA gets routed through jack for mixing.
This gives you a stack that is flexible and friendly. The only issues are PA being a resource hog, jack stability and the unfriendly fact that they both require everything to be run as the same user. All three of these problems have solutions that will arrive sooner or later. Meanwhile audio *does* work, it just has to be configured with care. This makes it like any number of issues Linux has had in the past–from X to wireless networking–which have gradually gone from horrid to Just Works.
PulseAudio is a step backwards. It's a sound server just like ESD and aRts were. It's solving problems that should be solved at the driver level, and because ALSA either can't or refuses to "fix" their stuff, the Linux guys just coded around it. Do a search in newsgroups for how much people hated aRts or ESD, and I don't see how PA is any different. The network sound stuff is cool, but the rest of the "features" it has should be at the driver or hardware level. For example, the FreeBSD OSS drivers just create virtual sound channels, so my apps all share the same sound card without issue. I install Linux (Gentoo or Debian) and whammo, suddenly I can only use one sound source, and when I'm streaming radio the sound channel is locked. Under FreeBSD I can stream audio, boot up Windows 7 inside VirtualBox and play a Flash video there, and then open Xine and play a video. Sure, the audio is "garbled", but all 3 play without issue.
The audio "problem" should be fixed in ALSA at the device level, not with some software hack; or else drop ALSA and go back to OSS. FreeBSD did it right. Linux slapped some patches together and then went on to the next "shiny object" without bothering to polish and refine the original.
Not even Windows does audio at a “driver level” anymore. It’s just stupid and leads to impossible to debug problems and crashes.
Suck it up and learn to live with a user-space audio server.
Why should we? It’s slow, inelegant, and nothing more than a hack.
You must be an artsd user.
aRts was always bad because, as I recall it, one guy wrote it in a week and then abandoned it.
ESD was actually a clever interim solution to a problem that everyone had… 12 years ago. It wasn't so much bad as it was a workaround that should long since have been replaced by something robust.
PA is ESD+, it is halfway between a workaround and the kind of robust solution we needed 12 years ago.
Face it! Sound servers are a reality. They’re necessary, in one form or another, and the consensus is that they *will* live in userspace. Stop debating this point, the debate is over! Instead start asking “How can we do sound servers in a way that doesn’t suck?” – it is possible.
What’s funny is that even before the Linux devs started on ALSA, the issue was solved on every other Unix-like OS. Without the use of a user-space sound server. It was only the Linux version of OSS that had this “single audio device/stream” issue.
Back when aRts and ESD started, I remember using FreeBSD 4.x with KDE 1 and 2 wondering what all the fuss was about.
You toggle a sysctl to set how many hardware channels to use, you get a bunch of /dev/dsp0.0 /dev/dsp0.1 /dev/dsp0.* devices, and away you go. Later (and up to the present), you could skip this part, and use software channels automatically via /dev/dsp.
For apps that let you select the audio device, you give it one of the /dev/dsp0.* devices, one per app, and continue on your way. No audio server required.
For apps that don’t let you select the audio device, /dev/dsp automatically selected a free /dev/dsp0.* device, and you continue on your merry way. No audio server required.
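From memory, the FreeBSD knobs in question look like this (the channel count is just an example):

  # let the driver expose several virtual channels per device
  sysctl hw.snd.maxautovchans=8
  # see what the sound system currently provides
  cat /dev/sndstat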
OSS audio has worked on pretty much every Unix system except Linux for ages. And yet, instead of fixing OSS, the Linux devs decided NIH would save the day, and started ALSA. How many years have we waited for Linux audio to reach the level that others already had? And how many years are we going to have to wait before things are usable?
They’re only required because ALSA is backwards, brain-dead, and broken.
I’m interested in this argument because I don’t know enough to say whether you’re right or wrong.
If you don’t have a sound server how do you mix audio? Who does it? Is it in user space or kernel space? If the hardware supports mixing itself is that used instead of the software implementation? Who guarantees that it will work the same way regardless?
When I say “who” here I mean “what software” of course.
It seems to me like whatever software it is that takes audio channels and routes them and mixes them is a “sound server” of sorts, whether or not it runs as a daemon. It seems to me that this functionality is going to be living in userspace on most platforms (or anyway that one cannot rely on it being in the kernel and must also have a userspace answer.)
(Me neither, but we won't let that stop us, will we?)
The audio driver framework in the kernel handles all of that, via the OSS API. If the hardware provides multiple channels and mixing, the driver uses it. If the hardware doesn’t provide multiple channels and mixing, the audio framework provides it. Either way, the API is the same.
So what would be wrong with the exact same thing in userspace? The only thing I think that would be different is it might be harder to get a good idea of what the hardware supports (but this can be done) and it would make audio less likely to kill the system.
In the system you describe the kernel itself is the sound server. There’s no technical reason I know of that it couldn’t work just as well if done in userspace. In fact you could write an oss4 API wrapper for alsa and it would probably do just fine.
So far as I know, you’re pretty much correct: most desktop sound cards are single-channel, and the big question is, whether the mixing is done in the kernel or in user-space. So far as I understand it, this comes down to “user-space mixing introduces more latency than in-kernel mixing” versus “data-processing algorithms in kernels are almost always a bad idea!” At least, that’s the argument I and Stevie had the last time Pulse came up, back when he was still lurking about these parts.
I personally think most of the problems are code-quality and API design issues, rather than there being anything fundamentally wrong with the basic architecture at work. Pulse works pretty well for me: it’s the best sound experience I’ve had in Linux since I started a few years ago. Then again, my standards are low: after some of the painful deathmatches that I’ve had with ALSA, I’m happy to have working multi-channel sound mixing in software with low-enough-that-I-can’t-hear-them latencies. I don’t really care that much if the channels are mis-labeled and/or respond strangely, or whatever else is wrong: at least it fundamentally works!
With regards to latency: this is not a real problem these days. Jack is in userspace and introduces as close to zero latency as possible. It’s not “good enough”, it’s acceptable to audiophiles and professionals! This means that the latency issue is a non-issue for everybody (because nobody else cares as much as those users) and since latency in userspace is not an issue we have no need to put mixing in to the kernel.
You’re right, the APIs are the problem. Some people say that PA makes audio and mixing just work (as you say) and that’s great… for them. I hear just as many people complaining that it’s major broken junk. I find that Jack solves, and has for years solved, the same mixing problem. Furthermore, it did it first. Furthermore, it does it better by all accounts.
So I am left where I started: wondering why we have PA when we have jack. At best PA can be used as an ‘easy’ API for people who don’t want to target jack. Fine by me, if that’s what it takes! Just… why do we need two sets of tools? Why is PA gaining jack features and not the other way around?
Sound servers are not a reality. They are a hack, however well written and high-quality they may be, to get around a crap driver-level design/implementation. A sound API like Phonon is a good idea, as it abstracts some of the lower-level system calls and provides a central, "grand central"-style system library. However, sound interleaving should be handled at the hardware/driver level and not in a sound service, as the latter uses your CPU instead of the sound card's hardware to process it.
That depends. Starting with Vista, Microsoft seems to be trying to nudge the industry in the direction of a standardized, software-mixed driver included and tested with the OS. (Less potential for kernel crashes, I’d guess, given how much of the instability in both Windows and Linux is now in the drivers)
We’ll “suck it up” when applications like Stepmania stop coding their drivers to select “hw:0” over “default” because, for whatever reason, selecting “default” causes popping, crackling, or unacceptable latency.
(I get crackling when running Stepmania on top of dmix unless I use “aoss stepmania” and its OSS output, the devs apparently get unacceptable latency)
Apparently current Ubuntu versions are annoying the hell out of the Audacious Media Player devs because they use either a libalsa patch or an LD_PRELOAD hack to force all apps to go through dmix whether they want to or not.
…and the cycle continues.
It should go through dmix or Pulse or Jack. The Audacious devs should go complain to the ALSA devs. Idiocy like hw:0 is why everything is so broken and never gets fixed.
They’ve tried. nenolod even wrote the beginnings of an X-Fi driver by modifying emu10k1 back before Creative released theirs. The ALSA devs ignored him.
All in all, I see ALSA as hovering in the state XFree86 was in for ages (broken and tied to a management "team" that acted as a boat anchor, still waiting for that one bit of stupidity that'll push things over the edge and cause a catastrophic fork).
No, Windows uses a coherent API akin to KDE's Phonon to talk to the sound devices. It's not running a sound server. I have no problem with an audio abstraction API, but sound should not rely on a software service running in order to interleave audio streams. That's just pathetic.
Haven’t used Vista or Win7 yet then?
It is using a sound service.
Here’s the first web hit I got regarding it. There’s more if you want to read about it.
http://www.extremetech.com/article2/0,2845,1931917,00.asp
Yes, and nowhere on that page does it mention an audio server. It's just a userland audio stack/driver, similar to Phonon, that abstracts and modifies audio system calls. Basically, take that link and compare it to this one: http://phonon.kde.org/cms/1030 and you get mostly the same verbiage. Looking on my Vista box there is a "Windows Audio" service, but I still contend that this is nothing more than a userland abstraction layer, and the only reason it's really a service is that Windows smooshes something similar to HAL or udev into that "service", which only runs constantly in order to be aware of new devices or changes coming into the system. Even your article states that the only reason MS did this is that crappy device drivers were causing system crashes, so they too patched around crappy drivers and implemented a software layer without hardware acceleration.
Isn’t the moment that you start mixing in software(or providing logic to determine mixing and resampling) the moment you get a sound server?
The Windows Audio service is a mixing layer, much akin to the PulseAudio design. I remember reading that the PulseAudio dev took the Vista sound stack as 'inspiration' for his work.
I don’t have to do a search to find people who hate artsd and esd, I only have to remember my own experiences. Both suck. Sound servers in general have always sucked. PA is ESD writ large, and it’s true that in a lot of ways it sucks.
This is something that should have been solved in the kernel by alsa a long time ago. Or, it should have been solved by alsa in libalsa a long time ago. But, Linus doesn’t want audio processing in the kernel (fine, I can get that argument).
And, alsa people don’t think there’s a problem… but since alsa’s just-above-kernel layer is *already* in userspace…
The next level of solution is “just above alsa” and that is precisely where jack lives.
Incidentally, this problem actually ‘should’ be solved at the hardware layer for hardware that supports it, which is why I think alsa should have solved this in the kernel–where better to determine whether or not to pass work off to the hardware?–but that’s all water under the bridge.
If you treat alsa as your “hardware layer” then the jack approach is immediately, obviously correct. Jack works, jack does $everything. The only complaint I’ve seen is that it’s “too hard for ordinary users” which is *true* but I seem to recall seeing the same complaint about everything in Linux at one time or another. When I hear “too hard” I interpret this as meaning “the tools aren’t good enough or simple enough yet” and “the distributions don’t set good defaults yet.”
I agree with you! Fix it at the alsa layer. That would be *correct*–but it’s never going to happen as long as Linus and the current crop of alsa devs are alive. Forget it. Move on. Code around them.
Very well said. Too bad we can’t fire these devs.
There’s a perfectly reasonable argument for not having sound mixing and the rest of the advanced features directly in the kernel.
Technically anyone could rewrite that first layer above the kernel–libMyOwnAlsa or whatever–and add in whatever it is he wants. It’s just a matter of doing it and then convincing people to use it.
But, as I said, if you're already doing it in userspace… there's very little harm in doing it on top of the existing ALSA instead.
You are aware that Windows (since Vista) and OS X have sound servers with userspace audio?
I agree that ALSA should be fixed, but there is a need for software mixing, and PulseAudio is currently the only modern sound server designed for users (no, Jack does not count, at least not in its present state).
To the OSS4 proponents I would say: OSS is the reason ALSA came along in the first place.
Docs, please? OS X and Windows, as far as I know, have an API abstraction layer (DirectSound and GrandCentral) that handles all the inputs to the hardware. However, it's not a service.
Please tell me exactly what’s wrong–or supposed to be wrong–with jack. When I needed a sound server about a year ago I looked at PA and jack and, from my layman’s perspective, they look to be about the same except
– jack is older and more mature
– PA is still in flux and breaking things
– jack has a lot more features
– jack has a lot more and better tools
I have only two tiny little complaints about jack
1. Cannot be run from a service account.
Why do I need to run this as me instead of just connect to it from any authorized user? AFAIK, PA has the same issue.
2. Sometimes jackd will stop working and need to be killed and restarted. I’m not sure whether the fault is in jack itself or somewhere else–and it doesn’t happen often–but it’s an issue. From what I’ve seen PA has had very similar problems since I can find “here’s how you recycle PA” posted in many end user support forums.
PA doesn’t seem like it can do anything jack can’t. If there’s something it does that jack doesn’t–why wasn’t that just added to jack instead of to a brand new system?
People say “Jack is only for pro audio” as if professional audio users don’t deserve a good solution and don’t need audio to work as well as regular users. Why is industrial strength and fully featured bad for normal users? “Too complicated” is not a reason (see below).
If PA is The One True Audio System then it should support pro audio users. If pro audio users find jack to be good enough–GREAT! It’s always possible to solve a less specialized problem with more specialized tools, but not the reverse. Slap some friendlier UIs on jack and ship it to non-pro audio users (like me) too!
I am not a proponent of OSS4, but comparing Linux OSS (you remember? the pre-ALSA one?) to OSS4 is an apples-to-oranges comparison. Not very appropriate.
I don’t think it is unreasonable to settle on jack, it just needs more utilities and more flexibility.
Jack is designed exclusively for very low latency audio. Pulse can have variable latency thus allowing fewer wakeups.
Applications need to add special Jack interfaces, whereas Pulse was a drop-in replacement for ESD and had an ALSA compatibility module, making the transition easier.
At the time of its introduction, Pulse had better user tools, like its volume control (it still does, I believe).
I'm not particular about what is underneath; Pulse has never given me problems (except under KDE) and neither has Jack. I just think we need to have a sound server.
I never said you were a proponent of OSS, but the problem is that apps once again need to change API to use OSSv4, and kernels need to be compiled with it, and I guess some feel burned by what happened to OSS.
Actually, they don't. There's an ALSA driver/layer/compat thingy in OSSv4. You can even run ALSA on top of OSSv4. Thus, all your apps continue to "speak" ALSA, but all the actual sound processing is done via OSSv4. You even get proper multi-channel mixing, without dmix or Pulse.
Other than the lack of a pretty mixer (the ossmix UI is pretty horrid), it works really well on Kubuntu 9.04 and 9.10. I ran like this for several months at work. However, I had to revert back to ALSA+Pulse to be compatible with the software stack in the schools.
There’s quite a few tutorials on the web for doing this.
Alas… it would be nice to try it. However, I have a setup where I am streaming RTP streams to various headless computers around the house using Pulse. That, and I love the fact that when I turn on my Bluetooth headphones it automatically routes music to them, and reroutes back to my speakers when I turn them off.
Such a thing would be rather complicated using oss wouldn’t it?
Edited 2010-04-09 21:33 UTC
Advantage: Jack. Jack wants low latency, but it can do high latency if you configure it that way. So it's a superset of PA here.
False. Maybe this misinformation is why people avoid jack? Fact: You can make all alsa apps use jack with about 60 seconds of configuration (your speed may vary) and it can be set that way by default by your distribution.
ESD was never a good idea, just a clever workaround. Jack runs just fine atop alsa, so no compatibility mode is needed–just talk to alsa! You could say that some apps require esd… but you can run esd, or even PulseAudio, on top of jack!
So here’s my point again: It’s 2006 and you want to get rid of esd and fix desktop audio. Do you (a) Take an existing, stable sound server in use by thousands that does 99% of what you need and add the things that pertain to desktop use, or (b) Build a brand new codebase–from scratch–that has fewer features and worse performance and will take years to stabilize?
It seems like (a) makes sense without even hesitating, yet (b) was chosen. Why was (b) chosen? Surely there must be some reason! I’d like to know what it was.
If you’re curious, here’s my .asoundrc:
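(A typical example of such an .asoundrc, using the stock JACK PCM plugin from alsa-plugins; the poster’s exact file may have differed, and the port names assume a plain stereo system:)

    # ~/.asoundrc: route every ALSA client through JACK
    pcm.!default {
        type plug
        slave { pcm "jack_out" }
    }

    pcm.jack_out {
        type jack
        playback_ports {
            0 system:playback_1
            1 system:playback_2
        }
        capture_ports {
            0 system:capture_1
            1 system:capture_2
        }
    }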
Is there per-application audio control (integrated with the UI) like KMix, which now works with pulse pretty nicely?
I have no idea why pulse was chosen instead of jack. You have to ask Lennart Poettering if you want to know why; I’m sure he could express the pros/cons of PA vs Jack far more clearly than I. But I’ll refer you to this entry on his blog:
http://0pointer.de/blog/projects/guide-to-sound-apis.html
Where do you see per-app volume controls in KMix?
Running KDE SC 4.4 here, with Pulse installed, Phonon configured to use Pulse, and there’s only the master volume controls showing, nothing about “per-app”.
I’m pretty sure if you install a five year old distro, you’ll retract that point about sound in Linux. I’ve never seen the sound preferences UI look so good and work so well, and to me it looks very comparable to the one in Windows 7.
Gaming? Well, until a Linux distro gets big market share you cannot compare, since game developers will always make games for the most used system or the most profitable one. Most game developers seem to primarily develop for consoles nowadays, and gaming on Windows is at an all-time low thanks to piracy.
Five years ago we had Warty and Hoary. Audio worked like a charm with Alsa.
Pulseaudio should work just fine as long as everyone buys into it (yes, this includes KDE). In Windows, fighting over the audio subsystem (not everyone likes PA) is unnecessary because there is just one.
Well, except if you want to do stuff that is becoming more commonplace these days. For instance, streaming audio from one device to another is hell with OSS or ALSA, but something that PulseAudio promises to make far easier. I am streaming audio from one machine (Mac Mini/MacBook) to another (Apple TV) very frequently, and it is really convenient. Note: this is not the same thing as streaming mp3 or ogg.
Pulse was quite problematic, partly because it was introduced in some distributions too soon, or was not properly integrated. But I think that in the end it will improve audio in Linux.
I’ve never understood the point of streaming audio, when you can just play the audio locally off a network share.
I have yet to see anyone come up with a genuine, useful, use-case for “streaming audio between devices”. Sure, there are a lot of contrived, made-up use-cases, but I’ve yet to see a genuine, real, usable, use-case for it.
If you are playing dynamic audio instead of just a set of files then you can’t just “play the audio locally off a network share.” For example, if you have a PC hooked up to your TV, your surround sound system and expensive speakers, wouldn’t you like to use those speakers for all of your audio needs?
One such case could easily be games: at least I like to listen to my game sounds through the large speakers instead of some crappy headphones.
Another could be, for example, that you’re on your laptop, have your jukebox application up and set music playing, but want the music to play from the large speakers. You could always tussle with cables and such and hook your laptop to the surround system, but that’s messy and cumbersome. Then, you could also set your MPC to play music, but then you couldn’t control it directly from your laptop unless you used VNC/NX/RDP/etc. for remote desktop, and that would again be messy…
How is that “streaming audio from one device to another”? You connect the audio-out on the PC to the surround system and use the selector dial on the surround to select the audio input. I do this at home already.
Again, how is that “streaming audio from one device to another”? Just plug in the speakers instead of the headphones.
How is that messy? You plug in one cable to the laptop, flip a switch on the stereo, and voila! Audio on the big speakers.
Sure you can, it’s called a remote. They come in various hardware and software configurations, depending on your needs.
Like I said, a lot of contrived, made up use-cases.
First, my surround system doesn’t have any selection dial, nor any way to connect more than one audio source.
Second, why would I want any more cables when I can do it over wifi?
Again, I don’t have any switch to flip, nor do I want any more cables lying around, even more so when we are talking about a laptop.
Your claim about the non-existence of uses for audio streaming could also be applied to wifi: why would you need it if you can just plug in a cable? Contrived, made-up cases?
Time for a new surround system, perhaps? What do you have plugged into it right now?
Okay, so what’s plugged into the surround sound system, and how is that connected to your media store?
I’m not seeing the big picture here. How are things connected, where’s the music stored, what’s plugged into the stereo, how does music get from A to B to speakers?
For example, I have a MediaPC plugged into my TV and stereo. When we want to listen to music, we turn on the stereo and TV, select the music we want, and listen to it. The music is stored on the server, which the MPC accesses via NFS over wireless.
If I’m sitting at my desktop, I just fire off some remote commands to the MPC and it switches tracks. I grab the stereo remote to control volume. And can switch to the TV whenever I want to watch it as well as listen to music.
How would “streaming audio via Pulse” make this any better/easier?
Why should I waste money to replace streaming with cables?
It’s working perfectly fine now.
I have a computer plugged into the TV with all the music and movies, so that’s not the point of streaming.
It’s just a convenient way to get decent sound on every computer I have at home (4 currently, mostly laptops) in a matter of a couple of clicks without bothering with cables.
Whenever I get home with my laptop from a recording session, or with some videos from the DZ on my netbook, I can check all that by streaming the sound to the speakers, without even needing to go to the living room or touch anything besides the laptop/netbook.
Use cases:
– Sitting on your sofa with the laptop and wanting to hear that YouTube video through the stereo system, whose receiver is across the room.
– You are an AV guy for an institution which frequently has guest lecturers showing up at 5 minutes till start with “Oh, by the way, can I use my Netbook/Nettop/Tablet/MID/PDA/UMPC/Macbook/Slate/Smartphone up on the screen and through the sound system? And by the way, I don’t have the custom proprietary adapter that is needed to connect to what is standard.”
– You have a back yard which does well when hosting a barbecue, cookout, pool party or just plain events, and as part of a landscaping remodeling project you install all-weather outdoor speakers.
All of these cases could be greatly simplified with the use of DLNA-compliant devices.
I disagree. The real problem (and I have personal experience with this) is that ALSA is an under-documented, over-engineered, fragile API and PulseAudio is a buggy, fragile wrapper.
Audio will continue to be broken on Linux until the developers get over their “Let the distro packagers configure it and any users who change the defaults deserve what they get” attitude.
It doesn’t help that, because ALSA is buggy and fragile, some applications add their own workarounds which force-ignore dmix/pulse/etc. resulting in crazy ALSA hacks by people like the Ubuntu devs which force-select dmix/pulse/etc. in a vicious, untenable cycle.
KDE devs don’t “buy into” broken technology (that’s why there’s so much duplication between KDE and GNOME: GNOME won’t use C++/Qt components, and KDE didn’t wait for GNOME to get their act together on things like GnomeVFS), but if you can hammer PulseAudio into shape, Phonon will “Just Work™” on top of it.
For that matter, Phonon was originally written as a way to ensure a stable API for GStreamer for the entire KDE 4 lifecycle. Several GST devs promptly threw a tantrum.
Hear, hear! Well said.
It’s too bad that the general consensus in Linux-land is to add more layers to a broken stack instead of fixing the foundations.
ALSA is crap. It needs to either be fixed, or replaced. Preferably replaced with OSSv4 so that Unix audio is standardised again.
Stop building skyscrapers on top of matchsticks.
My KDE installation works beautifully for audio without either gstreamer or Pulseaudio.
Phonon + Xine + ALSA.
- per-app volume control?
- network audio?
- automatic audio routing? (no futzing around with alsa configs or scripts)
- event-enabled volume control?
Vista doesn’t even do some of that — at least, not reliably. I’ve brought down Vista’s audio service trying to toy with per-app volume control. And the interface for doing it is annoying — it’s usually better to just turn down the volume using an application’s volume control than futz with Vista’s system-level per-app volume controls.
ALSA never “just worked” for me. As I’ve said in other threads, I’ve yet to see DMIX actually work at all, and without it, you won’t be able to play sounds from multiple software sources.
– Firefox is much faster on Win7 64-bit than Ubuntu 10.04 64-bit.
– Win7 has one homegroup password to enable sharing files easily.
I understand the last two, but what do you mean by working with projectors? It’s just another screen, right?
Just a personal thing: I made an ass out of myself recently because Karmic didn’t recognize the resolutions supported by the projector, forcing a 640×480 maximum and ruining my presentation.
(nvidia binary blob on Dell Precision M 2400 – “nv” driver wouldn’t have seen the projector at all).
Those surprising moments are the ones when you wish you could do your daily work on Windows or Mac; that’s probably why people who do lots of presentations usually have one of the lesser operating systems, even if they would otherwise have the heart & skills to operate completely in the Linux domain.
That happened to me last Friday, using Fedora 12. With my co-worker’s Mac and Vista laptops, you just plugged in the projector and it showed up as a second monitor (usually as accessible space to the left or right of the current screen). When I plugged it in… nothing happened. I had to reboot to get my Fedora laptop to notice the projector at all, and even then, it just displayed a 1024×768 patch of my primary desktop, slid around to always include the mouse pointer’s current location. Less than optimal.
I can’t really argue with that. That’s why I dual-boot.
For TF2, there’s Windows; for everything else, there’s Slackware.
Provide a stable stack of APIs.
Provide a consistent keyboard driven UI (really!)
Provide slow-moving driver interfaces.
That’s about all.
If you can say that with a straight face, you’ve not used Windows 7 extensively via the keyboard. Consistent is the last word I’d use to describe it. Controls and areas that were formerly simple to navigate are now a mess with the keyboard. And as for the apps themselves (Microsoft and third party)… well, let’s not even go there. GNOME/GTK+ is quite a bit better in most cases (unsure about KDE). However, the only OS to date with true keyboard consistency across all areas of the OS and 99% of applications is OS X. It operates differently from the two mentioned above, but is much nicer once you get used to it. Other OSes would do well to take a leaf out of Apple’s book here (wow, did I really just say that?).
I cannot comment on Win7 since I’ve never used it, but prior to that, keyboard navigation has been, if not 100% consistent across all apps, at least highly consistent across most apps and, importantly, *almost always possible*. GNOME has gotten a little better recently, but I still see many GTK and GNOME apps where you can’t actually use the keyboard for everything, sometimes inexplicably. KDE has been so-so for a long time, in that most things are fine but some things just aren’t.
OS X has consistent keyboard *shortcuts*, but I don’t think they like keyboard-driven UIs, and most of the rest of it sucks. Just my NSHO.
OS X works fine from the keyboard; you just need to turn on full keyboard access, either by pressing Ctrl+F1 or by going into System Prefs -> Keyboard. If you don’t know that, you’ve obviously never even attempted to use OS X via the keyboard. On all platforms there are apps that don’t integrate well, and that includes the keyboard, but in general, from best to worst from the point of view of a keyboard-exclusive user, it’s: OS X, GNOME, Windows. Well, technically CLI-based systems are the best for keyboard usage, but I’m not really counting them here.
While I’m a Linux user I must say: stability. Both API- and crash-wise. When the graphics card driver crashes in Windows (XP), the desktop stays up. It switches to a fallback driver with less resolution and color depth, but your applications all still run unchanged (well, maybe not 3D apps). THAT is awesome; I want that in Linux! Also the GUI (and apps like Firefox) feel a LOT snappier under Windows. Other than that, Linux wins.
As I understand it, they’re working toward that (e.g. Kernel Mode Setting), but I haven’t seen any improvement because nVidia has been slow to support new X11 APIs.
Replace “has been slow to” with “has no interest in”. Nvidia’s driver *is* very good at what it does, no question. But it does things in its own way, a way that suits Nvidia, but which completely conflicts with the direction Linux graphics support is going.
The binary driver will *never* support KMS, DRI, Gallium, or any other part of the Linux driver stack – it’s just not written that way, and Nvidia have no incentive to rewrite it to fit in better.
Play games and uhm…. yeah. Play games.
First of all a link to the original article, which I couldn’t find in the article here: http://techgage.com/article/10_things_linux_does_better_than_window…
The CLI is mentioned above as a more powerful way of doing partitioning under Windows. I must disagree, for the sole reason that partitioning using the CLI is a major PITA, or at least it is under Linux and I’m assuming it’s just as bad under Windows. Neither, I’d say, is a real option to an end-user. And I’m not at all afraid of using a CLI, quite the contrary.
On the stability front, the reaction given states that Windows deals better with problems than Linux, where blame is thrown around and the user is left to fix the problem. I’d like to disagree. Thoroughly. When Windows crashes in one way or another there is no recovery whatsoever, other than that provided by the applications that were running. If you’re lucky the desktop environment restarts correctly, but I’ve ended up in crash-cycles there.
A larger problem, though, is the actual stability. Now, I can’t speak for Vista and 7, which are supposedly better at this, but at least under XP pretty much any program can cause the system to crash. This is a lot harder under Linux, and indeed I’ve only once seen a kernel panic in my life… hardware related. The rest of the environment is built to be defensive and modular, kicking out applications or even subprocesses (Flash, anyone?) that cause problems. Although, admittedly, I do see my display drivers crashing now and then (I abuse them quite heavily with multiple X windows running full screen OpenGL), which renders the computer quite useless as well. On the other hand, I still have the CLI then to shut down correctly and save my work. No such luck under Windows (where I’ve also seen the display drivers crash).
And about ‘throwing blame around’: I have yet to see what you mean. I’ve experienced crashes, but never have I seen blame being thrown around. The situation there is pretty similar for both OSes: they crash, maybe they recover, maybe they tell you that something crashed (no blaming done here), and it’s up to you to try and fix things. The reality is that there’s not much the OS can actually do about you losing your data due to a crash, other than containing crashes. And I think therefore that saying Linux throws blame around is not only very unfair, but as far as I can see even factually incorrect.
ARGH! Stupid mistake. Need a function that checks if the teaser has no hyperlinks and warns that you’re about to do something stupid XD
The worst part is googling for your article title pulled up, for me:
1. An unrelated article someone had done two years ago.
2. Your OSAlert writeup.
3. The original article you were referencing.
This is a high barrier for most people, I suspect. But, it’s OK! Since we probably all carry some culture with us from Slashdot it’s not like most of us tried to RTFA anyway.
XP has substantially the same CPU privilege system that Linux does. So if programs can cause the system to crash, it’s the same way they can on Linux: by triggering a bug in the kernel. And if you haven’t had X or DRM crash and leave you with no option but the magic sysrq key or the power button, then you are a lucky (wo)man.
Of course, in my experience of using Windows XP and also 7, I haven’t seen this multitude of crashes from programs. With non-shitty drivers, it seems to be quite stable.
For me, the biggy is Linux’s handling of storage devices, support for file systems, and mounting.
* The ability to ‘cat’ a USB stick device / CD-R and redirect the output to a file (thus creating an ISO image) has proved very handy in the past when making quick backups.
* The ability to mount anything from Wikipedia to NTFS partitions to ISO images to network shares has been invaluable over the years. I can’t stress enough how much Linux’s mounting tools have made my life easier.
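(Concretely, both of those are one-liners; a quick sketch, with device names, paths and the share address as placeholders:)

    # image an entire USB stick (or CD-R) into a file
    cat /dev/sdb > stick.img        # needs read access to the device (run as root)

    # loop-mount an ISO image
    sudo mount -o loop disc.iso /mnt/iso

    # mount a Windows/Samba share onto a plain directory (needs cifs-utils)
    sudo mount -t cifs //server/share /mnt/share -o username=me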
For most of Linux’s other advantages I’ve been able to find a Windows approximation, but I’m yet to find a unified tool or set of tools that can mount even network shares to directories (if anyone knows of any, please tell me).
Vista/win7 only:
mklink /d tst \\server\share
You get a symbolic link to the network share which effectively looks like you have it mounted at some folder to user applications.
I still end up with more than half of my alphabet used up on drive letters though.
Plus I’m still (for the moment) on XP
It’s good to see Windows finally supporting symlinks though. It’s needed it for a while in my opinion.
The server share doesn’t take up a drive letter if you use the symlink approach.
Ahh sorry, you’re right (must have misread your original post).
Sadly it’s still no use to me as I’m on XP at work, but I’ll remember your post when the inevitable upgrade happens.
What I appreciate most about Linux is *freedom*.
I’ve received two threatening letters from Microsoft’s front organization, the Business Software Alliance, asserting the “right” to enter my home and audit my computers looking for copyright infringements. Now, I take copyright seriously, and am careful to abide by the licenses of the products I use; thus I believe this was just a blind threat hoping to frighten me into paying them more money. However, it is in my opinion dishonorable to threaten your customers without probable cause in this way. It certainly cost them my business.
In contrast, abiding by the Linux license is a pleasure, since its primary goal is to *protect* my freedom rather than to abuse it for mere profit. I’m happy to pay for Ubuntu-based products and other free software, since I am uniformly treated with respect.
That said, I’ve found free software to work better in general than the proprietary software on which I cut my technology teeth. But even if it didn’t, freedom is beyond price.
Just my $0.02…
Back in XP, autologin was a pain. I ended up using the TweakUI powertoy for that.
In Vista and 7 you just press WindowsKey+R for the Run dialog, type NETPLWIZ and press Enter. After you deal with UAC, you see an old friend from back in the Windows 2000 days. From this User Accounts dialog you can enable Autologin or make login more bureaucratic by requiring CTRL+ALT+DEL before login or unlocking the computer.
Wow!!! that’s so simple!!!! Even my grandma can remember that (not!)… /sarcasm
It’s not rocket science either. You don’t have to hack the registry or put up with long, unpronounceable command lines.
OK, I’ll bite. How *do* you pronounce “NETPLWIZ”?
To be honest I think autologin should be a pain to setup since it encourages poor security habits. Operating systems should encrypt user data and require a local password by default.
Great post
The thing about PowerShell, though, is that it’s not really used as an interface to your computer, but more as a scripting environment. My problem with it is how heavily they modeled it after bash.
I mean, I hate writing bash scripts, and have never seen the point when perl ships with pretty much everything, and ruby is on any machine I have. Both of those are MUCH better languages than bash, with superb command line integration: anything in backticks executes as a command line and the result can be assigned to a variable, both have great file APIs, and both have regexes baked right in as first class citizens with the =~ operator.
Apart from the shell though, standard UNIX tools are the biggest thing lacking on the command line for Windows. They are more complete, more functional, and better designed for scripting than what is available on Windows.
Pipelines and job control are easier in a shell. Below a certain complexity bash is to be preferred. Once you get beyond a certain complexity it might be better to do it in a more general purpose language, but it’s often easier to make one more change each time you need one more thing than to rewrite the whole script.
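(To make that threshold concrete: the kind of job that is one throwaway line in the shell but a page of boilerplate in a general-purpose language; app.log is just a placeholder here:)

    # the ten most frequent ERROR lines in a log, most common first
    grep ERROR app.log | sort | uniq -c | sort -rn | head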
He doesn’t go into it, beyond saying “And when you’re REALLY hosed on Linux it’s at least easy to recover your files,” but troubleshooting on Linux is generally far, far easier than on Windows, and is one of the little joys that makes going back to Windows hard.
On a Windows system, if something goes wrong there may be a pop-up alert that tells you what went wrong. If there is, it may be informative enough to fix the problem, but in many cases it is just a generic “oops” type message and you must figure out what really happened yourself. There may also be an entry in one of the system event logs, but there is at least an even chance that nothing gets logged at all. There are options to enable more verbose logging in Windows, if you know they exist… but most users and system admins don’t.
If you are lucky enough to get an event log entry then you will know that something happened and when. You might be able to know what happened, but it is not likely. Event log messages typically include an opaque error code and, if you are lucky, a brief textual message describing what it means. In nearly every case the description is unhelpful; it’s also often misleading. There is usually a much better writeup on Microsoft’s web site, if you search for the error code, but locating these is not always simple.
If you locate a good description of your error code then you may have an answer or workaround described, but you might not. Many times the error has a myriad of possible causes but there is no way to know which applies to you without blindly trying the fix for each in sequence and then trying to reproduce the error.
I could go on, but I won’t. Those who are familiar with troubleshooting Windows issues will be nodding along, those who are unfamiliar will by now have the idea. The summary to this is that errors often arise from mysterious sources, fixes involve magic incantations (“run this exe, modify this registry setting, cross your fingers!”) and also frequently require a slash-and-burn (“just reinstall the whole application”) approach to fix properly.
Contrast troubleshooting under Linux.
If something goes wrong you may get a pop-up alert, with all of the same potential problems as the Windows alerts. More likely, you get a message to stderr. If you can’t see stderr, reopen the application from a terminal and reproduce the problem. The message to stderr is sometimes cryptic and sometimes extremely specific (this varies by application), but can usually be used to accurately identify what went wrong. A google search may be required to find out what the fix is.
If something goes wrong with a daemon you almost certainly get a message in syslog. The messages in syslog frequently identify exactly what happened and the probable cause. Even if they don’t, punching the message into google will likely get you a page which describes the same error, exactly what to do to isolate what’s causing it, and the minimum change necessary to fix it. You will also usually get a high level description of the reason for the problem.
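(In practice both of those amount to commands this simple; the application name and log paths below are placeholders, and the syslog location varies by distro:)

    # re-run a misbehaving app from a terminal to capture its stderr
    some-app 2>&1 | tee /tmp/some-app.log

    # watch the syslog while reproducing a daemon problem
    tail -f /var/log/syslog        # /var/log/messages on some distros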
If no solution is available from Google and you run Windows you can ask in a community forum. You will either get lucky or spend a long, frustrating time weeding through misleading suggestions and shot-in-the-dark solutions. Or, if no help is forthcoming, you could call Microsoft. Pay some money, sit on the phone for a few hours and hope that the guy on the other end has seen something like this before.
If no solution is available from Google and you run Linux you can ask in a community forum–possibly the mailing list for the specific program that caused the error. You will likely receive specific information-gathering instructions and then specific suggestions on fixes. You may even get an authoritative response from the persons who wrote the software. Or, if no help is forthcoming, you can *get the source code* and do a search within it for the error message. Even if you are not a programmer you may be able to glean some additional insight into the cause of your issue.
In all cases, under Linux you tend to get *more* information in error messages, *more* indication of the actual problem, *better* help in forums, *better targeted* solutions and spend *less* money.
Troubleshooting is never a fun thing, but on Windows it is an exercise in frustration and on Linux it is merely an exercise.
Best thing about Linux?
It’s the web in the form of an operating system. It baffles me how people can vigorously promote the open web and fight Adobe, while at the same time use Mac OS X or Windows without at least seeing the irony.
It was rosier in 2006, mate.
For me, the thing Linux does best is software updates. When I am getting ready to use a program, I do not want to stop and wait 5 minutes or so for the software to update and then restart the computer. I don’t know who decided, in both the Mac and Windows worlds, that the best time to update the software is when the user is getting ready to use it, but it’s very annoying. Ubuntu once a week presents a list of all the updates available, I hit OK, and it’s done. I can even generally use the software while it is updating. Linux does software updates better than anyone.
inodes, the proc folder, the sys folder, tty files, signals, mounts, easy hard and sym links, FUSE (OK, there is Dokan now), universal timestamps right from the beginning, a massive collection of supported disc formats, the ability to run on anything from watches to supercomputers, the number of supported devices, and best of all, no black boxes. Oh, and it’s free in every sense.
Purely as an OS, there is no serious debate here. When it’s big servers or thin client machines, the numbers speak for themselves. There is perhaps a debate for a desktop OS, but that depends on what you want to do with your machine. If you just want to check the web, play games and watch videos, use whatever OS you like. It becomes about software, not the OS.
mounts – volume mount points
hard links – mklink
sym links – junction points
watches – EGP-WP98
super computers – windows datacenter edition
I REALLY hate those Linux fanboys who have absolutely no knowledge of Windows at all saying it’s bad. Every OS has its strong and weak points, but please only comment if you know what you are talking about.
You’re making a big assumption about my knowledge of Windows.
Yes, volume mounts are the same as mount. But few filesystems support them, really only NTFS to my knowledge.
Junction points are like folder-only sym links. In fact junction points, sym links and volume mounts are all NTFS reparse points. NTFS also supports hard links. mklink (which doesn’t come as standard) does hard links, sym links and junction points. What really annoys me is how all this translates into how it’s used. As the underlying functionality is really NTFS-only, it can’t be used generally. Explorer namespaces try to give you one file hierarchy: you can do a “mount” of a namespace at a folder, and shortcuts are used as sym links. It might look right to a user, but because it’s not actually an OS filesystem, it can’t be used from code (not with the standard file calls) or the command line, which greatly reduces its usefulness. In code there seems to be some push towards using ITEMIDLIST instead of paths, so you can use namespace objects as files, but to me it would be better to fix the underlying system and use paths.
Windows CE is not the same kernel as Windows NT; it’s a different OS. The embedded Linux devices are running the same kernel as the supercomputer: compiled for a different architecture, with different options on and off, and maybe a few code tweaks, but basically the same kernel.
Windows supercomputers – not saying there aren’t any, but there aren’t many, at least not at the top of the league.
inodes – why do I care?
ttys – what advantage does this confer?
signals – These are difficult to program correctly. Windows has APCs which are very similar but are only delivered at specific control points, greatly improving the interface.
I care about inodes. Files in a Unix system are reference-counted. By deleting a file, all you are doing is removing a reference. This means it’s no issue to delete/move/replace a file in use.
TTYs give you an easy communication channel. So for instance the SheevaPlug has a usb-tty that you can use to communicate with the boot loader. Then there are the virtual consoles: if my desktop is being killed by some heavy task, I can give up on the GUI, go to tty1 with Ctrl-Alt-F1, log in and find out what is going on with the GUI, or just restart it. In short, the TTY system is a big part of the Unix command line. They are very useful character devices.
Signals are really useful in process control. Got to love ‘kill -9’; it has a much higher success rate than taskkill or desperately trying to kill something in Task Manager.
A little nitpick: it’s “Partitioning”.
I should have just put this on Pg.2 and been done X) I’m not the right person for writing up this news, as I’ve not had a good track record with Linux and don’t know enough ins or outs either, but I wanted it on the home page because it’d make for a good discussion for those more knowledgeable than I.
I think we should move past this discussion of which OS is the best…
Windows 7, Linux (choose your own desktop environment) and Mac OS X are all great operating systems. All of them have their pros and cons, so my advice is to use them for a while and then choose whatever fits your needs.
IMHO this and the Smartphone discussions are getting really pointless…
I am honestly interested in hearing about things that bother people about Linux. No, I don’t wish to hear trolls ranting or such, but actual real-world issues you may have bumped into.
I was trying to set up 2 displays and I ran into issues: both screens were detected properly, but whenever I tried to set them in non-cloned mode I got an error about the screen size being too large. They’d only work in 640×480 mode. After hassling and tussling around I found out I had to add a line to xorg.conf, a line that defines the virtual desktop size to be a large number. After that the screens worked just fine, but I was amazed that you STILL need to tussle around in xorg.conf for that.
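(For anyone who hits the same wall, the line in question lives in the Screen section of xorg.conf; the fragment below is illustrative, with the numbers matching two 1280×1024 panels side by side:)

    Section "Screen"
        Identifier "Screen0"
        SubSection "Display"
            Virtual 2560 1024
        EndSubSection
    EndSection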
The second issue I ran into with 2 screens was desktop effects: my graphics card only supports a maximum texture size of 2048×2048, but as both of my screens were at 1280×1024, their combined size was way over that limit. The issue? Compiz would not render the whole Nautilus desktop window, instead clipping it off at the right end. That struck me as REALLY annoying, horribly ugly, and heck, even Windows gets this right.
Was this a binary blob graphics driver? xrandr should allow you to create a single virtual desktop from multiple screens, and there are graphical front ends for xrandr, all avoiding xorg.conf. But this all falls down with the binary blob graphics drivers (“Nvidia”, cough), which seem to be dragging their feet. As I need the TV out, I can’t drop the binary blob yet, so I’m still manually editing my xorg.conf, but I’m watching nouveau with hungry eyes, waiting for it to meet my needs! I know it sounds like a broken record to blame closed drivers, but that’s where the problem is: no one but people with the source can fix it.
Was this a binary blob graphics driver?
Nope, it was the up-to-date, plain old ATI open-source driver.
And it couldn’t be done with just the xrandr GUIs? That is disappointing; at least you can always fall back to the config file.
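(For the record, where RandR 1.2 works, the whole thing is a single command with no config file; the output names below are examples and vary by driver, as "xrandr -q" will show:)

    xrandr --output VGA-0 --auto --right-of LVDS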
Problems I run into with some frequency:
* Some Office documents don’t work well in OpenOffice.
* Linux sucks for games. Really. Heck, it’s pretty much the only reason I have a dual boot. Please don’t tell me about Cedega and Wine; unless you have an Nvidia card they’re not an option.
* Currently my Huawei broadband modem only works if I first boot into Windows and then do a warm boot to Linux. Really odd and it used to work.
This is a problem of the deliberately obscured Office formats.
There are infinitely more problems the other way around, with OpenOffice documents not able to be handled at all by Office.
A game console does an even better job of playing games, far cheaper than trying to support a Windows system. Use a game console and a Linux desktop and you won’t have to dual boot.
When friends of mine bought a new Vista laptop, they found that their existing printer was entirely unsupported. They had to purchase a new printer as well as the new laptop.
Very typical, and it used to work.
I don’t care why, it’s a problem that it doesn’t work.
Not all the games I want to play run on a console. Also, I already have Windows so I don’t really want to purchase a console. Or maybe I should support MS by buying an Xbox?
So what? How does that help me? I’m sure mine is just temporary and will probably be fixed sooner or later though.
Here are some of the things I find easier in Linux
1. Saving an image file from the clipboard.
Sounds pretty basic, but in Windows, if I have copied an image to the clipboard and want to save it, I have to open MSPaint, paste it there and save it as a JPEG. On Linux, I open Nautilus and paste into the folder where I want to save the image. (There’s a CLI route too; see the sketch at the end of this post.)
2. Password Management.
Both GNOME and KDE come with integrated utilities that save passwords and lock the information with a master password. On Windows, I have to store the info in something like an Excel file and password-protect it.
3. Software availability and installation.
I was recently working on something and needed a hex editor. All I had to do in Linux was click on the “Add/Remove Software”, enter the description of the type of application I was looking for and within a minute I had a hex editor installed from a repository, which I could (within reasonable limits) be sure did not contain malware.
4. Segregating work into Logical views.
For a long time I did not really use the virtual desktops that Linux provides. My mistake when initially using them was trying to keep one type of application on one desktop, e.g. putting all email and browsers on one desktop, and then putting my word processor on another one. Once I started using a virtual desktop to hold all my applications for a specific task, everything seemed so organized.
I find this especially helpful when working with a large monitor. On one virtual desktop, I put all my applications for, say, development, which include eclipse, a couple of instances of the terminal, a browser, etc., usually positioning them so that I can see a few windows at the same time. On another desktop, I have my email, browser and word processor. On yet another, I have a couple of terminals that are monitoring some of the systems I work with.
The concept seems so simple – when I switch from one task to another, I just switch desktops and get all the applications I need for the task.
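(Back to point 1 for a second: there’s a CLI equivalent too. A sketch assuming xclip is installed and the clipboard can provide the image as a PNG target; the file name is a placeholder:)

    # dump the clipboard image straight to a file, no image editor involved
    xclip -selection clipboard -t image/png -o > clipboard.png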
Perhaps Linux does some things nicer than Windows, but Windows has far more commercial software and quality games. Contrary to popular belief, there are not adequate FOSS alternatives to all commercial apps. Gimp, for example, lacks the Pantone color system and things like adjustment layers. Additionally, the FOSS that Linux has is also available on Windows, so it doesn’t translate into a Linux advantage.
This is why I don’t use Linux without Wine/Crossover Games and VMWare Workstation w/ Windows guest in unity mode.
The stability of the latest distribution releases is also troublesome. I tend to stay one release behind the latest. For example, I still use Fedora 11 versus 12 on my Linux machine.
One thing that Linux does absolutely terribly is media. I don’t care how great it is for word-processing programs, retrieving software, protecting your computer and even partitioning if the thing can’t play all media that I throw at it. Yes, I am aware that it can be configured for most media files, but why is it that the OS doesn’t allow me to take advantage of my HD-DVD/Blu-Ray player after this long? I mean, get it done already.
Besides, the fact that DVD and media playback isn’t usually enabled by default is unacceptable if it wants to cater to casual users.
You don’t understand the problem: both DVD playback and Blu-Ray are protected by patents. Thus it’s ILLEGAL to provide software that allows one to play those; it’s legal only in countries which do not honor software patents. If you want legal DVD playback etc. then go ahead and buy the license, or use the software illegally. But you can’t expect Linux distros to install such by default.
I agree with everything but point number one: GParted screwed up one of my HDs once, and Windows Server 2008’s partition manager detected the failure and repaired it.
I would be a full-time Linux user, but it comes down to 2 things: 1) my company lives and breathes MS products… 2) I really like my games – I tried them on Linux via emulation (well, wrappers like WINE) and it just isn’t the same.
Otherwise, Linux is a HUGE pool of GOOD (free) applications to choose from, it is super stable, looks great… no downside except for what I mentioned above.
Don’t get me wrong; I have no problems with Windows 7, but every software product you find yourself needing costs $$ (some not much, some donation-ware but even editors like UltraEdit or HippoEdit, or Postbox, etc., cost between $30 and $50 a pop). And of course with my pretty good hardware (except for the crappy ATI drivers – tempted to go back to my 9800GTX+) modern games are silky smooth. And my networking and data sharing is seamless with my co-workers.
This was a fantastic list. The only other things I would have added are that Linux is low cost, and on the security side, that it’s easier to secure your user accounts.
I recently noticed the partitioner in Windows (7) is crap. I wanted to convert a partition back from dynamic, but I wasn’t able to; there was no official way. After googling a while I found out that it is possible to do this with a simple hex editor, by changing ONE byte, and with ZERO side effects.
Is Linux (or GPL licensed software) better?
Gparted wasn’t able to do this.
I have done so much stuff which isn’t possible with standard Windows tools, and that’s sad. I think Windows 7 goes in the right direction, especially when it comes to sane default configuration; older versions wasted resources on features nobody uses anyway.
As a developer and part-time gamer, I find it really hard to justify using Linux. Don’t get me wrong, I love Linux, but given that I am running an ATI HD 5870 with 3 24″ monitors and an SSD, I can’t help but use Windows 7. ATI’s drivers are great in Windows, but in Linux, no: multiple monitors in Linux have always been a pain, and SSD documentation on Linux is minimal. Granted, I know NILFS is brilliant… Furthermore, being a developer I like to use multiple languages, and despite being a big Java developer, I cannot ignore that what MS has done with .NET and Visual Studio is simply amazing; it is very compelling to use the platform that supports those tools and languages. As for all the other issues between the OSes, it is all a matter of preference; neither will ever be perfect. I agree Linux wins on the security front; however, an OS is really only as secure as its users are. Windows has vulnerabilities because large groups of people use it, and a lot of those are people who don’t have a clue what they are doing… Linux, while built on a more secure foundation, would also contain a lot more security holes with a larger user base, because the flaws would be exposed on a greater level.
In addition, MS got lazy because there really wasn’t competition. Now that Mac and Linux have gained ground, MS has started paying attention, and you have to really hate MS to be unable to admit that Windows 7 is a great improvement and a step in the right direction.
Multiple monitors on Linux using ATi cards has been brain-dead simple for several years now. Ever since the introduction of MergedFB support in the ati and fglrx drivers.
While other people were struggling with multiple X servers, xinerama, and whatever the nVidia thing was called, we ATi users just added two lines to the X config file: one to enable MergedFB, one to set where each monitor was in relation to the others.
Even after the advent of RandR, MergedFB was still simpler. It wasn’t until RandR 1.2 that it became as easy to use as MergedFB.
I used to laugh at my co-workers who swore by nVidia. When we all switched to multi-monitor setups with Debian 3.x, I was up and running at 2560×1024 within an hour (Radeon X-something). They weren’t running at that res for another week or so (nVidia GeForce-something).
Sure, maybe I couldn’t pull as many FPS as they could, but I’ll take easy-to-configure over super-fast any day.
What documentation do you need? It’s a disk. You use it like any other disk. Plug in, partition, format, mount. If you want to get fancy, you dig up the vendor docs to find out the sector size and erase-block size to plug into the partitioning step … but you still need to do that on Windows.
FYI… SSDs are not just another disk; partitions need to be properly aligned and configured manually.
And you’re saying ATI multiple monitors are simple to set up? Even with the newest Catalyst drivers? And isn’t fglrx dead…
Yes, but you have to do that regardless of the OS.
fglrx == catalyst drivers, I believe.
On my P4 system running Debian 3.1 (later upgraded to 4.0 and then to 5.0 before being replaced with a Kubuntu system), it was brain-dead simple to setup multi-monitor using a Radeon X-something PCIe card. Using both the OSS radeon driver, and the fglrx driver.
The author appears inexperienced in both Windows and Linux. He seems to not know about powershell or how to access ext2/3 file systems from Windows. One of the resident osnews Linux fanatics could have done a much better job.
His claim about Linux having no-nonsense updates gave me a good laugh. Linux has endless dependencies due to the shared library system, which makes updating more complicated and not as reliable as on Windows or OS X.
As for Linux stability it is fine as a command line OS but not if you load X. Kroc is right about how it doesn’t matter if drivers are the underlying problem. When it comes to a straight comparison of reliability all that matters is uptime.
I also agree with Kroc when it comes to false positives. I’ve only seen it once with XP but it was a big headache since I couldn’t get through to their call center.
I think the benefits of Linux over Windows are more on the server side. Winserver is plenty stable but Linux and the BSDs are nice in that you can just download them for free and get a basic system going rather quickly.
Exactly. But we don’t have a resident Linux editor to write up this stuff.
The sooner someone knowledgeable and passionate about Linux, and willing to contribute (and stick around), steps forward to help process the incoming news, the sooner we can have quality Linux articles on the home page instead of my junk. I specialise in web-dev, certainly not Linux.
This is not possible on Windows.
http://www.paradoxuncreated.com/articles/Millennium/Millennium.html
Ignoring the fact that closed source is a dead end, unnecessarily putting people to work on similar projects, without co-operation, without the benefit of each other’s progress, resulting in inferior solutions…
I tried setting the Windows timer to its maximum, which is 1000 Hz or around that value (?), with an obscure closed-source shareware application, which in turn probably came from obscure documents. Windows (XP) was stumbling. This Linux-based system “Millennium” runs at three times that rate, and happily.
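(For the curious, you can check what tick rate a Linux kernel was built with; the config file path below is the usual Debian/Ubuntu convention and varies by distro:)

    grep 'CONFIG_HZ' /boot/config-$(uname -r)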
And of course, reason 88…
88. If Steve Ballmer desires to make the whole OS an object for attracting gay love, you can always fork a new distro with Linux. The alternative is unthinkable.
1. Programming tools (compilers, IDEs, editors etc) that come as standard with the OS. You could indeed expand this to cover the huge number of pre-installed apps you get with Linux vs. the poor selection you get with Windows.
2. Vast support for filing systems as standard. BTW, one thing the article missed is that Windows staggeringly cannot recognise the 2nd or later partitions on a USB stick (even if formatted as FAT32 or NTFS), whereas Linux can.
3. Out-of-the-box support for hardware (i.e. without requiring downloads) is far better on Linux than Windows. Windows 7 has tidied this up somewhat with its decent Windows Update driver downloads, but the first time you hit the desktop on Linux, everything tends to work, whereas Windows requires one or more further downloads usually.
4. Cut’n’paste in X is dead easy – left-click-drag to select an area, move mouse to destination and middle click. Windows: Left-click-drag to select, go to Edit -> Copy (or use the keyboard, which is counter-intuitive since you’re now using *2* input devices instead of 1), move mouse to destination, left-click there, go to Edit -> Paste (or keyboard again). Linux wins that one by a mile.
5. No broken “let’s jump to the top or bottom of the window if my mouse strays left or right while dragging the vertical scroll bar button up and down”. This ludicrous default has remained even in Windows 7 and is the #1 thing that causes me to profusely curse when I have to use Windows (and it was infuriating enough to inspire a whole OS News article recently!).
6. The installer in Linux will create a boot loader that recognises Windows and adds it to the boot menu. Windows does not – it *destroys* any existing boot loader and will only list Windows installs, which is terrible. Never mind that Linux installers are generally more comprehensive than the Windows one (and if booted from a Live CD, you can play games, surf the Web, word process etc. whilst installing Linux – you can’t do that with the Windows installer!).
7. Automating and scripting is generally a lot, lot easier with Linux. Shells like bash are better than Windows shells (even PowerShell), and it’s pretty easy to cron stuff up in Linux too (see the one-liner after this list). Most Windows apps are GUI based and often don’t have command-line equivalents, making things very hard to automate on Windows.
8. Linux, due to its rapid development pace and frequent release schedules, tends to innovate a lot more than Windows and include many features as standard that Windows may have the equivalent of but only as optional solutions (often costing money). Virtualisation is a recent area that’s come on leaps and bounds and is now supported as standard on all major Linux distros, unlike Windows where you have to look for third party solutions.
9. 64-bit support is superior in Linux in every respect. It wasn’t until Windows 7 that Microsoft have actually tried to put 64-bit on an equal footing (e.g. include the 64-bit DVD with the 32-bit DVD and ensure 64-bit drivers were more available), whereas Linux has given 64-bit equal status ever since AMD released its Athlon64 many, many years ago. 64-bit Linux distros come with the same set of packages as 32-bit ones (along with 32-bit compat libraries to run any third-party 32-bit apps) and third-parties usually produce 64-bit and 32-bit versions of Linux apps too. Contrast that with the Windows environment, where we’re still in the bad situation that 99% of third-party apps are 32-bit only – crazy! Even Microsoft didn’t help the 64-bit cause by making 32-bit IE the default in 64-bit Windows, ho hum.
10. Windows systems (including Win 7) have this tendency to get “crufty” over time – apps don’t always uninstall their files cleanly, things are added to the startup sequence that are sometime hard to get rid of and don’t forget all the viruses/malware out there (necessitating anti-virus/malware software that you don’t need on Linux). About every 6 months, I end up re-installing Windows just to get it clean (and don’t talk to me about the hard-disk-thrashing services like Windows Search indexing, ReadyBoost and SuperFetch that I have to turn off after each install!). With Linux, 6 months means a new release of Fedora or Ubuntu or a decision to stay with the older release for a further 6 months without any re-installation needed.
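(Re point 7: scheduling a job really is this small. One line added via "crontab -e"; the paths here are placeholders:)

    # nightly backup at 02:00
    0 2 * * * rsync -a /home/me/ /backup/home/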
While I agree with most points, I have to say PowerShell beats any scripting language on Linux I’ve tried. Generally you manipulate a stream of objects instead of a stream of untyped text. This way you don’t have to use strange awk or sed incantations to get at an attribute.
Handling more complex data structures is way easier in PowerShell. But as an interactive shell it’s not that good. I can also open and directly manipulate Excel via COM objects; it’s impossible to do something like this under Linux (I mean like generating graphs in OpenOffice via bash, but please correct me if I’m wrong!).
Take a look at http://w3.linux-magazine.com/issue/78/Bash_vs._Vista_PowerShell.pdf and try to convert listing #3 to bash; then you will know what I mean.
Depends on which scripting languages you allow. If you limit Linux to just the shells (csh, bash, etc), then that’s a fair point. But PowerShell is a *lot* more than a mere shell – it’s able to invoke system APIs, like the “System.Net.WebClient” in the listing you cite.
In that regard, it’s a lot closer to languages like Python… particularly since you consider it an inferior interactive shell…
Listing #3 is easy to implement in ksh93. What was your point again?
“…OS X has no anti-piracy measures…”
Ah, but it does: Apple hardware. In general, any non-Apple PC will work just fine with whatever non-Apple operating system you throw at it. But Apple sells an integrated system in which both Apple’s hardware [or at least Apple-approved hardware] and Apple’s OS are essential to the proper operation of the system. This extends beyond desktops and laptops to all of their “i” devices.
http://linuxologist.com/ubuntu/iphone-ipod-management-on-linux-just…
http://www.libimobiledevice.org/
There are still a few things on the “todo” list, but it seems one can have a portable “i” device these days and use it reasonably well without running an Apple OS, or having any other Apple-approved desktop or laptop hardware.
I generally agree with most of the stuff; it’s good if you can take the best of both worlds. I’m an extreme fan of the new W7, believe it or not; it makes my user experience much easier. The thing that delights me is the UI look and feel. It’s definitely 2010. What’s it called, Aero, something? I don’t know, but I like it, and I was able to achieve the same level of good-looking UI on OpenSUSE. Running Linux as a primary OS for more than 6 years has been a good experience. The uptime and stability on the server side is something I don’t even need to get into… we all know…
“But what about stability? Some of the more ambitious Linux users may tout that the OS simply doesn’t crash, but that’s absurd and no one would ever believe it. That said, I still do believe it to be the most stable OS out there, and that most instabilities that arise have more to do with applications you’re using than the OS or desktop environment (this has been the case in my experience).”
This is just absurd. If Windows crashes, the MS people say “no, Windows did not crash. It was actually the graphics driver that crashed. Or the sound driver. Or… but Windows did not crash.”
To this all the Linux people say “so what? Windows does not work anymore, it is down, it crashed. You cannot blame the graphics driver, because Windows, in that case, cannot handle problems in the graphics subsystem. Therefore, Windows is a piece of shit. It crashed; it doesn’t matter whose fault it was. It crashed.”
Now, the Linux people say exactly the same thing: “No, Linux is not unstable. It was an application that was unstable and it made Linux funky. But Linux has no problems, it is the application that has problems.”
What a joke. Is Linux unstable or is it not? I don’t care whose fault it was; if Linux is unstable, then it is unstable.
Imagine saying this to a stock exchange: “yes, the computer crashed, but Linux did not crash. It was an application that crashed. Not Linux.” Probably the manager would say “I don’t care whose fault it was, Linux crashed. Change to a real OS, not this piece of shit.”
You can never have a stable OS, when you have unstable ABIs as Linux has.
And Linux overcommits RAM by default: it allows programs to allocate more memory than RAM plus swap can actually back. When all RAM gets used, Linux will kill processes, more or less at random, to get RAM back. Imagine a computation that has run for a month, and suddenly Linux kills it? Piece of shit.