Mark Shuttleworth: “I simply have zero interest in the crowd who wants to be different. Leet. ‘Linux is supposed to be hard so it’s exclusive’ is just the dumbest thing that a smart person could say.” He’s right. Lots of interesting insights in this blog post – I may not agree with everything Ubuntu does, but at least it’s doing something.
Every article I’ve seen about this blog post goes on about the “I simply have zero interest in” line, which was at best tangential to the main content of the post (whether or not to move to a rolling release). Why?
maethorechannen,
That’s a good point; what’s more, the line is a strawman. The *real* argument goes something more like this: advanced users don’t appreciate having their platform dumbed down for non-savvy users. Instead of addressing the real argument, Shuttleworth chose to argue against a strawman.
(If I’m wrong and someone actually did say what Shuttleworth is quoting, can someone clue me in with the source? Thanks)
IMHO a desktop OS can be powerful and friendly at the same time, and Ubuntu was doing a great job of it until a few years ago, when I left it for Mint, which was doing a better job of being a user-driven OS while Ubuntu became a fiat OS.
You say user-driven, but where are the users for Mint, or Linux in general? Desktop Linux has failed; seeing this, Canonical has decided to try to create something that can change that situation. Linux Mint is for people who resist change at all costs: they have essentially recreated Windows 95 out of a modern desktop – its underpinnings are still essentially Gnome Shell.
Why would people move to Linux when there is nothing appealing that is actually tangible? Sure, there is the story about low cost, but for most computer users the OS is effectively free anyway when they buy their computer – few realize its price is bundled into the purchase. Unity Touch is truly stunning, and is already getting rave reviews from the mainstream. People will actually want Linux because it is better, not because it is cheap.
Lunitik,
“Linux Mint is for people who resist change at all costs, they have essentially recreated Windows 95 out of a modern desktop – its underpinnings are still essentially Gnome Shell.”
I sincerely think you’ll find that people like myself abandoned Ubuntu *not* because we’re opposed to change at all costs, but because of Ubuntu’s attitude that we should accept their changes regardless of how useful they are for us.
We understand that Gnome 2 was an aging code base, and we are in fact happy to participate in the next-generation desktop; however, we don’t believe in dumbing down our desktops with replacements that drop features and whose primary asset is eye candy.
The message to Shuttleworth: go ahead and make a friendly desktop, but apply more common sense in making features that look good and also work well for pros. Above all, listen to your users (for god’s sake, this shouldn’t even need to be said!). Make the desktop flexible enough to be configured how WE want it. We want to promote Linux by making it powerful and easy, not by dumbing it down. By ignoring us, you may be making more enemies and throwing away an opportunity to make a truly better Linux desktop for everyone.
Wrong-o. Gnome 3, Unity, Windows 8 and the rest of that ilk are *NOT* modern desktops/OSes.
They are piles of reeking crap meant for asshats like yourself who are too f*cking stupid to turn off a computer when you’re *NOT* using it.
I’m not sure if he quoted something specific, but unfortunately I do recognize the attitude he describes. In fact just this week I got a rather similar response in the #archlinux IRC channel (though they don’t appear to have any public logs).
As for sources, a google for “you should not be using linux” turns up some of it.
But what does that have to do with rolling releases? What does that have to do with display servers like Mir, Wayland, and X11? The technical details of all of the above are not relevant to the novice, who isn’t going to know which technologies are in play. The details are invisible to them, as they should be.
Being right about elitism=bullying=wrong doesn’t automatically make you right about everything else. It really feels like he’s deliberately trying to throw the conversation off topic.
It’s amazing people keep falling for this kind of thing.
And that’s why we have a choice of a large number of different distros.
Seriously, this sounds a lot like those folks who brag about how they liked certain bands before they “sold out and became popular”.
“I simply have zero interest in the crowd who wants to be different. Leet. ‘Linux is supposed to be hard so it’s exclusive’ is just the dumbest thing that a smart person could say.”
Shuttleworth got what he wanted: he steered attention away from the actual issues raised by the community. How about talking about “Mir”, “lack of community input”, “secret projects”…?
I don’t see any issue.
Canonical is only getting flak because they are no longer pandering to enthusiasts. Android is open source but developed behind closed doors, and it is pretty successful. What has desktop Linux accomplished? I love KDE, but I have never met anyone outside a LUG who has even heard of it. My own father has come to me excited about what Ubuntu is doing – the mainstream is excited about a Linux user interface! This excites me far more than anything people are complaining about brings me down.
Xorg is crap, and Wayland introduces a new protocol for everyone to speak when apps are already speaking to each other via D-Bus; all that is necessary is something that can take the in-kernel stuff and render it on screen across the many form factors and multi-screen scenarios. There is a reason no one has adopted Wayland since it was announced in 2008, and frankly I think Canonical just got sick of waiting. Here’s the kicker though: Mir could implement the Wayland protocol if necessary – it competes with Weston, it just isn’t based on Wayland right now.
Come to BC, Canada. We expose ~13,000 students to KDE3 every year. And about a third of those get to experience KDE4.
Unfortunately, we’ll be migrating the KDE3 desktops to XFce this year, instead of KDE4.
There are many big Linux rollouts; there are a few in the hundreds-of-thousands-of-systems range. Again, I ask what this has accomplished. How many people using your systems care that they are running KDE, or have ended up migrating to Linux at home?
The fact that you’re migrating to XFce really just shows your deployment is about cost. That is fine, but I am sick of seeing the only Linux offerings in stores being extremely underpowered low-end machines. How can Linux impress when it is confined to targeting people who are cheap?
Without impressing people, no one will ever adopt it by choice.
We’ve received many comments from graduating students complaining about the lack of Linux in the universities around here. Especially the lack of NX Client access to their accounts, from home. We’ve also received similar comments from staff members that have moved on to other districts in the province.
We’ve also noticed a lot of student netbooks running Linux. And many Linux .iso files in student download directories.
So, while I have no definitive numbers, there are students from our district using Linux at home.
Our diskless stations aren’t uber-powerhouse machines, but they aren’t bottom-of-the-barrel either: tri-core AMD Athlon II CPUs, 1 or 2 GB of RAM, onboard Nvidia or AMD graphics, onboard 5.1 sound, onboard NIC. No hard drive, no CD-ROM, no moving parts other than the CPU and case fans. Under $200 CDN right now.
Tell me more. I’m in Ontario and what you describe sounds appealing as a way to take advantage of my gigabit LAN to hang a second seat off my PC (currently running Lubuntu).
The thing is, no one is going to buy a $3000 machine with Nvidia Titans in SLI inside it to run Linux. Cheap Facebook machines are all that you can rely on, really (talking about the desktop market). If you pay $3000 for a computer, you damn well expect it to run the latest Call of Honor: Battlefield, not Tux Racer and OpenTTD (a great game, not knocking it here).
We’re talking about Canonical, not Red Hat.
I think there’s room for both the accessible and the “l33t” distros in Linux. That’s one of the reasons why I love it; I mean, sure, it has its faults (so many of them), but at least the fragmentation also offers users a choice.
My gripe with Canonical is that their proposed changes often threaten to break that choice, the latest scare being their proposed switch to an in-house display server instead of Wayland or Xorg.
Ubuntu Announces Mir, A X.Org/Wayland Replacement:
http://www.phoronix.com/scan.php?page=news_item&px=MTMxNzI
Canonical’s Mir Project Retracts Wayland Criticism:
http://www.phoronix.com/scan.php?page=news_item&px=MTMxODY
I wouldn’t be the slightest bit surprised if, 5 or 10 years from now, Ubuntu is as incompatible with “GNU/Linux” (or whatever the hell you want to call the other desktop/server distros) as Android is with them currently. Which is quite a troubling thought, in my opinion. It’s one thing not to care about the elitists, but Shuttleworth doesn’t seem to care about compatibility either, which is only bad for Linux in the long run.
It’s kind of a double-edged sword. Desktop Linux has been a ‘thing’ for about 15 years (give or take a few), and they still don’t have shit to show for it. I mean, what are they rocking these days… about 1% market share? It’s obvious that what they’re doing is not working, so somebody has to take the bull by the horns and do something different, if you don’t want to see desktop Linux still rocking a 1% market share in 2020.
Even if it breaks compatibility with all the other distros – something like this is going to have to happen sooner or later if you want more than the 1%. And I suspect that if Ubuntu really takes off, the other distros will follow its lead, so everything works with Ubuntu.
Personally though, I think desktop Linux is, and always will be, a ‘hobbyist’ OS. Having said that, I’m sure I’m going to get modded down for expressing that opinion on this site.
The problem hasn’t been technical ability or direction so much as finances. Shuttleworth has basically been running Canonical out of his own pocket to get it to the stage it’s at today. But Ubuntu wasn’t the first “desktop Linux” with Shuttleworth’s vision (albeit without Shuttleworth at the helm) – sadly, all of Ubuntu’s predecessors have basically fallen by the wayside because they simply were not sustainable.
Desktop computing is such a tough nut to crack (between Microsoft’s monopoly on PCs and Apple being seen as the “go to” whenever anyone gets jaded about PCs/Windows) that it takes a bottomless pit of cash.
I think this is also the reason why Canonical are moving Ubuntu onto mobile platforms as well. All this talk about the desktop/laptop dying is sensationalistic crap; however, it’s also not a market that’s likely to see exponential growth again. So I can’t blame Canonical for wanting to hedge their bets. And let’s be honest, the mobile/tablet market is far from stable at the moment, so as much of an uphill battle as Ubuntu has there, I think it’s a more achievable goal than desktop Ubuntu ever being anything more than a niche.
That’s not the way Canonical should be working, though. I’m all for them driving change, but they should be working with the community. Instead they work in isolation and only really release the source as part of the OS releases. There isn’t much (if anything) in the way of submitting patches back upstream, so every other Linux developer and distro maintainer is expected to find any Ubuntu fixes themselves and re-engineer those patches. It’s not far off how Google runs Android these days.
Linux might have a number of things backwards (as I said before, it’s far from perfect) and it might be terrible for fragmentation, but at least the community collaborates well. Canonical doesn’t. And that’s what worries me – I don’t want Linux to become Shuttleworth’s vision or the highway. I like the fact that there’s a whole plethora of choices out there and that they’re all largely compatible with each other.
I can’t say I agree with you, but you’re entitled to your opinion
I think these announcements are an invitation for community involvement. Now that they actually have something to show, meaningful discussion can happen around that.
Canonical have a single vision, and that is where they want to invest. They aren’t taking away any choice, they are providing a new one. Lots of people say “why do that when you can fix this?” or “we already have this, why do that?”, but such statements are very ignorant. Just look at the current landscape: people are complaining about Mir because they think Wayland is software and we already have Xorg. Weston isn’t being used anywhere right now, and even Xorg developers don’t understand that codebase. Wayland needlessly introduces a new protocol, which Xorg developers created to be a simplified version of the same thing. It just isn’t necessary…
Linux is about freedom, but I don’t think we understand what freedom is. It isn’t necessarily true that if they weren’t working on or using one thing, they would be concentrating on the thing you care about. Just as likely is that they’d simply not be in the space at all, not contributing anything – which was a complaint people leveled at Canonical for a long time.
People are supporting Android; has that taken away from support for desktop Linux? Who is complaining that Google is competing with Gnome, KDE or Xorg? Who is clamoring for Google to adopt Weston? Where are the app developers clamoring to implement Wayland in their software?
Just about everyone that works on Weston and Wayland has worked on X. The people working on Mir appear to be the ones that don’t understand Wayland and Weston.
The main point behind everyone’s derision is that they could have done everything Mir does on Weston (or on Wayland). The fact that they didn’t understand this (or even ask the devs if it was possible) speaks volumes.
Canonical so far has sucked at doing Linux plumbing (i.e. few contributions). Even if they are correct in going their own way, they may lack experience about what works and what doesn’t. They also lose the community mentors that DO have that experience (the Xorg and Wayland devs).
For me, it is good that they aren’t particularly familiar with those projects. I simply do not think there is anything there that is beneficial to the use case Canonical is shooting for. The fact that Wayland has reinvented the wheel in the IPC space is reason enough to balk at it; I think one of the key reasons for Mir – as can be seen in their docs about it – is that it is protocol-agnostic.
What that means – as they point out – is that they can support X11 or Wayland or DBus; it is utterly irrelevant to Mir what anything is using to speak to anything else. Its job is to make sure everything ends up on the display as intended.
I would ask though, do you think Apple or Microsoft developers are familiar with Xorg or Weston? They seem to be doing pretty well in the graphics department. I would bring up SurfaceFlinger on Android too, but something tells me they probably are familiar with Xorg.
I suspect that Apple or MS have FAR more resources to throw at infrastructure than Canonical (and plenty of experience). Also, it does help to know the low-level Linux infrastructure. The fact that their critique of Wayland was utter BS does not bode well (if you’re going to make a technical critique, you maybe should know what you’re talking about).
The way Canonical has executed on their other projects does not leave me hopeful. Just look at the number of false starts with Unity.
SurfaceFlinger is a really bad example. Only recently has it become decent enough to compare to other display servers (with basic features like mirroring or changing resolution). It has taken several years to get to this point (SurfaceFlinger has been in development since the beginning of Android, before its purchase by Google).
It should also be noted that they want Mir to be equivalent to SurfaceFlinger AND Xorg. That implies complexity that I just don’t think they’re prepared for. By the time they’re done they will have reimplemented the Wayland protocol (SurfaceFlinger uses Android’s Binder as its IPC).
I am quite sure their rationale for writing Mir was based on the situation when the project was started. It seems that in the meantime Wayland/Weston have fixed the issues, so the rationale is no longer applicable. The complaints pertaining to input, for instance, seem to concern something only recently fixed in Wayland/Weston, and thus fresh in the minds of those developers.
What about how Canonical has gone about things doesn’t leave you hopeful? There were no false starts with Unity; there was a Netbook edition, which was apparently started before bringing in real design people. Since then, there have been two implementations that shared most of their code: Unity 2D and Unity 3D. Unity 3D was functional, and the lead developer of Compiz was around working on it. Whether he left because of Mir, or Mir was written because of his complaints related to Wayland, is unclear (he at least said in his blog that he doesn’t want to implement Wayland in Compiz), but that situation changed.
Unity 2D was essentially a test, and was used in Ubuntu TV. Ubuntu Touch, though, seems to be using Qt5 along with QML2 – neither of which was around for Unity 2D. Experimenting with this, they seem to have decided it was better, whereas previously Compiz+Nux was better for their use. Compiz was a technology that served Unity well enough, but I think Qt5 offers far more for developers.
For me, what you call false starts are really just a display of the developers being dynamic, utilizing the best technology available while maintaining a consistent feature set. Looking at the Ubuntu Touch interface, it seems to me they have made the right decision – I do not think it would have been possible with what they had before – yet still we see the basic elements are the same, evolving naturally despite being radically different. Amazing!
I brought up SurfaceFlinger to show another example of a company neither going with Xorg nor implementing Wayland. They have preferred to develop an alternative because those options do not align with their needs. We have to remember that SurfaceFlinger is C++ too, and was what the Ubuntu Touch interface was running on in the examples we’ve all seen. It would surprise me if Mir didn’t reuse a lot of SurfaceFlinger’s code, and I would bet the test-based development of Mir has been done against it – they write test cases for what is required by a feature they want to implement, then see if the code matches the test case.
I think this kind of development lends itself to the complexity they are aiming for. By coding to satisfy a test case, development is far more deliberate and thus can happen with a sort of tunnel vision. It also permits the developers to actually see their advancements and thus become more motivated to move on to the next step. I think this is how they have managed to keep Unity consistent across the four toolkits it has been implemented in – whatever it was before Nux, then Nux, Qt4 and Qt5.
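As an aside, here is a toy sketch in C of that test-first loop (purely illustrative: the clamp() function and its test values are invented for this comment, not taken from Mir’s actual suite):

```c
/* Toy illustration of test-driven development, as described above.
 * Everything here is invented for the example; it is not Mir code.
 * Step 1: write the assertions for the desired behaviour (they fail at first).
 * Step 2: implement until they pass, then move on to the next feature. */
#include <assert.h>

/* desired feature: keep a window coordinate inside the visible range */
static int clamp(int v, int lo, int hi)
{
    if (v < lo) return lo;
    if (v > hi) return hi;
    return v;
}

int main(void)
{
    /* the "test cases" that conceptually existed before clamp() did */
    assert(clamp(-10, 0, 1080) == 0);
    assert(clamp(500, 0, 1080) == 500);
    assert(clamp(9999, 0, 1080) == 1080);
    return 0;
}
```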
I think Mir is primarily being done because Canonical do not like the Wayland protocol; I do not think it makes sense to have a whole other IPC just for the client and server sides of the display server to talk. It essentially repeats the mistake of X11, where again we have an IPC which probably made sense back then but which no one really understands now. Something else I like is that Canonical has said they will only implement features that are necessary, so that bloat will not become a factor.
Essentially, all a display server really ought to be doing is handling all form factors and different display configurations, and communicating that to the hardware. Input is already in the kernel and drivers are already in the kernel; everything else should not be dependent on a particular IPC unless it is DBUS.
So instead of going to the Wayland developers with suggestions, they decided to say absolutely nothing.
SurfaceFlinger was developed well before Wayland existed.
That is all Wayland does. Dbus was looked at but rejected in favour of a protocol optimised for display. It should be noted that SurfaceFlinger uses the kernel-based Binder IPC mechanism. If one were to use Dbus, all IPC would have to be routed via another daemon (which makes no sense for graphics). Maybe Mir will also use Binder or something like it…
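To make the “Wayland is just a protocol” point concrete, here is a minimal client sketch in C (assuming libwayland-client is installed; build with `pkg-config --cflags --libs wayland-client`). From an application’s point of view there is no separate Wayland “server” binary to talk to, just a socket to whichever compositor implements the protocol:

```c
/* Minimal sketch: connect to whatever compositor speaks the Wayland
 * protocol. Assumes the wayland-client headers/libs are installed. */
#include <stdio.h>
#include <wayland-client.h>

int main(void)
{
    /* NULL means: use $WAYLAND_DISPLAY (usually "wayland-0")
       under $XDG_RUNTIME_DIR */
    struct wl_display *display = wl_display_connect(NULL);
    if (!display) {
        fprintf(stderr, "no compositor is listening\n");
        return 1;
    }
    printf("connected to the compositor's Wayland socket\n");
    wl_display_disconnect(display);
    return 0;
}
```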
The two are utterly different; there is no way to compromise here. Wayland IS the protocol Ubuntu doesn’t want – as evidenced by the “protocol independent” statements in the Mir spec. It would be like going to the KDE guys and saying “Qt sucks, let’s use GTK for everything”; the problems are that core.
What the hell does that even mean? There is no ‘community’; there are only people with 9,000 different opinions. It’s not like you have thousands of people working toward a common goal, with Canonical going in the opposite direction from everyone else. It’s more like many groups doing their own thing who somehow manage to cobble together an OS that actually works.
If Canonical has to say ‘fuck all ya’ll’ and rebuild the whole thing from the ground up in order to build something that really feels cohesive, then I say go for it.
The problem with desktop Linux is due to:
* Fragmentation.
* Backwards compatibility (I mean for large programs released 10 years ago, not “I can compile the version of screen from 15 years ago”).
* Drivers (and don’t give me the spiel about how it has more drivers than Windows; most of the drivers that aren’t actively worked on are of poor quality).
* Lack of a single vision.
Though the situation is better than when I said fuck it and went back to Windows in 2006.
“The problem with desktop Linux is due to
* Fragmentation.”
True, but on the other hand I’d honestly rather have more choices than a become dependent upon a single commercial entitee who’s idea of a successful OS is vendor lock. At least with linux and open source in general one has a very practical hedge against monopolisation. That more than makes up for fragmentation in my opinion, but opinions will vary.
“* Backwards compatibility (I mean for large programs released 10 years ago, not ‘I can compile the version of screen from 15 years ago’).”
Linux lacks kernel ABI stability, which sucks, but I’ve found Linux userspace application interfaces to be amazingly stable. Do you have a real example we can try, or are you just being hypothetical?
“* Drivers (and don’t give me the spiel about how it has more drivers than Windows; most of the drivers that aren’t actively worked on are of poor quality).”
That’s partly fair. But consider the difference: the hardware you’ve bought was manufacturer-certified to run on your version of Windows, while you probably never bought hardware certified for Linux (right?). Still, as an engineer you should be *astonished* that you can even do this with Linux and have a good chance that it will just work out of the box.
You guys don’t know what lock-in really means. Lock-in is where you are dependent on a third party for supporting a bespoke application that all your data is tied up in, and the company charges you £10,000 for a bug fix which you know is only a few lines of code.
Sorry, Windows is hardly lock-in compared to what I have experienced being on the other end.
It may be, but fragmentation causes problems with software and hardware support. Microsoft tries really hard, with a few exceptions, to keep things backwards compatible.
If it isn’t open source and doesn’t bundle all the required libraries, I suspect you would have problems getting an old program running, because you tend to get into dependency hell.
You can symlink libraries etc, but you are relying on it having the same API as last time.
I understand the reasons why, which is fair enough. But I don’t really care why when I am an end user; I’ve just got other shit to get on with. That is why I buy software that either provides the support or easily does what I need to do.
It’s swings and roundabouts. I use Linux quite a lot at home, but I’ve seen people struggle when things go wrong with any computing platform.
Android and Chrome OS will become the mainstream Linux on general purpose devices.
lucas_maximus,
“Sorry, Windows is hardly lock-in compared to what I have experienced being on the other end.”
The other end meaning what? Linux? We know that’s not true, but then I can’t tell what you meant.
“If it isn’t open source and doesn’t bundle all the required libraries, I suspect you would have problems getting an old program running, because you tend to get into dependency hell.”
I’ll wait for you to provide a realistic example from a decade ago that can be tested.
“But I don’t really care why when I am an end user; I’ve just got other shit to get on with.”
You know, if you were serious, then you’d have bought hardware and software that were Linux-certified. Most likely, like most people, your expectations of Linux are so high that you download a free community-supported distro, add arbitrary hardware, and then expect it to work without any issues. It is a testament to Linux that this works as often as it does, but if you want *guaranteed* results then you should be going to a Linux vendor that *guarantees* results; otherwise you are taking the end-user-supported route, and you should be prepared to support your personal configuration.
If you are one to complain about self-supporting your own personal configuration, that really means you should have gone the vendor-supported route. Next time you’ll know, right?
I think you’ve already made up your mind here, but please consider what I’ve said seriously. It’s reasonable to expect a Linux-certified system to run as well as a Windows-certified one. But it’s also reasonable to expect some tinkering when you put together your own uncertified hardware.
Didn’t really read what I said, did you? I was saying that being locked into bespoke software is far worse than being locked into Windows or other Microsoft products.
Try compiling Latest Firefox on Ubuntu Warty.
The point is that I am happy to tinker most of the time; at work I am not (I don’t use Linux at work).
People I know IRL aren’t nerdy enough to want to piss about with the computer. This goes back to fragmentation etc. etc.
This really isn’t hard to understand.
lucas_maximus,
“Didn’t really read what I said, did you? I was saying that being locked into bespoke software is far worse than being locked into Windows or other Microsoft products.”
I did; it was just out of context in our discussion of OS-based vendor lock-in, but it looks like you meant it that way.
“Try compiling Latest Firefox on Ubuntu Warty.”
So in other words, you were bluffing when you made this statement: “* Backwards compatibility (I mean for large programs released 10 years ago, not ‘I can compile the version of screen from 15 years ago’)”
We can all make up hypothetical examples, but if you had actually experienced a real backwards-compatibility issue that affects real end users in practice, I’d have been curious about it.
“The point is that I am happy to tinker most of the time; at work I am not (I don’t use Linux at work).”
So when you complained about getting shit done earlier, that was being facetious? Haha, there’s no way to win with you.
“People I know IRL aren’t nerdy enough to want to piss about with the computer. This goes back to fragmentation etc. etc.”
People like this aren’t always open to trying a new OS in the first place, but if they are you could point them to supported hardware that should get them up and going with minimal fuss:
http://www.ubuntu.com/certification/desktop/
Oh, he completely has. His arguments are cyclic, hypocritical and often completely misinformed – i.e. your typical fanboy.
When are they any of those?
Making cheap shots is what you are good at. Missing the point when you know what the other person is saying is another. Having your head up your own arse and making snarky comments is another.
Canonical have pretty much killed any chance of Linux as a gaming platform because they aren’t going to include Wayland and instead work on their own display server.
I like Microsoft’s stuff. It works for me, but every criticism I made about Linux is the one reason why it doesn’t get anywhere as a desktop OS for the masses.
I even quite like using Linux when I get the chance to tinker.
Every fork of Linux that is commercially successful is either embedded or tightly controlled by one company (Android, Chrome OS) or is used in “Enterprise” scenarios (Red Hat, SUSE).
MacOSX is the desktop *nix, like it or not.
How ironic, when it was your “I’ve worked with people like you before” comment that first lowered the tone.
It’s those conclusive statements about situations you can’t even pretend to know the final outcome of that make your arguments so ridiculous.
I do agree with you that Canonical’s move is really going to fuck things up (and not just for gaming). In fact, that was my opening argument in this thread. However, it’s more than a little premature to be drawing any conclusions when Mir isn’t anything more than an announcement.
And this is where your arguments keep falling down. You accuse me of missing the point yet you’re constantly jumping to conclusions.
There’s your problem: you load a hands-on distro to tinker with rather than an “it just works” distro, then complain that Linux doesn’t “just work” like Windows.
No shit Sherlock. Do you have any idea how expensive it is to employ developers, designers, advertisers and so on? And you can’t have commercial success without having a company running in the first bloody place.
But no, apparently in your world Linux should be commercially successful without anyone working full time on it and without any companies to back it. Apparently software becomes successful by magic.
…and your point is that there’s only room for one UNIX-like OS on the desktop?
It depends how you choose to take it.
Because they are the natural conclusion to jump to.
With Canonical going their own way, Mir is going to be buggy as hell for the first few releases. Valve is going to start asking them what they are playing at.
There are enough issues trying to get games that worked on Windows 7 and Vista to work on 8 (Crysis just won’t launch unless I really fiddle). How do you think it is going to play out on a different kernel, userspace, etc.? Even John Carmack has said they should probably just invest their efforts into WINE instead.
Sorry, it’s not called jumping to conclusions; it’s about reading between the lines.
I don’t expect it to work like Windows. However, I can do this thing called looking at it from a different point of view.
Other people don’t care how something works as long as it works.
Yes I do understand the expenses.
Which is why I said fragmentation is a problem: resources being split because of stupid arguments in the community. The already small pool of resources isn’t being focused.
Linus’s mentality doesn’t help either.
Apparently while we are name calling, that was too much of a stretch in your imagination.
No, that isn’t what I meant. I am saying that it is the only one at the moment which is getting it right.
A lot of prominent people in the JS community have moved to MacOSX, and I have spoken to people at work who have done the same… they don’t have time to fiddle any longer, MacOSX has the tools they need, and they are happy to pay the premium.
Now it seems you are jumping to conclusions ;-).
Cheers mate.
No, it’s natural to speculate. My issue is how you keep touting your assumptions as fact.
Don’t even bother with that bullshit. You’re not in possession of all the facts, as events haven’t even started to play themselves out yet. So while your point of view might be a logical assumption (after all, I did raise the same concerns), it’s still far too early to make an informed prediction. Thus you are jumping to conclusions.
The difference between you and the others is that everyone else at least had the maturity to say their conclusion was speculative; however you seem to think that anything you dream up is an absolute fact – even if your statement is years off actually happening.
<rant>
And this is why there are so many f–king arguments on sites like these. People are unwilling to ever make a reasonable argument. It’s always one extreme point of view, or the complete polar opposite. And then those instigators get all pissy when people don’t agree with their absurd extremism.
</rant>
So can I. I’m a huge Linux critic. I’m also a huge Linux fan. I can do this thing called “seeing things from all perspectives and not seeing things as one extreme or another”
And I resent the implication that I’m a fanboy given I was the one who kicked off the complaints about Linux in this topic.
That I do agree with. Personally I like the fragmentation, but it is a double-edged sword and, from the new user’s perspective, it does create more issues than it solves.
The wider community doesn’t help either. By that I mean when new users post threads like “which Linux for a newbie”, only to be met by a page of posts, each recommending a different distro. I facepalm every time I see threads like those.
As much as I personally hate Ubuntu, I love the fact that there was a distro that stood out for new users. It’s just a pity that (in my opinion at least) Ubuntu is marred by bad design decisions.
Linus isn’t really too bad in my opinion. I think he’s more pragmatic than some. It’s RMS who I wish would just shut the hell up (don’t get me wrong, I admire his vision. But I don’t think his constant outbursts are at all helpful these days).
erm, you’re the only one name calling.
I won’t deny that I’ve been blunt and condescending, but you’re the one who resorted to name-calling.
It was a question, not a statement. You can tell because it’s postfixed with a question mark rather than a period/full stop.
No I don’t. I call it as I see it, from experience. I could be wrong. I don’t really care; it is an internet message board.
The same things have plagued desktop Linux for the past ten years and nothing has changed. In fact I think it has got worse in some respects.
Of course it is speculation … it is an internet message board.
There are so many arguments because, unless you love Linux, the Linux horde will descend upon you.
Boo-hoo.
RMS is a waste of space, but telling nvidia to fuck themselves and being generally rude to companies that work with him isn’t helpful either. He only gets away with it because he can.
Linus doesn’t lead by example; other software projects tend to take on his attitude, and it is hurtful.
“It was a question, not a statement. You can tell because it’s postfixed with a question mark rather than a period/full stop.”
No, it wasn’t really a question, by the way you asked it. Don’t pull that bullshit with me.
Right. So basically you’re being rude and extremist because this is a message board and thus it’s ok to make dumb statements. And because it’s a message board you’re also allowed to demonstrate these traits while hypocritically accusing others of it.
There are people on here who don’t like Linux who were still disagreeing with your comments.
Real mature.
Of course it was a question. Have you never heard of the difference between open and closed-ended questions? Just because it was targeted doesn’t mean it was rhetorical.
Honestly though, I really think we should just give up before we both start acting like (even bigger) idiots.
Dumb statements when? My first response was that Linux has problems on the desktop because of X, Y and Z. I only started name-calling when you made it personal.
It was a loaded question and you know it.
You’re just reiterating my point though.
Canonical are creating more needless fragmentation and further breaking compatibility.
That’s bullshit. But I expect no less from a windows fanboy.
Repetition. You already said fragmentation.
So you’ve not used Linux in nearly a decade, yet feel fully qualified to start a flame war. Nice one, troll. But how about you actually comment on the topic (Shuttleworth and his vision of Linux) instead of constantly regurgitating the same half-baked OS bigotry you spew out whenever any topic other than Windows crops up.
Lawrence… Your opinions are the cancer that is killing desktop linux.
You do not understand this because you are only 12 years old, but if the linux community does not eject harmful people like you there is no future for desktop linux.
Troll or alias? I can’t quite tell.
Because of copyright and patent law it’s illegal to fork Windows; it’s illegal to use their source code, their API, their file formats, etc. This is why there is hardly any interoperability between Windows and other operating systems.
And since Microsoft standardized the desktop market, and because Windows is closed source/proprietary, it’s almost impossible for any software vendor to provide the public with a Windows-like operating system.
It has nothing to do with Linux being only a hobbyist OS, because there is demand for an alternative to Windows, and right now Linux is as close as we get to one. It’s just that Microsoft has a monopoly on the desktop market; the consumer has no choice but to use Windows.
If you understand Microsoft’s monopoly and how they obtain their profits, you will understand how difficult it is for any OS to take market share away from Windows. There is a reason Windows has had 90% market share for two decades. The fact that Ubuntu is becoming more user-friendly and getting more support shouldn’t be laughed at. Under this IP regime that restricts competing with Microsoft, it’s as close as we get, as I said, and hopefully over time Linux will take a substantial market share away from Windows. Of course, if the government got rid of copyright and patent law in software right now, Microsoft’s monopoly on the desktop market would be over.
I just can’t understand why we continually come back to the issue of “market share” as it pertains to the “Linux desktop”. The fact of the matter is that “Linux desktops” will always be cobbled together from a jumble of parts because the community and open source philosophies allow it. That’s not a bad thing because it gives users more control over the tools that are their computers. And yes, that also means that “Linux” will lose out on “market share” because most computer users just want something prepackaged and uniform.
If Canonical wants to chase market share, fine. They’re a commercial entity and will behave as such. And if they gain customers by breaking compatibility with other “Linux” focused projects, they’ll likely draw from a crowd who wouldn’t have been interested in open source or the community, anyway.
What you describe is great for servers, use the best tool to deliver your services, great. Canonical have addressed this with juju and various other tools focused at developers.
Not everyone uses their system as a tool, though; many want to also use their systems to play and be social. Market share in and of itself isn’t important, but what it does show is a kind of democracy: people are voting with their money. In this way, it is a meaningful statistic. It means we need to look at those who are popular in the space and find ways to become more attractive.
In the server and developer spaces, Linux enthusiasts know what to deliver because they are just creating things to make their own lives easier. It is entirely symbiotic: the tools are created as a function of trying to use the system. This breaks down, though, when you are trying to design an appealing desktop. Many Linux enthusiasts still insist the command line is the best interface, and for them it is far more productive. That doesn’t cut it, though, for those who do not care about the details of an operating system and just want to communicate with friends and play games.
They are voting for those systems that do not necessitate them becoming computer scientists, and it is wrong to ignore them. Not only is it wrong, but it goes against the primary reason for Free Software. We aren’t enabling them to use free technology, because the barrier to entry is so high. This only really harms us: we have to explain why their closed file format or protocol can’t be accessed by us, and so we give Free Software a bad reputation. Further, it makes it more difficult to argue for open alternatives, whereas if they simply preferred the open platform, they would end up using the open software just because it is the default on their system.
Users do not care about the details, but if Windows remains the standard then we have to support the protocols they give users. Same with Apple and whoever else. Microsoft has started contributing to Open Source in the server space because it is what people demand. Creating a Linux system that has the same traction on the client side will force these companies to implement open standards in that space too.
I’m afraid you’ve missed my point. Ubuntu, whether using pieces integral to the Linux ecosystem or not, is what stands to gain or lose market share. The “Linux desktop” is a blanket term that people use to try to narrow a wide range of overlapping, conflicting, competing, and modular parts. It doesn’t exist as a tangible entity.
And you’re right, most users don’t care about the details. They care about what they see: the UI. But using an operating system skinned with Gnome, KDE, Unity, etc. isn’t going to skew the development of lower level system functionality.
I’m sorry, I understood your comment to be pertaining to marketshare.
It is the job of a distribution to create something cohesive out of the many projects that pertain to a given product. What you do not understand, it seems, is that the people at Microsoft and Apple aren’t all working together within their companies either. They, too, are individually working on utterly unrelated parts of the system with a particular goal in mind.
What Ubuntu is doing is awesome, truly. With Juju and MAAS, the diverse projects targeting the cloud become almost irrelevant; you just concentrate on your particular mission. The strategy is the same in the desktop space: they are creating a great experience for utilizing the applications available. Make no mistake, for a developer there really is no difference between Linux distributions until it comes time to compile and package their software. Only then do they have to worry about a particular system’s library versions and package management. It is why you only ever see one tar.gz on a given project’s site; the Linux system is quite cohesive if you know what you want to target, it is just complicated because there are many choices. Again, though, Canonical is addressing this, trying to help developers who are confused by those choices to make good decisions.
Linux is a kernel; it has never really been related to desktops. There are other components which generally target that kernel and a particular set of libraries. Unlike in the Windows world, you won’t see software overwriting those libraries, though – developers can contribute directly to the libraries upstream. In this way everything on the system is cohesive and well defined, but understanding the whole system is complicated. This is why distributions exist, and Ubuntu does a better job than anyone else at bringing it all together and delivering it in a meaningful way.
The particular goal you speak of is to contribute to a unified product that will be consumed by end users. These products are branded and marketed. It’s because of branding and marketing that the concept of market share exists. Windows is a brand. Mac OS X is a brand. Ubuntu is a brand. Ubuntu also happens to be a Linux distribution, which means it draws heavily from an ecosystem in which the actors are not all employed by Canonical and thus not necessarily interested in its success.
Again, the point of my original post. The “Linux desktop” is an amalgam of parts created by a community invested in open source software. It is not a branded, unified, tangible entity, and as such should not be viewed through the lens of “market share”.
Canonical has built a brand. Now, to steer their brand in the direction they want they will go it alone on projects like Mir. That’s all this is about. So Shuttleworth is miffed that some people within the community to which Ubuntu owes much of its infrastructure disagree with his views. Well, his interests lie with a brand that seeks to prove otherwise. Fine. But again, that’s all in service of building up Ubuntu, not the “Linux desktop”.
The problem is that as they diverge more and more, they will have to maintain more and more of the system themselves. Gradually everything that actually makes people like Ubuntu right now will no longer work on their system. Things like udev, which is why hardware works on an Ubuntu system, will probably gradually depend more and more on systemd, for instance. Services are probably going to more and more target systemd systems because that is where the enterprise distros are heading. Most everything on an Ubuntu system that actually comprises the system is going to become obsolete. Either they move to what everyone else is using or they will basically have their own system. With the systemd vs upstart thing, they have shown a tendency to not adopt what everyone has standardized on, and the rest of the community has shown a tendency to not really care about that. KDE guys are already adamantly refusing to support Mir, because it is a single distro project and they don’t do that.
The question is, can Canonical find enough competent people to actually do all this? Mark seems to fancy himself the next Steve Jobs; that’s fine, and I think it’ll be good for everyone involved if they can achieve what they want to. It means the people using Linux because it’s cool can gradually separate from the people who actually care about the movement Linux has depended on. Let them compete with the commercial entities, let them fight everyone; that isn’t what Linux is about.
Linux is about working together, sharing expertise, becoming better because of everyone involved. It is ironic that the company whose name means all this is acting exactly against it.
Isn’t that kind of the point though? Right now, you’ve got about 1% of users on desktop Linux. Canonical wants to go after the other 99%. They could call it ‘Occupy Desktop’ Bahaha!
I’d imagine Canonical wants to get that other 99% onto Ubuntu, specifically.
Sure, because Ubuntu wouldn’t even exist if someone else were offering something better, in Mark’s opinion. He has invested in Ubuntu because nothing else really addressed what he thought Linux could be. He made a lot of money off of Linux in the server space, and now he is pumping that back into the community.
Maybe he is wrong, maybe what he thinks Linux is lacking is a total non-issue for mainstream users, maybe Linux isn’t popular because people just don’t care about software. Maybe just getting Linux onto hardware will be enough to make Linux popular. Maybe all this work to deliver something beautiful is meaningless, it was just a matter of delivering it by default.
I personally certainly don’t mind the eye-candy though.
What Linux needs most of all is a good, ‘idiot-proof’ distro. Something I can install for my mom and basically not have to worry about anymore. In other words, something equivalent to a desktop-like iOS experience.
I’m not sure if ChromeOS qualifies as a ‘Linux distro’, but I’d say it looks promising. Not only because it’s pre-configured to run on specific hardware, but also because it seems to be set up in such a way that ‘normal’ users can’t hurt themselves, and power users can open it up to install a full-blown Linux distro if they wish.
I actually have the original Chrome OS machine that Google was giving out a while ago. While it is nice, it really highlights the continued need for native applications. I am hopeful that the various parties pushing HTML5 will eventually get to a good place, but as NaCl shows, not everyone will want to use it for everything even when they can.
I think what Canonical is aiming at is for you not to have to install Linux on your Mom’s computer, but rather for her system to comprise FOSS by default. For that, there has to be a compelling reason for her to be drawn to it, and I think Canonical is delivering that with Ubuntu Touch – which will eventually be coming to desktops.
MacOSX is pretty much it: a solid, reliable, easy-to-use *nix.
It just comes bundled with expensive hardware.
Linux is successful everywhere except where Microsoft holds a monopoly. Even Apple can’t eat away at that market in a meaningful way. Most of their success is not on the desktop either. Do they really suck at the desktop too? No.
Looks like Mark has stabbed the Wayland project in the back. If they play it well (i.e. don’t bundle it with a ton of Unity crap), Mir may be _the_ successor to X on Linux.
The reasons are mostly political (the technical “issues” pointed out by Canonical are a stretch, to say the least). The biggest issue with Wayland is that they just move too slowly and have engaged in ideology (by depending on the half-baked in-kernel GPL graphics stack). Anyone who gets rid of this dependency and deploys the stuff first is the winner.
Mir has its problems too – the CLA, being late to the market. But it will have the drivers (both Android and proprietary) and will be shipped in the millions by a major Linux distribution.
Wayland still has a chance but they:
– have to be more flexible about the graphics stack.
– need a deployment vehicle (like Wayland-on-X) to get Wayland to distributions. That’s not unlike how OpenGL compositors were deployed (Xgl).
– have to hurry up.
> technical “issues”
There are actually two technical issues that bother _me_; not sure if Mir addresses them:
– Client-side window decorations. Window management is a compositor’s job.
– Yet another IPC bus. There was no reason not to use DBUS even in 2008.
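For what it’s worth, a minimal sketch in C of what “just use DBUS” looks like for a client, using GLib’s GDBus bindings (an assumption-laden example: it assumes gio-2.0 is installed and is built with `pkg-config --cflags --libs gio-2.0`). Every call below is routed through the bus daemon, which is the indirection the display-protocol people object to for per-frame traffic:

```c
/* Sketch: one synchronous D-Bus call via the session bus daemon.
 * GetId() on org.freedesktop.DBus just returns the bus's unique ID;
 * the point is the round trip through dbus-daemon. */
#include <gio/gio.h>
#include <stdio.h>

int main(void)
{
    GError *err = NULL;
    GDBusConnection *bus = g_bus_get_sync(G_BUS_TYPE_SESSION, NULL, &err);
    if (!bus) { fprintf(stderr, "%s\n", err->message); return 1; }

    GVariant *reply = g_dbus_connection_call_sync(
        bus, "org.freedesktop.DBus", "/org/freedesktop/DBus",
        "org.freedesktop.DBus", "GetId",
        NULL, NULL, G_DBUS_CALL_FLAGS_NONE, -1, NULL, &err);
    if (!reply) { fprintf(stderr, "%s\n", err->message); return 1; }

    const gchar *id;
    g_variant_get(reply, "(&s)", &id);
    printf("bus id: %s\n", id);

    g_variant_unref(reply);
    g_object_unref(bus);
    return 0;
}
```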
Could go either way. On the one hand, they were pushing for them but, on the other hand, they did move away from that kind of design with appindicators.
(Though I always switch back to the traditional tray icon or find a new application when faced with appindicators because I insist on left-click being “hide/show main window” and right-click being “show context menu”. Last I checked, the libappindicator API simply does not support letting an application register a handler beyond “show context menu”.)
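For reference, this is roughly what the libappindicator API gives an application, sketched in C (assumes the GTK and libappindicator development packages): you hand the indicator a GtkMenu and that is it; there is no parameter anywhere for a left-click “toggle main window” callback, which is exactly the limitation described above:

```c
/* Minimal appindicator sketch. Note the API surface: an icon, a
 * status, and a GtkMenu. Both left- and right-click open the menu;
 * there is no slot for a custom activate handler. */
#include <gtk/gtk.h>
#include <libappindicator/app-indicator.h>

int main(int argc, char *argv[])
{
    gtk_init(&argc, &argv);

    AppIndicator *ind = app_indicator_new(
        "example-indicator", "indicator-messages",
        APP_INDICATOR_CATEGORY_APPLICATION_STATUS);

    GtkWidget *menu = gtk_menu_new();
    GtkWidget *quit = gtk_menu_item_new_with_label("Quit");
    g_signal_connect(quit, "activate", G_CALLBACK(gtk_main_quit), NULL);
    gtk_menu_shell_append(GTK_MENU_SHELL(menu), quit);
    gtk_widget_show_all(menu);

    app_indicator_set_status(ind, APP_INDICATOR_STATUS_ACTIVE);
    app_indicator_set_menu(ind, GTK_MENU(menu)); /* the only "handler" you get */

    gtk_main();
    return 0;
}
```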
Window decorations are not a valid complaint; as an example, KWin is implementing Wayland and will still be doing decorations.
Again, Wayland is a protocol, which Weston implements. Weston is only a reference implementation, though; Gnome’s Mutter is also implementing the protocol, as far as I can tell. At the end of the day it is highly unlikely anyone will actually use Weston at all.
This is really not a good situation, though: everyone moving away from Xorg will have to implement the Wayland protocol in their own code, so plenty of duplicated effort will be happening. I would be surprised if another project doesn’t turn up too, since everyone will be writing display servers anyway. As I said elsewhere, I just hope closed drivers start targeting KMS and GEM et al. rather than trying to track all of the mess that is going to happen.
For me, the real reason is that Wayland is another IPC, and there is no need for one when DBUS is already widely used. This is the key takeaway from all the docs about Mir, which I think they understood they’d have to write anyway – had they just called it Wayland support in Compiz, it would still have been a reimplementation of Weston.
You’re also forgetting the lack of technical ability at Canonical. The fact that they couldn’t understand Wayland shows that lack of skill. They might be able to get something half working, but I’ll bet that (like Unity) it will be absolutely plagued by bugs and incompatibilities for some time.
One doesn’t simply produce a quality display server easily and quickly; see the fact that Android’s SurfaceFlinger still isn’t anywhere near as good and flexible as other display servers (even after all these years).
Note that the most experienced people in Linux graphics support Wayland. And some of them didn’t have very nice things to say about the code quality of Mir.
But we’ll just have to wait and see if Canonical can avoid the most likely outcome.
The graphics companies don’t need to do anything about Mir or Wayland (actually Weston; Wayland is just a protocol). What they need to do is write drivers for the current in-kernel stuff. The display system doesn’t need to be implementing drivers; it needs to be displaying things on the screen. It is the job of the kernel to interact with hardware.
The problem is Nvidia and ATI just can’t be bothered to adopt the new stuff, but if they did, it wouldn’t even matter what display server is used – everyone would benefit. That shouldn’t be a concern of anyone but Nvidia and AMD, though; they are the ones refusing to adopt the new stack.
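To illustrate what “the new stuff” looks like from userspace, here is a minimal KMS probe in C using libdrm (a sketch under stated assumptions: a KMS-capable driver exposing /dev/dri/card0, built with `pkg-config --cflags --libs libdrm`). A driver targeting this kernel interface works the same regardless of which display server sits on top:

```c
/* Sketch: ask the kernel's KMS layer what displays exist. No X, no
 * Wayland, no Mir involved; this is the in-kernel stack the drivers
 * would target. Assumes a KMS-capable DRM device at /dev/dri/card0. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

int main(void)
{
    int fd = open("/dev/dri/card0", O_RDWR);
    if (fd < 0) { perror("open /dev/dri/card0"); return 1; }

    drmModeRes *res = drmModeGetResources(fd);
    if (!res) {
        fprintf(stderr, "device does not do kernel mode setting\n");
        close(fd);
        return 1;
    }

    printf("connectors: %d, CRTCs: %d\n",
           res->count_connectors, res->count_crtcs);

    drmModeFreeResources(res);
    close(fd);
    return 0;
}
```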
Wayland isn’t a display server, it is a protocol – essentially another IPC.
And the same does not apply to X/Wayland/Mir? Suddenly it’s bad to have a choice?
You could say the same about Wayland/Weston. Why make your own new display server and protocol instead of improving X?
The choice is really irrelevant.
Xorg is heavily broken, everyone knows it. Wayland and Mir will run X11 applications, and Mir will run Wayland applications; nothing really changes.
What is great is the display server doesn’t have to matter. The drivers will focus on in-kernel mechanisms, and most of the rendering will be left to the toolkits. So long as those two talk it is utterly irrelevant how it is happening.
The very fact this is even an issue today just shows how broken Xorg is. This shouldn’t require everything and the kitchen sink, but X11 defined a complete display stack with network transparency. What is even more humorous is that we end up using NX or SPICE or VNC instead of the X11 mechanism anyway, yet people still seem to care about it.
That may be Shuttleworth’s plan. It probably annoys him that so many of his former users voted a big fat no to Unity and switched to Mint. He probably thinks of Mint as largely their work and wants to make things harder for any distros downstream. Locking Unity into a new windowing system would have that effect.
To be honest, I don’t really care what becomes the standard; whichever one actually delivers on its promises first will win (and considering Wayland’s delays and Ubuntu’s tendency to half-ass something before changing direction, it could be neither). It will be open source and adopted by others. Now stop trying to win the “political battle” and get to coding!
While that would be great, I don’t think many other Linux distributions adopted Unity, and Ubuntu did not abandon it.
There was already stiff competition among window managers and desktop environments, whereas people are just waiting for a replacement for X. Completely different situation.
People attack Ubuntu out of envy and jealousy, as simple as that.
I can understand statements like this being applied to expensive platforms like OSX, but to Ubuntu? Seriously?
No, right now I’m attacking it because of the copyright assignment requirement for contributing to their projects and a severe case of NIH syndrome.
But 5 years ago, yeah, jealousy and envy were pretty much the only reasons.
So funny you say that, ’cause you have to do the same if you want to contribute to Qt, and yet I don’t see anybody bitching about it.
And about the NIH syndrome: please, what project doesn’t do that? Cut the hypocrisy.
Qt makes some sense because it has a commercial version, and it would make quite the mess if contributions could not flow into the commercial version: you don’t want the two to stray too far apart.
GNU, on the other hand…. but it looks like they’re starting to loosen up too ( http://www.gnu.org/licenses/why-assign.html )
“Starting to loosen up” is simply a display of ignorance; that has ALWAYS been the case for GNU software. People are acting like it’s something new, but really it is all they can find to argue against Ubuntu. I know this from personal experience, because I have tried to use this very argument to justify why Canonical are bad.
In truth, there is no good argument, everything they are doing is justifiable right now. People are resisting change, and are focused on their particular interests rather than the bigger picture. Granted Canonical isn’t making it very easy to look at the bigger picture, but it is surprising the backlash that has come when they actually tried to clue us in.
I’m not sure you understood me correctly. If I remember correctly, it used to be mandatory to assign your copyright to the FSF when you wanted to contribute to GNU projects. The link shows that projects can now choose whether or not they want their copyright handled by the FSF. That’s what I meant by ‘loosening up’.
I’m not sure I’m arguing against Ubuntu: while I usually think copyright assignment is a bad idea, pointing out that it makes sense for TrollTech/Nokia and even for the FSF/GNU doesn’t seem like an attack on Ubuntu at all.
So, all Ubuntu needs is to go commercial and it would make sense? Double standards at their maximum.
Apache and Qt use CLAs – are you complaining about that?
All the CLA is doing is taking responsibility away from the project if your code infringes someone else’s patent. If, as a result of being able to, the project changes the license to something you don’t like, you can simply fork from the last point before the change.
It becomes a question of whether you trust the entity, and that is a personal decision. It clearly isn’t actually about CLAs, though, as I’m sure your system has software that uses one right now. If you don’t trust Canonical, don’t use Ubuntu; no one is infringing your rights or freedom, so why infringe upon theirs?
Canonical has a right to protect themselves, and are free to invest where they wish. You are free to utilize their work or use something else. What are you actually accomplishing by ranting about it though? You are just wasting your own time and time is the only true resource we ever have.
Copyright assignment and a CLA are two very different things.
Copyright assignment goes a lot further, though: it takes away your right to relicense the software you wrote. A CLA (typically) doesn’t. Copyright assignment also gives e.g. Canonical the right to relicense the software you wrote as they wish. CLAs (typically) don’t.
Bitching about Canonical doesn’t infringe upon their rights or freedom.
A CLA (typically) doesn’t.
So, the keyword here is “typically”, because I know that Qt gets the right to re-license your code, so both are in the same category.
I’d vote you up if I could – I didn’t realize the Qt CLA was so broad (and it indeed is).
Still, the fact that you retain the right to re-license your own code is a huge difference between copyright assignment and (almost all) CLAs, of course.
Agreed re Linux user elitism. However, there seems to me to be an unquestioned assumption in the more populist Linux circles that end users want tons and tons of eyecandy.
In my experience, this is incorrect – and really I think it’s another kind of elitism. (“End users are stupid, let’s put in lots of blinkenlights to keep them interested.”) Geeks care that their desktop looks good. Most end users don’t even change the default wallpaper. Users like prettiness, but given a choice between better looks and better usability, I would bet on most people choosing usability.
And Canonical has consistently piled on the eyecandy since Unity came out, while letting bugs go unfixed and features unimplemented…
Why do you think Ubuntu’s popularity has declined? Why do you think Windows 8 isn’t selling as fast as Microsoft thought? It’s because most people want their computer to be a tool, not a toy.
I am not sure how you are defining usability, but if you mean “lets me accomplish my intended goal”, then I agree.
I think Linux enthusiasts need to better understand what people actually do with their computers these days: primarily socializing and entertainment. Most people in the Linux space are still talking about the core tasks of an enterprise – browsing, e-mail, office, things like this. That isn’t going to push adoption.
Ubuntu is beginning to have a great story for these people; gaming is a good example on the entertainment side. Something like Netflix needs to get on board too, and then it would begin to be an appealing option. Still, simply offering the same functionality is not going to draw people in; at the end of the day you need a wow factor.
Apple benefited by being first in the smartphone and tablet spaces, but has not learned its lesson about being closed. Microsoft benefited last time around by permitting a wider range of companies to profit from offering its product. Right now Android is benefiting from the same dynamic, but I don’t see people particularly committed to it – mostly it is just more affordable and offered by their particular carrier. Ubuntu can capitalize on this: people are already responding positively to the touch interface, and these days most online services have a mobile version even if they don’t adopt Ubuntu proper.
For me, though, I can’t wait to get the Ubuntu Touch interface on my desktop! To be perfectly honest, I don’t even care what is rendering it on the screen provided it works.
No, not really. It took a really long time before they enabled Compiz effects by default, and when they finally were enabled, the settings were rather restrained (no wobbly windows, cube desktop switching or other nonsense).
Funny, that’s exactly why I use Ubuntu. It has usability out of the gates and I don’t have to spend countless hours tweaking settings. Sure, I have done all that stuff (afterstep, awesomewm, black|open|flux|box, etc etc) and it was a learning experience but now I just want things to work and actually get on with doing interesting things.
Unity is the best thing that happened to Ubuntu in terms of end-user usability.
You know, I keep hearing this but I never see it backed by any facts.
Unity is a Compiz plugin; everything you see on the screen related to Unity is rendered by Compiz. Being a plugin, it has the same status as wobbly windows. This is changing: Mir will replace Compiz and use in-kernel functionality. That is great, but if they start trying to dictate driver-specific things for Mir, they suck.
Who do you think has provided the mechanisms Ubuntu uses to create that usability? They have still done absolutely nothing outside their particular investments. What they are doing is great – they have concentrated on what a distro is supposed to do: present the software in a seamless way. Do not think for a second that the software was written by Canonical, though; they are solely in the presentation arena. Be it Juju enabling all the server goodness, or Unity/Mir on the desktop, absolutely nothing they’re using to make these worthwhile is from them. Nothing you actually run from Unity, none of your hardware support – nothing is there because of Ubuntu; it is only the pretty widgets around your actual task.
I don’t think this is true, what interesting things are you doing on your Unity desktop that you couldn’t do elsewhere?
Again, I don’t think this is true. They have produced a beautiful system, but nothing you will actually use on a day to day basis is being done by them. Are you sure that it is actually contributing to usability? I spend half my time trying to remember where it put things, but Unity is certainly pretty and gives me the tools I am familiar with. We can begin to talk about Canonical usability when some of the Touch apps come out, but for now they aren’t responsible for anything you actually use, they are only responsible for throwing that on screen.
Have you actually seen genuine statistics showing Linux has increased exponentially because of Ubuntu? I saw statistics earlier today showing that Linux marketshare has actually grown at a slower pace since Ubuntu came about than in the years leading up to it. What really seems to have happened is that fewer users have gone to the other projects, which means those projects have gained fewer technical people – or even lost some – while more average users come aboard.
For me, it is a shame Ubuntu is marketed as the best Linux distro for newcomers. As much as what they’re doing impresses me, it has never been the best-put-together distro around – the simple fact of the matter is Fedora will always win there, because those developers actually write the software being integrated into the system. It is possible that will change now that Ubuntu is writing much of the stack themselves, but I think most of it will prove to be wasted effort.
As Ubuntu moves further and further away from the community, the rest of the community seems to be coming together – projects like systemd, and efforts to ensure the core Linux system is the same across distros, for instance. Maybe Canonical is bringing ubuntu to Linux, just not in the way he intended. I would ask whether he thinks it is money well spent and a life worth living if he ends up being the cause of an even closer-knit Open Source community after he’s gone.
I think it would, so for me, it is really win-win for him.
Obviously I’m talking about before Unity.
Yes.
I didn’t say I can’t do them anywhere else, but there’s less fscking about with things before I can get started on them with Ubuntu.
To put it simply, Ubuntu & Unity works well with my workflow, better than any other distro and that is really all that’s important.
Except…Unity itself. And Upstart. And the software center.
Well, precious little I would use on any other distro is done by the distro itself anyway so I don’t really see your point.
You mean just like every other commercial Linux company, like SUSE and RH?
Uh no? so? I wasn’t talking about Linux users, I was talking about the lack of support for the assertion that Ubuntu is losing users.
Talk about an NIH project. RH could easily have used Upstart or runit or any of the other pre-existing improved service management systems, but no, they had to create their own (inferior) one.
systemd exists because upstart is fundamentally broken.
With upstart, if you start dbus, every installed service that can use dbus will start. This is not at all useful or intelligent. There are other problems systemd solves too: for instance, every service can be managed directly, so its RAM and CPU use is configured as it is initiated. Further, each service can have its own private /tmp – the shared /tmp being one of the leading causes of security issues on a Unix system.
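To illustrate the kind of per-service control being described – a minimal sketch only, with a hypothetical “foo” service, and the exact directives available depending on your systemd version:

    # /etc/systemd/system/foo.service (hypothetical, for illustration)
    [Unit]
    Description=Example daemon

    [Service]
    ExecStart=/usr/bin/foo
    # cap RAM via the memory cgroup
    MemoryLimit=512M
    # relative CPU weight via the cpu cgroup
    CPUShares=512
    # give the service its own private /tmp
    PrivateTmp=yes

    [Install]
    WantedBy=multi-user.target

Because these settings are applied to the service’s cgroup, they cover the process and everything it forks, and they take effect the moment the service starts.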
Further, systemd isn’t just a simple init; it is integrating the entire core. This means that Ubuntu will have to rewrite things like udev themselves or get on board with systemd eventually. Do you think Ubuntu has the manpower or knowledge to manage all the stuff that has contributed to their usability? If they don’t adopt systemd, we will see soon enough.
If upstart had been done correctly, systemd wouldn’t exist, but the simple fact is upstart is awful. There is no possibility of fixing it, because its basic design is what is wrong. In the same way, Mir was started because Canonical thinks something is fundamentally wrong with Wayland. Again, if Wayland had fulfilled what Canonical wanted, perhaps Mir wouldn’t exist – but apparently it didn’t.
I don’t think either of these exists for anything beyond technical reasons, although few want to sign a CLA for the Ubuntu stuff, and because of that none of it is being adopted outside Ubuntu. Mir will again be an Ubuntu-only program.
No, that’s not how upstart works. Only those services that have been configured to depend on dbus will start with dbus. It does exactly what a service management system is supposed to do: it manages services.
Upstart can do this too. It’s called ulimit and it’s been around for ages.
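For what it’s worth, a sketch of how that looks in an upstart job (the “foo” job is hypothetical; upstart’s limit stanza wraps setrlimit, so the limits are per-process rather than for the service’s whole process tree):

    # /etc/init/foo.conf (hypothetical, for illustration)
    description "Example daemon"
    start on runlevel [2345]
    stop on runlevel [016]

    # limit <resource> <soft> <hard>, applied before exec
    limit nofile 4096 8192
    limit nproc 512 512

    exec /usr/bin/foo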
a) that can be done trivially without systemd
b) a shared /tmp is not a leading source of security issues in Unix; insecurely created files are.
And that’s one of the main problems with systemd: it does too much and integrates too many things – like the suggestion that GNOME should have systemd as a dependency. Welcome to stupid design 101.
The only good thing about systemd is that it obsoletes SysV init.
Really? CLAs seem to work fine for Apache, jQuery, Node, etc. I don’t see any lack of uptake for those projects.
This is exactly what I said, and is broken.
It should start only the services that are needed. For instance, I probably want NetworkManager on a desktop system, fair enough, but I do not need avahi to also come up – I would prefer avahi start only when I use a printer or actually try to do something with avahi.
On upstart, the only way to ensure avahi isn’t brought up is to remove avahi, because it depends on dbus. This is broken, yet it is a fundamental design choice of upstart and cannot be changed – it is exactly why systemd exists in the first place: upstart does things in the wrong direction. systemd starts services based on what you’re trying to fulfill; upstart starts them based on what can now start because some other service is up. It cannot do something intelligent in the avahi situation I described.
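A rough sketch of the directional difference being argued here – both snippets are simplified illustrations, not the actual job files shipped by any distro:

    # upstart, event-driven: /etc/init/avahi-daemon.conf (simplified)
    # The job fires whenever dbus comes up, whether or not anything
    # will actually use avahi.
    start on started dbus
    exec /usr/sbin/avahi-daemon

    # systemd, dependency-driven: avahi-daemon.service (simplified)
    # avahi declares what it needs, and is itself started only when
    # something pulls it in.
    [Unit]
    Requires=dbus.socket
    After=dbus.socket

    [Service]
    ExecStart=/usr/sbin/avahi-daemon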
systemd does this via cgroups, so its rules apply to every fork of a given process. In the server space this is very useful, because I really don’t want to have to set ulimit on every module of apache, for instance.
The very fact /tmp is shared is itself broken. I assure you whatever trivial way you’re listing isn’t as easy as systemd’s PrivateTmp=yes in the unit file.
Systemd integrates things that make sense for it to integrate. For instance, the journal replaces the assortment of random log daemons that each did something a little different, and it rotates logs intelligently instead of having logrotate try to integrate with any number of those daemons. Again, it is designed to make an admin’s life easier. They are also ensuring that particular files live in particular places on the system – no more divergence for no particular reason.

systemd also takes on what xinetd, init, at and cron each used to do separately, which to a UNIX vet makes sense as separate tools, but viewed as a whole simply doesn’t. This brings us to the GNOME dependency: gnome-session also does much that should be handled by the init system itself, because it is initializing GNOME. There is no reason to have ten different ways to initialize things on the system, handled differently by every distro due to differing scripts, all logged in different ways, etc.

systemd brings all this together and makes a Linux system make sense again – yet it manages to do this in a way that is actually still smaller than upstart, which does none of this. systemd removes the redundancy from a task which should be extremely simple – which you touch on here:
But it also simplifies every admin’s life, while doing the right thing in all situations.
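As one concrete example of that consolidation – a sketch with hypothetical unit names, standing in for what would otherwise be a cron entry:

    # backup.timer (hypothetical, for illustration)
    [Unit]
    Description=Nightly backup

    [Timer]
    OnCalendar=daily

    [Install]
    WantedBy=timers.target

    # backup.service (hypothetical) – the unit the timer activates
    [Service]
    Type=oneshot
    ExecStart=/usr/local/bin/backup.sh

One init system schedules, supervises and logs the job; no separate cron daemon, log daemon or xinetd required.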
These don’t involve signing a CLA over to a company.
Umm, lika da Buntu, it preddy, preddy, me happy all over. Only shucks is no play SimmyCity in CeeDee holder. No too bad, my friends no work either.
For people who believe in freedom, they certainly come down rather hard on anyone who uses technology that some deem to be outside the circles of GNU/Linux (I feel gross using this term – this is the last time I’ll use it, I promise).
If Mir is a mistake, fine, let them make their mistake. If some snobby developers from Wayland or X (who obviously don’t have an agenda *rolls eyes*) think it’s crappy code, then so be it.
Continue doing what you’re (Wayland/X devs) doing, whatever it is you’re doing.
From the start it’s been clear that Ubuntu is closer to Android Linux than GN… – than traditional Linux, so why stress the fact that it eschews what has traditionally never been a part of Android?
Sure, but there is no reason one cannot state an opinion on why they’re making a mistake, or point out that they claimed to support Wayland while secretly working on a completely different solution.
It should also be noted that people would respond more warmly if Canonical had previously demonstrated the chops to do a massive infrastructure project such as this. There is always a first time, I guess…
That is what the Wayland devs said. What they didn’t appreciate was Canonical dumping on their work with utter BS.
I think the fact that the Wayland FAQ details why D-Bus wasn’t used shows that at least SOMEONE pointed out a reason people dislike it. Further, I think the plan was to adopt Wayland, but again, Wayland isn’t software, it is a protocol spec. Weston is an implementation of that spec, but it replaces much of what Unity does and made Compiz utterly obsolete. They could have implemented Wayland in Compiz, but then they’re still writing their own display server. If they’re going to have to write one anyway, why not do it how they see fit?
Launchpad is pretty huge, upstart is very much a core piece, as were usplash and libnux. I think you are overstating how big a display server needs to be, though: hardware support (graphics, input, etc.) is in the kernel, and things like fonts are implemented in other projects or directly in Qt. What the display server really needs to do is bring this stuff together and display it on the screen. The problem with Xorg is that it tries to offer everything within one codebase. Mir, instead, will just bring everything together so that Unity can manage it across the various possible configurations (number of screens, form factor, resolution, etc.).
I do not think it was utter BS; most of the comments from the Wayland devs suggest these are things they have fixed recently. We have to remember that Mir is already almost a year old – I feel the reasoning was probably valid at that time.
I think the Wayland/Weston developers were mostly annoyed because the reasons given were things they were willing to work on. I think Canonical should have echoed the protocol-agnostic story more loudly – the one thing the Wayland guys have insisted they wouldn’t have changed (heck, they are basically just a protocol plus an implementation of it; the project itself goes away without that protocol… hence Mir).
I’d say Launchpad is pretty big and Landscape isn’t exactly small.
Anyway, there always has to be a first large project, you don’t get a lot of them under your belt by magic.
It’s funny: on one hand people complain that Canonical doesn’t contribute enough or create enough innovative projects.
On the other hand people complain about their new and innovative projects.
This is much more low level than any of their other projects.
It is new, but I wouldn’t say it is innovative. People were fairly interested in the phone and tablet project because it is different from anything else out there. This, though, is called reinventing the wheel.
Who cares? It doesn’t matter. It either works out or not and if it doesn’t it’s only Canonical’s loss.
If it works out we get an improved graphics stack.
I really don’t see why people are bitching about this.
It wouldn’t be an improved graphic stack.
It would be the same graphic stack without Xorg legacy nonsense.
Well, that’s an improvement, isn’t it?
Seriously, you are looking at Xorg and deciding Mir is something big and important.
It is more like a Compiz or Beryl sitting atop the kernel. The stack is just moving down a level…
The project scope is more like usplash, just with multi-display configuration and form-factor configuration (because they have three form factors for one interface).
Canonical still isn’t contributing; they are duplicating effort because they feel others aren’t going in the right direction – they want to steer the ship instead of Red Hat. The relationship is far too much about competition; the two should be working more closely together.
The thing is, Red Hat has always pushed Linux in the server space, and guess where Linux is actually successful?
No way! There has always been competition in the Linux world: Vim and Emacs, Gnome and KDE, Mandrake and Red Hat, Sendmail and Postfix. They’ve been duplicating effort all along, and that’s a good thing.
Vi vs Emacs was really BSD vs GNU
Gnome vs KDE was about whether it was OK to use a closed toolkit; then Qt gradually became more and more open until finally its license was the same as GTK’s – yet even now there is a CLA to contribute to Qt.
Mandrake vs Red Hat was really Gnome vs KDE again.
Sendmail vs Postfix was actually a technical thing, sendmail is simply insecure, and postfix tried to fix that – hence the name.
These all made sense in the Open Source space, and the stuff Ubuntu is doing is argued for similar reasons. No one wants to sign a CLA, because it means a particular entity controls the future of the code. For most people it isn’t good enough that the community can fork; having many contributors with equal rights means no one really controls anything.
As George Carlin said, “it isn’t a right if it can be taken away.” A CLA allows those rights to be taken away.
Mark Shuttleworth IS the “dumbest thing”…
This is all about Canonical creating its own de facto proprietary Linux variant like what Android is for Google.
Mark Shuttleworth is simply trying to cover his ass and his bad, harmful decisions with utter crap like the piece cited here.
He can say whatever he wants about imaginary problems, “leetness”, etc., but he won’t change the fact that he poisons the GNU/Linux and FLOSS community with software that has hardcoded options (no options to choose from the command line), spies on users, and uses closed formats and protocols, like the one Ubuntu One uses to send and synchronize files.
Mark, mind your own shit, stop harming the community, and stop blaming it for your own tyranny, incompetence and lack of understanding.
Well, mister ubuntu, a lot of people who might be unfriendly towards what Ubuntu does might be so not because you are saying that elitism and hardness for exclusivity’s sake is dumb, but because you are dumbing down a distro to the point that it’s a pain to use for anyone who needs more than clicking on stuff.
For a while now the only people I recommend Ubuntu to are real newbies and the totally Linux-clueless. That’s OK – I don’t mind, they don’t mind, and the puppies are well fed – but mister ubuntu, do not ever come out and imply that this is _the_ way to do a Linux distro or a Linux UI.
Now, the truly dumb ones are those who think everyone else is dumb. Chew on that.
These are the same people who are eternally logged into Facebook, their online banking and their email accounts. Move them to another computer and they are lost.
I watched in horror as a friend was locked out of Windows 8 because he couldn’t remember his password to Pandora.
Many, many people failed the Linux test a decade ago when they 1) couldn’t partition a hard drive and 2) had no idea what the MBR was.
Wait, what?
Locked out of Windows 8… because of Pandora?
Color me confused.
“I simply have zero interest in the crowd who wants to be different.”
WTF are you on about, everyone wants to be different!
“‘Linux is supposed to be hard so it’s exclusive’ is just the dumbest thing that a smart person could say.”
Nobody actually said that! But there are creators of distributions who make installation/configuration less user-friendly & their forums less “hand-holding”, for obvious reasons:
1: to attract advanced users who in turn are more likely to give something back.
2: to force users to learn & think about what they are actually doing.
You chose a different point of view, fair play.
Now, Mr Shuttleworth, my question to you is: are you going to continue slating others in the Linux community, or are you going to just get on with dealing with the real issues surrounding your own distribution???
I think Mark really wants to create an Ubuntu-versus-the-rest-of-Linux scenario; he wants to make people choose, because he believes Ubuntu can win right now. He doesn’t understand that the only people he’ll win over are the people FOSS doesn’t need in the first place.
He is winning because his users don’t want to push for open source drivers, they don’t care about patented codecs. Let him go his own way, he is doing more harm than good. Linux has never been about gaining users, it has always been about something more than that.
Don’t get me wrong, spreading Linux is very important, but for the right reasons.
It looks like even when I don’t follow the crowd, I’m still a crowd member anyway (according to Mr. Shuttleworth).
There are plenty of distro maintainers that have a silly elitist attitude where to them cutting a task from 10 steps to 5 makes it less manly or less Unixy or something equally retarded. I guess they don’t have much real work to do and like to spend all day configuring.
But as silly as they are I would still take them over a design dictator like Shuttleworth who can’t handle any criticism and like Microsoft wants to force a UI that the majority clearly doesn’t like.
I also really don’t think Shuttleworth has done much for the Linux desktop. I think he is good at marketing Ubuntu as user friendly when in reality it isn’t much different from other desktop-focused distros. The major improvements in the last 10 years have been at the kernel level. If anything Shuttleworth has served as a counter-productive distraction.
Shuttleworth missed another good opportunity to keep his mouth shut and not spout bullshit.
Some of his actions go against the free/open source world, which he depends on and uses heavily in Ubuntu.
He doesn’t know the difference between a technical focus and a consumer focus. Linux started in 1991 and Ubuntu in 2004; it is Ubuntu that is doing things differently, not the Linux/FOSS community, for good or ill. Don’t spit up in the air – it will come back down on your head.