“Clement Lefebvre, the Linux Mint founder, has started working on a GNOME Shell fork called Cinnamon, which tries to offer a layout similar to GNOME 2, with emphasis on ‘making users feel at home and providing them with an easy to use and comfortable desktop experience’. Among the features that we’ll probably see in Cinnamon are GNOME2-like notifications and systray icons, option to change the panel position and other panel options like autohide, etc. Some of these features are already available through Mint GNOME Shell Extensions (MGSE), but their functionality is pretty limited.”
So now we have Gnome Shell, Unity, Mate and Cinnamon. Anyone else I missed?
At least these are only forks of the GNOME 3.x shell, and not forks of “GNOME 3” itself.
A “GNOME 3” application will still look/work the same on all of these. It’s just how you access/manage the apps that’s different.
Wonder how many of these replacement GNOME 3 shells it will take until the GNOME-Shell/Unity devs realise they screwed up?
They don’t care. They wanted to program the new shiny shiny.
But they are not forks of the same thing. Mate is a fork of Gnome 2, and I predict it will disappear. Unity isn’t a fork, as far as I am aware, but an alternate shell for Gnome 3, and Cinnamon is a fork of the Gnome 3 shell.
I haven’t used any but Cinnamon looks the best of these alternatives – but not enough to tempt me away from Xfce.
I agree that MATE isn’t sustainable long-term. What the Mint team is doing, IMHO, is a much more realistic approach. And a mighty fine one at that! It’s good to have options; it’s also no secret that a lot of people aren’t happy with either Gnome 3.x or Unity. Perhaps if these forks catch on with the masses, then the mainline Gnome or Unity developers will get a clue and act accordingly.
To a point, anyway. Gnome (and the whole Linux landscape in general) has more forks than a cookware factory. As an end user, even IF I were interested in trying any of this stuff out, I wouldn’t even know where to begin.
WorknMan,
“To a point, anyway. Gnome (and the whole Linux landscape in general) has more forks than a cookware factory. As an end user, even IF I were interested in trying any of this stuff out, I wouldn’t even know where to begin.”
Perhaps, but from the sounds of it, this is the fork most of us will want when it’s ready, because the upgrade doesn’t aim at throwing away simple, proven, useful features.
If you’re a normal person rather than a geek, you start where all the *other* normal people start – with a *product*.
Only us technophiles are paying attention to the roiling changes that have resulted from the mobile explosion.
But it’ll shake out in the end, just as the WIMP revolution did.
And which one of the 300 products would one start with?
With one you can buy pre-installed, of course. For home that would be Ubuntu (http://system76.com), for enterprises either Red Hat or Suse (say, http://hp.com).
Too much choice for ya?
Myself, I’ve stuck pretty consistently with Linux Mint since around 2005. Not to mention the large portion of my friends and family I’ve turned on to it.
Anyway, this push from the major players to put us in a kiosk/phone/tablet interface is biting them in the ass. What works well on a phone/tablet/kiosk sucks on the desktop, and it is showing. The Gnome 3 devs’ work, the crap that is Unity, and not to mention the upcoming Windows 8 “Metro” are nothing but mental masturbation. Clem from Mint is doing an excellent job: Gnome 3 may be the future, but he’s making it usable with Cinnamon. Hats off to Clem, IMO!
At least Metro makes sense when you consider the perspective that it represents a way for Microsoft to leverage their dominance on the desktop. I don’t think the FOSS crowd can make a similar rationalization to justify their turn to mobile interfaces.
It’s certainly open to question whether putting an unpopular (though well reviewed) mobile interface into a mouse-driven environment will make people want to use that interface on either.
I have a sneaking suspicion that Apple may be right – a WIMP interface is fundamentally different from a touch interface, so perhaps they should be different.
Fortunately, the FOSS crowd has a far better rationale for adding mobile interfaces to the already large user environment buffet – choice. Gnome 3 and Unity may not gain traction, but if not, KDE 4 and Cinnamon and several others still present the more traditional interface. And that whole Android mobile environment thingie might just stand a chance, too – you never know.
(Written in Firefox under Unity, with Android running in a VM. Choice rocks!!!)
It’s not about users, it’s about attracting developers. It’s about saying to them that the day Windows 8 comes out there’s already a huge potential market out there just because it’ll be preinstalled on (well, almost) every PC sold.
I very much agree with that opinion.
I don’t think that’s a rationale, more like a really easy cop out.
There are really only a handful to choose from for the non-techie user.
Why do people think choice is a problem with computers? People make choices like this every day, without any problem, regarding a multitude of different products and brands and models.
Btw, which one of the approx. 5 bazillion different text editors on Windows should I use? All that choice is making me dizzy.
If you’re very lazy, that’s a good argument.
If you’re just not interested, that’s a good argument.
If you’re not internet-savvy enough to browse to google.com or wikipedia, that’s a good argument.
But saying that it’s too difficult to learn more about Linux because the whole Linux landscape is too confusing to begin learning about, that’s not a good argument.
Generally speaking, learning how to use an OS should never require reading a manual. If it does, the design team made some mistakes. People, rightfully so, expect computers to be simple and easy to use. What they don’t expect is to have to conduct research on how to use it.
Whether the problem with Linux is a learning curve, a cheap feel, instability, bug-ridden software/drivers, or whatever else… the point is that Linux sure as hell isn’t without its problems. Anyone who thinks otherwise has their head in the clouds.
Completely disagree.
Linux is easy. A five-year-old can do it. And I don’t mean that metaphorically: I mean I gave a five-year-old a CD and told him to install Fedora on the computer, then go to cnn.com, and he did it. Without further instruction. It’s a slight cheat, as I knew the computer’s hardware was compatible in advance, but still – an honest-to-goodness five-year-old.
What I was responding to was that someone was claiming that there are so many options that one couldn’t really learn how Linux works. Which is a different question entirely. There is such a wealth of information out there that it’s absurd. You cannot learn how Mac, Windows, or any other closed-source operating system works as well as you can with a Linux-based system.
One more?
As long as they give me the option to install the default GNOME Shell, I’m fine with that.
Cinnamon is chasing the Windows desktop experience while Microsoft is running away from it. Stop looking backward and try to engineer/design something new!
Why was the parent modded down? He’s absolutely right.
IMO, Gnome 2 is a copy (actually a really bad copy) of the Windows 95 desktop. Why bad? Well, for starters, to freaking add a menu item you have to go through about 50 dialogs to create a shortcut, choose an icon, pick what command to run – this is freaking BS, whereas with Windows 95, at least I could drag an icon to the main menu and it would show up as a menu item.
Anyway, look at Windows 8 and OS X Lion: MS is FINALLY getting away from the Windows 95 desktop, and Apple is going, well, not sure where Apple is going, but they’re going away from the traditional OS X interface.
What’s really, really nice about Gnome 3 is that IT DOES NOT COPY WINDOWS OR OSX – they came up with their own design! Now, it’s not perfect; I think clicking on Activities and then the dock is stupid, but this is really easy to change with just a bit of Javascript.
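(For anyone wondering what “a bit of Javascript” means here: Shell tweaks ship as extensions, which are just a directory holding a metadata.json and an extension.js. A bare-bones sketch of the latter – the three function names are the real hooks the Shell calls, the bodies are illustration only:

    // extension.js – lives in ~/.local/share/gnome-shell/extensions/<uuid>/
    // init() runs once when the extension is loaded;
    // enable()/disable() are called as the user toggles it on and off.
    function init() {
        // one-time setup; no UI changes should happen here
    }

    function enable() {
        // rearrange shell behaviour here, typically by reaching into
        // the live shell through imports.ui.main and friends
        log('my-tweak enabled');
    }

    function disable() {
        // undo everything enable() did, so the Shell returns to stock
        log('my-tweak disabled');
    }

Everything in the running Shell is reachable from enable(), which is why a tweak like “skip Activities, go straight to the dock” can be a few lines rather than a fork.)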
Very strange comment. All you do is right-click and add from the menu. Whenever you install an app, it shows up in the menu. Don’t get it. This is so simple even real technophobes can do it. Now, how exactly do you do that on KDE4? There you really do have to jump through hoops. Gnome3? Life is too short. But Gnome2? It’s a snap.
Oh really???
Might be true for an app that is in the package manager, but these are typically relatively old versions.
What if I install an app NOT in the package manager? Say I install a new version of Eclipse: first I have to copy Eclipse somewhere, then, to get it in the Gnome menu, I have to open the menu editor, choose the path of the app, fill out what command line to use, then browse around to find an icon. Ridiculous!!!
On Windows 95, 15 freaking years ago, I could just take an app, drag it to the start menu, drop it in the place I wanted and it worked.
It’s absolutely ludicrous that I need a “menu editor” to change the Gnome menu, and that it does not support drag-and-drop editing.
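(For what it’s worth, what that menu editor produces behind all those dialogs is one small text file. If you know that, you can skip the editor and drop a file like this into ~/.local/share/applications/ yourself – the paths below are just an example for a hand-unpacked Eclipse:

    [Desktop Entry]
    Type=Application
    Name=Eclipse
    Exec=/opt/eclipse/eclipse
    Icon=/opt/eclipse/icon.xpm
    Categories=Development;IDE;

Still not drag-and-drop, granted, but it’s one file rather than fifty dialogs.)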
MacMan,
“What if I install an app NOT in the package manager.”
“On Windows 95, 15 freaking years ago, I could just take an app, drag it to the start menu, drop it in the place I wanted and it worked.”
Ah, the good old days when all apps were self-contained directories – even Office would run just by trivially copying its directory. I really miss that in all operating systems. What was once simple has evolved into complex system-wide dependency trees.
Sure, it’s still technically possible to distribute self-contained application archives, but for the most part computing has “evolved” away from that – both for Linux and for Windows.
How do typical MacOS applications handle installs? If they’ve resisted the urge to scatter files throughout the file system, such that “Delete directory == Uninstall”, then I’d see that as a huge plus for the platform.
OS X has mostly resisted the trend, although being a UNIX it still stores user settings in home folders. Which is nice for backups, but not so nice for clean uninstalls. Compromises, compromises…
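(Concretely, a Mac application is one folder with a fixed internal layout, roughly:

    Foo.app/
        Contents/
            Info.plist      <- bundle metadata: identifier, version, icon
            MacOS/Foo       <- the actual executable
            Resources/      <- icons, interface files, localizations

Drag Foo.app into /Applications to install it, drag it to the Trash to uninstall – minus those settings left behind under ~/Library, as noted above.)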
You install an app and then it shows up in the menu. Very simple.
“What’s really, really nice about Gnome 3 is that IT DOES NOT COPY WINDOWS OR OSX – they came up with their own design!”
Granted. The design is original. It’s just too bad that it doesn’t fit in with any efficient use of a desktop machine (desktop, laptop, netbook), and as a mobile interface it is dead on arrival. I just don’t see it dethroning iOS or Android, or even reaching a 0.001% installed base on tablets.
The real tragedy in this is that the Gnome dev team has created a lot of worthwhile desktop components with Gnome 3, with the potential to create a really exhilarating desktop, but they chose to combine them in the least attractive way.
Wow, that’s a pretty lame trolling attempt.
The new stuff is Gnome Shell, Unity, KDE4, etc. Cinnamon exists simply because some people like things the way they were.
If the core code base deviates from the original, the extensions from extensions.gnome.org will become incompatible.
“If the core code base deviates from the original, the extensions from extensions.gnome.org will become incompatible.”
If Cinnamon eventually gets to contain all the goodies one wants from a desktop, I don’t see the point in the extensions from gnome.org. While these extensions make Shell somewhat palatable, they aren’t a robust solution. It’s more of a band-aid.
If Cinnamon can provide the polished and integrated experience we had with Gnome 2 and on top still be able to provide the best components of Shell in an attractive package, I consider that preferable to a Franken-desktop riddled with extensions.
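(For anyone wondering about the mechanics of that incompatibility: every extension declares which Shell versions it supports in its metadata.json, and the Shell refuses to load it against anything else. An illustrative file – the uuid and name are made up:

    {
        "uuid": "example-tweak@example.org",
        "name": "Example Tweak",
        "description": "Illustration only",
        "shell-version": ["3.2"]
    }

A fork that changes the version string, or the internal APIs extensions poke at, breaks that matching immediately.)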
Judging from the screenshot alone, I can already see this one is *probably* much better than the other two. I can’t wait to try it.
All those forks are proof that Gnome 3 sucks, and Gnome should revert to 2.x and stick to what works…
I am staying on Gnome 2 with RHEL clones, Ubuntu 10.04 and Mint 9… So much better than this nonsense…
No. All the forks of the GNOME 3 desktop shell show that GNOME Shell sucks.
The forks are all running on top of GNOME 3 itself.
No, all the forks of the GNOME 3 desktop shell show that many users can’t let the taskbar go. I don’t know the numbers, because there is no way to know, but they are pretty vocal.
If something BETTER than the taskbar comes along, we’ll let go. So far, the grumbling group hasn’t seen the better thing.
No, what shows is that UI designers (and their supporters) should be rounded up and shot dead.
So by that same logic Linux (the OS) sucks since there are so many distros.
Why don’t we settle this democratically?
http://foundation.gnome.org/elections/2010/
Let’s just kick out whoever is in charge and then have a community vote?
Sorry but the GNOME Foundation Board does not dictate technical direction for the project, the Release Team does. The Foundation Board also doesn’t dictate design, the design team does.
The Foundation Board is for other things and project direction (weirdly enough) is not one of them.
Good on Mint for doing this. It shows that at least *they* are responsive to their users (unlike Ubuntu who just shove Unity into the faces of their users and say “take it or leave it”).
I think Ubuntu will be learning a hard lesson now.
“The users come first.”
Not the devs. Not Shuttleworth. The users.
The funny thing about the hate for Gnome Shell is that it really stems from two things: 1) the difficulty of discovering how it works (which I agree is annoying), and 2) people who on the one hand want new technologies, but on the other hand won’t accept new paradigms of using technology.
The first issue should be resolved by a simple “Welcome to Gnome Shell, here are some tips.” KDE has had a tips screen for a long time. A friend of mine kept rambling on about the difference between Gnome being ‘task’ oriented as opposed to ‘application’ oriented. I was confused, because my applications aren’t full screen, ever, and to switch between running applications I just hit the Meta (windows) key.
He was using MGSE to add the task bar, and I told him about that key and he said, “Well, I suddenly don’t need that.” Gnome Shell is extremely usable. It’s just currently missing a few features, like being able to change themes within System Settings (instead of needing the gnome-tweak-tool), which I’m sure will eventually be put back into it.
The second type of issue can only be replied to with “get over it.” I hate to be that direct/rude, but things move on. I bet when Red Hat 7 is released, it’ll be using Gnome Shell. I’m guessing by then it’ll probably be 3.8 or 3.10. Debian Wheezy is already working on getting Gnome 3.2 from Sid.
Really, after I read the tips and tricks page for Gnome Shell, I really love it. I have a hard time going back to anything else, because now I just want to flip my mouse to the upper left to switch between tasks. Much easier than using Alt+Tab if you’re just lying back and doing some browsing. Or hitting the Meta key to bring up the tasks from the keyboard, then typing in what you want to load, is extremely fast and convenient. Gnome Shell mostly just gets out of your way if you do it that way.
I very rarely even use the ‘dock’, which really should be used for ‘favorites’ rather than for minimizing applications. The only other tweak I do that I like is changing the middle mouse click to minimize the window when you click on the title bar. Love that.
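(If anyone wants that middle-click tweak, it appears to be a single settings key rather than an extension – assuming the key hasn’t moved in your Shell version, something like:

    gsettings set org.gnome.desktop.wm.preferences \
        action-middle-click-titlebar 'minimize'

Check with “gsettings list-keys org.gnome.desktop.wm.preferences” if unsure.)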
I liked the name.
I like that these things keep coming.
There is even a discussion in the Ubuntu community about putting out an official GNOME Shell spin.
I am using the Shell, but… I’d prefer the GNOME 2 experience if I were able to use it and find it being actively developed.
Looks like this one also doesn’t work without 3D driver support. I’d like to have it as a desktop option, but I won’t use something that requires 3D just to render windows.
Not 3D, but a GPU. And I don’t even want to start thinking about why it would be a bad idea not to let the GPU handle the graphics stack, emulating it in software instead (except for the poor state of the Linux drivers).
Yep, it’s the poor state of graphics drivers on Linux. GPU-rendered desktops just add an additional layer of bugs and complexity on top of the bug-ridden, complex mess that is GNU/Linux with Xorg. And additionally with most cheap 3D cards like Intel integrated chips, even minor 3D effects drastically increase the shared memory consumption.
Well, technically a GPU does run software. Just a massively parallel kind that is good at dealing with large bitmaps and vector spaces, but not so good at anything else…
Anyway, I happen to be one of those who think that GPUs have no business rendering UIs, so here is why:
First, GPUs consume lots of power, even when idle. This is why NVidia’s Optimus software, which turns off NVidia laptop GPUs when no GPU-intensive software is running, causes a huge (>2x) increase in battery life on Windows. Though NVidia is one of the worst offenders, most desktop and laptop GPUs tend to be very bad at power management.
CPUs, on the other hand, must always be ready to process incoming hardware events, so they exhibit very efficient power-saving functionality which allows them to stay in extremely low-power states while nothing is happening, yet quickly jump back to high-performance states as needed.
It is dubious whether the high “static” power cost of GPUs is worth their lower “dynamic” power cost for rendering UIs compared to CPUs. The battery life of modern cellphones tends to show that the power consumption benefits of GPUs range from highly overrated to negative.
Then, as you mention, there are drivers. Even on Windows, which benefits from Microsoft’s close relationships with hardware manufacturers, GPU drivers are bloated and unstable. On less common OSs like Linux, things are much worse.
That is mostly because GPU manufacturers can’t be bothered anymore to interface with the OS in a standard way like they did back in the VESA days, feel a need to reinvent their nonstandard hardware-driver interfaces at every chipset generation, and keep them hidden behind closed doors.
Whatever the reason for this madness, the result is that on Linux, GPU drivers are not bundled with the OS. Those provided by hardware manufacturers are outdated with respect to modern Linux standards such as KMS, and worse, they don’t work well at all. I can’t count how many times I have installed an NVidia or AMD driver on a Linux box, only to discover that at the next boot, my OS of choice falls back to console mode or freezes without warning. And even when the driver does get installed properly through some black-magic hacking on my side, it has a fair chance of being broken by mundane OS updates.
And should you use any OS that is less common than Linux, GPU drivers are simply out of your reach, period.
My last issue with GPU-accelerated UIs is one of software development ethics. It seems to me, though I may be wrong, that accelerated graphics are more often used as an excuse by developers to justify the existence of horribly performing code in critical UI areas than to implement really useful features.
While widget toolkits like E17 are here to show that there’s easily enough power in modern CPUs to render complex UIs *and* run software smoothly in the meantime, desktop environments which rely on the presence of GPU-accelerated graphics seemingly tend to embrace very inefficient code, because there’s plenty of power in modern GPUs. As an example, the very slow performance of KDE 4’s and Windows Vista’s GUIs on slow hardware is worrying.
It seems to me that this is a very nefarious way of thinking, and one that is not future-proof either: the sluggishness of most smartphones is just one of the first consequences, and when the next generation of low-power personal computers comes out, it is to be expected that the inefficient coding practices of our day will turn out to be a huge problem.
It’s a bad idea to require it for any kind of smooth experience (especially if, in exchange, you don’t really get anything compelling enough to justify the hardware requirements).
Especially when we’re talking about a part of the whole software stack which is quite small but very important – a “core” part, which can and should be under the care of the most talented devs.
As an Ubuntu user I quite like Unity – in Gnome Shell, there are things I like, such as the way it adds virtual desktops. I’ve also tried Mint with MGSE and I can see why users might like it – it has a menu.
Sorry, but I just don’t get what is wrong with the old branching tree menu and why it is being culled from user interfaces.
Closed source, open source: neither has anything to do with how easy it is for someone to learn. The user interface is everything. Linux isn’t for everybody, and it’s certainly not a replacement for Windows by default or just because.
Linux is absolutely not a masterpiece. At best it’s a work in progress that’s in a constant state of repair. That’s not to say it’s trash – at some tasks, Linux is great. But as a solid & stable desktop? Only on its good days.