“The year is winding down and while we have a lot to look forward to next year, what were the greatest Linux innovations of this year? This year at Phoronix, we have published over 325 articles, with most of them being Linux hardware and graphics reviews, and that is in addition to over 700 original news entries. After spending much time in considering what the “best” and most substantial Linux gains over the year have been, we have comprised a list of what we believe are the greatest Linux innovations of 2007 along with our reasoning behind these decisions.”
I don’t suppose it counts for “innovation”, but considering some of the other things that were mentioned I thought that Likewise (Active Directory interoperability for Linux, Macintosh and UNIX) would have at least rated a mention in this article.
http://www.centeris.com/
http://www.linux-watch.com/news/NS2350659361.html
http://www.likewisesoftware.com/community/index.php/download
Wow, I didn’t even know about them. I thought the first really solid AD integration would have come from Novell.
Good job, Centeris. AD support is a prerequisite for Linux being viable in many office environments.
I wasn’t aware of that, although it’s worth pointing out that winbind could provide basic AD authentication on *nix, and you can also use Kerberos in a pinch.
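For anyone curious what the winbind route looks like, here is a minimal smb.conf sketch for AD membership (Samba 3.x era; the realm, workgroup and idmap ranges below are placeholders, not taken from any of the linked articles):

```ini
# /etc/samba/smb.conf -- minimal winbind/AD membership sketch
# (EXAMPLE.COM / EXAMPLE and the idmap ranges are placeholders)
[global]
    security = ads
    realm = EXAMPLE.COM
    workgroup = EXAMPLE
    idmap uid = 10000-20000
    idmap gid = 10000-20000
    winbind use default domain = yes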
IMHO it is one really good thing that happened in the Linux world this year, speaking of course from an Ubuntu perspective.
I remember having a choice between 3D Nvidia drivers and restricted modules in Ubuntu prior to 7.04
But still this is too little and too late and we need a REAL WORKING device manager in Linux, similar to its Windows equivalent.
I don’t need any crappy device manager, the linux kernel already provides much more detailed information about devices than Windows’ device manager.
With Linux's udev (that was a _really_ big and damn necessary innovation) a device manager is more obsolete than ever, with 95% of today's hardware being auto-configured by the kernel and working out of the box.
For everything else, you have various tools at hand to find out what's inside and get the right drivers, like lspci, lspcmcia, lsusb. Furthermore, if you _really_ want to know about resources, have a look at /proc/interrupts, /proc/ioports, etc.
Finally, you can use /sys/ to find out practically everything about your system. I can even read the temperature sensors on my board inside /sys/; no crappy special tool is needed just to get at this plain data.
There are GUI tools which expose this data in a convenient way, but I never really tried them due to lack of need.
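As a small illustration of how exposed this data is, the vendor and device IDs that lspci prints can be pulled straight out of sysfs with a few lines of shell. This is only a sketch: it assumes a Linux system with sysfs mounted at /sys, and simply prints nothing if no PCI devices are visible there.

```shell
# List PCI devices with their vendor/device IDs straight from sysfs,
# no lspci required. IDs are printed in hex, as the kernel exposes them.
pci_count=0
for dev in /sys/bus/pci/devices/*; do
    [ -e "$dev" ] || continue          # glob matched nothing: no PCI bus visible
    printf '%s vendor=%s device=%s\n' \
        "${dev##*/}" "$(cat "$dev/vendor")" "$(cat "$dev/device")"
    pci_count=$((pci_count + 1))
done
echo "found $pci_count PCI devices"
```

Matching a vendor/device ID pair against the pci.ids database is then exactly what lspci does for you.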
Edited 2007-12-13 08:05
I don’t need any crappy device manager, the linux kernel already provides much more detailed information about devices than Windows’ device manager.
With Linux's udev (that was a _really_ big and damn necessary innovation) a device manager is more obsolete than ever, with 95% of today's hardware being auto-configured by the kernel and working out of the box.
FYI the “restricted device manager” is a tool that can install (proprietary) drivers for hardware for which no support is available in the kernel or a normal (open source) module package. For instance, nVidia and ATI drivers are a common use case.
While I heavily dislike closed-source drivers, I know some newcomers who are very happy with this. First impressions last, and the comfortable way in which the restricted device manager installed the drivers they needed made their lives easier.
(Sure, I educated them about open vs. closed drivers afterwards.)
So you think that only newbies want to watch DVDs on a TV screen? It is legal in Europe!
How can I do such things without the proprietary NVIDIA drivers? And how can I adjust the TV output without nvidia-settings?
I DO NOT CARE whether those tools and drivers are open- or closed-source when they work fine and allow me to replace Windows as my OS.
Edited 2007-12-13 11:02
Yes, the proprietary Nvidia drivers will let you do that, but how many other things will they not allow you to do?
Intel made their graphics drivers open, and the open source community made them far better; AMD is heading the same way with the ATI drivers.
If Nvidia also makes the specifications available, think how much better your experience will be.
//So you think that only newbies want to watch DVDs on TV screen ? It is legal in Europe! ;-)//
Legal in USA, too … I’m pretty sure that was the Spiderman 3 DVD I watched on my 42″ plasma TV last night …
“For everything else, you have various tools at hand to find out what's inside and get the right drivers, like lspci, lspcmcia, lsusb. Furthermore, if you _really_ want to know about resources, have a look at /proc/interrupts, /proc/ioports, etc.”
Most normal users don’t want
a) this level of detail
b) to use the command line
So just because YOU don’t need a device manager, doesn’t mean LINUX doesn’t need one.
Why would a normal user need any level of detail if every supported piece of hardware just works?
Not every supported piece of hardware just works though, so it may be needed by users. No OS works perfectly, and neither do all drivers or hardware. To think otherwise is foolish.
Not every supported piece of hardware just works though, so a certain level of detail may be needed by users. No OS works perfectly, and neither do all drivers or hardware. To think otherwise is foolish.
Well, I need one. The RT2500-based Wi-Fi cards in both of my PCs do not work at all in Ubuntu. Shouldn't they "just work" as you suggest?
Linux is a kernel and yes a kernel doesn’t need GUI tools.
But anyway, there are already tools that build a GUI on top of the interfaces I mentioned; I wrote that already.
What I wanted to say is that the "device manager" of Windows is ridiculous, at least for the end user (read: for an end-user OS). With Linux, you don't need a device manager because you don't fscking have to "manage" your devices anymore. That's the deal.
“With Linux, you don’t need a device manager because you don’t fscking have to “manage” your devices anymore. That’s the deal.”
Uh, wireless adapters, video cards and webcams are all devices that users have trouble with in Linux. The driver situation is not perfect, and until it is, users need some way of dealing with problems. It's a pretty piss-poor attitude to not recognize that the situation is not perfect, and not give users the tools they need to diagnose and fix problems.
You still need to manage devices from time to time. Bugs happen, mistakes happen, and people need to be able to recover from them, and non-technical users may need a graphical tool to do it.
There are even less tools available for running OSX or Vista on arbitrary hardware.
Linux works without intervention or need to diagnose & fix on more existing hardware than either Mac OSX or Vista.
Looked at another way: if you choose a PC that Mac OSX or Vista actually works on, and fully supports (in each case the minority of PC hardware out there), then you won’t have trouble and you won’t need diagnostic tools. OTOH, if you choose a PC that Linux actually works on, and fully supports (which is more machines than either Vista or OSX) then once again you won’t have trouble and you won’t need tools.
That is true, but it is a comment that applies equally to all three: Mac OSX, Vista or Linux.
If you are going to compare, then compare each one from the same ground rules. Don’t complain about one being “not perfect for all hardware” when the other two run on even less hardware.
Edited 2007-12-14 03:07
I actually wouldn't have brought up OSX; since Apple controls the hardware, you are pretty much guaranteed full compatibility.
Other than that, you have a very good point. I would also like to add that Microsoft and Apple need to do very little: MS has 80%+ marketshare, and OSX has 15%+. Every hardware manufacturer is going to do everything they can to make their hardware work on Windows. As for OSX, it is big enough to be worth the effort. At ~1% of the home PC market, most manufacturers just ignore Linux. When you take that into consideration, it is downright stunning how well Linux works with notoriously badly specced hardware types, like WiFi or ACPI. The fact that it stands up well against its more widely used counterparts is really a testament to the developers. Currently, there are relatively minor ACPI issues still to be worked out; it works well with the vast majority of hardware out there. And wireless seems to get better every time I turn around. I have an ipw3945 integrated card which is rather crappy, and it works pretty much flawlessly nowadays.
I can’t see why we can’t look at OSX from this same standpoint.
Doesn’t Linux work pretty much flawlessly on any hardware that OSX will work on? If you have Macintosh hardware, aren’t you more-or-less guaranteed full compatibility with either OSX or Linux?
OSX has a BSD core and all that. Isn’t the source code for Darwin released now?
OSX has 100% compatibility with the platform it runs on, which blows both Linux and Windows out of the water. The reason for that is that it will not run on any hardware not controlled by Apple (AFAIK it checks for a dongle on the mobo). Sure, Linux and Windows will run flawlessly on Apple hardware, but since Apple hardware is such a small subset of what's out there, it isn't saying much.
That is what I meant: OSX is sort of a special situation.
Windows won't run on all Apple hardware … such as anything with a PPC.
http://www.terrasoftsolutions.com/products/ydl/
Linux will.
But then again, Mac OSX or Windows won’t run on a PS3 either.
http://www.terrasoftsolutions.com/products/sony/
Vista hardware is still only a smallish subset of what's out there too … compared to what Linux will run on. I'm thinking way beyond just x86 here.
http://www.internetnews.com/dev-news/article.php/3715726
Looks interesting. Might be a couple of weeks yet before someone ports Linux to it, though.
http://blogs.sun.com/achartre/entry/linux_with_ldoms
http://www.ryandelaplante.com/rdelaplante/entry/sun_launches_89_4_g…
Nope. Don’t even have to wait that long.
http://www.terrasoftsolutions.com/products/mercury/
Cell too. Talk about “cross-platform interoperability”, hey! You can do that if you have the source code.
http://en.wikipedia.org/wiki/Linux_kernel_portability_and_supported…
Edited 2007-12-14 06:25
“If you are going to compare, then compare each one from the same ground rules. Don’t complain about one being “not perfect for all hardware” when the other two run on even less hardware.”
I'm not comparing at all. I agree with you, all 3 need these tools. I was talking about its need in Linux, and if it had it, it would be another plus in its favour.
The question is, how does a "device manager" help you with hardware for which no driver is available?
This is a problem the Windows Device Manager wouldn't be able to solve either. A few days ago I was on a friend's Windows machine and wanted to install a PCMCIA Ethernet adapter. The Device Manager only showed me "Ethernet Adapter". WTF? How do you find the driver for "Ethernet Adapter"?
On the contrary, "lspcmcia" told me "(Card0: see lspci)" and "lspci" told me the brand and chipset name. Very helpful for finding the Windows driver; under Linux it wasn't even needed, because the kernel automatically loaded the module anyway.
Let me get this straight: I'm totally positive about some decent GUI tool showing you information about your hardware, and I can't believe there are none (it's so damn easy to read the data with today's kernels and set up a GUI with pygtk). But, god help me, not something like the Windows device manager.
If Windows’ Device Manager at least did the same as the likes of Huntersoft’s Unknown Device Identifier and showed you more detailed information, it might be useful.
But when you’ve got a third-party tool which is only a download away, what incentive have they got to improve things?
Ford Prefect:
BluenoseJake:
How about this?
http://en.wikipedia.org/wiki/Kinfocenter
http://www.howtoforge.org/node/1781
Is that what you were after?
Edited 2007-12-15 00:09
Why? Seriously, what are you going to manage? It's 2007, consumer devices are auto-configured. I can't even remember the last time I used the Windows one to actually change some setting of a device. Probably back in 1998 or something.
Not that the HAL "Device Manager" isn't a joke (it is), but it's not a big deal. It does a reasonably good job of showing you an overview of the devices, just like the Windows counterpart.
Not that the HAL "Device Manager" isn't a joke (it is), but it's not a big deal. It does a reasonably good job of showing you an overview of the devices, just like the Windows counterpart.
HAL is a big deal: it can notify interested applications that a hardware event occurred. E.g. it's the component that lets GNOME applications like Rhythmbox and Sound Juicer know that a new CD has been inserted.
Rather than having a bunch of applications poll for devices and device events in their own ways, they can communicate with HAL over d-bus, and retrieve such events. It has made device support far more uniform and straight-forward.
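For the curious, those HAL signals are visible with stock D-Bus tooling. The sketch below only assembles and prints the dbus-monitor invocation rather than executing it, since actually seeing events needs a running hald and system bus; the interface name is HAL's real one from this era.

```shell
# Build the dbus-monitor command that would watch HAL's DeviceAdded /
# DeviceRemoved signals on the system bus. Printed rather than executed,
# because a live hald is needed before anything shows up.
match="type='signal',interface='org.freedesktop.Hal.Manager'"
cmd="dbus-monitor --system \"$match\""
echo "$cmd"
```

Plug in a CD or USB stick while the printed command runs and the DeviceAdded signal scrolls past, which is exactly what Rhythmbox and friends subscribe to.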
Thanks Prof Obvious, I know what HAL is, what it does and what it's good for. I wasn't talking about hald, I was talking about the "Device Manager" application, and that is still pretty much crap.
But still this is too little and too late and we need a REAL WORKING device manager in Linux, similar to its Windows equivalent.
The device manager sucks. Equivalent tools in Linux are much better. The Windows device manager doesn’t even tell you any specific information about missing hardware. There are device managers available for Linux. I have used a couple but they are mostly useless to me because drivers come with the kernel and most of the time if you are missing drivers it is because they don’t exist. Linux should load all your drivers automatically. No need to search the net for obscure drivers like you have to do with Windows.
Great. But graphical tools are always easier. And most people (myself included) prefer to do the same thing with a few mouse clicks instead of googling to find that a "cat /proc/net/irda/discovery" command is necessary to check whether the infra-red connection with my GSM phone is working properly.
Great. But graphical tools are always easier. And most people (myself included) prefer to do the same thing with a few mouse clicks instead of googling to find that a "cat /proc/net/irda/discovery" command is necessary to check whether the infra-red connection with my GSM phone is working properly.
There are graphical tools in Linux for device management. I think you missed that in my post. I stand by my statement though, the Windows device manager is severely lacking and I’m surprised that they didn’t change it much for Vista.
“But graphical tools are always easier.”
Easier for whom? I find CLI tools easier and more productive than graphical ones.
You don’t count. 99.995% of the population however does.
For ordinary people, who use the computer as a tool. I do not know about you, but the rest of us are not "Red Hat Enterprise Certificate Management" specialists.
Please explain why searching the whole Internet to find one CLI command is easier and more productive than just clicking Control Panel-System-Device Manager-System Devices?
And what do you do when the Internet is not available at the moment?
Sometimes it is a lot easier to explain how to do something with a command line. Example: tell someone how to install inkscape.
Command line:
Start the console program: Menu -> Utilities -> Console
Copy the text from the following line, and paste it into the console screen:
sudo apt-get update && sudo apt-get install inkscape
GUI:
Start Synaptic: Menu -> System -> Configuration -> Synaptic Package Manager
Click the “Reload” icon, and wait for the update repository listings to download
Click the “Search” icon, and type “inkscape” then press enter
Select the listed entry for inkscape, and then right-click.
From the right-click menu, select “install”
Click the “Apply” icon.
Edited 2007-12-14 02:15
But we are not talking about installing programs now. Why didn't you compare setting up an nvidia card with nvidia-settings against … whatever you are using for that "easier" way of yours? "tvout"?
I did say “sometimes”.
Quote:
It took me quite a while to find out how to configure a GSM phone as a GPRS modem. All the CLI-based solutions failed (they involved manual installation of scripts for PPP). Maybe they were too narrow; maybe they worked for many configurations, but not for mine. Recently I discovered "GPRS Easy Connect" and it finally worked. Now I can connect to the Internet using GPRS and Ubuntu, thanks to a GUI solution, not the CLI.
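For reference, the CLI route those scripts were attempting boils down to a pppd peers file plus a chat script. This is only a sketch; the device node, APN and file names are placeholders and will differ per phone and provider:

```ini
# /etc/ppp/peers/gprs -- hedged sketch; device node and APN are placeholders
/dev/ttyACM0 115200
connect '/usr/sbin/chat -v -f /etc/ppp/chat-gprs'
noauth
defaultroute
usepeerdns

# /etc/ppp/chat-gprs -- set the APN, then dial the standard GPRS number
'' 'AT+CGDCONT=1,"IP","internet.example"'
OK 'ATD*99#'
CONNECT ''
```

With those two files in place, `pppd call gprs` brings the link up; a GUI like GPRS Easy Connect is essentially generating this kind of configuration for you.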
Those were all interesting things, and one which has meant a lot to me has been the introduction of 3D eye candy on the desktop (yes, I admit it, girls very often are suckers for such things). But the one thing that really caught my eye was Splashtop :O I hadn't heard of it before, but it sure looks like a killer feature! A fail-safe OS to resort to when your own OS has gone bye-bye, an OS that can be used to check whether your new hardware works properly before you even attach any hard disks or optical media. Darn, I hope that becomes standard in new motherboards!
I’m not a girl (last time I checked, in any case), but I still kind of agree with you.
Which is a tad sad: I can't think of anything but flashy effects as the best Linux innovation of 2007 :/
I’m not a girl (last time I checked, in any case), but I still kind of agree with you.
Check once more, just to make sure xD
Which is a tad sad: I can't think of anything but flashy effects as the best Linux innovation of 2007 :/
But well, it depends on your viewpoint. For the average desktop, the flashy effects are probably the most important innovation, and at least they are the most visible one. But on the server front, KVM for example is a really good addition and is likely seen as a gift from the gods by many admins. As for 2008… I have proposed one project many times over and I will propose it again: someone implement a framework that, on boot, automatically checks the Internet for drivers for not-yet-working devices and downloads, installs and enables those drivers _without_ needing user intervention. Oh, and just to make it clear: include any proprietary drivers there too.
It would explain a few things.
No user intervention? That kind of scares me. Nothing should be installed without the user’s consent. I assume you mean that the process itself doesn’t require user interaction – but starting the process does?
Is user intervention really needed? I mean, a lot of people have no problem whatsoever with *insert biggest, baddest, ugliest spyware/virus/worm* being installed without their knowledge through their web browser or MSN. This is just about harmless hardware drivers.
Yeah, I'm kind of kidding. As a geek, and somewhat of a control freak, I agree with you, but a lot of people really don't care what gets installed, whether they know it or not. They just want the PC working, and I guess it is this kind of people that solutions like this are geared towards?
I’m not sure there were any innovations in Linux in 2007, something completely new as distinct from more of the same – refinements of what existed already. In fact the lack of true innovation on Linux is becoming a worry, at least on desktop Linux where “innovations” seem limited to copycatting what’s happened elsewhere.
Too bad in this sense that KDE 4 has slipped into 2008; otherwise that would definitely have been the desktop innovation of the year. It's a very, very important release, since if it bogs up, desktop Linux will be in real trouble, imho. Gnome has driven itself into a cul-de-sac, something the brighter bunnies on Gnome have clearly worked out. My guess is that in 2008 not a few of them will quietly slip off into other jobs.
I'd suggest two innovations for 2007. First, the Asus Eee PC subnotebook. If this form factor is successful, it may do more to put desktop Linux into new hands than anything else recently. Relying on a Taiwanese hardware maker to grow the market by giving users what they want may seem a startling innovation in Linux, but without that growth and customer focus, desktop Linux is doomed in the long term. Asus have rather shown up the well-known MIT-OSS evangelist types. They talk a lot, but this year only Asus delivered the beef.
Second, the market began to take a more realistic view of Ubuntu. So long as this distro was seen as some kind of all-curing wonder drug, attention and energy were diverted from what everyone else in every other corner of Linux could do. Now that’s no longer the case, I’d hope that 2008 will bring new and interesting ideas from many other distros.
This leaves out the GPL v3, the big daddy of them all in 2007, imho. But as it’s not specifically about Linux I’d guess it doesn’t count as strictly relevant.
Edited 2007-12-13 12:02 UTC
I’m not sure there were any innovations in Linux in 2007, something completely new as distinct from more of the same – refinements of what existed already. In fact the lack of true innovation on Linux is becoming a worry, at least on desktop Linux where “innovations” seem limited to copycatting what’s happened elsewhere
To be honest, you don't see much innovation in other desktop environments either. This is a sign that the desktop metaphor has matured, and there is less and less room for big changes, in the same way as there is not much room for big changes in how we control a car.
Even so, there have been some small inventions not seen on other systems; one example is the possibility in Fedora 8 to leave notes for a user who has locked his screen. Another would be gOS, which tries to integrate Google services into the desktop. Other than that, the "innovations" could be seen as added polish to existing systems.
Even so, there have been some small inventions not seen on other systems,
To add to that: respins. What other OS will let you roll your own flavor of that OS with simple GUI tools like the respinning tools recently introduced in Fedora?
Another field with a lot of movement is virtualization. While virtualization itself is old, KVM's lightweight approach to it is refreshing. And what about the paravirtualization features that have slowly been added to the kernel?
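As a side note on KVM: it relies on the CPU's hardware virtualization extensions, and a quick look at /proc/cpuinfo tells you whether a box has them. A sketch (Linux-specific; it simply reports "no" on machines without the flags or without /proc):

```shell
# Check /proc/cpuinfo for the Intel (vmx) or AMD (svm) virtualization
# flags that KVM requires; degrade gracefully if /proc is absent.
if grep -qE 'vmx|svm' /proc/cpuinfo 2>/dev/null; then
    kvm_capable=yes
else
    kvm_capable=no
fi
echo "hardware virtualization support: $kvm_capable"
```

On a "yes" machine, loading kvm plus kvm-intel or kvm-amd is all it takes to get the lightweight /dev/kvm interface the article praises.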
And let’s not forget about recent movement in the mandatory access control framework area. SMACK is getting more and more momentum, and it seems that the merge of AppArmor may be closer.
Yes, most of these changes are evolutionary rather than revolutionary. But GNU/Linux is being used for production purposes, so an evolutionary cycle is far more appropriate than a revolutionary one. People don't want major components of their working systems replaced by something else overnight. Stepwise refinement and improvement is predictable, and not disruptive for users.
This whole discussion is revived every year or so, with varying contexts (Linux, GNOME, etc.). Some people continually claim that no real progress is happening. I urge them to try and install a Linux distribution from two or four years ago, and then the latest releases of Ubuntu or Fedora. There is a world of difference. My girlfriend still uses CentOS 4, which is almost two years old by now. It feels old and clunky compared with the newest generation of distributions.
Edited 2007-12-13 12:45
And let’s not forget about recent movement in the mandatory access control framework area. SMACK is getting more and more momentum, and it seems that the merge of AppArmor may be closer.
I’m not sure that I look forward to that. It would probably have been better to make SELinux more user friendly. Red Hat have already come a long way in that direction.
Having multiple mutually exclusive security frameworks will make it harder for developers to create apps that work in a secure way on all distros.
I'm not saying that AppArmor is a bad security framework; it's just that SELinux is already integrated in the kernel, works fine, and is constantly getting better, at least in Fedora/RH, as improved default policies are distributed and applied.
Red Hat has really shone this year. I am currently testing the new RHEL 5.1 Advanced Platform clustering at work, and I have to say they have really come a long way.
Fedora 8, and Fedora in general, has also come a long way. I know most people do not like SELinux, but the fact that it is on by default makes it most likely one of the two best tools, the other being iptables (the firewall); paired together they really secure a system.
HAL is another biggie. The open-source community is driving innovation as a whole, and that is the beauty of it: even if one party starts a concept, someone else will pick up the reins and run with it. To be honest, Linux distro (x) is really in its infancy; the best is yet to come.
Apple(Mac) got their system based on..?
MS got their system based on Apple(Mac) GUI
Linux (KDE and GNOME) got their GUIs based on Windows…
There should be a prize for a Linux program (for end users) which does not already exist on MS…
Looking for innovation… I can spare 8 hrs… My workplace network is Windows-based. I wanted to use Debian. My sysadmin allowed me to use it on one condition: authenticate my Linux desktop against the Windows server. I tried PAM and other configurations (Xandros, Linspire, etc.) and still couldn't do it after a year… gave up and returned to XP for work and Debian at home…
Edited 2007-12-13 13:35
“Apple(Mac) got their system based on..? ”
Apple based their first Mac system (or at least its GUI part) on some research that Xerox was doing.
http://en.wikipedia.org/wiki/Xerox_Alto
Edited 2007-12-13 14:38
So the EeePC has the SplashTop instant-on OS built in? Can anyone confirm this?
If so, I’ll be drooling more & more for one of these.
No, I don't think the Eee PC has Splashtop. I think there is an Asus mobo that has it, though. The Eee PC uses a reworked version of Xandros. Splashtop seems to use GTK, while the Eee PC's interface can use both but is primarily Qt. I may be wrong.
Nice article with a fairly complete list of important projects that saw new releases this year. But it’s really not just Linux innovations, but general open-source programmes that also happen to run on Linux (among other OSes).
Good and short read anyway for open-source enthusiasts. The scene is getting better and better every year. When I started to use Linux on a daily basis with Red Hat 6.3, I never thought it would ever be a mainstream OS. Well, OK, it isn't. But it's on its way.
out of sight is out of mind…
Moderators did not like my post – accepted
Mods gave me a -1 score the second I posted – accepted
Mods gave me a -2 score – accepted
OSNews has a default viewing threshold of -1 – NOT ACCEPTED
The purpose of a forum is to view different opinions for or against Linux. Even if a message gets a -5 rating, it MUST still be viewable by ALL people, including those who are not logged in, right?
If a post is offensive, personal and abusive, mods can remove it from the forum. But the default -1 viewing threshold keeps forum viewers from knowing what the messages rated -2 or -5 said and why they got those scores.
THIS IS A TRICK BY MODS TO REMOVE POSTS WHICH THEY DON'T LIKE PERSONALLY (rather than objectively bad or anti-Linux messages) BEYOND NORMAL VIEWING at a -1 score.
I hope you mods will not play these tricks and will allow posts rated as low as -5 to be viewed by ALL without logging in. That is a fair democratic process. THEN people can decide for themselves whether they dislike a post.
I am reposting my message.
—————————————
Everyone is copycat
By rakamaka on 2007-12-13 13:31:45 UTC
Apple(Mac) got their system based on..?
MS got their system based on Apple(Mac) GUI
Linux (KDE and GNOME) got their GUIs based on Windows…
There should be a prize for a Linux program (for end users) which does not already exist on MS…
Looking for innovation… I can spare 8 hrs… My workplace network is Windows-based. I wanted to use Debian. My sysadmin allowed me to use it on one condition: authenticate my Linux desktop against the Windows server. I tried PAM and other configurations (Xandros, Linspire, etc.) and still couldn't do it after a year… gave up and returned to XP for work and Debian at home…
Edited 2007-12-13 18:28
The narrow-mindedness of some people never ceases to amaze me.
The kernel is based on BSD code. The GUI is based on ideas from Xerox, and they wrote their own graphics library.
… and their kernel was roughly based on ideas from VMS. So? Windows is not VMS, and Microsoft's GDI is not the Macintosh graphics library. They are different implementations.
Not at all. The graphics elements in Linux are based on X11, which pre-dates either the Mac graphics library or the Windows GDI.
http://en.wikipedia.org/wiki/X11
The early work on what has now evolved into the Linux GUI and graphics environment began at almost exactly the same time as the Apple Lisa.
http://en.wikipedia.org/wiki/X11#Predecessors
Modern GUIs basically all come from the Xerox Alto.
http://en.wikipedia.org/wiki/Xerox_Alto
OK, so which one of these wins your prize?
http://en.wikipedia.org/wiki/GNU_Compiler_Collection
http://en.wikipedia.org/wiki/Iptables with GUI = guarddog http://www.simonzone.com/software/guarddog/#screenshots
http://en.wikipedia.org/wiki/Synaptic
http://en.wikipedia.org/wiki/Wireshark
http://en.wikipedia.org/wiki/LyX
http://en.wikipedia.org/wiki/Bash
http://en.wikipedia.org/wiki/Emacs
http://en.wikipedia.org/wiki/Php
http://en.wikipedia.org/wiki/Apache_HTTP_Server
http://en.wikipedia.org/wiki/Perl
Edited 2007-12-14 00:40