This news is already a week old, but it was only submitted to us today, and I didn’t notice it at all. As it turns out, two malicious software packages had been uploaded to GNOME-Look.org, masquerading as valid .deb packages (a GNOME screensaver and a theme, respectively).
The two cases were discovered shortly after one another. First, malware was found masquerading as a screensaver. It came as a .deb package, but instead of installing a screensaver, it executed a script that tampered with some files and downloaded a few other scripts, which made the affected machine take part in a DDoS attack while also allowing the malware to update itself.
Not long after, a similar problematic package was discovered, but I can find little information on that one, other than that it was a theme called “Ninja Black”. Since only one removal instruction has been posted, I’m assuming it was the same .deb package/script uploaded under a different name.
Speaking of removal instructions – if you’ve been hit, here’s the fix:
sudo rm -f /usr/bin/Auto.bash /usr/bin/run.bash /etc/profile.d/gnome.sh index.php run.bash
sudo dpkg -r app5552
This minor incident highlights both the inherent strength of the repository system, as well as one of its weaknesses. First, though, let’s make it very clear that this very minor incident in no way means that Linux, Debian, Ubuntu or other .deb-based distributions are insecure. This is a clear case of social engineering, and there’s no remedy for that yet. Of course, GNOME-Look is partly to blame too, but I guess it’s virtually impossible to keep such a large site clean. For what it’s worth, they removed the offending packages very quickly.
The inherent strength this little incident illustrates is that if you stick to the official repositories for Ubuntu, there’s very little to be afraid of. Those packages are well-tested and secure (i.e., they contain no malware, but could of course still contain regular security flaws), and can be installed without fear of repercussions.
The weakness this case illustrates is that quite often, the official repositories are simply not enough. A new version might not be there yet, a program you want isn’t in the repositories, or some other scenario. In those cases, installing something from outside the repositories is appealing, but it does mean opening yourself up to potential hazards.
All in all though, this is a very minor case, but noteworthy nonetheless, as I think it’s one of the first pieces of malware for Ubuntu.
This was bound to happen at some point, and I’m surprised it hasn’t happened before.
Personally, I think they should ban the upload of binary packages to such sites; they just cannot be trusted.
Maybe in theory, yes. But in real life many people would find compiling programs too difficult. On the other hand, it is true that on a theme site like Gnome-look, many themes don’t even need packaging, but could be installed as non-binaries.
Other than that, cases like this should be a good reminder for ordinary desktop Linux users not to install unknown third-party packages so easily. But probably many Linux users already knew this quite well. I just hope that MS Windows users could see the light too, because many of them seem to install relatively unknown binaries from the net all the time. A case like this is rather big news in the Linux world, but all too often Windows users don’t seem to care. They install odd and possibly trojan-infected binaries, like pirated software, from who knows where, and simply expect their antivirus and antispyware programs to protect them even when they themselves do stupid things.
Don’t necessarily disagree, but it’s worth pointing out that the malware wasn’t a binary package, it was a collection of scripts. Should have been easily vetted if someone was approving uploads, and it’s probably why it was so quickly discovered.
If someone had created a functioning screensaver with an embedded trojan in the binary, even if the source was provided, I doubt it would have been discovered as quickly. The users with the savvy to lockdown and monitor their network traffic or processes probably aren’t downloading and installing anonymous packages from public sites.
This should be a bit of a wake-up call, particularly for newer or naive users, and the community should be doing more to educate and inform less knowledgeable users on this point. Linux is no more immune to damage from user-installed packages than any other platform, yet all the cheerleading about how the Unix heritage somehow makes the platform more secure than Windows can lead to a false sense of security.
Users can become just as conditioned to clicking through a sudo authentication window as they can a UAC window.
Both platforms need better and more granular separation of privileges for applications, rather than focusing on users. If a user chooses to install a screen saver, they should be giving the application explicit permission to only access the display, and the platform should not be permitting it to touch network, file or system resources, regardless of the user permission level. AppArmor, selinux etc. are a step in the right direction, but need to be better integrated into the application installation framework, and that’s not likely to happen any time soon.
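To make that concrete, per-application confinement of the kind described above could, in principle, look something like the following AppArmor profile sketch. The binary name and paths here are invented for illustration, and a real profile would need the distribution’s stock abstractions and testing:

```
# Hypothetical AppArmor profile: confine a screensaver so it can read
# its own data and talk to the display, but is denied network access
# and any writes outside its own cache directory.
/usr/bin/example-screensaver {
  #include <abstractions/base>
  #include <abstractions/X>

  deny network,                                   # no DDoS participation
  /usr/share/example-screensaver/** r,            # its own read-only data
  owner @{HOME}/.cache/example-screensaver/** rw, # per-user cache only
}
```

Under a profile like this, even a trojaned screensaver binary could not download further scripts or join a botnet, which is exactly the damage the GNOME-Look package did.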
As it stands, this problem will never go away, and can only get worse as the platform’s popularity increases.
It’s finally the year of Linux on the desktop.
I remember someone talking about something similar ages ago on a forum (but he was talking about using desktop shortcuts, not themes); it was more a theoretical way to do malware on Linux. I wonder if it is the same guy…
On any operating system, when you install 3rd-party applications, you can be compromised. On Windows, all those helpful little utilities, games, etc. you install – any of them has the potential to hose your system. Same goes for Linux, Mac, BSD, etc. That is why I like the packaging systems in Linux and BSD. I’ve never been hosed, I have thousands of applications available, and all of my applications stay up-to-date. Very rarely will I install a 3rd-party application. I just did for Chrome Beta for Linux. I trusted Google enough to trust their package. For me, that is the only exception.
I would say this is probably the main security weakness in Windows. You have to install 3rd-party applications to get much useful done. You have to be very careful. It is not just whether or not you trust the company, but also have they been unknowingly compromised (by a virus at their company), or is there a backdoor built in for the government. It’s very hard to tell.
Precisely. Spot on.
Package managers and their associated repositories are a means of delivering applications to users’ systems that has an impeccable record. AFAIK there has never been a recorded instance of an end user’s system getting malware via the package manager/repository system.
OTOH, downloading applications and utilities from websites is one of the primary means of delivering trojans to end users’ systems, regardless of the OS.
If anything, this incident just underlines the points: that one simply cannot trust downloading from websites, no matter how seemingly reputable; and that one should always use the package manager, and ONLY the package manager, to install applications and utilities for Linux systems.
Fortunately, just about everything one would want for a Linux system is installable via its package manager and repository.
In contrast, downloading binary blobs from websites and putting them on one’s system is a way of life for Windows users. Mac users are possibly part way between these two extremes.
That’s not really true.
As soon as you execute ANY executable code, you are putting full control of your computer into the hands of anyone who had the ability to modify that code before it got to you. I’m assuming you mean Debian when you say package managers have an impeccable record, and I would totally agree with that. But that doesn’t change the fact that you are putting control of your computer into the hands of whoever has the ability to add or modify a package in a Debian repo when you run it.
It is a matter of trust, and a question of degree.
Mac users are in the same boat as Windows users.
No, I mean all distribution repositories. That is to say, those repositories of packages that are maintained by some distribution or another.
Debian has these, as does Fedora, Arch, Ubuntu, Mandriva, OpenSuse, Slackware … almost any distribution. (Some smaller distributions leach off other repositories. For example, sidux uses the Debian sid repositories).
All of these have an impeccable record.
Debian and Ubuntu repositories include about 25,000 packages. “Smaller” distributions, such as Arch, will typically have only about 5,000 packages. This is largely a matter of the manpower available to maintain the repositories in each case.
As far as trust goes … it is most decidedly in the self-interest of the distribution to maintain the highest quality in its repositories. This is what the people involved use for their own systems, and the quality of the distribution’s repositories is what the entire reputation of the distribution hangs on.
As for whether or not you can trust the system … well, having an impeccable record over many years for thousands of packages speaks a lot to that topic, wouldn’t you say?
I believe it was Red Hat’s repositories that were breached a year or two ago. The cause was a configuration error which allowed someone to push modified .rpms into some of the repository mirrors. I believe it was caught quickly, and it was due to that configuration error rather than software flaws. It also doesn’t mean all repositories are wide open. The repository should be the safest source for packages, but one should still remain aware of what they are doing.
In a general way I wasn’t really arguing with you. My problem was with “If you do this, you are safe”. It’s not that cut and dried. For example, Debian has an extensive testing, maintenance, and QA process, with checks built into the package manager to prevent tampering; Slackware is basically stuff pushed up to an FTP server and then mirrored out. I would trust Debian a heck of a lot more than Slackware (not to say I wouldn’t trust Slackware, just that Debian has more focus on this, and is more than one guy).
The same trust thing is true on Windows. If you download something anonymously off an anonymous torrent site, I would have a very low level of trust. If you download something off SourceForge, I would have a much higher level of trust, although significantly less than from Debian, and I would probably verify the signature before installing it on a server. If I downloaded something from Microsoft.com, I would actually hope to get a virus, since they would probably be willing to pay a lot of money to shut me up, given how much they have on the line.
Too many people just want magic bullet solutions, and assume they are safe. It doesn’t matter how many security products you have on windows, whether or not you use linux, or how you download your files. There is always a chance of bad things happening, it is all about doing things to lower the risk, and never just assuming you are safe.
This is all perfectly fair enough.
Here is where we diverge. It does matter, very much, how you go about getting the software on to your system.
If you stick to a system where the whole process, from go to whoa (from source code in a text editor all the way through to clicking “apply” to install the software on the end user’s system), is auditable and visible to many eyes who use, but did not write, that code, then you can be safe.
If you routinely deal with a binary-blob system where no-one but the original authors (who just may be malicious) ever has visibility into the code, then you will be quite likely to get malware.
It is a mindset thing, it is a paradigm. Windows itself is all-too-firmly in the latter camp (even if the code from Microsoft itself can be trusted). Expect to get burned.
In other words, if you had source code available for every Windows application you ran, and had eyes on that code that would package it for you, then Windows would probably be just as secure as Linux is.
Unfortunately, telling people that the only way to secure their systems is not to run any app whose source code hasn’t been reviewed by a committee is just not very practical for a lot of folks, because it severely limits the apps you would be allowed to run. Not everything that is useful to me out in the wild is open source. If that weren’t the case, then those of us who use proprietary software wouldn’t have to take the risk of downloading binaries from 3rd-party websites and running them.
Possibly. The system with Linux relies on a bit more than just eyes on the code. It relies, for example, on the fact that one set of people, with a whole raft of different responsibilities, ties, and allegiances, write the code as a collaboration, and that an entirely different set of people package it in full and plain sight of what went into it.
Duplicate that on Windows distribution channels and you may then one day approach the same level of trustworthiness.
Actually, you would be very surprised at what you can do, and what power is available to you, even if you limit yourself to run ONLY Free Software.
However, it should be admitted that there are some critical application areas that are simply not covered well enough by Free Software. OK, so here is an approach: limit yourself to just the one or two critical commercial professional applications, and do the rest with open source, on an open source OS.
For example, if you are a CAD professional:
http://www.varicad.com/en/home/
… then run it on a secure Linux system (Kubuntu and OpenSuse are recommended).
This way, you limit your exposure to getting a trojan to the installation of just that one or two critical-but-non-free commercial applications.
I agree. There are several problems with package managers today, though. There are so many packaging standards and mechanisms that one tarball has to be packaged a whole bunch of times to reach most Linux users. I realize that many distros use different versions of dynamic libraries and such, but there is the possibility of building “fat” binaries (perhaps not the correct term) that would fit the most common configurations, a “golden standard” if you will.
It seems to me none of the major distros are willing to work together to create such a standard, and a mechanism to work with it, though. It could in theory bring packages to a much wider audience, with less work being done by the maintainers, meaning more time to work on packaging the stuff that ends up at gnome-look.org etc. as it is now.
That is not the only problem, IMHO. There should be some way for users to install packages contained in their $HOME only (or a mechanism to install packages per user, or per group), without root privileges. Themes don’t have to be friggin’ installed as root, to the system root! To be honest, it would be nice if one could install regular applications this way too. In this recent case with the .deb from gnome-look, this method could have significantly limited the damage a “rogue” binary could have done to the system.
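A per-user theme install of the kind described above needs nothing more than unpacking into a dotted directory. The sketch below fakes a downloaded tarball and then “installs” it; the theme name "NinjaBlack" and all paths are made up for illustration, and DEMO_HOME stands in for the real $HOME so the demo doesn’t touch your actual home directory:

```shell
# Sketch of a per-user theme install: no root needed, so a rogue
# package could at worst touch one user's files.
DEMO_HOME=$(mktemp -d)
SRC=$(mktemp -d)

# Fake the downloaded theme tarball
mkdir -p "$SRC/NinjaBlack/gtk-2.0"
echo 'gtk-color-scheme = ""' > "$SRC/NinjaBlack/gtk-2.0/gtkrc"
tar -czf "$SRC/NinjaBlack.tar.gz" -C "$SRC" NinjaBlack

# The actual "install": unpack into ~/.themes, no sudo anywhere
mkdir -p "$DEMO_HOME/.themes"
tar -xzf "$SRC/NinjaBlack.tar.gz" -C "$DEMO_HOME/.themes"
ls "$DEMO_HOME/.themes/NinjaBlack/gtk-2.0/gtkrc"
```

GTK2 already looks in ~/.themes, so for themes at least the mechanism exists; it is the packaging convention of shipping them as root-installed .debs that creates the exposure.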
Typically, this is handled by a division of responsibility.
A “project”, such as KDE, will work on source code. They will typically use a source code management system (perhaps SVN or GIT), and they will have a community of developers, maintainers and testers etc, etc.
Once a project releases a new version, then the repositories take that source code, compile it for their given distribution with switches for their supported architecture(s) and directory structures, make sure it works against all of its dependencies at the version they are at in the distribution, and then if all is OK, package it (into a .deb or a .rpm or a .tgz or whatever that distribution uses) and include it in the repository storage area, and index the newly updated package in the repository index files.
There is one set of application developers, and one or more package maintainer at each distribution.
It isn’t too onerous. It typically works well enough, even to the extent that it is possible to have one-man distributions.
For general package installs, I shudder to consider a system that allows user accounts to toss anything they want on there. I already have that with Windows allowing things like Skype to install without admin privileges. Reducing the privileges required to install software is just not good thinking.
Now, for things like DE themes, KDE4 actually does just that. In the desktop properties one can select from the provided backgrounds, or click “get more”, resulting in a list of themes and such available for download. Select the background or theme and down it comes into the user’s ~/.kde, without admin privileges. This sort of thing is less of a concern because it’s not executable code users can easily be fooled into downloading (wow… another naked-britney.exe… I must have it). The security issue returns to the vulnerability in the chair-keyboard interface, rather than to the design flaw of promoting user-installed executables.
Such a standard would bypass the advantages of a distro software repository as outlined by Lemur. You are proposing something that would allow third parties to package something up in binary format to be run by (m)any distro without being “audited” by the distro team. What they should be doing and all they should have to worry about is providing source code and letting the distros package it.
A universal binary format is only of interest to software that someone doesn’t want distributed in source code format, which really doesn’t belong on an open system, at least according to some. Such a format is certainly not an answer to the security questions posed by the poisoned theme in the article.
I think that what Thom means by the term PPA is the distribution’s official repositories.
PPA stands for Personal Package Archive and I don’t see the reason why someone cannot make available some malicious code (probably hidden inside an otherwise useful application) through their PPA.
PPAs are not distribution repositories, they are outside of that system. Use them at your own risk, because they are not audited by anyone associated with your Linux distribution.
Being outside of the distribution means that PPAs are no more trustworthy than downloading a package using a web browser and installing it manually.
GNOME-LOOK is a third party collection of themes that you can install at your own risk. It’s nearly the same as getting a porn pop-up with a .deb file link in it.
This has nothing to do with repository systems, and 100% to do with trust.
Exactly. This trojan did not get to users systems via the repository/package manager system. It relied instead on users downloading an individual package via a web browser, and then installing it manually once it had been downloaded.
This instance actually serves to highlight the strength of the repository/package manager system.
Of course a repository system provides a degree of trust.
We actually need some kind of global “open source web of trust” system, and getting your key signed would require that:
– You are using your real name
– You have social security number and an address
– You are living in a country where police can throw you to jail if needed
Ubuntu has a tool for installing offline packages, called gDebi. gDebi has always been able to show you the names and locations of files that will be installed in the package; well the latest version actually allows you to look at the contents of the files before you install. You can even look at the Debian control scripts and the contents of gzipped files.
It would be a good idea to have a quick look at this information (the “Included Files” tab) before installing a package.
Of course, on Windows it’s nearly impossible to audit the contents of their binary installers, and it’s still not very easy to look at the contents of MSI packages on Windows. Kudos to Ubuntu and the gDebi developers for implementing this feature so conveniently, and more importantly doing it before this recent attack ever occurred.
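Even without gDebi, a .deb can be peeked into from the command line before anything is installed, because it is just an ar(1) archive holding control and data tarballs. The sketch below builds a toy package (with an invented name) purely so the inspection commands have something to work on:

```shell
# Build a toy .deb by hand, then inspect it without installing it.
work=$(mktemp -d); cd "$work"
printf '2.0\n' > debian-binary
mkdir -p control data/usr/share/themes/Example
printf 'Package: example-theme\nVersion: 1.0\n' > control/control
echo demo > data/usr/share/themes/Example/index.theme
tar -czf control.tar.gz -C control .
tar -czf data.tar.gz -C data .
ar rc example-theme.deb debian-binary control.tar.gz data.tar.gz

# Inspection: list the archive members, then every file the
# package would put on your system.
ar t example-theme.deb
tar -tzf data.tar.gz
```

On a real downloaded package, `dpkg-deb --contents pkg.deb` and `dpkg-deb --info pkg.deb` give you the same file list and control metadata more conveniently; either way, a “screensaver” that wants to drop scripts into /usr/bin and /etc/profile.d would stand out.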
I, personally, would maintain that it is better and easier (and far more thorough) to have the distribution’s maintainers worry about auditing each package.
If you stick to using the distributions repositories via the package manager, then that is what you are effectively doing.
Downloading packages (using a web browser or whatever) short-circuits the audit of the distribution’s repository maintainers. Whoever made that package could have put anything at all in it. You would probably be very lucky to spot anything untoward yourself.
I, personally, would avoid downloading packages from outside the distribution’s repository and installing them using gdebi (or dpkg, or whatever you are using). The reason is that you open yourself up to trojans if you do this (as indeed happened in the original article that this thread is about).
While I agree with you that repositories are the way to go, I don’t really believe the above is true. Package maintainers are just guys like you and me, with little time to audit packages. The constant flux of security updates is a testament to this.
Their audit needs to be that the source code being compiled into the package is the correct latest released code from the project.
Their audit needs to be that the source code is compiled correctly for their particular distribution, and that the package is set with the correct dependencies. Their audit needs to be that those dependencies are all already available in the repository.
Their audit needs to be that the binary that is present in the new package is correct against the source code (which is also correct against the project’s source code revision system, such as GIT).
Their audit needs to be that it compiled correctly, without warnings, and that it runs when test installed.
If they audit all these things (and it is their interest to maintain the distribution’s reputation), then their package in their repository will not contain malware.
It doesn’t mean going over the code with a fine tooth comb, it means only that the package is a correct representation (for that distribution) of the project’s released code. The distribution maintainers are the only people really in a position to do this audit.
End users can definitely take advantage of this, and thereby guarantee their systems will not get malware. There will be no malware if everything is open, public, and viewable by many people who did not write the code.
After all, malware can only exist in closed, secret binary blobs, whose workings are visible only to the original (malicious) author(s).
If you have any doubts about the efficacy of this system, remember: distribution repositories have an impeccable record so far, after many years of use across many distributions, for thousands of packages. “Guarantee” is not too strong a word.
Sure. No big disagreements there.
Yet, the packagers seldom audit the actual source code from which the binary is packaged.
I believe, as you do, that the “audits” you mention are generally sufficient to ensure that no malware gets through. But that is not to say that no security vulnerabilities get through.
It depends on the distribution. I think most of the security research community would be impressed if you could get a malicious package through Debian’s vetting stages and into stable back-ports or testing repositories.
Exactly so.
Debian’s use of package management goes back to the 1999-2004 timeframe.
http://en.wikipedia.org/wiki/Debian#History
No instance has ever been recorded of a malicious package getting through the system, for many thousands of packages, over a timespan of more than a decade.
A few times in that period some Debian servers have been hacked. Some intruders even got root access, I believe. Even so, still no way was found to inject any hidden malware into the system.
True.
That part is up to the original project itself.
By “project”, I mean an open source collaborative development project, such as KDE, or GNOME, or Apache, or Mozilla, or whatever.
The projects audit their source code and the submissions made to it.
The distributions audit that that source code faithfully gets on to end users systems.
Neither party does the work of the other. It is a collaboration involving multiple, independent individuals, all of whom have an interest in ensuring the purity of the code.
It is also like a double-blind. No one malicious person (who might have an aim to infect end users systems with malware) gets to push the code the whole way through to end users systems.
Finally … don’t forget about the perfect record of this system. The proof is in the pudding, as they say.
I think it’s time to think about security concepts in general.
For example, all this virtualisation stuff going on could be used to make your computer more secure. What about disconnecting everything that isn’t needed for interaction? Create containers, and containers within containers, to allow only the interaction that is needed. Make sure that only a very small part of your system can be compromised.
Don’t allow every application to access your personal data or to send mails/spam.
This would also make debugging much easier. Besides this, there could be a log about the containers: what they are doing, and when interaction with other containers is really required. This would allow the detection of malware by analyzing its behavior and not its code.
Yeah, I know there are jails and all this stuff. The problem IMO is that there is no system to actually make it usable for everything.
There are a lot of small parts which together could create a system which would be way more secure.
IMO there needs to be a real successor to the current way of building operating systems. Everything has to be more high-level. Everyone is talking about how the browser is the OS, and one reason it is hyped is that you can use a very high and abstract layer to design applications. Wouldn’t it be better to create a system optimized for high-level stuff, instead of one providing only a low-level way by itself?
I’m really not a fan of this WWW-hype and in general I think Unix does it the right way, but things change and it would be time for a newer version of doing things the right way.
The current way of thinking is to add as many security layers as possible and of course it’s right to do so, but I think it has to end somewhere or you aren’t able to connect security with usability (be it KISS or the “bubble-gum-way”) anymore.
There are people who say all this virtualisation/high level stuff is a waste of resources, but this has also been said when the first operating system was built and when assembly was replaced by languages we now call system programming languages.
Is it really necessary for themes and screensavers to be “installed”? If there was an easily accessible place in the user’s home directory, like “~/Library/Screen Savers” on OS X, and another directory for themes, then packages with installers that must be run as root would be completely unnecessary, and the possible damage would be limited to the user’s privileges.
This isn’t really about Gnome-look’s lack of moderation, this is Gnome’s fault for making something as easy as storing themes and screensavers difficult enough to warrant the use of a package management system that can only be fiddled with if you have root privileges.
Which for users interested in installing screen savers… would be pretty much everything they care about. Their pictures. Their personal documents. Their email.
Why do people persist in thinking that /etc/logrotate.conf is more important than the user’s home directory?
A compromised system may play a role in something far nastier than a user weeping over files they didn’t back up.
And most of those nasty things can be accomplished from a regular user account.
It is not more important as data. But this line of thinking worries me. It has “Fedora 12” painted on it; Linux is now suddenly understood to be a big single-user “Desktop Spin” (whatever that means).
But as the poster above tried to say, if you are able to own, perhaps in addition to the user’s data, that /etc/logrotate.conf, implying root compromise, you can probably greatly lengthen the period of the compromise as well as hide it from detection. To name a few examples.
I don’t understand how this package manager/repo thing works. Just wondering: is it possible, for some reason, that we land on the wrong site/repo because of a compromised/poisoned DNS? So instead of getting a Pidgin update, we get malware?
Package managers check that the package is properly signed.
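The idea can be sketched with plain checksums (real APT additionally verifies a GPG signature over the repository index): even if DNS sends you to a rogue mirror, a tampered package no longer matches the sums recorded in the signed index. The file names below are dummies made up for illustration:

```shell
# Stand-in for a package and the (normally GPG-signed) index that
# records its checksum. "pidgin.deb" here is just a dummy file.
work=$(mktemp -d); cd "$work"
echo "legitimate package contents" > pidgin.deb
sha256sum pidgin.deb > SHA256SUMS   # plays the role of the signed index

sha256sum -c SHA256SUMS             # untampered: the check passes

# A poisoned mirror swaps in a modified package...
echo "malicious payload" >> pidgin.deb
sha256sum -c SHA256SUMS || echo "tampered package detected"
```

Because the index itself is signed with the distribution’s key (shipped with the OS, not fetched over DNS), a spoofed mirror cannot forge a matching index either; the update simply fails rather than installing malware.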