I've bored the readers of my personal website to death with two rather prosaic articles debating the Linux security model, in direct relation to Windows and the associated claims of wondrous infections and lack thereof. However, I haven't yet discussed a single program that you can use on your Linux machine to gauge your security. For my inaugural article for OSAlert, I'll leave the conceptual stuff behind and focus on specific vectors of security, within the world of reason and moderation that I've created, and show you how you can bolster a healthy strategy with some tactical polish, namely software.
Do not expect wonders or detailed guides on how to set up this or that NIDS. That's not the idea.
The idea is to help you understand the core elements of security, identify your
needs, and address them with a flexible and transparent solution. The choice of software
should reflect your needs.
Let us begin.
Linux security as a concept
Linux security revolves around minimizing exposure to malicious code through digitally
signed repositories, and minimizing accidental or automated damage through a non-root account
and conservative default file permissions, with diversity of software as a reserve.
Now, some of the suggested repertoire might need some small tweaking.
Keep the system up to date
This is a very simple, very important piece of the puzzle. Make sure the software
repositories are configured and that you have an automated update mechanism in place. Having
your system fully patched is always a good idea.
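If you want to drive this by hand now and then, it's a one-liner. A minimal sketch, assuming a Debian/Ubuntu or Red Hat/Fedora family system:

# Debian/Ubuntu family:
sudo apt-get update && sudo apt-get upgrade
# Red Hat/Fedora family:
sudo yum update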
Firewall
Firewalls sound like an interesting concept. Basically, a firewall is a tool that controls
the traffic flowing in and out of your machine. Firewalls are configured to permit traffic you
initiated and asked for, and to block traffic that was sent without invitation (unsolicited).
This holds true for firewalls on all operating systems.
To work with one, you do need some basic understanding of networking. Luckily for you, most
distributions ship with a firewall enabled, with default rules that permit a reasonable level
of comfortable use, without any special changes required. In a few cases, you may need to
create manual rules to allow additional functionality, like Samba sharing.
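To give you an idea of what such a manual rule might look like, here's a hedged sketch for Samba; the 192.168.1.0/24 subnet is an assumption, and the exact tool depends on your distribution:

# Ubuntu with ufw: allow Samba traffic from the local subnet
sudo ufw allow from 192.168.1.0/24 to any port 445 proto tcp
# Generic iptables equivalent:
sudo iptables -A INPUT -s 192.168.1.0/24 -p tcp --dport 445 -j ACCEPT
# (full Samba also uses ports 137-138/udp and 139/tcp; this covers the common case)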
Some distributions ship with a graphical management console for the firewall, which
simplifies the usage. Others stick to the command line, making them less suitable for new
users.
Here are a few examples:
Firewall in Fedora 12:
Ubuntu, on the other hand, ships
with the firewall disabled, because no network-aware services are running by default, hence no
need for a firewall, and hence no management console for it either. However, you can very
easily restore the missing bits with additional software like gufw:
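For instance, assuming a stock Ubuntu with the packages available in the default repositories, something like this should do:

sudo apt-get install gufw   # pulls in ufw and gives you a simple GUI
sudo ufw enable             # turn the firewall on
sudo ufw status verbose     # confirm the default policy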
Scanning for malware
This is so '90s when you think about it, yet some people still have a dire need for scanners.
In that case, you may want to consider using either the rkhunter or chkrootkit scanners, both
of which will probe your system for nefarious changes. Both are command-line only.
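Typical usage is straightforward; a sketch, assuming you've installed both from your repositories:

sudo rkhunter --update   # refresh rkhunter's data files
sudo rkhunter --check    # probe the system for known rootkits
sudo chkrootkit          # run chkrootkit's battery of tests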
Using these scanners implies a deeper understanding of the Linux system. Then, there’s the
question of what to do if you encounter a problem. Can you really trust a subverted machine?
How do you recover? You should definitely read my previous article for
that.
Anti-virus (not needed, but read on)
You do not need one.
Seriously. Honestly. It's not required. It's useless. In the worst case, if you can't let go
of your Windows demons, go for a free solution, so that you don't waste your money on something
that is conceptually redundant.
There's ClamAV (including KlamAV for KDE), as well as a
number of commercial vendors that have started shipping solutions for Linux. Then again, most
anti-virus rescue CDs are based on Linux. Clam-based versions can be found in the
repositories.
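If you do want to experiment with ClamAV, a minimal command-line session might look like this (assuming the package comes from your repositories):

sudo freshclam                        # update the virus signature database
clamscan -r --infected ~/Downloads    # recursive scan, list only infected files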
As a security measure, anti-virus products are problematic due to their signature-based
nature, which is always competing against malware creators and always lagging behind. The
only sensible reason to use anti-virus on Linux is to scan files that you receive from your
friends running a flavor of Windows before forwarding them to other Windows users. That way,
you may break the chain of accidental malware spreading. You may not notice or care, but your
clueless friends could. The best solution is to have no friends, but most people fail at
this.
On that note, please consider reading my whitelisting vs. blacklisting
article.
Still, you need not have a resident program running on the system. You can go for a web
solution, like Jotti or VirusTotal, both of which use multiple scanners to detect
malicious content. Upload a file and it will be diagnosed by a host of dedicated anti-malware
software.
You can also consider using dedicated security distributions for offline, in-depth system
scanning and analysis. A forensics distribution like BackTrack sounds like a very good
idea.
You may also sin the sin of using a Linux-friendly, Windows-based preinstallation environment
(PE) like BartPE or UBCD4WIN, which come with scanner
utilities, too.
Startup applications and services
If you’re in the mood, go through the list of applications and services configured to run on
your machine. You may discover undesired processes running, hogging resources and possibly
exposing your machine to threats, as well as plain doing things that you do not want. While
this can take the form of system optimization, it can also have security implications.
As an analogy to Windows, think of these as the msconfig and services.msc utilities.
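What the listing looks like depends on your distribution; a sketch for the two big families:

chkconfig --list       # Red Hat family: services and their runlevels
service --status-all   # Debian/Ubuntu family: rough running/stopped overview
ps aux                 # everything actually running right now, on any distro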
System awareness
System awareness goes beyond malware. It’s about controlling your system and knowing what’s
running when, where and why. There are tons of tools available, many already installed and
waiting for you.
If you’ve read my Linux cool hacks, both parts one and two, you’ve learned about
a few useful system tools that provide a better visibility of what’s happening inside your
system.
I’m going to mention a few, just briefly. Some of these will have their own dedicated
article, with numerous examples and screenshots.
/var/log/messages
This is the system log. Almost everything goes in there. Reading the log will give you an
indication of possible system issues, including software errors, as well as possible
security-related items. You do need some knowledge to read the file properly.
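For example (root is usually required, as the file is not world-readable on most distributions):

sudo tail -f /var/log/messages                  # watch new entries as they arrive
sudo grep -i "error\|fail" /var/log/messages    # fish for trouble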
/var/log/secure
You can configure your machine to log ssh and sudo attempts to a separate file, like
/var/log/secure. Then, you can examine the log for any privilege escalation attempts or
remote connection attempts.
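On Red Hat-style systems this split exists out of the box; elsewhere, a single syslog rule does it. A sketch, assuming a stock rsyslog or classic syslog configuration:

# in /etc/rsyslog.conf (or /etc/syslog.conf):
authpriv.*    /var/log/secure

Then examining it is a matter of grep, for instance:

sudo grep -i "authentication failure" /var/log/secure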
Examine logged in users
There are many ways of doing this. The most accurate one is to parse the output of the
ps command. But you can also use w, who, and lastlog. Manually dumping utmp and
wtmp can also work.
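All of these take a second to run:

w             # who is logged in and what they are doing
who           # current sessions, straight from utmp
last | head   # recent login history, read from wtmp
lastlog       # last login time for every account on the system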
Process accounting
If you use pacct, you can write a log entry for every command
successfully completed on your machine. Then, you can dump the log and look for suspicious
entries. Automating the mechanism can provide you with a useful early warning system.
lastcomm lets you print out information about previously
executed commands, sort of like running head against the pacct log. Furthermore, you can enhance the
power of process accounting by using sar.
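A sketch of the basic workflow, assuming the acct/psacct package from your repositories (the pacct file path varies between distributions):

sudo accton /var/account/pacct   # switch process accounting on
lastcomm | head                  # most recently completed commands
sudo sa                          # per-command summary of the accounting data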
Audit files
It is possible to audit core system files. This is what audit is for: a built-in Linux kernel auditing
facility that allows you to monitor changes to critical system files. I'm going to write a
dedicated article soon. Stay tuned.
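Meanwhile, to give you a taste, here's a minimal sketch, assuming the audit package and its auditd daemon are installed and running; the watch key name is just an example:

sudo auditctl -w /etc/passwd -p wa -k passwd-watch   # watch writes and attribute changes
sudo ausearch -k passwd-watch                        # review the recorded events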
Other tools
You also have a range of other utilities available, like netstat or nmap, which can help you examine
your machine's network visibility.
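For example:

sudo netstat -tulpn    # listening TCP/UDP ports plus the processes that own them
nmap -sT localhost     # TCP connect scan of your own machine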
Geek stuff
There’s geek stuff, of course. For example, you may want to use system hardening tools like
AppArmor, by creating special, sandbox-style profiles for your applications, which are then
restricted from doing harm to your system, should an unwanted privilege escalation occur, due
to an error, a bug or a vulnerability.
There's also SELinux, available in most Red Hat-based distributions, like Fedora.
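Checking whether either framework is actually active is simple; each command exists only on systems that ship the respective framework:

sudo aa-status   # AppArmor: loaded profiles and their modes
getenforce       # SELinux: Enforcing, Permissive or Disabled
sestatus         # SELinux: more detailed status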
Conclusion
I've written lots of stuff. So what do you take from this article? Well, a firewall seems like
the best single solution overall. It's useful and sometimes rather necessary. Anti-virus and
malware scanners are definitely not needed. The rest is perks. Take it or leave it. You can
run a comfortable desktop life in Linux without so much as lifting a finger, with most
distributions configured properly, including firewall enabled and running and hardening
profiles preconfigured for you.
You may want to invest time in learning how to use the logging tools and facilities, as they
offer a wealth of useful information. Properly configured and used, they will remove the
need for commercial tools that strive to do the same for you.
Server security is a different matter altogether, but for home use, you’re in a really good
shape. Just make sure to keep the system patched, install software from official repositories
and run a firewall. The rest is polish.
Windows users moving to Linux often suffer from a panic surge due to the sheer lack of
security-related buzz, but it’s really simple and quite boring. There’s no need to go
overboard. You can invest your brain cycles in having fun. That would be all.
Cheers.
About the author:
Igor Ljubuncic aka Dedoimedo is the guy behind dedoimedo.com. He makes a
living out of his very hobby – Linux, and holds a bunch of certifications
that make a nice pile in the bottom drawer.
It is nice to see some practical advice regarding security. Now I am off to investigate gufw!
Ok, someone has spent countless hours coding and debugging his new masterpiece app and yet couldn’t spend five minutes thinking about a real, at least pronounceable name? I mean, gufw?
Indeed, the name is just horrible. And besides, it looks pretty but has almost no features whatsoever. Firestarter ( http://www.fs-security.com/ ) may not be as pretty but it’s a lot more functional and serves much better as a GUI for the Linux firewall system.
Unfortunately, Firestarter isn't particularly maintained any longer, except for occasional patches from the community. Otherwise a (very bestest) great GUI tool for managing firewalls.
In what little I have used it, I haven't found any bugs or missing features; the only thing that needs improvement is the looks and flow of action. And that should be rather easy to improve on; I might even try it myself when I get bored.
GUI Uncomplicated FireWall. Seems pretty logical, though not as sexy as iGUFW.
And that's not even good English. A more correct name would have been Uncomplicated GUI Firewall (UGF). But then again, programmers like to program, not think up fancy names or write documentation.
I thought Gnome UFW.
GUFW is a GUI for UFW, where UFW means “Uncomplicated FireWall” (formerly “Ubuntu FireWall”), so the programmer only added the G part.
The programmer is Spanish, and his English isn't very good, so instead of bullying him, you should be thankful for the extra work he put into translating it into English so you can use it, or make your own program and try to do a better job.
But… it is far easier to complain than contribute…
Back in college, the course on Operating Systems had this to say on Linux security:
“Linux is both the least and most secure OS there is. It all depends on how much time and effort the admin puts in to properly configuring it.”
Well, that's not really true as, generally speaking, Linux distros ship with more secure defaults than Windows does.
However, it is fair to say that no OS is secure if you stick an experienced idiot in front of it, i.e. the kind of users who are experienced enough to know how to do stuff but not smart enough to know they shouldn't do it. (Unfortunately, I think we've all met at least one of these guys, and I'm sure a few of you have made a living out of fixing their computers.)
Linux also ships with more outdated and insecure packages, than the latest version of Windows.
Because Windows retail boxes update themselves while sitting on the shelves at the store, right?
Ah, thanks for that heads-up. That Windows shipping with more up-to-date programs and patches would explain the 80 updates fed into my shiny new Windows machine last Friday.
And may I remind people that Linux security features are not even turned up to full blast on default installations. It’s this good out of the box but it’s not even trying. There’s room for increasing Linux security two-fold or more. Consider:
* mandatory AppArmor-based software whitelisting;
* mandatory separate /home and /tmp partitions with noexec,nodev,nosuid (see the fstab sketch after this list);
* restricting software installation to official repositories and their mirrors and denying direct install of debs/rpms/install kits by default;
* integrating and shipping default kernels that feature better ASLR and NX bit support.
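For the second item, hypothetical /etc/fstab entries might look like this; the device name and filesystem type are assumptions, adjust to your own layout:

# hypothetical entries -- adjust devices and filesystems to your system
/dev/sda3  /home  ext4   defaults,noexec,nodev,nosuid  1  2
tmpfs      /tmp   tmpfs  defaults,noexec,nodev,nosuid  0  0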
May I remind you that I stated “more secure defaults than Windows” and not that “Linux’s defaults are perfect”
They aren’t more secure than Windows anymore. At one time, sure. Now? No.
So Windows 7 doesn’t give the default user accounts full administration rights?
Windows has come a long long way, there’s no denying that. And I’m not disputing that security is an ongoing battle in which users shouldn’t get complacent regardless of the OS they run.
I just don’t see the point in lying by saying all OSs are equally secure by default. The simple fact is some OSs do ship with better defaults. However, and as I’ve already stated, none of that really makes much difference if you stick an experienced idiot in front of the keyboard.
It really would be a lie to say that they were all secure by default, because none of them are.
And once again: Hence why I said “more secure by default” and not “Linux’s defaults are perfect”
Ah, but what do you mean by "Windows" and "Linux"? If you mean an install with just the OS and an interface, let's assume you're right. Windows 7 has made great strides toward closing remote vulnerabilities and has adopted protections such as ASLR, sandboxing IE, etc. Remotely breaking into Windows 7 through IE8 has been called one of the biggest modern challenges in security.
But a working PC also contains a large number of applications. This is where the cookie crumbles.
The Windows applications come in huge numbers, they are mostly closed source and they are not updated in a centralized manner. Plus, Windows users consider it normal to download stuff off any website they run into, not to mention downloading and running dubious cracks and keygens. What’s more, they’ve become complacent about having malware in their machine.
Contrast this with Linux apps which are fewer, mostly open sourced, come 99% from trusted repositories, the update system is centralized and automated, and there’s usually no need to go and install cracks. And a Linux user who finds a single piece of malware on their machine will be absolutely horrified.
Basically, the Windows userland is a security nightmare.
s/Windows/any\ OS/i
Precisely so.
In fact, there was one case recently of an obscure program called UnRealIRCd where someone had replaced a tarball (which was unsigned) on a mirror with a version that contained a trojan.
There was a huge amount of "horror" and discussion generated over this, but at the end of the day the trojan found its way into only two minor distribution repositories. It is unclear if it actually managed to infect any end users' machines at all.
The amount of “horror” generated compared to the actual infection rate was hugely blown out of proportion. In a way, that is a positive … if an equivalent thing had happened in the Windows ecosystem, probably no-one would ever have even noticed, and certainly there would be no comment raised.
Depends on how you define it. Windows now is certainly more secure than Windows of the past, but nevertheless the actual infection rate of Windows systems is still vastly more than infection rates of any other system.
It matters not at all to the end user (whose system gets infected) if this is “unfair” comparison, or if it is due to the fact that there is vastly more security threats against Windows. The practical outcome is still that if you run a Windows system, it is far more likely to get infected.
Welcome to every OS ever created.
The link to the article about white-listing vs black-listing is a file:// URL. (And it includes a Windows drive letter!)
While I agree that anti-virus is pretty pointless on Linux, and even detrimental on Windows, I think your reasons are nearly all flawed.
1. User account stops viruses getting root
This is largely moot. Viruses aren't really interested in gaining root access. They can do nearly anything as the user anyway: key-logging, sending spam, DDoS, and so on. Besides, once you have access to a user's account it is trivial to gain root: just change their path to point to a fake 'sudo' program which logs their password.
2. System updates provide security fixes for all software.
Ok this is a fair point.
3. Software is obtained from trusted repository
This is true up to a point. I’d bet most linux users install stuff from outside the repositories, and besides we’ve already seen examples of mirrors, and even source code being maliciously modified.
4. By default files aren’t executable
This is just silly. Most viruses work either by buffer overflow type exploits, or by tricking the user into running a program. File permissions aren’t going to help in either case. By the way, you can easily execute non-‘executable’ binaries like this:
/lib/ld-linux-x86-64.so.2 ./a_file
5. Diversity
This is true. Although I’d wager Ubuntu is becoming popular enough to count as a single target.
6. People will see vulnerabilities in open-source code.
Well evidently not, otherwise there wouldn’t be any need for security updates. See also the Underhanded C Contest: http://underhanded.xcott.com/
7. Linux users are more skillfull.
True, I suppose.
The real reason you don’t need anti-virus on linux is because there are a very very small number of linux viruses. And that is almost certainly due to the fact that it has a 1% market share (and probably the diversity and skill factors to some extent).
This is no longer true. The stated goal of Ubuntu is to build a consumer distribution, and it is being sold by Linux zealots that non-skilled users are safe using it.
This has opened a wide vector for attack.
Note: don’t call them viruses, call them worms. Viruses are a different beast (they don’t use networking as a vector).
Second, it's not that easy. As an unprivileged user you can NOT snoop on other users or open ports under 1024 (which is where most legitimate servers like to reside). But yes, you have network access, so spam and DoS are valid points.
It’s NOT trivial to gain root, if it was trivial the whole UNIX security would be worthless. The particular method you described is not really practical.
I think you mean social engineering — tricking the user with a sudo window. Which can work (if the user doesn’t bother to think why there’s a sudo window all of a sudden).
But the point is moot. If there's malicious stuff running on your machine you're pretty much screwed. This is the 1st major vector of computer security: remote break-ins without user intervention. This is a very important thing and THIS is why Linux is more secure than Windows: on Linux, everybody makes every effort so that the break-in doesn't happen. On Windows, they let it happen and deal with it afterwards.
Granted, the dependence on the repositories is a weak link. But the repositories are distributed and closely watched by many people. I'd say they do a much better job than, say, Apple does with the App Store. Not to mention they have the source code too.
As for installing stuff from other sources… this is the 2nd big vector: users bringing malware in themselves. And there’s not much anybody can do about it. Unless the user understands not to install stuff from unofficial sources, all bets are off.
BTW, a Linux distro can easily close 99% of this vector by only allowing certain repositories and disallowing direct installation of package files (deb, rpm etc.) But it’s not practical.
For that to happen you need to already be able to run code. If you managed that you don’t need that trick. On the rest, you’re right.
But let me point out that when you’re trying to trick someone into running malware, it’s one thing if all it takes is to double-click (a universal action used for everything) or if you need to go into file properties and change some stuff. You have to admit that executable status in metadata is better than executable status as part of the file name.
That point of view is wrong.
Some people like to say that once a platform is more popular there’s more (or more motivated) people attacking it so chances for break-in increase. That’s bull. Remember that most of the servers of the world run some form of UNIX or Linux and that has NOT made them more vulnerable. There’s no direct link between popularity and security.
There is an indirect one. Some of the installations are old and not updated. If you have lots and lots of installations, statistically the chances increase of running into an old one. It's a numbers game. No relation to actual security.
The reason there is so much Windows malware is because it’s easy for it to exist: lots of vulnerabilities, bad underlying security models (fixed with Windows 7, hopefully), unpatched machines, many propagation vectors. There’s next to none for Linux because vulnerabilities get patched fast, almost all installations update by default and propagation vectors are few.
Not sure how you mean that. Since there are security updates, obviously somebody DID see the vulnerability (and fixed it). Ok, they didn’t see it the first time, but second time is better than never. Between a platform with 1000 vulnerabilities which has updates for all 1000 and a platform with 2 vulnerabilities which leaves 1 open, I’ll take the first.
Don’t count on it. Educating users will not work in the long run. Most users are not skilled enough, and security is a highly skilled game.
The most you can teach them is not to install software from anywhere else but the official distros. The rest of the security job needs to be done by the OS and software with no user intervention.
Which will always leave social engineering as a backdoor. But that’s valid anywhere.
echo "alias sudo='sudo do bad stuff >/dev/null 2>&1; sudo'" >> ~/.bashrc
I agree with pretty much everything else you said though. Malicious people that want in don’t necessarily need in “right now”, they wait patiently for it.
In order for that to work the malware app in question would either have to be root in order to put the fake sudo in a location mentioned in $PATH, or it would have to place it somewhere in the user’s own home directory and modify $PATH.
The problem? Well, at least some distros use the Tomoyo/SELinux framework to disable running applications from the user's own home directory if they have the same name as a common system application, and sudo often belongs on that list.
Some shell providers even completely disable the ability for one to run executable code from the home directories or /tmp and it might actually be a good idea for home-user oriented distros too; a common home user does not have the need to execute stuff from their home directory, they’ll most likely just install what they need system-wide using the package manager. Executing stuff from your own home dir is more likely a power-user feature, including programmers et al, not Joe Sixpack.
That works. It can be countered with some extra safety measures in the shell.
But it’s awkward; it may give false positives and impact legitimate uses; it can still be circumvented; it uses a blacklist, which is usually a bad idea in security; and most importantly, it misses the main point: once malware executes on your machine, you’re screwed.
There's a lengthy discussion on this exact topic on the Ubuntu forums, if you care to read it: http://ubuntuforums.org/showthread.php?t=504740
Personally, I’d rather have most effort put into plugging app vulnerabilities than in mitigating the aftermath of a break-in. I find the casual attitude about break-ins on Windows terrible. If a Linux user found a single piece of malware crawling inside their machine, they’d be horrified. A Windows user just assumes it’s natural to have piles of that stuff. Terrible.
Granted, good security means layers upon layers and not relying on a single barricade, lest you find yourself in trouble when that barricade is breached. sudo calls could probably use better guarding and closing some of the more “creative” ways of plugging into it.
Let’s not assume there’s an actual person behind every break-in. Most break-ins into personal computers are done by bots, the worms that cruise the net and blindly try every address with every trick they know. They don’t rest, they don’t stop, they don’t think, they don’t have personal likes or dislikes or reasons to do something. They just do what they were told to do, forever. Like I said, a numbers’ game. That’s the main threat we’re trying to protect against: dumb repetitive robots.
I’d wager that if an actual highly skilled hacker wants in your computer, they will manage that. Then again, even an unskilled person can manage that, with a hammer and your fingers. But that’s another ball game entirely.
You guys forget that security features don’t exist in a vacuum and I’m not sure you realize how much Linux does to mitigate the user being the weak link.
4. By default files aren’t executable
In combination with things like a lack of embedded program icons, not hiding file extensions and, for Nautilus users, extension-header mismatch warnings, this works to prevent “Cool picture!.jpg.exe”-style exploits.
I vaguely remember the devs recognizing a hole in this protection relating to .desktop files about a year ago and rushing to close it.
5. Diversity
Ubuntu may be approaching “single-target” popularity, but I suspect the presence of Kubuntu, Xubuntu, and Lubuntu will prevent it from ever having that problem as badly as Windows or MacOS could.
6. People will see vulnerabilities in open-source code.
While this is somewhat optimistic, open-source does have a deterrent effect on bundled malware and, more importantly, it means that features like stack-smashing protection, NX-bit buffer overflow security (A.K.A. Hardware DEP), and the like can be easily phased in by adding the userspace changes to the compiler.
For example, on Windows, last I checked, Hardware DEP was still an opt-in thing in the default configuration to ensure backwards-compatibility with older software. On 64-bit Linux (and 32-bit distros which don’t need to ensure no on-boot freezes on Pentium Pro), GCC has been appropriately setting the DEP opt-out flag in ELF headers for years. (nested functions, JIT compilers, and so on require the ability to dynamically build code and then execute it)
Here are some of the other things I didn’t see mentioned:
1. Linux vendors have a better track record than Microsoft for patching vulnerabilities quickly. (Is Microsoft still equating their confirmed exploits to Linux potential vulnerabilities and ignoring the Security/Crash/Bug/Annoyance flags to pad the numbers? I know they used to do that)
2. Without root access, malicious programs can’t remove themselves from the list of running, killable processes, interfere with syslog, etc. Last I checked, Windows was still struggling to virtualize all the admin-level access that older programs expected to have.
3. On Linux, because privilege separation was around from the start, the number of escalation dialogs users see is significantly smaller than on Windows (partly because of the batching of package installs) so users are less likely to get in the habit of just clicking OK without reading them.
Also, the presence of user accounts from the beginning means families which give different people different accounts are less likely to run into rough edges or to end up depending on apps which implement their own user profile systems. (Which means that you can have users who don’t know any better (eg. kids) but don’t have the admin password or access to mommy and daddy’s files)
4. Linux media players aren’t vulnerable to the “Use Windows Media Player and get tricked into visiting a malicious DRM auth site” vulnerabilities I see every now and then. Any automatically-offered codecs come from the same signed repository farm as the OS.
5. Linux provides many APIs for implementing drivers in userspace (libusb, CUPS, FUSE, CUSE, etc.) minimizing the amount of potentially vulnerable code that runs in kernel space. (Especially important since, video aside, the main remaining things which don’t use a standard OS-bundled driver seem to be USB doodads and printers)
6. Linux provides no hooks for programs to steal file associations, which removes the need for 90% of those buggy, tray-resident "agents". (Especially when combined with the general preference for minimizing wheel-reinvention (outside the world of Linux audio))
Correction: We have seen a few examples of mirrors where someone hacked into a machine, but no distributed software was altered because of that. Just lately, we saw one example of an obscure source code tarball being replaced on some mirrors by a trojaned version. Fortunately this affected the repositories of only two known distributions, Arch and Gentoo, both of which are minor distributions.
It is unlikely that as many as a dozen systems were ever infected by any of this activity.
BTW: I personally install very little software from outside the repositories. Why would I? Debian repositories contain over 25,000 packages. There is very little outside that you would actually need.
If we are going to try to scope the problem, let's try to keep it real. Compare this real-world scope for malware infection of Linux systems to the estimated 50% of Windows machines that are infected (perhaps 200 million machines or more) … that gives it some perspective.
Personally, I think Linux is much more secure than Windows, and it is more reliable than Windows. However, distros run as servers are often set up with anti-virus software to increase the protection.
The following link shows the most reliable hosting companies in May 2010:
http://news.netcraft.com/archives/2010/06/08/most-reliable-hosting-…
“Well, firewall seems like the best single solution overall.”
A firewall won't save you from anything by itself, and the only meaningful reason for using firewalls is when certain hosts need access to certain services. On a workstation you can pretty much disable/remove every network daemon like ssh, apache, mysql etc., or if you need them to develop stuff, then just bind them to localhost.
“It’s useful and sometimes rather necessary. Anti-virus and malware scanners are definitely not needed.”
Then why do you even mention them? Most of the Linux AVs were made for mail gateways or to scan file servers, and their detection rate is far worse than what their Windows versions can offer. Except clamav, because that's crap on both; if you had to write a list of which AV is the worst, clamav would be somewhere at the top.
You should rather have written about rootkit detectors like: http://www.chkrootkit.org/
One of the best nix sec guides I read in the past (good for workstations too) was this one, unfinished unfortunately:
http://slackware.asmonet.net/index.php?dzial=artykuly&p=5
Yes, I agree. I really don't see much point in packet filters on workstations. Either you want to run a certain daemon, and then it needs open ports, or you don't, and then you just don't run it. If daemons that shouldn't be are running with listening ports, either you screwed up or your distro is fundamentally broken.
I'd have to disagree; in my experience it's quite capable at mail scanning. Sure beats most of the Windows junk AVs.
A local firewall is very useful, even on a Linux computer when it’s directly connected to the internet (home, free public WIFI, etc).
There are a lot of network-based attacks that computers without firewalls are vulnerable to:
man-in-the-middle attacks, spoofing, etc. It also keeps ports that shouldn't be exposed to the internet away from the internet.
"Sure beats most of the Windows junk AVs"
I don't think that any antivirus company even considers clamav a competitor, or cares to share samples with them; this is the reason why their signature db is nowhere compared to the "junk AVs" you mentioned. My experience is that clamav not only gets sigs for a certain malware later, but it doesn't have a signature for 8 out of 10 files.
"There are a lot of network-based attacks that computers without firewalls are vulnerable to:
man-in-the-middle attacks, spoofing, etc."
I don't see how a firewall would help you in a MITM attack. There is a publicly available tool called ZXARPS which is able to intercept/change traffic between hosts in the same broadcast domain (e.g. between your laptop and the default gateway); try to defend your box against that with iptables.
“It also keeps ports that shouldn’t be exposed to the internet away from the internet. ”
The thing is that you are almost always behind a NAT device, whether you're using your laptop in a corporate network or just at home behind a DSL router. But don't get me wrong: having a firewall in situations where you, for example, have a Samba server running on your laptop that you need to access when you are home is OK.
Using premade firewall rulesets that the user in many cases doesn't understand, however, probably just an "input only" ruleset, doesn't help much.
Depends on the attack really, a firewall shouldn’t be the only line of defense.
That is a fair point, but it requires that the user remember to turn on and configure a firewall during times that they aren’t protected by some other method.
What?!? How exactly does a firewall mitigate man in the middle attacks or spoofing? That’s just silly.
Spoofing IS a man in the middle attack.
– http://www.fwbuilder.org/4.0/docs/users_guide/ch15s02s06.html
– http://www.cipherdyne.org/LinuxFirewalls/ch01/
– http://www.aboutdebian.com/firewall.htm
For workstations and even home personal machines, SSH is a must for me. I can manage my home machines (and have) from anywhere in the world with a network connection, safely. If you support client/family/friend machines, then SSH can save you a house call.
Not to mention, copy files between machines safely, provide ad-hoc secure proxy when away from home, provide network shares with real security rather than CIFS/Samba’s leaky credential management.
Even if SSH wasn’t so wonderfully useful, I’d still recommend firewall rules if only to detect port scanning and other network oddities. If it has a network connection, it should have a firewall in place.
Yep, SSH is awesome.
Why bother? If you’re connected to the internet you’re going to get port scanned and probed. It’s a fact, you don’t need a packet filter to tell you that.
Heck, you're probably getting scanned and probed so often that the logs will be too big to be useful.
Firewalls are over-rated, both on workstations and standalone gateways.
Off-topic but this is especially common in corporate environments where many managers seem to think that firewalls (especially Cisco ones) are magic amulets that will protect you from all evil.
True, on its own packet filtering isn't going to cure all. You will also see a lot of noise if connected directly to the internet. If the user is behind a router, though, a notice of network noise may be a sign of issues within the local area. A friend is visiting and suddenly I'm getting port scans and other network oddities; I ask them if they are playing with my network or have an infection that needs to be addressed. My users' networks are behind routers, but when they call asking about popups, or I see oddities in the logs, I start looking at the other machines inside the network.
I’m not the average user though as all my machines at home that can, have IDS on and watching each other. Someone may pop one of my machines but you can bet there are going to be “witnesses” that see the mugging and report back to root.
I figure it's already there in the kernel, and the setup isn't hard enough to justify not doing at least a three-way handshake and a couple of drop-all rules.
Overrated, maybe. Over-relied-upon, definitely. But they have value. At least a few Windows remote exploits were preventable or otherwise mitigated by using a firewall (maybe Linux ones, too). And they help in a defense-in-depth strategy. They might also help some less-skilled Windows users detect network-accessing malware (though the false alarms often generated diminish the advantage there).
“I’ve bored the readers of my personal websiteto death…”
Not the “websiteto” bit. I think that should be reader (singular)?
This seems to be an article about how great your own piece of software is… if OSAlert is going to let people advertise, they could at least tell people about it first, like: "Advertisement follows…"
Two questions for you: What are you smoking, and can I get some of it?
Yeah, another useless article by some self-promoting dickhead featured on OSAlert.
You’ve got to wonder when the so-called editors on this site will wake up to the fact that they are little more than a bunch of saps…..
One of the biggest security concerns for home users is protecting their data from themselves. Backing up your data and running with an unprivileged account helps a lot.
You can delete or ship off user data without root escalation, so I fail to see how running with a non-privileged account helps here.
Keeping backups is a good practice.
“You can run a comfortable desktop life in Linux without so much as lifting a finger, with most distributions configured properly, including firewall enabled and running and hardening profiles preconfigured for you.”
This is completely untrue. You explain how to minimize risk; however, you still have to be careful about what you install or run, or you can still be exploited just like anyone else.
You do NOT need root to be exploited. Implying that users are safe by nature of running Linux is a very dangerous thing to tell to people that don’t know better.
Please update your article.
I don't view AV as optional, regardless of platform. Even if you're using a low-risk platform, you probably talk to other platforms. Viruses for my platform may be few and far between, but why should that justify my being an immune carrier, passing things on to the platforms I interact with? When we all got on the same network, we became responsible for each other's platforms. Passing something on through negligence is not justifiable.
Hello guys,
Dedoimedo here. First, this is my first article posted here, so please excuse the few rough points, like the missing space in paragraph one and such; they will be sorted out. Be gentle.
Now, thanks for the comments.
Linux security: we can argue about this to death, but the point is: it’s all about statistical probability.
I think the home usage security card is seriously overplayed, regardless of the operating system used and if you get it right, the operating system becomes a non-issue. Real security is agnostic.
Exploits exist, vulnerabilities exist. On the same note, huge meteors exist and cosmic ray bursts exist. Likelihood of witnessing one before imminent doom? Not very high.
If you don’t go about wildly executing stuff, then you won’t see the pixel devils take over your machine.
Cheers,
Dedoimedo
This is exactly the same security mistake that Microsoft made in the 80s and 90s.
And what do you mean by "This is exactly the mistake that Microsoft made in the 80s and 90s"?
Dedoimedo
Do you seriously not know?
All mistakes Microsoft made until they decided to take security seriously.
I disagree.
I truly believe what I say and it’s as simple as that. Security (for home) is no biggie. In fact, it’s boring.
Windows OS is neither the disaster nor the blessing that you might read about here and there. If you pay attention, most boxes were compromised by: no patches and ancient vulnerabilities, deliberate execution of code, user mistakes, not any special inherent flaws in the design.
Dedoimedo
Windows was a disaster until XP Service Pack 2, as were ME, 98, 95, 3.11, and MS-DOS before that.
I know that you disagree, but that doesn't make your opinion right. Microsoft themselves admitted 8 years ago that their security was crap and their design was flawed.
http://www.microsoft.com/presspass/features/2002/feb02/02-20mundieq…
“Boring”? What does that even mean? Ignoring security at home will just make Linux become the next Windows 95. Stop telling users that don’t know better that they don’t need to worry about it.
I said this earlier, but the link to your article on white-listing and black-listing is broken (it is a file:// URL). It looks like you forgot to put that on the Internet. I would be interested in reading it.
Just go to his own site and search for it.
http://www.dedoimedo.com/life/whitelist-blacklist.html
And also, I would say that the whitelist blacklist article on his website is pretty much against what I think about whitelisting and blacklisting.
Central to his whitelisting and blacklisting article is the idea that whitelisting = innocent-until-proven-guilty and yet he goes on to say that whitelisting is done in old Soviet Union yada yada and that blacklisting is the norm of the society, done in US and guilty-until-proven-innocent.
I would say that trying to divide it into forms of governance and behavioural patterns is much more complicated. Not to mention the amount of prior work needed to prove that governance and behavioural patterns can be mapped into the analogy in the first place. But I’m digressing. My main problem is that I do not even agree with his use of the words whitelisting and blacklisting.
Basically, the idea of blacklisting (not coincidentally, blacklist as a word is permitted by the spell check while the much newer whitelist is not) is to select known bad elements of the pool of all elements and apply strict rules on them. Contrast this with whitelisting where you select the known good elements and build a fence around them to protect them.
Actually, both cases’ characteristics are very well known. Blacklisting allows for more rapid development but is much more prone to attacks while whitelisting is much more secure (though not eliminating insiders) but can be so painfully slow. Usually, in real life, they are used in combination — simply allow for a gray area and you can selectively relax rules for known good elements and apply strict rules to the known bad ones, with whatever policies the administrator wants to apply on the gray.
I am now going to show how both cases are doomed to failure if not applied together. Blacklisting is currently employed in malware scans. This is where malware appears in the wild first (recall Blaster, mydoom, sasser?) and then the malware scanning companies will do whatever they can to block it, which, for virii (stupid spell check allows for viruses but not virii) is a signature check. This model of work is proven to be easily compromised. Whitelisting, on the other hand, is going to say that you can only use openoffice.org and mozilla firefox. That way, you cannot install stuff that compromises the security of the system. If chrome comes along, it will need to be thoroughly vetted first (no wonder it is so slow moving), but this system is only vulnerable to regulation oversight and insider malevolence. It tends to last longer, and is evolutionarily selected for use in large governmental organisations, most notably in military (i.e. those that try to be funny in war tend to be infiltrated too quickly).
Hence, it is important to incorporate both. Which is the problem with malware scanning these days — old systems used to have intrusion prevention rather than detection, and when they compared the newer detection to prevention, they found out, quite unsurprisingly, that detection is a lot lousier in dealing with attacks (and that the number of signatures to scan increases so fast that whatever gains it initially had over prevention is quickly overrun).
If you want to read more, read ranum at
http://www.ranum.com/security/computer_security/editorials/dumb/ind…
PS: In fact, the whole site itself is generally well-written.
It's nice to see common-sense articles like this one, and helpful ones, too. Hopefully it will also demonstrate that Windows users are not only affected by viruses, malware, and all of that, but also lack any useful logging capabilities. Linux is amazing when it comes to logs. Logs are kept for everything, and are a tremendous help when trying to troubleshoot something. In Windows, logs are an afterthought, which makes troubleshooting more difficult, as we find ourselves looking around for solutions based on symptoms. On Linux, we can look at a lot and determine where to go next.
I don't quite agree with the article: the default setup does not use a firewall, but it does expose some things to the outside world: avahi-daemon.
It has a few settings to make it more secure by itself, but saying a default Ubuntu desktop has nothing exposed is not true.
Possibly even dhcpcd is listening on its socket.