“OpenBSD is widely touted as being ‘secure by default’, something often mentioned by OpenBSD advocates as an example of the security-focused approach the OpenBSD project takes. Secure by default refers to the fact that the base system has been audited and considered to be free of vulnerabilities, and that only the minimal services are running by default. This approach has worked well; indeed, leading to ‘Only two remote holes in the default install, in a heck of a long time!’. This is a common sense approach, and a secure default configuration should be expected of all operating systems upon an initial install. An argument often made by proponents of OpenBSD is the extensive code auditing performed on the base system to make sure no vulnerabilities are present. The goal is to produce quality code as most vulnerabilities are caused by errors in the source code. This is a noble approach, and it has worked well for the OpenBSD project, with the base system having considerably fewer vulnerabilities than many other operating systems. Used as an indicator to gauge the security of OpenBSD, however, it is worthless.”
The article is actually pretty interesting as many of the points raised had some insight behind them. That said, whilst the article argues that removing all bugs is not a substitute for a powerful security framework, it’s also the case that a powerful security framework is not a substitute for removing bugs! Specifically, if any (kernel, probably) bugs creep in that enable you to influence supervisor-mode code in malicious ways or to execute your own code in supervisor mode then there is nothing a security model can do – if you can run code at the highest machine privilege level then you really can do anything, no matter what user ID or security framework you’re running under.
It’s true that a security framework can constrain e.g. processes running as root that get compromised, in order to prevent them from causing more widespread damage. But if you don’t have proper auditing of *all* kernel code, from the system call interface through your USB feather duster driver, to your memory management and filesystem code, you can safely assume that somebody will be able to find a way past it.
Actually, what the author forgets is that OpenBSD has many memory-protection features enabled by default, which also help against all kinds of memory bugs in applications. When porting a new application to OpenBSD, it always surprises people when it fails because it bumps into these memory protections. Other operating systems benefit from this too, because their applications get fixed by the people porting them to OpenBSD.
They add another layer of complexity… you have to care about kernel bugs, application bugs AND security framework bugs.
I like OpenBSD’s way: keep it simple stupid.
Another point this article brings up is how the relevance of the operating system to overall security is decreasing. Operating systems are nothing on the Internet without the applications running on them. Beyond utility services such as mail, DNS, and NTP, there are also databases, application platforms, and web servers. And on top of all that you have the web application running on the stack. Who puts servers naked on the Internet anymore? Everything is behind a firewall. I wouldn’t even put an OpenBSD system out there naked.
Most of the hacks that have gained attention today have been Layer 7 protocol related, and could have happened on top of OpenBSD as easily as anything else. SQL injection can happen on OpenBSD as easily as it could on Linux; not because OpenBSD is less secure, but because OpenBSD is irrelevant to the web application. If there’s vulnerable Ruby code that passes the SQL statement on to the database, OpenBSD isn’t going to stop it.
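For anyone unclear on why the OS is irrelevant here: SQL injection is purely a string-handling bug inside the application. A minimal illustrative sketch in C (the table name and input are made up; parameterized queries are the actual fix):

#include <stdio.h>

/* Illustrative only – shows why SQL injection is an application bug
 * the OS never sees. The "users" table and the input are made up. */
int main(void)
{
    const char *user_input = "x' OR '1'='1";   /* attacker-controlled */
    char query[128];
    /* BUG: untrusted input spliced straight into the SQL string. */
    snprintf(query, sizeof(query),
             "SELECT * FROM users WHERE name = '%s';", user_input);
    puts(query);   /* SELECT * FROM users WHERE name = 'x' OR '1'='1'; */
    return 0;
}

No kernel permission check or MAC policy is ever consulted; the database simply runs whatever string it is handed.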
The default OpenBSD install may be secure, but it’s not terribly useful. Add in OpenBSD-audited Ruby, PostgreSQL, and Apache (not even sure if all of those are audited) as well as the Ruby on Rails framework, and OpenBSD is a small part of that security equation.
>The default OpenBSD install may be secure, but it’s not terribly useful. Add in OpenBSD-audited Ruby, PostgreSQL, and Apache (not even sure if all of those are audited) as well as the Ruby on Rails framework, and OpenBSD is a small part of that security equation.
Well, this is exactly the problem: if you don’t have a clue, don’t make wild assumptions like the ill-informed writer of the mentioned weblog. It’s not just the auditing; they’re also “forking” software like GCC, X, Apache, Perl, etc., and rewriting lots of essential applications.
http://openbsd.org/faq/faq1.html#Included
The firewall is naked on the Internet – what does that run? Most are not hard-coded embedded systems these days, but usually Unix-based systems…
And typically you have a router outside of your firewall which is naked on the Internet…
Do not depend on your firewall; your servers should be able to survive on their own on the Internet. Adding a firewall should be a defence-in-depth strategy, adding an extra layer of control/monitoring to the path. A lot of places use a firewall as their only line of defence, with machines behind it that would be trivially exploitable if they became exposed.
Incidentally, a firewall implementation done poorly can actually be detrimental: a poorly configured firewall could itself be exploited and compromised, and obviously any firewall, no matter how well configured, adds an additional failure point to your network.
Incidentally, while it’s possible that a webapp could be exploited to gain a foothold on your system… What you need to do is accept that this is a risk, and work to minimise the damage that could result.
A firewall won’t help here because once an attacker has a foothold on your machine, they are now behind the firewall.
You need to do things like ensure that the webserver is isolated (e.g. chroot), running as an unprivileged user with the minimum level of access it needs, and that there are no ways for the attacker to elevate the limited privileges gained through your webapp to root on the system.
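As a rough sketch of that pattern in C (the directory and UID/GID values are placeholders; note that chroot() and the group changes must come before setuid(), since after dropping the UID the process no longer has the privilege to do either):

#include <grp.h>
#include <stdlib.h>
#include <unistd.h>

/* Run once at startup, while still root, after any privileged setup
 * (such as binding port 80) is done. Path and IDs are placeholders. */
static void drop_privileges(void)
{
    if (chroot("/var/www") != 0 || chdir("/") != 0)
        exit(1);                    /* confine the process to its own tree */
    if (setgroups(0, NULL) != 0)    /* clear supplementary groups */
        exit(1);
    if (setgid(67) != 0)            /* drop group first... */
        exit(1);
    if (setuid(67) != 0)            /* ...then user; this is irreversible */
        exit(1);
}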
This also highlights my other point about systems being able to stand on their own without a firewall. If someone gets past your firewall using a poor webapp, and then finds that all your internal boxes are running vulnerable RPC services which are blocked by the firewall from being accessed externally… He could now use his unprivileged access to the webserver to exploit your RPC services and gain privileged access on other internal machines.
Then you may have shared authentication systems, where access to one system gives you access to others and it’s game over.
(this is based on my experience of a real world scenario)
It’s not enough to chroot the webserver… Any server running something that’s accessible from the Internet belongs in the DMZ.
I’d say you’ve seen some pretty badly designed solutions then.
It doesn’t matter if a server is running a Linux distro with SELinux, or OpenBSD: if it’s running a webserver it should be in the DMZ. And it shouldn’t be able to communicate with other servers, with the exception of a proxy server that has access to the data servers on the private server network.
Very interesting. It’s hard to assess as an outsider, but as an argument it seems to make sense. Essentially he is arguing that having a secure base system is only part of the solution. To be secure, you also have to limit damage when penetration occurs, and OpenBSD, he says, does not do that, or not adequately.
He is really arguing two things.
One, that in the typical installed environment, a less well-audited base system with AppArmor or similar will be less at risk than OpenBSD. This is because there may be more penetrations, but the damage from them will be very limited. No idea whether this is true or not in the light of real-world experience; it would be interesting to know. It’s a verifiable prediction.
Two, that OpenBSD would be dramatically more secure if it implemented some good access control system, such as AppArmor.
It was an interesting article, and you can derive testable predictions from it about what should be observable in the real world if he is right. Maybe the discussion will make more progress if it focuses on whether what the argument predicts should be happening, really is.
Written as a complete outsider.
Hello,
I am glad so many people have found this article interesting and worth discussing.
@Mark – Thank you for your comments. While I agree that an extended access control framework is not a substitute for quality code and auditing, I feel that it is far more reliable.
You won’t ever be able to audit out all the possible exploits in any system – of course you should always try – but you should also prepare for the case where a system is broken into. OpenBSD does not provide a way to prepare for this.
I am also not aware of any of these frameworks actually being bypassed – only the policies. Given the relatively small size of the framework code, it should be much easier to formally verify and audit, and to make sure they are free of vulnerabilities. Due to the way they are designed, breaking into one part of the kernel will not enable bypassing these frameworks.
@Oliver – Exactly what wild conclusions do you think I am jumping to? The software OpenBSD includes and has audited is limited, or out of date. If they provided audited versions of actual server software, then you might have an argument.
@alcibiades – There are many examples of extended access controls limiting the damage an exploit can do. I linked to one example on Dan Walsh’s blog, and there are many other examples on the net.
Actually, Sendmail, BIND, and Apache, among others, are “out of date” but heavily patched and audited. You should read some more before actually commenting on that.
The Apache/BIND/etc. versions that ship with OpenBSD are perfectly capable and are heavily audited.
By out of date, I mean that the versions OpenBSD chose to fork do not implement all of the functionality or features of later versions, as in the case of Apache 2.0 vs. the OpenBSD-audited Apache 1.3.
What? You truly are naive. Calling, say, the SELinux code “small” is ridiculous, and as you should have known before writing the article, there have been several exploits that have either used vulnerabilities in these frameworks themselves or disabled these frameworks via vulnerabilities elsewhere in the kernel.
I mean the size of a framework is small, relative to the size of an entire base system.
I also don’t think SELinux is a particularly good implementation and much prefer RSBAC or even GRSecurity. However, if you could provide evidence of a vulnerability in one of these frameworks, and not just an error in the policy, I would appreciate it.
Google Spengler.
Yes, that was an error in the SELinux policy.
Do you have an example of a real vulnerability in the framework?
How about allowing mmap at 0 from userland? This is (or was, I don’t know) a requirement imposed by SELinux itself, you know.
“mmap at 0” – does this mean being able to map physical address 0 (which would be bad), or simply “use mmap as memory allocation” (which I can’t see a reason to disallow)?
For compatibility with various other OS ABIs, Linux *can* allow userspace processes to mmap stuff at virtual address zero, which would otherwise usually be left unmapped, IIRC. There’s a knob in /proc/sys (vm.mmap_min_addr, i.e. /proc/sys/vm/mmap_min_addr) that lets you control whether this is allowed.
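To illustrate what “mmap at 0” means in practice, here’s a minimal userspace sketch; whether the call succeeds depends on that mmap_min_addr knob and, as discussed below, on how SELinux policy interacts with it:

#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    /* Ask for an anonymous mapping fixed at virtual address 0. */
    void *p = mmap((void *)0, 4096, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
    if (p == MAP_FAILED)
        perror("mmap at address 0 refused");
    else
        printf("page zero mapped at %p\n", p);
    return 0;
}

If this succeeds, a NULL function pointer dereferenced in the kernel suddenly points at attacker-controlled memory.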
But there was some peculiar interaction whereby this knob wasn’t used on RHEL systems where SELinux was enabled. RHEL’s SELinux itself *could* enforce the same constraint if configured in policy, but if it was missing from the policy then the constraint ended up not being enforced at all. Or something like that. This might be RHEL-specific.
Source (which has links to further details and details of the folks who worked on tracking down the security problems – Brad Spengler, Tavis Ormandy and Julien Tinnes):
http://lwn.net/Articles/349999/
Specific details on the RHEL problem:
http://kbase.redhat.com/faq/docs/DOC-18042
lwn.net had a proper article about this stuff as well but I can’t find it.
Disclaimer: This is probably a hopelessly mangled explanation since I only half remember it :-S
It was an error in policy; allowing it was a deliberate decision to let WINE function, among other things.
It should not be considered an actual vulnerability in the framework.
http://eparis.livejournal.com/606.html
OK, so “just” virtual#0… would’ve been pretty bad if usermode processes could get arbitrary physical memory locations mapped.
Allowing virtual#0 to be mapped is bad enough, though…
Nah, it was good enough, and the lwn.net article had the relevant links; with the cr0.org blog post and its links to the involved kernel functions’ source code (nice feature!), it was pretty easy to understand.
It all boils down to a bug in a C preprocessor macro causing the sockop.sendpage function pointer to be uninitialized, and net/socket.c#sock_sendpage() missing an “if (unlikely(!sock->ops->sendpage))” check before calling it.
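In outline, the bug class looks like this (a simplified sketch with made-up names, not the actual kernel source):

#include <stddef.h>
#include <sys/types.h>

/* Simplified illustration of the bug class; all names are made up. */
struct page;
struct sock_ops {
    ssize_t (*sendpage)(struct page *p, int off, size_t len);
    /* ...other operations... */
};

static ssize_t do_sendpage(struct sock_ops *ops, struct page *p,
                           int off, size_t len)
{
    /* BUG: no check that ops->sendpage was ever initialized. If it is
     * NULL, this call jumps to address 0 – which, thanks to the
     * mmap-at-zero issue above, an attacker may have mapped and filled
     * with their own code from userspace. */
    return ops->sendpage(p, off, len);
}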
Thanks for reminding me about this – I remember the exploit but didn’t check into the details; I only knew it was “local ring0 through a socket-code bug”.
@allthatiswrong
Hi there – first off, thank you for an interesting and well-written article. It made some really good points and was an enjoyable read. Thanks also for coming here to join in the discussion!
Agreed that the small size of a security framework should make verification easy – in fact, you could create a formal proof of the security framework’s properties (I expect someone has done this). With the security framework in the kernel you have a very powerful way of constraining what all processes, no matter how privileged, can accomplish if compromised.
The main thing I disagree with (it wasn’t explicitly mentioned in the article but I was trying to clarify it with my comment) is the idea that these mechanisms can protect against kernel-level code being exploited. They can protect against all sorts of things, including code running with root privileges, etc. This might well allow you to sandbox a compromised process such that it is less likely to be able to take advantage of kernel bugs in the first place, e.g. by limiting the devices and syscalls the process may access during normal operation.
But the issue I really wanted to draw some attention to is that if an attacker manages to get control over what code is executed in kernel mode – say by directly getting the kernel to execute some injected code by exploiting some bug – then it is literally impossible for an in-kernel security framework to prevent that attacker from completely controlling the machine. There’s unfortunately nothing to stop malicious code at kernel level from simply altering any memory content in the machine – this is true for the most trivial device driver to the most fundamental piece of infrastructure.
The way popular OSes are structured and the way popular hardware works gives all kernel-level code equal and complete trust – and therefore the complete freedom to bypass any security checks that it wishes. The only way you might limit this is by having a separate component (e.g. a hypervisor) that is protected from kernel-level code and can enforce some restrictions on what the kernel level code does.
So for the OSes people use nowadays your only option is really to audit the kernel-level code thoroughly (which unfortunately includes most or all device drivers), since without that being secure your whole security framework may be undermined. However, the security framework can be used to mitigate and contain anything that doesn’t run in the kernel, i.e. most code on the machine! With different OS architectures and / or new hardware support we might be able to do even better in future.
So I’m agreed that a modern OS should have a powerful policy framework for constraining user-level code and that this is arguably more important than simply auditing user-level code. The nice thing about a security framework as you describe is that it can potentially protect against bugs in applications that you, the OS developer, haven’t even heard of let alone audited. And as an administrator it can constrain applications that are provided by a third party or are even closed source, such that you know *exactly* what the application is potentially doing without having to read the source code.
Thanks again for the article and for participating in the discussion.
And even with a hypervisor, you might be vulnerable to DMA-based attacks… at least with the original implementations of x86 VMX.
True! I forgot about that. So if you were going to use a hypervisor to enforce this sort of thing you also need an IOMMU so it can protect itself from DMA. Modern x86 systems do have / are getting that hardware, though I’m not quite clear who has it now :-S
Good question – both Intel and AMD have IOMMU (at least on paper), but I’ve only read about (and worked with) the original Intel VMX.
It’s been some years since I’ve touched it, so I can’t remember if you’re inherently vulnerable to DMA attacks or if it requires a buggy hypervisor.
Hi Mark!
I really am glad you and others found the article interesting and a good read.
How could I not join in the discussion when good points are being made? The whole point of my article is to get people to discuss and think about the issues I raised. By discussing the issues, hopefully we can all learn along the way.
It is interesting that you bring up making a formal proof of frameworks, because SELinux and RSBAC have done exactly this. SELinux is an implementation of the FLASK architecture, and RSBAC of the GFAC architecture, both of which are formally verified.
The main point you make is interesting, and it is the only real argument I have seen thus far. You are saying that these frameworks will not help against any kernel-level bugs… but I am not sure that this is the case.
These frameworks have all been around for almost 10 years… at least since 2002 or so. In that time, I have set them up and seen them mitigate many vulnerabilities in practice, and at the same time I have never heard of an example of them being bypassed by a kernel-level vulnerability being exploited.
It is my understanding that while these frameworks are part of the kernel, they are distinct from the other parts of the kernel, so exploiting one bug in the kernel will not give you access to disable or override these frameworks.
I am not even aware of this as a theoretical exploit.
If you could expand and clarify on this, I would be most interested, and would update my article to address this.
Thanks!
Yep
We have some good discussions on OSAlert and I think this is one of them!
Ah, cool. I thought FLASK might have been formally verified but couldn’t remember specifically. RSBAC I am not familiar with (not that I’m that familiar with SELinux – I have it enabled on my Fedora systems because they make it fairly painless but I don’t actually tinker with policy!)
Well, the reason security frameworks are so powerful is that there’s a very strong and well-defined boundary between user-mode code and kernel code. The security framework lives in the kernel at the highest level of privilege, which malicious application code, even if running as root, can be prevented from accessing. Normally root can effectively access everything in the machine, including altering the kernel – but actually, since the application code is in userspace, that’s only possible if kernel code *allows* root to have these privileges. A security framework that lives in the kernel can restrict userspace code in any way it wants, so even root can be confined securely.
Basically the problem I’m describing is rooted in the fact that modern kernels are horribly lacking in isolation because they’re “monolithic”, in the sense that all code within them shares the same address space and privilege level. The kernel is basically just a big single program within which all code is mutually trusting and equally privileged. So any code that’s part of the core kernel or a loadable driver module actually ends up with privilege to read and alter any part of machine memory, access any device, etc. There’s usually some sort of supported ABI / API for driver modules that they *ought* to use but there’s not actually a solid protection boundary to stop them from accessing other things.
The in-kernel security frameworks in Linux exist as a Linux Security Module, which is an API boundary but it isn’t actually a protection boundary – there’s nothing to enforce the correct use of the API. There are well-defined places where other kernel code *should* call into the LSM layer and there are parts of the LSM that the rest of the kernel *shouldn’t* touch. But there isn’t actually anything in the Linux kernel to stop a malicious bit of in-kernel code from corrupting the LSM’s state to make it more permissive, or disabling the LSM hooks so that it no longer gets to make access decisions.
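To make that concrete, here’s a deliberately simplified sketch of what malicious code loaded into the kernel could do. The symbol names are illustrative rather than the exact ones from any particular kernel version; the point is only that module code runs at the same privilege as the framework it is attacking:

#include <linux/module.h>
#include <linux/init.h>

struct security_operations;                        /* opaque for this sketch */
extern struct security_operations *security_ops;   /* active LSM hook table (illustrative name) */
extern struct security_operations permissive_ops;  /* allow-everything stubs (illustrative name) */

/* Nothing architecturally stops in-kernel code from doing this: */
static int __init evil_init(void)
{
    security_ops = &permissive_ops;  /* every future hook call now permits everything */
    return 0;
}
module_init(evil_init);
MODULE_LICENSE("GPL");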
That is, unfortunately, a direct consequence of the fact that most popular OSes are structured monolithically – same problem will exist on Linux, BSD, Windows, etc. It really is possible for any kernel code to alter any other kernel code, even to alter the security framework. There are some ways in which you could, perhaps, make this harder but I don’t think any mainstream systems can actually prevent it.
So, all you need in order to circumvent the security framework is to get some malicious code into the kernel – and, assuming your security framework is solid, that requires a kernel bug of some sort. I don’t have an example of an actual in-the-wild bug here, but I’m pretty sure one could be found with a bit of digging. Instead, here’s an example of a bug I once saw that could occur…
Device drivers are a good source of kernel bugs – there are many of them and not everybody can test them, since some devices are rare. An easy mistake to make would be to have a function like:
void send_data_to_userspace(void *user_buffer)
{
        /* BUG: writes through a userspace-supplied pointer directly,
         * trusting it without any validation. */
        memcpy(user_buffer, my_private_kernel_buffer, BUFFER_SIZE);
}
That code copies data from an in-kernel buffer to a user-provided buffer. It’s incorrect, though – kernel code should not access user pointers directly, and doing so won’t even work on some CPUs. On x86 it will work perfectly, so testing on that platform will not show any odd behaviour. But this code is *trusting* a pointer from userspace, which could actually point anywhere, including into kernel data structures.
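The correct version goes through the kernel’s copy_to_user() helper, which validates the destination against the userspace address range and handles faults – a minimal sketch, with the buffer and size carried over from the example above:

#include <linux/uaccess.h>
#include <linux/errno.h>

extern char my_private_kernel_buffer[];   /* as in the buggy sketch above */
#define BUFFER_SIZE 64                    /* illustrative size */

/* Corrected: copy_to_user() refuses kernel-space destinations and
 * returns the number of bytes it could not copy. */
static int send_data_to_userspace(void __user *user_buffer)
{
        if (copy_to_user(user_buffer, my_private_kernel_buffer, BUFFER_SIZE))
                return -EFAULT;
        return 0;
}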
Now suppose your webcam driver contains this bug. You wouldn’t expect that allowing access to a webcam would defeat your security framework, and you’ve probably allowed *some* applications to access it. But if the application accessing it supplies a maliciously crafted pointer, it could potentially do anything up to and including disabling the entire framework.
If the bug is in a device driver then you can at least use the security framework to disable access to that device once you get the security advisory. If the bug is in the core kernel (in the past various fundamental system calls have had such exploits) then disabling access might not be an option.
Does that make some sense? I might have overcomplicated things by talking about a hypothetical example, but basically there are many classes of bugs that might allow kernel compromise, and on modern systems kernel compromise = everything compromised.
Anyhow, hope I’ve helped, thanks again.
Hi Mark,
Sorry for the delayed reply.
You do have a really good point, and I will update my article to address this.
This may also be of interest.
Basically, we can add separate protections into the kernel, and audit the kernel as much as possible to try and prevent things like this.
I am glad to say that there are no real examples I am aware of where someone has broken these systems through kernel vulnerabilities, and things like PaX really help here.
As you say though, the advantages MAC provides cannot be denied, and this weak point does nothing to diminish the technology.
Cheers
There isn’t really anything you can do to guard against malicious kernel-mode code. Sure, you could map the security subsystem (and related kernel structures) as read-only… the malware would just remap them as writable.
OK, so you install checks in the kernel APIs related to page-table manipulation (there are probably more of them than you expect) and disallow deprotecting. So the malware just accesses the page tables directly, which you can’t guard against.
You can add a lot of mitigations, including randomized addresses for kernel structures… you could even go microkernel with separate address spaces for each module… but as soon as a piece of malware gains ring 0, none of this is failsafe.
The only way to gain some safety against a ring 0 breach is through a hypervisor, but those are fairly complicated to design (i.e., there is the possibility of bugs that could let malware break out) – and simply wrapping an existing kernel in a hypervisor isn’t a silver bullet either… a well-designed hypervisor can insulate running OS instances from each other, but it still won’t automagically insulate a specific OS instance from ring 0 malware.
So ideally, you’d want an OS that’s highly hypervisor-aware and that requires the use of hypervisor functions to manipulate address mappings (and probably a whole bunch of other things) – only then can pieces of a kernel instance be properly insulated. And there are a lot of subtle pitfalls that might still ultimately let a piece of malware trick the legitimate pieces of kernel code into invoking the hypervisor on its behalf.
Depends on your application.
OpenBSD is by a good margin the simplest, most powerful, and most flexible firewall OS I’ve used (yes, pf has been ported to other BSDs, but you always get the latest and greatest from the source). The installer is spartan but functional, and gets a machine going in a refreshingly minimal amount of time.
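To give a flavour of why: a complete, working ruleset can be just a few lines of /etc/pf.conf. A minimal sketch (the interface name em0 and the services are placeholders):

# Minimal illustrative /etc/pf.conf – adjust interface and ports to taste
set skip on lo
block in all                                   # default deny inbound
pass out keep state                            # stateful outbound
pass in on em0 proto tcp to port { 22 80 } keep state  # SSH and HTTP in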
The common services are audited, and often patched or rewritten, as well – OpenNTPD, a patched, chrooted Apache, a patched Sendmail, a patched BIND… those ship in base. For running a simple network service, OpenBSD is tough to beat. I’ve run it Internet-facing with small-to-modest workloads for years with no problems.
If you want to run non-base software, well, you take your chances, as they warn you. In that case, a MAC framework may be beneficial, as would be the case when, say, running a shell server. Unfortunately, most of the MAC systems appear complicated (SELinux) or not in the mainstream software/kernel (GRSecurity, RSBAC?). I don’t know about TrustedBSD as I don’t run FreeBSD. Besides, who is to say the MAC framework is solid, or immune to kernel bugs? I remain unconvinced that MAC is the total solution.
I think OpenBSD might adopt a MAC system if there was one that was simple, clean, and well-designed. They liked systrace for those reasons, but unfortunately it ended up being hobbled by the underlying kernel infrastructure (on several OSes, not just OpenBSD).
The article was a long treatise, but the arguments to me mostly end up being assertions, with not much in the way of hard evidence. I do agree with some of the points, as I do agree that MAC may have its uses, but I also believe that OpenBSD is as secure as any other open-source OS (and some closed-source ones as well) for a good number of deployment scenarios.
Another nonsense article, albeit a well-worded one.
You mention GRSecurity as a framework OpenBSD is lacking, but later on you say that OpenBSD includes many of the features of GRSecurity. Uh, so… it has the features, but it’s kind of not enough because it’s not called a “framework”?
I should have stopped reading here. You know that both Sendmail and BIND are the de facto standards, right? And that the current versions do not have any of the problems that previous versions had, right?
Linux/FreeBSD/etc. are old and have had many security problems in the past, ergo no one should use them. SELinux has had security problems, so we should not use it either. How can you consider any of these solutions secure?
Uhm…so…the problem is that…they are rejecting insecure solutions? Seriously? This is bad how?
You sure put a lot of faith in extended access controls.
This “article” can basically be summarized as “OpenBSD does not work the way I, an anonymous nobody, think it should, therefore it is wrong”. This is a pretty good example of “allthatiswrong”.
I believe you may be confused. I do not criticise OpenBSD for not incorporating GRSecurity, but for not incorporating the TrustedBSD project, which is available and suited to the OpenBSD project.
You’re kidding, right? BIND 9 was a complete rewrite, yes, but it has still had some horrible security flaws. You can hardly say it was written with security in mind.
Comparing Linux/FreeBSD having security problems to BIND/Sendmail having security problems is a false analogy.
For BIND and Sendmail, secure alternatives exist without any loss of features or functionality. The same is not true of an entire OS.
The problem with the OpenBSD team rejecting Systrace is that they do not reject it because it uses an insecure method (I did not even see this brought up in the discussion), but rather because they believe any such technology provides no increase in meaningful security.
Do I put a lot of faith in extended access controls? Sure. But that faith is well placed, as there is example after example of exploits being rendered useless by the technology.
You keep saying that.
One of the many examples:
http://www.theregister.co.uk/2009/07/17/linux_kernel_exploit/
One of many examples?
That is the only example I have ever seen anyone provide, and it is not an error in the framework, but an error in the policy.
It is also specific to SELinux, and does nothing to merit your argument against the technology.
See here: http://eparis.livejournal.com/606.html
Generally, I believe that OpenBSD is still the #1 when it comes to security. (And no, I don’t use OpenBSD.) As a software developer, I find their philosophy (audit instead of work around) sound, but this is not the real reason for my belief.
When you look at the grand scheme of these “security technologies”, MAC is practically the only piece missing from OpenBSD. But everything else is there: ASLR and other randomization all over the place, PIEs, stack protections, privilege separation, and so forth. The OpenBSD guys were not the inventors of these technologies, but from my digging they are the ones that use them to their full extent. Often these solutions can also be found in Linux and Windows, but their use is typically limited. For instance, in the Linux world, sadly, it is nowadays hard to find a fully working “hardened distribution”. (I used to run hardened Gentoo, if that helps.) On this band-aid front, OpenBSD is currently, ironically, the leader.
Sadly, SELinux kind of “won” the Linux security battle. Today, Linux security research & development is pretty much SELinux and MAC kool-aid (with some weirdo LSM thrown in once in a while).
And MAC/SELinux kool-aid is clearly what the author drank too. But more importantly, it is sad that SELinux “won” because there are/were superior technologies like PaX and GRsecurity — both of which share many similarities with the OpenBSD approach.
I’m not sure why you think I drank some “SELinux kool-aid”, as I actually agree that SELinux is the weakest of these implementations.
I much prefer RSBAC/PaX.
Good for you.
But unfortunately where security really matters (which is not our computers), RSBAC/PaX is not there.
You have to look at what the big names such as Red Hat and Ubuntu do. Out-of-tree security frameworks don’t really count. Actually, I believe there are out-of-tree MAC frameworks for OpenBSD as well.
Now, enough feeding of a troll by me (a troll).
The article is based exclusively on a first-glance impression.
Security frameworks do not add very much security. The only goal of those frameworks is to organize security management in yet another, more complex way. While there are some cases where [SELinux, AppArmor or whatever] is good on its own, the best use of it is to move the security subsystem from the fundamentally flawed Linux security model (http://article.gmane.org/gmane.linux.kernel/706950) to some more-or-less agnostic-of-everything-else kernel subsystem. At the same time, tuning the security framework is a nightmare that is more of a security risk than a security backup for the system.
Even worse, any security framework is useless as long as the software is unaware of it, simply because the administrator has to either give a piece of software sufficient access, or reject that piece of software. It took more than a decade to make X11 run without root permissions!
On the OpenBSD side, the sources of vulnerabilities are poorly audited ports that run on top of the secured system layout and are therefore somewhat harmless to the system.
Anyway, it is generally accepted that Linux, even “enhanced” by security frameworks, is less secure than OpenBSD.
P.S.: There are two points that I don’t understand at all:
1. The author claims an initial security focus to be the only source of security. But I am absolutely sure that while an initial focus on security gives some extra security bonuses to a system, it is generally worthless until security becomes the key point of the system. And even then, the exact implementation of a security measure is much more important than the architecture and the overall approach to security.
2. The author believes the Unix access control concept to be fundamentally flawed. Does anybody know of any kind of proof of that?
First, could you please make an effort with your spelling? Annoying to read.
Second, while I disagree with many of the author’s points, he is right here: security is a process, so a ‘secure OS’ is an OS which allows administrators/distributions to implement (securely) their chosen security process.
Traditional Unix security doesn’t easily allow you to implement, for example:
-a sandboxed system where an application run by a user cannot have access to that user’s data.
-an NSA-style operation where you need those security frameworks mentioned in the article.
So it is flawed by not being flexible enough.
That said, I don’t know of any security model that is both flexible AND easy to use, so in this view all security models are flawed…
I still don’t see any data pointing to OpenBSD having a larger list of known security risks than any other OS – Unix or otherwise. Until then, the original article doesn’t even qualify as an abstract for a research article, but is merely an editorial or letter to the editor.
q.v. http://openbsd.org/security.html, http://www.sans.org/top-cyber-security-risks, http://www.commoncriteriaportal.org/products_OS.html#OS, and http://en.wikipedia.org/wiki/Abstract_(summary) vs. http://en.wikipedia.org/wiki/Letters_to_the_Editor#Misrepresentatio…
@Gary – My article was never designed to be anything more than an opinion piece. It certainly should not be taken as serious research!
If you think the list of vulnerabilities of OpenBSD is relevant to crediting or discrediting my argument, then you may have missed the point I was trying to make.
(Also, let’s try to keep the comments to here or my blog, whichever suits you.)
The OpenBSD approach is about ensuring that the existing security framework cannot be broken… While the Unix permissions system is not ideal, it is more than adequate for most uses. And the simplicity of the system makes it that much easier to verify that it functions as designed.
Contrast this with the approach most commercial vendors take (e.g. Microsoft with NT) – have a fancy, overcomplicated security model that looks really good on paper, but due to its inherent complexity it’s hard to be sure that it functions to spec, and there are many loopholes that allow you to bypass the security model entirely.
OpenBSD has the right idea: get the existing security model rock solid before you start extending it, and when you do extend it, try to keep it as simple as possible.
Overcomplicating things not only makes it easier for bugs to exist, and harder to find or fix them, but it also makes the system harder to manage, increasing the chances that someone will either make configuration errors or not want the overhead of maintaining the security system.
I prefer GRSecurity on Linux. The issue for me is that OpenBSD CANNOT audit everything. If I want to run an un-audited application, what is going to give me the most protection? OpenBSD doesn’t have MAC, and if the app is un-audited I’m not going to have much protection from flaws in that application. GRSecurity will let me build a policy for the application automatically. I think OpenBSD remains a niche because if you want to go outside the list of audited applications, there are better operating systems to use.