DragonFly 1.8.0 has been released. The biggest kernel change in this release is the addition of virtual kernel support and a virtual kernel build target. The biggest user-visible changes include updates to third party applications included in the base system, a major rewrite of NULLFS which removes all directory recursion restrictions from mount_null and removes nearly all the kernel resource overhead when using such mounts, and a multi-ip feature for jails.
I haven’t tried this release yet, but I think multi-ip for jails will be a great feature. Jails are immensely useful (unless you are using PostgreSQL, of course).
DragonFly BSD is building an amazing infrastructure to support SMP, clustering and kernel virtualization from scratch. This project is the most avant-garde BSD project currently in development, and in the future it will compete with very large and scalable operating systems (say Solaris, HP-UX or QNX).
Good work Matt Dillon and team!
Congratulations.
DragonFly BSD will only be in competition with Solaris, AIX or Tru64 once a company is behind it. At best one can say that if these modifications yield the sought-after results, then the system will have performance comparable to the major operating systems.
Well, I’d love to believe that Matt Dillon is a God, I’ve seen You, Me and Dupree, so I’ll just wait to see how well the system turns out after he’s gotten most of the itches scratched.
“I’ll just wait to see how well the system turns out after he’s gotten most of the itches scratched.”
You mean when the kernel transforms into a microkernel?
Matt is not trying to make it a microkernel, so don’t bother with that nonsense. Like most real computer scientists, he doesn’t believe in some mystical solution to all computing problems. The microkernel solves nothing; it’s just another kernel like any other. Modularity does not have to come through actual modules: one can compartmentalize the kernel itself without having dozens or even hundreds of small items entering kernel space at run time.
The hybridized kernel system he’s decided on should work well enough for his needs.
Just for the record: Matt Dillon the programmer is working on DragonFly. Matt Dillon the actor is not.
/* Well, I’d love to believe that Matt Dillon is a God, I’ve seen You, Me and Dupree */
From DragonFly BSD developer Matt Dillon’s website:
No, I am not the movie star.
Yes, I’m the guy that did all that Amiga stuff years past
Yes, I was associated with BEST.COM and am associated with the FreeBSD project
http://apollo.backplane.com/
DragonFly is really cool, but not as cool as you think it is. This project had to happen because the FreeBSD project totally blew it with their approach to multi-processing. They’ve made great steps in the right direction, but it’s a small project.
You might as well say that Plan9 is going to gain huge marketshare, because it’s just as likely.
To find a system that’s more scalable, clusterable, and virtualizable than DragonFly, you need look no further than Linux. Forget comparisons to big UNIX. And QNX? I think you’re a bit confused…
The FOSS BSDs have a great heritage and history, but it’s getting a little long in the tooth. It would be great if these very talented developers could learn to set aside their egos and work on a more viable project (i.e. Linux), but I understand that they have great pride in their work and great faith in their ideas. As long as some people find it useful, I support their efforts 100%. But they’re not taking commercial market share, and they probably never will.
“The FOSS BSDs have a great heritage and history, but it’s getting a little long in the tooth. It would be great if these very talented developers could learn to set aside their egos and work on a more viable project (i.e. Linux), but I understand that they have great pride in their work and great faith in their ideas. As long as some people find it useful, I support their efforts 100%. But they’re not taking commercial market share, and they probably never will.”
When people bring commercial market share up in a Linux thread, they are told that it doesn’t matter… Why does it matter in a BSD thread, then?
Personally, I don’t see Linux as a more viable project, only a more hyped one.
I didn’t mean to flamebait. If you want to talk about technical merit, I’m more than willing to do so. My point was that DF is no more scalable or sophisticated than Linux.
That’s not hype, that’s the truth. Linux has been proven to scale well to the biggest machines currently available (2048 CPUs I believe). I don’t think DF can make that claim.
Like I said (and you quoted), I support any free software project as long as anyone finds it useful. The guy I was replying to claimed DF could take on big UNIX, and that’s a ridiculous assertion.
“I didn’t mean to flamebait. If you want to talk about technical merit, I’m more than willing to do so. My point was that DF is no more scalable or sophisticated than Linux. ”
I didn’t see your post as flamebait. Nor, I hope, did I respond in a way that led you to believe I did.
“That’s not hype, that’s the truth. Linux has been proven to scale well to the biggest machines currently available (2048 CPUs I believe). I don’t think DF can make that claim. ”
What you and I think is irrelevant; what actually works out there is what matters. And I wasn’t talking about your claim about Linux scaling well when I was talking about hype, I was talking about Linux in general. You might just have found an area where it actually does what the fanboys and the like say it can.
“Like I said (and you quoted), I support any free software project as long as anyone finds it useful. The guy I was replying to claimed DF could take on big UNIX, and that’s a ridiculous assertion.”
That’s a harsh remark, unless you have seen some tests that can prove you right.
“That’s not hype, that’s the truth. Linux has been proven to scale well to the biggest machines currently available (2048 CPUs I believe). I don’t think DF can make that claim. ”
It is obvious from your comments that you are talking out of utter ignorance. DF, if successful, will be revolutionary. You say Linux scales up to 2048 CPUs, but on what kind of system? Is this the same kernel I get on RHEL? Can I use this same kernel on a 4-CPU system? What Linux version allows you to mix any number of computers with any number of CPUs and treats them all as one logical computer while being able to scale linearly? The answer is none.
DragonFly is the only serious OS doing this type of research. It will be able to accomplish this type of scalability because, rather than rely on locking, spinning, and threading processes to infinity, it will assign processes to CPUs and then allow the processes to communicate with each other through messages. The two models of scalability are sufficiently contradictory that it becomes practically impossible to implement both. I can assure you that if there is someone working on it, they stole the idea from DragonFly and will either a) be forced to fork the Linux kernel (hence it is no longer Linux) or b) spend so much time trying to synchronize their kernel with the production Linux kernel that they will never get anywhere.
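To make the ownership-plus-messages idea concrete, here is a toy userspace sketch. It is nothing like actual DragonFly kernel code, and every name in it (struct mailbox, worker_main, and so on) is made up for illustration: each worker thread owns its data outright, and other threads request changes by posting messages to that worker’s small mailbox, so the data structure itself is never locked.

/*
 * Toy illustration only (NOT DragonFly code): each worker owns its data,
 * and other threads ask for changes by posting messages to that worker's
 * mailbox instead of taking a lock on the data itself.
 */
#include <pthread.h>
#include <stdio.h>

#define NWORKERS 4
#define NMSGS    100000

struct mailbox {                      /* tiny per-worker message queue */
    pthread_mutex_t mtx;
    pthread_cond_t  cv;
    long pending;                     /* updates not yet applied */
    int  closed;
};

struct worker {
    struct mailbox mbox;
    long owned_counter;               /* only this worker ever touches it */
    pthread_t tid;
};

static struct worker workers[NWORKERS];

static void mailbox_post(struct mailbox *m, long n)
{
    pthread_mutex_lock(&m->mtx);      /* only the mailbox is locked, */
    m->pending += n;                  /* never the worker's own data  */
    pthread_cond_signal(&m->cv);
    pthread_mutex_unlock(&m->mtx);
}

static void *worker_main(void *arg)
{
    struct worker *w = arg;

    for (;;) {
        pthread_mutex_lock(&w->mbox.mtx);
        while (w->mbox.pending == 0 && !w->mbox.closed)
            pthread_cond_wait(&w->mbox.cv, &w->mbox.mtx);
        long batch  = w->mbox.pending;
        int  closed = w->mbox.closed;
        w->mbox.pending = 0;
        pthread_mutex_unlock(&w->mbox.mtx);

        w->owned_counter += batch;    /* unsynchronized: we are the owner */
        if (closed && batch == 0)
            return NULL;
    }
}

int main(void)
{
    for (int i = 0; i < NWORKERS; i++) {
        pthread_mutex_init(&workers[i].mbox.mtx, NULL);
        pthread_cond_init(&workers[i].mbox.cv, NULL);
        pthread_create(&workers[i].tid, NULL, worker_main, &workers[i]);
    }

    for (int n = 0; n < NMSGS; n++)   /* route each update to its owner */
        mailbox_post(&workers[n % NWORKERS].mbox, 1);

    for (int i = 0; i < NWORKERS; i++) {
        pthread_mutex_lock(&workers[i].mbox.mtx);
        workers[i].mbox.closed = 1;
        pthread_cond_signal(&workers[i].mbox.cv);
        pthread_mutex_unlock(&workers[i].mbox.mtx);
        pthread_join(workers[i].tid, NULL);
        printf("worker %d applied %ld updates\n", i, workers[i].owned_counter);
    }
    return 0;
}

The only lock in the sketch protects the tiny mailbox, never the data being updated; in a kernel the mailbox could just as well be a lock-free per-CPU queue, but the ownership idea is the same.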
It is obvious from your comments that you are talking out of utter ignorance.
I admit I’m not the most experienced kernel programmer in the world, but I have a few years of Linux and AIX kernel programming experience. Maybe you are more qualified, I don’t know.
You say Linux scales up to 2048 CPUs, but on what kind of system?
The top end of the SGI Altix line of Linux supercomputers runs 4096 CPUs, and IBM validated Linux on a 2048-CPU System P. Linux scales to 1024 CPUs without any serious lock contention. At 2048 it shows some contention for root and /usr inode locks, but no serious performance impact. Directory traversal will be the first to suffer as we move toward 4096 CPUs and higher, so that’s where the current work is focused.
Is this the same kernel I get on RHEL? Can I use this same kernel on a 4-CPU system? What Linux version allows you to mix any number of computers with any number of CPUs and treats them all as one logical computer while being able to scale linearly?
Choose the latest SMP kernel image from Red Hat. The feature that allows this massive scaling is called scheduler domains, introduced by Nick Piggin at version 2.6.7 (in 2004). There is no special kernel config flag or recompilation required to activate this feature, but there are some tunables you need to set (via a userspace interface) to reflect the topology of your supercomputer (i.e. grouping CPUs in a tree of domains).
Usually massive supercomputers are installed, configured, and tuned by the vendor. They’d probably compile a custom kernel instead of using the default RHEL image. But it could work out of the box if you really wanted it to.
…rather than rely on locking, spinning, and threading processes to infinity, it will assign processes to CPUs and then allow the processes to communicate with each other through messages.
That’s fine. It’s just that nobody has proven that message passing is more efficient than fine-grained locking. It’s my understanding (correct me if I’m wrong) that DF requires that, in order to modify the hardware page table, a process must send a message to all other CPUs and block waiting for responses from all of them. In addition, an interrupted process is guaranteed to resume on the same processor after return from interrupt, even if the interrupt modified the local runqueue.
The result is that minor page faults (page is resident in memory but not in the hardware page table) become blocking operations. Plus, you have interrupts returning to threads that have become blocked by the interrupt (and must immediately yield), and the latency for waking up the highest priority thread on a CPU can be as high as one whole timeslice.
DF has serialization resources, but they are called tokens instead of locks. I’m not quite sure what the difference is. There also seems to be a highly-touted locking system that allows multiple writers to write to different parts of a file, which is interesting because Linux, FreeBSD, and even SVR4 have extent-based filocks that do the same thing. What’s different about this method?
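For what it’s worth, here is a minimal sketch of what I take “extent-based” file locks to mean, using nothing but POSIX fcntl() byte-range locks; the file name and offsets are arbitrary and this is only an illustration, not anyone’s actual implementation. Two processes hold write locks on different regions of the same file at the same time, and neither blocks the other.

/* Byte-range (extent) locking with plain POSIX fcntl(): the locks below do
 * not overlap, so parent and child both hold a write lock concurrently. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

static void lock_range(int fd, off_t start, off_t len)
{
    struct flock fl = {
        .l_type   = F_WRLCK,          /* exclusive write lock */
        .l_whence = SEEK_SET,
        .l_start  = start,
        .l_len    = len,
    };
    fcntl(fd, F_SETLKW, &fl);         /* blocks only on overlapping ranges */
}

int main(void)
{
    int fd = open("/tmp/extent-lock-demo", O_RDWR | O_CREAT, 0644);
    if (fd < 0) { perror("open"); return 1; }

    if (fork() == 0) {                /* child: lock bytes 0..4095 */
        lock_range(fd, 0, 4096);
        printf("child holds  [0, 4096)\n");
        sleep(1);
        _exit(0);
    }

    lock_range(fd, 4096, 4096);       /* parent: lock bytes 4096..8191 */
    printf("parent holds [4096, 8192)\n");
    wait(NULL);
    return 0;
}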
I hope I’ve addressed your questions adequately. Locks are evil, I know, but they seem to be doing quite well at the moment. Maybe by the time DF is ready for production use there will be machines that push other UNIX implementations beyond their capabilities. But for now, Linux is a free kernel for over a dozen architectures that scales better than some proprietary UNIX kernels do on their target architecture. That says a lot about the success of its design.
But for now, Linux is a free kernel for over a dozen architectures that scales better than some proprietary UNIX kernels do on their target architecture.
Which UNIX kernels? That includes the AIX kernel, I suppose?
If not, an IBM employee is telling us that Linux and AIX rule them all.
Funny, it doesn’t feel like marketing at all…
It would be great if these very talented developers could learn to set aside their egos and work on a more viable project (i.e. Linux)
What? The Linux community is the worst when it comes to egos.
I like FreeBSD a lot. If DragonFlyBSD can improve upon FreeBSD, that’s really amazing.
However, I’m not too keen on building stuff from source. I’m curious to know if there’s a frontend for installing and upgrading applications on DragonFlyBSD from a binary package repository.
If that’s possible, that would be very cool. If there’s such a package management frontend for binary packages, I’m going to give DragonFlyBSD a try.
BTW, what’s the supposed difference between FreeBSD and DragonFlyBSD?
As far as I know, pkgsrc allows installing software packages from source (using its package tree) or through binary packages using pkg_add.
BTW, what’s the supposed difference between FreeBSD and DragonFlyBSD?
Mainly the kernel.
ftp://packages.stura.uni-rostock.de/pkgsrc-current/DragonFly/RELEA…
# I wouldn’t recommend the next one; omit it if possible.
ftp://packages.stura.uni-rostock.de/pkgsrc-current/DragonFly/RELEA…
There you can find Joerg Sonnenberger’s binary packages built from pkgsrc.
Just add it to your ‘PKG_PATH’ as described here: http://www.netbsd.org/Documentation/pkgsrc/using.html#using-pkg
…and you’re good to go.
I could be wrong, but I think the binary packages found at those links are for a previous DF release, like 1.6.x.
Hmm… The “wip” section of pkgsrc (which seems to be some kind of “experimental” section) apparently has something called “pkgmanager” that might just be what I’m looking for.
http://pkgsrc.se/wip/pkgmanager
Has anyone tried this pkgmanager? Are all those pkgsrc applications available also for DragonFlyBSD? (They are available for NetBSD, I know that much.)
I wonder how DragonFly 1.8 compares to FreeBSD current. For a while, the FreeBSD project seemed a bit lost, but now it seems to have things together. It seems they are now dumping their overengineered 5.x and are going back to simplicity and stability of design, which have always been BSD’s advantage since the early ’80s. That said, DragonFly seems to have had the most revolutionary idea while being relatively simple to implement. From my understanding, it also seems to be the only OS really able to take advantage of modern computing’s emphasis on multiple cores by treating them as independent CPUs rather than as one large logical CPU with needless locking and spinning.
Anyway, I would really love to see some performance benchmarks versus some of the other operating systems. DragonFly has been developed for a while now, so that kind of information would prove to be very helpful.
/* It seems they are now dumping their overengineered 5.x and are going back to simplicity and stability of design, which have always been BSD’s advantage since the early ’80s. */
So Matt was right all along about FreeBSD 5.x. I like DF because it’s based on FreeBSD 4.8. I have been a huge fan of FreeBSD 4.x, but now that FreeBSD 4.11 is about to be retired, I might switch over to DF 1.8.x and add DF alongside the other OSes I run.
“So Matt was right all along about FreeBSD 5.x. I like DF because it’s based on FreeBSD 4.8. I have been a huge fan of FreeBSD 4.x, but now that FreeBSD 4.11 is about to be retired, I might switch over to DF 1.8.x and add DF alongside the other OSes I run.”
From my understanding of the situation, the answer to whether Matt was right or wrong is yes and no and quite a bit more complex than that.
Theoretically speaking, the FreeBSD project chose the correct solution but ran into several major problems. 1) It is extremely difficult to implement, the code becomes extremely bloated, and there just aren’t enough resources to maintain it. 2) There are few to no applications that could take advantage of the advanced threading model. From what I understood of what I read, FreeBSD was basically implementing a very complex M:N scheduler where all the applications were treating things as M=N=1. So in almost all cases, a native 1:1 threading model scaled much better because it did not have any overhead. 3) In the spirit of biting off more than you can chew, the FreeBSD team decided to also implement some kind of fair queuing scheme for processes. However, the fair queuing code was a placeholder waiting for someone to properly implement it. This placeholder code had the nice benefit of slowing things to a crawl while causing all sorts of instabilities. A lot of these problems have been fixed for FreeBSD 7.
In addition, it is very hard to fault the FreeBSD team for the decision they made about which way to go. What Matt wanted to do is rather revolutionary and is rather risky for a product lots of companies use for production purposes. While there is the potential of a huge payoff, there is a certain amount of risk involved with it. The FreeBSD project would have been rather self-centered and careless to try such a revolutionary path without some alternate “safe” path to fall back to if it fails. In a way, the way Solaris, HP-UX, AIX, and Linux chose to scale was the only safe and reasonable choice for such a large project.
What happened was probably the best thing for both projects and for BSD in general. Now, the FreeBSD project is free to provide a safe, proven path for scalability, performance, and stability for companies dependent on it while Matt is free to explore an alternative path that might prove to be superior.
So did FreeBSD abandon the M:N threading model? That would be a shame after all the work that went into it. Then again: if you must, then you must. Where can I find more information about this? Wikipedia doesn’t seem to mention it, nor does the KSE homepage. Did I misread your comment?
It was mentioned in a FreeBSD quarterly report a while back, with regard to KSE and slow performance on non-x86 architectures.
One has to ask how much of that is due to KSE tuning being bent towards a particular architecture, or whether the architectures where there is bad performance are just plain crap architectures.
So did FreeBSD abandon the M:N threading model? That would be a shame after all the work that went into it.
Everybody is moving away from M:N. It’s a bad idea that looks good on paper. Solaris and AIX both support M:N, and Linux can use it with NGPT (not the same as NPTL), but it is not the default on any of these systems.
Basically, threading only works efficiently if the kernel “knows” about all of the user threads. For example, in M:N (and M:1), it is possible for one blocking thread to block one or more of its peer threads, even if they are runnable. With increases in physical memory and addressable memory (with 64-bit systems), there is no excuse for the kernel to delegate thread management to userspace.
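As a trivial illustration of that 1:1 point (assuming an NPTL-style 1:1 implementation; the thread bodies below are made up for the example), every user thread is its own kernel-scheduled entity, so the thread blocked in sleep() never prevents its siblings from running:

/* Under a 1:1 threading model, the kernel schedules each thread separately,
 * so blocking in one thread does not stall the others. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static void *blocker(void *arg)
{
    (void)arg;
    sleep(2);                          /* blocks in the kernel */
    puts("blocker woke up");
    return NULL;
}

static void *spinner(void *arg)
{
    long id = (long)arg;
    for (int i = 0; i < 5; i++) {      /* keeps running while blocker sleeps */
        for (volatile long j = 0; j < 50000000; j++)
            ;                          /* burn some CPU */
        printf("spinner %ld still making progress\n", id);
    }
    return NULL;
}

int main(void)
{
    pthread_t t[3];
    pthread_create(&t[0], NULL, blocker, NULL);
    pthread_create(&t[1], NULL, spinner, (void *)1L);
    pthread_create(&t[2], NULL, spinner, (void *)2L);
    for (int i = 0; i < 3; i++)
        pthread_join(t[i], NULL);
    return 0;
}

Under a pure userspace M:1 library, that same sleep() (or any blocking syscall the library doesn’t wrap) would park the whole process, which is exactly the problem described above.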
In addition, it is very hard to fault the FreeBSD team for the decision they made about which way to go.
Back when FreeBSD decided to go with an M:N implementation, Solaris had already switched from M:N to 1:1 and put out a white paper (http://www.sun.com/software/whitepapers/solaris9/multithread.pdf) explaining why, and Linux had decided to go with NPTL instead of NGPT, IBM’s M:N implementation for Linux. That was a very uncontroversial decision in the Linux camp (http://thread.gmane.org/gmane.linux.kernel/54620), because _they actually understood the ramifications_ of the different implementations.
That’s the fun thing about “theoretically better” solutions. In practice, you’re so often limited by developer resources that a simple but well-implemented and highly tuned solution will end up being better than a more sophisticated design that hasn’t had the same amount of tuning. Take, for example, GCC’s register allocator. It’s a theoretically unsophisticated design with an enormously complex (and convoluted) implementation. We’re talking about something that wouldn’t have been state of the art even 20 years ago. Yet it creates surprisingly good allocations, because it’s had so much tuning. Or take the x86 code generator in OCaml. It doesn’t do any fancy SSA-based optimizations, but it generates very good code. The generated code is very straightforward and small, and it turns out that modern x86s care very much about code size and very little about fancy scheduling.
Now, of course, implementation tuning being equal, a fancy design is almost always going to be better. However, developers rarely have the luxury of unbounded time, and given that constraint, a more sophisticated design often implies a less well-tuned implementation. That’s why there is no alternative to experimentation when writing performance-sensitive code. You’ll be surprised just how often theoretical arguments fail to fully model all the relevant details.
In addition, it is very hard to fault the FreeBSD team for the decision they made about which way to go. What Matt wanted to do is rather revolutionary and is rather risky for a product lots of companies use for production purposes. While there is the potential of a huge payoff, there is a certain amount of risk involved with it. The FreeBSD project would have been rather self-centered and careless to try such a revolutionary path without some alternate “safe” path to fall back to if it fails. In a way, the way Solaris, HP-UX, AIX, and Linux chose to scale was the only safe and reasonable choice for such a large project.
FreeBSD took pretty much the same approach as Linux to address SMP, at about the same time. They both introduced big kernel locks and planned on phasing in finer-grained locking in the future. Linux also experimented with M:N threading at about the same time as FreeBSD implemented it. The difference is that FreeBSD just doesn’t have the same resources. Linux was able to implement fine-grained locking, per-cpu runqueues, and a vast array of threading and process scheduling experiments while FreeBSD seemed stuck in the mud.
Everybody and their dog is prototyping some new theory on Linux, and often it pays off. The kernel maintainers honestly thought IBM’s NGPT model would be the next-gen thread model for 2.6, but Ingo Molnar did some quick hacks. Whoa! That simple, unsophisticated 1:1 model rocks IBM’s fancy-shmancy implementation. Nicksched or staircase, anticipatory or CFQ… Linux doesn’t need to pick a path to pursue; it just picks whatever proves most successful.
Why I just recently chose DragonFly: (anecdotal!)
I was testing different choices for a web server platform. I’m happy with Linux (specifically Arch Linux) for desktop, but I thought it does a lot of superfluous things I don’t need for a server.
I tried FreeBSD-6.2. Turned out that for some weird reason (I suppose a kernel bug), it couldn’t use my other hard drive. Got lots of DMA errors.
I tried DragonFlyBSD-1.8. No hardware problems.
People who are free to choose, choose their OSs for simple reasons like this. We take the first thing that works.
/* What Matt wanted to do is rather revolutionary and is rather risky for a product*/
I thought what Matt wanted to do was being tried in AmigaOS a while back.
That’d hardly be surprising. Matt Dillon has his roots in Amigaland, you know.
Similar but very different: the original AmigaOS didn’t have memory protection.
This helps quite a bit for inter-process/system call communication!
I think the interesting thing will occur if more people get involved; assuming that the ‘on paper’ vision can be translated into a real world product, we might actually get to see an operating system that can deliver some real changes in how operating systems run on desktops.
I like the way they’ve moved away from the conventional way of providing scalability. If it all works out, maybe we’ll start to see it move back into larger projects.
Looks pretty cool: http://www.dragonflybsd.org/
I don’t know if it’s for promotion though.