A minor follow-up release of MINIX 3.1.3, called 3.1.3a, was made to correct some flaws in 3.1.3. “Found after release time and fixed in 3.1.3a: mkfs does not check for overflow of firstdatazone field in superblock; a missing check in rename caused moving directories to hang sometimes; autopart truncates new partitions at 4GB. If you want to create a larger partition, use part (expert mode) to do that first; and more.”
“Core dumps are broken.” — that’s the most notable thing I saw in the release notes. I used to play around with 3.1.2, and I’m trying to imagine what could have broken it…
“I’m trying to imagine what could have broken it…”
The quality of the source isn’t very good, unfortunately. The mantra that uKernels are more stable is becoming more and more laughable.
Nice on paper, but close to impossible to debug and get decent performance out of (yes, I’ve actually tried heavy memory and process loads on Minix 3.1.3, and it just breaks down). It’s also very buggy.
Seems to be written mostly by students without a clue.
Seems to be written mostly by students without a clue.
You do know who Andy Tanenbaum is, right? If you make wild claims like this, you better show some proof.
Just because someone is a recognized computer scientist/professor whose name is splattered all over computer science textbooks, often ones specializing in the gory details of how OSes are implemented, doesn’t say anything about the quality of the products (outside the textbooks) associated with him. In other words, he may be mostly overseeing a bunch of students working on this, or perhaps just a bunch of monkeys trained well enough to get C code to compile: the short bug list in the header of this thread alone suggests that surprisingly little care was put into this release of MINIX, because the errors mentioned are those made by beginners and/or those who didn’t bother to properly test things. Let me explain:
“autopart truncates new partitions at 4GB” sounds suspiciously like someone using a 32-bit unsigned int where a larger type should have been used. Creating test cases for the boundaries of number representation is what someone concerned with the quality of their product would have done, and such 32-bit boundaries are obvious, especially to anyone who has tried to find the amount of free space left on a disk after FAT32 came out, using the old Windows API that didn’t take such sizes into account.
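To make the suspicion concrete, here’s a minimal C sketch of that class of bug; the numbers are made up, and it’s not autopart’s actual source:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* ~4.77 GB expressed as 512-byte sectors */
        uint64_t sectors = 10000000;

        /* BUG: a 32-bit byte count wraps at 2^32 (4 GB) */
        uint32_t bytes32 = (uint32_t)(sectors * 512);

        /* Fix: keep sizes in a 64-bit type (or count sectors, not bytes) */
        uint64_t bytes64 = sectors * 512;

        printf("32-bit byte count: %u\n", (unsigned)bytes32);  /* wrapped */
        printf("64-bit byte count: %llu\n", (unsigned long long)bytes64);
        return 0;
    }

A boundary test at exactly 4GB would catch this immediately.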
“a missing check in rename caused moving directories to hang sometimes;”: frankly, there aren’t that many cases to test for when moving directories in a filesystem. This is an amazingly stupid error to make and not have tested for before shipping.
“mkfs does not check for overflow of firstdatazone field in superblock;” is, yet again, another stupid error, made most commonly by beginners.
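For illustration, a hedged sketch of the missing check, assuming the on-disk field is 16 bits wide as in the classic MINIX superblock layouts (checked_u16 is a made-up helper):

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Refuse to write a value that would silently wrap in a
       narrower on-disk field (assumed 16-bit here). */
    static uint16_t checked_u16(unsigned long v, const char *name)
    {
        if (v > UINT16_MAX) {
            fprintf(stderr, "mkfs: %s (%lu) overflows a 16-bit field\n",
                    name, v);
            exit(EXIT_FAILURE);
        }
        return (uint16_t)v;
    }

    /* e.g.: sb.s_firstdatazone = checked_u16(firstdatazone,
                                              "firstdatazone"); */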
So, I’m guessing Andy Tanenbaum has more of his name than his effort involved in this release: I’d be ashamed to have released something with these noob bugs in it. I also suspect very little personal review time was put into it, and at least in the case of the superblock overflow, that might readily have been caught by running lint. This release of MINIX has “student hands” written all over it, judging by this thread’s posting text alone.
“””
This release of MINIX has “student hands” written all over it by this thread’s posting text alone.
“””
Agreed. But you left out the obvious gaffe that today, in the year 2007, 20 years after its initial release and over 1.5 years after the launch of the Minix 3 branch, Minix still only runs on 32-bit x86. That’s it.
Unportability was one of Tanenbaum’s early criticisms regarding Linux.
For a project that is intended to prove AST right after all these years, it seems to me that Minix 3 is doing a better job of demonstrating how wrong he has been about what actually works in the real world.
If portability were a goal of MINIX3, maybe you’d have a point.
“””
If portability were a goal of MINIX3, maybe you’d have a point.
“””
Why isn’t portability a goal, then?
Prominently displayed on the MINIX 3 home page is this statement:
“MINIX 3 adds the new goal of being usable as a serious system on resource-limited and embedded computers and for applications requiring high reliability”
If “being usable as a serious system” on “embedded computers” does not imply the need for a degree of portability, I’m not sure what would.
Why isn’t portability a goal, then?
It’s an academic project. Portability isn’t particularly interesting. It’s much more interesting to figure out how to make a high-performance VFS within a client/server architecture, or how to create a system of fault-tolerant restartable services.
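To give a flavor of the restartable-services idea, here is a minimal user-space C sketch of a supervisor that respawns a crashed service. It only gestures at what MINIX 3’s reincarnation server does for drivers; the service path is made up:

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        for (;;) {
            pid_t pid = fork();
            if (pid < 0) { perror("fork"); sleep(1); continue; }
            if (pid == 0) {
                /* "/sbin/netdriver" is a hypothetical service binary */
                execl("/sbin/netdriver", "netdriver", (char *)NULL);
                _exit(127);               /* exec failed */
            }
            int status;
            waitpid(pid, &status, 0);     /* block until the service dies */
            fprintf(stderr, "service exited (status %d), restarting\n",
                    status);
            sleep(1);                     /* crude backoff before respawn */
        }
    }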
If “being usable as a serious system” on “embedded computers” does not imply the need for a degree of portability, I’m not sure what would.
Perhaps, but their work on resource utilization and fault-tolerance is probably more relevant in this regard. Portability is easy to do once the hard stuff is achieved.
“””
Portability isn’t particularly interesting. It’s much more interesting to figure out how to make a high-performance VFS within a client/server architecture, or how to create a system of fault-tolerant restartable services.
…
Portability is easy to do once the hard stuff is achieved.
“””
Really? On my planet the academics are more interested in portability than their corporate counterparts are.
BTW, how is MINIX’s high-performance VFS coming along?
And can you point me to some examples of its fault-tolerant services demonstrating real advantages over what is offered by existing systems which don’t subscribe to the microkernel philosophy?
Microkernels and message passing were supposed to be the best way forward for these things way back in the 1980’s. How come they are considered to be “the hard stuff” in 2007?
Is MINIX3 not running on other architectures because it CAN’T be ported, or because no one has spent the time to do it?
Looking at the Minix website (www.minix3.org), the goals are:
* Applications where very high reliability is required
* Single-chip, small-RAM, low-power, $100 laptops for Third-World children
* Embedded systems (e.g., cameras, DVD recorders, cell phones)
* Applications where the GPL is too restrictive (MINIX 3 uses a BSD-type license)
* Education (e.g., operating systems courses at universities)
Reading a little further, I see:
Ports to the Xscale and PowerPC are underway.
Ok, maybe Minix hasn’t drawn the following of developers that other OS’s have drawn. That doesn’t mean it can’t be ported.
“””
Ok, maybe Minix hasn’t drawn the following of developers that other OS’s have drawn. That doesn’t mean it can’t be ported.
“””
The point is that this is yet another area in which MINIX 3 has utterly failed to support AST’s claims about the advantages of the microkernel approach.
At some point, one might expect to see actual results, and not just excuses about how MINIX is only intended for educational purposes, doesn’t have portability as a goal, just needs a few more developers, etc.
> Ok, maybe Minix hasn’t drawn the following of
> developers that other OS’s have drawn. That doesn’t
> mean it can’t be ported.
Yes, it does mean exactly that. Just because the number of interested developers is a non-technical hurdle doesn’t mean it’s unimportant. Experts in technical areas, like AST certainly is, tend to forget that. If his goals are to create a production-quality OS, and not a teaching OS as he first intended, this is one major flaw that he’d better fix as soon as possible. Alternatively, he could concentrate on writing a teaching OS and sacrifice portability and features for the sake of clean, simple, and small code.
Oh, and goals mean nothing if they aren’t reached.
Minix 3 was a complete rewrite. 1.5 years is a really short period of time for the creation of an OS.
It seems much more important to focus on making the OS usable than on porting it.
I believe earlier versions of Minix DID work on computer architectures other than x86-32… so not only has it been done, it took nowhere near 20 years.
Now that Minix has (apparently) been revived, we’ll see what we can see. Back when Linux started apparently Dr. Tanenbaum refused to extend Minix too much, but now this one seems to have more expansive goals.
FAKE EDIT: See for yourself here:
http://www.minix3.org/previous-versions/index.html
Minix2 for Amiga, Atari, Macintosh & SPARC. I grant you that the Amiga, Atari and Macintosh ports were probably all MC68x00 ports, but the rest of their architectures aren’t too similar.
“autopart truncates new partitions at 4GB” sounds suspiciously like someone using a 32-bit unsigned int where a larger type should have been used
Nope, there was explicit code that capped partition sizes at 4GB, which was removed. It probably dates from when the FS had a max partition size limit.
“a missing check in rename caused moving directories to hang sometimes;”
Rename is one of the most complicated of the UNIX system calls to implement, because it must delete one directory entry and create the other in a single atomic operation. You have to maintain this atomicity property while handling all the possible failure modes (out of disk space, failed path lookups, a process currently cd’ed into the old directory entry, etc.).
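As a user-space illustration of the atomicity contract callers rely on (plain POSIX rename(2), not MINIX’s internal implementation):

    #include <stdio.h>

    int main(void)
    {
        /* Write the new contents under a temporary name first. */
        FILE *f = fopen("config.tmp", "w");
        if (!f) { perror("fopen"); return 1; }
        fputs("new settings\n", f);
        fclose(f);

        /* rename() must replace "config" atomically: any observer sees
           either the old file or the new one, never neither, and the FS
           has to uphold that even amid full disks and failed lookups. */
        if (rename("config.tmp", "config") != 0) {
            perror("rename");
            return 1;
        }
        return 0;
    }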
How would you get an out-of-disk-space error when moving a directory? I thought it was basically replacing the name of an inode (or just directory entry) with the new one.
(I’ve only taken one course on UNIX, and it was mostly ‘learn to program in C’)
If you were moving it to a different partition or a network mount point, and that location was full, you would get an out-of-disk-space error.
I meant rename, not move.
Obviously, you’re right.
How would you get an out-of-disk-space error when moving a directory? I thought it was basically replacing the name of an inode (or just directory entry) with the new one.
With s/move/rename/ you are mostly correct. Except that if the target name already exists, you’ll want to replace that link, and if that was the file’s last link, remove the file. This should be carried out in an atomic fashion.
Quote:
How would you get an out-of-disk-space error when moving a directory? I thought it was basically replacing the name of an inode (or just directory entry) with the new one.
end quote
Never heard of moving between (mounted) partitions?
Never heard of moving between partitions?
The original post was about renaming of directories.
Most kernels don’t handle ‘moving’ stuff between partitions / mounts / etc. The atomicity for renaming inside a mount, however, is required to prevent running out of disk space and other errors between the first step (unlinking the original node) and the second step (recreating the node at the new location, possibly replacing an already existing one). (I might have the order wrong, but that doesn’t change much.)
Moves between different mount-points happen in user-space and are performed by the application (usually: first copy, then delete). Atomicity isn’t required here.
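Concretely, the kernel refuses the cross-mount case: rename(2) fails with EXDEV, and a tool like mv then falls back to copy-then-delete in user space. A hedged C sketch (move_path is a made-up helper; the copy step is omitted):

    #include <errno.h>
    #include <stdio.h>

    /* Try an atomic in-kernel rename; on EXDEV (source and destination
       on different mounts) an application must copy the tree and then
       delete the source, which is exactly what mv does. */
    int move_path(const char *src, const char *dst)
    {
        if (rename(src, dst) == 0)
            return 0;                   /* same filesystem: atomic */
        if (errno != EXDEV)
            return -1;                  /* a genuine error */
        fprintf(stderr, "%s -> %s: cross-device, copy+delete needed\n",
                src, dst);
        return -1;                      /* copy+delete not shown here */
    }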
“””
You do know who Andy Tanenbaum is, right? If you make wild claims like this, you better show some proof.
“””
Andy Tanenbaum is a long-time professor of the ivory tower variety who has written some good books on OS theory and who is increasingly known for positions he has taken that have turned out to be wrong.
He is also not above presenting misleading data to support his positions, an example of which, regarding the performance penalty of message passing, I link here.
http://lwn.net/Articles/220255/
“The quality of the source isn’t very good, unfortunately.”
This is an argument that calls into question MINIX’s suitability for education. Well-formatted code, wisely chosen identifiers, and good documentation are what make such material usable from a didactic point of view.
Just have a look at the documentation quality that the BSDs offer – and they do not even claim to be systems for education. The source is well documented, and every kernel and system call, library function, and OS component (command or file) has its own manual entry.
Reaching good source code quality will make Minix appealing to developers, too, because they usually need to know more than Joe Q. Sixpack or his son in high school.
Well well, you can always offer your coding skills to the project, if you think you can do better than those lame clueless students.
If you’d ever read one of AST’s books, you would never make such claims.
“””
If you’d ever read one of AST’s books, you would never make such claims.
“””
I don’t think that anyone questions AST’s grasp of operating system theory.
It’s the “real world” bit that generates the skepticism.
It’s pretty shallow to dismiss microkernels as less capable than the alternatives by pointing to one incomplete implementation. The overall model may be more complex than other models, but uKs have the potential to be far more secure and flexible than monolithic kernels, as theorised by academics. Work needs to be invested to realise this. Minix 3 currently doesn’t have many of the memory management techniques found in current operating systems. The official Minix 3 doesn’t have dynamic linking support yet. There is only so much work a person with a life outside of software development can do; give these people some time.
I also contend against your assertion that they are “close to impossible to debug and get decent performance out of”. I point to the canonical example of QNX, and I’ll also point to BeOS and Darwin. Minix is no more difficult to debug than any normal software. You’re probably thinking of the GNU Hurd. I remember reading somewhere (I don’t remember the source now) that developers of the Hurd platform say it is more difficult to debug a Hurd server than a Minix server, because the Hurd servers are aggressively multithreaded.
“I point to the canonical example of QNX”
That canonical example shows that uKernels cannot compete performance-wise. Yes, QNX guarantees timely delivery of services (it’s a real-time OS). Somehow, people confuse that with being fast. In reality, it means that the scheduler (CPU and I/O) can be as unfair as it wants, as long as timely service delivery is guaranteed.
The truth is, Qnx is very slow and not very scalable, but its goal is *real time*, not speed, nor scalability, so that’s fair.
So don’t use that as a proof of a fast uKernel. It just isn’t. It’s an urban legend.
The truth is, Qnx is very slow and not very scalable, but its goal is *real time*, not speed, nor scalability, so that’s fair.
Not scalable? You mean how QNX is used in everything from VCRs, medical equipment, cars, and the Space Shuttle’s arm?
Not scalable? You mean how QNX is used in everything from VCRs, medical equipment, cars, and the Space Shuttle’s arm?
Well, yes, it’s used in VCRs, cars, and the Space Shuttle, but its multi-processor and clustering support kind of suck.
Nobody is saying that QNX is bad, but real-time kernels do have some disadvantages (just as they have some advantages) compared to other designs. Being “very fast” is not necessarily one of these advantages, although often being more responsive is (see the case of BeOS).
I think you would have to use a loose definition of the term microkernel to be able to say that BeOS is one. It may have been in the very early years. I don’t know what to say about Darwin. I don’t think it deserves to be mentioned alongside QNX/Neutrino, L4 or GNU/Hurd.
AFAIK, BeOS’s GUI responsiveness is primarily due to its aggressive use of threads and its prioritizing of the GUI ones. QNX feels quite unresponsive in comparison.
Having written/maintained neither a microkernel-based nor a monolithic-kernel-based system, I can’t say which is best when all aspects are considered, but I believe that with emerging hardware and languages, purely monolithic kernels seem to be going the way of the Dodo.
I was not referring to BeOS being a microkernel (which it is clearly not) but to it being real-time. Despite being, in fact, quite slow, BeOS felt very responsive exactly due to the things you mentioned: a very good use of threads, giving the “relevant” ones (GUI-related ones, for example) a higher priority, plus its real-time architecture and scheduler.
I can only speculate about why QNX feels less responsive on the PC; it’s probably the fact that it has not been developed with a desktop system in mind, so the interface and user threads do not get the kind of attention they get in BeOS.
Nevertheless, I guess I should state it again so that I don’t get noobed down: BeOS is surely *not* a microkernel (as far as I remember, it’s a hybrid kernel with a modular design). But, like QNX, it IS real-time.
slightly off topic…
QNX 4 (ooold version) was really snappy on the 386SX 25MHz PC with 12MB of RAM that I ran it on :] I used the QNX 4 NIC demo floppy. Ran great.
ok, that was totally off topic… I’m just one of those people that *has* to comment on a story.
You would have been far better off not attempting to backpedal by saying “I was not referring to BeOS being a microkernel”, because your language in the mentioned post is very clear that you were. You should have stated “I got careless with my typing”; that would have been more believable.
Now, as to BeOS being real-time: BeOS is no more real-time than the NT kernel. It is soft real-time, with no guarantee of things being done at a specific time or with a specific latency, while QNX (like other OSes used for heavy-duty work) is what’s known as “hard real-time”, meaning that if it is possible to schedule something, it will be scheduled and it will happen, barring catastrophic hardware/software failure, and nothing is going to upset those priorities and change how things work if that would violate the intended schedule. Thus, they don’t have nearly the same response characteristics.

An OS I developed for that’s also a hard real-time OS is VRTX: I was developing CNC machine software for a past employer. Is it a microkernel? I’m not sure, as I didn’t bother really investigating that, but it appears (from code I remember looking at) that everything ran in kernel mode, even the user process, since the software handled interrupts and semaphore operations by disabling/enabling interrupts explicitly with machine instructions, which is not legal to do from user-space applications.
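To illustrate that last point, here’s a hedged C sketch (GCC-style inline assembly on x86) of the interrupt-disabling critical-section style described; it shows the general technique, not actual VRTX code:

    /* cli/sti are privileged instructions, so masking interrupts this
       way only works in kernel mode; a user-space process executing
       them takes a general-protection fault. */
    static inline void critical_enter(void)
    {
        __asm__ volatile ("cli");    /* mask maskable interrupts */
    }

    static inline void critical_exit(void)
    {
        __asm__ volatile ("sti");    /* re-enable interrupts */
    }

    /* typical kernel/RTOS semaphore op:
       critical_enter();
       ...update shared scheduler state...
       critical_exit();              */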
Trying to boot 3.1.2 in QEMU (with and without kqemu) leads to a table of addresses, or some other kind of numbers, with processes. A core dump or something like that.
It happens just after you have to select the boot type of the CD.
Help…
In the meantime I’ll test DragonFly BSD.
Looking at the comments here makes me feel embarrassed for being a programmer.
why?
I tried Minix. No offense to Andy, but it pretty much sucks compared to the other *nix OSes.
I would have expected a much more useful and stable OS after two decades of development, even one written by one guy.
I’d say Minix’s current state is about the same as a GNU/Hurd OS; perhaps even worse, because more software and drivers have been ported to Hurd than to Minix.
Well, it was the first time I saw an advertisement inside an article. Not a great pleasure. But with Adblock Plus and Firefox, I can still be a happy reader.
If not… well, I’ll just go away…