John Siracusa, the Mac OS X guru who writes those insanely detailed and well-written Mac OS X reviews for Ars Technica, once told a story about the evolution of the HFS+ file system in Mac OS X – he said it was a struggle between the Mac guys, who wanted the features found in BeOS’ BFS, and the NeXT guys, who didn’t really like those features. In the end, the Mac guys won, and over the course of six years, Mac OS X reached feature parity – and a little more – with the BeOS (at the FS level).
The last piece of HFS+’s puzzle was FSEvents, an asynchronous file system notification API introduced in Mac OS X 10.5 Leopard. Siracusa detailed FSEvents quite clearly in his Leopard review, and added “as for the file system itself, can you believe we’re still using HFS+?” Siracusa stated that HFS+ had to be replaced eventually, and that Apple’s work on porting ZFS to Mac OS X could be the key.
Leopard shipped with only rudimentary ZFS capabilities: a basic read-only ZFS driver, despite rumours that ZFS would become the default file system for Leopard. A read/write version was also available at some point, but only to ADC members. Everybody thought it would eventually find its way into the operating system.
Siracusa was not all that sure that ZFS would be the successor to the venerable HFS+, though. In 2006, he wrote that “although I would be satisfied with ZFS, I think Apple has a unique perspective on computing that might lead to a home-grown file system with some interesting attributes.”
It seems like he was right on the money, as news got out today that Apple has closed down the open source Mac OS X ZFS project. The characteristically short and useless notice on the Mac OS X ZFS project page reads: “The ZFS project has been discontinued. The mailing list and repository will also be removed shortly.”
This pretty much means that no, ZFS will not be coming to Mac OS X, despite serious efforts by Apple in the past. This is not really something to be surprised about; the first big clue that ZFS would not become part of Mac OS X was when Snow Leopard shipped without any form of ZFS support – not even the read-only driver from Leopard.
Let the speculating commence, I suppose. Previous rumours pointed towards possible licensing issues, but it could also have something to do with the NetApp patent lawsuit surrounding ZFS. Coincidentally, Apple posted a job opening for a file system engineer, and John Gruber states that he has heard that Apple is currently working on a home-grown next-generation file system.
What features in BFS could you not like?
They had a UNIX philosophy. You can’t hold that against them.
By “they” you mean the NeXT devs?
Creeptastic avatar, by the way. I like it.
I just fear, absolutely fear, Apple homegrown protocols, file systems, standards, etc. Too much marketing smoke and reality distortion accompanies them, with too few details. The most useful homegrown thing they’ve ever, ever done is FireWire.
Absolutely. Until they get something serious, they will hack something together, put a “bling-bling” interface on it, and call it “innovation”.
And boy, will they patent it!
Do they usually work?
While not entirely homegrown inside Apple, I think Zeroconf is a good example, possibly the only example, of a good protocol to come out of Apple.
Link-local addressing
Multicast DNS
DNS Service Discovery
All good standards. Though we have yet to see Stuart Cheshire’s vision of Ethernet and/or TCP becoming the standard protocol for all devices (TCP keyboard and mouse anyone?), I think Zeroconf is still a huge success.
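For the curious, here’s a rough sketch of DNS-SD in action using the dns-sd tool that ships with Mac OS X (avahi-browse is the rough Linux equivalent); the service name and port below are made up for illustration:

  # browse for web servers announced on the local link
  dns-sd -B _http._tcp local.
  # advertise a service of our own on port 8080
  dns-sd -R "Test server" _http._tcp local. 8080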
The bad example I come up with is resource forks. While adding structured data storage to files isn’t a bad idea, the way it was implemented at the file system level made it very difficult to exchange files with non-HFS systems. This might have been what the UNIX guys at NeXT didn’t like about BeFS: it could store extended metadata attributes attached to a file, and those attributes would be lost in a copy to another file system. If I remember correctly, the address book in an early version of BeOS was implemented using file attributes, which was cool in your walled-off BeOS corner of the universe, but if you copied a “Person” file from BeFS to another file system – or even tried to send one to another BeFS system using a protocol that didn’t support copying file attributes – you’d end up with a useless 0-byte file. However, BeFS did support HFS resource forks by storing the raw fork data as a file attribute.
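To make that concrete, here’s roughly how extended attributes behave on Mac OS X today, using the xattr tool (the file and attribute names are made up) – and the same caveat about losing them when leaving the filesystem still applies:

  # attach a custom attribute to a file
  xattr -w com.example.note "take 3, keep this mix" song.aiff
  # list the attributes stored alongside the file's data
  xattr -l song.aiff
  # copy the file to a FAT32 USB stick or mail it, and the attribute is silently dropped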
I’ve often wondered why Ethernet couldn’t be more pervasive as an interconnect for devices. It would simplify so many things from an IT standpoint, and it is plenty fast enough for the majority of devices, especially since you can run 10/100/1000 on the same lines (not sure about 10GbE). Heck, 10Gb Ethernet is faster than many (most?) hard drives can keep up with.
While ZFS is great, its advantages are targeted mainly at servers; from the user’s POV it’s just POSIX + snapshots/volume management. It doesn’t bring new things to the desktop (with Time Machine, Apple doesn’t even need snapshots). Apple is the kind of company that could want to go beyond POSIX and bring new ideas to the desktop…
Wouldn’t Time Machine profit greatly speed-wise from ZFS if only the changed blocks between two snapshots would have to be sent to the backup disks instead of whole files (which is especially painful with large ones)? I do not own a Time Capsule but I read that larger backups can be quite painful over the air. Plus, the sometimes long calculation of the changes would also disappear.
Wouldn’t Time Machine profit greatly speed-wise from ZFS
Sure, but just that – speedups (which could probably be hacked around in many ways). I think Apple would want to go beyond all that – like presenting applications with something other than a path and a stream of bytes.
With the same hardware, the only way to achieve ZFS-like speeds for snapshots is to adopt ZFS-like methods. Waving your hand and using the word hack doesn’t change that. Apple probably wants ZFS-like features (for the speed and reliability) but to use it like HFS+, with the pseudo-BFS features and legacy Mac-isms.
But that would take a lot of work, and frankly apple doesn’t give a crap about painless effortless snapshots, so no ZFS. They’d rather have something that took forever, but was easy to use, than something instantaneous but had a more difficult UI.
And I suspect that, for the most part, non-geeks would agree with Apple. That being said, I’d rather have something instantaneous and also have an easy UI.
I wasn’t trying to judge Apple, but it’s wrong to expect them to pursue performance over usability. They’ll select usability every time. Linux will most likely select performance, with flexibility.
Unix: Do one thing and do it well.
Apple: Be easy to use for most people.
Different philosophies. It sort of annoys me when Apple’s decisions result in a lack of compatibility/extensibility, or when Unix is just difficult to explain how to use to other people (find . -name '*.txt' | xargs grep -i "bob"). Of course, now that OS X is Unixy you can do some of both. But any new stuff they do is not going to be Unixy.
In my never-humble opinion, I think you pretty much wouldn’t want to present application developers with a representation of a file that was much more complicated than a path and a stream of bytes. I doubt most application developers want much more than that from a file; for them, neat new features are just added complexity.
The stream-of-bytes-at-a-path model of a file has lived for so long and changed so little because it’s a good model that works very well for its intended purpose. It’s not likely that someone’s going to come up with some new-and-better metaphor for permanently-stored data on a disk, that’s going to supplant the filesystems of today.
Apple is also the kind of company where the NIH mindset is very strong.
I can’t see Apple engineers willingly embracing Sun/Oracle technology.
“Apple is also the kind of company where the NIH mindset is very strong.”
That’s simply not true anymore. Their focus in the past was to develop everything in-house to maintain control. Now their motivation is to ship the best product they can and differentiate themselves wherever possible. Often that means including open source, other times it has meant licensing third-party technologies, and yet other times it means creating those technologies themselves.
Then why would they use the Mach kernel, the BSD userland, CUPS, etc.?
They were acquisitions: Mach from NeXT and CUPS when they hired the team that wrote it.
I’m sure Apple would have no qualms with ZFS had they acquired Sun.
Crazy wild unsubstantiated drivel from me:
Maybe Apple wanted to buy Sun, but Oracle beat them to it (better offer, faster negotiations, whatever)?
There were merger rumors for years. A disaster for both companies IMHO; AOL Time-Warner all over again.
Apple should have bought SGI, if anything, for the XFS devs and graphics tech.
DTrace… a Sun product.
Because they were desperate!
How about “no more filesystem checking”?
There is no fsck for ZFS. End to end checksumming does it all.
There is no separate, offline fsck. But there still is the online, background “fsck” known as scrubbing. And it’s recommended that you do that at least once a month.
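For reference, a scrub is a one-liner (the pool name here is just an example):

  # verify every checksum in the pool and repair from redundancy where possible
  zpool scrub tank
  # check progress and any errors found
  zpool status tank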
It would bring low level checksumming with error correction (protection against bit-flips) and super-flexible support for multiple disks.
They could use it to implement a much better time machine with short snapshot intervals, requiring a fraction of the IO-usage and storage space of the current hardlink implementation.
They could also use its support for SSD caches, which means that you could add a small, expensive-per-GB but superfast SSD to your storage pool and have your most frequently used files automatically and transparently hosted on the SSD while the less common files stay on your large and cheap 3.5″ SATA disks.
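A rough sketch of what those features look like from the command line – the pool, dataset, and device names are purely illustrative, not anything Apple ever shipped:

  # cheap point-in-time snapshots of a home dataset
  zfs snapshot tank/home@monday
  zfs snapshot tank/home@tuesday
  # send only the blocks that changed between the two to a backup pool
  zfs send -i tank/home@monday tank/home@tuesday | zfs receive backup/home
  # add a small SSD as a read cache (L2ARC) in front of the big SATA disks
  zpool add tank cache c2t0d0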
While I’m sure that we’re all excited by the technical advantages of ZFS, the fact is that Apple thinks those reasons are not enough to use it – and they now have experience of using it.
I’m running OpenSolaris as my primary desktop.
Doing riskless OS upgrades with snapshots is the best invention since sliced bread.
There are many more features that are nice on the desktop (cloning, compression etc.)
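On OpenSolaris the riskless-upgrade trick is done with ZFS boot environments; roughly like this (treat it as a sketch – the dataset name is the stock default):

  # create a new boot environment before upgrading; boot back into the old one if things break
  beadm create pre-upgrade
  beadm list
  # compression is just a dataset property
  zfs set compression=on rpool/export/home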
What hardware are you using? !!!
I’ve been trying to find a standard machine to do this with for a long time. A computer where I do not have to spend extra hours loading a driver from a CDROM burned from a different computer just to get the wired network interface working. That is utter nonsense. Do you have a recommendation? I love OpenSolaris, except for that pain.
I am also running OpenSolaris as my primary desktop. It works fine. I just use an Intel 9450 CPU, an ATI 4850 (no 3D driver, only 2D), and ordinary SATA discs. Here is a hardware compatibility list, so you can see components that work with OpenSolaris:
http://www.sun.com/bigadmin/hcl/data/os/
Actually, ZFS would be ideal for media professionals who use OS X.
Think about musicians or sound engineers who deal with files that are hundreds of megabytes in size. Rather than having dozens of copies of the same file taking up gigabytes of diskspace, ZFS would just store the differences.
Plus support for software RAIDing would make recording of ultra high quality audio a breeze where currently latency is often an issue.
ZFS also has native support for compression (which is ideal when you’re a media professional frequently handling data that can’t be lossily compressed).
ZFS also doesn’t require defragging or scandisk/fsck’ing – which is in line with Apple’s whole philosophy (as in “it just works”).
And let’s not forget the improvements to Time Capsule (as already mentioned).
ZFS could have been an awesome addition to OS X and a valued asset for media professionals who regularly work with high resolution samples.
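For what it’s worth, the compression and “store only the differences” parts are already everyday ZFS usage elsewhere; a hedged sketch with made-up dataset names:

  # transparent compression on a dataset full of uncompressed audio
  zfs set compression=on tank/audio
  zfs get compressratio tank/audio
  # rather than duplicating a huge session, snapshot it and clone it;
  # the clone shares blocks with the original and only new changes take space
  zfs snapshot tank/audio@session1
  zfs clone tank/audio@session1 tank/session1-remix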
HFS+ makes filesystem geeks like myself have nightmares; the way it stores its data and resource forks is ridiculous, and it makes FAT32 look like XFS. Dominic Giampaolo (author of BFS and the excellent book “Practical File System Design with the Be File System”) works for Apple now. Why doesn’t Steve pick up the phone and be like, “Hey, remember that really neat filesystem you whipped up in 9 months for Be back in the day? Here are 10 more engineers, I’ll be in touch.”
I thought they hired Giampaolo in 2002? If he is still there, he probably has something impressive cooking in the skunkworks.
8 years is a hell of a development cycle for an FS.
I think he added journaling to HFS+, then worked on Spotlight after that.
Yes, that is what I heard.
Pretty sure resource forks are deprecated in Mac OS X (and have been for a couple of versions).
dbg has been working on HFS+ for quite a while; where do you think the journalling and FSEvents stuff came from? These are direct descendants of BFS features.
used the Win7 buzz to kill promising tech without too many people taking notice
On those topics:
HFS+ = crap
http://www.smh.com.au/news/technology/utter-crap-torvalds-pans-appl…
Win7 thumbs up
http://www.engadget.com/2009/10/23/linus-torvalds-gives-windows-7-a…
Yeah, because Linus is the be-all and end-all of what is good and what is not.
Is it just me, or has Linus not changed much in 25 years (appearance-wise)? It’s kind of creepy. I think I will dress up as Linus for Halloween.
It’s a wig!
Neither of them mentioned HFS+. When Linux has Creative Suite, Microsoft Office, better hardware and software selection from big name vendors – then I’ll give the slightest toss what people like you think of Apple or Microsoft.
WTF? What are big name vendors? Why is their software superior to open source software? Is IBM a small name? What about SUN? What the heck does that have to do with file systems?
If you want to know what a good file system is, you ask people who care about file systems. Microsoft has no choice: NTFS or FAT32 is a no-brainer. Apple: HFS+ or UFS, a no-brainer. Linux: ext4 vs reiserfs vs XFS vs btrfs. That’s competition that requires people to evaluate file systems based on independent benchmarks as opposed to blind OS fanaticism.
Out of sight out of mind.
The Linux install base probably outnumbers OS X 100 to one. People who think that Linux is small-time compared to Apple really have no clue at all what they are talking about.
True, absolutely true. It’s used much more in high-performance servers, where file system performance and resiliency are more important than they are on desktops.
Some years ago, I read in a German forum that someone “does not like the Linux file system because the pictures are too big” – obviously referring to some icons in some file manager.
This shows one thing: Users aren’t interested in file systems per se, in most cases they don’t even know what a file system is, or they confuse the term with something else. Users are just interested in what benefits a particular file system gives them, and those who “give them” the file systems (along with operating systems they sell) should promote advantages of the file systems they use according to what it means to their specific target audience. Apple’s Mac OS X is primarily targeted to the home market and the professional workstation market, not to server farms or heavy virtualization sites.
Legacy.
I was always fine with UFS2, but I honestly prefer ZFS as its follower in BSD and Solaris. But for Mac OS X, it’s highly debatable whether ZFS or UFS2 is the best choice, remembering the fact that the target audience’s interests primarily indicate what to develop (and to sell), given the specific characteristics of the hardware used, as well as the settings in which it is used.
Blind OS fanaticism seems to be a result of excellently working marketing. This applies to the same folks who demand MICROS~1 “Office” on every platform and who cry for “Photoshop” on Linux. Often, the same folks have pirated copies of everything they use.
Um, yes, the first one does mention HFS+.
To quote:
Maybe it’s hard to believe for an Apple fan[atic], but Linux already has a “better hardware and software selection from big name vendors” than Apple and their OS.
… or Microsoft and their OS. Linux has hardware support in the bag. As far as being a popular platform for desktop productivity apps, then the point is a fair one. The big players in commercial desktop applications don’t seem to pay any attention to Linux.
Linux has hardware support in the bag, eh? Funny you should say that, especially since there are loads of devices that work in Windows or even OS X but have poor drivers or no drivers in Linux. If your hardware follows a standard or has a good open source driver, then Linux has your hardware in the bag; otherwise you’re more than likely SOL, because as a driver development platform Linux is simply awful (kernel versions and such nonsense).
If Linux really wants (and I hope they do) the “big players” to create applications for their OS, then they had better get to work on standardizing an API to work with in as many areas as they can. It is pretty damn hard to “write an application for Linux” and have it work on most distributions without recompilation or installing every dependent library along with it. It is much easier for an application developer to target a given “product” like Ubuntu 9.04 than to target Linux as a platform.
The people that drive Linux standards need to consider the overhead of this “many platforms within a platform” problem and find a solution before they can ever expect the masses to come develop commercially viable software for Linux (outside vertical markets, that is). It is the fluid nature of Linux that is both appealing to casual developers and the bane of mass commerce.
And for the record, I am very much a fan of Linux and use it daily. But being a technical person who does development daily, I will say that as a business desktop programming platform Linux leaves a lot to be desired. My opinion, of course, but I’ve seen nothing in recent history to change it.
…have by then hopped onto another bandwagon, and will be praising yet *another* OS while trashing MacOSX and the others?
? -> Linux -> OpenSolaris -> Apple -> ?
is how my Kaiwai radar screen looks right now. The edges are a bit blurry.
Oh no – Maclots vs. Freetards? I can’t decide which side I want to see lose more, since they’re both so insufferably obnoxious in their own special ways.
Aren’t most Linux systems still using ext2 for their filesystems? And he’s complaining about HFS+ having legacy features? At least it’s journalled.
I still can’t find a Linux distro with btrfs as the default (or even an option for non-/boot filesystems). Which is a shame, I’d like to check it out and compare to ZFS on my FreeBSD server.
OS X is the best desktop (and laptop) UNIX I’ve ever used.
No, actually: most of the distros I’ve tried have used ext3 as the default for some time, which is journaled. Some are even using ext4 now.
Just because the installer won’t create a BTRFS file system doesn’t mean it’s unavailable. It’s at least possible that, if the kernel on the install disk has BTRFS support built in, you could create a BTRFS file system before you start the installation (say, by using a convenient live disk, like gparted or SystemRescueCD) and just select that partition as the installation target from within your distro’s installer. Slightly technical? Sure, but not impossible, and well within the capabilities of… well, anyone who has any business testing and speed-racing BTRFS.
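Something along these lines, from the live disk (device name made up, and this assumes the installer kernel really does have btrfs support):

  # create the filesystem ahead of time...
  mkfs.btrfs /dev/sda2
  # ...then point the distro installer at /dev/sda2 as an existing partition
  # instead of letting it format the disk itself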
I’m not sure why they would want to kill promising tech unless it just didn’t make sense for them to pursue it.
ZFS is variously thought to be… manna from Heaven, the spawn of Satan, a universal panacea, a rampant layering violation, a spiteful licensing attack upon Linux, or a sure-fire way to enlarge various body parts.
Maybe ZFS was, in the end, not deemed to be a good fit for MacOSX and Apple.
I don’t see anything sinister, here, that would have to be done under cover of darkness.
Use OpenBFS; it’s MIT-licensed.
Besides, it wouldn’t be the first time they’ve ripped off ideas from BeOS anyway… so why not take the real version instead of some imitation?
Oracle isn’t like Sun. They’re aggressive and deeply closed-source… sooner or later they may turn on you.
The fundamental problem I think people are avoiding addressing is ZFS’s major memory hogging; this might be OK if you have a massive multi-core monster with a minimum of 2GB of memory to get decent performance. That isn’t acceptable for a file system that is supposed to ‘rule them all’ and scale from an embedded iPhone/iPod Touch device all the way up to a Mac Pro with a high-end configuration.
If they are going to create a new file system then they’ll need to be able to convert existing users’ file systems to the new one and then defragment them afterwards, so as to avoid any performance loss from the conversion process (FAT16 -> FAT32 -> NTFS yielded massive fragmentation). I assume with the moves they’ve made in 10.6 that it is now possible to swap many parts of the system out and replace them incrementally, now that, for example, there is a standard way of interacting with the file system. If I remember correctly there was a PDF put up on the Apple website which talked about replacing many ways of achieving something with a single system call, thus making a programmer’s life easier.
Regarding the filesystem, I wonder if they’re going to possibly use an existing one? Although it might make sense to have an in-house one, there are also some great ones out there, such as HAMMER from DragonFly BSD; or maybe Apple could buy VxFS from Symantec (who bought Veritas)? Hopefully they’ll deliver something that is reliable. With that being said, HFS+ isn’t as bad as some claim. In the 8 years of using a Mac I’ve never experienced data loss because of the file system going pear-shaped.
Actually I was thinking the same about HammerFS: why doesn’t Apple adopt it as its FS? It is a nice filesystem, it has a very sexy license, and they would end up helping HammerFS development and DragonFly in the process.
I’d be more willing to agree if it weren’t for personal experience with my Core 2 MacBook hitting swap every day with 1GB and Leopard. It wasn’t really usable until I upgraded (to 4GB).
Anyway, I had hoped for ZFS on the Mac, but I’m just as annoyed that they are removing UFS. HFS+ doesn’t even support sparse files.
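For anyone wondering what sparse file support buys you, here’s the usual demonstration on a filesystem that has it (the file name is arbitrary):

  # create a file with a logical size of 1GB but almost no blocks allocated
  dd if=/dev/zero of=sparse.img bs=1 count=0 seek=1G
  ls -lh sparse.img   # shows the logical size, 1.0G
  du -h sparse.img    # shows the space actually used, essentially zero

Without sparse file support, the filesystem has to physically allocate the whole gigabyte.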
And what you experienced has nothing to do with the HFS+ file system at all. We’re talking about ZFS and the memory used to improve performance. It is a known side effect of the file system design – ZFS was never designed to be used in an environment where memory is at a premium.
UFS was a walking disaster area when one considers the litany of issues people had with it. Apple is eventually going to replace it with something that will scale from embedded to servers, so that they don’t have duplication and thus unneeded extra cost. HAMMER will do what you need – are there features missing? Of course, but HAMMER is in continuous development, with the shortcomings being addressed.
Unlike ZFS, HAMMER provides everything plus a lower memory footprint – I’d say that is a pretty good alternative to ZFS.
Of course not. Why would you think that I connect OS X memory usage with HFS+? My point is that Leopard as deployed on desktop-class platforms is _already_ a memory hog in my experience. Thus it’s hard for me to speculate on ZFS not making the cut because it _might_ be a memory hog on OS X.
Why should a next-gen Apple FS be required to span all platforms? This is as likely as not to lead to undesirable compromises on both ends, even if it is cheaper. I say horses for courses. (Do we buy Apple products because they are cheap or cheap to design? Does Apple pass cheap design costs on to us?)
Hammer might be great on OS X except for the fact that it does not currently exist on OS X. I wouldn’t bet on that changing either, but would be pleased to be proven wrong.
It probably did not make the cut because it is not only a memory hog, but because it is slow and does not have the features necessary for compatibility with OS X applications.
Apple is good at making slick user interfaces and is good at marketing their hardware.
The original iPhone, for example, had slow and overpriced hardware compared to its contemporaries at the time. In all respects it was a mediocre product, with the major exception that Apple was able to make the interface attractive and easy to use and was able to market it intelligently.
Apple uses HFS+ because the file system is really irrelevant to the sort of thing that people buy OS X for. It does not really matter, in terms of desktop user experience, that behind the pretty face lies an OS that depends on a file system that is fragile, overly complex, and slower than what is offered by Windows or Linux. So what if your applications take 2 seconds longer to start up and you have a 15% higher chance of data loss during an improper shutdown?
Apple designed their UI so that you can’t really tell the difference either way.
If people actually paid attention to Apple’s documentation they would not have been stupid enough to try to run OS X on UFS.
UFS was provided as a POSIX-compatible file system for things like compliance testing and running database products. HFS+ is NOT POSIX-compatible. UFS was never an alternative to HFS+.
I’m pretty sure that HAMMER doesn’t give you writeable snapshots, which ZFS does – that feature could be very useful for some purposes. For people who just want free or cheap read-only snapshots, HAMMER should satisfy, feature-wise.
Interestingly, Matt Dillon (DragonFly BSD lead developer) was considering ZFS for a while but decided it didn’t solve the problems he was interested in (he wants to do single-system-image clustering, i.e. tying a cluster together into a single logical Unix system; HAMMER is designed to accommodate that goal in some way).
I wonder if Apple is looking at using WAPBL: http://www.bsdcan.org/2009/schedule/events/138.en.html
While this may seem like a bad idea, think about the fact that they already have UFS support. Getting UFS2 support and WAPBL would make a good fit. They are both BSD-licensed, supported, and incredibly stable.
First, ZFS runs just fine with 1GB on my EeePC, which is no multi-core monster. Second, have you checked the recent per-GB RAM prices?
… I guess the Gods let me have those instead …
I’ve lost 1 TB + 1.8 TB due to file system poo-poos that were caused by HFS+ and nothing else …
Anyways, NEVER run HFS+ on large volumes with massive amounts of data without keeping your copy of DiskWarrior in your back pocket – just in case …
I have to agree; my MacBook suffered an HFS+ file system fault which fsck and other checks could not fix – I even tried booting off the disc. I had to reformat, reinstall, and then restore to get it running OK. It corrupted a lot of data for no reason. The laptop hadn’t been switched off without a shutdown; it just developed a fault.
ZFS is one of the best FSs. However, I also highly rate NTFS, as I’ve used it for some heavy-duty file operations with both small files and large files, and even after quite a few crashes NTFS keeps going. The only thing I would say about NTFS is that sometimes it likes to fragment itself quite badly.
Another thing about NTFS is that it rarely goes belly-up, but when it does, it does so in a rather spectacular way. Ever had your cluster bitmap become corrupted, i.e. the section of the MFT that tells the FS which space is used and which is free? When that information gets out of sync you essentially face the issue of disappearing files, because the FS is writing over files that have been marked as free space and updating the MFT accordingly. The MFT and cluster bitmap itself can be fixed rather easily, but there’s no real way to undo the damage already done except to restore from a backup. Of course, if you run a mission-critical server, no matter what FS, and don’t have a working backup, then you’re asking for whatever misfortune you get.
I’ve had similar experiences – OS X and HFS+ account for a disproportionately high number of the tech support calls I get due to data loss. Most of the time, it’s a thumb drive or enclosure that was unplugged/powered down without being un-mounted first – a bad idea with any OS, but I’ve only ever seen it result in actual data loss with OS X & HFS+.
Perversely enough, I’ve found that the quickest solution is usually to connect the drive to a Windows machine with the “MacDrive” software installed (a commercial app that lets Windows read from HFS+ volumes), then plug it back into the Mac. I’m not sure why, but it’s worked for me 9 times out of 10.
Count me on this list too. I have lost two file systems due to corruption in the last 2 years.
In both cases DiskWarrior was able to recover them. Which leaves me wondering which aspect of HFS+ I should be more pissed off about.
1) That HFS+ is a corruption-monkey filesystem.
2) That Apple’s utilities are so crappy that they can’t even repair their own file system.
I have not had a Windows, Solaris, or Linux file system corrupt on me since 2002. <- and of course the utilities provided by the OS were able to fix it.
Which all Macs have had for a good while now.
So don’t port it to embedded devices.
That doesn’t mean that said devices can’t still interact with OS X though.
Let’s not forget that iPods (or at least my missus’ iPod) still run on FAT32 – so it’s not even as if Apple’s embedded devices currently run the same FS as their desktop machines.
No they wouldn’t.
They only have to recommend the new FS for clean installs.
Apple should really implement ext4 using their own code; it should be fairly quick to do, and it would be miles better than HFS+.
I need two hands to count the number of times HFS+ has gone pear-shaped for me and I’ve lost data. That’s not impressive at all. I shudder to think of what Leopard users must go through, considering that their operating system deletes files before putting them back on disk when you’re just trying to save them.
And then when Btrfs comes out, Apple can nicely re-code that too.
But honestly, for god’s sake, get rid of HFS+ and get rid of it SOON.
Maybe I’m just the odd one out, but I’ve had issues with data loss on ext4 in the event of a system crash or hibernation gone bad and none, I repeat none, with HFS+. That being said, I agree fully that HFS+ is antiquated in design and should get replaced with a better filesystem. But please, please not ext4!
Btw, AFAIK ext4 still has those ugly limits on xattr size (4k/inode for all of them or something)…
Mount with the “nodelalloc” option. Ted Ts’o is still singing “La! La! La!” with his fingers stuck in his ears, admiring his great benchmark and fragmentation-avoidance numbers, and thinking that the patches he put into 2.6.30 really and truly mitigated the important problems inherent in delayed allocation in a meaningful way.
I believed him, sorta. Until it trashed several C/ISAM files belonging to one of my customers after a crash. And then did it again a week later. (The crash was unrelated to the FS and will soon be fixed.)
Currently, I have “nodelalloc” set in fstab, and “data=journal” set as default in the superblock. I don’t think I really need data=journal, but the customer says they see no noticeable performance penalty, so I’m leaving it.
However, I think that “nodelalloc” is probably the only change I really needed to make. We ran ext3 for years at the defaults, and *never* had these issues, even under adverse conditions.
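For reference, the setup described here looks roughly like this (the device and mount point are made up):

  # /etc/fstab entry: disable delayed allocation, keep full data journalling
  /dev/sda2  /srv/data  ext4  defaults,nodelalloc,data=journal  0  2
  # or bake data=journal into the default mount options in the superblock
  tune2fs -o journal_data /dev/sda2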
Heh, thanks for that; I’ll bear that in mind the next time I do an OS install.
I’ve certainly seen this sort of thing happen – to some test systems, thankfully. I just don’t think ext4 is necessary in any way. ext3 was the end of the line for the ext filesystem family, and I don’t think stretching the codebase out any further to do things it wasn’t meant to do has done anyone any good.
Hey! Extents are nice. And safe. 48-bit is nice. And safe. Delayed allocation was reckless… and not safe. Turn it off, and continue enjoying the stability of EXTx. And shame Ted Ts’o when appropriate. He’ll notice, eventually. He’s just living in his own little File-System Superstar world right now.
Yes, they are nice additions, but I would have preferred the addressing of concerns like dynamic inode allocation, which would have been a nice improvement over ext2/3. Running out of inodes is a very common problem. I certainly understand the backwards-compatibility reasons for that, but it’s a very common and real problem nonetheless.
While I respect the need for backwards compatibility with filesystems and why new filesystems like ZFS often have a tough time, I can’t help but feel we’re at the end of the line.
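For anyone who hasn’t hit it: the inode count on ext2/3/4 is fixed when the filesystem is created, so the usual workaround is to check usage and, for filesystems that will hold millions of tiny files, reserve more inodes up front (device name made up):

  # see how close each mounted filesystem is to running out of inodes
  df -i
  # create the filesystem with one inode per 4KB of space instead of the default
  mkfs.ext3 -i 4096 /dev/sdb1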
Really? I’ve never had such a problem. Inodes are generally over-allocated by a very wide margin with the defaults. Even on the multi-user systems I administer. Static allocation has definite advantages when it comes to fsck. When you know where things *should* be, you know better what to do when they don’t seem to be where they are supposed to be.
Dynamic inode allocation might seem like a good idea at the time… in the same sort of way that killing one’s wife with a Judo hold might. But I don’t think it really pays off in the end.
I will never use ext4. Use ext3 as a general purpose filesystem or XFS if you really need specific performance but don’t use ext4 – it just isn’t necessary. I won’t use a distro that won’t let you change the default filesystem.
While I respect Ted Ts’o’s work over the years, I’m afraid he’s another developer who has got to the point where he is too wrapped up and proud to see the failings in his own code and that of those close to him. I don’t care for his defensive tone over ALSA either.
…is that there was a very good reason to drop ZFS. To actually announce they were going to use it was a big step. It would be interesting to hear the full details…
I say this because writing an FS isn’t an easy thing; it’s crazy to write one when one already exists. If there was functionality they wanted, it would be easier to add it to ZFS than to start from scratch…
Apple uses Unix, OpenGL / PDF, OpenCL, OpenAL, CUPS, gcc, llvm, WebKit, and the list goes on and on. Most of these have been improved by Apple and given back to the community. Apple also benefits greatly from the community – it’s like having thousands of extra coders on the payroll, but you don’t have to pay them.
From what I have seen, they don’t re-invent the wheel anymore unless they see a need; there is no point. No company does (well, shouldn’t). Usually the “need” may elude us or be closely tied to revenue, but it’s still a “need”.
So, I would guess they are either looking at something else, or cooking their own. If they are cooking their own, you can bet it will be a lot better than HFS+. Better than ZFS, I seriously doubt that (but we can hope)…
I don’t think 1GB of RAM is too much for an enterprise server file system? Enterprise servers tend to have more than 64MB of RAM? As soon as some app needs RAM, ZFS will release it. Until then, it will grab all the RAM it can get. Which is a good thing – RAM should be used.
I think the rumours that ZFS needs several GB of RAM just to boot come from the first FreeBSD port attempts. The ZFS port to FreeBSD used lots and lots of memory, but that was because of a bug. The FreeBSD developer explains:
http://queue.acm.org/detail.cfm?id=1317400
“The biggest problem, which was recently fixed, was the very high ZFS memory consumption. Users were often running out of KVA (kernel virtual address space) on 32-bit systems. The memory consumption is still high, but the problem we had was related to a vnode leak.”
Also, keep in mind that the high memory usage is also due to very aggressive caching on ZFS’s part
There is something about all of this that I have never understood. The linux page cache and buffer cache make aggressive use of memory for caching. To a great extent, application pages and disk data are treated the same. (Though not exactly the same. e.g. /proc/sys/vm/swappiness.) And yet when more memory is needed for applications, it is available in a flash. Thus no one ever speaks of Linux filesystems as having a memory *requirement*. What flusters me about ZFS is all this talk about it having a memory *requirement*. A filesystem should not have a memory *requirement*.
Is this an artifact of the strange way that ZFS was implemented? i.e. a result of it being a “rampant layering violation”, as Andrew Morton once quipped? In Linux all that sort of caching, for all block devices, as well as applications, takes place in one unified layer. But in ZFS, I guess it allocates large amounts of memory and does its own management of it, independent of the rest of the kernel?
ZFS releases all memory as soon as an application needs RAM. The thing is, to achieve good performance you need at least 1GB of RAM and a 64-bit CPU. If you have 512MB of RAM or a 32-bit CPU then the performance will not be as good.
I used 1GB of RAM and a 32-bit CPU and only got 20-30MB/sec with 4 discs. With a 64-bit CPU, I get over 100MB/sec. ZFS is 128-bit, so it likes 64-bit CPUs.
So, ZFS does not _require_ much RAM or a 64-bit CPU, but your performance will not be good without them.
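For what it’s worth, the cache (the ARC) can be inspected and capped on Solaris/OpenSolaris; the 1GB limit below is just an example value:

  # watch how much memory the ARC is currently using
  kstat -m zfs -n arcstats | grep size
  # cap the ARC at 1GB by adding this to /etc/system and rebooting
  set zfs:zfs_arc_max = 0x40000000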
http://mail.opensolaris.org/pipermail/zfs-discuss/2009-October/0331…
> Apple can currently just take the ZFS CDDL code and incorporate it
> (like they did with DTrace), but it may be that they wanted a “private
> license” from Sun (with appropriate technical support and
> indemnification), and the two entities couldn’t come to mutually
> agreeable terms.
I cannot disclose details, but that is the essence of it.
I don’t buy that at all. It’s a reason given on a mailing list where people are frantically running around for an answer other than “Apple couldn’t integrate ZFS into OS X properly and felt it was the wrong solution in the long run that might well create more work.” While reading and writing to ZFS on OS X has approached something like production quality, using it as your one true filesystem is something else. Performance issues need to be analysed and corrected (ZFS does a lot of things that have never been seen in widespread desktop filesystems), as does far deeper integration with the operating system. HFS(+) has been bludgeoned into doing that over many years.
Apple has integrated many software components under a variety of open source licenses and never had problems before. They might have wanted Sun to give them a special license or come to some kind of support agreement, but that really shouldn’t have been any trouble at all for Sun. The relationship would have been extremely beneficial to both Sun and Apple considering the workload that could have been shared, especially considering Sun’s takeover by Oracle and Apple’s historically bare filesystem development resources.
Not big news, but nice to see that decent reporting is still alive in a few hidden places on the internet.
http://devwhy.blogspot.com/2009/10/loss-of-zfs.html
That’s a pretty well done blog article that covers the main probable points.
http://milek.blogspot.com/2009/10/apple-abandons-zfs.html