Apple has given developers a ‘ZFS on Mac OS X Preview 1.1’ package, which offers preliminary support for the ZFS file system originally developed by Sun Microsystems for its Solaris OS. Currently, Mac OS X is based on the HFS+ file system, but leaked screenshots of earlier Leopard builds showed options for formatting hard drives as ZFS. Reportedly, this preview allows full read and write capabilities with the latest developer build of Mac OS X 10.5 Leopard, the upcoming version of Apple’s OS X operating system.
ZFS might be useful on a huge Sun server system, but I fail to see what it could do for my laptop or desktop. I just cannot see what is so “good” about it. It doesn’t make my computer faster or safer, or give me more useful (yes, useful, it has to do something for *me* as a *user*) features.
I would much rather see Apple make more of the metadata capabilities of the HFS+ filesystem. That would be more useful to me. I am banking on solid state disks to take care of speed for me :)
ZFS might be useful on a huge Sun server system, but I fail to see what it could do for my laptop or desktop.
Your lack of insight is not a justification or an argument.
I just cannot see what is so “good” about it. It doesn’t make my computer faster or safer, or give me more useful (yes, useful, it has to do something for *me* as a *user*) features.
Go and have a read of http://en.wikipedia.org/wiki/Zfs and educate yourself.
I would much rather see Apple make more of the metadata capabilities of the HFS+ filesystem. That would be more useful to me. I am banking on solid state disks to take care of speed for me :)
Can you please expand on this? I fail to understand what it is you are trying to say.
HFS is not a very modern filesystem. From what I’ve read, its internal data structures are not conducive to parallel create/delete/resize operations because all files are stored in a single B-Tree that was not designed with multiprocessing in mind. Then again, OS X was a really poor multiprocessing OS in general until Panther and it is still a work in progress after that.
Moving to ZFS will remove another bottleneck in the OS X kernel. HFS+ is a really old filesystem design and while replacing it, Apple is smart to pick the best new alternative in the non-GPL open-source world.
HFS+ has had ACLs built in since 10.4. Its journaling capabilities aren’t up to snuff, but clearly ZFS will change all of this and then some.
Checksumming all your data is good enough to sell me on it. It’s a bit like free ECC memory!
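The “free ECC” analogy can be made concrete. ZFS records a checksum when a block is written and verifies it on every read; here is the same idea in miniature with ordinary shell tools (the file paths are just for illustration):

```shell
# Record a checksum at "write" time, then verify at "read" time.
printf 'important data' > /tmp/zfs_demo.bin
sha256sum /tmp/zfs_demo.bin > /tmp/zfs_demo.sum

# Simulate silent corruption (bit rot) that the drive never reports.
printf 'imp0rtant data' > /tmp/zfs_demo.bin

# A plain read returns the bad data happily; the checksum catches it.
if sha256sum -c /tmp/zfs_demo.sum >/dev/null 2>&1; then
  echo "data ok"
else
  echo "corruption detected"
fi
```

ZFS does this per block and automatically, and when a redundant copy exists (mirror or RAID-Z) it can also repair the bad block on the fly instead of just flagging it.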
HFS+ is not the most reliable file system. If you don’t have enough free space, then you might have some problems.
http://www.macfixitforums.com/printthread.php?Board=xutilities&main…
My personal experience with OS X confirms what the tech says. I’ve had file corruption, but nothing that Diskwarrior couldn’t fix. fsck is not enough.
ZFS is going to be a great and welcome addition to OS X. I just hope read/write support is not pushed back until 10.6.
Don’t let the 128 bitness fool you. It’s full of common sense goodness that applies to us real world folks. The checksumming. Fantastic administration utilities. There is no reason that we Linux guys couldn’t achieve the same level of administration ease, but given years and years… we haven’t.
Back to 128 bits, though. 16+ million terabytes, the limit for a 64 bit filesystem, really *should* be enough for anyone for a long while. And anyone who can afford more hardware than that would also benefit from writing an application specific layer to handle a greater amount of storage. The 128 bit part was pure marketing.
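On the administration point, the oft-quoted example is that a redundant pool plus a mounted filesystem is two commands. A sketch, assuming two spare disks with Solaris-style names c1t0d0 and c1t1d0 (this needs a system with ZFS installed, so treat it as illustration):

```shell
# One command builds a mirrored pool; no partitioning, no fstab editing.
zpool create tank mirror c1t0d0 c1t1d0

# One more carves out a filesystem, already mounted at /tank/home.
zfs create tank/home

# Properties like compression are per-dataset one-liners, too.
zfs set compression=on tank/home
```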
Assuming you’re not trolling here, this pretty much sums it up:
http://uadmin.blogspot.com/2006/05/why-zfs-for-home.html
I can think of plenty of uses at home where snapshots, end-to-end error detection and recovery, simple administration, on-the-fly compression etc. would be useful.
So yes, it would make your computer faster (ZFS can offer blazing performance), safer (snapshots so you keep point in time copies of your important files, end-to-end data protection) and more useful (compression, easy administration, pooled storage).
Nice article.
It would be nice if Apple created a nice GUI for ZFS administration.
What would be _really_ nice is if that GUI were called Time Machine. It seems all Time Machine features are fast and cheap with ZFS.
http://blogs.sun.com/erickustarz/entry/zfs_on_a_laptop
http://unixconsole.blogspot.com/2006/07/using-zfs-on-external-usb-h…
ZFS might be useful on a huge Sun server system, but I fail to see what it could do for my laptop or desktop.
I understand your point of view. Changing filesystems is never trivial. However, Apple really do need to update HFS+ or get a modern filesystem that is going to grow with future development.
ZFS would make things such as backups a whole lot easier and more reliable with snapshots, you’ve got built-in free compression and encryption, and it would help Apple’s new backup thingy no end.
you’ve got built-in free compression and encryption
Didn’t think they had encryption working with ZFS yet.
Didn’t think they had encryption working with ZFS yet.
They don’t. Well, kind of. An alpha release for ZFS encryption was released a few days ago.
Didn’t think they had encryption working with ZFS yet.
They haven’t… yet. It’s being worked on as far as I know. But at least you’ll have it from one filesystem, one package and one familiar set of tools.
I just cannot see what is so “good” about it. It doesn’t make my computer faster or safer, or give me more useful
Built-in compression makes your computer faster. Even though it takes more processing, the bottleneck is the hard drive: if you read fewer blocks from disk, it will be faster.
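The argument is easy to demonstrate with any text-heavy file: the compressed form is a fraction of the size, so the disk has far fewer blocks to deliver. A rough sketch with gzip standing in for ZFS’s built-in compression (file names are arbitrary):

```shell
# Make a compressible 'document' of repetitive text.
yes 'the quick brown fox jumps over the lazy dog' | head -n 20000 > /tmp/demo.txt

# Compress a copy and compare on-disk sizes.
gzip -c /tmp/demo.txt > /tmp/demo.txt.gz
orig=$(wc -c < /tmp/demo.txt)
comp=$(wc -c < /tmp/demo.txt.gz)
echo "original: $orig bytes, compressed: $comp bytes"
```

Reading the compressed copy pulls only a fraction of the bytes off the platter; as long as the CPU can decompress faster than the disk can read, the net effect is a speedup. ZFS defaults to the lightweight LZJB codec for exactly this reason.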
Why you as a user should care about ZFS.
1) Checksumming. ZFS will know about your bad drive before you do, and hopefully before your data does. This alone is a good thing.
2) Snapshots. “Instant” backups before you go running that large batch process on all of those files. Push a button so you can easily go back after you discover that missing parameter that corrupted the entire batch. When working with things like video, large images or audio files, there is no such thing as a “quick backup”. With snapshots, backups are instant and you can recover your work if you (or, say, your cat; they get into everything) accidentally destroy something.
3) Seamless space. Drives are enormous today out of the box; ZFS makes them even more so. When your video folder fills up, no longer do you have to come up with contrived partitioning schemes to put some videos on drive A versus drive B. Just slap a new drive into the machine, ZFS will fuse the two together, and you instantly get “more space”, all of it effectively continuous space rather than partitioned space. So your “My Videos” folder can just grow seamlessly. Hello, 1TB video directory splattered across that old 250G, a 400G and that new 750G drive you just got. This is all transparent to you, of course. All you do is add the drive. A quick format, no copying, no moving crap around, no links, just more drive, just like a new stick of RAM.
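Points 2 and 3 each come down to a command or two. A sketch assuming a pool named tank with a videos dataset (this needs a ZFS system, so read it as illustration rather than something to paste):

```shell
# 2) The "instant" safety net before the risky batch job...
zfs snapshot tank/videos@before-batch
# ...and the undo button if the batch corrupts everything.
zfs rollback tank/videos@before-batch

# 3) The new 750G drive arrives: one command grows the pool,
#    and every dataset on it simply sees more free space.
zpool add tank c2t0d0
```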
I’m not going to mention ease of administration, striping, raiding, sub volumes, etc. Those are most likely transparent to the end user. A slick Apple GUI will hide all of that mundane stuff that makes ZFS Very Nice for the Drive Array folks out there, but not really related to the end user.
So, safety with checksums, instant “just in case” backups, and painless new storage are all fruits that ZFS brings to the table for the single user.
It was ZFS in the drive array with the checksum.
Snapshots and raid-z could also be quite useful.
Nothing new under the Sun, or Apple.
Haven’t we seen a lot of articles on OSAlert about Apple either announcing ZFS support, or taking back or ‘correcting’ a previous announcement?
I wonder what it is now…
Haven’t we seen a lot of articles on OSAlert about Apple either announcing ZFS support, or taking back or ‘correcting’ a previous announcement?
Yes we have, and this has been going on for years about Apple supposedly adopting one thing or another. Usually it’s prompted by Sun, because they are generally envious of Apple.
I’d take it with a pinch of salt, but clearly, if OS X is to move forwards, particularly as a server, Apple need something new.
With Solaris on ZFS, FreeBSD on ZFS (and soon probably the other BSDs), and now Microsoft’s nemesis number one getting ZFS too, it might be time for Microsoft to consider implementing it as well (and to let me delete that FAT32 shared disk).
Instead, they’ll probably come up with their own crap. Yet again.
With Solaris on ZFS, FreeBSD on ZFS (and soon probably the other BSDs), and now Microsoft’s nemesis number one getting ZFS too, it might be time for Microsoft to consider implementing it as well (and to let me delete that FAT32 shared disk).
Instead, they’ll probably come up with their own crap. Yet again.
Well, in theory, nothing stands in the way of a third-party port except finding someone smart enough, yet crazy enough, to do it.
http://en.wikipedia.org/wiki/Installable_File_System
I was hanging out in an OpenSolaris chat room when I was trying to set up a ZFS fileserver on my AMD Athlon Thunderbird 1.33GHz (which won’t happen anytime soon, because there are linking bugs which prevent non-SSE machines from booting).
Anyway, the guys in there were saying I’d be stupid to run a fileserver (even a home fileserver with 2 users) on ZFS without a 64-bit processor and a minimum of 2GB RAM (preferably 4).
So, if you need 2GB RAM for just the filesystem, does that mean for desktop usage I’d have to get 4GB RAM to match the performance of an HFS+ machine with 2GB?
I’d like to see that transcript. That’s ridiculous.
IIRC, Solaris had some bugs where the ZFS cache was sucking a lot of memory. Sun is working on fixing them, but I can imagine that such small machines are not much of a priority for them.
I’m sure that Apple will make ZFS “just work” before they ship an official release of it.
So you need a processor released in the past 18 months; even a barebones box has a DDR2 667/800 4GB/8GB RAM option [Corsair ValueSelect 2GB kits are dirt cheap], and I’ll leave the rest up to you.
Go to Newegg and build a box. If you spend more than $500 you’ve added extras unnecessary to meet even these offhand minimum requirements.
Perhaps it’s just me, but I have a problem with building a brand new box for running a goddamn filesystem. I have always favoured security and data integrity over performance. Still, if you need these specs to get decent performance, what are you going to need to run services on it?
I expect additional overhead from modern software and I understand their focus on new hardware. It doesn’t prohibit optimisations, though. It does spoil my plans of getting a Solaris server with ZFS and Zones on an older box (Athlon XP 2500+ with 768MB RAM).
“””
Perhaps it’s just me, but I have a problem with building a brand new box for running a goddamn filesystem.
“””
To be honest, I can’t help but suspect that these people saying you need a 64 bit processor and multi-gigabytes of ram to get decent performance with ZFS are full of it.
Perhaps this will help:
http://unix.derkeiler.com/Mailing-Lists/FreeBSD/current/2007-06/msg…
This may well be a reason why ZFS is on the back burner for Leopard. If it does show up, it will show up in OS X Server first and then work its way down to regular OS X. But if it’s as memory-intensive as it sounds, then a 1GB notebook could well be struggling.
I can easily see 2GB machines being the “norm” when Apple does decide to roll ZFS into the mainstream.
Perhaps it’s just me, but I have a problem with building a brand new box for running a goddamn filesystem.
Exactly. Traditionally my fileservers have been built out of my older machines.
The problem is….even my newest machine (P4 3.0GHz w/ HT) isn’t 64 bit.
Something is wrong if the fastest computer in your house is a fileserver.
I do know a lot of people who are using it with 32-bit and 1GB of RAM – without any drawbacks. Of course, it’s FreeBSD I’m talking about. Solaris, well, Solaris isn’t Mac OS X and it is not FreeBSD. Solaris is sometimes a huge resource hog.
I think their idea behind a lot of RAM and 64-bit was disk cache and faster checksumming. That’s for a fileserver, though…
I have a fileserver on a [email protected] and 1GB RAM. I use 4 Samsung 500GB discs in a RAID-Z. It works wonderfully, though it isn’t fast; the average transfer rate is about 30MB/sec. That is because ZFS is 128-bit and it doesn’t really like a 32-bit P4 CPU. When I upgrade to a 64-bit Penryn, I expect dramatic improvements in transfer rate. One guy at Sun blogged about his dual-core AMD with 2GB RAM, and he gets ~120MB/sec transfer rate. If you have a slow CPU it can be slow. But the security ZFS gives me is so nice to have. I never have worries anymore. Before, I backed up important files to several hard discs; now I worry until I have them on the ZFS RAID, and then all my worries disappear. ZFS can do truly wonderful things: detect a faulty power supply in a few hours and automatically correct the read/write errors the supply causes, etc.
The hardware you use affects a lot of things as well. Use good hardware SATA cards and hardware NICs and you’ll get better perf.
> One guy at Sun blogged about his dual-core AMD with 2GB RAM, and he gets ~120MB/sec transfer rate
Maybe as a peak in some laboratory condition, but not in reality. I saw it on a Solaris server and on FreeBSD, but such high rates are to some extent nonsense.
>> One guy at Sun blogged about his dual-core AMD with 2GB RAM, and he gets ~120MB/sec transfer rate
> Maybe as a peak in some laboratory condition, but not in reality. I saw it on a Solaris server and on FreeBSD, but such high rates are to some extent nonsense.
The Sun X4500 Thumper, which has dual(?) Opterons and 6 SATA cards with 48 500GB discs, achieves 640MB/sec reads. So, yes, the guy achieved over 100MB/sec. But he had a 64-bit CPU. There can be a dramatic performance increase if you double the number of bits, right? I achieve 30MB/sec now, with a 32-bit P4 and 1GB RAM. I can’t wait till I get a Penryn, which could play in the same ballpark as dual Opterons in terms of performance. I will easily achieve over 100MB/sec then.
When HFS+ was introduced with Mac OS 8.x around 10 years ago, it was rather impressive that it supported up to 10 TB drives. Of course, the B-Trees that keep track of information were less of a big deal than they were in the mid-1980s and even less so now. HFS+ has received incremental improvements since Mac OS X with journaling and case sensitivity, although it always preserved case.
ZFS is a logical way out of what’s now a rather old file system.
Improved problem recovery would be the greatest advantage, as HFS+ looks more fragile every day and Disk Utility/fsck doesn’t seem to handle the worst of the errors. I wonder what the effect will be on third-party disk repair and recovery software.
Improved capacity would mean a one time shift from HFS+ to ZFS and a lack of meaningful limits to file size. Of course, applications will continue to grow files to take advantage.
What kind of machine will be needed? It’s obviously a lot of data to manipulate, regardless of how clever the designers and programmers have been in making ZFS efficient. When HFS+ was introduced, the PowerPC 601 and 604 were common in Apple machines and the Pentium II and Pentium III in other desktop machines. Will a PowerPC G5 processor be the minimum or will it be a Core 2 Duo? Apple have been recently moving away from the G4 processors and rumours say that an 867 MHz G4 is required for Leopard to run at all.
I have read somewhere that ZFS will most likely use more CPU power and will drain my battery faster than HFS+. Is this correct?
I can see the benefits for this filesystem on a server and a workstation, but except for checksumming what are the benefits on a laptop?
(powerbook G4)
Instant file/folder forking. Want to try something out on a 15GB video file but don’t want to have to copy the file? Fork it with ZFS and only the changes are saved off as a shadow copy. You can then roll back to the original file instantly or commit to the new one.
In three years time we could be seeing ZFS+FCP all integrated offering super fast non-linear / parallel editing which will add tons of flexibility and speed to video editing.
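In ZFS terms, that kind of fork is a snapshot plus a writable clone, and only the blocks you actually change consume new space. A sketch with hypothetical dataset names (again, this needs a ZFS system to run):

```shell
# Freeze the current state of the project, instantly.
zfs snapshot tank/video@master

# A writable clone: the 15GB file appears in full, but shares
# all unmodified blocks with the snapshot.
zfs clone tank/video@master tank/video-experiment

# Commit the experiment by making the clone the primary dataset...
zfs promote tank/video-experiment
# ...or abandon it by simply discarding the clone:
# zfs destroy tank/video-experiment
```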
Will ZFS support aliases? HFS+ aliases are a great feature that will be sorely missed if ZFS does not handle them.