ACM’s latest journal has an interesting article about RAID which suggests it might be time for triple-parity RAID. “How much longer will current RAID techniques persevere? The RAID levels were codified in the late 1980s; double-parity RAID, known as RAID-6, is the current standard for high-availability, space-efficient storage. The incredible growth of hard-drive capacities, however, could impose serious limitations on the reliability even of RAID-6 systems. Recent trends in hard drives show that triple-parity RAID must soon become pervasive.”
ZFS is already ahead of the curve: it introduced triple-parity RAID (raidz3) in last summer’s release of OpenSolaris.
* Sigh *
A. ZFS is not a standard (unlike, say, RAID6). It’s an implementation. Far worse, it’s a patent-encumbered implementation of a triple parity RAID level.
B. You’ll be amazed to hear that most of the world doesn’t use OpenSolaris.
C. As far as I remember, OpenSolaris’ modified grub cannot boot from ZFS RAID. (And nothing beats losing your boot record + kernel on a production machine, right?)
– Gilboa
There is no standard detailing RAID-6, just a widely accepted definition.
ZFS happens to implement something with similar redundancy characteristics to RAID-5 and RAID-6 and also implements triple parity.
So? That doesn’t change that OpenSolaris supports triple parity right now. Since OpenSolaris has networking support, including CIFS, iSCSI, and NFS serving, it can still be useful to those who don’t currently use OpenSolaris.
ZFS is not the only way to provide redundancy for a boot partition.
Semantics.
If I can take an array built on an on-board RAID5 controller (read: soft-RAID) and move it from Windows to Linux and back, it’s a standard.
… And yet, it’s still irrelevant for the rest of the world.
So?
– Gilboa
My god, you are a deluxe troll.
“As far as I remember, OpenSolaris’ modified grub cannot boot from ZFS RAID.”
False. OpenSolaris and Solaris 10 have been able to boot from a mirrored ZFS pool for ages.
“If I can take an array built on an on-board RAID5 controller (read: soft-RAID) and move it from Windows to Linux and back…”
I can’t imagine how you can possibly try to compare HW and SW based RAID. Anyway, I can import my zpool on Solaris, FreeBSD, Linux/FUSE and OSX. With Linux’s LVM or md-based RAID – no luck. I also can control onboard RAID controller from Solaris with raidctl command.
“…it’s a standard.”
Wow, a Linux fanboy talks about standards. You might want to talk about when Linux will be POSIX compliant, or when it will finally get a standards-compliant NFSv4 implementation. So far, Linux ignores standards whenever possible.
“… And yet, it’s still irrelevant for the rest of the world.”
Even if that was true, I don’t understand why people like you have the need to bash (Open)Solaris. You don’t like it – don’t use it.
“Gee, why can’t I use ZFS on my Linux servers?”
Because GPL fascists wrote the license in a way that forbids it. Why can OpenSolaris and FreeBSD exchange code both ways, but Linux cannot?
What I don’t understand is why every Solaris- or ZFS-related post has to attract BS like this. You are upset that there is a technically superior OS to Linux? Go hack the kernel all day, go petition Linus for a stable API/ABI, or do something meaningful. I certainly don’t comment on every Linux-related post about what a pile of crap it is, although I could.
You call me a GPL fascist, Linux fanboy, and I’m the troll.
Enjoy talking to yourself.
– Gilboa
I did not call you a GPL fascist, unless you were the author of it or the decision maker behind the licensing of the Linux kernel. I did, however, call you a Linux fanboy, and your posts fully qualify you as one, no doubt about that.
I have no problem with you not wanting to communicate with me, but I would suggest you extend that to all discussions about topics where you don’t have anything to say. You shouldn’t bash a product/company/technology just because you personally don’t like it.
You can move a RAID array using a certain implementation and move it to another operating system that also supports that implementation. But people generally do not need to do this in the real world. (And there isn’t really a standard filesystem worth running between Windows and Linux.)
Hardware and software RAID implementations all work in different ways; they are in no way standardized beyond the fact that they all implement some form of mirroring, striping, and striping with parity (for those supporting RAID5/6). Linux dmraid understands some of these proprietary formats.
OpenSolaris is not irrelevant to the rest of the world – it’s freely downloadable and usable, just like that “Linux” thing that was irrelevant in the early 90s. And the new ZFS features in OpenSolaris will be in Solaris at some point.
I personally use Linux md RAID1, RAID10, and RAID6, and these arrays only work on Linux.
I don’t run any Linux applications on the server other than those needed to share the filesystems / arrays. I could provide the same services using OpenSolaris (or Windows, or FreeBSD…) And OpenSolaris has the huge advantage of ZFS.
You mentioned the inability to boot from ZFS RAID with GRUB as if it were a major disadvantage that makes ZFS unusable in a production environment. This is not the case.
Having a driver for a particular piece of hardware in two OSes does not make a standard. Common, sure, but standard, no. Now, if you could build the RAID5 array on one brand and version of controller and then use that same array with a different brand and version of controller, that would be closer to a standard.
The various RAID levels aren’t a standards spec to follow; they’re more like an abstract description of an algorithm.
Let’s say RAID5 is like a V6 engine in a car:
V6 = Use 6 cylinders arranged in a 2×3 V configuration.
There’s no reason to expect a Honda V6 to fit in place of a Ford V6. Just because they’re both a V6 doesn’t make V6 a spec standard, just a common way to design an engine.
RAID5 = use 1 disk worth of parity information distributed and staggered across the entire array.
There’s no reason to expect a RAID5 array built with an Adaptec controller to plug right in and work in place of one built with an LSI controller. Just because they’re both RAID5 arrays doesn’t make RAID5 a spec standard, just a common way to build a disk array.
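To make the “one disk worth of parity” idea concrete, here is a minimal Python sketch (illustrative only, not any controller’s on-disk format): the parity block is simply the XOR of the data blocks, so any single missing block can be rebuilt from the survivors.

# Toy RAID5-style parity: parity = XOR of the data blocks.
from functools import reduce

def xor_blocks(blocks):
    # XOR equal-sized blocks byte by byte
    return bytes(reduce(lambda a, b: a ^ b, chunk) for chunk in zip(*blocks))

data = [b"disk0 block....", b"disk1 block....", b"disk2 block...."]
parity = xor_blocks(data)

# Simulate losing disk1 and rebuilding it from the survivors plus parity.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]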
OK.
RAID5 is not a standard / RFC / IEEE / etc.
It’s common.
I chose the wrong wording.
I stand corrected.
– Gilboa
RAID is not a standard either. What does userbase have to do with anything? The same could be said about Linux/BSD/Windows, whatever… “not everyone uses it”.
You can use RAID on your Solaris root partition, and then manage extra partitions and their redundancy via ZFS, yes, even with triple parity. If you are putting many TBs of data in your root partition, you deserve what you get.
What the heck do patents have to do with triple parity? I think you just had an anti-solaris narrative and decided discussing triple-parity was a good place to dump it…
As I said in another post, if I can take a soft-RAID5 array (e.g. on-board controllers) that was built under Windows and move it to Linux and back, it’s a standard.
Option A:
Hardware RAID7.
A. Cross platform support.
B. Battery backup for write cache.
C. Huge cache.
D. Full triple parity for world + dog: boot sector, boot partition, OS, data, etc.
Option B:
ZFS:
A. Works under OpenSolaris, and to a lesser extent, FreeBSD. (And no, FUSE/Linux is not an option)
B. No write cache.
C. Triple parity for data only.
Gee, why can’t I use ZFS on my Linux servers?
Wait a minute, let me think…
CDDL… Not being able to reverse engineer a GPL’ed version of ZFS under Linux due to patents…
No idea. Really. None.
God I hate fanboys.
I’ve been using Solaris since mid-’98. You? *
– Gilboa
* I know that in your fanboyish eyes, anyone who doesn’t share your view that ZFS is the best thing since sliced bread is either stupid or has something personal against Solaris, but when you grow up, you’ll understand that some people might have priorities other than “Oh look! Shiny!” – e.g. not being tied to a certain platform, or problems with OpenSolaris’ support model that make it irrelevant for mass production deployment. (And no, we are not turning this thread into a flame war…)
Gee, you are a Linux troll.
It always amazes me when these Linux zealots, without any hesitation, assume that something is either somehow bad or irrelevant if it is not part of Linux. And if it is part of Linux, it can never be bad or irrelevant. Gee.
By your comment I assume that you don’t really appreciate the fine irony of naming yourself strcpy, right?
– Gilboa
Sure.
And what I mean by this is that instead of appreciating different operating systems and being glad that (Open)Solaris has ZFS, you come here ranting about patents and Linux, implying that Solaris/ZFS is not production ready, and even bringing the CDDL to the table, though it is the GPL that is incompatible (even with itself, duh). That’s enough signs of Linux zealotry for me.
One word.
No.
I’d -really- suggest you improve your reading skills.
Yes, but that’s only if you use a file system that both operating systems can read and the controller is recognized by both operating systems. It’s only possible because the controller abstracts the array into a logical volume that the PC can see — not because it’s a “standard” RAID configuration. The userland portion of the operating system doesn’t care about anything but the file system. The OS kernel and drivers take care of the hardware abstraction layer, so if both platforms don’t have drivers for your controller’s chipset or can’t read the file system format, then you’re out of luck. I don’t see how any of this makes portability between operating systems easier merely because you chose RAID. I could use Fibre Channel or iSCSI to mount a volume on either OS, too, but that’s also accomplished via drivers and file system support, as the OS still doesn’t care how the disks are organized by the storage manager.
… We’re past that stage.
I already admitted that using the word “standard” in the context of RAID was a mistake.
Can we move along?
By all means. It would be helpful if there were an option here to see a threaded view of conversations.
I’m guessing you’re on a handheld device, because ‘threaded view’ is the default setting on desktop browsers.
I think you are projecting a tad too much here regarding “fanboyism”… Good grief.
And BTW, you don’t seem to know what the word “standard” means.
C) You can mirror your ZFS system disk. That makes it less likely that you lose your production system.
Sure, ZFS has a different license than Linux, so what? ZFS is still open and other OSes use it. If Linux has a problem with the CDDL license, so what? Does the whole world revolve around Linux? Why? “If Linux does not support it, it sucks”? Why so egocentric? Use FreeBSD instead of Linux, then. You will get better quality than Linux, and you will get ZFS.
Regarding standards, ZFS is not a standard, but it is open. And you can export your ZFS RAID and import it into other OSes. Even onto CPUs with different endianness! Can you do that with an ordinary RAID?
And ZFS protects against silent corruption. CERN did a study on 3000 hardware Linux racks, and it turned out that 152 of them corrupted data without even noticing it! CERN noticed this because they wrote a prespecified bit pattern, and after a short while they saw differences between the expected bit patterns and the actual RAID data. Had they run the test for longer, they would have seen even more corrupted data! As the CERN guy concludes: ordinary checksums (which RAID does) are not enough. You need end-to-end checksums to detect these errors so you can correct them – he suggests ZFS as a solution to this problem.
http://storagemojo.com/2007/09/19/cerns-data-corruption-research/
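To show what “end-to-end checksums” mean in practice, here is a minimal Python sketch of the idea (greatly simplified; ZFS actually keeps each block’s checksum in its parent block pointer, and the dict below just stands in for a disk):

import hashlib

def write_block(store, addr, data):
    # store a checksum of the data alongside it
    store[addr] = (data, hashlib.sha256(data).hexdigest())

def read_block(store, addr):
    # verify on every read, so corruption is detected instead of returned silently
    data, checksum = store[addr]
    if hashlib.sha256(data).hexdigest() != checksum:
        raise IOError("silent corruption detected at block %r" % addr)
    return data

store = {}
write_block(store, 0, b"important data")
store[0] = (b"important dat4", store[0][1])   # simulate a bit flip on disk
try:
    read_block(store, 0)
except IOError as e:
    print(e)   # the corruption is caught on read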
Triple parity is needed because, as disks get bigger, arrays take longer to rebuild when a disk breaks. With large disks, rebuilding the array can take over a week! During that time there is more stress on the other drives, so they are more likely to break too. So two-disk parity is not enough – for big drives. If you use small disks, then 1 or 2 disk parity is enough. Triple parity is only needed for large disks > 2TB.
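To put rough numbers on that (purely illustrative figures, not benchmarks): the rebuild window is roughly capacity divided by the sustained rebuild rate, and that rate often drops sharply when the array is still serving production I/O.

# Illustrative only: rebuild window ~ capacity / sustained rebuild rate.
def rebuild_hours(capacity_tb, rebuild_mb_per_s):
    return capacity_tb * 1e12 / (rebuild_mb_per_s * 1e6) / 3600

print(rebuild_hours(2, 100))   # idle 2 TB drive at 100 MB/s:   ~5.6 hours
print(rebuild_hours(2, 5))     # busy array throttled to 5 MB/s: ~111 hours (days)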
It’s not the standard but it’s a standard.
And while I agree with you that an agreed open standard for triple parity RAID is needed, there is at least already a file system available which supports it.
Plus, at least said implementation is open source (even if it is patent-encumbered).
What about FreeBSD or Linux (albeit via FUSE)?
Or how about one of the many other Solaris-derived projects, from pure Solaris to Nexenta (OpenSolaris plus Debian userspace tools)?
Besides – if your business is large enough that triple parity RAID (or the lack of it) is a serious issue, then I’m sure it can either afford to run a dedicated *Solaris file server (even if it’s only virtualised) or learn how to run its additional tools on *Solaris.
You’re right, however OpenSolaris can boot from a ZFS mirror.
And why would you want to boot off your large data drives anyway?
It would make more sense to keep your OS separate from your production data.
Don’t get me wrong, I’m not suggesting that there isn’t a need for an official standard.
However, I’m also saying don’t be quick to dismiss ZFS just because it’s proprietary.
I’m not quick to dismiss ZFS.
I actually have an OpenSolaris VM w/ZFS on this machine (under Linux/KVM).
I am saying that ZFS is not the solution to everything.
It cannot replace hardware RAID controllers.
Some people (like me) have major misgivings about the lack of separation between the disk level (e.g. hardware/software RAID) and the FS layer (e.g. NTFS, ext4, etc).
And last but not least, *Solaris is also an OS. And you don’t select an OS just because it has a shiny file system.
– Gilboa
For some applications, ZFS can be a better (cheaper) fit than hardware RAID.
If you don’t like it, then why don’t you explain why you think it is a bad idea instead of trolling?
No one said “Everyone should use Solaris because it has ZFS”. The first post mentioning ZFS simply mentioned that ZFS now has triple parity in OpenSolaris.
A. I wasn’t trolling. I have nothing against ZFS or OpenSolaris (I use them both… not in production, though). I am -very- much against the people who automatically post a “use ZFS instead” message in response to each and every storage-related news piece.
B. As for your question, I see ZFS’ lack of layer separation as a major issue due to the following problems:
1. We have >30 years of experience dealing with FS and volume manager errors. In essence, even if your FS completely screwed up huge chunks of its tables (no matter how many copies of said tables the FS stores), in most cases the data is still salvageable.
2. We have >20 years of experience in getting screwed by RAID errors. If something goes wrong at the array level and you somehow lose the array data/parity mapping or parts of it, the data is doomed. Period.
3. As such, I’m less afraid of trying new FSes: ext3, ext4, btrfs, ZFS. As long as I can access the on-disk data when everything goes to hell, I’m willing to take the chance. (Of course, as long as I don’t get silent corruption that goes undetected for years…)
4. On the other hand, I want my RAID to be tested and tested again, and I want it to use as little code as humanly possible. (E.g. Linux SW RAID [1])
5. ZFS is relatively new, and it combines 3 layers that I personally prefer to keep separate. A simple bug in one of the bottom layers (say, the pool management layer) can spell an end to your data in an unrecoverable way. And with a file system as complex and relatively immature as ZFS (compared to, say, ext2/3 or NTFS), this is a -major- flaw.
C. Last but not least, while ZFS is -a- solution for improving the resiliency of RAID arrays, in my view the OS lock-in, the patent issues (that prevent other OSes from reimplementing ZFS), and the less-than-ideal implementation make ZFS a far from ideal solution.
– Gilboa
[1] $ cat /usr/src/kernels/linux/drivers/md/*raid*.[ch] | wc -l
13660
For the last time: there is no OS lock in. I’ve been patient with you but you keep on spouting this BS.
You keep moaning about choice and how people should be open to other file systems, but so far all I can see is you blithering on about how ZFS won’t run on your favourite OS.
In fact, you’re starting to come across as the type of person that many of the ZFS engineers at Sun were fighting against when drafting up what license to apply to their file system.
The sort of person that expect everyone to bend over and kiss the holy grail of GPL as if it was the second coming.
I mean seriously – you’ve made 2 good points and the rest of your posts have been a self-indulgent CDDL rant loosely masquerading as a scientific argument (and your rant is particularly worthless given that high-end virtualisation costs nothing these days).
Yes you don’t like the joined-up layers of ZFS – but that’s opinion. There’s no “right” or “wrong” way – just a preferred way.
Yes you’d like to see more universal standards.
But let’s not overstate the facts just so you get some attention while standing on your soapbox.
Have you ever heard of a backup?
Here is a link to the Wikipedia page on backups, in case you are not familiar:
http://en.wikipedia.org/wiki/Backup
Why take a chance when you can make backups?
So your complaint has nothing to do with the actual design itself, but just that it’s new.
FreeBSD also has ZFS and there is a Mac OS X port. OpenSolaris is open source; even if it were the only OS with ZFS, the resulting lock-in would be minimal.
In any case, with virtualization it doesn’t matter.
And also: md and ext4 are Linux only.
What patents in ZFS that prevent reimplementation? Why do you keep repeating this? Are you sure you are not trolling?
Gilboa, you are hilarious. Are you sure you are not SEGEDUNUM? He keeps repeating weird stuff, like that ZFS requires several GB of RAM to start – even though several people have explained that this is not true, he keeps claiming it.
Regarding the “rampant layering violation” that Linux kernel developer Andrew Morton called ZFS (now why would a Linux developer call ZFS something like that?): I’ve heard that Btrfs is doing something similar with its layers – does anyone know more about this layering violation in Btrfs?
I don’t really understand why you have something against a superior product because of a different design. If you see a product with a different design that is the best on the market, compared to a product with a standard design that is inferior – which product do you choose? Do you refuse to use a database application if it does not use the standard three-layer model (DB, logic, GUI)? If a product does not use standard programming languages but instead uses something esoteric such as Erlang – do you refuse to use the product (even though it is the best on the market)? I would understand your reasoning if ZFS were inferior, but almost everyone agrees that ZFS is the best filesystem out there. So, if a product is best but has a different design – how can the design matter to you? It is only the result that matters to most people. If something is best, then it is best. No matter the design, the price, or whatever. It is best.
ZFS has tried to get rid of old assumptions and redesign the filesystem from scratch, targeted at modern devices. And that is a bad thing? When the first chips were invented, with superior performance to discrete transistors – would you have refused to use chips because they were different?
The main ZFS architect, Bonwick, explains why ZFS has a different layer design; his point is that you can optimize an unnecessary layer away if you are clever enough. If you are not clever enough, you continue to use the standard solution:
http://blogs.sun.com/bonwick/entry/rampant_layering_violation
Regarding your “ZFS is OS lock-in, patent issues” etc.: First of all, ZFS is not OS lock-in. There are other OSes than Solaris that use ZFS. Wrong again.
Second: how can you say ZFS is lock-in when the code is open and out there? If Sun goes bankrupt, we have access to the ZFS code and can continue development. What happens if your hardware RAID vendor goes bankrupt? Do you expect development of your hardware RAID to continue?
Which is more lock-in: a hardware RAID controller (which needs device drivers for an OS; you cannot move your discs to another controller, nor to another OS), or ZFS (you can compile the open ZFS code on every OS you want and move your discs freely between those OSes, even with different endianness)? Hardware RAID disks cannot be moved to different OSes, and if they use different endianness you are screwed. ZFS can move discs between Apple Mac OS X, FreeBSD, Solaris, OpenSolaris, and every other OS that compiles ZFS – even between CPU architectures with different endianness! You cannot do this with hardware RAID – hardware RAID is lock-in; you are forced to wait for drivers for your OS, you cannot do anything, you have to wait for the vendor to do something. ZFS is not lock-in, but hardware RAID is.
Man, you are just plain wrong about almost everything. The things you say are not even factually correct. It is like saying “In my opinion, that 2m guy is shorter than the other 1.5m guy” – but that is simply not true, factually. You can say “I don’t like ZFS” – but your reasons for it are false. Hardware RAID is the most lock-in there is; you cannot do shit, only the vendor can do something. You don’t have access to the BIOS code, you have nothing. If the vendor goes bankrupt, you can ditch your card.
And hardware RAID is less safe than ZFS. My friend, who is CTO of a small company, lost two RAIDs due to bugs in the hardware RAID BIOS. The vendor confirmed the bug but was not releasing patches yet. That was one year ago; I don’t know what has happened since then.
While I do see where you’re coming from, I think you’ve over-reacted to the opening post, as nobody was suggesting that ZFS was the solution to everything.
We were just saying that currently there is a solution to the lack of triple parity RAID and that solution is ZFS. It was a very specific point he was making rather than the generalised “ZFS will solve world debt” type evangelical speak that you’re (understandably) sick of reading.
Sure there’s needs for other solutions and standards – nobody disagrees with that. But the fact remains that ZFS DOES address the triple parity problem.
So personal opinions of ZFS aside – the original poster was spot on with his comments.
Actually you do if the purpose of an OS is to serve files.
Choosing an OS is about selecting the system that has the right tools to do its specific job best.
So if you need a server with a file system such as ZFS, then selecting *Solaris because of its FS is the correct decision to make. Just as if you want a media centre, you’d be more interested in graphics and sound card support than in its file system.
Besides, you talk as if you can’t get Samba, Apache, et al. for *Solaris, which clearly isn’t the case.
Guess I misread the initial post.
As you suggest, even though I’m using Solaris and FreeBSD machines (mostly for tests and multi-platform support) and do appreciate ZFS, I’m somewhat tired of the general “ZFS is the new green” auto-posts.
As for what I have “against” ZFS, please read this:
http://www.osnews.com/thread?404962
ZFS does solve this issue, but so does software RAID1 over two sets of RAID5/6. (Read: it greatly depends on your definition of what counts as a solution…)
One cannot ignore the fact that, unlike a theoretical hardware RAID7 controller, the ZFS/OpenSolaris combo is not suited for everybody – far from it.
– Gilboa
But nobody said ZFS was suited for everybody!!
We’ve already covered this so stop misquoting people.
Oh and a hardware RAID7 controller isn’t suited for everyone either.
Why can it not replace hardware RAID controllers? Don’t you know that ZFS protects against many more errors than hardware RAID? I would not trust hardware RAID, actually.
Here is another presentation on silent corruption from the CERN guy (their multi-billion-dollar physics machines produce huge amounts of data – imagine corrupted data worth billions of dollars). He concludes that ordinary checksums (hardware RAID) are not enough; you need end-to-end checksums (ZFS). He talks about how they get corrupted data on their Linux rack servers, silently:
https://indico.desy.de/getFile.py/access?contribId=65&sessionId=42&r…
Even better; here is a website explaining how bad RAID-5 is, and lots of the shortcomings hardware RAID has:
http://www.baarf.com
Lots of sysadmins explain technical details there.
Forgive me for being blunt.
But you are the 2nd person to post this link.
I’m well aware of silent corruption issues.
Nevertheless, had you taken the time to read the rest of the thread before posting, you’d have noticed that I have fairly reasonable reasons for avoiding ZFS in production use, even though I do use Solaris.
– Gilboa
That’s entirely subjective.
I have an idea, how about adding just a little more RAID?
I got a great idea.
Let’s write an extra copy of every write somewhere… and the cool thing with this design is that reads will be TWICE as fast since the data exists in two different places; also, it’s even more secure than RAID7 because you can lose not just 2 or 3 disks but up to ONE HALF of all your disks and still have your data.
I should patent this idea.
/seriously at some point it just makes more sense to standardize on RAID 10 doesn’t it?
The simplicity of the algorithms and failure modes for the coding in the storage devices alone makes it less complex/more reliable.
I’m starting to think that 3-way mirroring is not crazy, like the article says:
In the ZFS context I can see the value of 3-way mirroring, just because a 2-way mirror wouldn’t be able to recover from a checksum error during a reconstruction, just as with single-parity RAID. On the other hand, in most cases I think I would feel just fine with a 2-way mirror and a good backup.
Also, looks like it’s not just Sun on the triple parity path:
http://www.ntapgeek.com/2010/01/netapp-triple-parity-raid-patent.ht…
Only skimmed the patent, but it seems like this NetApp implementation uses only XOR, like RAID-DP I think, which is interesting.
No. You can lose one drive in each pair of mirrors. A two-drive failure will result in data loss if both mirrors in a pair happen to fail. A two-drive failure in standard RAID6 does not result in any data loss.
Assuming 1 TB drives in a 4 TB array and a 1E-15 bit error rate, here are roughly calculated probabilities of data loss during rebuild after a single drive has failed and been replaced (a rough sketch of the arithmetic follows the list):
mirror (8 drives total): .00859
two mirrors (12 drives total): .00007
single parity (5 drives total): .03436
double parity (6 drives total): .00118
triple parity (7 drives total): .00004
quad parity (8 drives total): 1.6E-9
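A back-of-the-envelope Python sketch of how figures like the mirror and single-parity rows can be derived, assuming ~8e12 bits read per surviving 1 TB drive and a 1e-15 unrecoverable-error rate per bit (the numbers above were presumably computed with a more detailed model, so small differences are expected):

# Probability of at least one unrecoverable read error while reading
# N whole drives during a rebuild (simple independent-bit model).
BER = 1e-15                # unrecoverable errors per bit read
BITS_PER_DRIVE = 8e12      # ~1 TB drive

def p_rebuild_error(drives_read):
    return 1 - (1 - BER) ** (drives_read * BITS_PER_DRIVE)

print(p_rebuild_error(1))  # mirror rebuild, read 1 surviving drive:   ~0.008
print(p_rebuild_error(4))  # RAID5 rebuild, read 4 surviving drives:  ~0.032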
You’re wrong.
A. A theoretical RAID7 only wastes 3 drives – no matter how many drives you have (15% in a 20-drive array). RAID 10 will -always- waste at least 50% of the space.
B. RAID7 can reliably survive the loss of 3 drives. RAID 10 can only reliably survive the loss of one drive. If you lose two (a full mirror), you lose everything. (The top-level stripe set is lost.)
– Gilboa
Uh, no. That’s not how RAID 10 works. Since it’s a stripe of mirrors you can lose all but one drive in each mirror set. How many actual drives that translates to depends on how many drives are in each mirror set and how many mirror sets you have.
Really you’re both right. Best case catastrophic loss: you lose 50% of all your drives, but it’s only 1 drive out of each mirror, so everything is still fine. Worst case catastrophic loss: you lose both drives in a single mirror and the entire striped array is toast. So you could lose up to 50% of the drives and still be okay, or you could lose 2 and be screwed.
A. If you only want to waste 50% of your space, you must use 1:1 mirroring.
A1. As long as your array uses 1:1 mirroring, losing two drives of a certain mirror-set will kill the array.
B. If you use a 1:3 mirroring, your RAID10 will survive a 2 drive failure, but will waste a staggering 66% of the total disk space.
Either way, RAID 6 is far more efficient and/or resilient – let alone a theoretical RAID7.
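For completeness, a trivial Python sketch of the space-efficiency arithmetic behind this comparison (usable fraction of raw capacity; the drive counts are just examples):

def parity_usable(n_drives, parity_drives):
    return (n_drives - parity_drives) / n_drives

def mirror_usable(copies):
    return 1 / copies

print(parity_usable(20, 2))   # RAID6, 20 drives:           0.90 usable
print(parity_usable(20, 3))   # "RAID7"/raidz3, 20 drives:  0.85 usable
print(mirror_usable(2))       # 1:1 mirroring (RAID 10):    0.50 usable
print(mirror_usable(3))       # 1:3 mirroring:              0.33 usable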
– Gilboa
Sure, but that means you CAN lose more than one drive. If this is a viable compromise or not depends on your needs and requirements. Storage is (relatively) cheap these days, after all.
The gist of the article was that RAID7 should be defined as 3 or MORE parity drives.
But in all fairness, I believe the answer is rethinking storage strictly in terms of mirrors; object-based storage is the future.
The idea is that you have something called “storage” and it’s constructed out of mirrored “elements”. Mirrored elements are ideally created using the best independent paths possible. As failures occur, affected elements are re-mirrored, again to the best independent path. Such a system usually requires independently redundant metadata areas for fast element lookups, etc.
A highly reliable object storage system would use 3 element mirrors (for example).
Sure, more space is used to provide the extra reliability, but such a system will also likely greatly outperform parity-based approaches on both reads (if done well) and writes.
Also, these kinds of object-based storage systems should ideally be able to scale well beyond petabytes (something that is hard to do today).
Rebuilds on parity-based RAID can take days… and it’s getting worse. With good granularity, object-based systems might rebuild in a matter of seconds or just a few minutes.
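As a concrete (and entirely hypothetical) illustration of the re-mirroring idea, here is a toy Python sketch; the class, method, and device names are made up for this example, and the point is that only the elements touched by a failed device need rebuilding:

import random

class ObjectStore:
    def __init__(self, devices, copies=3):
        self.devices = set(devices)   # independent paths/devices
        self.copies = copies
        self.placement = {}           # element id -> set of devices holding a copy

    def put(self, elem_id):
        # place each new element on the required number of independent devices
        self.placement[elem_id] = set(random.sample(sorted(self.devices), self.copies))

    def fail_device(self, dev):
        # drop the device, then re-mirror every affected element elsewhere
        self.devices.discard(dev)
        for elem_id, holders in self.placement.items():
            if dev in holders:
                holders.discard(dev)
                spare = [d for d in self.devices if d not in holders]
                if spare:
                    holders.add(random.choice(spare))  # fast, per-element rebuild

store = ObjectStore(devices=["disk%d" % i for i in range(8)], copies=3)
for i in range(100):
    store.put("elem%d" % i)
store.fail_device("disk3")   # only elements that lived on disk3 get re-mirrored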
There are so many to choose from. ZFS is under the CDDL open source license & runs on FreeBSD, Linux, & OS X. But check the great & powerful wiki of pedia for more non-standard RAIDs & non-RAID disk arrays. The fact is that disk storage gets more complicated relative to its capacity, & it doesn’t matter what you use so long as it works for you & your org.
I worked on the Linux ZFS port using FUSE.
1. Some people feel that FUSE filesystems are second class citizens on Linux. I happen to agree, but you may not.
2. Quite apart from the fact that it’s a FUSE filesystem, zfs-fuse has bugs. Some people wake up one morning and their pools don’t import. Sometimes this can be fixed with specialized single-purpose tools from the author and sometimes they can’t. For more zfs-fuse failures, check the mailing list.
3. If you think ZFS is neat, I strongly recommend throwing your support behind btrfs. It’s not as good as ZFS, but you will eventually be able to trust your data to it on Linux.
You worked on the original ZFS-FUSE implementation? Cool. Were you the original Google SoC developer?
There was some talk somewhere about ZFS coming to Linux natively, that someone was taking care of the patent and licencing issues. Do you know anything about that?
Is there any particular reason that btrfs isn’t as good?
Last time I checked, it seemed to be a couple of missing features away from being equivalent to the first versions of ZFS.
Is it just a matter of being younger?
I think you answered your own question there.
But Btrfs is a great filesystem embryo and I’m looking forward to using it some time in the future, when it’s more equivalent to the current versions of ZFS.
According to its web site, “btrfs is under heavy development and is not suitable for any uses other than benchmarking and review. The btrfs disk format is not yet finalized but it will only be changed if a critical bug is found and no workarounds are possible.”
In the meantime, if you need a robust production file system, I would recommend choosing a mature file system that’s more appropriate for your needs. For example, GFS or OCFS2 are enterprise-ready clustered file systems. They are both GPL licensed, but I wouldn’t be so concerned about licensing unless you have overwhelmingly strong beliefs that require you to strictly adhere to superstitions instead of relying on more rational metrics like performance and stability.
It’s also worth noting that Accusys has a range of controllers that support triple parity RAID. Search their site for “RAID TP” for more details. q.v. http://www.accusys.com.tw/FAQDetail.aspx?Lan=en&FAQ=39
Since RAID is merely an acronym in common usage, its use in describing a product is not restricted by any standards organization. If we wanted, we could define RAID 7 here on OSAlert and nobody could challenge our authenticity. However, we may need a marketing group to extol the virtues of RAID 7 and increase its global mindshare. “Our amazing disk array utilizes parity calculated from a bifurcated NAND engine to protect data segments by cross-pollinating moon beams with pixie dust.” Hello, angel investors!