WinBtrfs is a Windows driver for the next-generation Linux filesystem Btrfs. A reimplementation from scratch, it contains no code from the Linux kernel, and should work on any version from Windows XP onwards. It is also included as part of the free operating system ReactOS.
WinBtrfs GitHub page
If you’re running a distribution that defaults to Btrfs, or you actively choose to use it on other distributions, and you also happen to dual-boot Windows because your boss makes you use some garbage corpo software, this driver will make your setup a bit easier to manage.
My biggest gripe with btrfs is that it doesn’t auto-recover from failures. It sucks that the system can fail to boot even when full redundancy is maintained. Sure, the right sequence of commands gets you back up and running, but in a colocated production environment that counts as a failure. Btrfs brings some awesome features to the table, and I honestly would have liked to include them in my brother’s build, but he’s not an admin… and when I tested it last year and it failed to boot, that made btrfs unfit for purpose. I don’t know why the btrfs devs aren’t prioritizing this; mdadm doesn’t have these faults, it just works. Again, I like btrfs in theory, but the system shouldn’t sit there waiting for an administrator when valid copies of the data still exist.
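For reference, the “right sequence of commands” I mean, on a btrfs RAID1 with a dead disk, is roughly this (a sketch; the device names, devid, and mount point are all hypothetical):

```sh
# Find the filesystem and note the devid of the missing device
# ("*** Some devices missing" in the output).
btrfs filesystem show

# A filesystem with a missing device refuses a normal mount;
# it has to be mounted with -o degraded by hand.
mount -o degraded /dev/sdb2 /mnt

# Replace the missing device (devid 1 here) with a new disk.
btrfs replace start 1 /dev/sdc2 /mnt

# On older kernels, chunks written while degraded may be created as
# "single"; convert them back to raid1 to restore full redundancy.
btrfs balance start -dconvert=raid1,soft -mconvert=raid1,soft /mnt
```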
Completing a ZFS recovery takes a few commands too, but it doesn’t even put you in this situation… you just end up with duplicated boot environments on the mirrors: if one fails, the other boots and keeps going.
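For comparison, the ZFS side is roughly this (a sketch; the pool and device names are hypothetical):

```sh
# A degraded mirror keeps running and booting; check its state:
zpool status rpool

# Swap the dead disk for a new one; resilvering starts automatically.
zpool replace rpool /dev/sdb3 /dev/sdc3
```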
Alfman,
Their mode of operation seems to be taking the safest action, probably due to failures in the past.
Yes, it will not boot by default if there are inconsistencies. But as you said, it is possible to mount it read-only and repair manually, or ask it to repair automatically on the fly.
For colocated scenarios, I think IPMI would be more than sufficient. (Or “real” redundancy, as in a fallback secondary server as a failsafe.)
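Something like this (a sketch; the device names and mount point are hypothetical):

```sh
# Option 1: mount degraded and read-only to get at the data safely.
mount -o degraded,ro /dev/sdb2 /mnt

# Option 2: mount read-write and let a scrub rewrite any bad copies
# from the good mirror on the fly.
mount -o degraded /dev/sdb2 /mnt
btrfs scrub start /mnt
btrfs scrub status /mnt
```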
sukru,
If there are any lingering issues, they need to fix them. People like me are very anxious for it to be production-ready.
I wasn’t talking about inconsistencies, though. A degraded raid array is an anticipated state for raid solutions, and it does not mean that any data has been lost. Other raid solutions (e.g. LVM, mdraid, hardware RAID, ZFS) can continue to operate degraded with zero downtime. They’ll even start cloning data to a hot spare automatically if one is available. Every other raid solution has been designed to lose redundant disks while allowing the OS to continue to operate normally. This is a huge benefit of using raid, and it has saved me a few times. Btrfs, however, stands alone in bringing down the system until an administrator intervenes. I really wish they would fix this.
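For reference, a hot spare with mdadm looks roughly like this (a sketch; the device names are hypothetical):

```sh
# RAID1 with a hot spare: if a member fails, md starts rebuilding
# onto the spare by itself, and the array stays online throughout.
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      --spare-devices=1 /dev/sda1 /dev/sdb1 /dev/sdc1

# A spare can also be added to an existing array at any time:
mdadm --add /dev/md0 /dev/sdd1
```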
I’ve thought about fixing this with external tooling, but it’s complicated by the fact that a disk failure can happen at any time, not just at boot, and it would be hard for me to guarantee that my tool handles all of btrfs’s failure modes. The fixes really should be made within btrfs itself, and it needs to be as robust as other raid solutions.
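The kind of external check I mean would be something like this sketch (the mount point is hypothetical), polling btrfs’s per-device error counters from cron:

```sh
#!/bin/sh
# Report any non-zero btrfs error counter (read/write/flush I/O
# errors, corruption, generation mismatches) to syslog.
errors=$(btrfs device stats /mnt/data | awk '$2 != 0')
if [ -n "$errors" ]; then
    logger -p daemon.err "btrfs errors on /mnt/data: $errors"
fi
```

But as I said, this only covers one failure mode; a disk that disappears entirely between polls is exactly the case where the fix belongs inside btrfs.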