Last year, Ubuntu developers pushed to remove Zsys from Ubuntu’s Ubiquity installer. Zsys is a tool Ubuntu created to make it easier to manage and maintain ZFS-based installations. In a bug report they bluntly noted that ‘priority changes’ in the desktop team meant Zsys was no longer something they want to “advertise using”.
[…] As of writing, Zsys remains available in the Ubuntu archives, but its development isn’t looking healthy. Canonical’s contributions effectively fall off a cliff circa April 2021 based on GitHub commits, with only a trivial tweak made in April of last year.
Daily builds of the upcoming Ubuntu 23.04 release come with a brand-new installer, built with Flutter and tailored to Canonical’s exact needs. But guess what this new Ubuntu installer does not include? An option to install Ubuntu on the ZFS file system.
I thought the Linux world had settled on Btrfs as the “ZFS-like” file system for the platform, and had no idea Canonical had even been working on giving users the option to install to ZFS. With Btrfs already being the default on e.g. Fedora for a while now, it seems like a better route for Ubuntu and other distributions than trying to make ZFS work.
re: btrfs
At some point in the past, there was a data loss bug (I think), and that event seems to have forever tarnished the reputation of the filesystem. (Personally, that did not stop me from using it as my primary filesystem for a decade.)
It actually has more features than ZFS. For instance, it can remove disks and shrink an array (which matters for home setups, where a replacement drive is not usually readily available). It can also “raid” across uneven sizes: for example, three disks of 5TB, 5TB, and 10TB will give you a redundant 10TB in RAID 1. Thinking back, growing the array at arbitrary times with arbitrary disks is another advantage over ZFS. And of course it comes standard with Linux, with no licensing issues. Overall btrfs is basically a SOHO version of ZFS, and it is still actively maintained.
(The raid5/6 “write hole” is still there: https://btrfs.readthedocs.io/en/latest/btrfs-man5.html#raid56-status-and-recommended-practices . But that is a limitation of the raid scheme, especially in software, not of this implementation.)
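For the curious, the grow/shrink dance is roughly this (a sketch, untested as written; /mnt/pool and the device names are placeholders):

```
# add a disk to a mounted btrfs filesystem, then rebalance the raid1 profile across it
btrfs device add /dev/sdd /mnt/pool
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/pool

# shrink the array: btrfs migrates data off the device before removing it
btrfs device remove /dev/sdb /mnt/pool
```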
I wouldn’t go that far. That’s overselling things.
I’ve been using it for my Fedora installs, and it’s great as an alternative to ext4.
There are still problems around btrfs. The tooling around subvolume quotas isn’t finished. Swap files don’t work on multiple disk volumes. RAID support outside of mirroring drives doesn’t really work, and as such, it needs to be layered on top of md devices. It can’t export space as a block device, and as such, it needs to be layered on top of LVM.
btrfs has shown promise for years, but it hasn’t really lived up to the promise. It might get there, eventually.
Flatland_Spider,
We can debate how far btrfs goes, and which of the features that exist in one of ZFS or btrfs but not the other matter more.
But still, it has found a somewhat niche, but stable, corner of the Linux market. Many distributions ship it by default, on others it comes as part of the kernel, and hardware NAS systems from Netgear and Synology use it.
As for RAID, again it depends on the person, but I have not used anything other than RAID 1+0 in a long while. But yes, btrfs could benefit from those improvements.
I agree. btrfs has its uses. btrfs should probably be considered the default Linux desktop FS. It’s much nicer as a root FS than ext4 or XFS, especially on single disk desktops, laptops, and VMs. Subvolumes with quotas are much more flexible than partitions, and combining btrfs with containers is pretty cool.
However, btrfs would probably be better if the devs gave up on replacing md and LVM and focused on UX and the niches it works well in. The rough edges are kind of dumb.
Yeah, RAID 0, 1, or 10 are the most useful levels, but sometimes there are use cases for RAID 5 or 6. If the devs insist, I’d like to see them rethink RAID the way ZFS did with RAIDZ.
There are many other things I would like to see, like tiered storage to create something like Apple’s Fusion drive or the ARC in ZFS.
> There are still problems around btrfs.
You’re right, but the ones that you go on to list are not the big ones.
IMHO there are far more important issues. The 3 big ones for me are:
Critical problem № 1
Btrfs does not give reliable free-space information. Answers to the `df` command are estimates, and they can be wildly wrong. That doesn’t just affect the command line. It affects all GUI file managers and all scripts that try to estimate free space before proceeding.
The result: a script checks whether there is space to do something, sees that there is, does the thing, discovers there wasn’t enough space after all, and fails.
Example: the openSUSE package-updating tools. They check for space, see it, proceed, make a snapshot, and suddenly there isn’t space and the operation fails.
Critical problem № 2
If you fill a Btrfs filesystem, it fails. Problem № 1 makes this a high risk.
Critical problem № 3
Btrfs still does not have a trustworthy repair tool. Either the FS needs to be robust against failure, or it needs to be fixable. Neither is true.
Many users don’t need quotas, don’t need RAID, etc. But everyone can encounter the above issues.
These are the ones which I run into and annoy me the most on a regular basis.
I forgot about that one. The super catchy command of “btrfs filesystem df /” is the correct way to check space. XD
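For anyone following along, the usual incantations are something like this (the mount point is just an example):

```
df -h /                    # generic estimate; can be misleading on btrfs
btrfs filesystem df /      # btrfs-aware breakdown of data/metadata/system allocation
btrfs filesystem usage /   # more detail, including unallocated space
```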
I agree those are issues which should be taken care of.
I feel quotas are super important to subvolumes, even if many users don’t use them. Easy quota use would help with keeping the FS from filling up. The old wisdom of limiting ‘/var’ and ‘/home’ to keep ‘/’ from filling up is still good advice.
I rely on subvolumes pretty heavily. They’re the one feature I like about btrfs, and part of their appeal is being able to limit space via a quota instead of creating a partition, which is much harder to change after the fact.
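The workflow is roughly this (the subvolume path and the limit are made up):

```
btrfs subvolume create /home/projects    # carve out a subvolume
btrfs quota enable /home                 # enable quota tracking on the filesystem
btrfs qgroup limit 50G /home/projects    # cap the subvolume at 50 GiB
```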
I don’t think btrfs needs to try and replace ZFS. It’s not catching ZFS, but it does have some features which make it a compelling alternative to ext4 and XFS. There are some good features which could be very useful to regular Linux users if the btrfs devs would realize where their FS fits into things.
You can now expand raidz vdevs (adding drives to an existing vdev) and remove entire vdevs from a pool. You’ve always been able to use drives of different sizes in a vdev; however, “extra” space on larger drives is wasted until all the smaller drives are replaced. There’s been a lot of development in OpenZFS over the past 2-3 years, fixing or working around a lot of the pain points for home / non-enterprise use.
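Roughly, with made-up pool and device names, and assuming an OpenZFS build recent enough to have raidz expansion:

```
zpool attach tank raidz1-0 /dev/sdd   # expand an existing raidz vdev with another disk
zpool remove tank mirror-1            # evacuate and remove an entire (mirror) top-level vdev
zpool status tank                     # watch the expansion/removal progress
```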
As for “has more features than ZFS”, that hasn’t been true for a long time. Has different features, sure, but ZFS has more features that are actually useful. Have you looked at the output of “zpool get all” or “zfs get all” recently?
There is no ‘trying to make ZFS work’. OpenZFS (merged with ZFSonLinux) is rock solid and mature.
A lot of people want BTRFS to succeed and that’s cool. I like ZFS and will keep using it until there is a compelling reason to switch to something else. ‘The Linux world’ as a whole has anything but settled on BTRFS.
Is there actually anything to “like” with a file system for a home user? I mean, it should just work, and that’s it?
jalnl,
These file system features are mostly used by enterprise users, BUT it can depend on your individual needs. I make heavy use of snapshots using LVM; this functionality is built into both ZFS and btrfs. Redundancy features like RAID may also be useful. Having experienced the rapid death of an SSD, I use RAID in all my computers now. Most home users don’t touch these kinds of features and just need a file system that works, like you said.
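For illustration, the LVM snapshot dance looks something like this (the VG/LV names are made up):

```
lvcreate --size 5G --snapshot --name root_snap /dev/vg0/root   # snapshot before a risky change
# ...do the risky thing...
lvconvert --merge /dev/vg0/root_snap                           # roll back by merging the snapshot
```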
It depends on what the home user is doing.
I like btrfs as an alternative to ext4 and xfs. It provides some features which are very handy on laptops and desktops.
The ZFS tooling is much nicer, but ZFS really shines in server scenarios.
Yeah, and ‘it just works’ is what is nice about ZFS. If you store any kind of data like images, video or documents in general (i.e. pretty much everyone), a reliable checksummed file system with duplicates and silent recovery is something worth looking into.
I personally want it for the system partition too, but you know, different strokes and all that. And if people want to use BTRFS for that, I think that’s great. Me, I just like how the logic and tooling behind ZFS is really well designed and does exactly what it needs to.
Also nice: transparent compression. It can be very useful on the desktop, since chances are there’s a fair amount of non-compressed files. And for larger already-compressed files (e.g. media) it has an early bail-out if a file isn’t compressing enough, and just stores it as-is.
And for the paranoid/mobile minded, transparent encryption, either full FS or per zvol or what not.
And snapshots can be really nice for system updates/upgrades: snapshot, then upgrade, and if things bork, just roll it back. If you have separate volumes for root and the home dir, nothing is lost. You can even do it from an emergency bootable USB. ZSys was even better, in that it created GRUB entries for the previous snapshot state. So on update it would snapshot the system/root before doing the update, and add a GRUB entry to easily boot back into that state if the update broke things too badly.
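A minimal sketch of that flow, with made-up dataset names:

```
zfs set compression=lz4 rpool/ROOT/ubuntu      # transparent compression on the root dataset
zfs snapshot rpool/ROOT/ubuntu@pre-upgrade     # snapshot before the upgrade
# ...the upgrade goes sideways...
zfs rollback rpool/ROOT/ubuntu@pre-upgrade     # roll the root dataset back
```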
> it should just work, and that’s it
Well, no.
A checklist of things that would be good to have that neither ext4 nor Btrfs provide:
* resilience against failure in the event of unexpected outages (e.g. media errors or power failure)
* rapid reliable recovery without long checks at startup
* reliable snapshots allowing rollback from unsuccessful software updates
Each of these mustn’t compromise the earlier ones.
It’s doable. ZFS on Solaris demonstrated it. No all-FOSS solution currently delivers it.
Uhm, you do realise OpenZFS is an “all-FOSS solution”, right?
tux3 and bcachefs are interesting projects, and I’d like to see more movement behind them.
Is Tux3 still in development?
Hard to tell. The last message on the tux3 mailing list is from May 2021 asking if the project is still alive. LOL
https://phunq.net/pipermail/tux3/2021-May/002363.html
A pity. It is an interesting project indeed.
I remember hearing about bcachefs, but it seems to be very far behind compared to existing alternatives.
From their website:
50TB is small potatoes, even for some home / home office use cases. (4K RAW video is measured in TBs/hour).
Wish them the best of luck.
It started much later than the others.
Egh. That’s nice, but I’m probably going to go with ZFS or XFS for heavy storage anyway.
The Linux world has settled on btrfs as the successor to ext4. They’ve kind of given up on challenging ZFS.
The desktop teams weren’t going to get anything except headaches from ZFS desktops.
In the server space, ZFS works really well, and it has features btrfs doesn’t have. There are several really large national science Linux installations which use ZFS for storage. They put their resources behind ZFS rather than btrfs.
I thought Fedora was going down the Stratis / XFS / LVM route instead of btrfs, but maybe not. I see much more info about BTRFS than Stratis.
Fedora includes whatever the community decides to include. Which is why CentOS got positioned to be the upstream of RHEL.
CentOS and RHEL/Clones are going down the Stratis route. I’m not sure where development is at.
Flatland_Spider
Actually I’m pretty sure the vast majority of the CentOS community wanted CentOS to stay downstream of RHEL and benefit from the inherent production stability. Red Hat themselves (and not the community) are the ones who saw this as detracting from their sales and therefore discontinued legacy CentOS, repurposing the brand as “CentOS Stream”, which essentially became upstream testing for RHEL.
https://phoenixnap.com/kb/what-is-rocky-linux
Because of this switcheroo, I think there’s merit in using Rocky Linux in instances where CentOS would have been used in the past.
The vast majority wanted something for nothing. There’s no free lunch.
They needed to contribute something, and they did not.
We’ll see how long the RHEL clones last this time. CentOS is only around because RH brought the project in-house because CentOS was good for business. People forget CentOS was almost dead around the release of C7, because the clientele CentOS attracted only wanted free stuff with no work.
It’s a little more complicated than that. The monetary aspect was part of it, but not the whole reason, and lost sales weren’t the main monetary reason anyway; the marketing battle with Canonical was the main monetary driver.
RHEL needed an upstream which was closer to RHEL than Fedora is. Fedora is only tangentially related to RHEL, and this was a problem for RH. RH products have FOSS upstreams; it’s part of what RH does, but RHEL did not have one. RH needed a community project which people could build solutions around, the way people build solutions around Ubuntu, so those projects could then be pulled into RHEL and supported.
RH is also moving RHEL to a 3-year release cycle to keep it fresher. Packages will need to flow from Fedora into RHEL more quickly, but there wasn’t a way for the community to contribute. Packages that customers wanted not flowing into RHEL was a knock against RH.
The solution: put CentOS upstream of RHEL and downstream of Fedora. Make it open so people and companies could contribute to a project centered around fixing bugs and improving stability. Two things the Fedora community doesn’t prize as much as adding new features.
I’ve figured out how to create nspawn containers for whichever distro I want in the RH ecosystem, so I’m not particularly concerned about what OS the host is running at this point. I can fake it and make it! LOL Mostly, anyway. I’m working on figuring out the networking around C7.
There is only one program I have to maintain which doesn’t support CentOS Stream (looking at you GitLab >:-| ), and Rocky Linux is the RHEL clone I would look at. Otherwise, I don’t have a reason to switch. In fact, I have more reasons not to switch since it looks like CentOS is going to be picking up more features from Fedora while keeping the RHEL stability.
The whole thing is dumb. CentOS Stream is only a few weeks ahead of RHEL.
Flatland_Spider,
To be clear, logically I understand your point of view; however, there is a contradiction in companies using the GPL while expecting something in return. A company that genuinely believes in the GPL has no leg to stand on in complaining about downstream users who are unambiguously doing what the license explicitly allows. The view that users should not be entitled to do these things implies hypocritical disagreement with the very GPL terms their software is officially being released under. In a way Red Hat is both a benefactor of and a victim of the GPL.
CentOS is extremely popular in data centers; I’m not sure, but it might even be the most popular one there. It’s quite prevalent in commercial products running Linux too.
https://www.midatlanticcontrols.com/building-automation-products/american-auto-matrix/
Obviously Red Hat/IBM bought CentOS to control it, but other forks still have the prerogative to use GPL software in ways that won’t benefit Red Hat. It’s very hard for me to conclude that Red Hat deserves sympathy without broader reflection on the merits and sustainability of FOSS in general. I’ve struggled with the compensation aspects of FOSS for a long time, and I admit that I don’t have the answers.
I agree with you about what happened, but my point was that they killed off the normal CentOS releases for themselves rather than for the community. Hypothetically, a better solution for the community would have been a new upstream distro (call it “CentOS upstream” or whatever) without killing off the normal CentOS releases.
Yeah, I understand that, but at the same time, in server environments being a few weeks ahead can be the difference between experiencing bugs or no bugs. Holding off a few weeks or more for non-critical updates can be best practice. Not just for Linux; I’d include Windows here too.
Alfman,
RH didn’t care about CentOS. The one thing they did care about was Oracle rebuilding RHEL and selling it as Oracle Unbreakable Linux.
CentOS solved the problem of free RH products. CentOS is a great tool for getting people into the RH ecosystem.
It is very popular, but the people it’s popular with just take. They don’t contribute back. FOSS projects are only as healthy as the work and resources people put into them. CentOS was not a healthy project before RH stepped in to keep the project from folding.
I’ve benefited massively from CentOS being a RHEL rebuild, so no judgements towards people who want to run it. I don’t even particularly like CentOS. I like Fedora, but CentOS and RHEL happen to be the devil I know, get paid to deal with, and the least bad option very frequently.
I also recognize people need to contribute back to the FOSS projects they use to keep them viable.
My 2 cents: have an upstream project which releases frequently under one name, and then fork a stable paid product, under a different name, off of that. The paid product could even be a hosted version.
Those are the best answers I’ve come across.
People need to QA changes, or dog food them, before pushing them live.
I have a few systems designated non-critical which get abused like that after the test environment. If they’re okay after a while, the updates get rolled out to everything else. This was in place even when CentOS was a downstream of RHEL.
One of the nicest developments is that CentOS tags packages as security-related now. `yum check-update --security` will show only security-related updates, and `yum update --security` will only apply security-related updates. This is much better than it was previously.
I haven’t noticed BTRFS gain any more popularity over the last few years. Those who saw it as a reliable filesystem already five years ago went with it five years ago, and those who remained doubtful have not changed their opinion. There are strong opinions for every possible stance: in favour of BTRFS, against BTRFS, in favour of ZFS, against ZFS, and even against both.
Personally I switched to BTRFS around five years ago and have never considered switching to anything else.
sj87,
I think it has to do with most users not looking for or caring about any of the advanced features they offer and ext4 already being good enough for them. These are the kind of users who will use whatever is preinstalled and/or they’re already familiar with. There’s little incentive to change what works.
Yes, I understand that. Why would you change what works for you? Personally I use RAID with LVM, and it already works for me.
I’ve considered replacing it with btrfs, except for two points:
1) Immature fsck – to be honest I don’t know if this is still true, but I keep hearing that it’s incomplete/not production ready.
2) The lack of logical volume management, which I need for virtual machines.
I’ve also considered zfs, which supports logical volumes but I’m very discouraged by it not being mainline. As much as I hate letting kernel bureaucracy and license incompatibilities decide my choice of file systems, I’ve supported out of tree file systems in the past and building 3rd party drivers against linux’s unstable ABIs is a source of headaches and more trouble than it’s worth for my own distro.
https://www.theregister.com/2020/01/13/zfs_linux/
Lack of exporting space as block devices? As someone pointed out when I asked about this, “btrfs is just a filesystem.” People are aware it’s not going to compete with ZFS, even though other people like to think it is.
I missed that feature when I first switched over, but I ultimately decided qcow files were better for production use since they were easier to move, and I could snapshot the VM from the host OS.
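For reference, the qcow2 snapshot workflow from the host is basically this (paths and names are placeholders; internal snapshots like this assume the guest is shut off, otherwise virsh snapshot-create-as is the tool):

```
qemu-img create -f qcow2 /var/lib/libvirt/images/vm1.qcow2 40G       # create the disk image
qemu-img snapshot -c pre-upgrade /var/lib/libvirt/images/vm1.qcow2   # take an internal snapshot
qemu-img snapshot -l /var/lib/libvirt/images/vm1.qcow2               # list snapshots
```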
Next, I realized VMs only really made sense when running another OS on top of Linux or running a custom kernel. I was trying to cut down on the abstractions to increase performance, and containers were just isolated processes on bare-metal. This is where btrfs subvolumes came into play and why I switched from ext4/XFS.
The ZFS on Linux experience isn’t bad. I wouldn’t trust it as a root FS because it is out of tree, but with an LTS/stable kernel the experience is not bad.
I have two CentOS file servers with ZFS storage, and I don’t have any problems with them. The DKMS-based ZFS packages do take a little while to compile after updates. That’s something to be aware of, but they’re stable systems.
I was running a Fedora test server with ZFS storage for a little bit, and I did run into an issue where the ZFS package lagged behind Fedora kernel updates. I wasn’t paying attention and accidentally updated the kernel. The machine rebooted, and I didn’t have any VM storage when it came back. LOL
I think that’s been fixed now. The ZFS project added the kernel version as a dependency for the ZFS rpm package, to hold kernel updates back until ZFS catches up.
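A manual belt-and-suspenders option in the meantime, assuming the dnf versionlock plugin is installed, is to pin the kernel yourself until the ZFS packages catch up:

```
dnf versionlock add kernel kernel-core      # hold the currently installed kernel version
# ...once the ZFS packages support the new kernel...
dnf versionlock delete kernel kernel-core
```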
Flatland_Spider,
Haha, I started with qcow files and switched to LVM. With qcow you implicitly need to stack file systems on top of each other, which I dislike and I don’t think it buys me anything. I learned the hard way that I don’t want the host having humongous file systems and getting stuck on long fsck operations. IMHO it’s better having small file systems on the host that are completely independent from guest file systems. You can do snapshots with LVM and IMHO admin is more practical when you have actual block devices on the host. Using qcow files there’s a guestmount utility that opens the file in a VM and mounts it back on the host via nbd, but I find LVM faster, easier, more direct. Also it’s not tied to qemu. I appreciate that not everyone needs logical volumes, but I still consider the lack of logical volume integration a con for btrfs.
I suppose it depends. Containers might make sense for what I consider “shared hosting”, but I still think VMs make a lot of sense in instances where users want/expect full control over their own resources and environments. VMs offer full isolation while not limiting what can be configured inside. You spin up a machine, give the owner the keys, and users do whatever they need without your help. Containers like docker can be problematic in that you may need to modify the environment, package management, and init systems including systemd to be happy. Maybe it’ll be a problem or maybe it won’t be. Sometimes you just want process isolation inside a container, but other times you want a full OS, and I’d still recommend a VM for the latter. Using a VM makes it trivial to switch between a physical server and a VM with simple imaging tools. To each their own.
As long as the package maintainers keep the kernel and drivers in sync, it should be stable.
Debian does the same:
https://wiki.debian.org/ZFS#Status
But on the other hand, sometimes I do need to build my own kernels, and I do it routinely for my own distro, which puts the responsibility on me. An end user wouldn’t notice, but out-of-tree drivers need a lot of maintenance due to the non-stable ABI.
I’m curious whether this happened because somehow you didn’t update the ZFS package on your end, or because they hadn’t updated it yet on their end? Either way it would be an issue if these aren’t perfectly synchronized at the point of installation. It’s also problematic that updates aren’t atomic – if the process is canceled for any reason, you can end up with a partial install. I’d like to see distros support a better solution.
It might be possible to use snapshots and roll them back, but rolling back can cause data loss if anything other than the installer is actively running. Ideally the entire boot system is atomic and independent from the rest of the system. Does BSD handle this better?
I’ve used LVM and ZFS volumes for my personal stuff. My stuff wasn’t expected to migrate to different servers, and snapshotting servers before changes wasn’t part of the criteria.
ZFS was definitely my favorite of the two.
Yeah, guestfs. I’ve played with it to figure out how to customize VMs in order to spin up lots of custom VMs quickly off of a common template. I didn’t get very far because I realized what I was trying to do was easier done with containers.
Similarly to your experience with LVM, the container images or filesystems are easier to work with. I do have to run the filesystems on a CoW filesystem to get snapshots, so btrfs it is.
I’m not concerned about qcow being tied to qemu. Qemu works well enough. Some other hypervisors look really cool, but they also have limitations qemu doesn’t have.
LVM and ZFS work really well at exporting block devices, and I’m good with continuing to let them do that.
Definitely. VMs are more flexible and are much better when the user wants a server to themselves.
I’ve been working with podman and buildah instead of Docker for application containers. I’ve mainly been building application containers to build Go binaries for CentOS 7 on Fedora.
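The build step ends up being something like this (the image tag and output name are just examples):

```
# build a Go binary inside a throwaway container, using the current directory as the source tree;
# CGO_ENABLED=0 gives a static binary that doesn't care about the target's glibc
podman run --rm -e CGO_ENABLED=0 -v "$PWD":/src:Z -w /src docker.io/library/golang:1.19 \
    go build -o myapp .
```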
The init system doesn’t seem to be a problem any more. There are UBI builds for systemd and sysvinit. I haven’t gotten very far with the application containers, but I also haven’t run into any roadblocks which require workarounds.
My main complaint is how fat installs of certain distros are, and the limited number of UIDs and GIDs in the container. I can add the container to the FreeIPA domain, but I can’t log in because the domain UIDs and GIDs are outside the range assigned to the container. LOL
The namespace and cgroup not getting a full UID and GID range is a problem.
I’ve been pretty happy with systemd-nspawn to create system containers. Most of the time I want dependency isolation, resource quotas, and a virtualized network stack since I’m more application focused. The userland and environment are the most important things for me. I’m not doing anything besides running multiple services on a server, and containers make that easier, especially being able to multiplex the network.
I haven’t worked with LXC, but it should be similar to nspawn. I’m really trying to get back to OpenVZ style system containers.
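The nspawn setup itself is pleasantly boring; something like this on a Fedora host, with a made-up machine name:

```
# install a minimal tree into a directory, then boot it as a system container
dnf --installroot=/var/lib/machines/f37 --releasever=37 \
    install systemd passwd dnf fedora-release
systemd-nspawn -D /var/lib/machines/f37 -b
machinectl list    # see the running machines
```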
I do still use VMs for prototyping the container systems. It’s easier to recover from a bad network config choice in a VM.
I’m a “rebuild everything easily, DR all the time” person. LOL Containers help with that.
P2V or V2P isn’t a use case of mine, but people are working on C2P and P2C.
OpenZFS hadn’t updated their package for the new kernel. It was something like moving from the 5.13 kernel to 5.15. OpenZFS wasn’t specifying which kernel version the package depended on, so dnf didn’t find any reason the kernel shouldn’t be updated.
There was a ticket about it, and I think it’s been fixed in the meantime.
There is a dnf plugin to snapshot the btrfs FS via snapper before running the updates. I have this installed on a test system, but I haven’t tried to recover to a snapshot.
Another avenue being explored is the immutable OS and rpm-ostree way. New updates are layers that only take effect after a reboot.
If the application can store data on a network FS or redundant object store or traffic can be migrated to other servers, it’s less of a problem. That’s my plan to deal with the problem. Move the problem off of that server. LOL
I don’t think FreeBSD does. It creates a new boot environment for OS upgrades when running on ZFS, but I don’t believe pkg snapshots the OS before package upgrades. Snapshotting the FS isn’t particularly fine grained, and people haven’t tried to work on the problem as far as I’m aware.
I’m not sure about DragonflyBSD, but NetBSD and OpenBSD don’t have CoW filesystems.
Flatland_Spider,
Yes, I was thinking this too, but the coarse granularity of a snapshot makes actually recovering from one a dilemma. Unless you take steps to strictly isolate what is being snapshotted and recovered, there are tons of scenarios where things could go awry.
On production systems, restoring a known-good snapshot must be balanced against losing recent data. You can end up in a tumultuous situation of having to diff two different snapshots, including files and databases, and manually merging files and records back together… yuck!
I am not familiar with it, but if it makes the boot environment atomic, then that sounds like an improvement. With typical distros there are often cross-dependencies between the boot-up kernel & initrd and the root file system, which is a problem. Eliminating this was actually a goal for my distro; it boots successfully even if the root file system is gone. It’s how I install it.
I agree physical-server-based redundancy has merits. It might even be pretty straightforward if you already have it for real-time failover and load distribution. Alas, I find that most of my clients are not willing to pay for that level of redundancy and sophistication. For better or worse, “cheap” often takes the cake in hosting. It’s not fun or interesting, but that’s what it is.
Ok, I was just curious because sometimes they solve problems before linux does.
Yeah. That’s probably why no one has attempted it, and instead moved to immutable systems.
A quick-and-dirty hack around this is the “offline-upgrade” plugin for dnf. This is for regular installs. It downloads the packages and sets an upgrade service to run on the next boot. Once the server is rebooted, the updates are installed, and the server reboots again. This solves the problem of having to quiesce the server prior to updates.
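If I remember the invocation right, it’s roughly:

```
dnf offline-upgrade download -y    # stage the updates
dnf offline-upgrade reboot         # reboot into the offline update; it reboots again when done
```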
Yeah, it’s the same problem with backups. People have to decide how much data they’re prepared to lose.
I’m a big fan of systems which have native clustering and replication capabilities. That has other problems, but it increases the margin for error.
It does. The caveat is work should be done in a container and services need to run in containers.
Packages can be layered on top of the base OS, but it’s recommended they be kept to a minimum. The preferred method is to rebuild the base OS image to include or exclude items. There are tools for rebuilding the OS image, but I haven’t had time to work on that.
It takes a little getting used to.
Fedora Silverblue is the desktop version: https://docs.fedoraproject.org/en-US/fedora-silverblue/
Fedora IoT is the standalone server version: https://getfedora.org/en/iot/
Fedora CoreOS is designed for running K8s: https://getfedora.org/en/coreos?stream=stable
They’re all based on the ostree stuff: https://ostreedev.github.io/ostree/, https://coreos.github.io/rpm-ostree/
OpenSuse has their MicroOS project which is similar: https://microos.opensuse.org/
I’ve tried Silverblue for a little bit, but I need to get more familiar with rebuilding it to get all the tools I like installed.
Server-wise, those versions are configured differently than regular Fedora installs. They rely on Ignition config files to set everything up, and I need to play around with that. I also need to get better at building containers and running my services in containers.
That’s cool. Is it based on PXE boot? The server PXE boots by default, and if the ramdisk finds a root FS it boots it, otherwise it lays down an image?
I have the advantage of figuring out how to keep SLAs and keep the internal systems available 90% of the time, as much as possible.
Yeah. For many orgs it’s a hard sell. “Just don’t turn off the server. We don’t need updates.” is too common.
The situation has gotten better over the years, but there are also areas which could be improved.
Clustered object stores are pretty common: MinIO, Ceph.
Redundant network filesystems too: Ceph, Gluster, MinIO (sort of).
One of my crazier thought experiments involves making the webserver itself stateless. It would involve writing an Apache or Lighttpd module to directly interface with a Gluster cluster. Instead of the OS mounting the Gluster FS, the webserver would access the cluster directly. This would also require a log module which could write logs directly to a logging server instead of the FS.
I haven’t quite worked out how to deal with uploading content, plus other issues. My best idea for uploading content is to make people upload the project to a repo with CI/CD taking over from there. Most things use application servers these days, so application containers. mod_cluster might work in this scenario.
Allowing people to SSH into a box to upload content is a problem in this system, for instance.
Rather, I meant that the discussion around BTRFS hasn’t evolved anywhere over the years. The tone of the chatter is still mostly sceptical, and people keep asking when it will be ready, or if it will ever be ready. Fedora started shipping BTRFS by default a long time ago, yet that hasn’t done anything to improve the mainstream perception of the filesystem. Nobody even remembers that Facebook has been using BTRFS for much longer.
Those are valid questions. It’s a 13-year-old FS with caveats and rough edges. It has a very haphazard feel to it, and that’s on the project.
It’s the Windows of Linux filesystems. LOL
FB is the reason btrfs is an option in Anaconda on Fedora, and the reason btrfs might show up in CentOS. btrfs and Fedora have had an on-and-off relationship.
@sj87
“Fedora started shipping BTRFS by default a long time ago”
If you count “about a year” as a long time, sure
But I believe you might be confusing Fedora and openSUSE, which has had BTRFS as its default filesystem for several years now, integrated with snapper, YaST, boot environments, etc. The whole shebang.
I know there is at least one other Linux distro that uses it as default but I can’t remember what it was right now (it wasn’t an openSUSE derivative).