Btrfs—short for “B-Tree File System” and frequently pronounced “butter” or “butter eff ess”—is the most advanced filesystem present in the mainline Linux kernel. In some ways, btrfs simply seeks to supplant ext4, the default filesystem for most Linux distributions. But btrfs also aims to provide next-gen features that break the simple “filesystem” mold, combining the functionality of a RAID array manager, a volume manager, and more.
We have good news and bad news about this. First, btrfs is a perfectly cromulent single-disk ext4 replacement. But if you’re hoping to replace ZFS—or a more complex stack built on discrete RAID management, volume management, and simple filesystem—the picture isn’t quite so rosy. Although the btrfs project has fixed many of the glaring problems it launched with in 2009, other problems remain essentially unchanged 12 years later.
One of those projects we’ve been hearing about for years. I think most distributions still default to ext4 – except for Fedora.
Btrfs has also been the default on openSUSE for a long while.
It’s a good file system for anything below raid-1.
They haven’t sorted out the kinks for the rest, and especially haven’t automated it.
Yeah, the tooling is awful. It’s all so half-baked.
In comparison, ZFS tooling is so nice to work with.
Except I’ve seen a ton of stories where people lost the filesystem completely because its fsck tool barely works.
Great, stories are good. I’d like to see more statistics, though, and considering that Fedora and openSUSE have it as the default now, I’d say my own lab tests trying to break it at RAID-1 and below are probably closer to the collected statistics that led to those decisions.
Yeah, that happened to me… twice. I’m sticking with ext4 until the end of time; it’s perfectly sufficient for my needs. I used FAT32 for my main data storage for a long time (when I dual-booted XP) and never lost any data, although the filename restrictions, 4GB file limit, and lack of an executable bit were annoying.
> It’s a good file system for anything below raid-1.
Based on my own experience (actually using BTRFS on many dozens of different systems since roughly Linux 3.19), the raid1 profile is solid as long as you aren’t using hardware that completely ignores write barriers (but that’s true for single-disk usage as well, and of most replicated storage implementations, so not exactly what I would call a deal breaker) and you truly only care about single replication (though the double- and triple-replication profiles added a few kernel releases back also work equivalently well, if the limited testing I’ve done with them is any indication). Used right, it will even save you from hardware issues that would destroy most non-ZFS RAID arrays or an ext4 or XFS volume.
I understand the usual complaints about array management that most people come up with when I say this, but the issue there is one of where the array management automation should live. ZFS thinks it should live in the drivers (and most other software RAID tools feel the same); BTRFS thinks it should live in userspace. One is not inherently superior to the other (though I would argue the userspace approach allows much more flexibility), they’re just different. The only actual issue there is that there is no standard automation tooling. Not really an issue if you have enough background to just automate it yourself (and _that_ honestly is not that hard, speaking from experience of having done so for dozens of different systems), but an understandable ‘problem’ for the often inexperienced individuals who get pressed into administering systems.
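To make “automate it yourself” concrete, the whole thing can be as small as a cron-able shell script along these lines (mount point, spare device, and the hard-coded devid are placeholders, not any standard tooling):

#!/bin/sh
# hypothetical btrfs raid1 babysitter -- paths and devid are assumptions
MNT=/srv/data        # btrfs raid1 mount point
SPARE=/dev/sdz       # pre-attached hot spare
# scrub so checksum errors get repaired from the good copy
btrfs scrub start -Bd "$MNT"
# --check returns non-zero if any per-device error counter is non-zero
if ! btrfs device stats --check "$MNT" > /dev/null; then
    # swap out the failing device and let btrfs resync the mirror
    # (devid 2 is a placeholder; real tooling would parse the stats output)
    btrfs replace start -f 2 "$SPARE" "$MNT"
fi

That’s roughly the level of effort involved; the hard part is deciding policy (alerting vs. automatic replacement), not the mechanics.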
ahferroin7,
I think that bad hardware is responsible for a lot of the issues over the years. Even a flawless software implementation can be subject to (and judged by) glitchy hardware. The power loss scenarios are especially worrisome. Every time I can recall encountering data loss, it was the result of a power failure, even with otherwise reliable file systems. Furthermore, I’ve experienced a catastrophic SSD failure during a power loss, such that the SSD’s internal structure became corrupt and it could not even be reformatted.
I suspect a significant portion of systems would exhibit flaws under rigorous testing and that we’re actually leaving a lot to chance.
https://www.ece.iastate.edu/~mai/docs/papers/2016_TOCS_SSD.pdf
I’ve read more of these reports but it’s always the same story…
https://engineering.nordeus.com/power-failure-testing-with-ssds/
This makes it much more difficult to write a robust file system if it needs to compensate for hardware deficiencies. A file system with redundant journaling such as ext4 may prove more resilient against hardware flaws than a newer FS that is mathematically correct yet lacks redundancy, such as btrfs. IMHO the design of btrfs using COW blocks is very impressive, but it may be more fragile in the face of faulty hardware that doesn’t behave as expected.
My experiences have left me paranoid and I won’t trust any file system without a battery backup. And it’s not just the file system, I use LVM for snapshots and volume management, but that too can get corrupted by power loss. So I always recommend getting a UPS to do a proper shutdown.
I can’t see a link in your post, I’m guessing you’re referring to https://arstechnica.com/gadgets/2021/09/examining-btrfs-linuxs-perpetually-half-finished-filesystem/
I’m waiting hopefully for bcachefs. TBH my only requirement is to use a small SSD as a cache for a big HDD. I managed to do it with lvm-cache, but it’s a PITA to set up.
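For anyone curious, the lvm-cache dance boils down to roughly this (device names and sizes are placeholders; lvmcache(7) has the authoritative steps, and older LVM wants a separate cache-pool LV instead of --cachevol):

# sdb = big HDD, sdc = small SSD -- names are assumptions
vgcreate vg0 /dev/sdb /dev/sdc
lvcreate -n data -L 900G vg0 /dev/sdb     # slow origin LV on the HDD
lvcreate -n fast -L 100G vg0 /dev/sdc     # cache LV on the SSD
lvconvert --type cache --cachevol fast vg0/data
mkfs.ext4 /dev/vg0/data

Not rocket science, but compared to “point the filesystem at both devices” it’s a lot of ceremony.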
I’m using bcache (not *fs) with btrfs. It works fine, I think Fedora installer even supports setting it up.
Is this in the alternate blivet(?) partitioning?
Fedora 34 defaults to btrfs.
Just this week I experienced corruption on BTRFS where the filesystem produced multiple files with identical filenames (in the same directory!) but zero content. I wasn’t able to rm them directly, but at least deleting the whole directory seemed to work… This corruption appeared after a power outage, but it’s still pretty weird.
There’s also another BTRFS bug that causes the whole OS to freeze. It occurs when I pull the USB-C cable off my laptop. That cable is connected to my display which in turn has a USB disk attached to its hub. (The USB disk having a mounted standalone BTRFS partition.)
There’s some arcane null pointer dereference error happening, but I am not able to store the logs, as even the root filesystem cannot be written to anymore and the system only responds for a microsecond every 10-15 seconds…
I would like to report that bug but so far haven’t got the logs captured, lol. It only happens at random, so I cannot reproduce it 100% of the time, which contributes to my laziness about reporting it.
Btrfs has some features not available in ZFS which are very useful for home or smaller installations.
The most important one is being able to shrink volumes, and to do RAID with drives that don’t exactly line up in size.
For example, it was possible to have RAID1 with three drives of 4TB, 4TB, and 8TB. The volume would be 8TB with two copies of every block. If the 8TB drive failed and usage was below 50%, the volume could readjust onto the two remaining drives instead.
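In btrfs terms that scenario is just (device names are placeholders):

mkfs.btrfs -d raid1 -m raid1 /dev/sda /dev/sdb /dev/sdc   # 4TB + 4TB + 8TB
# ...after the 8TB disk dies and is physically removed:
mount -o degraded /dev/sda /mnt
btrfs device remove missing /mnt   # relocates the data onto the two 4TB disks

The remove only succeeds if the remaining disks can still hold two copies of everything, hence the “usage below 50%” caveat.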
ZFS on the other hand is unable to shrink pools (at least not in an easy standard way).
(things might have changed since the last time I set up a NAS, though).
I agree with this. Not having to deal with partitions on laptops and desktops is nice. LVM kind of takes care of this, but quotas on subvolumes are much more flexible. ZFS is kind of overkill for a desktop or laptop. It can work, and work well, but that’s not really what it’s designed for.
Btrfs on VMs, which generally only have one giant partition anyway, is also really nice. Shrinking the partition is the killer feature, and once again, quotas for usage management.
Btrfs subvolumes also integrate with containers nicely. I’ve been using systemd-nspawn containers, and things have been working really well. Creating a little VM with a btrfs root partition and then running containers on it works quite well.
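The btrfs integration there comes mostly for free: if /var/lib/machines sits on btrfs, machinectl clones become CoW snapshots and --ephemeral containers run on throwaway snapshots. Roughly (container names are made up, and the bootstrap step is just one example):

btrfs subvolume create /var/lib/machines/base
dnf --installroot=/var/lib/machines/base --releasever=34 install systemd passwd dnf   # or debootstrap, etc.
machinectl clone base web1                             # near-instant CoW copy
systemd-nspawn --ephemeral -D /var/lib/machines/base   # snapshot discarded on exit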
This is possible with ZFS, but it would work slightly differently. The data would be mirrored to two drives, and the max size would be 4TB until all drives have been replaced with 8TB drives.
I find the ZFS approach much more intuitive and much more in line with what I would want when mirroring a drive.
Yeah, ZFS is built around the idea data only grows and the server will get replaced. I think shrinking pools has been in progress since 2015, but I’m not exactly sure where it’s at.
Flatland_Spider,
I was surprised to see containers (specifically docker) directly interacting with btrfs. For small sizes it is actually a very nice (and stable) filesystem.
At this point, I think they have wasted too much time trying to get RAID5 working. I would have personally preferred they cut their losses and removed RAID5 from the feature set (and maybe added it back as a future TODO).
I think they should give up on RAID for the time being and revisit the topic later too. btrfs on top of mdraid works fine, and it would allow the btrfs team to focus on things that could be better.
Flatland_Spider,
I don’t like the FS violating normal volume semantics and I also think raid & volume management does not belong in the FS layer (even if it worked ok). However I do think mdraid and LVM desperately need to be combined into something that’s smarter and more flexible.
Adding physical volumes to LVM is trivial and awesome, but if you’re running LVM on top of raid, it becomes messy and inflexible because you can’t just deal with it in LVM: you have to create a new raid device using mdraid and then add that to LVM. Then if you do have a loss and buy a new, bigger disk and rebuild the array, the added capacity is wasted, because mdraid is incapable of distributing capacity across all of LVM’s volumes.
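To illustrate the two-step dance I mean, growing such a stack looks something like this (device and VG names are placeholders):

mdadm --create /dev/md1 --level=6 --raid-devices=4 /dev/sd[b-e]   # build a second array
pvcreate /dev/md1
vgextend vg0 /dev/md1   # only now does LVM get to see the new capacity

LVM happily juggles extents after that, but the raid geometry underneath stays frozen into whatever mdadm was told up front.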
That these two remain separate is limiting IMHO.
LVM2 also has other limitations that should be addressed: the implementation of thin volumes isn’t great, and it would be nice to have bcache / flashcache integration. Even though I’d appreciate having next-generation volume management for Linux, I think it would be seen as redundant, especially given the push for file systems to do raid & volume management internally. Even if the work were done, the politics might not be there to merge it. After all, the AUFS file system I used to compile for my distro never got merged despite years of the author trying. Some of you may have tried KNOPPIX, which used the same file system.
Flatland_Spider,
They need to break the “everything is a single volume” abstraction for several things:
1) Bit-rot protection: if a checksum does not match, they can force a good replica to be copied onto the others.
2) “Thin” provisioning: LVM2 does this partially, but dmraid does not. There is no “initialization” phase filling 10TB drives with zeroes; if blocks are not in use, they don’t need to be kept in sync.
Alfman,
LVM can do RAID, too, and in the past I had preferred it over using mdadm. But once you also add encryption everything becomes difficult.
So basically:
Disc -> dmraid -> device-mapper (encryption) -> LVM -> ext4 vs
Disc -> device-mapper (encryption)* -> btrfs
Makes btrfs an easier choice.
The downside is needing to keep all device keys in sync, and probably writing a script to unlock them at the same time.
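That script is not much more than a loop, for what it’s worth (UUIDs, key file, and mount point below are placeholders):

#!/bin/sh
# unlock every member of a multi-device btrfs volume, then mount it
KEY=/etc/keys/btrfs.key
for uuid in 1111-aaaa 2222-bbbb 3333-cccc; do
    cryptsetup open UUID=$uuid luks-$uuid --key-file "$KEY"
done
btrfs device scan                           # let btrfs find all the unlocked members
mount /dev/mapper/luks-1111-aaaa /srv/data  # mounting one member mounts the whole volume

(crypttab plus a shared key file gets you the same thing at boot without the script.)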
Yes a tighter integration would be very useful. Maybe a more powerful API with tools to automate these kinds of setups would go a long way.
Alfman,
Btrfs and subvolume quotas are a good replacement for LVM. However, Btrfs can’t export space as a block device, so LVM and ZFS volumes are still necessary for that functionality, as I found out when I tried to replace ZFS with btrfs on my KVM test box.
Hypothetically, mdraid could be extended with an API to facilitate communications between layers. The communications between layers is the thing which makes vertical integration attractive. ::shrug::
This is my complaint about the LVM + mdraid combination, and the reason ZFS is so attractive.
Tux3 looks promising, but I don’t have high hopes of it getting merged due to kernel politics.
Alfman,
Yes, but much like btrfs’s raid, it’s not as feature complete. For example I use raid-6, which is a no-go.
Yes, I agree, but part of the reason it’s an easier choice is that LVM and dmraid haven’t been refined into a modern unified toolset. If they were, then using btrfs over LVMRAID could be a pretty easy choice for many of us, especially for those of us who need to manage volumes outside the file system for administrative reasons. For some use cases btrfs raid is very awkward and leaves a lot to be desired. Take VMs, for example: it makes far more sense to perform raid on host volumes instead of in the guest where the file system gets mounted.
Bundling everything into one tool increases code complexity and potentially makes things less robust for features like raid5/6 versus specialized tools. It’s also somewhat less flexible for admins who benefit from the separation of volumes and file systems.
I’ve automated LVM for my own purposes, including addressing the quirks of LVM2 thin volumes. While that helps, it can’t overcome the limitations of the current LVM tooling. I actually don’t think these are device mapper limitations at the linux block layer, only limitations of LVM itself. I would really like to see a new generation of LVM, but with btrfs getting all the attention this probably won’t happen.
sukru,
Oh yeah, I forgot btrfs has checksumming. It doesn’t matter when there is only one disk, which is the scenario I’ve been using it in. I’d rather they get the tooling sorted out first, because it’s a grab bag currently, and then figure out all the advanced features. Or just ask the mdraid team to build hooks and add features, since that team knows what they’re doing with RAID.
I’m pretty sure LVM RAID is LVM on top of mdraid; the RAID part is automated by the LVM tools, which btrfs could replicate until the mdraid code could be replaced or extended.
I haven’t really bothered with thin provisioned LVM volumes. My prod VMs run on sparse qcow2 files, and I generally run out of CPU or RAM before I run out of disk space.
Dedup and encryption are the other two features I’d like them to finish before tackling RAID again. FDE would be nice, but I’d also like to encrypt individual subvolumes, files, and folders.
Flatland_Spider,
Personally I think the level of integration needed between mdraid and LVM is too significant to justify complex communication between two separate tools. IMHO it really should be one unified tool. LVM’s flexibility in terms of physical volume management and mdraid’s flexibility in terms of raid algorithms really should be able to work together rather than one behind the other.
When a new disk gets added, mdraid is unable to gracefully expand the array because it makes very primitive assumptions about the layout. LVM can handle arbitrary physical volume combinations and keep track of the full mapping, but it makes very primitive assumptions about raid. They could complement each other very well, not by stacking one on top of the other, but by marrying the two.
Basically I’d love to see one unified tool that understands the concepts of arbitrary physical volumes, arbitrary logical volumes, thin provisioning, arbitrary raid levels (possibly per logical volume, since not every logical volume has the same requirements), re-balancing, sparse block checking/rebuilding, snapshots, SSD caching, and so on.
LVM has knowledge of sector usage that would strongly benefit dmraid: no need to initialize unused sectors the way dmraid needs to today, and no need to wait a day for an entire disk to rebuild, because LVM knows which sectors are being used.
I think there would be merit in having a few hints between the file system and LVMRAID so that the file system can request parity copies of the data (the file system may have reason to believe a mirrored copy is different by using file system CRCs).
Yeah, the guys at Sun definitely thought this through. RIP.
I’ve never heard of it before now.
https://en.wikipedia.org/wiki/Tux3
Neat.
Re kernel politics, it makes it difficult to commit to working on something when you don’t know if it will ever be mainlined.
Flatland_Spider,
Last time I looked I was only aware of raid 1 working, but it looks like things have come along with LVM raid.
https://www.systutorials.com/docs/linux/man/7-lvmraid/
Wow, I’m really excited about this!
LVM raid does not use dmraid (I’m not sure if it uses any of the same code, but it doesn’t use a stacked dmraid volume).
As you can see in the documentation, the raid is set up per logical volume, whereas dmraid only knows how to do one single raid level across the entire disk/partition. LVM implementing raid levels above 1 is new to me, but with raid1 mirroring LVM would just allocate blocks for logical volumes twice while guaranteeing that mirrored copies are stored on different physical disks. All other LVM physical volume management functions worked as usual. If you added a disk, LVM would just start to use it as new extents got allocated; I don’t believe it supported any kind of re-balancing (this may have changed?).
I’m going to need to test LVM’s higher raid features.
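Going by lvmraid(7), the per-LV flexibility looks roughly like this (untested sketch on my side; names and sizes are made up, and the PVs are assumed big enough):

vgcreate vg0 /dev/sd[b-g]                          # six PVs in one group
lvcreate --type raid1 -m 1 -L 100G -n homes vg0    # mirrored LV
lvcreate --type raid6 -i 3 -L 500G -n archive vg0  # raid6 LV (3 data stripes + 2 parity)
lvcreate --type raid0 -i 2 -L 50G -n scratch vg0   # striped scratch space

Three different raid levels sharing the same pool of disks is exactly what the mdraid-underneath-LVM stack can’t do.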
I dislike LVM2’s implementation of thin volumes; it carries a lot of ugly evolutionary baggage and exposes you directly to it. As a user you want the entire volume group to serve as a pool, but LVM2 doesn’t work that way and you can’t easily resize pools. At this point it’s probably hard to fix because of backwards/forwards compatibility.
Flatland_Spider,
re: checksumming / bit-rot
There was a custom btrfs setup on top of dmraid by NetGear. They supported bit rot protection, but don’t seem to have shared this upstream: https://community.netgear.com/t5/Using-your-ReadyNAS-in-Business/Bit-Rot-Protection-explained/td-p/1132146
The sad thing is, they seem to have stopped development of their NAS devices and quietly exited the market: https://community.netgear.com/t5/ReadyNAS-Beta/ReadyNAS-OS-Dead/td-p/2008122
sukru
Interesting. I wonder if it was anything more sophisticated than snapshots, a cron job, and a file checksum list?
ReadyNAS devices were pretty decent at one point. Qnap and Synology kind of ate the consumer NAS market though, so it’s not surprising Netgear left.
Jolla’s first smartphone used btrfs, but with only 16GB of storage you quickly ran into problems after updates. Fortunately you can now (when it’s too late) flash Sailfish OS with regular ext4-formatted partitions.
Meanwhile I’m over here still missing BeFS and all of its ahead-of-its-time features.
BeFS was really nice.
I wonder if BeFS could have grown to include btrfs and ZFS features?
ZFS: RAID-esque drive pools.
ZFS: Exporting space (volumes) as block devices. (Mainly useful for VMs.)
ZFS/btrfs: Getting rid of partitions and managing the logical separations via datasets/subvolumes.
ZFS/btrfs: Datasets/subvolumes to create BeOS jails/containers.
ZFS/btrfs: Snapshots via something like CoW.
ZFS/btrfs: Send the snapshots to a remote machine (see the sketch after this list).
ZFS: Boot from a snapshot.
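For reference, “send the snapshots to a remote machine” is basically a one-liner on both today (hosts, pools, and paths are placeholders):

btrfs subvolume snapshot -r /home /home/.snap-today
btrfs send -p /home/.snap-yesterday /home/.snap-today | ssh backup btrfs receive /backup/home

zfs snapshot tank/home@today
zfs send -i tank/home@yesterday tank/home@today | ssh backup zfs receive tank/backup/home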
I’m actually playing around with ZFS via OpenIndiana Hipster on a spare workstation right now (typing this on that machine), but not in a serious capacity. Depending on how I like it — and so far I’m highly impressed with it on an Ivy Bridge I7 with a GT 1030 GPU — I’m tempted to run it as my main workstation OS on a proper machine with ECC RAM and a RAID controller for a while to learn the ins and outs of ZFS. It’s my first time playing around with ZFS at all, as well as with a Solaris based OS. I have a couple of older Dell tower servers in the attic that would suffice.
FreeBSD might be a better choice for that. Their ZFS implementation is more current and is updated more frequently, too.
FreeBSD is the best option, and the second best is CentOS. It works on Fedora, but extra care needs to be taken when there are kernel updates.
If you haven’t already heard this, the best advice when a RAID controller is in play is to export all the disks as individual drives (JBOD). ZFS expects complete control over the drives, and things could get weird if it’s layered on top of another container.
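In other words, hand ZFS the raw disks and let it do the redundancy itself, something like (FreeBSD-style device names; the layout is just an example):

zpool create tank raidz2 da0 da1 da2 da3 da4 da5
zpool status tank

rather than giving it one big logical volume the controller has already striped behind its back.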
Linux really needs to get the management thing and the consumer-facing experience sorted. 99% of potential users’ eyes will glaze over at the tech talk and they won’t want to go anywhere near a flaky beta. That’s just a fact, and an irresponsible throwing away of market share, especially now that Windows has shot itself in the foot. People can make their excuses but that is how it is.
I mean this is no different than talking about Windows or MacOS using a new file system. This talk is for websites like this or for people like us. Most users will use whatever is default on an install or what the installer recommends. A new user installing Ubuntu will just use what the installer defaults to. Aside from Fedora, none of the big name distros default to Btrfs so I think it’s a non issue as anyone that needs Btrfs knows what it can and can’t do.
@lakerssuperman2
You don’t get management or marketing otherwise you wouldn’t be saying that.
I get both of those things. The point your original post doesn’t fully address, but posts below from dsmogor do, is that Linux isn’t in and of itself a product. Products are built on top of the kernel and other software elements in the Linux ecosystem. As mentioned Chrome OS and Android are two huge consumer facing products, but also individual Linux distros are as well. Whether it be good old Ubuntu or something like Linux Mint or PopOS, those specific companies and groups worry about marketing Linux as a product and rarely do those groups tout things like filesystems to your basic Linux convert. Companies like Canonical and Red Hat bridge that gap for marketing and packaging Linux for users.
lakerssuperman2,
That’s a great point that gets overlooked even by those of us in the industry. We get so accustomed to talking about “linux” as being the whole deal when in fact it’s just a kernel. I wouldn’t be surprised if most of the criticisms for “linux” are actually criticisms of the distros or userspace environments that aren’t directly related to the linux project at all.
@lakerssuperman2
Yes, I do know all this, and if you can’t grasp the point that “Linux” is shorthand for a bigger, more complex discussion, then in all honesty I don’t want to waste time with you. The reframing and maneuvering just to score a point is beyond tiresome. So no, you don’t get it.
HollyB,
You not wanting to debate with anyone who presents a countering viewpoint is beyond well established, but that does not invalidate his point, nor does it mean he doesn’t “get it”.
The problem for you is that linux kernel developers are serving their interests and not yours. Yes, I can see how that’s frustrating because you’d rather linux focus on your opinions and needs; your comments make it clear that linux will always fall short for you. That’s fine, so be it. The truth is that linux mostly caters to the development crowd and most people aren’t in their target demographic. Sure you can complain, but as far as I can tell you’ve never paid any of them a dime or even filed a specific feature request, so why should they owe you anything? Consider their point of view: users like you will complain, complain, and complain, take, take, and take without giving anything back to the community. At the risk of being very blunt, they may not want you as a user; impossible to satisfy and no reward for trying.
I’ll post this here since I can’t reply directly to HollyB’s last comment. The notion that Linux is shorthand for “bigger complex discussion” is fine, but it’s also a point you didn’t clearly make. It’s also something that is beyond the ability of any one entity to address. Who should address the connotations associated with the word Linux? Red Hat? The Free Software Foundation? Canonical? None of them have the reach to do what you’re suggesting. Ubuntu’s mission statement was Linux for Humans. They can’t stop people from writing articles about a largely optional Linux filesystem. And even if they choose to not use it, Fedora will and we’re back to my original point.
There was no maneuvering or reframing. I simply said no one group can do what you’re suggesting. Further, your suggestion that Linux is shorthand for larger technical complexity might be true, but again, this is an article about filesystems that Linux can use. This is no different than any of the articles about when APFS came out for Apple products, or any article on NTFS for Windows.
And my final point was, I’m not sure who you think this is a problem for. If you mean the new average user of Linux looking to switch, they’re going to likely end up with a distro like Ubuntu that hides this stuff from the user to keep things simple.
Anyone else would likely have the technical know-how to seek out and read such technical discussions.
> none of the big name distros default to Btrfs
OpenSUSE (and I _think_ SLES) use it by default, and take _really_ aggressive advantage of subvolumes.
> it’s a non issue as anyone that needs Btrfs knows what it can and can’t do.
This is assuming quite a lot, and it’s generally not true. Maybe more true than it used to be 2 years ago, but the general rule is that _a majority of users don’t read any documentation_ (this is not specific to BTRFS, it just causes more issues there than in other places). The percentage has gone down a bit, but it used to be that significantly more than half the questions across IRC, the mailing list, and places like Stack Exchange regarding BTRFS had to do with the most basic aspects of managing the filesystem (usually a result of a misunderstanding or complete lack of knowledge about the two-stage allocation mechanism that BTRFS uses). In almost all of those cases, the question was trivial to answer by just reading the documentation, but none of the people asking had done that _at all_.
It depends on which market we’re talking about.
Linux owns a huge chunk of the server space, and it’s very well suited for that.
Personal computing devices, not so much. I do agree a company needs to take Linux and polish it like macOS, but no one has figured out how to make money doing that just yet.
The biggest problem is that there are software suites which professionals need that won’t touch Linux, and that hinders adoption. Office, Photoshop, etc. are good enough, and any competitor will need to be 1000% better and become an industry standard to make a dent. Otherwise people are going to keep making the same choices.
Michael W. Lucas has a good explanation of the problem in his FAQ. He uses Word and InDesign because the publishing industry says so. https://mwl.io/faq
I’ve been really impressed by how easy Fedora on the desktop has been in the last year or so. I’ve switched a couple of undemanding users, and everything has worked without any problems.
Hypothetically, FreeBSD would make a great base for a Unix-like desktop OS since its kernel ABI is stable, but it would take a team of engineers to get it up to speed.
@Flatland Spider
Too many tech heads put the cart before the horse. Just look at Wayland and that’s just one roadcrash in the Linux space. Until the base organising issues are fixed anyone with a product is just going to shudder and walk away because they don’t want to get involved.
The Linux world can fix its own issues at effectively no cost but for the intransigence. If you’re going to ship a product which gives people nightmares versus ship a product which doesn’t, why pick the hardest route? If someone wants a mysterious person with deep pockets to come along and speculate on a polished, consumer-ready version of Linux, why would they bother if they are going to get their arm chewed off? They wouldn’t, and they didn’t get rich by doing so.
Linux is essentially a sophisticated vertical *embedded* framework upon which end-user-facing products are built. It’s not a successful ecosystem by itself, but products built on top of it can be.
Android and Chrome OS are a testament to that. None of the products built on top of it have so far managed to enjoy success as an alternative to the Wintel platform comparable to macOS, but then nobody else has succeeded either, and many of them were much more polished and internally consistent than Linux has ever been. It just looks like there’s very limited space for competing propositions in the platform space, and the market usually converges on a duopoly (e.g. Win-Mac, Android-iOS, Win-Linux in the server space, and I guess soon AWS-Azure).
One example of Linux outdoing its competitors as a platform is Docker. Both MS and Apple were caught off guard by this development and are sloppily trying to catch up.
I also think the motivation for the Linux desktop has silently shifted through the years. People are no longer striving to provide a Windows alternative for Joe Average (for whom it’s getting less relevant anyway), but rather to define the gold standard of an engineering workstation (a developer’s workstation in particular), as the focus of commercial alternatives has drifted away from this goal in futile efforts to compete with Android/iOS.
If you see it in this light (esp. with the success of Docker), I’d call the so-called Linux desktop at least a moderate success.
dsmogor,
Linux desktops aren’t really for average Joes TBH, but it could depend on the use case. If the user only needs a web browser and software from the repos, then the user shouldn’t have too much trouble. But linux has always targeted (and likely always will target) developers, for the simple reason that developers are the main group contributing any resources to its development. End users aren’t really paying to shape the linux desktop, so they shouldn’t really expect it to cater to them.
It’s been quite the opposite IMHO: Android & iOS have been utterly futile at competing with traditional form factors. Instead, what they have done is open up a completely new mobile market, which is extremely popular. Still, for productivity tasks that have always been the realm of personal computers, mobile solutions are lousy for most types of work. This was highlighted during the pandemic, when schools and employers clamored to get personal computers, not mobiles, for students and staff. Of course it could be done just as a matter of principle, but the experience is inefficient and sub-par for most who are trying to get work done and not just using mobile at their leisure. Even something as basic as reading osnews is less pleasant on a small form factor with squished text and so much scrolling. And that’s before we even get to typing & editing, which is just awful. So many things like managing files and emails are just worse on mobile.
So while desktop & mobile can kind of do some of the same tasks, I don’t find mobile to be any more competitive with desktops than desktops are competitive with mobile. They are rather distinct markets. Consider that if we had started out with mobile smartphones first, we’d actually see many consumers seeking to upgrade to a full desktop experience. In general I don’t think they’re directly competing with one another. Smart watches, on the other hand: it’s going to be hard for PCs or mobiles to compete against those.
@dsmogor
You’ve proven my point about tech heads putting the cart before the horse.
The entire X community put their weight behind Wayland as the X replacement even though they knew it was going to be painful to replace 30yrs worth of code.
Wayland is working well now, and features people have been asking for are filling in. How everything is structured is really exciting, and I’m eager to see it completed.
What would you suggest?
Linux has picked up lots of support in the last few years. Shipping software for Windows, macOS, and Linux together has really blossomed into a trend.
How so?
This is where I feel a company needs to come in. The community can do what it wants, and the company can buy hardware for testing and pay developers to prioritize the unglamorous work of bug fixes and polish.
Exactly, this is why I left Windows for the *nix world. FreeBSD, OpenBSD, and Linux are my happy place. ::vibing::
I do understand what you’re saying. I added macOS machines to my stable a couple years ago because while Linux is the best Unix-like desktop platform, software can be kind of buggy at times.
In the macro view, the hard way creates billable hours and props up people’s egos. If we didn’t pick the most convoluted way to do things, it wouldn’t be profitable, and the bros wouldn’t get to brag about how much pain they can endure. We could have built a tech world which is user friendly, but we didn’t.
You’re describing Mark Shuttleworth and the original goal of Ubuntu.
Mostly for the good of humankind, and the belief that the only way we advance is by making knowledge cheap and freely available. So idealism.
Sometimes people just have to do something dumb to make the world a better place.
There is some pretty good data to back this up. Company creation since 2000, for example: many companies have been based on FOSS, and all companies use some FOSS in the day to day.
This gets into another conversation I’ve had about FOSS and how states should sponsor it, since it’s a net good. Humans advance by sharing knowledge rather than hoarding it.
Anyway, I’ve spent some time thinking about how a free *nix consumer company could be successful. XD The latest incarnation of the idea centers around selling complete RPi-based computers instead of kits, kind of like System76. Another was NUCs in custom cases with cooling solutions which aren’t awful.
I have some opinions about the subject.
Flatland_Spider,
I’m a mix between idealist and pessimist, which is a tough combo, haha.
I feel there is so much potential for progress if we remove barriers to entry and embrace innovation as the central pillar of our economy instead of profit. Alas, as it stands, Wall Street goes all in on profits. Profits are the end that justifies all means, regardless of the toll it takes on society, the middle class, clean water & air, our health & well-being, anti-competitive behaviors, etc. Policies like software patents have become rife with abuse, and they’re used to limit innovation in order to create monopolies and increase profits. It couldn’t be more backwards from the ideal.
With the rise of oligopolies increasingly displacing small and medium businesses, it feels like new opportunities are getting worse. I’m sad to say it, but a significant portion of my client base is struggling financially, and my own business has been squeezed by higher costs and diminishing revenue. I want something more sustainable. Obviously this is sustainable for the big guys, but what about everyone else?
I’m trying to make a push into AI work, but as of right now I don’t have the right sort of clients for that, in that they’re not ready to pay for it.
This may not be the greatest venue for it, but I am interested in hearing your thoughts.
Re: LVM raid (starting new thread because wordpress handles deep threads poorly).
I did some quick testing of LVM raid functionality, and it would seem that the higher raid levels are indeed working now.
/dev/test/test3 can be formatted, mounted, etc., and LVM takes care of the raid.
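For anyone who wants to reproduce it, the test boiled down to something like this (device names below are placeholders for scratch disks; sizes don’t matter):

pvcreate /dev/vd[b-f]                            # five scratch disks
vgcreate test /dev/vd[b-f]
lvcreate --type raid6 -i 3 -L 8G -n test3 test   # raid6 = 3 data stripes + 2 parity
mkfs.ext4 /dev/test/test3
mount /dev/test/test3 /mnt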
LVM’s documentation indicates LVM is using the MD driver…
However, I did not create an MD device, nor does the kernel report that LVM is stacking on top of dmraid or vice versa (at least according to lsblk, which ordinarily does display raid devices).
I find the possibility of eliminating mdraid really intriguing, but in my short testing I’ve already come across a hiccup…LVM is not able to rebalance the array and will fail to create new logical volumes if it isn’t balanced. I’ll have to see if it is possible to do manually. This could be really problematic with snapshots.
Also, I don’t yet see a way for thin volumes and raid to work together given that they use distinct LVM volume types.
Maybe I was being too pessimistic about LVM’s progress; I’ve got more research to do to see if the limitations can be mitigated with external tooling. I’d still like to see it become more refined.
Interesting that the RAID part is much more integrated with LVM now. I wonder if these changes are because of RH’s Stratis storage project which is based on mdraid, LVM, and XFS.