“UDS is over! And in the customary wrap-up I stood up and told the audience what the Foundations team have been discussing all week. One of the items is almost certainly going to get a little bit of publicity. We are going to be doing the work to have btrfs as an installation option, and we have not ruled out making it the default. I do stress the emphasis of that statement, a number of things would have to be true for us to take that decision.”
Does this mean that Canonical is getting involved in btrfs development to make it stable in time?
No?
Who would’ve guessed that….
….more users testing btrfs would be a terrible hindrance to development!?
So your counter-argument to the fact that Canonical is not willing to do actual programming work on btrfs is that it’s a good thing that Josef Bacik from Red Hat and Chris Mason from Oracle are now supposed to be obligated to waste their time searching through Launchpad for bugs in Ubuntu’s implementation of btrfs?
Considering that Canonical was not able to properly back-port a Fedora Xorg patch for GLX 1.4 (causing a massive memory leak that affected neither upstream Xorg nor Fedora) and needed to break the feature freeze by reverting to GLX 1.2 in the late RC phase of Ubuntu 10.04, I guess the work of the geniuses at Canonical causes upstream devs more headaches than “more testers” are worth…
It was a flippant comment in response to your sarcastic post (you are taking things a little too seriously).
The memory leak might not have been caught if there were fewer testers. How are the developers obligated to search through Launchpad?
Why does Ubuntu make it harder for upstream?
Fedora 13 has been delayed from 18/5/2010 to 25/5/2010, which makes them no better or worse than Ubuntu in terms of issues near release.
To note, I like Fedora and Ubuntu equally. I just don’t understand why people try to suck the life out of projects when they are at least considering implementing something new.
Since you mentioned Fedora, I would like to note a few things:
* Upstream does get affected if a downstream patches something badly and bug reports get filed upstream. This is a common problem, and patches need to be avoided as much as possible unless you have the expertise to do them properly and carefully.
http://www.happyassassin.net/2010/04/27/when-qa-works-x-org-and-mem…
* A major distribution that relies on support and services to make money needs to be a strong upstream developer, or at least have enough staff to fix problems rather than trying to offload them onto another distribution.
http://airlied.livejournal.com/72817.html
* Fedora’s policy of having published release criteria and the willingness to postpone releases if blocker bugs are not fixed does improve the quality of the release, although delays should be avoided when possible.
Well, we all know that most software is seen in Fedora before Ubuntu, but the suggestion that Ubuntu uses or builds upon Fedora is ludicrous. Ubuntu is based on Debian. Any time Mark Shuttleworth outlines the issues that need to be worked on in the next release, the developers in question use Debian testing/unstable as the base for these changes. For instance: Mark wanted a better graphical boot experience. The devs looked into Debian testing and grabbed Plymouth. I know Red Hat, and hence Fedora, developed this piece of software, but if it hadn’t been in Debian at some point, Ubuntu wouldn’t have used it.
I see so many comments here about how Ubuntu “steals” from Fedora that I can’t ignore them any longer. Red Hat (and hence Fedora) are THE top supplier of ENTERPRISE Linux, and as such have more developers working not only on existing FOSS technologies but also coming up with new software to suit their goals and clients. This is the nature of FOSS. Just because I use your software to make something better suited to my particular situation, which may not be YOUR priority, and which may also cause bug reports to be filed upstream (mostly by inexperienced bug filers), does not mean I’m stealing. Isn’t this the entire reason we all wanted to use FOSS in the first place?
In conclusion, and as a developer who both causes problems for upstream and hears about problems downstream: RELAX!!!! If you can’t handle a few (or many) emails a day asking what’s wrong, and subsequently redirect them to the people who know about the problem (or fix them yourself, as many bugs filed upstream are only found when more people use the software downstream), you either need more people working on your project (which you should ALREADY be grateful for), or should probably go to work for a closed-source proprietary company like Apple (they never hear about anything downstream, or don’t listen).
DickMacInnis.com
Learn to read. Nobody wrote that Ubuntu is based on Fedora. Ubuntu uses specific features from Fedora (like Plymouth). Many features Ubuntu uses may be in default Debian, but are primarily developed by Red Hat with Fedora in mind: for example, Red Hat is trying to make Nouveau 3D “good enough” for compositing window managers in Fedora 14.
As Fedora is always released ~1 month after Ubuntu, that’s too late for Ubuntu 10.10.
Even with smaller financial resources, Canonical could assign a developer to also work on Nouveau and finish that work a month earlier in time for 10.10. Canonical does not do that. Canonical employs people but only very, very few of them participate in upstream FOSS projects.
Comparing the work force, Mandriva and Canonical are roughly equal in size (~100 employees), but despite Canonical’s much deeper pockets, Mandriva is ahead of Canonical in every FOSS contribution statistic I know of.
Apple (co-)develops WebKit, CUPS, GCC, LLVM, and much more. Strange that a “closed-source proprietary company” contributes much more to FOSS than Canonical…
Stop spamming.
I should note that Canonical has more than 300 employees. Not 100.
http://www.linkedin.com/companies/canonical-ltd
Not sure of the organizational split between developers and other staff, but you can make rough estimates.
There are more ways to contribute to Linux than to actually write code. People who send bug reports contribute, people who have ideas on how to improve usability contribute, people who help with marketing contribute.
Canonical have done, and still do, a lot of marketing for Ubuntu and thereby Linux in general. Accusing them of not contributing enough is uncalled for. Leave the marketing to Canonical, and let the coding be done by the coding experts.
Considering that they don’t have anyone actually working on the development of the file system, it seems like that might not be the best idea for them. I don’t think anyone else is using it as the default filesystem. If Ubuntu wants to be the stable linux distro, they really shouldn’t do that. If they want to be the bleeding edge distro, that contributes significant patches to the upstream providers, then that would be a great idea to have it default.
My guess is that after Canonical heard that MeeGo adopts btrfs as default, they hope that Intel and Nokia do the stabilizing work for them….
I didn’t hear about MeeGo adopting it as default. Intel can probably be counted on to do a good job shaking out some bugs, but it’s not going to be widely tested. There is no MeeGo Market, and there won’t be one before the next version of Ubuntu.
IIRC the first MeeGo devices are targeted for release in Q3 or Q4 2010 which is around the time Ubuntu 10.10 arrives.
This means that Intel and Nokia have a deep interest in joining the Btrfs development team to ensure its stability.
Both companies have capable operating system programmers. We can at very least expect some Btrfs patches from them.
Ok, I understand that, but there won’t be the widespread user testing that always shakes out bugs. Again, I’m not bashing Ubuntu, it’s just different from the way they’ve done things in the past.
Fedora already does that.
Why use a default that GRUB can’t understand? That makes no sense whatsoever. Not to mention that Btrfs is new enough that there aren’t many rescue tools that support it. That will create pain for experts and panic for novices if they actually need to muck around recovering a system.
I don’t pretend to be an expert on the inner workings of GRUB, but I’d imagine that filesystem support can be built into GRUB.
After all, GRUB didn’t originally support ZFS but now OpenSolaris can boot ZFS filesystems.
I think the bigger issue is not GRUB’s support but rather the fact that Btrfs isn’t yet ready for consumer desktops.
You make it sound as if that’s set in stone. Both are FOSS. Grub2 can be modified to understand Btrfs.
The more important question is: are Canonical, for a change, willing to do this themselves?
Given Canonical’s track record, I guess they’re waiting for Red Hat to do this in time for them (which already doesn’t work out for the Nouveau 3D drivers, because Fedora 14 will be released 1 month after Ubuntu 10.10 and adopting a driver that’s even beta by Fedora’s bleeding edge standards is not something Canonical will do).
That said, I’m hoping that I get positively surprised by Canonical, though my hopes for that aren’t high.
Yeah right.
When did Ubuntu start using Grub 2?
When did Fedora start using Grub 2?
F12 certainly does not.
From a recent LUG problem report, it appears that 9.10 does, and quite a lot of people are really having fun (not) with it.
IMHO (which might be wrong), it seems that Canonical are a little more gung-ho than Fedora with stuff that is not quite ready for prime time, so to speak. Given the higher usage of Ubuntu, I’m not convinced that this is altogether a good thing.
I’m hoping that F13 is pretty stable, as my guess is that a lot of it will be used in RHEL 6 later this year.
Just my 2p worth on the subject.
RHEL 6 uses Fedora 12 (not 13) as base.
I believe Ubuntu 9.10 was the first release where GRUB 2 was the default for all new installations.
Confirmed. I remember when I installed Ubuntu 9.10: I made wi-fi work by downloading and compiling a non-broken package from linuxwireless.org using a separate computer running Windows, discovered that there was still no sound after updating everything, said “oh well, that’s why I install Ubuntu releases on a separate partition after all”, tried to modify the GRUB settings to make it target my “old” 9.04 partition, and discovered some strange config file in a syntax I didn’t know, with a big warning on top of it telling me not to edit it.
9.04 used GRUB Legacy, hence GRUB 2 appeared with 9.10.
Btrfs is a nice file system and all, and is probably the future for Linux in general, but it’s way too early to even talk about making it a default file system. Let’s have a release or two where it is fully working and stable, then we’ll talk about defaults.
According to http://lists.meego.com/pipermail/meego-dev/2010-May/002183.html btrfs already works better than Ext3
That’s nice, but what about the current standard: Ext4?
I read the other day a very nice comparison between Ext3, Ext4, XFS and Btrfs. Ext3 outperformed Ext4 by far. Btrfs surprised everybody by even outperforming Ext3 on most tests (though not all tests).
I’ll try and find that review and post a link. But it’s not the only review I have found that comes to similar conclusions. Btrfs is quite usable already, and the built-in compression made the filesystem perform even better (with today’s fast hardware).
Please do, I’d be very interested to read that.
From my own *unscientific* usage, I’ve found Ext4 a very pleasant upgrade from Ext3, if only because of the significantly improved fsck times. But I’ve never really looked into it any more than that. So I’d be interested to know how much of an improvement / step backwards Ext4 really is.
I don’t think you should focus on performance. Is your data safe with XFS, ext4, etc.? No. Read this PhD thesis and these research papers:
XFS is not safe against data corruption:
http://pages.cs.wisc.edu/~vshree/xfs.pdf
And neither is ext3, JFS, ReiserFS, etc:
http://www.zdnet.com/blog/storage/how-microsoft-puts-your-data-at-r…
“I came across the fascinating PhD thesis of Vijayan Prabhakaran, IRON File Systems which analyzes how five commodity journaling file systems – NTFS, ext3, ReiserFS, JFS and XFS – handle storage problems.
In a nutshell he found that all the file systems have
. . . failure policies that are often inconsistent, sometimes buggy, and generally inadequate in their ability to recover from partial disk failures. ”
But ZFS successfully protects your data. Here is a research paper on this:
http://www.zdnet.com/blog/storage/zfs-data-integrity-tested/811
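To make the “end-to-end data integrity” point concrete, here is a minimal, conceptual sketch in Python. It is not ZFS’s or btrfs’ actual code (ZFS uses fletcher/SHA-256 checksums and btrfs uses CRC32C, both in the kernel I/O path); it only illustrates how storing a checksum with every block lets a filesystem detect silent corruption that a plain journaling filesystem would pass straight back to the application.

# Conceptual sketch only: per-block checksums let a filesystem *detect*
# silent corruption. ZFS and btrfs do this in the I/O path;
# ext3/XFS/JFS do not checksum data blocks at all.
import hashlib
import os

BLOCK_SIZE = 4096

def write_block(data):
    """Store a block together with a checksum of its contents."""
    return data, hashlib.sha256(data).hexdigest()

def read_block(data, stored_checksum):
    """Verify the checksum before handing the data back to the caller."""
    if hashlib.sha256(data).hexdigest() != stored_checksum:
        raise IOError("checksum mismatch: block silently corrupted")
    return data

if __name__ == "__main__":
    block, checksum = write_block(os.urandom(BLOCK_SIZE))

    # Normal read: checksum matches, data is returned unchanged.
    assert read_block(block, checksum) == block

    # Simulate a bit flip the drive never reports. A checksumming
    # filesystem notices; a plain journaling filesystem returns the
    # corrupted data as if nothing happened.
    corrupted = bytes([block[0] ^ 0x01]) + block[1:]
    try:
        read_block(corrupted, checksum)
    except IOError as err:
        print("detected:", err)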
I couldn’t see where/how to edit my original post, so replied to it to give the links. The tests were done by Phoronix.
* Btrfs Gains As EXT4 Recedes
http://www.phoronix.com/scan.php?page=article&item=linux_2632_fs&nu…
* Btrfs versus EXT4 comparison using the Linux 2.6.33 kernel:
http://www.phoronix.com/vr.php?view=14524
* And a slightly earlier test also done by Phoronix regarding Ext4:
http://www.phoronix.com/vr.php?view=14516
* And then the latest one done with Ext3, Ext4 and Btrfs on Netbooks (I haven’t read this yet):
http://www.phoronix.com/scan.php?page=article&item=ubuntu_netbook_f…
They just changed to Ext4 as the default in new installations of 9.10. Why would they switch so soon, especially to something not so ready?
To quote Arjan van de Ven from Intel who works on MeeGo:
http://lists.meego.com/pipermail/meego-dev/2010-May/002183.html
But that comment is no more helpful either, as it doesn’t discuss what the issues with ext3 were, let alone the fact that ext3 has been superseded by ext4.
I want to know:
* Why was BtrFS less of an issue? (and what were its issues?)
* What were the issues with ext3?
* Would those issues still have existed if they’d used ext4?
* and were the BtrFS issues more technical than the ext3 issues? (It’s all well and good saying there were more issues with ext3, but if those issues were easy to fix and BtrFS’s weren’t, then ext3 would still make a better consumer fs for the moment.)
Dude, I provided a link to a mailing list archive. Act on your own and read the entire thread. At least some of your questions are answered there.
I did and they weren’t.
They quickly went off the topic you posted
Why do you even ask the questions here? Why would a MeeGo project member answer them here?
Do you always need someone to tell you what to do? I mean… seriously … ask on that friggin’ mailing list and not *here*! Got that?
But hey, because I’m a nice guy, I break down the mailing list thread for you:
Different MeeGo team members made it very clear that btrfs works better than ext3 in their extensive testing. They made it clear that btrfs currently has some problems, being relatively new (one problem they specifically mentioned is the increased space requirements, which they are working to resolve), but that ext4 also has its fair share of problems for the very same reason, and without even the benefits of btrfs’ feature set.
For crying out loud, this is hard work!
Because you posted that quote in response to a comment someone made on here about BtrFS not being ready, so I was making the point that your post was no more helpful than the comment you were originally criticizing.
In fact, let’s get back to the start (and with some heavy paraphrasing):
original comment: BtrFS isn’t ready
your reply: someone told me it’s more ready than Ext3 so your comments aren’t helpful
my reply: care to elaborate or are you just going to leave us there with an equally unhelpful post?
your reply: it’s all in the link. I’m not going to help beyond that.
All I wanted was some data to back up the comments made. Something tangible and conclusive. Instead I’ve read nothing more than faceless techies arguing non-quantities.
</rant>
[edit]
I take that last part back: ggeldenhuys has provided some real figures and studies.
Thank you ggeldenhuys
Simply because Ext4 is crap and very slow. Even Ext3 is MUCH faster than Ext4. Btrfs is quite usable and already outperforms Ext3 and Ext4 in many (if not most) tests.
Also, just because a distro makes some or other filesystem the default doesn’t mean you as an end-user need to use that default. The installation allows you to choose any file system you like.
And when in doubt about a file system, do what I do. Format one of your non-bootable partitions (e.g. /opt) with the file system in doubt. Then play around with it: copy/read files from it, play movies from it, do software compiles on it. If it doesn’t work out for you or doesn’t perform to your expectations, simply copy the data off and reformat that partition with a different file system. It’s not that damn difficult.
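For anyone who wants to follow that recipe, here is a rough sketch of the workflow driven from Python. The device name /dev/sdb3 and the mount point are placeholders, it assumes root privileges and the installed btrfs userspace tools, and it WILL wipe the partition, so only point it at one whose data you have already copied elsewhere.

# Rough sketch of the "try the new filesystem on a spare partition"
# recipe. /dev/sdb3 and /mnt/fs-test are placeholders; this WIPES the
# partition. Requires root and the btrfs userspace tools.
import subprocess

DEVICE = "/dev/sdb3"        # spare, non-boot partition (placeholder)
MOUNTPOINT = "/mnt/fs-test"

def run(*cmd):
    """Echo a command and run it, aborting on any failure."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    run("mkfs.btrfs", DEVICE)            # format the spare partition
    run("mkdir", "-p", MOUNTPOINT)
    run("mount", DEVICE, MOUNTPOINT)     # mount it and start playing:
    # ... copy files, compile software, play movies from MOUNTPOINT ...
    run("umount", MOUNTPOINT)
    # If it didn't work out, reformat with something else, e.g.:
    # run("mkfs.ext4", DEVICE)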
I think Ubuntu has a good idea. Put Btrfs out there so it gets some exposure – that’s the only way software improves really fast.
http://www.phoronix.com/scan.php?page=article&item=linux_2634_fs&nu…
From April, but the general consensus is that ext4 is the best overall filesystem.
And a big warning is there too: DO NOT ATTEMPT TO RUN PostgreSQL on Btrfs. It is the one real-life workload btrfs chokes and dies on (it doesn’t crash, it just crawls).
While Phoronix is great for news around Xorg (they do a lot of original research on this topic by following mailing lists, etc.), Phoronix’ benchmark methodology is usually flawed. Somebody once even made the joke that Phoronix would benchmark Linux distros by comparing the CD images’ checksum and the higher one “wins”.
That’s a pretty big accusation. Can you back it up with some facts?
Sure: http://www.kdedevelopers.org/node/4180
Talk about misleading. The linked article complains about a test not being honest, but the test actually does explain the result. If anything, they can be accused of being too thorough, showing you where you won’t notice a difference between options. When building a system, it’s just as important to know where throwing resources at it will not result in a performance increase.
No real surprise, considering Btrfs was only recently put in the Linux kernel. Now that most Btrfs features have been implemented and the on-disk storage format has been set, it needs to be tested in the wild so performance tuning can start. Btrfs with compression already outperforms Ext4, and Ext4 has been around for some time now. That tells you something!
I’m definitely eager to try Btrfs with compression enabled (this is what I loved about ZFS’s idea too). With today’s fast hardware, and the hard drive being the biggest bottleneck in any system, compressing data means reading less data off the hard drive, which translates to increased system speed.
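To put a rough number on that intuition, here is a small sketch using plain zlib on synthetic, compressible data. Btrfs does its (zlib) compression transparently in the kernel, so treat the ratio as illustrative only, not a filesystem benchmark.

# Back-of-the-envelope look at why transparent compression can speed
# up I/O: the slow disk has to deliver fewer bytes, and the fast CPU
# pays for the decompression.
import zlib

# Synthetic, fairly compressible "log file" content (roughly 4.5 MB).
data = b"2010-05-15 12:29:00 INFO btrfs compression example line\n" * 80000

compressed = zlib.compress(data, 6)
ratio = len(compressed) / len(data)

print("raw size:        %6.0f KiB" % (len(data) / 1024))
print("compressed size: %6.0f KiB" % (len(compressed) / 1024))
print("disk reads shrink to %.1f%% of the original" % (ratio * 100))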
Yeah, I know btrfs is a moving target. The point of that was that ext4 is not the crap that the other guy said it was. It is better overall than ext3 according to that benchmark. When arguing I usually tend to believe the one with the most evidence in their favour.
Ubuntu always seems to rush into implementing things that are still in alpha or beta: GRUB 2, Plymouth… and PulseAudio.
I hate PulseAudio with a PASSION; for me it’s a virus, always taking 40% of the CPU. Removing it was the best thing.
Anyway, that’s my 2 cents.
I don’t know much about BTRFS, but I know that XFS is an awesome journaling file system developed by SGI. It’s been in the Linux kernel for some 10 years or so.
With all the ‘features’ of BTRFS, I’m a bit concerned about the overhead. It seems pretty good for large database servers and so forth, but I’m not sure how well it works in low-memory environments (think netbooks or devices, where incidentally XFS works really well).
XFS did catch on. It’s the filesystem of choice for servers these days. It is geared towards the problems that a graphics workstation company like SGI had: large files. Its performance with smaller files isn’t as good.
The features of btrfs are very sweet; the rollbacks (snapshots) alone are cool enough. Performance-wise it’s getting there:
http://www.phoronix.com/scan.php?page=article&item=ext4_btrfs_2633&…
With almost every release, Ubuntu has broken something: Synaptics touchpads, PulseAudio, the free Radeon driver, window management habits… Now this release, like 8.04 LTS, will be a special one: they’re going to break both the UI *and* the filesystem.
“The Btrfs disk format is not yet finalized, and it currently does not handle disk full conditions at all. Things are under heavy development, and Btrfs is not suitable for any uses other than benchmarking and review.” (From the developers, at http://oss.oracle.com/projects/btrfs/ )
Does someone really think that they’re going to get this to a stable state in 6 months, without contributing to the project the tiniest bit?
The on-disk format was finalised with kernel 2.6.31, and only forward-compatible changes are allowed.
That page is old. The new page is https://btrfs.wiki.kernel.org/index.php/Main_Page
The disk format has been stabilized, and it is supposed to be pretty stable. Still, I’m not sure if it’s ready for Ubuntu yet. Personally, I’m worried more about the performance than stability.
Thom, where did you get that “Next Ubuntu Release” from? It’s not in the linked article. If you have another source, please link it. From all I’ve read, btrfs will probably become the _default_ in 12.10, perhaps. My source is Phoronix: “Ubuntu Has Plans For Btrfs In 2011, 2012”
http://www.phoronix.com/scan.php?page=news_item&px=ODI0NA
There’s a blog from the person who gave the talk at the Ubuntu Dev Summit:
http://www.netsplit.com/2010/05/14/btrfs-by-default-in-maverick/
Sure it is. It’s the “Maverick” part of “btrfs by default in Maverick?” in http://www.netsplit.com/2010/05/14/btrfs-by-default-in-maverick/
I don’t like Gnome Shell and I don’t trust Btrfs. 10.04 has been solid for me. Thankfully it is an LTS, so I will be staying with it as much as I can.
Ubuntu 10.10 won’t use GNOME Shell, and the default FS only matters on unpartitioned drives.
I like the concept of Btrfs and am open to the idea of making it an option as a file system in the coming releases of Ubuntu. But I wouldn’t necessarily jump on the bandwagon with it until the next LTS version, most likely.
I wouldn’t want it to necessarily be the default right off the bat; give it a couple of releases, get some feedback, and then make a determination.
That’s exactly what Canonical is doing: waiting for Oracle, Red Hat, and the parties involved with MeeGo (Intel, Nokia, Novell) to do the stabilizing work on btrfs, and then picking it up without getting involved in the actual stabilizing process.
If (and only if) btrfs is stable enough for 10.10, Canonical will pick it up. If not (because Fedora 14 and possibly even MeeGo 1.0 are released after Ubuntu 10.10), Canonical will wait until 11.04 or so.
I really don’t know why people can’t read and comprehend articles anymore. “Possibly default” and “not ruled out” mean it MAY happen, given the circumstances are right. But hey, don’t let actual facts stand in the way of your Ubuntu bashing and conspiracy theories.