In January, we read the various arguments regarding Mozilla's decision not to obtain an H.264 license, a decision that has generated a lot of discussion about the future of video on the web. With YouTube, Dailymotion, Hulu and Vimeo having adopted H.264 for HD video, Mozilla and Opera should use the codecs installed on a user's system to determine what the browser can play, rather than force other vendors to adopt Ogg. Refusing to support a superior codec would be a disservice to your users in years to come. Why hold back the majority of your users because 2% of them are on niche OSes?
One of the reasons Firefox 1.0 gained traction against IE6 was its many features that visibly benefited the end user. Its pop-up blocker, extensions, and tabs all gave users a better web experience, something sorely lacking in IE6. When Firefox 1.0 was launched in 2004, we weren't told, "use the experimental 1.5 branch, that one is better than IE6". Firefox 1.0 gained 10% of the browser market within a year of its launch because it was simply a better browser.
This brings me to Theora. Unlike the average Firefox user (or reviewer), who could easily demonstrate the many ways Firefox was better than IE6, reviewers, developers and content providers are hard-pressed to demonstrate the benefits of using the royalty-free Theora. Few people can demonstrate that Theora is a competitive alternative. If the same had been true of Firefox, its uptake would have stalled. Imagine Firefox having no features that differentiated it from IE6: no tabs, no extensions, and no pop-up blocker. What incentive would users have for trying it? Because it was built following W3C standards? Because you could view its source code? Because it was free to distribute? All irrelevant to the average end user. Theora will not be used on video sharing sites in the next 5 years if its only benefit is that it is royalty-free. It must earn its space on the content providers' hard drives. It must become better than H.264; not 'competitive', but better. Forcing websites to adopt an inferior format will not increase its usage. If a royalty-free format increases your bandwidth bill, you are not going to use it. Until engineers at the various video sharing sites can demonstrate that Theora is better than H.264 for daily usage, one-shot demos are not going to convince anyone.
With the rapid adoption of Windows 7, over 80% of all personal computers and laptops will be capable of decoding H.264 content within the next 5 years. The end user will have paid for the decoder. This provides Mozilla and Opera with an easy 'back door' to decode H.264 without paying a license fee. In fact, Opera already does this on Linux, and Mozilla is planning something similar for Linux in the future.
For Mozilla to claim that it is looking out for the Linux users in developing nations such as Brazil or Kenya is political posturing, seeing that the majority of Firefox users are on Windows. Had Mozilla restricted itself to Linux users, it wouldn't be earning $50+ million a year or have as much clout over the direction of HTML5. How many years have we been reading that 'this will be the year of Linux on the desktop', yet its active desktop market share is still less than 2%? Are we to let 2% of web users dictate the web experience of the remaining 98%?
What experience has streaming Theora provided users over the past year? Low-quality video and crackling sound, judging by Dailymotion, the leading video sharing site to have experimented with Theora. Is it any surprise that Dailymotion also uses H.264? Whether we like it or not, every benchmark has shown that H.264 is earning its hard drive space. Streaming video on the internet is a business, and businesses are prepared to pay the cost of distributing content. Until Theora provides the same video quality as H.264 whilst using a fraction of the bandwidth, it will not gain traction. With H.264 being used on smartphones, video game consoles, and other internet-accessible devices, H.264 is set to be the de facto video standard of the web.
Whilst many users are waiting for an alternative to the patent-encumbered H.264, Theora 1.0 or 1.1 is sadly not it. This does not mean that Theora 1.2+ will not be a credible alternative in the coming years, but at the moment it has a long way to go before it can soundly gain traction amongst video hosts. Until royalty payments hurt video hosts' pockets more than the bandwidth costs of a similar-quality Theora file, they will continue to use H.264. Theora developers have to work on bringing the equivalent of Firefox 1.0 to the IE6 fight, not the Mozilla Suite.
About the author:
I am a web developer who has been following the H.264 vs Theora debate from the sidelines.
It's getting rather tiresome to keep hearing about the Theora vs H.264 debate. Of course, it is important to understand what is at stake: freedom of information for publishers and their viewers. But on the other hand you have ease of use and quality.
This latter point is what makes things so rough for Theora: H.264 is supported in Windows and OS X, which together are the most widely used desktop operating systems, and obviously companies wish to reach the widest audience possible.
And the quality argument... well, I actually went so far as to build SVN/Git versions of FFmpeg, libx264 and libtheora and used them to encode a few clips, and the quality of the Theora clips was consistently just plain horrible compared to the x264 ones. Whether it's just bad defaults, I don't know, but then again, if you need to be an expert at twiddling with the settings in order to get viewable content, then it's still no better.
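(For anyone who wants to reproduce a test like this, here is a rough sketch of the kind of comparison I mean, done from Python by shelling out to FFmpeg. The source filename, output names and target bitrate are made up for illustration, and the exact FFmpeg flags vary between versions, but the point is that both encodes start from the same source and are constrained to the same bitrate.)

    import subprocess

    SOURCE = "test_clip.y4m"   # hypothetical raw/lossless source clip
    BITRATE = "1000k"          # identical target bitrate for both encodes

    # H.264 via libx264 (assumes an FFmpeg build with libx264 enabled)
    subprocess.call(["ffmpeg", "-i", SOURCE, "-vcodec", "libx264",
                     "-b", BITRATE, "-an", "clip_x264.mp4"])

    # Theora via libtheora, same source and same bitrate, so the comparison is fair
    subprocess.call(["ffmpeg", "-i", SOURCE, "-vcodec", "libtheora",
                     "-b", BITRATE, "-an", "clip_theora.ogv"])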
I know, but as we've been hearing about the virtues of Theora for the past few months, we must accept the reality that it isn't bringing to the video format wars anything of the magnitude that Firefox brought to the browser wars.
True, but that hasn't caused the world to migrate away from Windows to a free OS. It is good to have an open source alternative out there. Maybe Brazil or China will develop it into something that is a credible alternative to H.264. Necessity is the mother of invention.
Currently, developed countries will simply accept the costs of distributing H.264 content.
I personally am hopeful of Dirac becoming a major contender some day. I would have tried to encode the video clips I had with Dirac as well, but I couldn't make the libraries compile and had to leave it out of my tests.
Big business might, but non-profits, small businesses, organisations, groups and individuals won't. They won't be able to afford it.
Here is an example:
http://openvideoalliance.org/2010/03/lets-get-video-on-wikipedia/?l…
http://videoonwikipedia.org/
There comes a point (even for big business) when they finally realise they are paying for something that they needn't pay for. Sites like Wikipedia adopting Theora (and getting perfectly good video quality) will hasten that realisation.
Dirac is nowhere near competitive with Theora Thusnelda unless you go to very high quality video, say 1080p, which requires too much bandwidth for video over the web. Theora Thusnelda is very close in quality/bandwidth to h.264, and both of those are the best options at this time (from a purely performance point of view).
Theora Ptalarbvorm (which is still experimental) is considerably better again.
http://people.xiph.org/~greg/video/ptalarbvorm/
It appears as though both h.264 and Dirac will be noticeably behind Theora in quality/bandwidth comparisons when Ptalarbvorm is released.
Ok, I’m so sorry, but this was just too funny not to comment on.
Greg used parkrun, a common test clip, so I was able to encode the same video with x264, and here are my results:
http://kuukunen.net/misc/theora_x264_parkrun/
Apparently he downscaled the video to 360p and dropped half of the frames so it would be easier to encode. I also made a 720p version of the x264 video (that's four times the pixels, btw, at the same bitrate, of course), and upscaled the Ptalarbvorm pictures for easier comparison and to simulate full-screen viewing. You will see x264 doesn't even need the downscale to look good.
So check the videos and pictures and ask yourself: Which one would you rather watch?
PS. The "LOL" pictures are from the first frame with actual video in it (there's a frame of gray at the beginning, confusing some encoders' rate control) and show a total failure of rate control, but it could also be because Ptalarbvorm is not finished yet and has some bugs.
You don’t need to be an expert with the settings … if you don’t use FFmpeg.
http://www.mirovideoconverter.com/
http://www.firefogg.org/
Everyone can use either of these to get excellent quality Theora video.
I also make the observation that one can make a bad cake even if one uses good quality ingredients. It is always possible to muck it up. However, the converse is not the case … one cannot make a quality cake with bad ingredients.
Now the author of this article has fallen foul of this. There is an admission by the author that there are excellent quality Theora videos around. I have seen many myself, and using one of the tools linked above, I have made some of my own. It was easy; it required no more than a few clicks, and anyone can do it.
The quality of Theora video clips CAN be indistinguishable from h.264, even at significantly smaller file sizes. This is indisputable, because such video clips do exist.
On the principle, then, that one cannot make a quality cake with bad ingredients, the fact that excellent quality/bandwidth Theora videos exist means that the Theora codec itself is not a bad ingredient.
This does not mean, however, that there are no bad cooks around …
The discussion was about Theora versus H.264 at _equal_ bitrates, and Mirovideo doesn’t allow one to specify such. Of course it’s easy to throw out great quality video if you just set everything as high as possible and don’t constrain bitrate at all.
Oh, but don't worry, I'll make a comparison with Firefogg next, using the same bitrates as I used for my x264 encodes.
I made a Theora video using Firefogg by setting it to use two-pass encoding, and I also set the quality level to ten (10) instead of the default of five (5). I fully expected the resulting Theora video to be quite large, and my plan was to bring down the quality level setting in steps until I had about the same size output file as the source h.264 video, and then compare them.
It didn’t turn out like that. At the highest quality setting (10), Firefogg produced a Theora video file which was indistinguishable from the original h.264 video, yet the Theora file was only 72% of the filesize.
PS: I couldn’t have made this quality of cake if the ingredients weren’t up to scratch.
So you admit you were comparing apples to oranges and nothing you said has anything to do with what I said?
I just tried to encode the video clip I use for testing to Theora with Firefogg, and while it worked and the quality was good there was one hitch: the H.264 video takes only 2.17 megabytes and the Theora one takes 13.7 megabytes. That’s a huge difference.
I actually blame Firefogg in this case: either it does something screwy or it doesn’t honor the video bitrate I set.
Is the 72% an average value? How many test videos did you try? What kinds of video sources, what kinds of footage?
p.s. Video encoding is tricky and complex. You can’t make a generic conclusion here just based on a single test case.
I’m not a video expert … which is more or less my point. There was a quality setting … so I set it to the highest value, and I planned to take it back down in steps to come down to an equivalent filesize.
I didn’t have to back off the quality setting at all. At even the highest quality setting, Theora’s filesize was significantly smaller.
So what can I say? Even as a video novice, that is the result I got … deal with it.
I realise that video is problematic. However, the source h.264 video I used was professionally produced (it was a trailer for Avatar). The Theora video I made was made by a novice (me). The quality of the two videos was equivalent, and the Theora filesize was 72% of that of the h.264 one.
If Theora was a bad-performing codec as is often claimed by some in discussions such as these, I would simply never have been able to get such a result. That is a conclusion I can very validly make.
This comparison is quite useless. It does not really say anything.
Firstly, H.264 and Theora are formats, not encoders, and quality largely depends on the encoder. Even if you use a vastly superior format, you can still get a much worse result if you use a bad encoder. A crappy H.264 encoder can indeed give worse results than a good Theora encoder. And even if you use the best encoder for the format available, you can still get an extremely low-quality result, because good encoders are highly configurable and there are many settings that can totally destroy the quality (for example, in an H.264 encoder, you can turn off all advanced features that help the compression tremendously). So yes, it is perfectly possible to encode H.264 video with a vastly inferior quality/size ratio compared to a Theora video. What exactly does it say about the quality of the two formats? Nothing.
Secondly, re-encoding an already-encoded H.264 video is not fair, because the lossy H.264 compression has already "cleaned up" the original video; that is, spatial and temporal detail has been lost, which makes the job easier for the Theora encoder.
Thirdly, compressibility is yet another factor. For example, you can encode H.264 video at 8 megabits per second, and it will look great. Especially if the video is highly compressible. So, then you re-encode the video to Theora using a bitrate of 6 megabits per second, and it will still look great. So what? Does it mean Theora is better, because it needs less bits per second for the same quality? Of course that’s total nonsense. It only means 6 megabits per second is superfluous for most videos, and you could probably encode the original H.264 video at 1 megabit per second or less and it would still look great.
That's why all these "tests" showing how good Theora is are completely bogus, and that's why all serious audio/video quality tests are made by encoding files from the same source, using encoder settings that are generally considered optimal. Only then can you make any meaningful comparison.
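(To illustrate the first point about configurability: here is a rough sketch, shelling out to FFmpeg from Python, of how one could deliberately cripple an H.264 encode. The flags are real options FFmpeg exposes for libx264, but the filenames and bitrate are made up for illustration and exact option names vary between FFmpeg versions. Disabling CABAC, B-frames and extra reference frames throws away much of what makes H.264 efficient, so comparing such an encode against Theora says nothing about the formats themselves.)

    import subprocess

    SOURCE = "test_clip.y4m"   # hypothetical source clip
    BITRATE = "1000k"

    # A deliberately "dumbed down" H.264 encode: no CABAC (-coder 0),
    # no B-frames (-bf 0), a single reference frame (-refs 1).
    # Compare its quality against a default libx264 encode at the same bitrate.
    subprocess.call(["ffmpeg", "-i", SOURCE, "-vcodec", "libx264",
                     "-coder", "0", "-bf", "0", "-refs", "1",
                     "-b", BITRATE, "-an", "clip_x264_crippled.mp4"])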
But that is not my point. My point was that if you have a good result, neither the format nor the encoder can be bad.
Therefore, given a good result, Theora is not a bad format, and Firefogg is not a bad encoder.
Actually, it does. I used a professionally-made source video. If even professionals cannot get decent results out of h.264, compared to the efforts of a novice (namely, me), then h.264 can't be a good format. I can hear your protests now, but the fact remains that if it had been the other way around, that would have been taken as proof positive that Theora was no good.
Oh dear. Oh dear, oh dear.
1. Lossy codecs do NOT “clean up” original videos.
2. Throwing data away is the easiest part of compression. Whatever h.264 threw away could still have been used by the original encode, but that data was not available to Theora in my test.
The test that I did penalised Theora (as the SECOND lossy codec applied to the video) and not h.264.
I'll say it again so that you might be able to understand it: the h.264 was produced by professionals (it was a trailer for Avatar), and the Theora video was produced by a novice (me).
Well, that is true, but I didn't have any uncompressed source; in any case, such a test would only have given Theora more of an advantage than my test gave it.
My point is that with a better format and encoder, you can get an even better result. Possibly much better.
You probably don't know too much about H.264. Professionals could simply do what they could, given the restrictions they had to work with. H.264 has many profiles. When you're encoding H.264 video for Blu-ray players, for example, you can only use a subset of the H.264 features. When you want it to be playable in QuickTime (and the Avatar trailer is a good example, as trailers are traditionally made for QuickTime), you may have to use an even smaller subset of features, as the H.264 support in QuickTime is atrocious. So you have to use crippled H.264 with an inferior quality/size ratio. When you want the H.264 video to be playable on mobile devices, you have to throw away basically everything that makes H.264 worthwhile. So the compression ratio will be very sub-par, even if it was "made by professionals". This, again, does not say anything about the quality of the H.264 format.
“Professionally-made” video is simply an empty phrase, just like “digital quality” etc. It does not say anything about the quality at all.
Yes, they do. They reduce spatial and temporal details, which is basically what spatio-temporal denoisers do. People use spatio-temporal denoisers as pre-processing to increase compressibility when they’re encoding videos. Video with reduced spatial and temporal details (especially temporal, because of P and B frames) is more compressible.
That’s why the test is bogus.
The test penalised anyone who would like to know anything. The test simply does not say anything. I could perhaps agree with you that it says Theora is not extremely bad. But that’s pretty much the only thing it can say.
This is highly questionable, for many reasons. But until someone makes a real, serious test, then any further discussion is useless.
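(As an aside on the denoising point above, and purely as a sketch: FFmpeg ships a spatio-temporal denoiser called hqdn3d that people commonly run as a pre-processing filter to make a source more compressible, assuming your FFmpeg build includes that filter. The filenames and bitrate below are made up for illustration.)

    import subprocess

    # Denoise before encoding: hqdn3d is FFmpeg's high-quality 3D
    # (spatio-temporal) denoiser. Less noise and less frame-to-frame detail
    # means fewer bits are needed, especially for P and B frames.
    subprocess.call(["ffmpeg", "-i", "source_clip.y4m",
                     "-vf", "hqdn3d",        # default strengths; tune per source
                     "-vcodec", "libx264", "-b", "800k",
                     "clip_denoised_x264.mp4"])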
It does however say something about the suitability of h.264 for use as a codec for video on the web. No-one is saying that Theora should be used in Blu-ray players. Theora should, however, be used as the video codec for the public access web, because in that role it performs as well as h.264, and unlike h.264, Theora actually IS public access (in that anyone may use Theora without restriction).
This is the point that you studiously ignore.
It does not say much about the suitability of H.264 for video on the web, and it certainly does not say that Theora performs as well in that role. When you're encoding H.264 video for the web, you don't have to limit the features as much as you would if you targeted a specific device, for example, because software decoders/players are usually more advanced than hardware players. So you can take advantage of the powerful features H.264 offers. And this is not specific to H.264 – the same thing applies to DVD players: again, they can only play MPEG-2 video that conforms to a limited set of features. MP3 players may not support the advanced features available in software MP3 players. Hardware MPEG-4 ASP players cannot play video with all the advanced features that software codecs like DivX Pro Codec or Xvid offer, and so on. When you're encoding for software players, the quality/size ratio can usually be better. I can't see why this could not apply to Theora, too. It's a general thing. This, BTW, also explains why audio/video encoded by "amateurs" is often better than audio/video encoded by "professionals". Amateurs have the full arsenal at their disposal: the best encoders, and the best encoding features.
Yes, you may still have to take QuickTime into account, as H.264 playback on a Mac is still handled by QuickTime. But QuickTime can improve, too. Or you can just ignore QuickTime. And yes, when you want it to be playable on a less powerful (mobile) device that does not even have a hardware H.264 decoder, it cannot be state-of-the-art quality either. But HD video encoded in Theora may require a powerful CPU, too. Yes, Theora may be less CPU intensive than full-featured H.264. But then, more and more new devices come with hardware H.264 decoders built in. Plus, sites like YouTube already offer several different versions of their H.264 videos.
Okay Lemur, I'm going to be very blunt here, because you seriously bore the tits off of me with your repetitive regurgitated hyperbole. Theora MIGHT be a good codec, BUT, and this is key so bear with me, NOT in an Ogg container. It doesn't matter how many million hours you bash on about Theora; until we lose Ogg as the container it is wholly NOT, NOT, NOT suitable for use in any kind of streaming. End of story. Please move on to a topic WORTH advocating – i.e. a different container format. Otherwise it is the same tedious argument over and over again.
Thanks for this well-formed response to a lot of the ridiculous assertions.
Even if that is true — the point of the article is: people won’t care if your solution is just as good as the already established solution. You need to be significantly better.
Just like Firefox was much better than IE6. People didn't care about Firefox being open source etc… Firefox was just a lot better. If it had been simply on par with IE6, (almost) nobody would have cared.
I remember when Firefox was Phoenix and you know what? It wasn’t that remarkable. It certainly wasn’t groundbreaking. And it definitely didn’t make many IE users drop their coffee and utter “WOW!” It was a browser. It had tabs. It was hardly distinguishable from Navigator. Take it from one of the few nerds who got religion early on and tried to push it on all of his friends: It wasn’t the instant hit you’re portraying it as.
In essence, it was on par with IE, and almost nobody cared.
Then, something crazy happened. Lots of people got involved in the project (because it was a hell of a lot simpler than Seamonkey’s codebase), it was adopted by the community, and it improved at a lightning pace. The impetus behind Firefox’s rapid and dramatic improvement was not that it was immediately better than IE, but that it was transparent, that it was collaborative, and that it was the new community standard.
It grew into a product much better than IE, much faster than the IE team could adapt.
So please don’t come in here and spout revisionist history. Firefox wasn’t a slam dunk and this isn’t, either. This will be another pitched battle between the proponents of level playing fields and the proponents of the proprietary status quo. In 5 years’ time, some uninformed newbie will pop up on these boards and claim that “Theora was demonstrably better, and that’s why it killed h.264” while we’re arguing about the next open standards battle.
Same shit, different day.
Amen!
Mozilla is tri-licensed.
http://www.mozilla.org/MPL/
Mozilla Public License, version 1.1 or later
GNU General Public License, version 2.0 or later
GNU Lesser General Public License, version 2.1 or later
From the GPL v2, “b) You must cause any work that you distribute or publish, that in whole or in part contains or is derived from the Program or any part thereof, to be licensed as a whole at no charge to all third parties under the terms of this License.”
H.264 requires a royalty. One of Mozilla's tri-licenses is the GPL. Their hands are tied; they have no choice but to avoid H.264.
IE, Safari, Opera and Chrome can pay the royalty because their browsers are not freely redistributable. With those browsers you have to go to the vendor's site to download them, so the downloads can be tracked and the royalty paid. The GPL requires free redistribution. IE, Safari, Opera and Chrome don't come in your Linux distribution, but Firefox does.
Either the H.264 licensors have to give Mozilla a royalty-free license, or Mozilla has to get several hundred developers to agree to remove the GPL from the tri-license. If one developer says no, then the license cannot be removed.
Let's hope Google releases their competitive, royalty-free codec soon and immediately switches all of YouTube's content to it.
I don’t have access to hard numbers for these, but I believe about 100M copies of Firefox are downloaded each year. The public number for H.264 royalties is $0.75 a copy. That’s $75M/year in royalties from a company that has $50M in revenues.
What about using the native decoder instead of bundling a decoder, as Opera is doing?
There are native decoders available on Mac/Windows; you pay the royalty when you buy the OS. You can't build a native decoder into a Linux distribution; the GPL will stop you.
Flash has an H.264 decoder in it, which is why you have to go to the Adobe site to download it. They track the number of downloads and pay the royalty. They recoup the royalty expense by selling SDKs to make annoying Flash ads.
Yes, Linux doesn't have an H.264 decoder. Neither does it ship with libdvdcss, so even when your computer comes with a DVD drive, you aren't legally allowed to play the DVDs you bought. So what? That hasn't stopped anyone from downloading VLC or similar to get it to work. DVD is 12-year-old technology, and Linux users are playing DVDs just fine. Aren't they able to download the relevant GStreamer 'ugly' plugins and get their H.264 content to play?
No group of Linux users is marching with pitchforks against the various DVD formats that refuse to play on their OS; they simply wait for the hack, and use that. Isn't that illegal?
Unless, by your argument, we should ban all video discs that use patented video formats, or encryption.
My Linux machine does. It is embedded in the video card hardware, which I paid for, so therefore I have a license for it.
Under what insane logic shouldn’t I be allowed to use it?
Be warned that this is a trick question.
Well, seeing that is the case, why aren't Linux distros shipping with H.264 support installed? Are media players (e.g. Totem) able to decode H.264 video simply by using your video card's drivers?
We can all argue that you bought the DVD/Blu-ray disc, and thus have a right to do whatever you want with it, including decoding/decrypting it. I'm simply stating that a Linux PC in its default configuration (unlike Mac/Win7) is unable to decode H.264 content without installing software that isn't included in the default configuration.
No, it's more like 2M _a day_:
http://www.7is7.com/software/firefox/downloadguesstimator.html
So where is Mozilla supposed to come up with $500M/year for royalties? Of course there is a volume deal for that many licenses. But the cost of the volume deal will probably still exceed their annual revenue.
I read an article breaking down the costs of HDTV sets. $27 of the price tag is patent royalties.
By using the media systems installed by the OS rather than reinventing the wheel?
“Even if we were to pay the USD 5000000 annual licensing cost for H.264, and we were to not care about the spectre of license fees for internet distribution of encoded content, or about content and tool creators, downstream projects would be no better off,” Shaver explains.
http://www.osnews.com/story/22787/Mozilla_Explains_Why_it_Doesn_t_L…
But I bet they could get the price even lower by negotiating. They are a non-profit after all.
MPEG-LA does RAND licensing: "reasonable and non-discriminatory".
They'd discriminate if they gave a non-profit special terms after negotiation.
MPEG-LA might establish a generally available "non-profit" list, but then you could count on one hand the days it would take Microsoft to establish a non-profit "Windows Media Foundation" and donate its media player (and codec set) to it.
It’s in the financial interest of MPEG-LA to have Firefox include h.264. They could create a special license for non-profit decoders which means they would still be selling the commercial decoder at a fixed rate.
I’m not saying that Firefox should be fine with their situation but they do need to consider what is in the best interest of their users at this point. They could end up driving users to other browsers which would result in less revenue.
Sure, but this means that they’d have to give _everybody_ (or at least every non-profit) the same terms to stay RAND.
For this, websites would have to _really_ pick up the video tag first. So far, all we have is tech demos, and flash is still the incumbent.
And thanks to IE7 and IE8, it will stay that way.
Until websites really start dropping flash in favor of the video tag (see IE7/8), Mozilla isn’t forced to do anything with Firefox – they can adopt a wait-and-see attitude (or continue to promote their choice of a video format) without the user noticing.
Proposing that Mozilla shouldn't take a stand on this issue basically means caving in long before there's actually any need to do so.
Should there really be no way around h.264, they can still consider their options in a year or three (or more likely a couple more years, as web developers will get used to their workarounds). In 3 years we're already awfully close to the end of MPEG-LA's promise, so let's see how they wiggle and cringe while weighing the benefits of driving away users against finally cashing in on their patents.
Telling MPEG to shove it (by having a large and growing market reject their offer-you-can’t-reject) can turn out to be a real advantage: With a bit of luck (and the replacement of a couple of dinosaurs) they might reconsider their licensing approach in the future.
Web developers might be pissed off, but they have been for the last 10 years and IE6 is still going strong. They'll do whatever is necessary (i.e. provide Flash for the time being) to keep users on their site: the first thing users blame when pages don't work is the page, not their browser – that's why IE6 fallbacks are only being deprecated now, and IE9 is already announced.
Nobody is going to drop Flash while it is the best way to do cross-platform video. IE and Safari do not support Ogg, and FF only supports Ogg. That means that to be cross-platform, your best choice is Flash, and it will continue to be until one side backs down. Considering the investment that Apple and MS have put into h.264, I really doubt it will be them.
There is also the option of dropping the GPL versions.
If you dynamically link against a shared library which is proprietary, it doesn't imply that you break the freaking GPL. Firefox can use codecs that are already on the system.
However, even if Firefox does not officially support H.264, someone will write a plugin for it.
Who is going to pay the royalties on the H.264 libraries that Firefox links to? How are non-technical users going to figure out that they need to download them from countries without software patents? Geeks can handle that, normal users can’t.
No one, it’ll just be hosted in a patent-free country. Over time, it’ll become common knowledge where to get it. Most non-geeks didn’t know where to get Firefox either, heck a lot of them still don’t. They ask their geek friends to install it for them.
Microsoft is paying for it: H.264 is bundled with Windows 7. Nvidia is paying for it, since you can use Nvidia's PureVideo to play hardware-accelerated H.264.
Interesting thought – if you've got both Windows and Linux installed on your machine, can Firefox (on Linux) grab the H.264 codec from Windows? I don't know if that's technically possible or legal, I'm just asking.
In theory it's possible, yes. That's basically how we used to have to do it before the likes of FFmpeg and MPlayer came along: you'd install AVIFile and a codec pack containing a bunch of Win32 codecs. It worked, after a fashion, but it's not something we should really go back to.
And Apple is paying for it on OS X.
It’s only the open source OSes where the user has to download the codec. Many Linux distros stick H.264 codecs in a non-free repository, and ask the user to download them when it becomes an issue. OpenSolaris, annoyingly, requests that you buy a codec from Fluendo, or you can add a third-party repository manually (that’s where it begins to get newbie unfriendly) to get codecs. Not sure how the BSDs work.
Mr. Author, you will have to mention which qualities you are looking at when you call H.264 "superior".
A good fraction of the web was built around ActiveX not that long ago, and "pragmatists" said Firefox should support ActiveX because "users don't care about this stuff and they just want websites to work", and, to use your words, "Why hold back the majority of your users because 2% of your users are on niche OSes?"
I say: Firefox, hold your ground and stick to your principles. If things get too tough, implement a plugin architecture and load the support on demand at runtime as a last resort.
It's not over until the fat lady sings, and she is not going to start singing anytime soon.
You're comparing two completely different situations. First of all, ActiveX locked out Mac users, whereas H.264 is supported or easily installed on any operating system. Second, the alternative to ActiveX was a proprietary plug-in system derived from Netscape's, which already had an entrenched user base, so the decision not to use ActiveX was largely pragmatic. And the outcome? We now have two versions of plug-ins to choose from on Windows. ActiveX is still alive and well. So Mozilla didn't accomplish all that much after all.
I guess Mac users were already included in these 2%. Besides, comparing H.264 to Theora is just as valid as comparing ActiveX to Netscape plugins.
I’m perfectly OK with Firefox not supporting H.264 – I’ll simply watch these videos using Flash (or perhaps some other proprietary plugin), just like I am doing it now.
Theora will slowly gain traction, as there will always be content providers that can't afford an H.264 license, or whose content's license is inherently incompatible with it. To turn the article's argument around: once 98% of users have the Theora decoder installed, why would anyone worry about the remaining 2%?
Not quite sure what you mean. Worldwide, Mac use is 4+%, and 8% in the USA.
It certainly wasn't that popular when Firefox 1.0 struggled to convince people that they could live without ActiveX.
Besides, 4+% of PC users worldwide? Can you please quote some statistics? I've been in several different countries and my personal experiences don't line up with your statement.
Who uses ActiveX? Online games? I always killed it in inet.cpl when I used IE(6) (IIRC in both the checkboxes in the security(?) tab and the add-ons list), and I've never noticed ActiveX in the prefs of other browsers.
Anyway, all I've ever read about ActiveX is that it was a security risk.
I do the same with the Java options, except I occasionally come across an applet, so I temporarily switch Java on.
Web devs don’t get it. They never have ..
Nah, the people who build the web are morons. It's the people living in their parents' basements, trolling message boards, who are the true geniuses.
I appreciate you adding to the discussion of this topic, but I had to stop reading here:
You need to get out of your bubble and get a clue. Access to technology is deadly important to developing nations. Their ability to govern themselves with low-barrier technology makes an indescribable impact on people's lives.
Throughout Africa, people are using technology in empowering ways that are far in advance of the services available in America. The equipment might be old, slow and clunky, but the services are cutting edge: telephone banking and money exchange (poor people being able to actually use their phone as a bank, and pay even street vendors with SMS). I'm not talking about 'phoning your bank to transfer money'; I'm actually talking about using phone credit as _money_. They use the mobile phone to run their lives and their businesses in ways that the developed world hasn't even begun to catch on to yet.
H.264 means that the average person in Kenya can't just freely start a new world-changing service (like Ushahidi) because a licence needs to be paid and they don't have that money. Having a licence to view H.264 because you have Windows doesn't give you a licence to then encode too. And even FCP doesn't give you a licence to encode for commercial use either.
Free and open software that anybody can use, adapt and mash up is changing the world in developing nations. Free and open software did more to help after the Haiti earthquake than the closed systems did (OpenStreetMap became _the_ map for ground workers). The developed world, busy trying to sue itself to death, is going to be shocked by entirely new economic models rising out of the innovation happening in India, Kenya, Ghana and so on.
I'll accept that your article sees H.264 from the perspective of lazy, over-privileged consumers who use their computer for porn and YouTube videos.
Do you know what China, the leading developing nation, does when it does not want to pay for Western technology?
It writes its own.
http://www.eetimes.com/news/semi/showArticle.jhtml?articleID=189601…
Let the developing nations solve their own problem.
Do you know why? Because they are making use of what little they have. Should we run a campaign against their use of patented technology? Any chance that their cell phones are storing video as 3GP or H.263? Are you going to encourage them to convert their videos to Theora before distributing them? All irrelevant.
The point is that the MPEG-LA (of which Microsoft and Apple are a part) want to make sure that developing nations can't develop their own technology and stay stuck with America's way of doing things, with America's technology. Sure, Microsoft and Apple want developing nations to see the benefits of modern technology, but they want to make sure it's only on Windows PCs and iPhones/Macs; that's just business 101.
Yes, it is business 101. But do you expect that they became large companies because they had slaves (this is subjective) for employees so they could give away their products for free? Didn't VP3 originate from paid employees? Wasn't it because VP3 wasn't making On2 any more money that they decided to give it away to the community? Do you think that if VP3 were more powerful than H.264, On2 would have released it to the community, royalty-free?
If a developing nation chooses to adopt Microsoft or Apple software instead of a FOSS solution, is it Apple/Microsoft’s fault for marketing the ‘virtues’ of their products better than FOSS advocates? Do you believe that the developing nations’ IT Heads didn’t do a cost/benefit analysis to determine if there weren’t any cheaper alternatives?
How many companies would survive if they gave away their R&D? The last I've read, 70% of the Linux kernel is written by paid programmers. Aren't the companies behind the big Linux distros making money via customer/technical support? So if a developing country decides to adopt a (European) Linux distro, aren't they still paying for support/training, especially if the staff is unfamiliar with the distro? Nothing is stopping them from customizing their distribution, as the Brazilians have done.
The reason people pay for things is convenience. We can all argue that Microsoft's and Apple's products are too expensive, but if they simplify the user's experience, someone worked to make that happen. They are running a business, not a charity.
Yes, but we are talking about imaginary property here (and this is coming from someone who is not a fan of the GPL or FSF). Suppose that I independently discover/invent the very same techniques or algorithms that H.264 uses: I still have to pay patent royalties, even though I *never* used any of their resources. It is property that artificially exists through bad laws, where the one who comes first gets a monopoly.
Added to that, the work on H.264 is preceded by and based on decades of work by other researchers. How come they can collect money for it, when it relies on the common body of knowledge?
Copyright protects investments well enough, and it protects real things (actual source code that somebody wrote, which, for any major program, could not be written exactly the same way by someone else). Software patents are just intellectual colonialism, and an extortion method for troll companies. They need to die, sooner rather than later.
I think that's entirely true, and I think that software patents are holding the software industry years behind what could be accomplished without them.
In some fields, no one is going to risk implementing this or that piece of software or algorithm because either a) they know it is patented, or b) they don't know, but some troll might come along with some hidden patents and ruin years of hard work.
By the way, since the discussion is about H.264: Apple is the number one patent troll.
You may be interested, or perhaps not as you seem a bit dismissive about the whole developing nations thing, to note that Theora’s cousin VP6 was supposed to form the basis of China’s answer to the DVD:
http://en.wikipedia.org/wiki/Enhanced_Versatile_Disc
Contractual difficulties killed the project but if Google is serious about freeing VP8 then nations such as Brazil, Russia, India and China could be instrumental in making it a global standard.
Did you read about their HD DVD format, China Blue High-Definition (CBHD)?
Instead of adopting HD-DVD or Blu-Ray, they decided to develop their own format, using input from the DVD Forum. The result:
As I have said many times, people pay for things out of convenience. Developing countries have a choice whether to use patented formats or not. If they don’t want to pay royalties, they are free to choose royalty-free formats, or develop their own. Believing that outside forces are the answer, whether it be Theora or Linux, is short-sighted.
And since when is China a developing nation?
We’re talking about poor countries like you see in Africa and such. At this point, China is closer to the US than it is to those countries.
So you are correct, it is in transition. However, it is a pity that people paint Africa with one brush. Africa is not Somalia. Anyway, my point is, countries didn't develop on hand-outs; the people worked to get there.
Developing countries have shown that it is possible to live without the many luxuries we take for granted. Thus, if a large corporation enters a country and says, “use my patented product, it’ll make your life more convenient”, you can’t blame the corporation if the country accepts. Nothing is free.
They may use a digital bartering system but that isn’t more advanced than what is used in America. Or perhaps you would like your savings in the form of cell credits that are not secured or backed by the federal government?
You remind me of a guilt-ridden anthro professor I once had who would talk almost daily about how African tribes had superior lifestyles because they didn't have the same expectations as Westerners. She also would talk about how they will let Westerners live with them for years at a time. But did she ever leave her oppressive Western lifestyle for an African tribe? Of course not. She was a textbook case of a latte liberal who would never give up her Mercedes for tribal life.
Admiring something and being able to do it yourself are two very different things.
Nothing was stopping her from living with them for at least a year. She had no children and few friends. People hated her in fact.
She was just another guilt-ridden Westerner who laments the Western lifestyle while talking on an iPhone at Starbucks. Some Westerners should really just blow their brains out and save the earth if they think their lifestyles are so awful and destructive.
You know, some of us can appreciate Western comforts while at the same time acknowledging that simpler cultures come with their own rewards. It doesn't have to be a dichotomy where you are either on the good side or the bad side.
Maybe Theora needs a little push to get the ball rolling. More money, manpower, and knowledge would be a good start.
aye
For Mozilla to claim that it is looking out for the Linux users in developing nations such as Brazil or Kenya is political posturing, seeing that the majority of Firefox users are on Windows. Had Mozilla restricted itself to Linux users, it wouldn’t be earning $50+million/yr or have as much clout in the direction of HTML5. How many years have we been reading ‘this will be the year for Linux on the desktop’, yet its active desktop marketshare is still less than 2%? Are we to let 2% of web users influence the web experience of the remaining 98%?
Translation: We are bigger, stronger, and we outnumber you. Now shut up and we might let you exist. If not we might transform you into a bloody pulp on the ground.
Ah humanity, such a long way to go…
As a side note: I now know why many idealists become bitter cynics.
When has humanity ever followed the minority? And if they did, wouldn’t they become the new majority?
This is how the world has been since Adam was a lad, and altruism isn’t going to change anything. If following the herd didn’t have benefits, people wouldn’t be doing it.
As many have noted, we are all paying royalties on our PCs that come with H.264 decoders, either in software or hardware. Why not give us the option to use them, especially when the platform provides APIs to access this functionality?
Usually I agree more or less with the authors writing on OSNews; this time I don't.
I don't believe that Theora is as bad as the author of the article says it is. There are big companies spending a lot of money trying to make people believe just that. My experiences with the format are pretty good, and I like the idea of a free standard that's open to everybody, and that everybody can help improve.
Making H.264 the new web standard will just breed a lot more pirates, and I'm sure that the world will not benefit from it, while a few big companies getting paid the royalties will.
I am a fan of open standards, and most people who are into security are aware that "security through obscurity" is not the way to make a secure web standard. Also, bugs in closed systems usually stay around for a much longer time than in open standards. I'm not a fan of Adobe at all, but I'd rather depend on Adobe than on Apple!! Adobe have been lazy about fixing bugs, but Apple are evil b*st*rds.. Nobody should feel safe when Apple is in charge.. :p
Just about everything on my home computer is open source, and I think it should be a human right to be able to use the internet and my own computer without having to pay "taxes" and depend on greedy and lazy companies thinking only of money and power, and not of what is best for the users.
This comment is just to note that the article was kindly written by OSAlert user Preston5, and not the staffers like Thom, David or myself who have written about H.264/Theora in the past.
We always encourage users who have a problem with our articles to sound off in the comments, or even better, to write their own article and put it to the community to discuss, as in this case.
I never said that it should be the new standard, but honestly, if IE9+ supported Theora, do you think videophiles would start using it for their video files? Btw, have you noticed that the container format used by video pirates, MKV, is now gaining more hardware support than Ogg? As much as you may not agree, large companies watch what the pirates are using and make it convenient for them. If the pirates were using Vorbis instead of MP3, I'm sure there would be more Vorbis players on the market.
Do you count your ISP as one of the greedy companies that only think of the money they collect from you every month? As a human right, one can argue that it should be free.
No man is an island, and as much as you may disagree with them, big companies have as much place in this world as the little ones. If the herd follows the monopoly, they do so out of convenience. It is up to you to show them that going open source isn't as much trouble as the big companies portray it to be. As it stands, Theora is less convenient than H.264.
You do know that MKV is a *container* and not a *codec*, right? Virtually any codec, video or audio, can go into an MKV container file. Guess what codec typically ends up in a patent-free MKV container? H.264, funnily enough. The audio, however, is very often Vorbis.
Do not fudge the issue by referring to the Ogg container when you're actually referring to the codecs. The Ogg container, IMHO, is crap for anything more complex than a Vorbis audio track. Matroska, however, is well suited to just about any use you can think of, and you can literally put just about anything in it.
I know what I’m referring to, which is why I specifically said container formats. Do you know that the proposed HTML5 standard preferred Theora and Vorbis in the Ogg container? I won’t harp on the flaws of Ogg, but we shouldn’t think that the reason it isn’t adopted in droves is because of a carefully orchestrated plan by big companies. If the format makes your life miserable, you would be less inclined to use it.
Who cares what HTML 5 *prefers*? Their recommendations have so far meant exactly zero, to judge from the continued use of H.264. Unlike H.264 though, if mkv became the dominant HTML 5 container format, browser makers couldn’t claim licensing as the reason for not implementing it. It doesn’t matter what the standard recommends, since nobody listens anyway. Standards shouldn’t recommend, standards should mandate and those who do not follow the standard are not compliant. A recommended standard is one that will be ignored and screwed with.
I just realised something – we won't be able to block annoying ads when they're made with HTML5.
Yes we will. Blocking using CSS is very easy; it's how Adblock works. Not least because these HTML5 ads will more than likely be inserted by JavaScript anyway.
The good thing about HTML/CSS is that you can veto any of it, whereas with Flash, it's all or nothing. You can interpret HTML/CSS any way you want. Switching to HTML/CSS ads will give users _more_ control over ads than they've had before.
But, and this is what I always think when you throw this one in, ads are "all or nothing". Ad blockers generally block individual ads, be they Flash or not. This really is not a valid argument.
It's valid, because you can turn HTML/CSS content into anything you want. An animated visual ad with fancy fonts can be transformed into plain text by the user, just as you can do with web pages now: http://lab.arc90.com/experiments/readability/
But an ad is an ad. It is entirely possible to do the same with Flash, it just is not something anyone has bothered to do.
I would also put forward that CSS is entirely as evil as Flash with regard to excessive memory usage (albeit at the initial parse) and is extremely loosely interpreted by the different browser engine implementations. It's quite easy to have a page that looks awesome in one engine (say, WebKit) and utter crap in all of the others (mostly Trident, Opera and Gecko). I've seen many a heinous crime in the various CSS files I've inherited during my dalliances with web design (joys of working in a large company that often farms out parts of the "design" to agencies and then gets in-house developers to maintain the end results).
You cannot personally choose how Flash renders a given piece of content. You can with HTML. If a website is made entirely of Flash, you can't pick and choose what colours, fonts and sizes are used to your liking.
Yes you can. Flex supports CSS styles. You can restyle everything on the website at a whim just by swapping out a file, just like you would with HTML/CSS.
Really? I tried looking up how, and all I found were ways to use CSS at compile time. That doesn’t help the user who wants to set all fonts bigger, or ensure contrast. If there is a way to have the browser override the CSS, it’s drowned out in search results by all the pages of instructions for compiling CSS in. There’s also a question about it on StackOverflow, where the answer was that it can’t be done.
You do have a point; it's not 100% exactly the same as with HTML, because generally you do have to recompile each time you change the CSS. I have heard there are ways around this, but I never tried them. However, for the majority of websites I don't think this is the biggest problem (assuming you own the Flex code yourself and don't simply have a black box you got from some contractor).
I just did some more research and it turns out it is possible and pretty easy to swap out CSS at runtime, however that CSS has to be compiled to an SWF “style file” first.
We are talking from completely different ends of the spectrum. You are a web designer. I am a programmer. The web is just a script, a script that is interpreted (compiled) and rendered (executed). That is all. If you want to believe it is more complicated than that, you are welcome. Me, I see a bunch of code.
Flash is an element in a tag. So, sure, you can change the way that section of HTML is rendered; I don't get why you are trying to make out that isn't the case. What you mean is "I can't change the Flash content", and I'm going to counter: nor could you dynamically re-render a movie clip in a video tag. Flash is (though Adobe/[Macromedia] attempt[ed] to alter this) a "movie"-based format. It renders in a plugin, the same as a movie file will. It thinks in terms of frames and clips and such internally. Having said that, even if you look at the "repurposing" in real terms, it is just doing the same as Java/Silverlight/etc.: a general-purpose "virtual machine".
My point is that Flash is fully scriptable, in that a programmer can define parameters (much like Java and other plugins) that can be used to affect the file's playback. Programmers are generally led by example; no one is currently doing this all that much in Flash. Stating that "Flash can't be changed" is a bit like complaining about your web page because of the length/content of a YouTube video you embed, or the colour palette used in a PNG you use.
When Dailymotion stated that one of their reasons for adopting the video tag was:
I was left to wonder … how can playback of a binary file/format do all of the above? What differentiates the binary containers FLV/MP4 from Ogg? I promptly dismissed their rationale as nonsense.
I hope Opera will add options to disable HTML5 audio/video/etc stuff to its F12 quick menu. Actually, I wonder why they haven’t already…
I actually clicked on a Ford ad here the other day and spent about 10 minutes there.
You owe me a beer Thom.
XP and Vista don't have H.264 codecs, which seems to be a common misconception. So that's 60% of Firefox users on a "niche" platform (that's what, 180 million people?). I'd imagine that if Mozilla claims to be looking out for folks in developing nations, they'll be talking about XP users too (unless Linux adoption is doing much better than I thought).
The basic argument that free codecs need to get better, like Firefox got better, is fair enough, except the author is taking the role of the many, many people who said Mozilla was rubbish; who said Phoenix was rubbish, Firebird was rubbish, Firefox 1 was rubbish, etc. A hard core of people with a bit of vision needed to be using Firefox and its ancestors, and developing sites that used standards, in order for them to develop into what they are today.
There appear to be just enough people with vision (e.g. Wikipedia and Mozilla, Dailymotion and maybe Google) that royalty-free codecs will survive and possibly thrive. You can't really expect the people who would happily switch their site to all Flash or Silverlight, or code only for IE, to give a toss, but then they never have.
It’s also worth noting that as soon as video comes up people start talking about giant centralised sites like Youtube. If I was talking about using patented text then folk would ask about individual blogs and websites not Wikipedia, if I mentioned patented image formats then people would think about their holiday photos, not Flickr. Yet you talk about video and everybody assumes you’re talking about Hollywood movies in 1080p being served by some giant conglomerate. Why is it so crazy to think that individuals might want to put videos of their children dancing on their own sites without having to worry about Youtube being hit with DMCAs because there is a Prince song playing in the background?
I don’t understand. Are you saying that the misconception is that they do have h.264? My article was focusing on the coming years with Windows 7, which is bundled with H.264 codecs. I’m sure that the same way Linux users are able to view H.264 content, XP/Vista users are doing the same now, via the various codec packs.
I had originally written bloated, but took it out. The point being, the average user wanted a browser, and not a software suite (browser,mail client,WYSIWYG editor). It was fine for developers, but not the average user.
I’m glad there is a royalty-free alternative; all I’m saying is that the reason most videophiles don’t use it, is because H.264 does the same at a fraction of the size.
I'm getting a little sick of this lie. If you have an nVidia video card, it's bundled with PureVideo… if you have Flash installed, it comes for free with Flash (and its H.264 codec appears to install natively as well – at least for MPC). If you bother to install DX9c, it's part of the DXVA spec.
It’s available – and is no more difficult than ***SHOCK*** installing flash.
Excellent article, thanks.
I agree — it's exactly what's holding back other things such as "desktop linux". Being 90% as good doesn't help. Even being 100% as good doesn't. Why should I do all the work and change to something else when I gain nothing or even lose 10% goodness? You have to be clearly better to make people change their ways and switch to your solution.
Few realize that this is the reason free OSes are still a niche market. Until the licensing costs are so prohibitive that you have no choice but to go with the 90% solution, you will pay for the mainstream solution; it simply is less risk.
I thought your point about Theora advocates not being able to show a clear advantage was dead on. It was always disputed, as we saw here on OSAlert, and that type of controversy will not rally the support needed to challenge a well financed opponent. If everyone agreed the quality was equal, it would have made it much harder for Google and Apple to argue for the use of h.264. I think the most compelling argument for Theora was within the scope of the purpose of the W3C. Arguments about quality should have been regarded as irrelevant to the overall purpose of open standards, and it should have been pointed out that proprietary codecs like h.264 would still be used in plug-ins like Flash.
However I don’t think the same point about near-quality applies to Linux because I don’t believe it is as close to Windows as Theora is to h.264. When Linux was expected to just provide a browser on netbooks it still caused problems for OEMs. I suspect support costs are actually higher because you cannot just set a Linux install to auto-update and trust it to never require command line assistance. Linux has been very successful where it is frozen (cell phones) or has an admin nearby (servers). Google is using their own system update for ChromeOS that is not affected by kernel changes (the whole system is wiped) which I think says a lot.
Oh please, here you go again. This thread has zero to do with Linux. Got that? Why is it you always try to divert these threads?
I was responding to the parent.
Why argue about h.264 against Theora and not "h.264 against x264" as codecs instead?
H.264 is the specification for a video codec. It’s not actual software, it’s just specs that others can use to implement a codec. x264 is an implementation of the H.264 spec (or at least of parts of the spec).
Because X264 *is* an H.264 encoder and decoder? It implements the H.264 standard and, in countries where software patents screw everyone, X264 is not legal just as any other unlicensed H.264 encoder is not.
Ok, thanks.
I thought x264 was an open-source version of it, as "legal" as DivX or Xvid is compared to MPEG-4.
This is a very common confusion. It all boils down to understanding the difference between a format and a software product. x264 is not a version of anything, it is a software product. BTW, it is not an encoder and decoder, it is only an encoder (in free software, decoding is done via the FFmpeg H.264 decoder), while the term "codec" means "enCODer/DECoder", that is, a software or hardware product that encodes/decodes data (to/from some format). Which also explains why H.264 is not a codec, even though it is often erroneously referred to as one. H.264 by itself does not encode/decode video, it only describes how to make (software or hardware) implementations that actually do the job. In other words, it is a specification, a format. (Theora is a video format, too, and libtheora is the reference software implementation.)
As for DivX/Xvid vs. MPEG-4 – firstly, MPEG-4 is not a specific format, it is the whole standard that contains many parts and individual formats for audio (AAC), video (SP/ASP, AVC…), container (MP4) and other things. Like Part 10, Advanced Video Coding (AVC), which is technically the same thing as H.264, and Part 2, Advanced Simple Profile (ASP), which is an older, less efficient video format used by many previous-generation codecs like Xvid, DivX Pro Codec, 3ivx, FFmpeg MPEG-4 etc. Which means they all use the same video format (MPEG-4 ASP). That’s also why they’re compatible (in other words – video encoded with the Xvid codec is not “Xvid video”, you don’t encode “to Xvid”, you don’t create “Xvid files” or “play Xvid”, your player does not support “the Xvid format” etc.) The DivX codec is “legal”, as it is a commercial, proprietary product made by DivX, Inc., and the DivX company has paid the MPEG-4 license. But open-source MPEG-4 ASP codecs like Xvid or FFmpeg MPEG-4 may not be legal to use in countries where the patents apply.
I’m not sure if I can stand reading another argument about quality settings between lemur and werecat.
You can’t win at this point by pushing it as an exclusive option. It was already at a huge disadvantage by not having YouTube support and IE9 pretty much seals the deal. Arguments related to quality or licensing flexibility don’t matter at this point.
If you are serious about getting Theora adopted you need to forget about pushing it over h.264 and work on getting it into HTML5 as a backup codec or even under its own specification. Propose a compromise that includes both so for-profit web publishers can use HTML5 without having to pay royalties.
I’m not sure if I can stand reading another argument about quality settings between lemur and werecat.
Hah, point taken. I shall refrain from posting about that topic anymore then; I doubt anyone is interested anyway. I merely wished to point out that at least _I_ am not seeing Theora being as good as H.264, no matter how much I like open standards.
I said that to lemur WAYYYYYYYY back in the first one of these articles, and I stand by it. It doesn’t matter what you, me, mozilla, or the w3c think. As long as google says ogg is unacceptable for youtube, the argument is effectively over.
No – they absolutely should not. IE will likely do this; it is a single-OS browser built by a vendor who also builds the OS it runs on. Safari does this, and it is a dual-OS browser built by a vendor who also builds one of the OSes it runs on (and builds an entire media playback framework for the other). So what does Safari on Windows use – QuickTime of course. Chrome is a browser built by a major content distributor (and OS developer) – it supports H.264, but not in its open source builds (i.e. Windows only; maybe OSX, not sure, but definitely not Linux). All of these browsers support H.264 out of self interest – they have an agenda. All three of these companies distribute video in H.264 format to one degree or another for purposes of financial gain.
Mozilla and Opera are NOT in the same kind of position. They both make browsers – that's it. They don't use their browsers as market leverage – making browsers is their main business. They don't care about leveraging their browsers to promote some other proprietary product – they care about making browsers that people want to use.
Sure, they both care about making money to some degree – I’m not claiming they are totally altruistic. Far from it – just saying their main business is their respective browsers. It makes no sense to them to tie their browser to platform specific media frameworks – because they are both cross platform browsers and it is much more straightforward technically to just implement the stack in the browser.
It is not only simpler – it is BETTER because it allows the browser to have low level control of the stack and therefore have tighter integration with the browser itself. It also makes behavior more consistent between platforms.
You have the problem completely turned inside out. Why can't MS and Apple do what Google does? They support Theora ALSO. It would cost nothing for MS and Apple to support Theora – there are no licensing fees involved. However, it DOES cost money for Mozilla and Opera to support H.264, and at least in Mozilla's case they actively dislike H.264 because they believe promoting a patent encumbered codec is generally a bad idea.
I have no issue with the arguments that H.264 is better technically. I don't really care about that. I have no interest in forcing anyone to use Theora – if they prefer H.264 I have no issue with that. What I DO care about is that the standard serve everyone – particularly those that just want to be able to publish a video on the web and not have to pay anyone to do so. The only codec that is currently in a position to serve that need is Theora. It costs the browser makers nothing to support it. So my question is WHY WON'T THEY?
Everyone knows why – they have financial incentive in seeing H.264 become the standard delivery mechanism for video. They all have licenses for it already. They all have built content libraries using it that they want to cash in on. The standard for internet video should not be decided based on these kinds of considerations.
The standard should be based on the same criteria required for other parts of the W3C standard – universal accessibility, royalty-free patents, open development process, etc. Theora meets all of the criteria for adoption by the W3C, H.264 does not. It's that simple.
I am a web developer. I make websites. I would like to distribute video using a codec that I know doesn’t have any muddy legal intricacies tied to it. I want my videos to work on as many browsers as possible. I want the standard to support me doing this. I therefore MUST conclude that Theora is the best choice currently, because there simply isn’t any other choice at the moment.
Maybe not. I don’t care really.
I agree with that – it does need to get better to compete with H.264 on the technical merits. But again I don't care.
Who is saying anything about forcing it on content distributors? I'm certainly not. I don't really give a crap about that – they can continue using H.264. The only people talking about forcing something that is financially burdensome is you. You want to force browser makers to either purchase an expensive license, or neuter their product and give up most of the advantages of having a video tag by loosely coupling it with the platform-specific media framework (which is roughly the equivalent of just using a plugin).
All I advocate for is having a default codec that is acceptable to all users, not mandatory. It certainly should not be the ONLY option – MS and Apple and Google implementing H.264 in the manner they do now is perfectly acceptable to me. Theora will not remain the only game in town, as soon as other royalty free codecs become stable enough to become included in the standard they should also be implemented. The point is I think for the standard to be truly useful – it should mandate a least common denominator in the user agent. That doesn’t mean it should be the only supported codec, and it doesn’t mean the standard should require content producers to use it, it just means that the user agent should be required to implement it. Since it is by definition royalty free, what is the harm in that?
Let H.264 compete against THAT if they truly think their product is worth it. Let them try and convince Mozilla and Opera they should implement it. If they are successful, so be it – but the standards process should not be corrupted by its backers to forward an agenda.
But … they are already doing this on Linux (Opera at least). I linked to a bugzilla entry which shows that Mozilla is planning on doing the same thing. They intend to or already use GStreamer. My aim was to get them to do it on the ‘non-free’ OSes.
They both implement Canvas and SVG using platform-specific technologies. If they had chosen OpenGL for all platforms, they would've had hell on Windows. Mozilla uses GDI+ on Windows. Opera states that it uses both OpenGL and Direct3D, but any developer who follows 3D graphics technologies on Windows knows that Direct3D drivers exhibit fewer bugs than OpenGL. Royalty-free or not, OpenGL isn't worth the hassle. Both vendors intend to use Direct2D in future versions of their browsers. GDI+, Direct3D and Direct2D are all Windows-specific technologies. Nothing is stopping them from plugging into Media Foundation (the DirectShow replacement) to implement the video tag (and H.264 playback).
How about having less code to maintain? If H.264 decoders are already working in their OSes (Mac/Win), why add another library to their browser? As one x264 developer stated in regards to existing video tag implementations:
It is interesting that you highlight all the things that don't matter to you, then berate me for what matters to me (and many others). The Cortado applet has been around for years, even before Flash adopted H.264. I'm sure if Theora took up half the bandwidth of H.264 whilst giving the same quality, either 1) Flash would have adopted it, or 2) more video sites would have required users to have Java installed. No one is forcing you to use H.264. If you had reservations about using Java before, it is free now: use Theora.
All fine and dandy, but what is the universally accessible, patent and royalty-free image format for HTML4? Is any image format less accessible than the next?
Pity. I strenuously went through the arguments why Firefox rose to prominence due to its quality, rather than its freeness. I guess Firefox is the only ‘free’ product that FOSS advocates can point to as upsetting a monopoly due to its quality.
Just because they might choose to use GStreamer doesn’t make it platform specific. They are free to use GStreamer specifically for the same reasons they would be free to use Theora – it is available to them to use without patent concerns. And GStreamer is not platform specific anyway, it can be implemented on ANY platform…
GDI+, Direct3D, etc. are implementation details for Canvas and SVG – they are means to an end. Mozilla doesn’t have to pay Microsoft to be able to use GDI+, and GDI+ is certainly not required to implement either. On the other hand, a license for H.264 IS required to implement it and someone has to pay for it…
I think we are obviously coming at this from completely different angles. I don’t care about H.264 support, so I really don’t want to concern myself with how easy or difficult it might be to implement it. I want support for a codec that I can actually use to publish a video without legal concerns… I don’t see the same problem you see. The problem I see is that there is no default and mandated codec in the HTML5 standard. I do not see a problem with Mozilla not wanting to implement H.264 – because H.264 is by its very nature not a valid choice for a default codec (it is patent encumbered and requires licensing fees)…
The problem of pushing H.264 content to users is mostly solved – use flash. That is NOT the problem the HTML5 standard is trying to solve…
And they will NEVER be consistent if they rely on using the platform’s media framework because guess what, that isn’t consistent either…
I can use Theora just as easily as you use H.264. The point is NEITHER is globally implemented and the standard as it currently exists doesn’t mandate that either ever be. I want the standard changed – I want a default and REQUIRED codec. Theora is a candidate, H.264 is not. Theora costs nothing to implement but time, H.264 requires royalty payments. I really don’t see how this isn’t obvious to you.
There is not, and has never been, a standard for image formats… There was never a need because originally there were only 2 obvious choices, GIF and JPEG – and neither was believed at the time to have patent issues, and neither required royalty payments. GIF did end up having patent issues (for encoding at least), and was largely replaced by PNG for that very reason.
Look, you seem to be entirely missing the point. The problem that HTML5 is trying to solve is making it possible to distribute video without relying on proprietary technology (i.e. Flash). I have nothing against proprietary technology per se – H.264 is VERY nice technology. But it IS patent encumbered. That makes it unsuitable for standardization. I do NOT have a problem with it being used, far from it – what I have a problem with is that the HTML5 standard as it exists now doesn't solve the problem it was meant to address. It CAN'T unless it mandates a default codec – because three of the major browser makers have vested interests in a proprietary technology and they won't back the standard willingly if it mandates anything but their pet codec…
I want at least one default and mandated codec. It doesn't have to be Theora – but that seems like the best choice at the moment. It doesn't have to be the best codec either, but it does have to be royalty free. I am not an end user… I want to be able to create video content, I want it to be universally accessible, and I don't want to have to pay for the privilege of being able to do so. That is what the internet is all about, and that is WHY we have standardization.
Neither do they have to pay to use DirectShow or Media Foundation …
As patent-encumbered as H.264 is, that hasn’t stopped video hosts from using it, and they are the ones that the video tag is intended for, no? They know the risks; we aren’t saving them from something they don’t already know. You want a mandated codec that few have even heard of or have incentive to use. Just as GIF users moved on to PNG when they found the costs prohibitive, aren’t video hosts going to move to Theora when they find the cost of H.264 prohibitive?
As I said before, you can require that users on browsers which don't support Theora have Java installed. Everyone knows that all video sites won't drop Flash overnight, but it is a transition phase; they'll still use the object/embed tag inside of the video tag (a rough sketch of what that detection might look like is below). You can start by doing the same with your websites, and simply give people a hint that for a better experience, they can use one of the browsers that support your format natively.
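To make that concrete, here is a minimal sketch of the kind of script a page could use during the transition. Everything specific to the page – the element id "player-box" and the hint text – is made up for illustration, and the Flash/Cortado fallback is assumed to already sit inside the video tag in the static markup:

// Minimal sketch of transition-phase detection (illustrative only).
// Assumes the page has a container with the hypothetical id "player-box"
// and that the static markup already nests an object/embed (Flash) or
// Cortado applet fallback inside the video element.
var probe = document.createElement('video');
var hasTheora = probe.canPlayType &&
    probe.canPlayType('video/ogg; codecs="theora, vorbis"') !== '';

if (!hasTheora) {
  // Browsers without native Theora fall through to the nested plugin
  // fallback; we just add a gentle hint pointing at capable browsers.
  var box = document.getElementById('player-box');
  if (box) {
    var hint = document.createElement('p');
    hint.appendChild(document.createTextNode(
      'For a better experience, use a browser with native Ogg/Theora support.'));
    box.appendChild(hint);
  }
}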
Ok. So that addresses Windows 7. Fine. If it makes you feel better, I personally don't have an issue with Firefox or Opera supporting H.264 by using DirectShow/Media Foundation if they decided that is a good solution to the problem. That addresses one version of one platform they run on. But regardless, I'll state again that while I am fine with this approach being used to solve the problem you are harping on, I personally don't care because H.264 support is not the problem _I_ want solved.
Please explain to me again why it is so burdensome for MS and Apple to support Theora? I think I ask that question every time I post and no one seems to answer it…
That is where I violently disagree. The video tag is intended specifically for those that would NEVER post a video in H.264 format, because they cannot afford to… The point of creating the standard is to level the playing field to allow anyone to do this.
Saving them? Those that use H.264 can continue to use it – the last thing I want to do is save anyone from anything… I want the standard to address those that cannot for financial or moral reasons use H.264.
The codec DOES NOT MATTER. I am not a proponent of Theora. I just want a royalty free codec, whatever it may be, as long as it is reasonably usable. Theora is close enough for now. The fact that it is royalty free is incentive enough for the rest of us.
Yes exactly. Except that can't happen this time because 2 (maybe 3) of the major browser makers are hostile towards supporting the royalty free format. The only way to facilitate that happening is to MANDATE they support the royalty free format in the standard.
Sure I could do that, but I would rather the standard be changed to mandate support for a royalty free codec so that is what I will argue for.
I never said that video should be posted exclusively in H.264 format. I’m saying that if a machine already supports it, expose the functionality. Is it the users’ fault that they have an H.264 decoder installed on their machine?
That's fine. However, locking out a widely used proprietary format is no better than mandating a rarely used royalty-free format. Don't reject a proprietary format out of altruism. That would be akin to browser vendors holding a boycott against GIF because its patents hadn't expired, even though they didn't have to pay royalties. Who would benefit? Certainly not users.
Firefox didn’t get 25% of the browser market because it complained that most websites were coded for IE. Theora advocates shouldn’t complain if their users don’t have a Theora decoder installed. You can’t expect a format to come from obscurity and become a mandated standard, when few websites used it. If 1/4 of the video sharing sites were already using Theora, there would have been credible ground for it to be mandated as the standard.
Related tangent: it is akin to when Haiti asked France to pay it US$21B in reparations back in 2003. Can Haiti, one of the world’s poorest nations, get a G8 nation to pay them 21 billion dollars? Riiiight.
Even though Firefox is arguably better than IE, after 5 years it only has 1/4 of the browser market. That should speak volumes about how the world works. If you have video content that is worth viewing, your users will either be willing to: (1) view it in the browser that you recommend, or (2) install the required plugin to view your content. Plugins haven’t been removed from the HTML5 standard. From the coder’s perspective, it doesn’t matter which tag is used to add video to a site, whether it is embed, object or video; no one uses a plain text editor to build websites anymore.
This is where you and I seem to land with this…
1. YOU – Firefox and Opera should support H.264 on Operating Systems that already have it installed as a system codec. I’m fine with that if they can work out the technical details of doing it to their respective satisfaction. I am not anti-H.264 as I have said many times before…
2. ME – Microsoft and Apple should support Theora in the same manner as Google, Mozilla, and Opera – meaning “out of the box”, i.e. it just works on their browser, no bull crap about it working “if the codec is installed”… If the browser is HTML5 capable, it should support Theora with no user intervention required.
If BOTH happened, I think we would both be happy… Agreed? The problem I see is that 1. is definitely possible, even likely eventually… 2. will happen sometime well after hell has frozen over – not because it is terribly difficult to do or would cost anyone a considerable amount of money or time – simply because greedy bastards don’t want to put their cash cow in jeopardy.
And so it goes the way it always does (as least in real life) – Goliath kicks David’s ass. Same shit different day and so on.
I have a question… You say you do web development. Do you use PNG format? How long did you wait for Microsoft to finally actually fr*cken support it properly… It took about 10 years if I recall correctly. 10 years! And they didn’t even have a horse in that race – they just didn’t support it out of sheer indifference to what the vast majority of professional web developers actually wanted. That is the kind of shit open source solutions to software problems are up against – indifference.
So I’m sorry if I seem indifferent to the plight of H.264 – it seems indifference is contagious.
Yes, that is still an issue, considering that 20% of the market still uses IE6. I have to use GIFs in cases where I need portable transparency.
The MS comparison is a good one. Microsoft held the web back with its proprietary technologies, just like the closed and patent-encumbered H264 will hold the web back.
First, IE WAS better than Firefox at most things simply because many sites were designed with IE’s quirks in mind. Firefox became popular because it was light and quick, but even when it first came out I frequently found myself having to revert to IE.
Theora is important because of what it represents, something unencumbered with license fees.
It wasn’t mentioned in the original article (as I thought I had limited space, but that was an Opera textbox quirk), but I was going to state that when Firefox came onto the scene, it had to support proprietary features such as document.all if it wanted to work with existing websites. Getting web developers to update their code when they moved on to the next project wasn’t going to happen.
Microsoft tried to take over the internet with Frontpage and IE (write mshtml and read mshtml).
H.264 is patented and therefore IT. IS. UNACCEPTABLE. End of discussion.
Nice thought and I agree. Sadly, the big guys don’t and it is they who ultimately decide such things. We, the users, can use Theora and push it all we want, but if the big content providers don’t want it they won’t adopt it, and it is the most asked for content that ultimately decides the standards whether they be official or de facto. We need only look at how Flash rose to such a position to see this.
GIF was patented as well, by 3 separate companies each claiming to own a patent on it. We ignored it and used GIF anyway.
Maybe we should fight against software patents in general, not refuse to use software that is much better than its competition and remain in the stone age of computing just because some stupid activists say so.
I'm not Stallman, using some stupid old technology because it's free, reading my e-mail in a console because it's "free", writing my texts in Emacs because it's "free", hacking around to watch ugly Ogg/Theora in a console because it's "free", playing stupid text games because they're "free".
I use something because it is "best", not "free", not even "better". I would prefer it being "free" and open source, but if it isn't, I'll still use it if it's the "best".
Define “better”, please. BETTER on the web means free and open. H264 is neither.
The web exists because of free and open standards. Using H264 would be the same as going back to IE6 as the dominant browser. It messes up the web.
Well, "better" might mean that, but "best" means best performance. That's why 90 percent of people still use Windows, that's why 90 percent play real games, not Frozen Bubble on Linux. That's why people use Photoshop, not Gimp. That's why real coders use MS's Visual Studio, not Anjuta. That's why people use Office and don't use Open Office. That's why people were beginning to use FF over IE, and that's why people start to use Chrome over Firefox.
Performance and quality come in 1st place, license comes in 2nd place, maybe even in 10th.
After 10 years of trying, and trying, and trying, and hoping that it may become better, and using open source, I've realized that I simply could do something better with my time than using, hacking and fixing open source software and ending up with poor performance and quality.
Is that also why people chose YouTube over other online video services with much better video quality?
Is that also why people choose the Wii over the technically superior PS3 and Xbox 360?
Is that also why VHS won over BetaMax despite the better video quality of the latter?
Is that also why MP3s became massively popular despite usually having worse audio quality than a standard CD?
Is that also why SACD failed to replace the CD despite the superior audio quality?
I see.
Ok, so Wii is the #1 home console, why?
But this isn’t about open source. H264 is open source. But it’s still a closed technology.
And the point was, free and open is the foundation of the whole web. You are saying that the web should be a closed invite-only network.
Of course I'm not saying that. I'm saying that HTML 5 should implement a damn tag and not care about the actual codec. The W3C should not recommend any codec and should instead let content providers and users decide what suits them best.
I disagree. There should be a free and open baseline codec. Video is too important to be owned by a cartel. Just like the rest of the web is too important to be dominated by Microsoft.
The web needs to be free and open. That’s what allowed it to get where it is today. And if it hadn’t been for Microsoft, it would have developed much faster.
Ok, I have some time to kill…
I think by the time people noticed the services with better video quality, it was too late, Youtube was already too popular.
This one’s way too complex. There are so many things that affect console sales, like the game selection, brand, novelty factor (for wii’s control scheme), etc. Video formats don’t have as many factors.
This one’s been talked to death, but Betamax wasn’t really as “superior” as people tend to think.
* Apparently video quality wasn’t THAT much better.
* “The original Sony Betamax video recorder for the NTSC television system could only record for 60 minutes, identical to the previous U-matic format, which had been sufficient for use in television studios. JVC’s VHS could manage 120 minutes, followed by RCA’s entrance into the market with a 240 minute recorder.” This matters to people A LOT. Look at LaserDisc too.
* VHS cassettes were cheaper
* VHS players were cheaper and there were more choices
* I think one reason often mentioned is that Sony tried to control the format too strictly, so in this way you might be onto something
* Then there’s of course the real reason: They say Sony wouldn’t allow porn on Betamax
Now we get to the more relevant ones. But comparing MP3 to CDs doesn’t work. They’re totally different technology used for different purposes.
Superior audio quality? People pretty much can’t hear the difference after CD quality. Why would they need another format? It doesn’t matter if it’s in theory better quality. If it doesn’t sound better, why bother?
Because… Wii is Open Source and Free?
Umh, I wouldn't say H264 is open source. The specs are available and some of the encoders are open source, but there's no "H264" source to open.
…It wouldn’t cause it to be a “closed invite-only network”
Ok, so basically you’re both wrong. Of course technical performance isn’t everything. But in the case of video… it sure means a lot. Let’s go back to MP3 for a sec.
You compared MP3 to CDs, but what you should have done is compare MP3 to Ogg Vorbis. Vorbis IS technically better AND free. (As in liberty, unlike MP3) But people still prefer MP3. Of course in this case MP3 was the one that became popular first, so it was supported by hardware etc. But there’s one thing…
Comparisons like that are usually useless.
In the case of audio, for example, the quality difference doesn't matter THAT much. Bandwidth and HDDs have been big enough for a long time for people not to care too much. Encoding video is different. It's MUCH harder. A standard CD can hold 80 minutes of uncompressed audio, or about 800 Megabytes, but 80 minutes of uncompressed 30fps 1080p 8-bit 4:4:4 video will take about 896 Gigabytes (the back-of-the-envelope numbers are below).
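Just to show where that 896 GB figure comes from (decimal gigabytes assumed):

// 80 minutes of uncompressed 1080p, 30 fps, 8-bit 4:4:4 video
var bytesPerFrame  = 1920 * 1080 * 3;          // 3 bytes per pixel, one per colour channel
var bytesPerSecond = bytesPerFrame * 30;       // 30 frames per second
var totalBytes     = bytesPerSecond * 80 * 60; // 80 minutes = 4800 seconds
// totalBytes = 895,795,200,000 bytes, i.e. roughly 896 GB
// For comparison, CD audio: 44100 Hz * 2 channels * 2 bytes * 4800 s, roughly 847 MB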
With video, there are more things to play with, which helps with compression… but this also causes much bigger differences between video encoders.
And once you get to "near lossless" quality, audio or video, small improvements start to result in huge increases in bitrate. So 128kbps MP3 isn't that much worse than 128kbps Ogg Vorbis… Most people won't notice the difference, especially if they don't try to look for it. With the video bitrates of internet videos, the difference WILL be obvious and people WILL get annoyed. And/or the files need to be bigger, and then people AND the companies will get annoyed.
Unlike game consoles, video formats/encoders don’t have too many factors, let’s check a few.
Current popularity: H.264 wins (Popularity creates more popularity)
Hardware support: H.264 wins hands down. I know we’re talking about web here, but this affects popularity.
License issues. Theora wins. Although it’s not as bad as some people tend to think.
Video quality: H.264 wins by such a wide margin it’s laughable. I’ve seen comparisons where Theora loses to MPEG-2 ( http://x264dev.multimedia.cx/?p=102 http://x264dev.multimedia.cx/wp-content/uploads/2009/08/quality_cha… ) Not to mention the one I just did on the earlier comment ( http://kuukunen.net/misc/theora_x264_parkrun/ )
Oh and here’s one thing people seem to miss: Loads of people are talking about things like “Next year they will develop Theora so much it will overtake H.264!”
WRONG!
H.264 is a video format, it doesn’t change.
Theora is a video format, it doesn't change.
The most popular encoder for Theora is called libtheora, so very often people use “Theora” to also refer to the encoder. I guess that’s fine, even if slightly wrong. But the encoder has to make a stream that’s compliant to the standard. Meaning, unlike software, you can’t just add more and more features and change everything. You have a set of tricks you can use, defined by the standard, and you’re limited to them. Otherwise it wouldn’t be Theora, but some other format.
And because Theora is based on an obsolete standard, VP3, comparing it to H.264 isn't even fair. It's like building a house out of straw and trying to make it better than a house made out of stone. It won't happen.
So what you are saying is that quality didn’t matter.
So what you are saying is that quality didn’t matter.
But all things considered, you are again saying that better quality didn't matter. Other things mattered much more.
So, again, what you are saying is that other things than quality decides on usage.
So what you are saying is that quality doesn’t matter.
Nice dodge. You know that what you are really saying is that quality doesn’t matter.
There are certainly open-source encoders and decoders.
Why yes, that is exactly what it would cause. When you rely on locked technologies you need permission from the owner of said technology to use it. For example, to use the web to its fullest just a few years ago, you needed IE.
And the same is the case for video, as YouTube showed.
You just admitted that quality didn’t matter. Why are you changing your story?
It doesn’t need to. It only needs to be good enough.
So there is no Theora 1.1 or 1.2?
Theora 1.1 and 1.2 didn’t change everything.
I went through them mainly because I was bored. But my main point was that you were both wrong. Quality does matter, but it’s not always the most important point. Why was Blu-Ray chosen over HD-DVD? I’d say quality mattered there, but again wasn’t the only reason.
Saying “quality doesn’t matter [at all]” is totally missing the point.
And in the case of CD vs. SACD, my point was that there basically WAS no quality difference.
People like Youtube have already been shifting towards using H.264 instead of Spark for their Flash videos, even though not everyone has a new enough player and decoding requirements go up, thus causing problems. They sure seem to care about the quality.
Again with the useless comparisons…
Don’t compare H.264 to IE. With H.264 you have plenty of choice for browsers and decoders and encoders, also free ones.
If you HAVE to compare it to something, compare it to GIF. I remember when GIMP was unable to output GIF due to licensing issues.
People didn’t really care and used it anyway.
With GIF vs. PNG, the situation was different though… PNG was better and supported things like alpha channel. (Ie. PNG was better. Higher quality, etc)
And as I said, they've been moving towards H.264 now. Besides, the sucky quality was necessary because of the HUGE amount of bandwidth. They had more videos than others and more viewers, and as far as I know, they weren't really getting too much money from advertisement in the beginning…
No I didn’t. I just went through the stories YOU chose where technologies of arguably better “quality” lost and explained that there are ALSO other factors that come to play.
Exactly. It isn’t.
Yup. You must be thinking about libtheora again. (Just check Wikipedia.)
The specs have been tweaked a bit to fix bugs etc, (Same as H.264.) but the bitstream format was frozen already in 2004.
And now you know why.
And you other people, would you please stop with the IE vs. Firefox comparisons.
Sure, it does show that open source can get popular if it has quality, but that’s the problem with Theora.
Like the commenter that dragged me into this mess, I’ve seen so many open source evangelists shooting uneducated guesses about the quality. Sure, I’d love it if Theora was better than H.264. It isn’t. Not even if you wish really, really, REALLY hard. It just isn’t better. And unlike Firefox, for the reasons I’ve mentioned, it will never be. It. Just. Won’t. Happen.
PS. Speaking of VP8 and stuff, this might be interesting: http://x264dev.multimedia.cx/?p=292
Though the bitstream is the same, improvements in encoders can lead (and have led) to improvements in video quality. Could it theoretically one day be superior to h264? Yes. It probably won't be, because h264 encoders can be improved, too.
And this is the fly in the ointment. If everyone stopped developing video codecs for 5 years, yeah, Theora could own the market. The truth is, something else will emerge. Most likely commercially backed and as patent laden as h.264. The point is, definitely, that Theora needs to literally be better than h.264 tomorrow, not next year, not even next month. Tomorrow. It then needs to steadily get better over the next 6 months and needs to make h.264 look absolutely pathetic and old school. If it doesn't, it will be an also-ran.
No, it doesn’t need to be better. Quality doesn’t matter. Other things dictate success. Otherwise the Wii would have flopped.
No. Quality doesn’t matter.
But the bandwidth bill does; that is why the YouTube engineer made his infamous statements last year. As I've pointed out, and as many in here have already experienced, Theora looks like crap at low bandwidth. Content providers aren't going to provide their users with a crappy looking video that buffers all the time, when the equivalent H.264 file doesn't buffer at all, and may even look better. It has to earn its space on the hard drive.
So quality might not matter (to you), but what about user experience? Theora would lose on both counts. Or would you condition yourself to say: "it looks like crap, but it's freedom"?
Ok, this starts to be really rubbing it in, but I guess there’s no choice. Besides, I’m bored.
“Could Theora be superior to H.264?”
No.
As I said the formats themselves don’t change.
So the real question is “Could libtheora be superior to the best H.264 encoder?”
No.
You can quote me on this. Even if magically all development on H.264 encoders stopped right now, Theora would never be able to touch even the current level. The encoders already use such a huge array of features that are impossible for Theora.
But what deserves a special mention:
Could? COULD?
I mentioned it before, but somehow people seem to think Theora is the only thing under development. Let’s see how true this is…
http://mirror01.x264.nl/x264/changelog.txt
vs.
http://svn.xiph.org/trunk/theora/CHANGES
Yes, they use slightly different granularity, but the libtheora changelog is detailed enough to mention even “minor build fixes” and “Clean up warnings and local variables”.
So let's assume one line in the libtheora changelog is equivalent to a whole revision of x264. Since the start of 2009 there have been 42 changes. (2009 because it seems to have been very active compared to the earlier development.)
What about x264? Quick calculation gives me 440 revisions since 2009.
*440* !
That’s approximately one revision each day! And most/many of those include multiple changes in one revision.
I’ve had the privilege of following x264 development very closely. It even includes some of my code. Which is why all these uneducated claims seem so funny to me.
For example:
“Theora Ahead of H.264 In Objective PSNR Quality”
from http://news.slashdot.org/article.pl?sid=09/05/07/2352203
(The test was bugged)
“This might not pose that much of a threat to H264, sounds like another OGG or FLAC. Superior in a lot of qualities but largely ignored by the majority”
from the slashdot story. scored 3, Insightful
“It appears as though both h.264 and Dirac will be noticeably behind Theora in quality/bandwidth comparisons when Ptalarbvorm is released.”
From “lemur2” in this very thread!
I think this says the most:
“While I’m still not entirely sure about which one is better from a pure quality standpoint (a lot of contradicting reports on that one, and I’m not skilled enough to perform my own tests).”
from: http://www.osnews.com/story/22812
…what?
The free software evangelists’ FUD sure seems to be working…
I'm glad you wrote all that, as I tend to want to believe people who actually contribute to projects over hopeless fanboys.
It’s sad that people on both sides of this argument (mainly Theora, naming no names) seem to believe repeating the same information over and over again makes their standpoint more valid.
“The free software evangelists’ FUD sure seems to be working…”
Amusing comment, since x264 is GPL’d Free Software.
Heh, yea, I know. I was juggling around with different wordings, even edited it once, but I couldn’t figure out a good one so I settled with that one. But I hope the message got through and people understand which group I’m talking about.
And they all showed that you are wrong.
No, it’s a fact that quality is the least important part.
That’s irrelevant to the discussion, which is how YouTube established its dominance in the first place, and it was not through quality.
You do not have a choice in codecs. You have to pay their extortion fees, full stop. H.264 is closed and patent-encumbered. Just like IE, it will hold the web back.
No, other factors dictate the outcome. Not quality.
I’m thinking about Theora 1.1 and 1.2.
x264 is open-source, but that’s not what the web is about. The web is about open standards. H.264 is a closed, patent encumbered technology.
It doesn’t need to be. Quality is not important.
Hooboy, here we go again.
H.264 is not a codec.
It's a video compression standard that defines a video coding format. ("Format" would be one nice word for it.)
http://en.wikipedia.org/wiki/Codec
says: “A codec is a device or computer program capable of encoding and/or decoding a digital data stream or signal.”
Some people tend to stick with the interpretation that codec is COder AND DECoder.
I know lots of people use the word “codec” incorrectly. Actually so many, I try not to use it myself to avoid misunderstandings. People tend to mix up codecs with any combination of 1) decoder 2) encoder 3) video/audio format 4) container format
So in case of H.264, you DO have a choice in codecs. There are free and open source encoders AND decoders. Like x264 for encoding and libavcodec for decoding, which is used in projects like mplayer, VLC, ffdshow and ffmpeg (which is the project that includes libavcodec) for example.
Even if H.264 was a codec, what the hell do you even mean “no choice”. If Theora was a codec, and you used Theora, then… you wouldn’t have a choice either…
So any more misunderstanding? Oh here’s one:
http://en.wikipedia.org/wiki/Theora
So since you didn’t check Wikipedia and still talk about “Theora 1.1 and 1.2”, you’re either totally ignorant or a troll and I have no idea why I’m writing this.
Well, you might enjoy your blatant video compression artifacts, but other people don’t.
Here’s another way to look at it:
In the end it doesn’t really matter what the end users think, if all the web sites are using H.264.
bitrate = bandwidth and storage space = money
So, this very simplified version of how the companies like the video sites think:
income = revenue from people using the service – bandwidth costs – storage costs – license fees
Dropping quality means less bitrate, so less money lost, but it also means that some people caring more about the quality go to other sites. So they have to find a balance between the two.
The sites don’t actually compare the quality of two videos with the same size, they compare the size of two videos with the same quality. (Or find a balance between the two)
So in the end Theora costs them more money than H.264 even WITH the “extortion fees”. (That are currently free for internet video anyway…)
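To put rough numbers on that trade-off (every figure here is invented purely for illustration, not taken from any real site): suppose the same perceived quality needs about 500 kbit/s in H.264 but about 800 kbit/s in Theora, and a site serves a million 5-minute views a month.

// Toy model – all numbers are assumptions made up for this illustration
var viewsPerMonth  = 1000000;
var secondsPerView = 300;   // 5-minute clips
var h264Kbps       = 500;   // assumed bitrate for the target quality
var theoraKbps     = 800;   // assumed higher bitrate for the same quality
var extraBytesPerView  = (theoraKbps - h264Kbps) * 1000 / 8 * secondsPerView; // ~11.25 MB per view
var extraBytesPerMonth = extraBytesPerView * viewsPerMonth;                   // ~11.25 TB per month
// That extra transfer (and storage) is why a royalty-free codec can still be
// the more expensive option if it needs a higher bitrate for the same quality.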
Sometimes storage space can be more important than bandwidth too, which is apparently why Youtube has (some/most/all? of) their H.264 encoded with a sucky profile, so that it is compatible with the iPhone and other hardware. This way they avoid having an additional file with the iPhone profile, trading bigger bandwidth for less storage needed. This wouldn't even be possible with Theora because there really isn't hardware for it. (Which is another reason why hardware decoders that don't even support video tags affect internet video.)
Codec, whatever the hell you want to call it. The simple fact remains that it’s completely closed. If H264 has a monopoly, then you have no choice because you have to pay their extortion fees.
As for Theora 1.1 and 1.2: http://en.wikipedia.org/wiki/Theora#Encoding_performance
Ok, I’ve been mostly talking about the video quality point of view, but since that has to be brought up anyway, let’s delve into the magical world of “I Am Not A Lawyer, but…”
Funnily enough, only a tiny fraction of the people who talk about the licensing issues and patent encumbrance have probably actually read the licensing terms. One reason might be that they're not readily available. MPEG LA only provides the summary:
http://www.mpegla.com/main/programs/avc/Documents/AVC_TermsSummary….
However it even says: “The licensing terms summarized below are for informational purposes only. They are not an offer to license and may not be relied upon for any purpose. The AVC Patent Portfolio License provides the actual terms of license on which users may rely.”
MPEG LA (LA stands for License Administrator) is a firm that handles the licensing of technologies such as H.264, MPEG-2 and FireWire. It doesn’t actually own the patents, but it’s a lot easier to talk with MPEG LA than licensing each patent separately: there’s apparently ~25 companies with more than 1000 patents that are essential for use of H.264.
By the way, the patents are from loads of countries, so you’re not safe if you’re from Europe or outside USA.
So… The AVC Patent Portfolio License is actually an agreement between a Licensee and MPEG LA that you have to form.
Luckily with a bit of Googling, I managed to find a copy of the agreement between DivX Inc and MPEG LA from one of DivX’s regulatory filings that is publicly available:
http://divx.client.shareholder.com/common/download/sec.cfm?companyi…
The summary seems to be pretty good for explaining the license terms, so let's go through the cases where you might have to pay royalties to MPEG LA (and therefore to the companies holding the patents). A summary of the summary, if you will.
Basically there are two groups of licensees and two sublicenses for each case:
(a) encoder and decoder manufacturers
(a.1) sublicense for branded encoder and decoder products Sold to End Users
(a.2) sublicense for branded encoder and decoder products Sold on an OEM basis as part of an operating system
(b) video content or service providers
(b.1) where End User pays for each title or pays for subscription
(b.2) where remuneration is from other sources (like advertising)
I’ll be focusing on a.1 and b.2 for obvious reasons.
But this is where the legalese starts kicking in. The license redefines lots of words. According to the Portfolio License:
“Sale (Sell) (Sold) (Seller) – shall mean any sale, rental, lease, license, copying, transfer, reproduction, Transmission, or other form of distribution of an AVC Product or the Transmission by any means of AVC Video either directly or through a chain of distribution.”
So even providing free download to a decoder would fall under the license, because it’s “Selling”. (With capital S)
What’s worse, even the Consumer is included in the article 2.1 of the AVC Patent Portfolio License.
And furthermore, widely circulated news about the (b.2) part for internet video being free doesn’t mean you don’t have to license it. You still have to contact MPEG LA for a license, but you don’t have to pay royalties.
So that leaves us a.1. The royalties start only after 100,000 Units Sold, so for small businesses, that’s not really a problem, although I guess you still have to do the licensing.
********
Ok, so that was Part 1: the theory. Now let's go to Part 2: the practice.
“AVC Product(s) – shall mean any product or thing in whatever form which constitutes or contains one or more fully functioning AVC Decoder(s), AVC Encoder(s) or AVC Codec(s). AVC Product(s) shall not include OEM AVC Products.”
So if we assume the source code of an H.264 encoder or decoder is not a "fully functioning" decoder or encoder, this might explain two things: 1) why projects such as x264 and ffmpeg are not under too much threat, and 2) why those projects don't have official binary distributions.
On the other hand, there are projects like VLC and ffdshow, who do provide the binary and haven’t licensed anything. So why aren’t they sued and pestered?
You see, MPEG LA aren't stupid twats that go around suing everyone. VideoLAN is basically a bunch of guys who don't really get huge monetary rewards for their work. Suing projects like that wouldn't help anyone, because 1) it wouldn't even kill the project, they could just stop providing an official binary, go underground or host it in another country, 2) it would mean lots of people wouldn't be able to decode H.264 video made by royalty-paying licensees, and 3) they wouldn't get any money out of them anyway and would look very evil for trying.
Mozilla is sort of a special case, since they’re pretty big. They get $50 million dumped on their doorstep each year by Google, so they might be a real target for MPEG LA.
Here's a tip for Mozilla or Opera, though: use the decoders from the OS. Opera is already using GStreamer, so I don't know why they couldn't add DirectShow support, so pretty much everyone would be able to decode H.264 videos on Windows without including a decoder in the download. (On Windows Opera ported a GStreamer implementation that only decodes Theora, but on other systems it could handle all the formats GStreamer normally can.)
Or would it be bad that all browsers can’t decode all video-tags? Well that’s how it is already, just that there’s no way to fix it.
In any case, in theory, if you so much as think about H.264, you have to form a license, and if you ask the lawyers, everything is patented, maybe. In practice, you don't have to care, and unlike what PresentIt is saying, I don't have to pay any "extortion fees" for using the encoders and decoders.
There have been “patent-encumbered” video formats before and I haven’t worried one bit.
********
So now, Part 3: how does this all relate to Theora?
You might be thinking: “Aaarrghh! Patents! Licenses! Loopholes! Fear of litigation!! This is why I want Theora!!!” So even if in practice, you have nothing to worry about, in theory, you could get sued, so you want to use Theora to be safe.
Wrong.
If you Google a bit, you can find the interview where MPEG LA CEO talks about Theora being patent-encumbered too.
So let me teach you about patent portfolios, and especially video compression patents: They suck.
There is no such thing as a "patent-free" video encoder. Look at VC-1 for example: Microsoft claimed it was royalty-free, but suddenly dozens of companies jumped out of the woodwork claiming patents on it.
If Theora becomes popular and of commercial value, the same thing might happen. After decades of careful legal struggling, companies have managed to patent such a large array of very general algorithms and methods that it's practically impossible to make a video encoder or decoder without infringing them.
Theora is based on On2’s VP3, which they assume is not under any patents any more. They added their own modifications to the format and with each new version of libtheora, they add code that might be, and probably is, patent infringing. It’s just practically impossible to check.
According to an interview, Xiph's only defense is that they have made their algorithms and methods publicly known, asking any company to contact them if Theora uses something that company has patented. But why would the companies tell them what they think is patented? Not to mention it would be a HUGE task to go through all the patents and all the source code, and it wouldn't help the company one bit. It's a lot better to wait for a format to become popular and THEN say "we have patents on this!"
Logically, also MPEG LA’s official response to all this is “No comment”.
So would it matter in practice? Probably not. In theory? Definitely. Exactly the same situation as with H.264.
In regards to your last point, Theora advocates have used the defence that if such a company exists (one that holds patents on Theora), it would find Google to be an attractive target to sue.
Again, I don’t really know how these things go, but I’d imagine the worst case scenario might go something like this:
1) Theora gets popular, used in most of the video service sites as the standard HTML5 format (be it a real standard or not)
2) The relevant companies go through their patent portfolios and find patents that are related
3) They start publicly claiming patents on Theora
4) They form a patent pool together
5) They start selling licenses and are prepared to sue anyone using unlicensed Theora, starting of course with the big ones like Google
The patents might be US only, but would that help? It would still effectively prevent Theora from becoming the common standard.
By the way, just to give some kind of taste of what kind of patent minefield the (video) compression field is, look at something like arithmetic coding. Arithmetic coding is an old technique for entropy coding, everyone knows how to implement it, but it’s also heavily patent-encumbered.
As far as I know, the patents on the basic methods have pretty much expired, but patents on all kinds of optimizations still stand and you can run into them accidentally, unless you’re very careful, and even then you should be prepared for possible litigation.
This is how the Dirac people handled it:
http://lwn.net/Articles/272973/
Another way is to use a specific implementation of range coding which is known to be patent free. According to Wikipedia, this is just what some people have done:
http://en.wikipedia.org/wiki/Range_coding
The method used in H.264 (CABAC) is of course patented. Probably because when some kind of big standard is being planned, companies will try to push their own patents into the standard as much as possible. Meh.
As far as I know, Theora uses Huffman (which causes yet another 15% inefficiency.) so this doesn’t specifically touch Theora, but I took this just as an example, because it’s a small, but important part of modern video encoders and patents around arithmetic encoding are somewhat known.
While looking around, I also ran into a link that among other things lists patents on arithmetic coding:
http://www.faqs.org/faqs/compression-faq/part1/
But there was also another example explaining the situation:
So a method simple enough that it can be reinvented, not once, but twice, is (maybe) related to a $120 million lawsuit? Sounds nice.
And somehow even the patent office fails to notice there is already an existing patent on it? If THEY don’t notice it, would a bunch of open source software people have the legal resources for going through the (tens of?) thousands of patents that they might be accidentally infringing with each new optimization?
But, as I said, I am not a lawyer, and you might dismiss this as "I dunno, but what if?!" Yet the patents in this area REALLY are that stupid and hard to avoid. As the patent list for arithmetic coding in that FAQ says, "The list is not exhaustive."
For goodness’ sake, Google, release VP8 under favourable terms so I don’t have to wade through any more of these endless Theora vs. H.264 debates.
So the article author does not see the value of a web built on free and open technologies.
He wouldn’t mind the entire web being nothing but commercial, proprietary stuff, I gather.
Who needs open standards, eh?
Fail.
You can argue as much as you want about quality and everything. There is one thing you can be sure of:
Users don’t want to pay (directly) to view media on the web. If you start charging them to use YouTube, another site will take over and become the new YouTube. If watching videos there involves one easy extra step, users will take it; look at it like installing Flash nowadays.
I really don’t care which codec gets chosen, as long as it is not restricted to certain platforms.
I am sick of all the trouble the free software people bring to the table all the time. They should keep the darn politics out of the equation and make decisions based on the merits of the technology.
I know that is something FOSS people cannot conceive. If they had their way, we would all be using some open-source VHS-style tape format instead of Blu-ray.
Technological advances come second to them; their ideology/religion is their altar!
You can keep politics out of the equation but you cannot keep the law out. Legal issues must be weighed in any technical evaluation.
Let me see if I can explain with a traditional Car Analogy:
You are an engineer in an automobile company and you are trying to decide which engine design to use. One design is old and won’t cost you an extra dime, but it only gets 25MPG. The other design can be had for free and gets 30MPG, but there’s a company that asserts the right to sue companies using that design and collect a fee for every car that has such an engine.
Now, technically, the second engine is better because it goes further for the same money, but there’s an additional risk involved. You put yourself and your company in legal jeopardy if you adopt the second engine. As part of your evaluation you must ask yourself: Is that legal risk worth it for an extra 5MPG?
The analogy is flawed on several levels. For Mozilla, for example, the second engine design is not free but in fact costs more than the net profit made on the sale of each car.
I’m having a problem with the argument that supporting the <video> tag is somehow very meaningful and important for the common end-user. I don’t think they’d notice a difference between Flash and native video support.
There’s really nothing “legacy” about Flash from that point of view. Flash offers hardware acceleration and a nice GUI for the video. Firefox’s current implementation of the (Theora) video player isn’t as good; it feels like a bad Flash video player.
Users won’t even understand what it means to have native support for H.264 in the browser; probably nobody even tells them they’re not using a Flash-based player.
Do you remember the first generation of Flash video players? Compared to the ones available now, there has been vast improvement.
The <video> tag is programmable and only at the very beginning of its adoption. Improved control interfaces are being developed and will be common soon; a small sketch of what scripting the element looks like follows the link below.
http://www.theregister.co.uk/2010/03/29/brightcove_html5_announceme…
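To give a concrete idea of what “programmable” means here, a minimal sketch (TypeScript against the standard DOM API; the element IDs are invented for the example) of replacing the browser’s default controls with the page’s own, which is exactly the territory successive generations of Flash players have been iterating on:

```typescript
// Minimal sketch of custom <video> controls. Assumes a page containing
//   <video id="player" src="..."></video>
//   <button id="toggle">Play</button>
//   <progress id="progress" value="0"></progress>
// The IDs are made up for this example.

const video = document.querySelector<HTMLVideoElement>("#player")!;
const toggle = document.querySelector<HTMLButtonElement>("#toggle")!;
const progress = document.querySelector<HTMLProgressElement>("#progress")!;

// Replace the default play/pause control with our own button.
toggle.addEventListener("click", () => {
  if (video.paused) {
    video.play();
    toggle.textContent = "Pause";
  } else {
    video.pause();
    toggle.textContent = "Play";
  }
});

// Keep a custom progress bar in sync with playback.
video.addEventListener("timeupdate", () => {
  progress.max = video.duration || 1; // duration is NaN until metadata loads
  progress.value = video.currentTime;
});

// Clicking the bar seeks, just like a Flash player's scrub bar.
progress.addEventListener("click", (e) => {
  const rect = progress.getBoundingClientRect();
  const fraction = (e.clientX - rect.left) / rect.width;
  video.currentTime = fraction * (video.duration || 0);
});
```

Seeking, volume, overlays and playlists can all be built the same way, so the gap between a bare <video> element and a polished Flash player is a matter of page scripting rather than anything missing from the tag itself.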
Why does a Flash video company branching out to use the video tag hurt Theora adoption?
Previously they would have needed to add another codec to their transcode backend *and* develop an HTML5 delivery system. Now they just have to add a new codec, which means they are one step closer to open video.
It’s debatable whether they will, though their CEO seems to have a fair grasp of the issues:
“What few people realize is that while H.264 appears to be an open and free standard, in actuality it is not. It is a standard provided by the MPEG-LA consortsia, and is governed by commercial and IP restrictions, which will in 2014 impose a royalty and license requirement on all users of the technology. How can the open Web adopt a format that has such restrictions? It can’t. Google will make an end-run on this by launching an open format with an open source license”
from: http://techcrunch.com/2010/02/05/the-future-of-web-content-html5-fl…
(On a similar note, John Gruber recently announced that TED talks had “switched” to H.264, when I’ve been watching them in that format for a long time. What they actually did was stop requiring an iPhone app and/or a Flash player to deliver that H.264 video; they now have a <video> tag implementation as well. Again, a step in the right direction.)
Given that publishers don’t want to lose out on iPhone support, they’ll either have to be willing to use H.264 or develop an iPhone app that will play their videos in some other format, whether that is FLV or Ogg.