The debate about HTML5 video is for the most part pretty straightforward: we all want HTML5 video, and we all recognise it’s a better approach than Flash for online video. However, there’s one thing we just can’t seem to agree on: the codec. A number of benchmarks have been conducted recently, and they highlight the complexity of video encoding: their results point in both directions.
I won’t claim here that I know anything about encoding video, or about which options yield the best possible results. All I can do is rely on the expertise of others, but in doing so it becomes quite hard to figure out, as a layman, how all these various codecs compare to one another. Let’s look at three (relatively) recent encoding benchmarks.
The first comes from Jan Ozer of the Streaming Learning Center. Ozer compared the results from Theora (1.1) to H.264 at various bitrates, and they show that H.264 simply performs better. “These tests are very aggressive, but purposefully so – at very high data rates, all codecs look good,” Ozer argues. “In particular, YouTube encodes their H.264 video at 2mbps, about 2.5X higher than my tests. So my conclusion isn’t that Ogg is a bad codec; it’s that producers seeking the optimal balance between data rate and quality will find H.264 superior.”
Ozer readily admits that this is his first foray into Theora encoding, so he asked the community for input on how to get the best results. Xiph’s Greg Maxwell took him up on this, and noted that several of the settings employed by Ozer weren’t optimal. “The biggest reason why the 1.1 clips have surprisingly poor quality is because these are strictly buffer-constrained encodes,” Maxwell notes. “The buffer constraint means that users with a very bandwidth-limited connection can play back without stalls and excessive buffering. So it’s a good thing to have if you’re streaming. But – it’s absolutely brutal on the quality, and it’s not a restriction that your h264 encode has been subjected to.”
Maxwell performed his own benchmarks last year, and he drew two conclusions. First, that Theora is better than H.263. No surprise there. Second, that Theora is competitive with H.264. “In the case of the 499kbit/sec H.264 I believe that under careful comparison many people would prefer the H.264 video,” Maxwell concluded. “However, the difference is not especially great. I expect that most casual users would be unlikely to express a preference or complain about quality if one was substituted for the other, and I’ve had several people perform a casual comparison of the files and express indifference.”
So, we have two benchmarks going either way. Luckily, we also have a third benchmark, by Martin Fiedler, which places H.264 firmly in first place. “The H.264 format proves that it’s the most powerful video compression scheme in existence,” Fiedler writes. “The main competition in the web video field, Ogg Theora, is a big disappointment: I never expected it to play in the same league as x264, but even I didn’t think that it would be worse than even Baseline Profile, and that it’s in the same league as the venerable old XviD, which doesn’t even have in-loop deblocking.”
To further prove just how incredibly difficult this whole video encoding business is: Maxwell commented on Fiedler’s benchmark, and together they found out that Fiedler had been using the wrong Theora branch, which negatively affected Theora’s performance. It turns out that the branching scheme for Theora isn’t particularly intuitive.
All this illustrates two things. First, it takes experts to do this stuff: video encoding is hard, with all the options and tweaks you can apply – made worse by the different encoders you can choose from. Second, the people behind Theora need to make it clearer which version or branch produces the optimal results. Of course, this is also an inherent problem with Theora: it’s always a moving target.
In any case, this all overlooks the one massive advantage that Theora has over H.264: it’s unencumbered by patent nonsense. That alone is a huge win for Theora, and something H.264 doesn’t have an answer for.
Why would beating Theora make something “the most powerful video format in existence”? I thought there were several formats that were better than both.
No. Nobody except On2 ever claimed that.
What about VC-1?
VC-1 sucks against H.264 but it may be better than Theora.
http://forum.doom9.org/showthread.php?t=128498
Nobody really cares about VC-1: it is a direct competitor to H.264 in terms of licensing scheme, has similar licensing costs, but is less widely adopted and performs worse.
Other codecs may have a shot against H.264 because they are free, cheaper, or easier to implement.
Wow, you learn something new every day.
I thought most Blu-Rays were VC-1, and that meant it was probably a better codec.
It is true that many BluRays use VC-1. This is a question of industry backing, and MS has important ties to the content industry.
Some early BluRay releases even used MPEG-2. The adoption of a format in this segment does not tell much about the format’s performance.
There is also one important detail to note (and you can also see this in the benchmark figures regularly). The best codecs really shine in the low bitrate segment. If the bitrate is high enough, the relative quality differences today become very narrow. BluRay disks have a high bitrate, up to 32 MBit/s. On that level, mastering quality and tuning have a far more significant impact than general format performance. On the web however, you have low bitrate content and H.264 is a clear winner (also in adoption) against VC-1.
Maxwell’s comparison is flawed as well. He compared Theora at its best to the H.264 Baseline profile, which does not even use B-frames, not to mention more advanced features like B-pyramid, different modes of motion estimation, or CABAC. There is nothing to brag about if Theora cannot even beat that. Now let’s look at Theora’s features ( http://wiki.xiph.org/Theora ): “intra frames (I-Frames in MPEG), inter frames (P-Frames), but no B-Frames (as supported in MPEG-4 ASP/AVC)” – seriously? It does not matter how long you tweak the Theora encoder when it is generations behind H.264.
Well, it is free, but it is still a dead end.
Sure, H.264 has more potential than Theora, but it comes at the cost of complexity: current encoders don’t even use half of the features provided by the standard. It will only get better, just like MPEG-2/H.262 did, but the Internet community wants something right now. Theora is good enough for the web right now, and it’s free.
That said, I do agree that it’s a dead end. I am definitely more interested in Dirac.
As for the benchmarks, I’m not sure it would be wise to use High Profile for a web encode, as many handheld devices cannot decode it.
x264 has quite good support for advanced features these days, and is developed way faster than Theora.
‘Right now’ Google could flip some switches on their h264 encoder.
Me too; at least it is not yet another variation on the DCT+MC+some-tweaks theme.
As opposed to wide Theora support?
Considering its *cost*, I fail to see why Theora support would not be widely available on any handheld device as soon as supporting it became a must-have, if such a thing ever happened.
On the contrary, I can see why support for some royalty-bound codec might not be widely available on handheld devices, but only on big manufacturers’ ones…
It’s not just a quality-vs-quality debate. There *are* other criteria to consider. I’m glad to have Vorbis support in my pet operating system *and* digital music player, for instance, and it’s totally due to Vorbis being patent- and royalty-free, nothing else.
I’m not ready to trade everything just for the best quality ATM.
Are you?
I don’t really care, in the end, what codec gets used so long as it’s a completely open codec. Be it Theora, Dirac, Snow, or another one I’ve not heard of yet: so long as it can be freely implemented by anyone, with no patents or royalties, and will remain that way forever, it is fine with me. H.264 is a dangerous path to walk; the MPEG LA hasn’t removed the danger, just delayed it by five years. We don’t know what they’ll do when the licensing terms come around again in 2016, and if web video is entirely dependent on H.264 by then, we’ve effectively handed control of online video not to the open web, as it should be, but to a set of corporations. Think where we’d be if HTML weren’t an open standard.
Well, except that Theora is probably ‘covered’ by many of the same patents. In video codecs there is barely any such thing as patent-free, just ‘not patented by its designers’ or ‘unclear which patents exactly cover the codec’. In the end, patent-wise, using Theora or H.264 via ffmpeg probably does not make much of a difference, except that for H.264, OS X and Windows users at the very least have a license to use those patents in client software. Once a Theora implementation becomes popular enough, companies will come after their patents.
If patents bug you, kill software patents!
Let’s just add the possibility to embed videos in HTML, and have popularity decide. If you prefer Ogg, make sure that you offer Ogg.
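For what it’s worth, the video element as drafted already allows exactly that: you can list several encodings and each browser simply plays the first source it knows how to decode. A minimal sketch (the file names are made up for illustration):

  <video controls width="640" height="360">
    <source src="clip.ogv" type='video/ogg; codecs="theora, vorbis"'>
    <source src="clip.mp4" type='video/mp4; codecs="avc1.42E01E, mp4a.40.2"'>
    Your browser does not support the video element.
  </video>

Put your preferred format first; browsers work down the list in order, so Ogg fans can lead with Ogg without locking anyone else out.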
MP3 is a good example of when software patents make sense. It was created in a lab with teams of engineers and the parties that funded the research deserved to be compensated. Intellectual property laws exist to create a market for this type of development where the product cost is in the R&D, not the duplication.
That being said, as cheaper, viable codecs like WMA came along, portable music companies were still obligated to pay the MP3 license due to its popularity. Everyone had music collections stored in MP3, and a lot of the music services only sold MP3. Sure, you could convert your files – or you could just pay 10 dollars more for an MP3 player and not have to mess with multiple formats.
So there is a legitimate concern with being locked into a commercial codec. An open source codec makes inclusion within hardware and software products much easier which in turn benefits consumers.
How long should that investment be protected, and what if that investment is not anything especially remarkable?
The current system has these sorts of issues.
In 2016 the next batch of h264 patents expires. Some patents have already expired.
Your conclusion that “it takes experts to do this stuff” is quite flawed. Sure, if (say) YouTube chooses a codec for its video, it’ll be a team of experts who select the best mode for the codec, but do you think all the users who encode video will hunt down the latest codec in branch foo?
No, they won’t: they’ll use the latest packaged codec with its default options. If the Theora developers aren’t able to provide a good packaged codec, then Theora is (currently) truly inferior to others that are properly packaged.
YouTube did not get where it is because it offered the best video quality…
In my own test (downloaded the PNG/FLAC data for 360p BigBuckBunny), Theora 1.1 was plenty good enough, just a hair short of H.264.
The next version of the Theora encoder appears equal to, if not better than, H.264.
What matters is the tooling around the Theora formats, and it’s hardly great. The QT plugin is a year out of date and doesn’t include Theora 1.1, meaning that individuals exporting with their regular tools on the Mac are not getting anything better than H.263. The QT plugin also provides too few options. You are almost forced to use the command line, and that’s frustrating when you just want to get stuff done.
One test: a month ago I played around with re-encoding Ben-Hur from a 3GB XviD at 800×288. I used Avidemux to encode h264/aac to 800MB (96kbps audio).
Then I used Handbrake to do the same with Theora/Vorbis (again 96kbps audio).
The opening scene shows the nativity, with the star surrounded by a thin halo travelling through the night sky over Bethlehem, then shining a light beam down on the manger. In the h264 encode, the detail on the star is very acceptable.
In the Theora encode, the star moving through the sky is visibly blocky, and the shining beam is also a block fest. Both star effects actually “snap” into sharpness 3-5 seconds after being displayed. Other scenes with dark natural backgrounds have no detail; they are very blocky. I re-encoded the Theora version to a 1GB file, and although better, the star scene was still no match for the h264.
So you encoded from an already compressed source, and you’re complaining about quality? (And using Handbrake, which has deprecated Theora support anyway and is massively out of date.)
Sorry Mario, the princess is in another castle.
I was thinking the same thing but it doesn’t change the fact that H.264 performed better in his tests.
But nonetheless if Theora 1.1 is just slightly worse than H.264, I don’t see why it can’t be adopted as the reference standard codec for HTML5 video. Imagine the mess if HTML were in the same patent hell…
No, but it does call the validity of his test into question. If you use a current H.264 encoder versus an outdated Theora encoder, it’s not surprising the results are skewed. It doesn’t change the results of his test, but it does invalidate the test if you’re going for a fair comparison.
I built this Handbrake myself from the latest Subversion sources. It absolutely uses Theora 1.1 as the encoder. I used Avidemux for the h264 because its default settings are more aggressive than Handbrake’s.
Comment about already compressed input:
A 3GB XviD at 800×288 is very high quality already (greater than 1400kbps). The degradation going to Theora is obvious, and far less so going to h264.
I’ll see about clipping out the small offending piece.
More information about my background: I’m totally against using h264 because of its patent problems. However, I’m not impressed with the Theora folks overselling their technology, IMHO.
Someone who doesn’t have a dog in this fight and does lots of encoding should get lots of samples and do a thorough test. They should try to limit the subjectivity.
Show me a source video uploaded to YouTube, etc., that isn’t already compressed… you are completely missing the point. From the same source, the results must be similar to prove that the codec is an acceptable alternative. I concede that he might not have used a recent enough Theora encoder, but his results echo my own observations on Daily Motion: Theora underperforms on the same clip vs the H.264 version. I assume they are using the original source when re-encoding. If not, that in itself is another reason not to use Theora. You absolutely *must* re-encode from the same original source clip, else it negates everything.
I’ve cut out just that scene and am playing with some encode settings. Sadly this *is* just a cherry picked scene and not a battery of tests. This type of encoding might be more difficult because the camera is slightly shaking left/right (probably not uncommon with these old movies).
I don’t have a YouTube account; any good suggestions for a place to dump a <10MB file?
Eh? If the input is the same, then why not compare the relative quality of the output?
Because, mathematically speaking, some codecs fare worse with certain inputs than others, and the existing compression may be triggering an algorithmic limitation of Theora. H.264 is equally susceptible to bad re-encodes, and I’ve seen it make a mess of some videos based on the original encode.
His example is too narrow to be a fair test.
Are those issues that occur when the source video is highly-compressed? I would think that, at that point, it would be difficult to distinguish those effects from plain old generation loss. At least (again, FWIW), I’ve never seen those issues when using high-bitrate source video.
A few weeks back, lemur2 mentioned the same thing in response to a post I’d made. Admittedly, I’d never heard that before, so I did some quick tests with video files I had on hand. I used a 500MB DV file (SD) and a 175MB MPEG2 file (encoded from the DV file @ 35mbps) as the source files, then I encoded h.264 and Theora files from each of the two sources. FWIW, I couldn’t see any appreciable difference (I can send you the URL of the test page, if you like – it even uses VFE).
Yeah, just from a basic statistical standpoint, a single test is much too small a sample-size to draw wider conclusions from.
Does it have to be the best of the best? I mean, even if Theora isn’t the best of all the encoders, it is completely free and probably good enough. Most videos on YouTube look like crap anyway; there are few HD videos that actually look good.
It is actually up to the provider of the information to decide which codec is best. It could be done by checking which codecs the client supports; if the server does not support the codecs installed on the client, it sends an “install the right codec” message to the client.
Why should the browser hard-code the codec? Some sites might want to broadcast live HD TV while others want to put up short clips in low quality; one codec does not suit all!
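Something close to that negotiation is actually possible client-side with the draft spec’s canPlayType() method, which reports whether a browser thinks it can decode a given MIME type. A rough sketch; the file names are hypothetical:

  <video id="v" controls></video>
  <script>
    var v = document.getElementById("v");
    // canPlayType() returns "probably", "maybe" or "" (empty = no support)
    if (v.canPlayType('video/ogg; codecs="theora, vorbis"')) {
      v.src = "clip.ogv";   // hypothetical Theora/Vorbis encode
    } else if (v.canPlayType('video/mp4; codecs="avc1.42E01E"')) {
      v.src = "clip.mp4";   // hypothetical H.264 encode
    } else {
      // no supported codec: point the user at a plugin or a plain download
    }
  </script>

That still leaves the provider encoding everything twice, which is exactly why people want one baseline codec.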
Half the point of HTML5 video is so the browser can play basic video without having to install a codec, especially one that is wrapped up with all kinds of other crap that has nothing to do with playing videos.
Some old lady who occasionally watches PBS news video in SD shouldn’t have to install Flash; not just for convenience, but also for security reasons.
HTML5 + h.264 would still be an improvement over the current situation, but I think Theora would be the better choice for its flexibility.
The comparison is looking at the wrong criteria. *Everyone* knows that H264 is better than Theora, nobody disputes that.
The *only* problem with H264 is the patent licensing. Because of it, it’s *impossible* for all browsers to implement H264; the problem is greatest in freely-licensed browsers where derivatives would not be covered by the patents even if the authors paid for a license (Firefox, Seamonkey, Chromium, Midori, etc).
If you consider this, that single problem disqualifies H264 *completely* as a codec to be used with HTML5 video. Currently, if you use it, only a minority of HTML5 video-supporting browser users will be able to view it, and you will *never* be able to support all your users.
This means that if there ever will be a codec that works on all HTML5 video-supporting browsers, it will *not* be H264. The question is, what could it be instead? Currently the best candidate is Theora. The quality is worse than H264’s, but so long as it’s good enough, it’s acceptable. The most important feature of Theora is freedom from patent claims, meaning that *all* browsers could implement it freely if the authors wanted to.
This does not mean that you should never host H264 content. You can always provide it as a first choice, and fall back to something that is universally supported.
The issue is, it would be nice if there were one codec that worked *everywhere*, making it easy for content publishers. As you can see, this *cannot* be H264. With what’s currently available it *should* be Theora, until someone comes up with something better.
The img tag supports multiple formats; shouldn’t the video tag too?
If the img tag had only supported the one format that was prevalent and patent-free at the time, then we’d all still be viewing XBMs.
Ogg is not the be-all and end-all of free formats, and that’s what most annoys me about the HTML5 spec: this insistence that a war _must_ be waged so that only one may win.
The difference is that every operating system can read jpeg, bmp, png, and so on, while not every operating system and/or browser can read the same video files.
“Legally”, you should have added.
Which is the actual issue here. Anyone caring that much about the best quality already knows that today’s internet bandwidth doesn’t allow full-HD best quality. The others, the largest group, can’t really tell what a codec is…
Are the days with proper PNG support in Internet Explorer (transparency!) so soon forgotten?
Now maybe, but not always. GIF support used to not ship with a lot of browsers – BeOS, for example, required a third-party plug-in for Net+. Until recently, PNG was poorly supported in IE.
And the usage of PNG came out of the patent issues with GIF, or something like that, didn’t it? And still GIF was more popular…
I also assume many of the current web videos use audio encoded as MP3, which isn’t open either.
Bullshit. Back in the day, not all OSes (or browsers?) could render PNG, and Internet Explorer, for instance, couldn’t render transparent PNGs.
Worked just fine anyway.
Personally, I think it should work just as local video does: if you’ve got the codec, good; if not, well, that’s up to you to fix. Just load the freaking video no matter what format.
A de facto standard will most likely emerge that way anyway, depending on what most people can view and encode with, and it will most likely be H264 no matter what… Sure, it may be sad for the Linux people who only want open stuff, but on the other side, how many Mac and Windows people have support for Theora in their OSes?
And who are the majority?
Great, we’re back to 1999 again. And how many viruses are going to come with the codec packs this time? Doesn’t anyone remember what it was like when we had codec hell to deal with?
codec installed successfully from ivancodecs.ru server.
thank you and have a nice day.
Troubles? Yes.
Viruses? No, not afaik.
Anyway, the people who know anything would probably upload in H.264 so no one would complain, the nerds would upload in Theora just because they could, and the people who don’t know shit would upload in whatever format it’s already in, such as Motion JPEG.
Not every operating system could read all those image formats back when they started getting used. Do you not recall browsers not loading PNGs, and Windows not having any included PNG viewer? I do.
They got to be ubiquitous because people were using them anyway, and that helped to create client demand.
“The most important feature of Theora is freedom from patent claims, meaning that *all* browsers could implement it freely if the authors wanted to.”
But they won’t. Apple said they will not add it to Safari – although Safari’s share isn’t really big.
Yes they will. They just need Theora to become popular and supported in the major browsers; then they’ll have no choice but to support it.
Unfortunately, reality is that h.264 has greater support since it is already supported in most browsers via Flash.
The same h.264 video can be played in Safari & Chrome using the video tag, and in all other browsers using Flash; it can also be played on most smartphones, including iPhone, Symbian and Android phones, and I’m sure Windows Phone 7 will support it (since the Zune already does), etc.
Theora support is currently limited to Firefox, Chrome & Opera.
Something drastic needs to change for Theora to be supported in all browsers.
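In the meantime, that support matrix is why the “Video for Everybody” pattern (the VFE mentioned elsewhere in this thread) exists: browsers that don’t understand the video element at all render its inner content instead, so you can nest a Flash fallback inside it and serve one h.264 file to both camps. A rough sketch, with hypothetical file names and any Flash video player that accepts a file parameter:

  <video controls width="640" height="360">
    <source src="clip.mp4" type="video/mp4">
    <source src="clip.ogv" type="video/ogg">
    <!-- browsers without <video> ignore the tag and render this object -->
    <object type="application/x-shockwave-flash" data="player.swf" width="640" height="360">
      <param name="movie" value="player.swf">
      <param name="flashvars" value="file=clip.mp4">
    </object>
  </video>

The known caveat: a browser that supports the video tag but none of the listed codecs will not fall through to the Flash object on its own.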
Well, mandating it or another open codec for HTML 5 could very well qualify as that “something drastic.”
Apple doesn’t need to add it to Safari. All that is needed is a QuickTime codec and it works: install the (outdated) Xiph codec on a Mac now and Safari plays Theora vids just fine.
Except on the iPhone, iPod Touch, and iPad, where end users don’t have the option of installing additional codecs.
All things being equal (input source, encoding settings, etc.), and at typical web video sizes/quality settings, I highly doubt that most users could tell the difference between h.264 video and even MPEG1 – let alone h.264 and Theora. Maybe the gulf widens at higher resolutions/quality settings, but the output of the two codecs is largely indistinguishable for web video.
It also seems that the existing benchmarks focus mainly on technical analysis, which is interesting from a technical standpoint – but doesn’t tell us much about real-world factors like subjective perception of visual quality. I think that some simple double-blind testing would give results that are much more relevant to real-world usage.
It’s retarded to speak of “web video”; things won’t be as they are now forever. Hardware gets more powerful, bandwidth gets wider, storage space gets cheaper, and so on.
YouTube’s standard format looked like shit, the forced high quality for the iPod was way better, and now they offer HD. Vimeo offered HD before that.
Consumer cameras used to record low-end 320×200; now pretty much all of them record HD.
Only fools would keep comparing freaking low-end images with no detail against each other and say “omg I can’t see much of a difference!”; get some quality material and compare with that.
If shitty-encoded low-bandwidth 320×200 is fine for you, then fine, keep it. Personally, I’ve had 10/10 mbps bandwidth for the last nine years and 100/10 mbps for the last couple, and I would definitely both upload and view HD content given the option. So will everyone else, in time.
At the risk of stating (what should be) the obvious, going into “attack mode” and tossing out juvenile insults is not really the most persuasive way to present your argument.
It also doesn’t help that your first sentence is both a non-sequitur AND a false dichotomy at the same time.
Increases in the quality of web video will not magically make it something other than web video. “Web video” is a colloquialism that merely describes the typical way video is used on the web. If, tomorrow, the typical web video settings became 3500kbps @ 1920×1080, it would still be web video.
In other words:
“I have observed that those who disagree with me on the next point tend to be unsophisticated, and those who quickly recognize the validity of the point to be more educated. The point is…” –The Guide to Conversational Terrorism
Why would you assume that I’m talking about 320×200 video? One, in 20 years I’ve never once encountered a video file with that resolution. Two, that’s not a proper 16:9 or even 4:3 aspect ratio – perhaps you meant 320×240? And three, that’s less than half the dimensions of typical videos even on YouTube, which is generally considered the lowest common denominator as far as web video goes.
The last time I put a 320×240 video online, Bill Clinton was still the US president.
The key words being “in time.” Even on the production side, there is still a large amount of SD content being professionally shot & produced. If you want to know why, just take a look at the price difference between Panasonic’s DVX100B cameras, and their comparable HD cams.
320×200 is the resolution of the Commodore 64 (in monochrome mode)
The Commodore 64 has all its 16 colors in 320×200 mode.
Yeah, dipshit, but in this case it’s obvious that “blah blah works OK for web video” means that it works OK for the low-quality crap they currently watch; if it didn’t, it wouldn’t be necessary to add “web” to begin with.
Go look at a photo camera with video capability instead. People won’t be shooting their clips with dedicated movie cameras – at least not the huge majority of video clip uploaders, that is, private persons.
Classy. Maybe you should just go back to 4chan and leave technical discussions to the grownups.
Too bad you have a ridiculously outdated notion of what “web video” typically means, as you’ve already demonstrated.
I’d point out the tautological nature of that statement – but as you’ve displayed the maturity of a junior high schooler so far, I doubt you know what the word “tautology” means.
…so your contention is that, because amateur/consumer video is shot with lower-quality cameras, it should therefore be put online using higher-quality formats? Ever heard the old robot expression “does not compute”?
An ugly person can have plastic surgery.
An ugly personality is much harder to change.
In like fashion, Theora can be improved, and it would evolve quickly if it got the focus of being the HTML5 preferred codec. H.264 has its ugly personality to contend with even if it is prettier: patents.
Relying on a patented and closed bit of code for something meant to be completely open and platform-agnostic like the Internet is just madness. I say H.264’s owners should move the patents into the OSS patent partnership or bugger off with their “first hit is free” drugs.
I’ve always liked the way Winston Churchill put it:
“I may be drunk, but you, madame, are ugly! And in the morning, I shall be sober.”
Ha… you can never claim Mr Churchill didn’t have a way with words (or a stick to carry when walking softly).
http://cache.gawkerassets.com/assets/images/17/2010/02/ogg-v-h264-b…
I keep hearing people say that Theora is Patent Free. How do we know that? No one is suing because there is no monetary damage to prove or money to collect.
There are so many algorithms in video codecs that one part or another (especially newer techniques like optimal bit coding, motion vector search mechanisms, etc.) is almost always patented by some university or company. Microsoft had to go through this process when they tried to standardize VC-1 in SMPTE.
I personally would like to have both codecs supported and let the market decide based on needs, applications, cost, and quality. I am sure there is room for both.
The best general outcome would be for the market to decide. The problem here is that we have patent issues to contend with, so large parts of the market are legally banned from being involved in that decision. This is like calling a democratic vote fair while discounting votes from women and minorities.
See my comment title.
Also, please test both low-end, say 480×320 at 30 fps (320×240 is kinda lame now), and high-end, say 1920×1080 at 60 fps.
Agreed. Talking about Theora and H.264 would be fine, except that many people believe VP8 will be the 800lb gorilla if Google makes it available on favourable terms.
If we are talking about the future, the current development branch of Theora is called Ptalarbvorm (the previous development branch, which has now become Theora 1.1, was codenamed Thusnelda). Weird names, I agree.
http://en.wikipedia.org/wiki/Theora
Like Thusnelda before it, Ptalarbvorm is an optimisation of the Theora encoder only. Current Theora 1.1 players, such as the one embedded in Firefox 3.6, can play Ptalarbvorm-encoded Theora videos.
Anyway, the current experimental version of Ptalarbvorm is very promising, and initial results are finally getting better performance for Theora than h264 as currently used on the web. Theora Ptalarbvorm (experimental) at 376 kbit/s is approximately the same subjective quality as Youtube’s current h264 implementation at 499 kbit/s.
If we are talking about the future, it may well not be either h264 or VP8 that yields the best performance for web video. It could well be the next version of the free and open Theora, already playable by current player software, that beats all comers.
Very possible. To quote a well-known little green alien: “Always in motion, the future is.” Don’t forget though that other codecs are also going to be concentrating on improvements. This is as it should be. Don’t expect the rest to stand still as Theora improves.
Well, Theora 1.0 was a fair way behind the state of play. To some extent, Theora carries some of that reputation still.
In Jan 2009, Mozilla funded a project to improve Theora. The resulting Thusnelda project began to achieve good results in October 2009, and the resulting release of Theora 1.1 had all but caught up to h264 as used over the web. This was a very significant improvement indeed.
It was achieved without having to change the format or update the player. Thusnelda was an improvement to the encoder only.
Now, the new Ptalarbvorm experimental branch is rumoured to be even more of a jump than Thusnelda was. It too is an improvement to the encoder only, involving no change to the format or the player. Existing Theora players will still work.
That is an incredible improvement in less than a year (after, admittedly, many years of not much improvement at all). It is amazing what a little bit of funding and support can achieve. Mind you, having said that, Theora was starting from a low standard in the first place, so it required a huge improvement just to catch up.
Meanwhile, h264 was “designed by committee”. Apparently there is a list of the patents involved; it is 47 pages long. Getting the best performance out of h264 involves endless complexity, and much of it is of no use for delivery over the web anyway. I personally can’t see any usable, easy-to-achieve performance gains for h264 that haven’t already been explored, but I could easily be wrong about that.
As pointed out in the comments, the KeyJ article didn’t use Theora from the new development tip. I don’t think the Theora developers contend it should beat x264 everywhere, but it’s quite a bit more competitive than his article indicates. A naked-eye test here shows as much:
http://people.xiph.org/~maikmerten/youtube/
I think a blind test would be really cool: a bunch of videos using both encodings, head to head (at a random location on the screen), and the user should pick which one he thinks looks best. Check the results and you will have a fair winner. Of course, the encoding should be done under the same conditions.
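The random-placement part is easy to sketch in a page, for what it’s worth. Everything here is hypothetical: two clips encoded from the same source under the same conditions, and a made-up /vote endpoint for collecting picks:

  <video id="left" controls></video> <video id="right" controls></video>
  <script>
    var clips = ["clip-h264.mp4", "clip-theora.ogv"]; // same source, same bitrate
    if (Math.random() < 0.5) clips.reverse();          // random side per viewer
    document.getElementById("left").src = clips[0];
    document.getElementById("right").src = clips[1];
    // when the viewer clicks the side they prefer, report which file it was
    ["left", "right"].forEach(function (id, i) {
      document.getElementById(id).onclick = function () {
        new Image().src = "/vote?winner=" + encodeURIComponent(clips[i]);
      };
    });
  </script>

Aggregate enough votes and you have a crude preference test; a real study would also strip the file names from the page and force a choice.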
Well, except that one of them will need a desktop or high-end laptop to play, while the other can be hardware-decoded on just about anything.
Hardware will inevitably follow the demand. If a codec were mandated for HTML 5 and everyone began to use HTML 5 video, you can bet the hardware companies would start producing accelerator chips for that codec; to do otherwise would just be a bad business decision. We have a lot of h.264 hardware right now because that’s where the trend was at the time. If the trend shifts, the hardware will shift along with it. Remember when most devices expected MPEG-4/DivX?
The ARM SoCs I work with have a codec accelerator subsystem. It’s basically a coprocessor that you initialize with some opaque piece of (non-ARM) code, which does h.263, h.264 and VC1 decoding and encoding (to various degrees).
The total size of this driver (the code that you upload) is about 80KB, so I guess they’d have the space to add another codec in there (given that address spaces are usually powers of two).
Once there’s a demand, this coprocessor will gain an updated firmware (that is uploaded on boot), and supports whatever codec the demand is for.
I just looked at a number of ARM codec support options, and all I found are advertised with “Upgradeable with new codecs” and similar statements.
With that in mind, “hardware acceleration” is a non-issue for new contenders in video.
Other than MP3 players, which actually used dedicated MP3 decoding chips, video decoding is too complex to be done “in silicon” – I’d guess it’s always software.
I saw something about MP3, which someone said isn’t open? Of course it’s not, but its license is slightly more forgiving than H264’s. People, why are you comparing a piece of software that is paid for with one that’s free and open source? Hence my question: if tomorrow HTML5 gets implemented by Google’s Chrome and they want you to pay money to watch videos encoded with the H264 codec, will you? Personally I won’t – I want the same free video playback as with Flash. So, bottom line: H264 wants money, and they don’t care where it comes from – the users or the provider.
It’s just like VHS versus Beta: you need to get the porn sites to adopt it… Of course, they would only adopt it if they were actually paying royalties in the first place. That’s pretty much the only way Theora could ever take off.
In this case YouTube is the equivalent of the porn industry.
No – I believe that right now there are no royalties if the content is free to end users; porn sites do not fit this rule and therefore must pay royalties.
My point was that YouTube has the biggest hand to play.
Even if IE9 ships with a dozen codecs the only one that will matter will be the one that YouTube uses.
Web publishers are not going to keep video files in multiple codecs. Even if 80% of users have codec A installed it won’t matter if YouTube uses codec B since B will have a higher install rate.
YouTube has no reason to change, though: they could either spend money on a system that can do both, or reduce the quality of already low-quality video by re-encoding. Google has looked into it with their resources, and the licensing terms making h264 free for their use are set for more than a decade. Asking YouTube to change over when there is no benefit to them is just a waste of time.
Also, a bunch of talk is nothing compared to the success stories, case studies, etc. that would be generated by smaller companies, and Google might listen to those.
“All this illustrates two things: first, it takes experts to do this stuff. Video encoding is hard, with all those options and tweaks you can apply ”
What are you talking about?
What’s hard about clicking http://x264.nl/,
downloading the current latest version there to your videos dir, opening a cmd, and typing
x264.exe --help or
x264.exe --fullhelp
x264 -o output.mp4 input.whatever
or
x264 --crf=20 -o output.mp4 input.whatever
Nothing, that’s what; and provided your actual input.whatever video file isn’t totally shite to start with, there isn’t any problem.
--crf takes a number between 18 and 26 (Constant Rate Factor, i.e. constant quality) and is all you really need today.
If AVC/H.264 is good enough for your cable/sat/terrestrial and Google providers etc., then x264 is good enough for you, and of course you have the advantage that the current x264 AVC defaults are already set to high profile, auto level and sane quality settings, so you don’t have to set them.
Google could tune to that “high profile” higher quality too if they didn’t have to downgrade their realtime x264 encoding backend to the less capable, baseline-only HW PMPs and phones etc. to keep support simple.
If you really, really can’t do without a GUI, then use this: http://forum.doom9.org/showthread.php?t=151272
Good as any.
Simples.
And don’t forget to keep checking http://x264.nl/ every other day, as they are seriously adding new and improved options to it all the time.
SPEAK FOR YOURSELF!
Honestly, declaring any one codec as “officially supported” or “the default” is complete and utter BULL, since again that defeats the entire point of HTML: device and capability independence!
What codec is used in the tag should NOT be dictated or hard-coded into the damned browser. You want a codec supported? Have a method for installing it (like, say… a plugin), just like this existing tag that works JUST ***** FINE called ‘object’.
OBJECT, which was supposed to replace IMG, APPLET and the proprietary IE ‘EMBED’ property!
But as I wrote in my “Why I hate HTML 5” rant
http://my.opera.com/deathshadow/blog/2010/01/09/why-i-hate-html5
That’s a hefty part of my problem with the whole HTML5 video thing: it shouldn’t be a new tag, and they are undoing ALL the progress STRICT doctypes were supposed to give us. They are adding all these new unnecessary tags to do things we can already do.
“We want to make audio and video easier” – so do we ride the browser makers’ case about having object work right? **** no; instead we introduce two new tags, undoing the POINT of STRICT (simpler code with fewer tags! The less code you use, the less there is to break), and then hardcode support for our pet-project codecs instead of just CALLING WHATEVER CODECS ARE INSTALLED/AVAILABLE ON THE HOST OS!
Whiskey Tango Foxtrot!!!
Makes me want to bitch-slap the people behind HTML5. Across the board it’s all this unnecessary extra bullshit that results in MORE code, not less.
Take “NAV”, for example – with HTML5 you end up wrapping your UL with it, like the retards who wrap DIVs around their ULs for no good reason… Here’s an idea: how about, instead of making a new tag to wrap your UL for God knows what reason, you undeprecate MENU, a tag that works just like a UL but with a semantic meaning attached? That way, boom, browsers going back to HTML 2.0 would have support, since it works JUST LIKE A UL. NO extra tags needed, no changes to the browser engines needed.
I guess that would be too easy.
Worse, most of the nimrods already churning out stuff in these DRAFT specifications, especially HTML5, are vomiting up the same crap they did when they didn’t embrace STRICT – apparently simple ideas like having only a single H1 on a page, not skipping heading levels going down (if I see one more H1/H5 pairing I’m gonna barf), graceful degradation with images off, and not wrapping lists around lists (like, say, a pagination menu) – and then they often STILL end up resorting to a DIV with a class on it inside tags like ‘article’, at which point what the hell was the point?
The saying when tables were abandoned for layout was that the people who made crappy endless nested tables for no good reason just switched to making crappy endless nested DIVs for no good reason. Now I guess it’s endless crappy allegedly-semantic extra tags for no good reason.
The more stuff changes, the more it stays the same.
Well said – although embed is Netscape’s, isn’t it?
The problem with object is, of course, that IE didn’t implement it correctly, and so we ended up with COM GUIDs being stuck in attributes to state the exact plugin that must be used, instead of the MIME type being used to decide how to deal with it.
This horrible mess is not going to go away, either. Now, instead of the ugly JavaScript to insert the appropriate tag, or the equally ugly object/embed tag nesting, we will get video/object/embed nesting or yet more horrible JavaScript to handle things – when we had a workable standard in HTML 4.0 back in 1997.
It isn’t as if making Theora the default codec would forever ban h.264 from the web.
In the cases where publishers are selling streamed video they can require a plug-in like Netflix does with Silverlight.
It really seems to make more sense to have the default codec be license free for the flexibility and let people who want to stream 1080p video use a plug-in.
I don’t know, but HTML5 videos on YouTube load faster than Flash videos for me. I am from a third-world country; my average download speed during peak hours is 30kbps.
Getting the most out of a given codec ought to take some work, and that’s fine. But if getting good results takes more than a “-vcodec theora” added to the ffmpeg command line, then it has a very long way to go. H.264 is there, and does a good job: a two-pass encode with fairly basic options works great with both ffmpeg and mencoder. If you even have to know about the existence of dev/release branches of the software, it is, at best, beta quality. That is on top of screenshots showing the same things I’ve found myself: the differences are not actually subtle; the authors of the comparisons just want to push Theora, so they say that.
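For comparison, here is roughly what that looks like – a hedged sketch, assuming a reasonably recent ffmpeg built with libx264 and libtheora, made-up file names, and a Unix-style null sink for the first pass:

  ffmpeg -i input.avi -c:v libx264 -b:v 500k -pass 1 -an -f null /dev/null
  ffmpeg -i input.avi -c:v libx264 -b:v 500k -pass 2 -c:a aac -b:a 96k output.mp4
  ffmpeg -i input.avi -c:v libtheora -q:v 6 -c:a libvorbis output.ogv

If the single libtheora line gave results close to the two-pass x264 pair, the usability argument would mostly evaporate.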
I don’t like the possible royalties-after-X-years hanging over H.264, but Theora does not look ready for prime time.
Theora can be the best web format (for that, it does not need the video quality performance of H.264), but it needs usability first and foremost, with sensible defaults for good all-around quality. If the current stable Theora included with 3rd-party encoders/transcoders is not good enough, then Theora, as a whole, is not good enough. The version that matters is the one I get from the stable repo with my package manager (Linux), or with the stable version installer (Windows). That, in itself, is a major part of usability that many FOSS projects don’t think about.
I think it’s silly to dictate a single format for HTML 5 video. While I don’t have quite the vitriol, I generally agree with deathshadow. But if you want Theora to succeed, you should change its development process and goals. All the talk about needing to mess with what should be minor details tells me that Theora is currently broken, regardless of its potential, or of version XYZ’s performance. If Theora XYZ (or better) is not in every stable repo of every popular desktop Linux distro, and every popular Windows transcoding package, it is completely irrelevant. H.264 has had that bit taken care of for at least two years now. If the Theora devs and maintainers do not make that their number 1 priority, Theora will not be ready for prime time for years to come. I don’t care if they move to an always-in-dev system like ffmpeg, or merge-stable-stuff-every-few-months like LLVM, but they need to make it so that Theora is Theora is Theora, and everyone’s Theora is not too far behind the most advanced development versions.
It’s called setting a standard for publishers to depend on and browsers to follow.
Multiple codecs will just result in Flash being used.
What I think is silly is that people have forgotten that HTML5 Video was supposed to rid the web of Flash for basic video.
With multiple browsers supporting multiple codecs, web publishers will look towards Flash as a way of avoiding competing codecs. Flash would once again win by default (cue the Back to the Future theme song).
Which is a major change in direction for the web, and a good way to keep this long-term tug of war going. It’s a pissing contest, pure and simple, and undermines the web’s history as much as the byzantine modular XML ideas did (which is kind of sad, as XHTML was one of the few excellent uses of XML, IMO).
A codec that can be freely used, with no risk of licensing costs, is the only viable option, if there is going to be one codec in the standard. It’s not really much of a choice. H.264 is a non-option. It can be allowed, but without a properly free option as an alternative, it is not a truly open standard, and, once again, this all becomes a MPEG LA v. freetards pissing contest, with no work being done. In this scenario, the freetard side is way more correct than they are wrong.
I contend Theora is not ready, but that is point for H.264 v. Theora quality and performance comparisons. For standards, it is not valid: whatever is chosen, H.264 cannot be it, unless there are clear licensing terms that exempt at least an overwhelming majority of uses involving the web, in perpetuity. “We’re not going to do anything for a few years, but after that, we could change our minds,” is simply not acceptable.
Which is why there SHOULD NOT BE a codec defined as part of the standard.
Device neutrality, plugin neutrality: the HTML specification should not say one way or the other which codecs are valid, invalid or even default, JUST as it does NOT say how much larger an H1 should be compared to normal text, what the default text size should be, etc., etc…
Doh!
I’m actually less concerned about the choice of a standard codec now than I am about how we move forward, beyond the standard one. Because, obviously, video technology is going to advance, and we’re not going to want to use H264 or Theora forever. So, no matter how you slice it, we have to understand and get the versioning story for video codecs right before we make any choice.
Exactly why there should be no ‘default’ codec: it should use the codecs installed on the host OS. Then people could have the CHOICE… you know, what freedom REALLY means.
GOD FORBID. But don’t look for freedom of choice from the FLOSS zealots who want to shove their choice down everyone’s throat.
No, that won’t work.
First, support for h.264 is far from universal. Most notably, neither Windows XP nor Windows Vista include an h.264 codec. Installable h.264 codecs for these systems are either commercial (the end user has to pay for them) or, thanks to software patents, not legally distributable in the US.
If web browsers only supported video codecs that were installed on the system, anyone providing HTML5 video would need to provide it in h.264 (Mac OS X, Windows 7) and WMV (Windows XP, Vista) formats, at minimum.
You need a baseline codec – something that you can rely on being there.
Second, what about other platforms? There are no (licensed) h.264 codecs for Linux systems, or for pretty much any embedded OS. Many embedded systems (think phones, PDAs, set-top boxes) do not have their own video codecs at all, so portable web browsers (everything except IE) would have to include their own codecs anyway.
Third, not all h.264 codecs are equivalent. Some support all base profile and high profile features. Others support only base profile. Others support base profile, and bits of high profile (but usually not the same bits). How can you fix a problem with the web browser not displaying a video correctly, when it’s the fault of the OS?
Fourth – security. Video codecs are not built with security in mind. What happens when a security vulnerability is discovered in the system codec? Any web browsers running on that platform are vulnerable until the codec provider updates the codec, and they won’t be able to do anything about it.
Unless you upgrade to the latest DirectX. It’s in the damned DXVA spec for DX9!
Oh, you mean like Flash Player 10 – oh wait, you don’t have to pay for that, and it does H.264… at least for playback… and anyone doing video production as anything more than a hobbyist isn’t going to bat an eyelash at paying pennies on the video when they drop over a grand on the CS4 suite (or $800 JUST for Premiere), a grand for Final Cut Studio, and three or four grand on the Apple to run them on.
Besides, SEE YOUTUBE. They don’t seem to have any problem using h.264 via Flash or HTML5… so that’s pure FUD.
For free – god forbid anyone try to pay their coders.
Or use Flash, which works on both those platforms and many others. Besides, I don’t know where you’re getting that it’s not available on XP or Vista, since it’s part of most every DirectX upgrade.
**** sake, my PSP can manage H.264; half the damned MP4 players in the pipeline have hardware-level H.264 decoder chips now.
You need a baseline codec – something that you can rely on being there.
… and yet VLC seems to play them just fine. Flash seems to play them just fine.
Hmm… my PSP does it. If a six-year-old game console can manage it out of the box… REAL companies like Sony aren’t going to have the slightest problem PAYING to include something – it’s what they’re in business to do – so why the hell would they listen to the FLOSS fringe whackos with their rabid ‘corporations are evil’ malarkey?
Fix the damned OS.
Now that, on the other hand, is a valid concern – but it shouldn’t be ANY concern of the browser makers; if there’s that much interaction between the codec and the browser, they didn’t sandbox their plugins very well.
Something Google determined they wanted from day one on their browser.
The anti-H.264 rhetoric screams of the same ‘down with the evil corporations’ bullshit as all of the FSF’s outright lies. People expect to be paid for work, and people who can do the same thing often get together to form corporations or work for them. Corporations that make products people buy typically expect to be paid for those products so they can pay their employees and pay for the resources – and everybody is happy.
This naive pipedream dirty hippy bull about everything being ‘open’…
The coders of h.264 have been paid … years ago.
The coders were paid years ago. Development costs of h.264 can’t have been more than a few tens of millions; the ongoing royalties collected worldwide would amount to several times that per year. ROI would have been over 100% in the first year, and pure gravy (since there is virtually zero ongoing investment) every year after that.
You know, when I purchase a TV, once I have paid for it, it is mine. The makers of the TV may well have embedded some patented technology in the TV’s design, but despite that fact they still don’t ask me for an ongoing fee every time I watch the TV.
But can’t we have both?
Like, a default open codec implemented in the browser (Theora, Dirac or whatever, as long as it’s royalty-free and implementable on every platform), and then the possibility of using other codecs available on the OS?
You would then have both the freedom to stream your videos in any codec you see fit, and the certainty of one codec that will be available on absolutely every platform.
Agreed. While I understand the objections to completely deferring video decoding to DirectShow/QuickTime/GStreamer, it would be nice if browsers could still fall back to the OS when they don’t have built-in support for a particular video format.
The sun is down, good people are asleep; now here comes the troll.
Didn’t Linus Torvalds once say that open source will rule because it’s better than commercial? I guess that doesn’t apply to Theora. Well, the Olympics are on, so bring your poor excuses.
Back to the cave.
When you talk about H.264, you should be aware that the specification only defines the input stream of the DECODER. This means that the encoder is left mostly open to various implementations.
So when you compare H.264 to other algorithms, you are only comparing one possible implementation of it. There are all kinds of links to benchmarks and stuff, but they don’t actually tell you anything if you don’t know which encoder and which Qp (quantisation parameter) value were used.
And if some algorithm performs better than some version of an H.264 encoder, it is practically always possible to write an even better H.264 encoder. I’m not sure about other algorithms, but when you build a better H.264 encoder you don’t need to change the decoder at all.