This, people, is a big one. Remember all the articles we had on Theora, H264, and which codec is better suited for HTML5 video? Well, it seems that Google has officially decided to put some serious weight behind… Theora. What they’re doing is a baby step, but one specific aspect of that baby step is very important: Google is openly stating that Theora is free of patents.
Google has decided to put its weight behind Theorarm, an Ogg Theora/Vorbis decoding library specifically optimised for the ARM platform. Google will fund Theorarm’s development in what is clearly an effort by the search and web giant to take away people’s fears about mobile devices not being able to handle Theora.
In the blog post announcing the financial support, Google sings praise for Theora. “The complexity of Theora is considerably less than that of many of its peers; other codecs often require dedicated hardware in devices to help achieve performance targets, but with careful coding Theora can be made to run without this,” Google writes.
After praising Theora’s quality and compression levels, Google states in no uncertain terms that Theora is patent free. “The overwhelming feature that makes it stand out from its rivals is the fact it’s free,” the company writes, “Really free. Not just ‘free to use in decoders’, or ‘free to use if you agree to this complicated license agreement’, but really, honestly, genuinely, 100% free. The specification for the stream and encoder/decoder source is available for public download and can be freely used/modified by anyone. Theora was designed and is maintained with the overriding goal of avoiding patents. No other codec can come even close to claiming to be as patent or royalty free as Theora can, whilst still holding a candle to the alternatives.”
This means that Google, a major company with a legal department the size of Texas, believes that Theora is not a patent threat. I, personally, have long argued that Google’s inclusion of Theora in Chrome meant the company believes Theora is not under threat, but this pretty much seals the deal: Google openly and officially stating what most of us already knew.
This means that Google has positioned itself directly against Apple. Not only is Google trying to solve the problem of Theora on mobile devices, the company is also giving a major vote of confidence regarding the patent issue. If Theorarm manages to deliver, H264 supporters (like Apple) will no longer be able to claim that mobile devices aren’t ready for Theora.
This is interesting.
Actually I think it just means that because they have a legal department “the size of Texas”, they can afford to bet on Theora. That doesn’t mean it’s actually in the clear, only that they’ve calculated the risk and decided they can go with it.
They’ve already licensed h.264, meaning they’re willing to pay for these things if necessary, so worst case they can license something from someone to make problems go away, because like their legal department they also have a huge bank account.
Their exact words:
“Theora was designed and is maintained with the overriding goal of avoiding patents. No other codec can come even close to claiming to be as patent or royalty free as Theora can, whilst still holding a candle to the alternatives.”
More info:
http://en.swpat.org/wiki/Ogg_Theora
http://en.swpat.org/wiki/Google
Does this mean they have some insider-info on Dirac?
…Or they’re just glorifying their choices for the sake of PR?
When I looked into Dirac a few years ago, it was focussed on very high-quality video for archival purposes.
Does anyone use Dirac for streamed video, or for the 5-200 MB range of videos typical on the Internet?
It all starts with a single step. While this kind of endorsement means a lot (I’m sure the folks at Mozilla and Opera just let out a collective sigh), the big one is going to be theora encoded videos on youtube. With all the inertia h264 has going for it, that’s a tall order even for google. Here’s to hoping though.
I promise to do my part and cross whatever appendages that permit crossing. That, and support web browsers that support an open web and open content delivery. I’m a team player.
Well, it would be more convincing if they put their money where their mouth is and indemnified anybody who used Theora, or more specifically this TheorARM they are funding development of.
Honestly, what is the problem with the codecs they bought with On2? Why don’t they open-source VP7/VP8?
Indemnify? Who do you think you’re dealing with here? Microsoft and Novell? (sorry, couldn’t resist…)
One problem with such indemnification is that it makes patent attacks more profitable.
The current situation is that if you want to use patents against Theora users, you have to take a whole bunch of people and companies to court. One part has no money worth suing for anyway, and the other part will form a large and wealthy coalition against you. It won’t be quick and easy.
Being able to go after Google for it all would give patent holders a simple way to attack, they would know that their victim can pay, and they would only have Google’s legal team as opponents, as opposed to taking on a large coalition.
(On the On2 codecs, I’m right with you. Here’s hoping…)
Ciaran wrote: “One problem with such indemnification is that it makes patent attacks more profitable.”
Well, that’s exactly what I mean by putting their money where their mouth is. If there aren’t really any patents that Theora violates, so what? Saying, “well, we don’t think there are, but we aren’t liable for all usages of Theora” suggests there just might be some violations. Relying on “attacking the small fry isn’t worth it” seems like a variant of security through obscurity. Google is already shipping Chrome with Theora, so they’re already on the hook. Why not underline the message in bold by indemnifying anybody who wants to use it?
Darknexus wrote: “Given Apple’s investment behind MPEG-LA and H.264, it makes sense for Google not to put themselves in a position where they’d be dependent on Apple and required to pay Apple (even if via the MPEG-LA) for use of H.264.”
The whole point of MPEG-LA is that there is a standard licence available, so none of the patent holders can single you out and screw you over, the way Nokia wants to force Apple to licence back its own “touch patents” in exchange for the cellular licences Nokia licences to everybody else on normal terms. That some percentage of MPEG-LA fees go to Apple is not really going to sway the situation one way or the other. The amount of Apple’s profits derived from MPEG-LA patents has got to be pretty small, and Apple has NO leverage once it has signed them over to MPEG-LA’s standard licence. Hell, it would be easy enough for Google to buy up patents within MPEG-LA and gain more $ than Apple from H.264 licencing.
It’s one of the problems with patent law: it’s so grey. There are 29 patent holders who claim a total of over 900 patents required by MPEG h.264. Maybe none of them cover anything in Theora, but it all comes down to the opinion of a non-specialist judge (district), and if you want to appeal, it goes to a federal court full of patent lawyers (CAFC).
(Using the USA as an example because, no matter where the companies or developers are, the litigation usually happens there.)
I did think of one merit for your idea: by putting up an easy target, there’d be a strong implication of patent-freeness if no one tried to attack.
Theora _is_ the codec they bought with On2.
In doing so they simultaneously indemnified themselves and everyone else from bs theora-based patent IP lawsuits from a troll-ified future On2.
This move is now direct proof.
As for the VP7/8 alternative, you clearly haven’t seen thusnelda…
rzah wrote: “Theora _is_ the codec they bought with On2.
In doing so they simultaneously indemnified themselves and everyone else from bs theora-based patent IP lawsuits from a troll-ified future On2. “
What I’m saying is that the risk any rational actor would assess on shipping a Theora/Thusnelda product would be greater than zero based on the probability of VP8 overlapping patents that VP3 does not, besides the probability of VP3 itself overlapping patents.
That the software patent system itself (in the US at least) is f–ked is the basic assumption of all of this.
If you liked the improvement in Theora that came with Thusnelda (and who wouldn’t), then from what I have read you are going to LOVE the further improvements coming, codenamed Ptalarbvorm.
Maybe they have to assess their new acquisition first?
It takes a while for Google’s various departments (legal, marketing, YouTube, etc.) to go through all the things they bought with On2 and look at what they can and can’t do with it.
Honestly, what is wrong with Theora?
It works beautifully, and it is patent free.
So Google should indemnify it already.
I honestly can’t say I believe it’s 100% certain that Theora has not “violated” any other patents in the course of its development post-open-sourcing. Going on about how great its latest versions are doesn’t do much to convince in that regard. That’s how f–ked the US patent system is, right?
An alternate approach would be to get Theora on board as part of the open source patent defence pool, which has enough other patents that if MPEG-LA or others try to mess with Theora or its users (companies with products using it), there are patents that THEY would be in violation of. Threaten enough members of MPEG-LA and no more problem with FFMPEG-LA.
Because it’s ugly.
You clearly didn’t check the benchmark I gave you last time you were spouting your uninformed propaganda, but here it is again:
http://kuukunen.net/misc/theora_x264_parkrun/
Sorry, I know it’s like reposting the same thing, but since you’re just saying the same thing over and over again, I guess I can too. (6 posts in a row, whoa!)
Oh, and speaking of working, the “LOL” frame has some weird shift and blur because I wasn’t able to get it to decode properly. (The other frames are from the original benchmark.)
Slightly ugly.
… or nothing.
Slightly ugly.
… or nothing.
Choices, choices!
So IF Google’s millions put into development get it to a point where it can compete THEN it will be ready. So for the moment at least, even according to what you have said here, the claims that Theora isn’t ready for mobile devices are true.
And you seem to think that someone has to support H.264 OR Theora, that they’re mutually exclusive. Some of us are realists who would love to see an open platform adopted, but also understand that businesses can’t and shouldn’t be expected to just sit around scratching their bums waiting for one that’s ready, especially if they’re willing to pay the patent fees. If Theora can prove itself, like H.264 has, and if it can be adopted as a standard, like H.264 has, then companies will have a reason to adopt it. Being free is a minor consideration compared with time-to-market…
Noting that Google specified support for Theora only on ARM, hopefully they will do so in such a way that other platforms (e.g. IA32, AMD64) will also be able to take advantage of the work and of the improvements to the codec in general.
Other hardware platforms are likely to have a GPU.
There is already a Google Code project to program GPU hardware (via shaders, I believe) to support decoding (and the rest of video rendering) of Theora.
The only thing missing, really, was ARM.
OK, it wasn’t too long ago that we saw this article: “Google To Bundle Flash with Chrome” http://www.osnews.com/story/23081/Google_To_Bundle_Flash_with_Chrom….
So….I don’t get it. What’s going on? Feel free to tell me I’m an idiot and that these things aren’t related.
Edit: I know that they are supporting Flash for desktop and Theora for mobile, but still. It’s weird.
I think “Google To Bundle Flash With Chrome” is Google’s answer to the Mozilla team’s plan to have Firefox monitor plugin versions and auto-disable ones with known vulnerabilities.
(Google’s big on the whole “silently install updates” thing. Bundling Flash allows them to ensure that security updates get installed as soon as they’re available)
Actually, they probably aren’t. Google are just covering all bases, and I doubt they’re supporting and funding Theora’s development for altruistic reasons. They’re bundling Flash now to make a seamless experience for Chrome users. They’re funding Theora most likely because, of late, Apple and Google have been moving to compete directly with each other in just about everything. Given Apple’s investment behind MPEG-LA and H.264, it makes sense for Google not to put themselves in a position where they’d be dependent on Apple and required to pay Apple (even if via the MPEG-LA) for use of H.264. Therefore, they set up a competing solution, and Theora is the obvious choice given its openness. Google position themselves against Apple, make themselves look like the good guys (and in this case they actually are), and get a solution that in the long run is more likely to be adopted by the majority of web video publishers, who cannot pay the H.264 licensing costs. It’s a win for Google and, in this case, a win for the open web as well.
Ahh good call.
It’s a case where not being evil very possibly results in a win for Google. Right now it looks like Google is positioning itself well to replace an imploding Apple. All they have to do is not screw up, and Apple is making it easy for them.
No, Google is just deceiving people like you into thinking that they are for an open web, when nothing has prevented them from pushing Theora adoption through YouTube. Google and Apple are the ones that were pushing h.264 at the W3C in the first place.
Yup, thinking pretty much the same here; shifting weight around is what balance is all about. I (for the most part) welcome our Theora-welcoming overlord!
The article blurb fails to note that Google is shipping Theora in their Chrome browser, and showing it in Chrome OS demos. That’s a much more significant first step.
And that’s not even counting their employee putting it in the HTML5 spec, even if he later took it out again because Apple wouldn’t support it either way.
Having said that, I don’t think the author is a Google employee. I think he’s the guy that did the work to make Theora faster on ARM. But I’m guessing Google isn’t the kind of company that just posts any old nonsense to official blogs without screening it first. And he certainly seems to be speaking on Google’s behalf in the text. And the simple fact that they’re (partly?) funding the work is a big deal in itself.
f–k Apple. That might not be the most intellectual way of putting it, but I think anyone with half a brain can agree.
http://i.imgur.com/nUtZK.jpg
Patent-free is a positive term, if the format actually works well. If not, who will want it?
There is no reason that Theora and H.264 cannot co-exist, except stubbornness.
If Google put their brains up against Theora and get high quality and high performance out of it, we can be sure a lot of adoption will happen.
Regardless, let’s see Flash and Silverlight minimised.
It already has the performance. It is already as good as H.264 for video on the web.
This seems to contradict that
http://keyj.s2000.ws/?p=356
no?
Especially for lower bitrates, like on the web.
I like this comparison because it provides both subjective and objective results.
It seems that they were forced to use CBR, rather than VBR, but you can’t really fault them: ffmpeg2theora gives no way to set a bitrate target and have VBR at the same time.
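(Side note, and take this with a grain of salt since I haven’t tried every version: newer ffmpeg2theora builds appear to have a --soft-target switch that treats -V as a soft bitrate target rather than strict CBR, so something along the lines of
ffmpeg2theora --two-pass --soft-target -V 800 input.y4m
should give you a bitrate target without the hard CBR cap. Here input.y4m is just a placeholder for whatever source you feed it, and 800 is an arbitrary example rate.)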
From the comments:
So the test was probably designed to make Theora look bad.
It is trivially easy to get a Theora video that is as good as h.264 as used on the web at the same filesize. I have done it myself, and I am a rank amateur.
Some screenshots from my effort:
http://ourlan.homelinux.net/qdig/?Qwd=./Theora_480p&Qif=shot0003.pn…
http://ourlan.homelinux.net/qdig/?Qwd=./Mov_480p&Qif=shot0003.png&Q…
http://ourlan.homelinux.net/qdig/?Qwd=./Theora_480p&Qif=shot0004.pn…
http://ourlan.homelinux.net/qdig/?Qwd=./Mov_480p&Qif=shot0004.png&Q…
http://ourlan.homelinux.net/qdig/?Qwd=./Theora_480p&Qif=shot0008.pn…
http://ourlan.homelinux.net/qdig/?Qwd=./Mov_480p&Qif=shot0007.png&Q…
I used one of these two tools:
http://www.mirovideoconverter.com/
http://firefogg.org/
Either one should give you good results (even if you are an amateur like me).
I’m sure that Google can do even better than me.
BTW, the filesize of the Theora clip behind the screenshots above was only 70% of the filesize of the h.264 clip. I used the quality setting of 10 with Theora, but the filesize was still considerably smaller.
Here is someone else’s effort for Theora:
http://ourlan.homelinux.net/qdig/?Qwd=./Theora_720p&Qif=SMplayer.pn…
I doubt it.
How do you expect them to compare quantitative evidence (based on bitrate) if they can’t control the bitrate of Theora in the way they need to? I.e., compare both at 100 kbps or whatever. I would even go so far as to say they were forced to do this with Theora. If they didn’t, people would cry that they didn’t compare like with like.
Unfortunately this doesn’t say an incredible amount about either Theora or h.264, other than Theora isn’t terrible.
Theora isn’t terrible, and it can produce videos as good as h.264 in a smaller filesize, and it is compliant with the W3C Patent policy.
http://www.w3.org/Consortium/Patent-Policy-20040205/
Therefore, Theora is the only codec suitable for use as the web video codec.
Agreed.
Your second conclusion (can produce smaller file sizes) is true, but not useful. I imagine x264 could recompress the file down further as well, as there is probably some information in the current encoding that can be taken out of the original without noticeably affecting the output.
That is most likely what your Theora compression did already.
But no-one can use x264 for web videos. It is not licensed by MPEG LA.
Your point is therefore moot.
What? I think you forgot what we were talking about: whether the evidence you have shown gives useful conclusions about the technical merits of Theora.
Ok, so since you’re an amateur, let me teach you a few things about video quality benchmarks:
1) Either compare files with identical size OR identical quality. Otherwise it is impossible to tell which one is better. => FAIL
(Of course if it’s both smaller AND better, it’s only half-fail.)
2) Don’t just throw out some random screenshots; if you don’t give the video, it’s very possible to have a frame that looks good while most of the frames look bad. (Say an I-frame on one video and a B-frame on the other, or just a rate control fail where one codec gives huge bitrate to one part and makes the other part suck. This happened to Theora in my last comparison.) => FAIL
3) WTF?! THE SCREENSHOTS ARE NOT EVEN FROM THE SAME FRAME => F-A-I-L
4) If you give them both too much bitrate, it’s hard to tell them apart, ESPECIALLY BECAUSE OF 3) It just means your files are too big. => FAIL
5) Also provide the source video so it’s possible to know what the source looks like. => FAIL
6) Tell what encoders and what settings you used. => FAIL
7) Use the best settings and best encoder you can get => FAIL
I could go on and on, but first of all, yes, of course it’s POSSIBLE to make Theora look better than H.264. Use Apple’s encoder for example, it’s almost as bad as Theora. Or you can give settings to x264 that make it self-destruct. Or you can just use different source video to begin with. Or you can use huge bitrate for both and they look the same unless you inspect them on pixel level. Etc. Etc.
Pfft.
If it is your contention that Theora is a miserable codec, then as a rebuttal I simply would be unable to produce screenshots where the quality of Theora frames from a smaller filesize was subjectively as good as that of h.264 frames. It wouldn’t be possible.
If it is my contention that Theora is capable of as good performance as h.264, then it still would be trivially easy for you to produce a Theora video that looked poor.
It is EASY to get a poor result from any codec, no matter how good it is. Just choose bad options for the encoding, and you can mess it up very effectively.
However, it is IMPOSSIBLE to get a good result from a codec if that codec is not capable of a good result.
If Theora was a poor codec, no-one could make a video such as this one:
http://jilion.com/sublime/video
Therefore, sorry my friend, but it is your argument that is a total FAIL.
NOTE! AT THE END OF THE COMMENT, THERE’S A CHALLENGE YOU CAN EASILY DO AT HOME! YAY!
Ok, I’m trying to understand what you’re trying to say, but…. No, I give up.
But I think it has something to do with screenshots.
FORGET SCREENSHOTS.
Give the video files or the whole test is pointless.
Except it wasn’t me who produced the Theora video in my comparison. As far as I know it was Greg Maxwell. And I used the same (as far as I know) source and then I made a video with x264 that is a lot better.
HOW ON EARTH COULD I MAKE A COMPARISON LIKE THAT BIASED?
Wrong. You can make a video like that in MPEG-1 too. Just throw enough bits at it. With x264, it would just be a lot smaller at the same quality.
Ok, still not convinced?
I’ll give you a challenge.
In fact, ANYONE doubting me on the quality point of view can do this at home! Yay!
This is done in Windows, but as the tools I’m using are cross-platform and freely available, you can do the same thing on *nix too. In fact, even the command-lines are probably the same.
1) Go to http://media.xiph.org/video/derf/ and download the parkrun test video. It’s in raw format so compression artifacts wouldn’t bias the comparison in any direction. Save it in some directory.
2) Go to http://v2v.cc/~j/ffmpeg2theora/download.html and download ffmpeg2theora and copy it to the same directory. (Or somewhere along the %PATH% or $PATH if you’re on *nix)
3) Go here http://sourceforge.net/projects/mjpeg/files/mjpegtools/1.9.0/mjpegt… and download the package. You need yuv4mpeg.exe from it. Copy it to the same directory. Yes, they’re command-line tools, but that’s how the best and most flexible tools in video encoding are. (For *nix you might have to go to http://sourceforge.net/projects/mjpeg/files/ , get the source and compile it yourself… or use the precompiled ones from there or your package manager.)
4) Now open your command shell and cd to the folder where you saved everything. (You know, type “cmd” in the “run” thingie and once in there, type: cd “C:\whatever\the\folder\is\”.) (On *nix I assume you know how to do this…..)
5) Run this to get a file ffmpeg2theora can read:
yuv4mpeg -w 1280 -h 720 -x 420jpeg -r 50:1 -i p < 720p50_parkrun_ter.yuv > parkrun.y4m
(ffmpeg2theora can’t read raw video; y4m just adds simple headers to the yuv file that tell ffmpeg2theora the frame size, colourspace and framerate. See the note right after these steps for what those headers look like.)
6) Run this line to actually encode.
ffmpeg2theora --speedlevel 0 --two-pass -V 5000 parkrun.y4m
7) Wait. Theora is not only ugly, but also slow. (Speedlevel 0 for the highest quality and you want that, right?)
8) You should have ~6MB parkrun.ogv
9) Download the parkrun.mkv I made earlier from:
http://www.mediafire.com/?0ztmmqzndim
10) Compare the videos and see Theora completely self-destruct on the clip.
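(A small aside on the “simple headers” mentioned in step 5: as far as I know, a .y4m file is just the raw YUV data with a plain-text header line stuck on the front, something like
YUV4MPEG2 W1280 H720 F50:1 Ip A0:0 C420jpeg
plus a short FRAME marker before each frame. Nothing about the actual picture data is changed; the header only tells the encoder how to interpret it.)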
I haven’t encoded much stuff with Theora, so there might be some errors etc, but please, feel free to correct me and suggest something better.
——–
Now, you can improve the workflow however you want, but you will NEVER have 6MB Theora video that is even near the quality of my video. And since it wasn’t me who encoded the Theora video, there’s 0% possibility for me to cause bias. You can use any tools and options you want, I just gave you something to start with.
lemur2 will probably not even answer this. If he does, he will probably complain about how he can’t go through those steps for reason X. In the unlikely event that he DOES encode the video, he will not mention it, or at least not show his results, for fear of public ridicule.
Also, some people might wonder why I’m even writing this comment? Frankly, I assume lemur2 is a troll, nobody can be that far in the woodworks. Of course there are some other people here too, but mainly I’m writing this for myself.
I just get this warm feeling from giving a challenge people can’t win.
I’ll give it a go tonight when I have some time, but I’m only a novice, so I probably won’t get a good result. This will NOT mean that a good result is not possible. Given that I have seen videos on the net which are great results, entirely competitive with h.264 videos, I may be able to get a good result.
I won’t be using ffmpeg2theora though … there are just too many ways to get a poor result from that encoder for people like me who don’t know what they are doing.
I won’t post my resulting video, because my poor little server and limited bandwidth budget couldn’t hack that.
BTW: There is a story on Slashdot now with speculation that Google are going to open source the VP8 codec.
http://tech.slashdot.org/story/10/04/13/0141208/Google-to-Open-Sour…
“HTML5 has the potential to capture the online video market from Flash by providing an open standard for web video - but only if everyone can agree on a codec. So far Adobe and Microsoft support H.264 because of the video quality, while Mozilla has been backing Ogg Theora because it’s open source. Now it looks like Google might be able to end the squabble by making the VP8 codec it bought from On2 Technologies open source and giving everyone what they want: high-quality encoding that also happens to be open. Sure, Chrome and Firefox will support it. But can Google get Safari and IE on board?”
If that happens, this discussion will all be moot. Theora was starting from a long way behind (and only significant tweaking has recently brought it to near competitiveness), but AFAIK VP8 isn’t behind at all.
If Google do the right thing and release VP8 under a similar pledge as On2 released VP3 under for open source development, all bets will probably be off as far as future development of Theora goes. It is unlikely that any more tinkering with Theora would yield as good a result as working with an open VP8 could. Possibly Google might propose a VP8 codec implementation in a Matroska wrapper.
PPS: as for getting it on IE, Google would surely just make YouTube point to a suitable HTML5/VP8 plugin for IE, just as they currently do in the case of the Adobe Flash plugin.
Perhaps they may even point to their own Google Chrome Frame as the required plugin for IE.
Ars Technica have picked it up as well, now.
http://arstechnica.com/open-source/news/2010/04/google-planning-to-…
I’ll say.
If Google back it up (put their money where their mouth is, eat their own dogfood, etc), and switch Youtube over to use VP8/HTML5, and also host a plugin for IE to render the same, that could be the game-changer right there.
I guess if that all happens, Wikipedia having to switch from Theora to VP8 wouldn’t be a biggie.
Yes, of course I’ve read about that. But I wouldn’t hail it as a savior just yet.
1) From what I’ve seen, the ONLY source for this news seems to be the NewTeeVee guys. And they’re just ambiguously saying “we’ve learned from multiple sources…”. News like that isn’t too trustworthy in IT.
2) It’s still a long way until May 19, when Google is supposed to announce it…
3) There are really no freely available encoders, decoders, or tools for it. This might or might not be a problem, depending entirely on Google.
4) They talk about “open source”, but that doesn’t mean anything. x264 is open source and free software too. And you can get the H.264 specs. It doesn’t really say anything about license issues yet. It might mean only the encoder is open source. Or it might mean that you still have to license, but don’t have to pay royalties in most cases.
5) Even more so than with Theora, VP8 is vulnerable to patent ambush, because it’s actually based on modern technology. Sure you could say “Well Google has lots of lawyers! They wouldn’t make a move like this if they didn’t know it was safe!” But Microsoft has an excellent law department too, and look what happened to “free” VC-1.
That being said, I sure hope VP8 really does become truly free and some arsehole companies don’t try to patent ambush it, because it has some actual potential for competing with H.264 in quality.
I think your problem could be with the -V 5000 switch.
According to the ffmpeg2theora manpage, this switch sets a target video bitrate (in kbit/s), i.e. bitrate-managed encoding.
Results with Theora at a constant bitrate (CBR) are poor.
http://lists.xiph.org/pipermail/theora/2009-July/002430.html
http://hacks.mozilla.org/2009/09/theora-1-1-released/
So I will be using the -v (lower case v) switch instead, which according to the manpage sets a target quality level (from 0 to 10) rather than a bitrate.
I will start with -v 10 and step it down until I get a filesize of about 6MB.
However, this may take quite a while. The parkrun raw video file that you want me to download is 666 MBytes long! You want a 100:1 compression factor.
My OpenWrt server is downloading this now, but it is very slow. It won’t finish downloading for over a day or so at this rate.
This is really stretching things a bit, I must say.
There aren’t too many switches to tweak in ffmpeg2theora…
From what I see on the command line, the only ones that really matter are the rate control method (passes, -v, -V, --soft-target) and the speedlevel. (You COULD play around with keyint and buf-delay, I guess, but I doubt they will make much difference.)
Btw, look at x264’s options
http://mewiki.project357.com/wiki/X264_Settings
So much there to shoot yourself in the foot with.
But notice the --two-pass. It’s not CBR; it first does one pass to analyse and then another where it actually compresses. The overall quality should usually be BETTER, as seen in this graph Xiph made:
http://people.xiph.org/~maikmerten/plots/bbb-68s/managed/psnr.png
But of course you can test all the options you want. That was the point.
That’s how video compression works. And that’s why we do video compression in the first place: uncompressed video is HUGE.
One minute of uncompressed 1080p at 30fps YV12 would take 5.6GB.
Good thing that’s only 10 seconds of 720p. (Although 50fps)
6MB for 10 seconds isn’t really asking much. That would be 2 GB for an hour. (The clip is a really demanding one, though.)
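(Back-of-the-envelope, for anyone who wants to check the numbers: 1920 x 1080 pixels x 1.5 bytes per pixel for YV12 is about 3.1 MB per frame, times 30 frames per second times 60 seconds is roughly 5.6 GB per minute. For the parkrun clip it’s 1280 x 720 x 1.5 bytes x 50 fps x ~10 seconds, which is roughly 690 MB of raw data, so the ~666 MB download is about what you’d expect.)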
And test clips are uncompressed for a reason. If they were compressed, it would be A LOT easier to re-encode them because all the detail would already be lost. Another way would be to use lossless compression, but raw formats are more generally supported and thus easier to use. (And lossless compression usually only gets a compression ratio of around 1:3)
EDIT:
Oh right. Of course you can upload the video to http://www.mediafire.com/
No account required, no silly countdowns, and that’s why so many video encoding people use it for their clips.
The --two-pass switch does nothing if you select CBR via the -V switch. You still get CBR. With Theora it will still be rubbish, compared with VBR.
I read about this somewhere, but I can no longer find the link.
BTW, almost everyone who trashes Theora has done so because they have accidentally managed to discover that Theora is broken for CBR encoding.
I have finished downloading the parkrun raw file, I’ll let you know how it goes.
No. The whole idea of two pass encode is that you specify a bitrate and the encoder makes a VBR file while trying to hit that bitrate as well as possible. (It never gets it totally right.)
Search the internet. The only command-lines for ffmpeg2theora I found with --two-pass were with -V.
also stuff like: http://www.mentby.com/ralph-giles/usage-of-two-pass-parameter-with-…
You’re saying it “does nothing”, but watching it encode, it sure did do two passes.
In fact, ffmpeg2theora won’t even RUN if you don’t specify the bitrate for --two-pass. It won’t run if you only specify a -v with --two-pass. You HAVE TO give a bitrate for it with -V.
Oh right, I forgot to mention in the last comment, the main reason I didn’t even bother with single pass -v is that libtheora uses a silly “ratecontrol method” where it drops frames, making it impossible to make any kind of frame-by-frame comparison. And since the Xiph people are saying two-pass is usually better, I thought I’d be safe if I went with that.
I got it backwards, sorry. The --two-pass switch does nothing with -v [1-10].
My results:
(1) Because the bitrate is constrained, CBR yields similar filesizes at the same rates. At very low bitrates (i.e. very small filesizes, very aggressive compression), Theora CBR lost the plot completely.
(2) At less aggressively low bitrates, Theora CBR rapidly gets better, and closer to h264 in quality. It is a lot closer at 10000kbits/sec than it is at 5000kbits/sec.
(3) I used the following switches for best results for CBR:
ffmpeg2theora --optimize --two-pass --soft-target -V xx000 parkrun.y4m
best for VBR:
ffmpeg2theora --optimize -v x parkrun.y4m
(where x is variable depending on how aggressive one wants the compression).
(4) At low enough quality VBR, Theora can produce a filesize as small as constrained CBR, but again Theora loses the plot quality-wise when asked to attempt very aggressive compression.
(5) At a quality level commensurate with 720p video, Theora is considerably closer to h264 quality levels than it is at very aggressive compression levels.
(6) Theora really doesn’t like the parkrun video. It does a lot better on different material, but this particular video really upsets it.
(7) Theora is (still) better at VBR than it is at CBR.
Sure did. And two-pass too. (TWO PASS IS NOT CBR)
But the whole point is to get similar quality AT THE SAME BITRATE.
If you allow Theora to go to 10000 kbps, then you allow H.264 go to 10000 kbps too, and it will again look a lot better.
Again, two-pass is not CBR. If anything, it’s ABR (Average BitRate, though usually “ABR” is reserved for one-pass variable bitrate video.)
Of course! That was my whole point! Theora can’t compress as well as x264!
They’re “aggressive” only for Theora. x264 handles them fine. I could watch that x264 encoded video online, but the Theora one is pretty much unwatchable.
Of course Theora is “closer” to H.264 at higher bitrates, but so is MPEG-1. If you just throw bits at it, EVERY codec gets close to H.264.
The trick is to compress efficiently. Meaning good quality and lower bitrates. That’s the whole goal of video compression.
And since we’re talking about internet video, bandwidth matters a lot and people want to compress popular videos as much as possible.
Of course it does a lot better on different material. I selected the parkrun video mostly because that’s what Xiph used for thusnelda vs. ptalarbvorm. It’s a demanding video, but it’s demanding for EVERY encoder, not just libtheora.
You could repeat the test with basically any other video, but the result will be the same: x264 gets watchable quality at lower bitrates.
Every encoder is better at VBR than at CBR. If it’s not, the encoder is buggy. Variable bitrate means the bitrate can fluctuate, but it doesn’t have to. Therefore if some video would have better quality at constant bitrate, the encoder should not change the bitrate, thus making it basically CBR.
—–
So in conclusion: As I said, I knew the challenge was impossible when I gave it to you. It’s because I’ve seen so many comparisons of Theora.
In the end Theora is based on an obsolete format (to avoid patent issues) and thus you can do the test with any realistic video and the results will be the same in every bitrate category:
1) x264 has better quality at same bitrate
or
2) x264 has smaller bitrate at same quality
—–
Oh by the way, in case you were wondering how Ptalarbvorm would do, I managed to compile the newest version of it and ran some tests.
This time I took the famous quote: “If [youtube] were to switch to theora and maintain even a semblance of the current youtube quality it would take up most available bandwidth across the Internet.”
I made some tests with best settings for Ptalarbvorm and settings for x264 that get similar speed and quality. And from what I’ve heard, metrics have always been nice to Theora.
I don’t have the video files on this computer, I can of course upload them somewhere if you really want, but I put the results here:
http://pastebin.com/mp6ENGQr
And summary: With much faster encoding speed and better quality, the x264 file was 30.5% of the Ptalarbvorm file.
You misunderstood this comment. Of course the idea is to compare the two codecs at the same bitrate.
At very low bitrates, Theora loses the plot, and h264 is considerably better. At higher bitrates (for BOTH), Theora is much closer to H264.
So what is a low bitrate, and what is more typical?
http://en.wikipedia.org/wiki/Bit_rate#Video
The -V 5000 parameter that you chose is typical for DVD quality … about 480p, 25 fps.
You tried to constrain a 720p, 50 fps video to that same low bitrate. A more commensurate rate for 720p, 50 fps would be 8 to 15 Mbit/s (-V 8000 to -V 15000).
At those more typical bitrates, Theora is much closer in quality to h264 than it is at very low bitrates.
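(To put numbers on that, assuming my arithmetic is right: over the 10-second parkrun clip, -V 8000 works out to roughly 10 MB and -V 15000 to nearly 19 MB, versus the ~6 MB file the challenge asked for.)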
You are unrealistic there. DVD quality is about as high as it gets for web video … 480p, 25 fps.
You can harp on all you like about really heavy compression of high-resolution video, but that is not what most web video is.
Again, the whole point is to save bandwidth as much as possible. When you raise the bitrate, the visual difference will be less with any format. (But Theora will not visually catch up with H.264 until much, much higher bitrates than H.264 would even need to look good.)
We’re talking about INTERNET VIDEO here. Those bitrates you suggest are NOT “more typical”. I constrained a very demanding 720p 50 fps video to DVD bitrates and H.264 handled it. Theora didn’t. What more can you say?
And as I said, I selected that video because that’s the one Xiph used too. Of course I could’ve rescaled it too, but that would’ve added an additional step to the challenge and it wouldn’t have mattered anyway. Besides, even Youtube has 720p content.
And the framerate… If you really want, you can imagine it’s 25fps at 2500 kbps. Doesn’t make a difference.
I’m being unrealistic?
http://www.youtube.com/watch?v=G2UqrlWZnXM
http://www.youtube.com/watch?v=4N2YWRJ-ppo <- 1080p!
http://www.youtube.com/watch?v=xPOAdhWoWxw
These say otherwise. Even Youtube is full of HD video. Not to mention other places like Vimeo.
Actually YOU YOURSELF LINKED A 720p VIDEO:
http://jilion.com/sublime/video
Theora just can’t handle it.
Are you saying people should stop making 720p video for web? Are you saying they should remove all the videos they have made and replace them with 480p versions encoded with Theora that would be both bigger and lower quality?
Web video is really heavy compression of any video. To sum it up: as you can see for yourself, Theora can’t, H.264 can.
You want to try it on a 480p video? Sure. Let’s pick another random video, say, this:
http://media.xiph.org/video/derf/y4m/football_422_4sif.y4m
Apparently it’s already in y4m, so you don’t even have to convert it! You know the rest.
Here’s the H.264:
http://www.mediafire.com/?noygnmygdtt
The bitrate is 850 kbps. Pretty typical, no?
Theora at that bitrate looks like crap, tho.
Remember you can upload your video to mediafire.
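(If anyone wants a starting point for the Theora side, my guess is the same recipe as before with the bitrate swapped in, something like
ffmpeg2theora --speedlevel 0 --two-pass -V 850 football_422_4sif.y4m
though, as before, use whatever tools and settings you think will do better.)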
Something worth mentioning is that all these comparisons tend to be Theora vs. x264. While that’s a reasonable decision, it’s not what you’re actually facing on the web. No major company is going to use x264 because of software patents, which leaves them with the various commercial implementations of h.264 from Apple, MainConcept, etc. which all tend to get crushed by x264 as well. Maybe not quite as badly as Theora does, but they’re really nowhere close to competing with x264.
So there are two sides to the story. The first is that even though x264 is open source, that doesn’t mean that h.264 is not patent-encumbered. This means that using x264 only opens one up to getting sued by MPEG LA.
The second point is that for making videos as used on the web, Theora is demonstrably as good as h.264 as used on the web (because most parties do not use x264, because they do not want to be sued). It is easy, anyone can make a Theora video with just as good quality/bandwidth performance as h.264 videos that are used on the web.
When you use Theora to encode your own videos, then you owe royalties to no-one. Use it as you like, put it up on a website if you want to, no-one can come after you asking for fees. And no-one can ask for fees from anyone viewing your video, on any platform they please.
Wikipedia realises this:
http://videoonwikipedia.org/
and wikipeida even tells you how to do it:
http://videoonwikipedia.org/howto.html
Theora/HTML5 = Web video for everyone. No fees. No risk of being sued. No expensive software required. Supported by the overwhelming majority of web browsers (say 92%), no plugins required except for IE.
Um… seriously… WHAT?!
I can’t just believe the amount of misinformation on these video encoding threads.
Not use x264 because of software patents and use some other H.264 encoder?!?! IT’S THE SAME PATENTS!
The only “licensing issue” that x264 has that others don’t is that whoever sells the encoder has to pay royalties, and there’s no seller for x264 that will do that. But people can just compile it themselves.
Besides, MainConcept really isn’t that bad; they even slightly beat x264 in MSU’s comparison. But I’d avoid Apple’s like the plague.
And speaking of major companies, apparently Youtube, Facebook, Hulu, and Vimeo are not major enough.
If someone uses x264 to encode a video, they have not paid MPEG LA any royalty which MPEG LA require from anyone who uses the h.264 format.
Therefore, MPEG LA can sue them.
No-one can sue anyone who uses Theora to encode videos.
I thought MPEG LA requires payments for playback, not for encoding.
In the case of encoders, it’s the company that sells the encoder that has to pay the royalties, and that’s only after 100,000 units.
The company USING the encoder doesn’t have to pay anything for that.
It’s when they start making H.264 videos and offering them to people that they run into licensing.
Did you read anything I said before? There already are big companies using x264.
The benefit of h264 decoding chips is that they use almost no power; there’s a reason the iPad can play 11-12 hours of 720p video.
Even if they get decent Theora playback on ARM, it will quickly drain the battery, just like the Flash-on-Android demos.
These “h264 decoding chips” also decode mpeg4, realvideo10 or vc-1. It’s a chip function with specialized instructions suited to common decoding tasks, driven by decoding software uploaded at boot.
Given enough pressure, the chip vendor can provide an update that decodes Theora (or at least accelerates certain of its operations).
Theorarm is a first step, and a good one, for three (or more) reasons:
1. If you request a new video codec firmware, you get usable Theora on one chip family. Not “all ARM based devices”, but “Samsung”, “TI”, “Telechips”, etc. (and sometimes even only a subset of their chips). Theorarm gives a reasonable baseline decoder for all ARM chips.
2. Once you have optimized Theora on ARM, you can benchmark aspects of the code and figure out which part of the codec should be offloaded to the chip to get the best result. That is, not offloading the entire codec, but only a certain transformation.
3. Theorarm makes Theora somewhat suitable for mobile devices in a short time frame, increasing Theora’s coverage. It’s a compromise to get out of the chicken-and-egg situation of not having the user base because the codec isn’t supported, and lacking codec support (in the accelerator firmware, for example) because of the lack of a user base, i.e. market interest.
Firefox refusing to support h264, Cortado, Theora-on-Silverlight, Theora-on-ActionScript (ie. Flash, though that project might be scrapped now) are other such means to create sufficient coverage to get out of that situation.
Once all that is implemented, there’s no more reason _not_ to use Theora because “the user can’t see it”.
For hardware acceleration of Theora, specific hardware is not required. General-purpose programmable GPU shaders, or GPGPU, are fine for the task.
It can also be done by GPUs via GPU shaders or GPGPU without any support at all from chip vendors.
There is also Google Chrome Frame.
If we count Google Chrome Frame, then Firefox, Opera, Chrome and IE browsers can all play HTML5/Theora videos. Right now. Today. (IE is the only one that requires a plugin, but it requires a plugin for Flash also, so what the hey).
There is no reason not to use Theora right now, Today.
It has the performance, it has the support in the vast, vast majority of browsers, and it is patent free.
Today.
It’s fine for the task, sure.
Which ARM SoC (except nvidia tegra) has sufficient GPGPU capabilities?
This is about Theora on _ARM_, not on your average quad-socket dodecacore Intel/AMD box with 3 pcie busses each stuffed with 16lane pcie video cards.
Agreed, the topic is about Theora on ARM. Most ARM platforms won’t have much of a GPU at all, so it needs to be addressed. Google are investing in a solution for that; I believe it is called TheorARM.
GPGPU or GPU shaders would be available on most other platforms, however, even including netbooks.
So then, everything will be covered. There won’t be any “hardware acceleration” advantage for h.264.
There still is, and always will be, the “royalty free” advantage for Theora, however.
This is, after all, the whole aim:
Theora on everything. Video for the web solved.
IE will finally support HTML5 and SVG. What would it take for Microsoft to embrace Theora?
Something that will make them more money than being a part of the MPEG-LA and having their finger in the H.264 pie.
http://cristianadam.blogspot.com/2010/02/ie-tag-take-two.html
It requires the user to install a plugin (which is a single click due to ActiveX), but this works without Microsoft’s approval.
If Microsoft ignore Theora, and you want to still use IE, then just use a plugin (just as you would for Flash).
Google Chrome Frame can do HTML5/Theora for you right now, today, if you want.
You don’t have to wait for Microsoft.
Probably for Google to produce and support a DirectVideo Theora codec which IE would then use. I can’t envisage MS not being sensible enough to use their own media framework rather than taking the mozilla route of hard coding specific codec support into the browser.
Why wait for Google?
http://www.xiph.org/dshow/
Why wait for Microsoft?
http://code.google.com/chrome/chromeframe/
http://www.google.com/chromeframe
You can have HTML5/Theora in your IE browser today, and your Windows media system can play Ogg Vorbis, Speex, Theora and FLAC files today.
Enjoy.
I thought the whole idea was that you WOULDN’T have to install plugins.
Flash is also a plugin?
There have been open source video player plugins for ages. 99% of IE users will never install Chrome Frame, so I fail to see what your whole point of “IE CAN PLAY THEORA TOO!!!” is.
They would if YouTube required it, and as you pointed out there’s not much difference between requiring one plugin vs another.
You don’t need a plugin, for capable browsers (Firefox, Opera, Google Chrome … almost 50% of browsers in use worldwide is one of these). IE, however, is not even close to a capable browser.
Yes it is.
Nonsense.
99% of IE users right now go to YouTube, and the YouTube site says “you need a plugin to see the videos”, and Youtube gives them a popup which lets them install the plugin.
Today, that plugin is for Adobe’s Flash. Most users don’t even know what Flash is, or exactly who Adobe are.
If tomorrow on YouTube the plugin required was for Google Chrome Frame instead of Adobe Flash, 99% of IE users would just go ahead and install that without batting an eyelid (just as they did for Flash in the first place).
Google switching YouTube to Theora would do it.
With IE9 only going to Vista/7, it doesn’t really matter anyway. Flash will still be a much larger target due to being compatible with older versions of IE and Safari. There are also portable devices like the iPad that have h.264 hardware decoding. So even if IE9 supported Theora, major publishers would still ignore it.
What Theora needs is for Google to convert their YouTube videos to it otherwise you’ll have to wait a long time for it to get anywhere.
It’s pretty much hopeless at this point. Google is just feigning support so they stay in favor with the FOSS crowd. But actions speak louder than words and Google clearly favors Flash and H.264.
I don’t understand why Thom feels that the ‘endorsement’ of Theora is from Google. The blog post clearly ends with “By Robin Watts, Pinknoise Productions Ltd” which means that a Google employee has not written it. Granted that since they allowed it to be published on their blog, they agree with the stance, but that still does not mean they made the statements. Please clarify that in your article, Thom.
As has been said many times, the main problem with Vorbis/Theora/Ogg is the lousy Ogg container format. Hopefully Google will get smart and adopt something more sensible like Matroska.
There is nothing wrong with the Ogg container format for the purpose of containing Theora/Vorbis audio/video and streaming it over the web.
The Ogg format is actually a little better than Matroska for the purposes of streaming.
Having said that, Matroska would be suitable also as a container format. Not a big deal either way.
Unfortunately, lemur2, you are wrong about this too. Until recently (and maybe still?) Ogg didn’t support indexing, so to seek you had to do a binary search, which is very slow over the internet. There was a blog post on planetxiph about this; I can’t find it now, but I did find a page with many more technical objections to Ogg:
http://ffmpeg.org/~mru/hardwarebug.org/2010/03/03/ogg-objections/in…
The real problem though is that Theora performs significantly worse than H.264 at realistic bit rates. At high bit rates like you have used (3 Mb/s I believe?) the difference is very hard to see, but at realistic bitrates (say, 200-1000 kb/s) it is obvious.
I wish Theora were as good as H.264 but I am willing to accept the fact that it isn’t. I wish you’d stop pretending otherwise.
Actually there is a funny hack that apparently does something like indexing and tries to patch some of the worst parts of Ogg:
http://wiki.xiph.org/Ogg_Skeleton
Of course supporting that is basically supporting a whole new format, so it isn’t going too fast.
Chris DiBona, open source and public sector engineering manager at Google, says that “If [youtube] were to switch to theora and maintain even a semblance of the current youtube quality it would take up most available bandwidth across the Internet.”
http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2009-June/02038…
I believe that was from before even Theora 1.0 came out, and it was definitely before the current Thusnelda code or the upcoming improvements. And he was also torn to shreds by all the people noting how poorly the theora encoding options were chosen.
More to the point, obviously if a switch was made they would have to make a point about keeping bitrates approximately the same. So it shouldn’t be a comparison about what bitrates each require to have the same quality, but instead a comparison of quality between the encoders with the same bitrate. There is a significant difference between the two.
Your link dates back to June, 2009.
The Theora codec that is currently competitive with h.264 came from the development effort codenamed Thusnelda, and it was not released until late September, 2009.
Sorry, but Chris DiBona was talking about an entirely different level of performance from that of the current Theora (Thusnelda) codec.