A new set of x264 and vpxenc encoder benchmarks has been published. The new benchmarks address many of the concerns raised in the comments about the methodology used in the previous article, such as the use of SSIM for quality measurement. Theora is also included in these tests.
I found this quite enlightening. It still doesn’t answer every question I might have, but then again, you can’t expect a benchmark to ever answer every possible question.
As for the results: vpxenc is still under heavy development, but it’s very promising to see that it already matches x264 baseline in SSIM quality. Considering how young the encoder still is, this suggests there is plenty of optimization potential left, both quality- and speed-wise!
What’s interesting is how poorly Theora performs. Many enthusiasts have been hailing Theora as the best open video codec for a good long while now, but it gets thoroughly beaten by both contenders here, even though its encoder is already quite well-established.
Anyway, I have to congratulate the vpxenc devs on a job well done, and I hope to see more improvements coming soon. It is awesome to have a genuinely great open video codec that challenges H.264 quality-wise!
Exactly what these benchmarks mean is subject to interpretation. In practice, they mean that encoding WebM to the same quality as x264 will take longer. Alternatively, they mean that if you are only prepared to devote a fixed amount of time to encoding, then the video produced by x264 within your time budget will be slightly better quality per bit than WebM.
In real life, since the vast majority of people encode video only very rarely, the latter comparison doesn’t come into play, and the former comparison is the only thing with any practical importance.
For all practical intents and purposes, all this means is that it will take you a little longer (and cost you infinitely less) to encode your video clips in WebM to the same quality as you would have had if you were using x264 legally.
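If you want to put a number on that “little longer” for your own clips, here is a rough sketch that times both encoders via ffmpeg (assuming an ffmpeg build with both libvpx and libx264; the input file name and bitrate are just placeholders):

import subprocess, time

def timed_encode(codec_args, outfile):
    # Encode clip.y4m with ffmpeg and return the wall-clock seconds taken.
    start = time.time()
    subprocess.run(["ffmpeg", "-y", "-i", "clip.y4m"] + codec_args + [outfile],
                   check=True)
    return time.time() - start

# Same target bitrate for both encoders; only the codec differs.
webm_s = timed_encode(["-c:v", "libvpx", "-b:v", "1M"], "clip.webm")
h264_s = timed_encode(["-c:v", "libx264", "-b:v", "1M"], "clip.mp4")
print("WebM: %.1fs vs H.264: %.1fs" % (webm_s, h264_s))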
If you install a copy of the new Firefox 4 browser you can see for yourself that the WebM video clips themselves do not lack for quality:
http://www.mozilla.com/en-US/firefox/4.0/whatsnew/
You are absolutely right. But you know how it is: some “pro” content providers complain because they’re too lazy to put up with that little extra time or extra step in converting the output from their video editing software.
I say: who the fuck cares? As a software developer I wish I could choose the language/frameworks/tools for every project I work on (that would make my job way easier), but often I can’t. So what? That’s why it’s called a job.
As a content consumer I really hope WebM becomes the dominant format of the web. If this means professional filmmakers’ work will be 2, 3, 4 times harder, then so be it. To all of them: grow up, do your jobs and get over it.
Well, I think most “pro” content providers are wedding videographers. They could use h264 illegally with little chance of getting caught and save some time. Although if you can use it legally, then you’re a lot smarter than I am at figuring out the licensing. I’m sure many of them do not realise that they are in strict violation of the licensing. Realistically, if there is good tool support they won’t even notice.
It has been suggested that the bitrate used in these tests is too low for a fair comparison with Theora, i.e. bitrates below a certain point cause Theora to perform much worse than it would above that point. I don’t know whether this holds for this test, but it is possible Theora would be more competitive at higher bitrates.
Very impressive indeed, at how unimpressive this is.
By the time WebM gets a decent and competitive encoder, H265 will be out.
There is already a successor to VP8/WebM in the works, too, so we’ll see.
Besides, even if H.265 actually did come out suddenly, it still wouldn’t have a “decent and competitive encoder” yet anyway. Oh, sorry if I ruined your trolling attempt.
What trolling, you tool.
These are hard facts; I apologise if they are too difficult for you to grasp.
WebM is inferior in most ways and it arrived too late.
H265 is a fact and has been in development for a long time now.
I would love some info from you about a VP8 successor.
There is only one factor in which x264 beats WebM, and that is encoding speed. Most people use a video encoder once in a blue moon … so in practice this simply means that if you want your occasional video encoded in WebM at the same quality as you would have had with x264, you will have to wait a little longer for it to encode. Depending on the size of the video clip, this could mean a minute or so of your time.
Meh. Big deal.
If you encode the video clip in h264 instead and put it on a website, you could be up for thousands of dollars in license fees.
Worse: If you encode the video clip in h264 instead and put it on a website and you don’t get a license, you could be up for hundreds of thousands in fines and court costs.
I like WebM/VP8. Its software decoder on Linux is less resource-intensive than H.264’s at the same quality.
But I can encode video to H.264 and give a big FFFFFUUUUUU! to MPEG-LA, because I live in a country where software patents are illegal.
You’re quite naive; there’s more to it than software patents.
Here are the non-US countries with granted patents; I’m kind of expecting you live in one of them:
Europe: Germany, France, UK, Finland, Italy, Sweden, Belgium, Bulgaria, Liechtenstein, Austria, Czech Republic, Denmark, Spain, Hungary, Ireland, The Netherlands, Poland, Romania, Portugal, Slovenia
Asia: Japan, China, South Korea, Hong Kong, Singapore, Taiwan, India
Americas: Canada, Mexico
Australia
Source: http://www.mpegla.com/main/programs/AVC/Pages/PatentList.aspx
And I live in none of them! Patents in my country are very much explicitly forbidden. Click on my name to see where I live.
PS: Oh, and yeah. Since my country is quite small and I personally know the PM, the previous PM, the head of the patent office and some important politicians (it’s not that unusual in my country), I can be pretty sure that software patents will never be legalised.
Though I am sure that hardware encoders and decoders are very much covered by patents. Software is 100% not.
As a resident of Lithuania who is happily untroubled by software patents, you can still help out the rest of your fellow men and women who live in countries not so blessed by refraining from using patented proprietary technologies over the web. It is the worldwide web, after all.
This is relevant:
http://hacks.mozilla.org/2011/03/promoting-the-use-of-new-web-techn…
Promoting the use of new web technologies in Lithuania
Keynote: “Building a better web with open, new technologies”
Thanks for your consideration.
Personally I feel hosting material — binaries, source code, documentation, study material etc. — that might be of questionable legality elsewhere would be much more beneficial worldwide than merely refraining from using H.264. Hosting such material would benefit several people, whereas refraining from using patented technology would only benefit himself, if even that.
http://www.groklaw.net/article.php?story=2011032316585825
Why Is Microsoft Seeking New State Laws That Allow it to Sue Competitors For Piracy by Overseas Suppliers?
tl;dr – Nope, that won’t help you at all.
Your only hope is for everyone, worldwide, to use royalty-free technologies. Win, win, win for everybody.
Anyone, anywhere, using proprietary technologies without paying US software-patent rent … a world of legal hurt is headed the way of US residents and companies. It is US residents and US companies who will pay.
“Anyone, anywhere, using proprietary technologies without paying US software-patent rent … a world of legal hurt is headed the way of US residents and companies. It is US residents and US companies who will pay.”
Patents and the USPTO are the worst thing to happen to the software field in its history. You might think they only affect the US, but US corporate lobbyists keep pushing for them throughout Europe.
One of the first acts of the US when they overthrew the Iraqi government was to send in RIAA personnel to write copyright laws favouring the US – even before violence was under control. It’s sickening.
Even if your country doesn’t accept software patents, chances are you pay for it through absurdly high prices in your local market/currencies. European retail prices are sometimes twice the US retail prices in real dollars.
Do you have any links for that claim about performance?
I’ve been attempting to gauge whether WebM/Youtube/HTML5 runs better than H.264/Youtube/Flash on my Ubuntu netbook.
Like most netbooks it doesn’t have any hardware decoding support for either codec, and Flash generally seems to lag behind on Linux. So while the number of people with Linux netbooks might be small, this could be a good niche for WebM to conquer first, giving people a straightforward performance increase to encourage them to start testing it out.
It certainly seems to be getting better, but I’m wary of jumping to conclusions as there are lots of different elements to consider (e.g. Firefox vs. Chrome, GL acceleration on Intel, full screen vs. non-fullscreen, different sizes of video, both bitrate and resolution, the quality of Adobe’s software decoder vs. ffmpeg), so I’d love to see some serious benchmarks if anyone has done them.
Nope… it’s very much my machines. I checked Linux without acceleration.
Same video. Dancing android from YouTube (MAH01434) @ 720p. Same visual quality of video.
Athlon X2 EE @ 1GHz – 65%-85% (WebM) 80%-90% (H.264)
ThinkPad T42 – both basically killed the system. Both play with lag.
Atom N270 @ 1.6GHz GMA945 @ Windows XP – 22%-33%(WebM) 40%-50%(H.264)
In short, WebM looks like the least resource-hungry on decode, and Linux still sucks at graphics.
I mean, an Athlon X2 2.1GHz on Ubuntu being beaten by a lowly first gen Atom on Windows XP? Something is really wrong here…
That would be the Intel graphics drivers for Linux (not Linux per se).
Hopefully a GPU-based decoder for WebM written in OpenCL which runs in conjunction with a Gallium3D graphics driver will be unveiled in a few weeks time.
Huh? Neither of my Linux boxes has Intel graphics. The T42 has an ATI Radeon 9600M; the desktop has a GeForce 9500GT (low profile).
Windows 7 on the same desktop played back the WebM/H.264 (no hardware acceleration) video at 16-25% CPU utilisation, Linux at about 35-45%.
Linux actually supports several acceleration frameworks for several types of video cards, but ultimately it’s up to the player to leverage those capabilities. Sadly, many players do not properly do so. What you’re blaming on Linux should likely be blamed on your video player; it’s not a Linux problem for lots of common hardware.
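For what it’s worth, a quick way to see which acceleration frameworks a Linux box actually exposes is to look for the standard probe tools; a small sketch (it only checks that the vainfo/vdpauinfo probes are installed, not that your player actually uses the frameworks):

import shutil

# VA-API (Intel et al.) and VDPAU (NVIDIA et al.) each ship a probe tool.
for tool, framework in [("vainfo", "VA-API"), ("vdpauinfo", "VDPAU")]:
    found = shutil.which(tool) is not None
    print("%s: %s %s" % (framework, tool, "present" if found else "not found"))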
Adobe has been especially slow to leverage the hardware acceleration frameworks which have long been available on Linux. Sadly, they’ve been chasing an NIH development path, creating their own acceleration layer rather than using what’s long been available.
So long as you are offering the h264 video for free, there are no license fees.
I believe that even if you let people look at your video for free, if it is a commercial video (e.g. advertising) then you must pay a license if you use h264.
Ergo, videos such as those I linked to:
http://www.mozilla.com/en-US/firefox/4.0/whatsnew/
which can be thought of as advertising, are thus cheaper to encode as WebM, since you face no risk of being sued for hosting them.
Mozilla face no such legal risk for the perfectly fine quality WebM videos they are showing to advertise Firefox 4 on their “Whats New” blog page.
All it took to avoid any risk was for Mozilla to spend just a couple of extra minutes encoding the videos in WebM. That is an absolute pittance compared with the cost of making the videos in the first place, or the cost to Mozilla’s blog of getting a license for h264, or of defending a lawsuit for not having such a license.
Can you provide a source for this?
I can’t find any relevant articles via Google search.
There is quite a lot of discussion on the web around the fact that the only free use of h264 encoding is, and I quote directly from the licenses, “ONLY FOR THE PERSONAL AND NON-COMMERCIAL USE OF A CONSUMER”.
Backup:
http://bemasc.net/wordpress/2010/02/02/no-you-cant-do-that-with-h26…
http://news.cnet.com/8301-30685_3-20000101-264.html
http://www.betanews.com/article/10-questions-for-MPEG-LA-on-H264/12…
Mozilla putting a video touting the advantages of Firefox 4 on their web pages would NOT constitute “non-commercial use”. Not on any day of the week.
This isn’t even contentious … of course one needs a license to use h264 for any kind of commercial use (even if people are allowed to look at one’s videos for free).
From the bemasc.net article: “ONLY FOR THE PERSONAL AND NON-COMMERCIAL USE OF A CONSUMER”.
Actually, thinking about this a bit, and putting a lawyer-think hat on for a moment:
http://bemasc.net/wordpress/2010/02/02/no-you-cant-do-that-with-h26…
… I think a lawyer would interpret this phrase as meaning that one’s purchase of the program (in this case Final Cut Pro) gives the purchaser a license to use the h264 AVC functionality of the program ONLY for uses which are both personal AND non-commercial.
So … no editing of a video of your amateur basketball team or your local church choir or your school sports day … such uses would be non-commercial but not personal.
Come to think of it … putting any video on a social website such as facebook … public use, not personal. Any use of h264-encoded video on the web at all really (even non-commercial uses) … is not personal use. You are therefore not licensed.
OTOH, here is a gallery of quality example videos whose kind of use everyone IS fully licensed for:
http://www.mozilla.com/en-US/firefox/video/?video=fx4-whatsnew
Enjoy.
You wish.
Only ‘non-commercial web video’ is free. What, exactly, ‘non-commercial’ means is a mystery to everyone – which is exactly what the MPEG-LA wants. At OSAlert, we have ads. As such, when we upload a video using HTML5, are we non-commercial or commercial? We’re not going to take the risk.
Using YouTube embeds is fine, since the burden is then on YouTube. However, with HTML5 video, WE are the video provider – not YouTube or Vimeo. As such, the rules suddenly apply to us, and the rules state that we need to pay up.
We’re not going to.
Actually, according to the licenses of effectively every h264 product available for people to use, only “non-commercial AND personal” use is licensed.
Submitting even a non-commercial video to YouTube would be a public use, surely. Reading the licenses, it would seem to me that one would need a separate license from MPEG LA to cover a public (non-personal) use such as that.
And not to forget that even if “non-commercial” would be clearly defined, this still does only cover “web” distribution, not distributing on DVDs or on non-commercial TV (e.g. university campus TV).
And not to forget (do we see a pattern here) it only covers distribution, not creation or consumption.
But maybe those non-commercial web videos appear out of thin air and are purely consumed on bitstream level.
This is what people really need to be focusing on. WebM, based on what I’ve seen, is beating H.264 in decoding performance and now has hardware support. The combination means better battery life, making it a double win for WebM over H.264.
While every stream must be encoded once, a successful stream will be decoded millions of times in comparison. Which ultimately means decoding is extremely important, second only to visual quality. Encoding time, frankly, I’m not even sure is the third most important factor. For encoding, the only thing that really matters is whether it’s fast enough. By all accounts, encoding time is fast enough, which is why we are seeing WebM gaining market share and acceptance.
Realistically, WebM is already reshaping the market. Hardware and codec support is growing rapidly. At the very least, H.264 now has a proven rival. The real question is: how long will the market have two significant codecs?
Unless, of course, people do this for a living. In which case their time is actually … you know … valuable.
Agreed. People in a commercial position should objectively weigh up how much one minute per video will cost them compared to the cost per year of buying a license from MPEG LA for their commercial use of h264, and the cost per year to whoever is going to host the video, and the cost of being on the wrong side of the best interests of the end users.
I can’t in my wildest dreams imagine any balance sheet where h264 comes out better than WebM.
You never were very imaginative.
This web site seems to be doing quite well with live and archived h.264 streams. Check out their quality and savvy:
http://www.digitalconcerthall.com/en/help
It’s not “facts”: yes, H.265 (or whatever they’ll call it when it’s out) is in development, but it takes time to create a complete and optimized encoder. Go ahead and take a look at how long it took the x264 encoder to get where it is now. It didn’t just happen overnight when H.264 was released, and it won’t happen with H.265 either. There will still be a transition time from the old codecs to the new one.
Considering how much support WebM has gotten from major international corporations I’d say they disagree with you.
Go ahead and browse the VP9 git repository.
Actually, I am curious about VP9 as well. Is there really a public repository where it is being developed? Any links to that?
I know there is VP8 experimental:
http://goo.gl/cIf3B
Is that what you meant?
As for H264 vs WebM… I bet that anybody who considers themselves a web power user can play this link:
http://goo.gl/fSfS4
Which of the videos on this site can they play?
http://goo.gl/JDobV
What the heck is one of them when they’re at home?
Calling someone a tool isn’t the best way to create a healthy discussion.
The spec has been in development a long time; implementations of it have not. Only once software is released will there be any optimisation. I should point out that reference implementations tend to be less optimised, favouring code clarity etc.
Regardless of which format you favour, “too late” seems like a stretch. WebM arrived before the H264 royalty collectors caused widespread damage. It also arrived before the HTML5 video tag went mainstream. I would say that is not too late.
While there is no doubt that H264 is currently the superior format, I think it is an open question whether it is enough better to offset its disadvantages. I see no reason why WebM cannot become the dominant video standard on the web.
Today, most people get their web video via Flash. The user does not care if it is H264 or WebM as long as Flash plays it. Flash supports WebM. In the future, most video will likely be served natively via HTML5.
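The usual trick for serving natively is the HTML5 fallback pattern: list a WebM source first and an H.264 source second, and each browser plays the first type it can decode. A minimal sketch (file names hypothetical), written out here from Python:

# Write a page with a WebM-first <video> element; browsers that cannot
# decode WebM fall through to the H.264 source.
markup = """<video controls width="640">
  <source src="clip.webm" type="video/webm">
  <source src="clip.mp4" type="video/mp4">
  Your browser does not support the video tag.
</video>
"""
with open("index.html", "w") as f:
    f.write(markup)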
WebM is (and will be) much better supported by desktop browsers than H264. Firefox, Chrome, and Opera all support WebM exclusively. Internet Explorer and Safari (desktop) support WebM if it is an installed video codec on the OS. Android supports WebM obviously. So, that leaves only iOS where WebM would currently be unwelcome. The fastest growing platforms all support WebM.
There are a huge number of companies implementing WebM in hardware:
http://www.webmproject.org/about/supporters/
WebM is less expensive and safer for content producers and hardware manufacturers to adopt. WebM may soon have an even bigger addressable audience online than H264.
So, the question is really why H264?
Quality? WebM is the same quality as H264 baseline and getting better. IMHO, WebM is “good enough” for the mainstream Internet user. This is not enough of a reason to keep it from being used for video on the web.
Really, the only reason to use H264 is because Microsoft or Apple have made it impractical/impossible to choose and they have chosen for you. That may be a difficult line for them to hold. Microsoft is already giving ground and smartphones have short lifetimes.
If WebM takes the web, it has a real shot at other niches. This is not MP3 vs Ogg.
Too late? We will see.
I would expect HEVC/H.265 encoders to mature as the HEVC/H.265 standard matures.
WebM has about two years to gain traction before HEVC/H.265 becomes reality. That is a very small window of opportunity.
When I’m watching a video, I’m watching, you know, the video. Not pretty graphs.
It’s good that at least SSIM was used and not PSNR, but the fact remains that not even SSIM accurately represents visual quality. Using ‘--tune ssim’ with x264 deactivates some psy optimizations. And so while this test shows how well the encoders do at optimizing for SSIM, pretty graphs tell me nothing about what the actual videos look like.
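For reference, this is roughly what the metric computes; a toy single-window SSIM in Python/NumPy (real implementations average a windowed variant of this over the frame), which makes it plain that it is a statistical similarity score rather than a model of human vision:

import numpy as np

def ssim(x, y, L=255, k1=0.01, k2=0.03):
    # Single-window SSIM over two grayscale frames of equal size.
    x, y = x.astype(np.float64), y.astype(np.float64)
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2  # stabilising constants
    mx, my = x.mean(), y.mean()            # luminance terms
    vx, vy = x.var(), y.var()              # contrast terms
    cov = ((x - mx) * (y - my)).mean()     # structure term
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))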
The thing is: even human evaluations are biased!
Several examples:
* audiophiles prefer ‘warm’ (distorted) sound because that’s what they are used to from their vinyl discs.
* young people prefer MP3-style distorted sound because that’s what they are used to.
* consumer cameras by default do not try to have accurate color reproduction because users prefer ‘hot’ colors.
SSIM vs bitrate overage
Also, since some encoders seem to have trouble staying at the designated bitrate, shouldn’t encoding bitrates be adjusted to stay at the requested criteria? Doesn’t this mean x264 should be encoding at a lower bitrate? Likewise, doesn’t this mean Theora should be encoding at a much higher bitrate, especially given your statement, “The Theora encoder considers container file overhead when encoding, possibly explaining its lower bitrate.” That appears to mean Theora is artificially punished in the SSIM evaluation because of its proactive efforts to maintain the requested bitrate.
It appears x264-high consistently exceeds the designated bitrate, which would have the effect of artificially driving up its SSIM rating. It really seems it’s still not an apples-to-apples comparison until all encoders are actually encoding at the designated bitrate; otherwise those that do honor the requested bitrate are effectively punished. Seems like a clever way for x264 to consistently obtain an undue advantage in comparisons such as these.
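Anyone can sanity-check the overage on the published files with a bit of arithmetic; a back-of-envelope helper (file name, duration and target are hypothetical):

import os

def overage_pct(path, duration_s, target_kbps):
    # Container bytes are included, so a small positive overage is expected.
    actual_kbps = os.path.getsize(path) * 8 / duration_s / 1000
    return (actual_kbps - target_kbps) / target_kbps * 100

# e.g. a 60 s clip targeted at 1000 kbit/s:
print("%+.2f%% over target" % overage_pct("clip.webm", 60, 1000))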
I should point out that the container format overhead is included in the bitrate charts, so a small margin over the target is expected for vpxenc and x264 (and Theora should be expected to hit it exactly). Container overhead should be in the range of around 0.5%.
All the encoders do a good job of matching the requested bitrate in 2-pass mode; all bitrates are within around 1% of each other. IMO not significant enough to worry about.
The differences are a little more significant in 1-pass mode, indicating that the problem isn’t just Theora considering container overhead; rather, Theora is simply doing a worse job of matching the requested bitrate. I consider this an encoder decision, and passing Theora a larger bitrate to compensate would be unfair.
But again, the differences here are still very small; getting a comparison with all encoders hitting the exact bitrate in 1-pass mode would be quite difficult.