The WebM project Blog has announced an update release of the VP8 Codec SDK codenamed the “Bali” release. The Bali release was focused on making the encoder faster while continuing to improve its video quality.
Compared to the initial launch release (May 19, 2010), and the previous release codename Aylesbury (October 28, 2010), the new Bali release achieves the following improvements:
- “Best” mode average encoding speed: On x86 processors, Bali runs 4.5x as fast as our initial release and 1.35x faster than Aylesbury.
- “Good” mode average encoding speed: Bali is 2.7x faster than our initial release and 1.4x faster than Aylesbury.
- On ARM platforms with Neon extensions, real-time encoding of video telephony content is 7% faster than Aylesbury on a single-core ARM Cortex-A9, 15% faster on dual-core, and 26% faster on quad-core.
- On the Nvidia Tegra 2 platform, real-time encoding is 21-36% faster than Aylesbury, depending on encoding parameters.
- “Best” mode average quality improved 6.3% over Aylesbury using the PSNR metric.
- “Best” mode average quality improved 6.1% over Aylesbury using the SSIM metric.
Note that the Aylesbury release itself achieved over 7% overall PSNR improvement (6.3% SSIM) over the launch release. This brings the total SSIM improvement for the Bali release to 12.8% over the launch release. Note also that most of the image quality and encoding speed comparisons of WebM and H.264 posted online used the launch release of WebM, not the Aylesbury release, and certainly not this new Bali release.
I appreciate Google posting these updates. This is the 2nd one they have done in a fairly short amount of time, so they are at least demonstrating some determination to make webm better.
However, I _really_ wish they would give some indication of absolute performance instead of just reporting everything relative to the last release. The relative numbers are useful, don’t get me wrong, but the lay person reading this has no idea how the webm encoder actually performs…
I suspect they don’t want to do so because performance is still so bad, but facts are facts and they shouldn’t hide them. I want to see webm get used more, but the sad fact is that although the encoder is now 4.5 times faster than it originally was, it is still over 15 times slower than x264 even using best-case comparisons for webm.
They have a looonnnggg way to go. Achieving encoding speed parity (or at least being in the same zipcode) is a key requirement for adoption by most users I would think.
I don’t understand where you get this “bad” idea from. When it was released, comparisons between WebM and H264 showed that WebM was only slightly behind the best H264 encoder (x264) in quality. WebM has improved over 12% in objective quality since then, and significantly in subjective quality as well.
WebM is fast enough to encode at acceptable quality to be used real time:
http://blog.webmproject.org/2011/02/vp8-for-real-time-video-applica…
Improvements in WebM are ongoing:
http://blog.webmproject.org/2011/03/next-up-libvpx-cayuga.html
I’m pretty sure that the original encoding speed comparisons compared hardware-accelerated H264 against software-only WebM encoding.
Hardware acceleration of WebM encoding is now becoming available in production hardware with the Nvidia Tegra 2 platform, and ARM Platforms with Neon extensions.
http://www.arm.com/products/processors/technologies/neon.php
As far as I know, there has never been an apples-with-apples comparison of the two using equivalent levels of support.
Dude… I USE webm. And I am not talking about quality at all, only speed.
Anything is fast enough if you throw enough hardware at it and set the presets right. The statement that it is “fast enough” for realtime encoding simply doesn’t mean anything relative to other codecs.
I know that, and I appreciate them immensely. I am a supporter of the format. However, overstating the facts doesn’t help anyone. Webm was not and is not “slightly” slower than x264. It varies depending on the presets, the source material, and the number of cores you have, but for Aylesbury it is still about 5-10 times slower when run single-threaded on MY source files… Running it multi-threaded affects quality negatively (at least it did in Aylesbury); x264 does not suffer from that issue. Running them both multi-threaded closes the gap to about 3-6 times slower, but it is still a LOT slower.
No. They were not. It was x264.
That will certainly help, but my point about not reporting real performance still stands. I’m not ragging on the state of webm – it is what it is and it will get better. I simply don’t see the point of not reporting absolute performance.
I will happily post a comparison as soon as I get Bali working.
WebM encoding is indeed considerably slower than x264, I didn’t say otherwise. Here is what I did say:
Now H264 has patents on many of the most optimal methods, so in order to approach H264 in quality, there is a tradeoff. WebM decoder speed is not traded off; in fact, WebM decoding is less computationally expensive than H264. Encoding speed is where the compromise was made. WebM encoding is indeed a lot slower; that is how it can make up ground in quality while being forced to use less-than-optimal methods.
The real question is “what is acceptable”? Given there must be a compromise somewhere, where should any sacrifice of performance be made? Given that most people will only ever encode relatively few video clips, and that video clips are typically encoded once per many thousands of times they are decoded, it makes sense that the performance compromise should be made in the encoding. The Bali release makes this sacrifice a good deal less painful than it was in the previous two releases of WebM.
For people who do have a lot of WebM encoding to do, the best solution at this time is to obtain a platform with support for encoding in hardware. At this time there are two hardware solutions for encoding, they are the Nvidia Tegra 2 platform and “ARM with Neon extensions” platforms.
There is also, AFAIK, work in progress to provide wider support for hardware-accelerated WebM encoding via GPUs using the OpenCL language, but that is not yet usable.
Well fucking said! I’m tired of developers whining about actually working for a change.
“Ooh, it’s too slow!!!” Well guess what, do what you’re supposed to do, what you’re PAID to do, and encode the damn videos already.
Just like web developers. “Oh noes, we have to move to a newer format (HTML5), I can’t do it!!!” Well guess what again loser, get off your fat ass and code the damn website already.
If most developers were actually just slightly less lazy, we wouldn’t be stuck with stupid apps that relied on outdated programs (IE6) to begin with.
This comment winds me up rotten. Not because of any fault of yours or the WebM developers’, but because of the sorry state patent law leaves consumer choice in.
I know the topic of patent protection has been done to death already, but I really do wish developers were left to write code rather than worrying about avoiding best practices because they’re patented.
If someone “steals” code, then it’s right that they’re sued under copyright law. Though that’s a different case-study altogether. Patents, however, have no place in software.
</rant>
i’d like to add that the neon support in x264 is rather limited – although i don’t know the extent of optimization for neon in webm.
(not saying x264 is worse or something, we all know it’s an _excellent_ encoder – the best, in fact, but let’s not make the mistake of thinking it’s perfect in every single way)
Metrics are pretty much useless. Optimizing for PSNR is completely useless; it gives you one big blur. SSIM is better, but not the real deal either. x264 has tunings for both PSNR and SSIM. You know what these tunings do? They turn off options that make the video look good. Do a test yourself: three 2-pass encodes to the same bitrate, first with “--tune psnr”, second with “--tune ssim”, and third with “--tune film”, and you’ll see.
As long as the libvpx developers focus on metrics, they’ll never create a good encoder. Psychovisual optimization is where it’s at. And these psy-opts lower PSNR and SSIM, not raise them (well, AQ raises SSIM compared to no-AQ, but then there are other psy-opts that lower it again).
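To make the “one big blur” objection concrete, here is a minimal pure-Python sketch (my own illustration, not libvpx or x264 code). PSNR is just a log-scaled inverse of mean squared error, so two very different-looking distortions with identical MSE score identically, which is exactly why optimizing for it alone can mislead:

```python
import math

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio between two equal-length pixel sequences.

    PSNR = 10 * log10(peak^2 / MSE); higher means closer to the reference.
    """
    if len(ref) != len(test):
        raise ValueError("frames must be the same size")
    mse = sum((a - b) ** 2 for a, b in zip(ref, test)) / len(ref)
    if mse == 0:
        return float("inf")  # identical frames
    return 10 * math.log10(peak ** 2 / mse)

# A uniform +2 error and a single large spike have the same MSE,
# hence the same PSNR, even though the spike is far more visible.
ref = [128] * 100
blurry = [130] * 100            # every pixel off by 2
spiky = [128] * 99 + [148]      # one pixel off by 20
print(round(psnr(ref, blurry), 2))  # 42.11
print(round(psnr(ref, spiky), 2))   # 42.11
```

SSIM would separate these two cases better (it weighs local structure), but as the comment above notes, even SSIM is not a full proxy for how an encode actually looks.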
http://blog.webmproject.org/2011/03/vp8-codec-sdk-bali-released.htm…
Some of these, especially the last, would not improve the objective quality measures such as PSNR or SSIM much at all. They are “Psychovisual optimizations”.
“Google Releases New Version of VP8 Codec”, seeing as though they’re the only ones who can officially do anything with their “open” project…
The WebM project provides “reference” code, here is their license page:
http://www.webmproject.org/license/
Individual Contributor License Agreement (“Agreement”), v1.1
http://code.google.com/legal/individual-cla-v1.0.html
Software Grant and Corporate Contributor License Agreement (“Agreement”), v1.1
http://code.google.com/legal/corporate-cla-v1.0.html
Caveat: In order for outside contributions to the WebM project itself to be accepted into the WebM project codebase, Google must first assure themselves that the new code does not infringe any patents. Quite reasonably, Google wouldn’t want some “helpful” outside contributor attempting any code sabotage, would they?
Other people/projects can of course do what they like within their own codebase, at their own risk, such as this project which includes WebM:
http://www.ffmpeg.org/
That’s kind of like saying Linux isn’t open because Linus Torvalds is the only one who can do anything with it.
https://groups.google.com/a/webmproject.org/group/codec-devel/topics
https://groups.google.com/a/webmproject.org/groups/dir
I think the point being made here is that no one but Google (and ultimately the acquisition it came from) ever had any say in what VP8 or WebM was going to be. Whereas h.264 was a collaborative project among many companies, VP8 was created by a single company behind closed doors, acquired by Google, released as final, and then adopted in an official capacity on the world’s largest video site, also owned by Google. Given the circumstances, anyone who doesn’t insist on conflating openness with freeness would have to admit that h.264 was, and likely remains, more open.
Initial reports also held that VP8 documentation was very poor (code is not documentation), though that may have changed now. This was (is?) a real problem — a well-documented spec allows clean implementations to be created. Well-documented, actually-open specs breed superb projects like x264 and LAME, while poorly-documented, purportedly-open specs breed placeholder projects like Gnash and Mono.
I must reiterate (ad nauseam) that cost is a separate dimension from openness. They are independent variables that usually, but don’t always, correlate. The latter does not depend on the former. (This, of course, does not fit into the populist all-or-nothing, good-vs.-evil model, where “open” has no real definition and is used instead simply to mean “good”.)
Given that the entire WebM product, top to bottom, is, in both patent and copyright, royalty free for all use cases, there’s no practical disadvantage to there being only a single implementation, codebase-wise. But if Google’s documentation is just code, then Google’s code will be the only code (copyright-wise), and there’s no practical advantage to being a supposedly open spec (patent-wise), except for that alluring $0 price tag, because it was (and may still be) impossible to create a copyright-clean reimplementation.
WebM: unencumbered by patents, free, Free, open source, can be implemented by anyone – wherever, whenever, however.
H264: none of the above, but instead of being developed by a single company, it was developed by a few big shots.
And somehow, H264 is more open?
Exactly.
How anyone can be so out-of-touch and not be dropped-on-the-head stupid amazes me.
They were able to form sentences, punctuate things correctly, spell properly, and overall form things into cohesive paragraphs.
Yet they somehow think “Many companies is more open than one. Period.”
*head implodes*
Greater than or equal to, yes. Open is a separate dimension from free and/or Free.
And WebM is not patent-free, though it is, as you say, unencumbered, in the sense that it is licensed Freely for free.
But being open, anyone can indeed implement h.264, wherever and whenever, but the “however” is a problem – a freeness problem. Depending on implementation, royalties may be levied.
But that kind of openness, despite its shortage of Freedom, is beneficial to the Web, because it solves the single vendor problem both in theory and in the real world. No one is stuck going to MPEG-LA simply to make it work.
In the absence of adequate specs, this is not the case with WebM or Flash. I believe you personally have brought up several times during this whole saga that Flash is a supposedly open spec as well. But in practice, for Flash to run on any computer requires huge cooperative investment between Adobe and the platform developer. Apple takes some heat for not playing along to get support on iOS, but the Xoom, for all its enthusiasm about Flash, can’t run it either. Working implementations of Flash outside of mainstream PCs simply don’t exist, with or without Adobe’s involvement. That’s a hell of a single vendor problem, and really calls into question even the freeness of Flash. It takes a tremendous outlay of resources to get it working, and that cost trickles down.
Whether the same problem will arise with WebM remains to be seen, as development in that area is currently obscured by h.264 + native app solutions on smartphones.
Bottom line, I don’t believe giving something away is inherently good, nor is charging for something inherently evil, and when you factor out openness, that’s essentially what we’re left with.
Yeah, you do seem to live in a separate dimension.
Of course you are. x264 is unusable without an MPEG-LA licence.
You are the absolute _master_ of doublethink.
Open is not separate from free. It’s essentially dependent on freedom of usage. Open in terms of technology means that there is no monopoly, no one can pull a plug on it, and anyone can use it freely. Nothing of the above applies to H264. So it’s not at all open.
really? open and free are not dependent on each other. it very much depends which aspect you are describing as open and which as free.
i can make open software under a commercial-only license where everyone can contribute as long as they relicense modifications to me.
i can also make software free of charge, but not open source it.
neither meets the definition of free OSS.
open and free are two completely separate concepts. same as free (gratis) and free (libre) can be different. there are various combinations of those terms, and they are called licenses
btw… h264 is closed i agree
Nope, they are related. You understand open as “being able to see the code”, but that’s not the full scope of what it means in this context. So, in order to avoid confusion, define your terms first.
those two were just examples of how your assumption doesn’t fly. i won’t go defining all possible combinations just to point something out
actually, if you want specifics (use any combination; each can apply to one, any, or all aspects): you can have open non-free software just as well as free and not open
open relates to project, specification, contribution and source
free relates to either price or freedom (freedom for the software or freedom for the user)
As so many people like to confuse these things (either deliberately or simply by not thinking things through), many advocates use the specific words “libre” and “gratis” instead to differentiate what they mean.
Up until that point, no-one would argue that VP8 was anything but closed and proprietary.
No way. WebM was opened by Google after Google’s acquisition of On2, and from that point onwards the entire nature of VP8 was changed. No longer was it closed, proprietary, and developed by a single entity, from that point on it became open and community-developed.
No problem with this, and a specification is indeed being written and refined. However, you cannot expect a full-blown correct specification to spring up out of thin air; it has to be developed, and this takes time.
No argument here … openness has nothing to do with cost. It is not because h264 costs something that it is not open, but rather it is that h264 is not open for anyone to implement that makes it not open.
Whatever gives you the impression that there is only a single implementation? Already there are many. There is the reference codec, made by the WebM project; there is ffmpeg; I believe the x264 project has another implementation of WebM; and there are a number of hardware implementations.
I think you might be getting confused by the concept of the “reference codec”. This is not a single implementation, but rather it is the “gold standard” implementation. What this means is that if your decoder implementation cannot play a WebM video encoded by the reference implementation, then you are doing it wrong. By definition. Likewise if a video encoded by your implementation cannot be played by the reference implementation, then you are doing it wrong. By definition.
Get it?
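The “gold standard” idea above can be sketched as a trivial conformance harness (a toy illustration of mine, not the actual WebM test tooling; real conformance testing uses the project’s published test vectors): decode the same bitstream with both implementations and compare the output frames bit-for-bit.

```python
import hashlib

def frame_digest(frames):
    """Hash a sequence of decoded frames (as bytes) into one digest."""
    h = hashlib.sha256()
    for frame in frames:
        h.update(frame)
    return h.hexdigest()

def conforms(candidate_frames, reference_frames):
    """A decoder conforms if it reproduces the reference output exactly."""
    return frame_digest(candidate_frames) == frame_digest(reference_frames)

# Stand-ins for real decoder output: one byte string per frame.
reference = [b"frame0", b"frame1"]
good = [b"frame0", b"frame1"]   # matches the reference decoder
bad = [b"frame0", b"frameX"]    # diverges on the second frame
print(conforms(good, reference))  # True
print(conforms(bad, reference))   # False
```

The point is that conformance is defined against the reference implementation’s output, not against any particular codebase.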
Sorry, but that is just your misunderstanding. Google’s code is not the only code; it is merely, for the time being, until the specification documentation is solid and proven, the reference implementation against which other implementations must test themselves. Once another implementation tests correctly against the reference implementation, then it can be said to be a WebM implementation.
Here are some links:
http://blog.webmproject.org/2010/10/vp8-documentation-and-test-vect…
http://blog.webmproject.org/2010/08/ffmpeg-vp8-decoder-implementati…
Google’s implementation is called libvpx.
ffmpeg implementation is called ffvp8.
“The ffvp8 implementation decodes even faster than the WebM Project reference implementation (libvpx), and we congratulate the FFmpeg team on their achievement. It illustrates why we open-sourced VP8, and why we believe the pace of innovation in open web video technology will accelerate.”
http://www.osnews.com/story/23598/FFMpeg_s_ffvp8_the_Fastest_VP8_
The in-work specification:
http://www.webmproject.org/code/specs/
Hope this helps.
Google never accepted community input on the binary spec for VP8 files, so while everyone is invited to contribute to the programs that create and decode it, it’s a stretch to say that VP8 itself is community developed.
While I think that was the most practical way to go about it, I don’t suffer any illusions that VP8 is OSS, born and raised. The basic issue is that h.264 is and was always intended to be open, which is why the documentation is mature, while VP8 was intended to be commercial until it changed management, which is why the documentation isn’t. It’s not a terribly important point, but there it sits.
My concerns with source-as-doc were more or less allayed by a quick visit to Wikipedia, which states that Google’s VP8 code is licensed under BSD, so copyrighted code snippets should be a non-issue.
But if, for example, there were no complete documentation on the ext2 file system that did not contain GPLed code samples, I could not say that ext2 is an open spec, because a copyright-clean reimplementation would be impossible.
So yes, if the current documentation is messy only in a presentational sense, that’s understandable, but I had been concerned that it was messy in a legal sense.
On that topic, your examples of outside implementations were quite informative. It does sound as if VP8 should be able to grow very quickly (very much unlike Flash, as I was discussing with Thom above).
It’s not open for anyone to implement… free of cost. Either cost means it’s not open, or it doesn’t. I say it doesn’t. You seem to have said both.
What about if they put prohibitive costs for us to do something? Would that close it to you and me?
Do you mean the container format for WebM? It is essentially matroska … which indeed was community developed.
http://www.matroska.org/news/webm-matroska.html
As for the actual encoded bitstream format, that is necessarily set by the imperative to avoid patents held by other parties. There is no room for community input there, it is what it had to be.
Meh. The specification documents exist, they are just not proven. Warning: PDF
http://www.webmproject.org/media/pdf/vp8-bitstream.pdf
The WebM project code is not really the documentation; the code is the “gold standard” test. In the event of a conflict between the specification document and the bitstream format produced by the code, at this time the code is the final arbiter, and the documentation will be corrected to reflect what the code produces. After a while, this will switch around the other way, and what the specification says will become the final arbiter. I’m not sure when this is expected to happen, but whenever it is, the main point is that it is not really the huge issue that some parties are trying to make it out to be.
No, the project is fully open to community participation. Anything you might be hearing to the contrary is merely astroturfing by vested interests trying to rake up some mud somehow.
No, to be truly “open” means that the author of an implementation must be able to pass the right to change/re-implement the work on to downstream recipients. This characteristic emphatically applies to both the BSD-style open licenses and the copyleft style open licenses. Even if one author purchases a license to implement H264 from MPEG LA, the author of a work CANNOT pass on that right to downstream recipients of his/her work.
Hence there can be no community participation in h264. Ergo, h264 is not open. One cannot correctly use open licenses such as BSD or GPL for one’s implementation of it.
Indeed, community didn’t. Now they do.
Indeed, big, large corporations had a say. Community however didn’t. And they still don’t.
So, your argument is that there needs to be more than 1 entity when creating the initial version of something for it to be open, regardless of how many entities can freely modify and study it after the inception of the initial version?
News at eleven: a new project doesn’t yet have full and complete documentation, people to the barricades.
There is no such thing as a copyright-clean reimplementation, unless the original is released as public domain. Being copyrighted isn’t even a problem, as VP8/WebM is released under a license that waives Google’s rights to it. Ergo, your point is moot.
Oh well, nice trolling attempt mate, with enough practice you might make a true alpha troll when you grow big! Just hang in tight and keep your head high!
Bogus algorithm for computing improvement in the article: total improvement is multiplicative, not additive.

If the improvement from base to A is M% and from A to B is N%, then we need to convert to improvement factors to do the computation. A’s improvement factor is 1 + M/100, and B’s factor is 1 + N/100, so the total improvement factor is (1 + M/100)*(1 + N/100). Converting that back to a percentage gives a total improvement of 100*((1 + M/100)*(1 + N/100) – 1), or M + N + M*N/100.

If I read the individual numbers M, N correctly, that gives a 13.7% improvement, not 12.8%.
A (Aylesbury) = (1 + M/100) * base
B (Bali) = (1 + N/100) * A
So B = (1 + N/100) * (1 + M/100) * base
Bali: “Best” mode average quality improved 6.1% over Aylesbury using the SSIM metric.
Aylesbury: “Best” mode average quality improved 6.3% over launch release using the SSIM metric.
Total improvement from launch release to Bali release = 1.061 * 1.063 = 1.127843. I rounded this out to 12.8%.
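For anyone who wants to check the compounding themselves, here is a tiny sketch (my own helper, not from the article). Note that the 6.3%/6.1% figures are the SSIM chain, while the 7%/6.3% PSNR chain compounds to roughly the 13.7% figure quoted above:

```python
def compound_improvement(*percent_gains):
    """Combine successive percentage improvements multiplicatively.

    Each gain is converted to a factor (1 + p/100); the factors are
    multiplied, and the product is converted back to a percentage.
    """
    factor = 1.0
    for p in percent_gains:
        factor *= 1 + p / 100.0
    return (factor - 1) * 100.0

# SSIM gains: 6.3% (launch -> Aylesbury), then 6.1% (Aylesbury -> Bali)
print(round(compound_improvement(6.3, 6.1), 1))  # 12.8
# PSNR gains: 7% then 6.3%
print(round(compound_improvement(7.0, 6.3), 1))  # 13.7
```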
H.264 / MPEG-4 AVC
Pros:
-Has open source implementations of encoders and decoders.
-The audio, video, subtitling, and container specs are open and well documented
Cons:
-Has parts patented by a vast number of companies, so a single holding company for the patent load was created.
-Container and audio codecs were both developed by Apple: the spec audio codec is AAC, and the spec container is MP4, which is essentially a QuickTime MOV container.
-At some point in the near future, royalty payments will have to be made to the holding company by someone.
VP8/WebM
Pros:
-Has multiple free and open source implementations.
-Uses VP8 video, Vorbis audio, and the Matroska container, which are all free and open source. MKV and Vorbis have well-defined open specs
Cons:
-The final standard of VP8 is defined by code, not a well-defined spec. Essentially, as WebM evolves, movies will be compatible with WebM-git-DATETIME, like HTML is moving to be.
-While open source, Google “holds the keys to the cathedral,” so there is not that much community involvement in the central defining project.
-Probably uses some H.264 patents (FFmpeg reuses a large number of its H.264 decoding functions in its implementation of the WebM decoder.)
-TO MY KNOWLEDGE, NO DEFINED SUPPORT FOR SUBTITLES ( http://www.webmproject.org/code/specs/container/#demuxer_and_muxer_… )
…
What?
MKV has well-defined support for subtitles.
There are numerous other problems with what you said, but I’ll let someone more well-versed in the terminology debunk them.
MKV can do everything. It’s by design infinitely extensible; you just have to define the streams and other files in an XMLish file inside and pray your player knows what to do with it.
MPEG-4 AVC has a defined subtitle format. http://en.wikipedia.org/wiki/MPEG-4_Part_17
WebM on the other hand does not currently. And the hardware currently being created for it will probably ignore subtitles when they are added to the spec.
There is no need for WebM to define any specific subtitle format. What would the point be anyway, subtitle stream has nothing to do with video codec whatsoever.
None of the current hardware renders subtitles. Subtitle rendering is done all in software, including H.264. Again, subtitles have nothing to do with the video stream.
This is not a “pro”. Open source implementations of encoders and decoders simply ignore the fact that users in some countries require a license. It is left up to said users to get that license. People (especially in the US) could be caught unawares by this, and have to pay fines.
“Well documented” is true, “open” is not, because to be “open” requires that implementing the specification is royalty-free. This is not the case with H.264.
A large number of cons have been missed out here. The most obvious one is that h.264 development costs have been recouped ages ago, and so now all of the money that royalties continue to bring in is pure cream, and a dead-weight loss on the economy. The second most serious con is the chilling effect on innovation that a heavily-patented codec has.
VP8 has a well-defined spec … it is just not proven for now because it is new. The 105-page bitstream spec is here:
http://www.webmproject.org/media/pdf/vp8-bitstream.pdf
That is well defined; the only problem is that it may have inaccuracies. For now, if any inaccuracy is found in the spec compared to what the WebM Project reference codec produces, then the spec will be changed to correct the discrepancy rather than the reference codec. This is needed because the spec is new and may still contain inaccuracies, and there is a need to preserve the validity of existing encoded files rather than freeze the spec right now.
Nope. The bitstream format, as instanced within the existing corpus of WebM files, and defined as the bitstream format produced by the WebM Project reference code, is frozen. That is the WebM format.
FTA:
http://blog.webmproject.org/2011/03/vp8-codec-sdk-bali-released.htm…
Today we’re making available “Bali,” our second named release of the VP8 Codec SDK (libvpx). Note that the VP8 format definition has not changed, only the SDK.
The only thing that is not frozen is the specification document for that format:
http://www.webmproject.org/media/pdf/vp8-bitstream.pdf
… because it is 105 pages of complex and relatively new text, and it may still contain divergences from the frozen bitstream format it is meant to describe. Once those have been shaken out, it will become the formal specification, and this document will then be frozen also.
Nope. There is a pretty decent community and building momentum behind WebM, and a number of independent implementations of it. Where do you get this guff from, exactly?
Probably not. There is a great deal of “common ground” video compression technology that has prior art (and is therefore not patentable by anyone), or for which applicable patents have expired. Prior to buying On2, as part of due diligence Google undertook a lengthy and, as they describe it, thorough patent investigation of VP8, and they gave it a clean bill of health.
Full quote from link: “WHATWG / W3C RFC will release guidance on subtitles and other overlays in HTML5 <video> in the near future. WebM intends to follow that guidance”.
So, when the subtitles and other overlays requirements of HTML5 are settled, WebM will support them. There is plenty of capacity in the underlying format structure to do exactly that. This is a “pro”, not a “con”.
PS: despite repeated attempts to characterise it otherwise, WebM is both free as in gratis (no charge) and free as in libre (no royalties).
Source code is speech, which is supposedly free in “civilized” nations.
Binaries on the other hand…
Only in Europe, which is also the only place your definition of libre makes sense.
This does not rebut the point made.
This does not rebut the point made.
Essentially, as WebM evolves, movies will compatible with WebM-git-DATETIME, like HTML is moving to be.
Let me first say: WTF? The community is updating the quality and performance of the VP8 encoder. That means it will still be backward compatible. Have you seen the recent activity on Theora? They are just upgrading the code while keeping compatibility. It is the same here with VP8.
The fact that there are about to be a bunch of hardware VP8 decoders means that they won’t change the binary spec. They might be able to get away with it if it were only being decoded in software, but you can’t simply throw away hardware every 6 months.
Indeed.
There have been three releases of the WebM Project reference codec (called libvpx), those being the launch release, the Aylesbury release and just recently the Bali release, and they all use the exact same bitstream format. In addition, the ffmpeg project has released their implementation (called ffvp8), which is a different codebase. That too uses the exact same bitstream format. There are two hardware encoder/decoders shipping now (within the nVidia Tegra 2 platform and the ARM Neon extensions) that use the exact same bitstream format. WebM in Android 2.3.3 and Android 3.0 uses the exact same bitstream format. There are several hardware implementations becoming available soon, such as the Rockchip Rk29xx ARM SoCs and TI OMAP 4 and OMAP 5, which use the exact same bitstream format.
The bitstream format is frozen.
I simply cannot see what is apparently so difficult for some people to understand about such a simple fact.
I suspect they do understand it just fine, and that their actual desire is simply to spread FUD.
For now…
http://review.webmproject.org/#q,status:merged+project:libvpx+branc…
Though I guess they could just call it VP9 and WebM 2.0, once they decide they have to break with their current Bitstream definition.
What the hell are you going on about? Why would they break the bitstream definition when it is working perfectly fine and all the work to be done is on the encoder itself, not the bitstream?
http://blog.webmproject.org/2010/06/future-of-vp8-bitstream.html
Having a fixed bitstream constrains the optimizations that can be done.
Quote: “At some point in the future”, as it clearly says there. And you know what? It’s pretty normal for new codecs to emerge after a while, like you know, H.263 went to H.264…. Oh, right, you wanted to troll, sorry.
This is what the poster was talking about:
http://blog.webmproject.org/2010/06/future-of-vp8-bitstream.html
However, what the poster failed to recognize is that EXACTLY the same consideration applies to H264.
http://en.wikipedia.org/wiki/HEVC
High Efficiency Video Coding (HEVC) is a proposed video compression standard, a successor to H.264/MPEG-4 AVC (Advanced Video Coding), currently under joint development by the ISO/IEC Moving Picture Experts Group (MPEG) and ITU-T Video Coding Experts Group (VCEG).
Yeah, I noticed. He must be confused somehow, as there is absolutely no plan to break VP8/WebM; it’ll instead be a new codec, and it won’t be usable for at least a year from now. AND VP8/WebM development will still continue alongside the new codec.
But considering his original post is full of misconceptions and plain cr*p, it really ain’t a surprise, ya know.