A few months ago, Google open sourced the VP8 video codec as part of the WebM video project, to create a truly Free/free unencumbered video format for the web as an answer to the non-Free/free patent-encumbered H264 format. Today, Google launched a new image format for the web, WebP, which aims to significantly reduce the file size of photos and images on the web.
Google explains that of all the bytes transmitted through the intertubes today, 65% are made up of images and photos. In other words, if there's one area where you can save on bandwidth – and thus speed up the web – it's image optimisation. To that end, Google has unveiled WebP, an image format which makes use of the intra-frame coding techniques from the VP8 video codec. The container format is lightweight, and based on RIFF.
A more detailed explanation can be found on WebP’s website. “WebP uses predictive coding to encode an image, the same methodology used by the VP8 video codec to compress keyframes in videos,” the page reads, “Predictive coding uses the values in neighboring blocks of pixels to predict the values in a block, and then encodes only the difference (residual) between the actual values and the prediction. The residuals typically contain many zero values, which can be compressed much more effectively. The residuals are then transformed, quantized and entropy-coded as usual. WebP also uses variable block sizes.”
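As a toy illustration of the idea – not the actual VP8 scheme, which predicts whole blocks of pixels using several modes – here is left-neighbour prediction in a few lines of Python:

import numpy as np

# Predict each pixel from its left neighbour and keep only the residual.
# On smooth image regions most residuals are zero, which entropy coders love.
row = np.array([100, 100, 101, 101, 101, 102, 102, 102], dtype=np.int16)
prediction = np.concatenate(([0], row[:-1]))  # first pixel has no neighbour
residual = row - prediction                   # [100, 0, 1, 0, 0, 1, 0, 0]

The decoder simply reverses the process: it adds the transmitted residuals back onto its own predictions, reconstructing the row exactly.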
So, what is the result of this? “In order to gauge the effectiveness of our efforts, we randomly picked about 1,000,000 images from the web (mostly JPEGs and some PNGs and GIFs) and re-encoded them to WebP without perceptibly compromising visual quality,” writes Richard Rabbat, product manager at Google, “This resulted in an average 39% reduction in file size. We expect that developers will achieve in practice even better file size reduction with WebP when starting from an uncompressed image.”
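Some quick back-of-the-envelope arithmetic (mine, not Google's): if roughly 65% of web bytes are image data and WebP trims those images by about 39%, the theoretical ceiling is a 0.65 × 0.39 ≈ 25% cut in total traffic – even partial adoption would be a substantial saving.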
They’ve set up a comparison site, with the original JPEGs on the left, and the re-encoded WebP images on the right. In this case, the WebP images are presented as lossless PNGs so we can actually see them. To my untrained eye, there’s absolutely no difference, and I have a pretty decent monitor. The one difference I do note is the sometimes ridiculous reduction in file size – which makes me happy.
Likely to make it easier for Google to parse the format, identify you and rape your privacy in one solid format.
They can do the same now. The image formats used on the internet have open specs, so they can rape your privacy NOW!!!
They don’t need a new image format to know what you’re doing.
All that information is easily parsed already from document tags.
Even that aside, advances in image and character recognition mean that the image format is irrelevant when scanning for usable information.
Anyone else click the “comparison” link and watch as the new format images loaded 2 times slower than the JPEG ones? hahaha. I did this in Opera and got a huge kick out of that.
You do realize that the sample image is a lossless PNG shot of the WebP image, so you can check it out quality-wise in your current browser.
Not even Chrome supports WebP rendering yet.
Why do people say “You do realize” when they know the person didn’t realize? You don’t need to be a bitch about it.
Well, no harm was meant here, excuse me if it seemed so.
(I am not a native English speaker, and as such, I do not have a good sense of wording and phrases.)
In case you find it offensive, please just take it as if it said “Note that …”
Oh, it’s perfectly correct and normal English. It’s just that when you think about what you’re really saying it’s a bit odd. It’s like when someone says “here goes nothing” before doing something. Quite obviously, they aren’t about to do “nothing,” but that’s what they say. There are plenty of common phrases out there that are pretty silly when you actually think about what they mean literally, and yet they’re perfectly common and correct English.
Your mirth is misplaced.
The pictures you downloaded are lossless PNG format, not WebP. You don’t have any software which can display WebP on your machine, so the only way that Google can show you the quality of a WebP image is to take an uncompressed image, encode it in WebP, then re-encode it in a lossless format that you can display.
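If you want to reproduce that pipeline yourself, the libwebp command-line tools do it in two steps (cwebp/dwebp are the current tools; the thread elsewhere mentions the older webpconv):

import subprocess

# Encode a source image to WebP, then decode it back out to a lossless PNG
# any browser can display – judge quality on the PNG, file size on the .webp.
subprocess.run(["cwebp", "-q", "80", "source.png", "-o", "photo.webp"], check=True)
subprocess.run(["dwebp", "photo.webp", "-o", "photo_view.png"], check=True)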
Well, that’s what I get for not reading it or paying any attention to what is directly in front of me.
2 years ago I got my KDE-fanboyism whipped out of me after being too quick to draw a conclusion. Not pleasant. The OSAlert crowd can be hard but is usually fair.
It’s weird being the target of the mob for a change. I got used to being on the side with the torches and pitchforks.
You get my vote for the politest and most civilized smackdown in the history of the internet.
Most are pretty similar, but there are a few exceptions that are less than subtle.
The photo of the NFL player (image 2) looks more saturated with the background changing from blue to aqua and the skin tone redder.
Image 7 goes from magenta to blue especially around the shoreline and piers.
I saw that in the thumbnails also – but when I opened both pictures and made sure my browser wasn’t scaling them to the window size – then flipped back and forth between them – it wasn’t as obvious. Therefore, I’m suspecting the scaling mechanisms between JPG and PNG are causing the differences in the thumbnails.
I did notice the words NFL on the mic have a slightly different set of artifacts, and the detail on his head seems sharper in the JPG. It’s kinda hard to make an objective comparison anyway when the source picture was already lossy.
Huh? Image 6? Again, it looks a lot different when my browser scales the images than they do unscaled at full resolution.
Edited 2010-10-01 00:17 UTC
I tend to think there’s an improvement in some of these images, but your point makes me wonder.
There would be zero improvement.
The JPG was the source.
What they need to do is shoot RAW and export to TIFF or PNG. From the TIFF/PNG convert to both JPG and this new format.
That would be a comparison.
What they’re showing here is that you can compress already compressed files at a loss (although seemingly small).
Realistically though, compressing already processed JPEGs is probably the most likely use case for the web. I have 20 thousand or so JPEGs in my htdocs folder right now. If I can run a script over them and re-compress them in a new format and get smaller files without too much loss of quality, then I’m totally interested. If on the other hand I have to go back to the unprocessed original and start from scratch to get any benefits, then I kind of lose interest.
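Something like this hypothetical batch job is all it would take (assuming webpconv keeps its default output naming and supports the -quality flag quoted elsewhere in this thread):

import subprocess
from pathlib import Path

# Walk htdocs and re-encode every JPEG to WebP; tune -quality to taste.
for jpg in Path("htdocs").rglob("*.jpg"):
    subprocess.run(["webpconv", "-quality", "75", str(jpg)], check=True)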
Well they do appear in certain spots to be less noisy.
You don’t get the point: The mission of image compression is to reduce the size of the image with a minimal loss of information.
You may talk about different kinds of information loss, and you may prefer some kinds of loss over other kinds of loss. But you cannot talk about improvement. Any change of the image caused by the compression technique – whether you find it improving or not – is a change, and thus it is bad.
If you want to “improve” an image, you use other techniques (sharpening, blurring, scratch detection, etc.). And then you get a new “original” that you may try to compress with different compression techniques.
Again, some images in some spots look better to me – less noisy.
I don’t have to get the point of compression or image manipulation to notice improvements.
By “Wasn’t as obvious” you mean “wasn’t there”? I think you guys are succumbing to the audiophile effect (“yeah, it definitely has more clarity and depth”).
If I flick between them there is zero visible difference. I checked in matlab and the difference is really really really small (actually over half the pixels are identical).
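If you want to repeat the check without matlab, a few lines of Python with numpy and Pillow will do it (filenames are whatever you saved the gallery pictures as):

import numpy as np
from PIL import Image

a = np.asarray(Image.open("original.png").convert("RGB"), dtype=np.int16)
b = np.asarray(Image.open("webp_version.png").convert("RGB"), dtype=np.int16)
diff = np.abs(a - b)
print("max channel difference:", diff.max())
print("fraction of identical pixels:", (diff.max(axis=2) == 0).mean())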
Well, yeah – it wasn’t visible enough to make a difference to me. Furthermore, if I saw them side by side, they would look identical. But when you’re swapping back and forth between the two images in the exact same window you start to see very minor pixel-level differences. Nothing I would consider a “deal breaker” though. Mainly I was pointing out that the scaled-down thumbnail versions did indeed display more differences than I saw when I viewed the full size images – and indicated that I believe the browser scaling for different image formats may be different.
The worst part is that the JPG was used as the source for the WebP – that was a poor choice for producing comparison images on.
You’re right. You can see the averaging trying to blend things: instead of the JPEG’s random scatter pattern, you now see groupings of non-linear shapes that cut down on the file size.
The same with his forehead.
The format will forever be associated with Google, and any derivatives will be able to refer to the WebP format as its origin, so why do that?
Just MIT the thing…
How come Google isn’t comparing it against JP2?
Because this format is aimed at the web and thus ‘competes’ against standard jpg. Do you see many jpeg2000 images on the web? No, because they are computationally very expensive for what is roughly 20% better compression. Personally I think it’s great for archiving high resolution images, not so much for web surfing.
Quoting one of the commentators on the site:
So WebP does not even fare well against standard jpeg…
Possibly quite correct. The DCT is a pretty damn good image transform. The problem with it is computational cost, and the IJG libraries frankly suck.
The only way for something like WebP to get any traction is if it’s even simpler and easier to implement, and computationally more efficient. Right now the Google pages about WebP seem to be out of commission, so I can’t look into this part myself.
There is always DWT
jpeg2k is a pretty horrific, over-engineered, difficult spec to code, and it’s computationally even more expensive than jpeg. Quality-wise jp2 isn’t clearly better than jpeg, just different, with tradeoffs in what artifacts you get.
Well, unless you have decided that this particular random guy on the interwebs is ABSOLUTELY CORRECT in his assessment despite only offering his own subjective perception, that comment really decided nothing. I’m looking forward to a real test by some experts, preferably using non-lossy compressed media to begin with. But even if webp turns out to be a lot more efficient than jpg in terms of size/quality, I think it’s going to be really hard to make a dent in jpg’s dominance on the web. Heck, even gif files are still in heavy use despite png being a superior format, and even in its heyday gif was nowhere near jpg in terms of widespread usage (I am old enough to remember having fuzzy dithered gif porn images in my youth, kids today don’t know what we oldtimers had to suffer through).
You are of course 100% correct in that observation, but please remember that it is Google themselves who started this nonsense by comparing some random samples and making a gallery biased to make it look like WebP is “much” better than jpeg.
If in fact it is not much better but just a little better, would you use it?
1/There isn’t a single animated-PNG standard supported across all browsers.
2/Most PNG encoders bundled in image editors aim for quality and don’t support artificially enforcing a limited color palette, so in the end you can make a GIF much smaller than a PNG when it’s needed (when an encoder does expose quantisation, it looks like the sketch below).
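For example, with Pillow (one of the encoders that does expose it), forcing a small palette before saving as PNG looks like this:

from PIL import Image

# Quantize a truecolor image down to a 64-colour palette, then save as PNG;
# this is the "artificially enforced palette" most bundled editors won't do.
img = Image.open("icon_truecolor.png")
img.quantize(colors=64).save("icon_64c.png", optimize=True)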
JPEG is a pretty awesome format, all told. It’s just that it’s not used appropriately in 95% of all cases, and many of the programs that use it don’t expose all the features of the format.
The problem with that post is that a JPEG set to highest compression does look very crappy. I don’t care what this internet anon stated, it’s very noticeable.
So yes, you can compress JPEG to ~80%, but there’s a massive trade off in image quality. Much like MP3 compression really and such is life with any lossy compression formula.
They did; see here:
http://code.google.com/speed/webp/docs/c_study.html
Jpeg 2000? You mean the zombie?
Seriously, someone here is using jpeg2000?
Seriously, did you have some point with your comment? Why shouldn’t someone use JPEG-2000? It is computationally more expensive, yes, but it also produces slightly less artifacting and smaller files than regular JPEG. If you are going to use a lossy format for archiving purposes you may as well go for JPEG-2000.
With disk space so cheap png is the real winner here for typical digital camera image storage. Kind of like flac vs mp3: you want to archive lossless and re-encode as needed for whatever new portable device you have.
I do, but I use it in lossless compression mode. In planetary science, it’s a particularly useful format as it allows you to view segments of very large images at different zoom levels simply by evaluating different chunks of the image file. This is a huge advantage of using the DWT, especially when the image sizes grow to be overly large.
eg: HiRISE images ( http://hirise.lpl.arizona.edu/ ) come in at about 2.5 Gigapixels; jpeg2000 is perfect for viewing pieces of the image at different zoom levels.
That said, implementing jpeg2000 for web photos would be silly, as the web is currently designed. Pretty much all images on the web are viewed at 100% zoom; should that change, jpeg2000 would be useful. As it is, other algorithms are faster than the DWT used.
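The multi-resolution trick falls straight out of the wavelet transform. Here is a minimal sketch of one level of an (unnormalised) 2-D Haar DWT in Python – a real JPEG-2000 codestream stores the bands so a decoder can pull a half-resolution view from the LL band alone:

import numpy as np

def haar2d_level(img):
    # Split into 2x2 neighbourhoods and form an average plus three detail bands.
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4.0   # approximation: a half-resolution image
    lh = (a - b + c - d) / 4.0   # horizontal detail
    hl = (a + b - c - d) / 4.0   # vertical detail
    hh = (a - b - c + d) / 4.0   # diagonal detail
    return ll, lh, hl, hh

img = np.random.rand(512, 512)   # stand-in for a huge HiRISE tile
ll, _, _, _ = haar2d_level(img)  # 256x256 preview without touching the details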
I’m afraid IE users will see an “X” instead of image encoded in this format for decades to come…
Javascript?
You’ll have to store both formats of the picture but you’ll save on bandwidth.
Then you can also put a watermark on the IE version that says “Use a modern browser”. Come on, the format is already a day old…. why isn’t it supported already?
That it’s in a RIFF container, when Chrome won’t even play raw unencoded PCM in a .wav file with its AUDIO tag…
Which is just a RIFF container with raw data in it… and here I thought they were into that whole “RIFF comes from the evil empire” mentality. Kinda surprised they didn’t just gut Matroska for it like they did with WebM.
Lemme guess, right hand knows not the left?
Looking at Wikipedia, RIFF was created after AIFF (Apple). The one difference being that RIFF is little-endian and AIFF is big-endian.
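The container really is that simple; a minimal Python walker over the top-level chunks (note the '<' little-endian format characters – AIFF’s otherwise identical layout would use '>'):

import struct

def riff_chunks(path):
    # Yield (fourcc, size) for each top-level chunk in a RIFF file.
    with open(path, "rb") as f:
        magic, size, form = struct.unpack("<4sI4s", f.read(12))
        assert magic == b"RIFF"
        while header := f.read(8):
            fourcc, length = struct.unpack("<4sI", header)
            yield fourcc, length
            f.seek(length + (length & 1), 1)  # chunk bodies are word-aligned

# A .webp file is form "WEBP" carrying a "VP8 " chunk with the key frame data.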
It is easy to achieve 20%-40% JPEG size reduction simply by optimising quality and size for a particular page/content.
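For instance, a hypothetical re-save with Pillow (the right quality value is a per-image judgment call):

from PIL import Image

# Re-save an over-large JPEG with a tuned quality setting; optimize=True
# rebuilds the Huffman tables, and progressive often shaves off a bit more.
img = Image.open("hero.jpg")
img.save("hero_opt.jpg", quality=78, optimize=True, progressive=True)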
Ah… Google “doing what’s best for all”. Except that we do not need yet another format…
We already have JPEG2000 and JPEG XR: http://en.wikipedia.org/wiki/JPEG_XR
Technically I think JPEG XR > JPEG2000 > JPEG quality-wise, and WebP is probably worse than any of these except maybe JPEG.
I don’t care if WebP “is open and free”. For that we already have JPEG, which is good enough. Either we all change to something vastly superior to JPEG (i.e. JPEG XR) or we don’t change anything at all.
Just my thoughts…
Exactly. Also, JPEG XR has other features such as lossy+lossless compression and HDR imaging while being OSS-friendly. WebP’s only advantage is that google owns the IP — whether they wish to use it or not — while JPEG-XR’s IP is owned by Microsoft.
Microsoft released JPEG-XR under a licence that intentionally prohibits use in copyleft (GPL) licenced works.
That’s for the library they released. Not the specification itself. It however permits BSD licenced software, also I believe it can work with LGPL (I’m not sure of this though).
Not sure why there is so much moaning about this.
Take a look at the final pic: fewer artefacts (look around the nose on the boat) and it is 66% smaller! Amazing!
Moaning about this is like moaning that h.264 is better than DIVX.
Might want to read this: http://x264dev.multimedia.cx/?p=541
You know, I’d tend to trust devs working on H.264 technology talking about their competitors just as much as Xiph devs talking about H.264.
Just sayin’…
But trusting Google is no better, who’d definitely flaunt the format with exaggerations
Indeed. Testing should be done by a third-party who has *no* business interest in orienting the test results.
Competitor? The very same guy also wrote a VP8 decoder for the ffmpeg project from scratch.
He’s an active contributor to both worlds. I don’t see him competing with his own software.
If you don’t trust him, feel free to repeat his tests and compare the results.
He is a competitor. He is pushing x264 as the web standard for video, and he and other x264 devs are trying to get permission to dual-licence x264 so that they can charge money for proprietary projects that want to incorporate x264. The fact that he and some other developers (he wasn’t alone) wrote a vp8 decoder for the ffmpeg project does not change this.
No, good old Dark Shikari has a little too much personal interest in x264 vs webm/webp for my taste. Also, why the heck did he use a motion video frame as source? Smells like a tailored test methinks.
I’ll look forward to totally independent comparisons using a wide range of raw images (so that it won’t favour either encoder) which can then be compared quality-wise between jpg and webp when encoded to (as near as possible) the same file size.
Like I said earlier though, even if webp would prove to be a better format in terms of size/quality than jpeg I seriously doubt it will gain any traction. Jpeg dominates the web by being supported everywhere and being ‘good enough’ in terms of quality/size. I believe a new format would have to be so much better (quality/size, no submarine patent threat, permissive licencing) that it’s simply a no-brainer to do the switch from jpeg even with the pain of transition, and I really don’t see that webp is or ever will be that. Time will tell.
Dark Shikari doesn’t care about “web video” per se, he’s just interested in high quality.
And btw, x264 is *already* dual licensed and being sold. Lots of companies want it. And you know why? Because it’s simply that good.
As another commenter said, if you don’t trust DS results, do your own test. These conspiracy theories regarding DS here at osnews are quite silly to me.
(if that last sentence gets this comment downvoted, so be it. won’t change the fact that by constantly accusing DS of bias, you’re doing nothing but actually showing *your* bias)
Oh, and regarding using a motion picture… here’s what Dark himself said (yes, it’s advisable to read all the comments to his blog post):
“That video is taken on 65mm film by a camera that costs more than most houses – it is higher quality than almost any image taken by any “photo camera”. I highly doubt your average Creative Commons images even have a quarter the detail that an Arriflex 765 can take.”
If h264 becomes the ‘web standard’, he and other x264 devs will stand to make more money from licencing x264 than they would otherwise.
Again, he has money to make on the success/dominance of h264, I am always VERY sceptical when people with a monetary interest claim their technology is much better than the competition.
Well, A) those are his words, B) it is ONE image, which hardly makes for a serious study by any measurement. Also, I wonder what codec the video was encoded in; if it was h264, then it would seem logical that in re-encoding a still frame from it x264 would be favored, since it uses the same compression technique as the source (note, I do not know what codec was used for the original video from which the frame was captured, I am just assuming it was h264, I may be dead wrong here). Also, why did he use jpgcrush on the jpg image? It seems to me that he wanted to be able to use a higher quality setting for the jpg while keeping it at the same file size as the webp file. I don’t know exactly what jpgcrush does (other than optimizing the file size, obviously) but I guess the added compression time from using it is why it’s not normally part of standard jpeg compression, and chances are the same techniques could be applied to a webp image to make it smaller while keeping the same quality. Again, this test smells tailored to me. Again, I will await a serious study using a wide range of non-lossy images.
He’s already swamped with the orders for commercial licensing of x264, h264 being “web video standard” or not.
For the rest… as I said, make your own test. That’s the only way you’ll be sure. When WebM was released and DS made his blog post, people here were also accusing him of all sorts of things. But instead of coming up with conspiracy theories, I took my favorite clip (chapter 3 of Serenity) and encoded it with various encoders (x264, libvpx, xvid, theora-ptalarbvorm). My conclusion from that test was that libvpx totally and completely blows. And since it’s the same libvpx making these images, I’ve no doubt in DS’s results.
But don’t take my word for it, or even DS’s. Just *do your own test*.
Also, why go “wondering” about the source video? DS provided all the necessary links in his post. I suppose accusations are easier when you skip the provided info and can depend on “wondering”.
Well, when I pressed the link before (http://media.xiph.org/video/derf/) it was unavailable (likely due to traffic); from what I gather the video was uncompressed y4m, so that’s no issue. But again, one single frame makes for a poor testbed. And while I could do my own tests and add another flawed ‘here’s my impression of how these image formats compare’, I’d rather wait for some independent expert tests by people who can choose a range of test images using experience/knowledge. And yes, I think x264 is great; I’ve used it in conjunction with VirtualDub and Avisynth to back up a lot of my DVDs to mp4 files for convenience, and the quality/size is likely the best there is at the moment. But that has nothing to do with whether Jason Garrett-Glaser is objective when reviewing competing codecs.
Also, webm not being as good as h264 doesn’t make it ‘completely blow’; that’s just the fanboy in you talking. It’s royalty free, which is very important when it comes to a ‘web standard’. Whether the quality is ‘good enough’ (you know, like jpeg) for web video content is something time will tell.
No, the fact that it completely blows makes it completely blow. A blurry mess, devoid of any details. That you need to resort to name-calling only confirms me saying that you are the one with a bias.
I know that vp8 is royalty free and I know the web needs such a codec. The problem is that libvpx is *not* good enough. If Google was smart, they’d hire Dark Shikari to work on a high quality vp8 encoder. It’s not the format that blows, it’s libvpx, currently the only publicly available encoder for that format.
How was that ‘fact’ established? By you encoding your favourite Firefly episode and crying out, “dude, this completely blows”? Really…
Again, says you.
I doubt that would be smart; having implemented patented h264 techniques in x264, he would likely be ‘tainted’ by that code (should anything he added to vp8 be similar to patented h264 techniques, it’s not as if he could ever feign ignorance). And it’s very doubtful that he has any clue himself as to what is or isn’t patented in the h264 specs, since by his own admission he doesn’t give a crap about software patents. I’m sure Google would love to be able to say that as well, but with all their money and competitors they’re a prime target for patent litigation.
The fact can be established by everyone doing a test. Have you done one? Two pass encodes targeting the same bitrate, one with x264, one with libvpx. Then compare the videos. As another commenter said, go ahead, disprove me.
I don’t have videos from my test anymore, so I’m running a new one. Once it’s done, I’ll edit this post, adding commandlines used and a few screenshots.
What you write about ‘tainting’ is nonsense. If there’s techniques in vp8 that are covered by h264 patents, they’re already there, whether DS touches the code or not.
Alright, here we go. The source is Serenity, chapter 3, PAL DVD. The video has 7623 frames and is 5:04.88 long. Avisynth script is simple, it just crops away the black borders:
MPEG2Source("VTS_01_1.d2v")
crop(0,76,720,432)
Encoding is done on a 32-bit Linux installation on a Core i3-530 processor, using the performance CPU governor (which means the proc is running at a constant 2.93 GHz). The encoder versions, commandlines and encoding times (I omitted user and sys time, they’re not important IMO):
x264 core:106 r1732 b20059a
WebM Project VP8 Encoder v0.9.2-75-gf143a81
theora-ptalarbvorm SVN revision 17478
In case you’re wondering about the vp8 settings, they’re from here: http://www.webmproject.org/tools/encoder-parameters/ – “2-Pass Faster VBR Encoding”
Also note how x264 is actually quite a bit faster than libvpx. It didn’t used to be the case; v0.9.1 took about 10 minutes at the same settings, so the encoder got slower.
Now of course, the meat of this test – the screenshots:
http://www.imagebam.com/gallery/ylkht70jtcgkkzvu7lccsidsfo9yx1ww/
For each of the five screenshots, the order is: original, x264, libvpx, theora-ptalarbvorm
In the first shot libvpx does very well actually; only theora really stands out. In the third one though, libvpx completely fails; even theora manages a bit better. The fifth is almost transparent, except with theora. So there are moments libvpx does well. Unfortunately there are other moments where it totally fails.
By just skimming over this I wonder why you are not using the BEST quality on both encoders, instead of picking the ‘slower’ preset for x264 and “2-Pass Faster VBR Encoding” for libvpx. At least go for ‘veryslow’ (‘placebo’ is pretty much placebo, from my own tests) for x264 and “2-Pass Best Quality VBR Encoding” for libvpx. Trying to mix and match mid-level presets between encoders is pretty much doomed to fail; the only fair comparison is when using them both at their BEST settings.
I knew you’d find something wrong with my approach. You know why I used those settings? Cos they’re realistic. When I encoded the whole Serenity movie, that’s what I used – preset slower. And the “2-Pass Faster VBR Encoding” for libvpx is actually the second slowest available, at least according to that site. Using --best would be akin to using preset placebo with x264.
And the thing is, it wouldn’t change the results. Not by much. Both encoders would just be a lot slower, while slightly better. But their massive difference would remain. I can indulge you though, I can make another libvpx encode using --best; in fact, I’ve just started such an encode now.
But it doesn’t bode well for an encoder to be that much slower, while producing so much worse quality in several parts of a video.
How the heck did this turn into another x264 vs VP8 argument? I thought this was supposed to be about JPEG and WebP.
WebP *is* VP8. It’s the same libvpx creating the video I posted and WebP images. But ok, you want an images comparison? Here you go:
To create the jpeg, I used GIMP and saved the original with a quality of 30. Why that low? So you can easily see the difference to the original. For webp I used ‘webpconv -quality 51’, so the filesize was approximately the same – 38415 bytes for jpeg, 38826 bytes for webp.
Then I converted the webp image to png, for easy viewing in the browser.
Original: http://www.imagebam.com/image/c5ce0e100464038
Jpeg: http://www.imagebam.com/image/4b0505100464040
WebP converted to png: http://www.imagebam.com/image/4ae1f4100464043
WebP, before converting to png: http://www.sendspace.com/file/2nnbc9
The background is annoyingly blocky on the jpeg, at quality 30, that’s no surprise. But look at the cat. Looks very ok in the jpeg, looks like quite a blur in webp.
In my opinion, the cat does look blocky too in JPEG… But well, I suppose it’s a matter of taste.
Well, obviously if you are going to compare the best quality between two encoders you will use the highest quality setting for both encoders. Anything else is pointless since there’s no way of knowing how one mid-level setting corresponds to a mid-level setting on another encoder, but best is best.
Sorry, you don’t get to define what is realistic settings. Depending on available processing power and or target quality practically any setting can be realistic. This is why there’s a ton of presets available for most encoders, otherwise all we would need is one called realistic.
Bullshit conclusion. Like the name implies, x264’s ‘placebo’ preset is aptly named because it makes no perceptible visual difference, it only takes more time to encode. Libvpx’s BEST setting is named ‘BEST’, not placebo. The fact that libvpx does not have a ‘placebo’ preset (like tons of other encoders don’t either) only means that not every encoder sees fit to add such a setting (and I can’t blame them).
I’d put no stock in tests by someone who claims to know the results beforehand. Objective? I don’t think so.
Yes, I claim to know the results beforehand. But I’m not pulling stuff out of thin air, I have the two encodes I already did, so I’m making an educated estimation based on those.
libvpx loses to x264 heavily when both use their second slowest setting. You think this will change if I use the slowest setting for both? Of course not.
But well, I already did a libvpx encode with --best. It took 32 minutes. I’ll now do an x264 encode with preset veryslow, then come back here with the results. And when they turn out exactly as I said they would, based on my educated estimation, what will you say then? What will you find to attack me with then?
Ok, here we go:
libvpx
x264
Well, interesting: x264 at veryslow was faster than libvpx at --good.
And now the screenshots, only two this time, but they should be enough to validate my claim:
x264_2_veryslow: http://www.imagebam.com/image/e3ab5d100545286
libvpx2_best: http://www.imagebam.com/image/64a4fb100545289
x264_3_veryslow: http://www.imagebam.com/image/963456100545290
libvpx3_best: http://www.imagebam.com/image/61d057100545293
As you can see if you compare to the previous encodes, not much has changed except it took longer to encode. The huge difference between x264 and libvpx remains. *Exactly* as I said.
Not surprising, since webm is currently very unoptimized, which was pretty much demonstrated by the optimized webm decoder Garrett-Glaser and some others wrote for ffmpeg, which ran circles around the current implementation. x264 on the other hand has been extremely optimized, using lots of hand-optimized assembly over a long period of time.
Well, it’s hard to tell if ‘not much changed’ since for some weird reason you used totally different frames this time around.
As for your claims that it ‘completely blows’: no, I don’t think it completely blows. It’s not as good as h264, we already knew this. But it’s not targeted to compete with h264 for top quality high definition encodes or some such; it’s supposed to be a ‘good enough’ web video format with the main strengths of being royalty free, open source and thus able to be implemented everywhere without costing a dime. Also, while most h264 devices out there are constrained to ‘baseline’ or ‘constrained baseline’ profiles, which is what webm will often compete against, here it’s being compared against preset ‘veryslow’ which, like ‘slower’, is WAY beyond ‘baseline’ or ‘constrained baseline’ in terms of quality. These profiles don’t even support b-frames, cabac, qsm etc.
So no, I don’t think your screenshots prove in any way that webm ‘completely blows’, not in this particular test and definitely not for its intended purpose (see above).
At the end of the day, if webm is ‘good enough’ is something neither you nor I will define. It will be defined by how much it will be used on the web.
Err, libvpx is not new. It’s been in development at On2 for years, some comments in the source date back to 2004 (iirc, but in any case I’m not far off). So this “it’s not optimized” thing has absolutely no merit. A whole team of paid full time developers had many years to work on optimizations.
Excuse me?? I made extra sure it’s the same frames. It *is* the same frames. I even numbered the new screenshots accordingly, so you know exactly which to compare with what (hint: click “save image” and you’ll get the files as I’ve named them).
There’s more and more HD videos online. So yes, top quality *is* necessary for a codec, particularly as time moves on. Libvpx doesn’t bring it. A better vp8 might.
I see what you’re doing – first you accused me of idk “cheating with the settings” or something like that. But now that that’s failed, you went into full defensive mode, diverting away from the discussion about picture quality.
I know about the royalty situation, I said so already in a previous comment. But that doesn’t change anything regarding libvpx’s quality.
Your info is outdated. The latest iPhone as well as Android phones are *not* constrained to baseline. Also, what youtube serves by default (360p) is not baseline either. Then there’s the already mentioned increase in HD videos. And that’s *today*. Libvpx can’t keep up with today, and with time demand for high quality video will only go up as more and more HD cameras are being sold and the number of high-profile capable mobile devices continues to rise.
I can already see your next argument (cos you’re saying nothing new, it’s all been said plenty of times before): “It’s possible to improve libvpx.” – Yeah it is, but considering the current progress, I doubt much will happen regarding it. A new encoder is needed, preferably written not by the former On2 people.
If you can’t see the blur and lack of details in everything but shot 1 and 5, I can’t help you. Especially screenshot 3 shows complete failure of libvpx, and there’s other areas in the video I didn’t take screenshots from that are just as bad.
If you can’t see the blur and lack of details in Dark Shikari’s pictures (where it’s even more pronounced), then I don’t think I’m going too far if I say that you *don’t want* to see it.
That’s true. But forgive me if I’m skeptical about the format’s future (or not, your call).
Doesn’t matter if a whole team has had many years to work on optimization if they didn’t do it or simply lacked the technical know-how (implementing code that works and speeding up that code are two different things). Again, look at the increase in webm decoder speed in the new version for ffmpeg, and according to Garrett-Glaser there’s more optimization to be made (some already done that they haven’t submitted yet), so it’s kind of obvious that there’s room for optimization.
Ahh, my bad, I only looked at the first frame last time, where there was a spaceship exterior shot, and it was not in this last test. Though why you would choose a bunch of test shots that are all 80% black is beyond me (which is why I focused on the spaceship shot, which at least had a decent light source).
Yes, there’s more and more HD video online, but that doesn’t mean the quality has to be much better; HD is about resolution, not picture quality. I’ve seen tons of crap quality HD encodes, the web is full of them; heck, I could record my old C64 games at 1920×1024 and say: look, it’s frikkin HD! And that brings us back to the original point: webm needs to be ‘good enough’. The vast majority of youtube videos are of mediocre to ok quality, and the same goes for the video content on other sites like Dailymotion; even places like veoh and vimeo, which actually focus on hd video, are filled with content that webm would have no problem handling quality-wise. These are the type of sites to which webm is attractive. Big media hd streaming will stick to h264, since they offload the cost of licencing onto their customers either through direct charging or through advertising in the video.
No, I never said you were ‘cheating’. I said that the settings for the test were obviously flawed, no matter what the result turned out to be. Again, there’s no way to claim that x264’s preset ‘slower’ corresponds to webm’s ‘fast’; heck, they don’t even have the same number of presets, so how could you reach that conclusion to begin with? If you test an encoder for best quality, you use its BEST setting, not really rocket science. And the only one who has made claims on picture quality is you, with your ‘completely blows’ (maybe it sounds cool in your head, who knows).
Neither are you.
Well, I don’t know what Google has planned for webm in terms of improvement. But they can’t be blamed for On2’s past slow development. When did they release webm? May? June? And they bought On2 around February, I think; likely they haven’t had time to do anything other than examine the code for patent violations before they released it. We’ll see if they intend to put resources into improving it or not, but somehow I doubt they bought On2 for $106m just for vp8 as-is. But maybe there were patents in the deal that they saw value in.
That’s ok. I’m not sure of it either; again, it needs to be ‘good enough’ in order to be attractive together with being free. The web video ‘market’ will decide that by either choosing or not choosing to support webm. I doubt the end users will have much say (nor that they will really care, since judging by the overall low quality of web videos it really seems to have no major impact on a video’s corresponding popularity).
I second this.
The claim that there’s a high demand for high-quality video on the web is purely false. HD is about resolution, but it only leads to a fabulous increase in quality when scaling up the result on a large screen.
1/The current trendy thing is mobile devices, which don’t have such a large screen.
2/Most of the videos people upload to youtube and, to a lesser extent, DM (hence most web videos) are short clips that either have no real video content or don’t require high quality or full screen to be enjoyable (talks, gags, music and AMVs…). Plus, most of the time the limiting factor in amateur videos is the quality of the content, not the encoder.
3/Actually, many people like me have a spontaneous tendency to *decrease* video quality on such websites, so that they have a better chance of smooth streaming. The higher the quality, the higher the chances you’ll have to wait a long time for a video that’ll finally brutally hang in the middle.
Then even the “quality” notion is subjective. Do you think people rather want quality (= good precision, ability to distinguish tiny 4px details in the background) or a gorgeous look (= smooth-looking picture, blurring out all of those horrible-looking square artifacts no matter the loss of information) in online videos ?
In my case, it’s about the latter, and I think I’m not alone. Quality is for professionals and people who spend a lot of money on high-end audio and video systems, but most of the web doesn’t need it as long as the major elements of the picture are kept intact. Things like Youtube are about watching something fun, not about analyzing it in detail despite the rampant JPEG-like blocks all around.
My point was rather that – at least the way I see it – vp8 supporters often use this “it’s not optimized” as an excuse for the current state, and it’s used with the implication that things will improve soon. But it’s been months, and the encoder got slower, not faster, between v0.9.1 and v0.9.2. While it probably takes longer to optimize an encoder, surely at least dixie (new vp8 decoder that’s being developed) should’ve been done by now. This does not inspire confidence that the implication will come true.
I chose random shots from the video. A good codec should be able to deal with every situation. x264 does. What I sense here is you trying to find another excuse for libvpx – “He chose all black sources!”
Of course an encoder will look better if you do selective analysis. But that borders on denial. I could’ve done selective analysis, I could’ve done only shots that look really terrible with libvpx and wouldn’t even include shots 1 and 5. But that wouldn’t be fair. A random selection is.
That’s true. But where’s the point in wasting space on high resolution videos if they’re just a blur and contain no more info than a smaller-res video would? The demands for higher and higher quality will go up, that was my point. What is “good enough” now won’t be enough tomorrow. Things change.
You know, I could again say this is just making excuses for libvpx. There’s a bad encoder here, but you brush it off by saying “it’s good enough”. And maybe it really is. For now. But see above regarding the future.
This is a complete diversion tactic. You’re playing a semantics game. An encoder needs to do well when given a set of settings it provides. I’d understand the complaining if I deliberately used crappy settings, or if I didn’t bother to set up the encoder at all. But that wasn’t the case here; I used balanced settings provided by the encoder developers themselves. “Balanced” is not “flawed”.
And now it just hit me that there actually *is* a correspondence – I used second slowest settings for both encoders, as provided by the encoder developers. So your semantics game didn’t work out.
I tested encoders based on how I’d use them in an actual scenario, counting for the fact that my processor is not the fastest. You know, a real life test of real usage, instead of an academic exercise. Yet another case where I could say you’re using a diversion tactic – you don’t like the results, so you attack the test.
An encoder that’s good but takes forever is useless when I intend to encode several seasons of a tv show. So I didn’t choose the slowest options, but the second slowest. A good encoder would provide a satisfactory result even at those. x264 does. If an encoder can’t be used except at its slowest setting, if all but that is “flawed”, it’s nothing but a poor reflection on the encoder.
And I adjusted the test to your criticism, even though I consider my first test fully valid, so there.
It was a discussion about encoder quality, why else would I go do an encoder test. And based on my results, and those of Dark Shikari’s tests, I came to a conclusion. Granted I gave the conclusion a colorful description. But if that bothers you, here’s a less colorful description of the results: Libvpx is very bad. It takes longer to encode than x264 while producing a quite worse picture.
But when you didn’t like the result, even after I adjusted the test to your liking, you started to bring in all that other stuff. I see it as an attempt to sugarcoat the bad test result.
Err, I provided you results from a small encoding test. Even adjusted the test based on your criticism. Instead of just talk, I provided substance. Where is your test? Choose a source, one that isn’t “mostly black” if that bothered you, provide your testing method and results like I did.
Also, I showed that x264 at its slowest setting (--preset veryslow) is faster than libvpx at its second slowest setting (--good). That is new. At least it was to me.
I too don’t know what Google plans for WebM. All I know is, the situation is really bleak for them, and I have little confidence that it’ll improve. You can attack my test (you failed though), you can try to divert all you want to other stuff. But fact remains, libvpx is bad and if WebM is to have a future, something needs to change. And fast.
I’ve never said that it will be optimized, I’ve said that it’s obviously unoptimized, as proven by the much more optimized ffmpeg decoder.
Well, naturally if you want to compare details it’s better to have shots with, you know, lots of actual details. And while there’s nothing wrong with frames containing a lot of black for a test, you really would want some more varied shots when making comparisons. But despite your attempts to make this about libvpx, it’s not; what I’ve been criticizing is your test, not the result.
People happily encode in HD these days even if the source quality/resolution isn’t adequate. Even professionals release poorly upscaled/filtered crap and slap a ‘HD’ marker on it in the hope to sell the same old thing all over again.
Ahh, so while you scorn the ‘future’ argument in favour of webm and possible increases in quality (one that I have not even made, you are the one who keeps bringing it up!), you yourself see no problem in using ‘but in the future’ to lend weight to your arguments. LOL, really?
You say it’s a ‘bad encoder’, which is that black or white mentality which likely spawned comments like ‘completely blows’. Then of course you backpedaled and said, ‘ok, it might be good enough for now, but look into the future…’.
Yikes, what are you babbling about! There’s no way of knowing if those ‘balanced’ settings that YOU chose to define are comparable. We don’t know what developer A thinks is ‘balanced’ in comparison to developer B. But we know what both of them think is BEST.
Please, are you kidding me? They don’t even have the same number of presets, so even by your twisted logic it doesn’t make sense. Webm has what, 8 presets, and x264 has 10, I believe, so your ehum… ‘logic’ with the steps doesn’t even make sense here, since the ‘steps’ in quality would be larger in webm and thus going down one step would degrade the quality more there than it would in x264, which has more steps (presets). But even if they had the EXACT same number of steps it would still be flawed, because we have no way of knowing what developer A feels step X should be quality-wise compared to developer B. We do however know that at their BEST settings they will throw in all their bells and whistles.
I criticized the test because it was flawed; I never criticized the results, please stop trying to make things up. As for real life testing: what is ‘real life’? Obviously very subjective. You said your processor is not the fastest; I don’t know what that means. I do my encodings on a core i5 clocked at 3.2ghz, and I don’t know how that corresponds to your cpu, but it would be the base of my ‘real life’ settings. And someone else with a better or worse cpu would use theirs as the base of his/her ‘real life’ settings. So really, ‘real life’ doesn’t say squat.
Again, these are your metrics; if you only want to make a comparison that is valid only for your particular needs, then the result pretty much only applies to you as well.
Are you confused? The reason for using the slowest setting was because it’s the one with the BEST quality, which is what this test was all about. If we had been testing the speed of the encoders then that would have been a totally different thing.
Thank you!
What other stuff? And please cut out all these poor accusations.
So sayeth you.
That’s just what I meant by “selective analysis”. It’s the video in its entirety that matters, cos I’ll be watching it in its entirety; I won’t be looking at only the parts which have lots of details.
How an encoder deals with flat areas is equally important, cos well, they’re part of the video too. It does not help me that parts of the video look ok, if other parts are a blurry mess.
I expressed skepticism towards future improvement of the encoder based on its present state and pace of development, yeah. And because of that skepticism, I’m saying the future looks bleak. I don’t see any conflict here.
I didn’t backpedal anything. I still say the encoder blows, I would never use it for my own encodes.
I only added that it might be (which is different than is) good enough in scenarios where quality demands are currently lower. Keyword “currently”.
You’re saying here that I handicapped libvpx because it has fewer presets. If anything, the encoder handicapped itself then, cos I merely used what was provided by the devs. And no, I’m not kidding you, thanks for the condescension.
The conclusion I get from this is that according to you, it’s impossible to compare encoders except at their top settings. That is just extremely limited.
Of course a possible test is with both encoders maxed out, and I even did such a test. But the encoders will not always be used maxed out, so other scenarios, like choosing the second highest provided preset, are just as valid. That “second highest” can mean something different with different encoders does not invalidate the test. What else can I do, except use what’s provided.
Encoding a DVD at the not-top setting, that is real life. Something a person would actually go and do, you know, in real life.
If I was doing something very specific, you might have a point. But only then. What I’m doing though, encoding a DVD at the not-top setting, is a common scenario. So my test is fully relevant.
I did the test, so I think I’ll know better what it was about. It was about a realistic scenario, encoding a DVD. It was only afterwards that you turned it into a “best” thing, and I obliged by doing additional encodes.
The stuff about royalties and “good enough” and “intended purpose” and such. The test is about quality. But you brought in the other stuff to justify libvpx’s lack of quality. I imagine you’ll now accuse me that I made this up. In this case I don’t believe I did, but you’re free to deny it.
About accusations… you called me a fanboy, you called one of my conclusions bullshit, that I’m babbling…
You know, it’s been fun. But you’re starting to get more and more condescending, so I’ll bow out here. The command lines and pics are there, so everyone can make their own conclusions.
Of course, but having all test frames dominated by large flat black areas makes for a poor testbed. Again (just so that you don’t start with your paranoia), I’m not complaining about the results of your test (which I am NOT doubting), but about the actual shortcomings of the test. And no, I am not saying or implying that using a more varied set of frames would actually change the overall result here, just that as far as the quality of the test goes, it would have been an improvement.
You say that no argument should be made concerning the possibility of webm improving in the future while at the same time saying that while webm might be good enough now, it’s current quality won’t be good enough in the future. And you see no conflict in that? …
If you are comparing their best quality, then yes you need to use their best setting. If you are measuring other things, like speed vs quality then that’s a whole other matter.
Not for comparing the best quality two encoders can produce, which is what this discussion was about.
Make up your mind, you’ve been saying it’s suddenly about ‘realistic’ settings (of course defined by YOU) during the last few paragraphs.
You sure do imagine alot.
Well, don’t know about fun exactly, but it’s been interesting.
Dude, that comment completely blows.
Take care!
Wrong. At constant filesize, all presets are just as realistic. Depending on the circumstances, people will take their pick and choose their quality preset according to some little tests used to estimate the quality/encode time ratio of each preset.
What I meant by “realistic” is a setting one would use to encode their videos. Would you use the slowest encoder setting on a Core i3 processor? I wouldn’t. It would take too long.
libvpx with --best took 32 minutes on the sample clip, more than 3x as slow as the x264 encode with preset slower. I would not use this setting to encode full movies, let alone full seasons of a tv show. And we’ll see if the increase in encoding time is worth it, once I take a look at the result. My guess: it isn’t. Stay tuned for results, which I’ll post in the other comment once the x264 encode finishes.
Actually, yes, that’s exactly what I would do. You only have to encode the video once; IMO it’s worth doing it right the first time so you can be happy with the result. There’s no reason not to have the encode running in the background for hours, it’s not going to stop me from surfing the web or whatever while I’m waiting.
If you had just that one video, sure. But if you want to encode tv shows, no way. Anyway, this point is moot for the purpose of this test; I’ve done encodes with the slowest settings, and I’m in the process of extracting screenshots from them, to compare the results with the other encodes I did.
Why not? You just need a way to queue everything up, so that you can leave the encoder running all night (or all week if necessary).
Your argument about encoding time mattering is much more relevant for sites like YouTube which have to continually process more video all the time and have it ready for viewing right away. For personal use, I’d always recommend going for quality first.
Disprove him or shut up.
Your FUD spreading is lame. Go get a camera that can generate RAW files, go out, shoot photos, and compare WebP vs. JPEG.
I haven’t said that his results are wrong, I have said that he is potentially biased and that the test of one image is inconclusive. I have not painted myself as an expert and therefore await a serious comparison from someone who is, using a wide range of images.
And not offering a shred of evidence never stopped you from spouting things like “On2 sold a slightly tweaked h264 codec as their own invention” or “Google releases the source code to a modified H264 Baseline codec”. Can you prove this? Else maybe you should practice what you preach and, you know, shut up?
WTF? Of course he released evidence. He didn’t release a full list of every detail where AVC and VP8 are alike but he released some examples in
http://x264dev.multimedia.cx/archives/486
In chapter 2 he writes that VP8 uses all 9 AVC intra prediction modes and an additional one on top.
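To make that concrete, here’s a rough Python sketch of three of the shared 4×4 intra prediction modes (simplified – the real codecs add rules for unavailable neighbours):

import numpy as np

def intra_predict_4x4(mode, top, left):
    # top/left are the 4 reconstructed pixels above / to the left of the block.
    if mode == "V":    # vertical: copy the row above downwards
        return np.tile(top, (4, 1))
    if mode == "H":    # horizontal: copy the left column across
        return np.tile(left.reshape(4, 1), (1, 4))
    if mode == "DC":   # DC: one flat value, the rounded mean of the neighbours
        return np.full((4, 4), (top.sum() + left.sum() + 4) // 8)
    raise ValueError(mode)

# Only the residual (actual block minus prediction) gets transformed and coded.
pred = intra_predict_4x4("DC", np.array([7, 7, 8, 8]), np.array([6, 7, 7, 8]))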
In chapter 3 he describes a shared problem with the deblocking filter in 4×4 blocks. He describes that VP8 and AVC use the same filter but VP8 applies it to 4×4 blocks while AVC uses 8×8 blocks, causing performance impacts for VP8 but besides that it’s the same.
He also released the source code to ffvp8. Releasing the source code = releasing evidence.
You may not understand the evidence in detail (neither do I), but just because someone does not understand the evidence does not mean there is no evidence.
Sorry to say it, but reading your comment reminds me of fundamentalist Christians who claim there is no evidence for evolution while in fact there is very hard evidence for it. In fact evolution is far better understood than gravity (which, apart from ‘things fall towards objects with high mass’, isn’t understood at all; the quest to understand gravity is one reason the LHC was built). They just don’t want to accept the evidence for evolution.
Btw, Garrett-Glaser does not bash VP8 all the time. In the post I linked he points to some of the features where VP8 diverges from AVC. The 10th intra prediction mode is “cool idea and elegantly simple”.
“Tree-based arithmetic coding (…) greatly reduces the complexity — not speed-wise, but implementation-wise — of the entropy coder. Personally, I quite like it.”
He also writes in chapter 6 how great it is that VP8 does not support interlacing, bashing on AVC for the added complexity of interlacing.
Overall I find that blog post balanced. Sure, the haters only see what they don’t like. But the reality is that out of 6 chapters, 3 present positive aspects of VP8 and 3 present negative. That’s a 50:50 ratio. You can’t get more neutral than that.
But neither you nor Garrett has shown any evidence that these are A) patented techniques or B) if patented, patented by MPEG-LA. Garrett was saying that the techniques were the same or at least very similar, and that there were potential patent problems. YOU jump to the conclusion that these techniques ARE patented, and patented by MPEG-LA, without offering any sort of proof. If they are patented, they may very well be patented by On2, whose vp3 pre-dates h264.
LOL, I’d say it’s the other way around. If anything, you are worshipping at the shrine of Garrett-Glaser.
It is hardly an opinion piece, he does rather in-depth technical commentary, and, more importantly, he produces photos using freely available tools. Completely reproducible, and I must agree that WebP has a hard time matching up to even JPEG in his test:
http://x264.nl/developers/Dark_Shikari/imagecoding/vp8.png
vs.
http://x264.nl/developers/Dark_Shikari/imagecoding/jpeg.png
And, as quite often noted, JPEG is more or less the worst case compression-wise these days. The introduction of WebP is a lot like trying to introduce a new audio compression method with the argument that it beats MP3, while failing to match Vorbis and AAC. Sure JPEG is the standard choice on the web, but not because no one else has beat it on quality.
WebM remains a good thing to have around (though its greatest victory already happened when MPEG-LA loosened the h264 licensing deal for web streaming in direct response). WebP seems rather unnecessary though.
Really heartening in some ways to see his link at the end, where Theora through its years of retuning actually does a much better job than VP8 at this task:
http://x264.nl/developers/Dark_Shikari/imagecoding/theora.png
I am hardly a huge Theora fan, but hats off to the Xiph guys for their hard work.
Edited 2010-10-04 09:17 UTC
Quite a damning review!
I take my comments back for the time being (although I suppose there is nothing to say the compressor cannot be tuned / have more psy optimisation added in order to give better detail).
Funny you mention that. How is it possible that the source image has more compression artifacts than its WEBP version? I noticed this on image 5. Control-+ until the images are enlarged and look on the left at the yellow bricks. On the JPEG version you can’t make out the individual bricks, but on the WEBP version you can. I noticed this on most images here. Most JPEG edge artifacts are gone in their WEBP counterparts. Even if the originals are just scaled down, you should still always see less detail in recompressed images.
I guess they didn’t use the JPEG on the left-hand side to create the WebP on the right; they probably just used the same source picture to create both.
EDIT: Apart from being a logical explanation, this would also be the thing that makes the most sense, as they want to compare both codecs with each other.
commenting about how you can get similar compression rates from JPEG.
http://code.google.com/speed/webp/docs/c_study.html
Google took a random sampling of images from the web, then re-encoded all of them as JPEG, JPEG2K, and their new WebP format, all targeting the same PSNR of 40. Google’s format was significantly better, especially on the smaller images.
Now, that doesn’t mean this format is an automatic win. How important is another 20% compression compared to the problems associated with supporting a new format? How CPU intensive is the decoding/encoding process? etc. A 3rd party test/comparison would also be good, just to double check that google didn’t forget anything important. One good thing is that most of the format is defined in VP8 already, so browsers will already have most of the code in them, it should be just a matter of hooking it up to the correct places to display the images.
Edit:
Another thing to be aware of: as Dark Shikari has pointed out, VP8 is optimized for PSNR. He shows that that’s bad for quality, but it also might have twisted the results of Google’s test a bit, since they were targeting a constant PSNR of 40 in all 3 formats.
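PSNR itself is just a mean-squared-error yardstick, which is exactly why tuning for it can reward blur over retained detail. A minimal sketch:

import numpy as np

def psnr(ref, test, peak=255.0):
    # Peak signal-to-noise ratio in dB; Google's study targeted 40 dB.
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)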
Yeah but why on earth did they use JPEGs as the input, and optimise for PSNR?
Also, in their comparison gallery all the images are so lightly compressed that there are no differences at all. Totally useless.
Edit: Actually I checked by subtracting the images. They are exactly identical; they must have messed up somewhere.
Edit 2: Ahem, sorry, they aren’t exactly identical – the differences are nearly all less than 5 though, which is basically imperceptible. If you plot the difference image it looks pure black.
This guy does a better job. He fucked the JPEGs up at first, but if he has got it right this time then it would match Google’s graph and WebP would indeed look better for similarly sized files.
http://englishhard.com/2010/10/01/real-world-analysis-of-googles-we…
Indeed, looks like it’s the same thing as with independent H.264 vs VP8 comparisons : JPG has a hideous blocky rendering but keeps more details, while WebP makes more nice-looking images at the cost of some blurring.
I prefer blur myself. In my opinion, being aesthetically pleasing as a whole is much more important for a picture than some detail in the wild disappearing. Moreover, JPEG/H264 blocks are very annoying when doing image editing (they make tools based on contour detection fail miserably). And after all, those who really want details will only be satisfied with lossless formats anyway.
Thanks to the Datatypes system of AmigaOS 4, the WebP format is already available and working with older browsers (IBrowse) and with more recent ones (NetSurf), but unfortunately not with OWB, since the latter doesn’t make use of datatypes.
Moreover, every paint program or picture viewer can now display WebP images.
See http://www.amigans.net/modules/news/article.php?storyid=1242
That’s awesome. I don’t know any other OS that can add new file support to every application just by putting a datatype file in a folder.
Well, Haiku sorta has the same ability. By writing a ‘translator’ for a file format all programs can automagically support that format.
Oh yeah, you’re right. I added Amiga IFF graphics support to BeOS that way… but besides Amiga and BeOS/Haiku, I don’t think any other operating systems have that ability.
…higher bandwidth.
100MB/s should be common by now in the US. Sadly, most homes can’t even get close to 10MB/s speeds, and many people don’t even have a tenth of that.