Hi10 encoding



As many of you keeping up with recent anime releases may have noticed, a lot of releases within the last year are labeled as Hi10 in the filename. You may also have noticed some filenames now include "8-bit" to differentiate the old 8-bit encodes from the new 10-bit ones. 10-bit has become a very popular encoding method as of late because it gives better video quality and uses less space for video files. Those of you who have updated codecs shouldn't really be bothered by the change, but since this is relatively new, I'd just like to share what it is. I have tested encoding in both 8-bit and 10-bit, compared the results, and come away seeing 10-bit as promising. A lot of this may be speculation, but I hope it helps encoders who want to understand it. Feel free to object or add to it. I've searched for in-depth explanations myself but couldn't find one, or at least not one simply put.

For those that want to know about 8-bit color, here's a little background info:

Now 8-bit color, aka 256 colors, has been around quite a while. Older PC users may remember the option appearing back in Win 95. It's been the standard color depth for monitors and videos for a long time. Current PCs are usually set to 32-bit (4,294,967,296 colors) or 24-bit color (16,777,216 colors). Does this color range sound familiar? Those who have bought monitors recently may notice that their monitors advertise 16.77 million colors. But here's the catch... there really aren't 16.77 million colors in the monitor. The common monitors you buy have either 6 bits or 8 bits of color depth per channel. The monitors use rapid color fluctuations to make it seem like there's a color in between. So say you flash red and orange on the screen very fast: it would look like red-orange. On a side note: monitors that claim 16.7 million colors are usually 8-bit, while monitors that claim 16.2 million colors are usually 6-bit.
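
If it helps to see that fluctuation trick in numbers, here's a rough Python sketch. The levels and frame counts are made-up illustration values, not how any particular panel actually works:

```python
# Rough sketch of the "rapid fluctuation" trick: a panel alternates between
# two levels it CAN show so that, averaged over time, the eye perceives an
# in-between level it can't show natively.

def frc_average(low_level, high_level, frames_high, total_frames):
    """Average brightness the eye sees when the panel shows high_level for
    frames_high frames and low_level for the remaining frames."""
    return (high_level * frames_high + low_level * (total_frames - frames_high)) / total_frames

# Goal: a level of 130 on a panel whose nearest real steps are 128 and 132.
low, high = 128, 132
for frames_high in range(5):
    seen = frc_average(low, high, frames_high, 4)
    print(f"{frames_high}/4 frames at {high}: perceived ~{seen}")
# Showing 132 on 2 of every 4 frames lands on ~130, a level the panel
# cannot display on its own.
```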

When it comes to videos, they are usually encoded in 8-bit color (at least in fansubbing and most internet streams). You may have noticed that when you look at a sunset in a video, you can see distinct steps in the gradation between the yellow, red, and purple the sky becomes further out from the sun. This is partly because the 256 levels of 8-bit color aren't enough to make the transition between the yellow of the sun and the blue of the sky look smooth. It's also partly due to the dithering of the colors. When a video is encoded and the encoder recognizes a gradient, it will often take out a number of the colors in the gradient and add dither to make it seem like there are more. Fewer colors means less data, which means less file space. When the colors in a gradient become apparent as distinct steps instead of a smooth transition, that's called banding.
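
To make the banding and dithering idea concrete, here's a toy Python sketch. It's not what an actual encoder does internally, just the general idea of snapping a smooth ramp onto a few levels and then hiding the steps with noise:

```python
# A smooth 0.0-1.0 ramp gets snapped to a limited set of levels, so
# neighbouring pixels collapse onto the same value and form visible bands.
# Adding a little noise (dither) before rounding trades the hard steps for grain.
import random

WIDTH = 1000
ramp = [i / (WIDTH - 1) for i in range(WIDTH)]   # ideal smooth gradient

def quantize(value, levels):
    """Snap a 0.0-1.0 value onto one of `levels` evenly spaced steps."""
    return round(value * (levels - 1)) / (levels - 1)

plain = [quantize(v, 32) for v in ramp]          # coarse: visible bands
dither = [quantize(min(max(v + random.uniform(-0.5, 0.5) / 31, 0.0), 1.0), 32)
          for v in ramp]                         # noise breaks up the bands

def longest_run(values):
    """Length of the longest run of identical neighbouring pixels."""
    best = run = 1
    for a, b in zip(values, values[1:]):
        run = run + 1 if a == b else 1
        best = max(best, run)
    return best

print("longest flat band without dither:", longest_run(plain), "pixels")
print("longest flat band with dither:   ", longest_run(dither), "pixels")
# Both versions use only 32 levels, but the dithered ramp has no long runs
# of identical pixels, so the eye doesn't pick out the band edges as easily.
```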

Now, here's the difference when it comes to 10-bit encodes:

This'll get very technical, so here's your warning. To understand how 10-bit gives a smaller video size, you have to look into how the video is encoded. Currently, the most popular encoding method would have to be H.264, aka the video in most MP4 files, aka pretty much everything on YouTube. When encoding a video, there are three types of frames: I-frames, P-frames, and B-frames. You may notice that when you open a video with certain players (like an old version of VLC media player) and seek to the middle, you get a mess that seems to clear itself as a character moves, or that suddenly becomes a clean picture when the scene changes or after waiting a short while. That's because you probably seeked to a P- or B-frame. The way these frames work is that they take the data difference from the previous frame (or also the next frame, in the case of B-frames) and commit the changes to your screen. Thus only moving or changing things are visible while everything else is a mess. You may also notice the picture suddenly turn clear. This is because I-frames are inserted when there is a huge change in the video, and also at spaced intervals, to reset the video quality. I-frames, which contain the full data of a frame, add more to the file size than a P- or B-frame does, since those contain only the differences between frames.
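
If you just want the size intuition without the codec details, here's a much-simplified Python sketch. Real H.264 uses motion vectors and transforms rather than literal per-pixel differences, so treat this purely as an illustration:

```python
# Why P/B frames are smaller than I-frames: an I-frame stores every sample,
# while a delta frame only stores what changed since the reference frame.

def p_frame(prev, curr):
    """Store only (position, new value) for samples that differ."""
    return [(i, c) for i, (p, c) in enumerate(zip(prev, curr)) if p != c]

frame1 = [10] * 10000              # flat scene: the "I-frame"
frame2 = frame1.copy()
frame2[4000:4050] = [200] * 50     # a small object moves / changes

delta = p_frame(frame1, frame2)
print("I-frame stores", len(frame1), "samples")
print("P-frame stores", len(delta), "changed samples")
# Seeking into the middle of a stream lands you on one of these delta frames;
# until the next I-frame arrives, the decoder has nothing to apply the deltas
# to, which is the "mess" described above.
```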

So here's where 10-bit does better than 8-bit. 10-bit color has 1,024 levels per channel where 8-bit has 256. When it comes to video quality, there's no doubt 10-bit is better, since more colors means better video. So when there's a gradient of colors, the video encoder will take out, say... 50% of the colors in the gradient. The actual percentage depends on the quality you set, but a huge amount of colors gets taken out. Even with 50% of the colors removed, it's still not very visible because the colors are dithered and so on. Just as an 8-bit monitor can appear to show 16.7 million colors, video processing makes it seem like there are far more colors than there are. Still, taking 50% of the colors out of, say... a sunset might remove 20 of the 40 shades in the gradient between yellow, red, and blue. In 10-bit, that would be 80 out of 160 shades. 10-bit encoding would probably take out a larger percentage of colors than 8-bit encoding does, but what's left is still a lot more than 8-bit had to begin with. There would be less banding because there are more colors. Not only that, the encoder also works in 10-bit internally, so it finds it easier to recognize the difference along an edge, say... a peach being held in a hand. This results in cleaner edges.
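
Here's the same idea as a quick back-of-the-envelope calculation in Python. The "15% of the range" sunset gradient is a number I made up purely for illustration:

```python
# How many distinct code values 8-bit and 10-bit can put across the same
# narrow gradient (e.g. a sky that only spans part of the brightness range).

def steps_across(fraction_of_range, bits):
    """How many distinct code values fit inside a slice of the full range."""
    return int(fraction_of_range * (2 ** bits))

sky_slice = 0.15   # hypothetical sunset gradient covering 15% of the range
for bits in (8, 10):
    print(f"{bits}-bit: about {steps_across(sky_slice, bits)} steps across the gradient")
# 8-bit: about 38 steps; 10-bit: about 153. Throw away half of each and the
# 10-bit version still has roughly four times as many shades left, so the
# remaining steps are far less likely to show up as visible bands.
```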

Now that we understand how 10-bit gives better video quality, how does 10-bit save file size? With the way computers count numbers, every extra bit doubles the range: 2 to the 0th power is 1, 2 to the 1st power is 2, 2 to the 2nd power is 4,... 2 to the 8th power is 256, 2 to the 9th power is 512, and 2 to the 10th power is 1,024. This means that with 25% more bits per sample, you get 400% of the number range, or in the case of video, the color range. However, more bits means more data. If that were the only factor in video quality vs. video size, you'd just end up with a roughly 25% bigger file. So how does a 10-bit video end up smaller?
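
The arithmetic above, spelled out as a quick Python check:

```python
# Going from 8 to 10 bits adds 25% more bits per raw sample but multiplies
# the number of representable values by 4.

for bits in (8, 9, 10):
    print(f"2**{bits} = {2 ** bits} values")

extra_bits = (10 - 8) / 8        # 0.25 -> 25% more bits per raw sample
extra_range = 2 ** 10 / 2 ** 8   # 4.0  -> 400% of the value range
print(f"bits grow by {extra_bits:.0%}, value range grows to {extra_range:.0%}")
```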

Accuracy. What an encoding program does is constantly compare the original video against the new video being made. When the new video strays some percentage too far from the original, an I-frame is inserted to reset the quality. Working with an accuracy of 1/1024th instead of 1/256th, the 10-bit encode excels. When a color strays just 3 values in 8-bit, that's about 1.2% of the range; when a color strays 3 values in 10-bit, it's about 0.3%. Errors build up more slowly, so fewer of those expensive I-frames get inserted, and the video quality stays more consistent as well.
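
And the accuracy point as a quick calculation (the 3-value drift is just an example figure):

```python
# The same absolute drift of a few code values is a much smaller fraction of
# the range at 10-bit, so it takes longer to cross whatever error threshold
# forces a costly quality reset.

def relative_drift(steps, bits):
    """Drift of `steps` code values as a fraction of the full range."""
    return steps / (2 ** bits)

for bits in (8, 10):
    print(f"{bits}-bit: 3 steps of drift = {relative_drift(3, bits):.2%} of the range")
# 8-bit:  3 steps is about 1.17% of the range.
# 10-bit: 3 steps is about 0.29% of the range.
```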

So why not go to the next level and do 12-bit, or like normal computer hardware, go from 8-bit to 16-bit and then 32-bit, since more bits means exponentially more color space? It's overkill. 12-bit is 4,096 levels, 16-bit is 65,536, and 32-bit is 4,294,967,296. While extra accuracy does bring the video size down, it can only go so far before the cost of the extra bits overshadows the benefit. Apparently 10-bit is about where the efficiency levels off. There also comes a point where the extra quality simply isn't perceptible, or even displayable by the monitor, unless you have a professional-grade monitor like the ones used for graphic design.

In the end, Hi10 encoding provides somewhere around 5% to 25% better quality than 8-bit encoding, probably averaging around 10%. That's just my estimate based only on what I've seen. The problem with Hi10 is that it lacks support: there's no CUDA acceleration, it isn't supported by standard hardware decoders, and it can't be played on an Xbox 360 or PS3. But that's for now. I suspect that sooner or later, Hi10 will become more common, at least in the fansubbing world.

[Mod's Note: Moved to Miscellaneous from Anime]
 