Diary Of An x264 Developer

05/09/2008 (2:51 pm)

Variable length coding and you

Let’s say you want to write a number to a bitstream. The decoder is expecting a number, so it knows exactly what to look for. But how do you format this number?

You could write 16 bits no matter what, putting a max of 65535 on this number. This would also waste tons of bits though; what if the number you’re writing here with your encoder is almost always 0, 1, or 2? Writing 16 bits would seem ridiculous, so it only makes sense to use a variable-length code (VLC). But how would the decoder know how *long* the code is? Here’s a simple example:

5 -> 101
2 -> 10
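
To see the ambiguity concretely, here’s a quick sketch (the codes and the stream are made up for illustration):

```python
# Naive binary codes: 2 -> "10" and 5 -> "101" share a prefix, so a
# decoder looking at the front of a bitstream can't tell them apart.
codes = {"10": 2, "101": 5}
stream = "10110"

# Both codewords are a valid match at the start of the stream:
parses = [value for code, value in codes.items() if stream.startswith(code)]
print(parses)  # both 2 and 5 are possible first symbols
```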

Both will look the same to the decoder, since it doesn’t know whether the code ends after the first bit, the second, the third, etc. But there’s a solution to this problem; a common one comes in the form of exponential-Golomb codes. Here’s an example:

0 -> 1
2 -> 011
5 -> 00110
8 -> 0001001

The number of leading zeroes tells the decoder how many bits follow the terminating 1: a code with N leading zeroes is 2N+1 bits long, so the decoder always knows exactly where the code ends. This may seem inefficient, but in a sense one has no other choice but to have some sort of coding scheme like this. Of course, exponential-Golomb codes are only optimal for one specific probability distribution; an optimal variable-length coding system chooses code lengths based on the probability of each value. This is the primary reason why MPEG-4 Part 2 (Xvid/DivX) is superior to MPEG-2: its VLC tables were much better chosen.
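
Here’s a sketch of the scheme (this is the 0th-order unsigned exp-Golomb code; the function names are mine). To encode n, write n+1 in binary and prefix it with one zero per bit after its leading 1:

```python
def exp_golomb_encode(n):
    # Binary representation of n+1, e.g. n=5 -> "110".
    bits = bin(n + 1)[2:]
    # Prefix with (len - 1) zeroes, e.g. n=5 -> "00110".
    return "0" * (len(bits) - 1) + bits

def exp_golomb_decode(stream):
    # Count leading zeroes; the code is 2*zeros + 1 bits long.
    zeros = 0
    while stream[zeros] == "0":
        zeros += 1
    code_len = 2 * zeros + 1
    # The bits from the terminating 1 onward encode n+1.
    value = int(stream[zeros:code_len], 2) - 1
    # Return the value and the rest of the bitstream.
    return value, stream[code_len:]
```

Running the encoder on the values above reproduces the table (0 -> "1", 2 -> "011", 5 -> "00110", 8 -> "0001001"), and the decoder can walk a concatenated stream of such codes unambiguously.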

Of course, the ideal system would be to allow custom VLC tables to be put in the header of the video file; it’s quite trivial to design a two-pass encoder that creates optimal VLC tables for a given video, or even a given frame. H.264 partially does away with this problem with CABAC; while there would be some benefit to including a custom initial CABAC state, the cost of the various CABAC modes adapts quickly to the video. This adaptation isn’t complete, though: CABAC still relies on variable-length codes, because it first converts numbers into strings of binary digits (the “binary” part of CABAC), which are then written into the context-based arithmetic coder. One result of this is that even if one has a B-frame made up entirely of intra blocks, the arithmetic coder won’t be able to completely adapt, and each block will still cost more bits; the variable-length coding for the macroblock type is not chosen in such a way that would allow such “complete” adaptation.
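
As a rough illustration of what “optimal VLC tables” means here: given symbol frequencies (say, gathered in the first pass of a two-pass encoder), Huffman’s algorithm produces code lengths matched to the distribution, with common symbols getting short codes. A minimal sketch, with made-up frequencies:

```python
import heapq
from itertools import count

def huffman_code_lengths(freqs):
    # freqs: dict mapping symbol -> occurrence count.
    # Returns dict mapping symbol -> optimal code length in bits.
    tiebreak = count()  # keeps heap comparisons away from the symbol lists
    heap = [(f, next(tiebreak), [s]) for s, f in freqs.items()]
    heapq.heapify(heap)
    lengths = {s: 0 for s in freqs}
    while len(heap) > 1:
        # Merge the two least-frequent subtrees; every symbol inside
        # them moves one level deeper, i.e. gains one bit.
        f1, _, syms1 = heapq.heappop(heap)
        f2, _, syms2 = heapq.heappop(heap)
        for s in syms1 + syms2:
            lengths[s] += 1
        heapq.heappush(heap, (f1 + f2, next(tiebreak), syms1 + syms2))
    return lengths

# A skewed distribution like "almost always 0, 1, or 2":
print(huffman_code_lengths({0: 8, 1: 4, 2: 2, 3: 2}))
```

With these frequencies the value 0 gets a 1-bit code while the rare values get 3-bit codes, which is exactly the kind of table a fixed standard can’t tailor to your particular video.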