If one works long enough with a large number of H.264 encoders, one might notice that most of them are pretty much awful. This of course shouldn’t be a surprise: Sturgeon’s Law says that “90% of everything is crap”. It’s also exacerbated by the fact that H.264 is the most widely-accepted video standard in years and has spawned a huge amount of software implementing it; more implementations inevitably means more mediocre ones.
But even this doesn’t really explain the massive gap between good and bad H.264 encoders. Good H.264 encoders, like x264, can beat previous-generation encoders like Xvid visually at half the bitrate in many cases. Yet bad H.264 encoders are often so terrible that they lose to MPEG-2! The disparity wasn’t nearly this large with previous standards… and there’s a good reason for this.
H.264 offers a greater variety of compression features than any previous standard. This also greatly increases the number of ways that encoder developers can shoot themselves in the foot. In this post I’ll go through a sampling of these. Most of the problems stem from a single fact: blurriness looks good when mean squared error is used as a mode decision metric.
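To see why MSE favors blur, here’s a minimal 1-D sketch (my own illustration, not from the encoders discussed here). It compares two hypothetical reconstructions of a high-frequency “texture” block: one that blurs the detail away entirely, and one that preserves the detail but shifts it by a single sample, the kind of misalignment motion compensation routinely produces. MSE prefers the blur.

```python
def mse(a, b):
    """Mean squared error between two equal-length signals."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

# A 1-D stand-in for a textured block: pure high-frequency detail.
source = [10 if i % 2 == 0 else -10 for i in range(16)]

# Reconstruction A: blur the texture away completely (flat block).
blurred = [0] * 16

# Reconstruction B: keep the texture, but shifted by one sample.
shifted = source[1:] + source[:1]

print(mse(source, blurred))   # 100.0 -- blur looks "better" to MSE
print(mse(source, shifted))   # 400.0 -- preserved detail is punished 4x
```

A human viewer would almost always prefer reconstruction B, since the texture survives; an MSE-driven mode decision picks A every time. This is the mechanism behind most of the failures below.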