My camera only has a 12-bit sensor, so a 12-bit master is fine, right?
I recently had a client come to me with pre-graded material for a final pass and mastering/deliverables. They asked me if 12-bit DPX would be OK. This client was not local, and the smaller file size would save time and money in data transfer fees.
I thought about this one for a second and replied unequivocally with a no. My answer would have been different if these were camera originals, since the show was shot with a 12-bit sensor. Here is why 16-bit matters in a 12-bit capture and display world.
Uncertainty in Measurement
Uncertainty in Measurement is defined by Wikipedia as “the range of possible values within which the true value of the measurement lies.”
When I was a young chap, I had an instructor who explained to me how Subway was ripping me off. Every time I ate there, they were cheating me out of part of my sandwich and getting away with it. The loophole was simple: the metric Subway uses is not very precise (which, in our world, translates to a low bit depth). Subway sells a “footlong” sub. That is 1’, not 1.0’, not 1.1’ or 0.9’. Subway’s precision ends at six inches; it’s either a foot or half a foot. The problem is that in actuality the subs are around 9–10 inches. How do they get away with cheating me out of two or three inches?
The cheat works by never using any decimal places. Subway simply rounds to the nearest foot, since that is all their mathematical language accounts for. Using Subway’s “precision,” a “footlong” sub could be anywhere from just over 0.5 feet to just under 1.5 feet. To no one’s surprise, I have never had the luck to receive the latter, but I have received many less-than-footlong subs.
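Subway's quantizer can be sketched in a couple of lines of Python (the function name and sandwich lengths are mine, purely for illustration):

```python
import math

def footlong(actual_feet):
    # Subway's "precision": round to the nearest whole foot (halves round up).
    # Everything from 0.5 to just under 1.5 feet collapses to "1 foot".
    return math.floor(actual_feet + 0.5)

print(footlong(9 / 12))   # a 9-inch sub still "measures" 1 foot
print(footlong(1.4))      # and a 1.4-foot sub also "measures" 1 foot
```

Two very different sandwiches, one code value: that is quantization error in a nutshell.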
Ok, I’m hungry, but what does this have to do with an Arri Alexa?
Most current cameras work with less than 16 bits of precision. That doesn’t mean we should. Let’s take two parallel paths: the same operations, but one executed with higher precision.
Higher Precision Pipe
Starting Captured code value = 9.0
Graded down a stop (linear data) 9.0/2 = 4.5
Mastered to a file that can accommodate the precision = 4.5
Remastered in the future graded up a stop 4.5*2 = 9.0
Lower Precision Pipe
Starting Captured code value = 9
Graded down a stop with lower precision (linear data) round(9/2) = 5
Mastered to a lower precision file = 5
Remastered in the future graded up a stop 5*2 = 10
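The two pipes above can be sketched in a few lines of Python (the function names are mine; note that Python's built-in `round()` rounds halves to even, so a half-up helper is used to match the arithmetic above):

```python
import math

def round_half_up(x):
    # Python's round() gives round(4.5) == 4 (banker's rounding),
    # so we round half up to match the example.
    return math.floor(x + 0.5)

def high_precision_pipe(code_value):
    graded = code_value / 2      # down one stop on linear data
    mastered = graded            # the master format keeps the fraction
    return mastered * 2          # future re-grade: back up one stop

def low_precision_pipe(code_value):
    graded = round_half_up(code_value / 2)  # down one stop, then quantize
    mastered = graded                       # the master only holds integers
    return mastered * 2                     # the rounding error doubles too

print(high_precision_pipe(9))  # 9.0 -- the round trip is lossless
print(low_precision_pipe(9))   # 10  -- the error is baked in forever
```

Once the master rounds, no future re-grade can recover the original value.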
Now that is a super watered-down example, but hopefully you get the point. We land in a world where 2 + 2 doesn’t equal 4.
Do we visually need the extra data? Is it worth it?
Visually? Absolutely not. Our current display systems have 12-bit precision at best, and a 16-bit file will look identical to a 12-bit file on screen. The big difference is that the 16-bit file carries more information about the path used to get there. Baselight and Resolve both work at 32-bit float internally, and we should strive to retain as much of what happened in the grade as possible. Additionally, GPUs think in 8s and 16s: even though a 12-bit word is smaller, it is padded to 16 bits before processing, so there is zero performance gain beyond slightly less disk I/O and storage. Visually, two icebergs may look the same, but they can be very different under the surface.
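The iceberg effect can be shown in numbers. Here is a small sketch (the pixel value is arbitrary and the `quantize` helper is mine) that stores the same normalized value at 12 and 16 bits, then maps both through a 12-bit display:

```python
def quantize(value, bits):
    # Snap a normalized value (0.0-1.0) to the nearest n-bit code value,
    # then return it as a normalized float again.
    levels = (1 << bits) - 1
    return round(value * levels) / levels

v = 0.123456                 # some graded pixel value (illustrative)
v12 = quantize(v, 12)        # what a 12-bit master stores
v16 = quantize(v, 16)        # what a 16-bit master stores

# On a 12-bit display, both files land on the same code value...
print(quantize(v12, 12) == quantize(v16, 12))   # True

# ...but under the surface the 16-bit file sits closer to the original.
print(abs(v - v16) < abs(v - v12))              # True
```

Same picture on the monitor, more of the grade preserved in the file.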
Let’s protect our pixels by ensuring they have the best chance to weather the bends, folds, and manipulations that future generations will inflict on them. 16-bit is the new black ;)