I have been infesting audio forums for about ten years and people are forever comparing video with audio. They are not at all comparable.

People speak of "resolution" in audio thinking of the same term in video. It is bollocks: 24 bits does NOT get you better "resolution" than 16, just a lower noise floor. Each 'bit' is still worth about 6 dB.
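The "about 6 dB per bit" figure falls straight out of 20·log10(2), since each extra bit doubles the number of levels. A quick arithmetic sketch (Python, just to show the numbers):

```python
import math

# Each extra bit doubles the number of quantization levels,
# which halves the step size: an amplitude ratio of 2.
db_per_bit = 20 * math.log10(2)
print(round(db_per_bit, 2))   # ~6.02 dB per bit

# So the theoretical dynamic-range gain going from 16-bit to 24-bit:
print(round(8 * db_per_bit, 1))   # ~48.2 dB
```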

Audio just needs a 20 Hz to 20 kHz bandwidth and a low enough noise and distortion floor. Video bandwidth must increase as the amount of data (pixels) goes up.

Dave.

This is something that I've wondered about and haven't been able to find a credible answer to.

Yes, adding a bit adds about 6 dB to the S/N ratio, but if you look at the math, you get a tremendous increase in the number of discrete steps. 16-bit audio provides 2^16 levels, or 65,536; 24-bit audio has 2^24 levels, or 16,777,216. Are those extra ~16.7 million levels only used for the extra 48 dB of range? If you start at 0 dB, you actually have 2^23 possible levels before you drop to -6 dB; with 16-bit, you only have 2^15. Does it provide a more accurate approximation of the waveform? You are most likely using the top 50 dB of the signal level. Thinking from the top down, it would seem that 24-bit, or 32-bit, would allow for a more accurate sample (less quantization noise). This is separate from dither, which is LSB randomization.
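One way to put numbers on the "more accurate approximation" question is to quantize the same waveform at both depths and measure the error. A rough sketch (plain Python, mid-tread rounding, no dither, 997 Hz test tone; the function name and tone choice are mine):

```python
import math

def quant_error_db(bits, n=48000):
    """Quantize a full-scale sine to 'bits' and return the RMS error in dBFS."""
    step = 2.0 / (2 ** bits)          # step size across the -1..+1 range
    err_sq = 0.0
    for i in range(n):
        x = math.sin(2 * math.pi * 997 * i / n)   # 997 Hz tone, 48 kHz rate
        q = round(x / step) * step                # mid-tread quantizer, no dither
        err_sq += (x - q) ** 2
    return 20 * math.log10(math.sqrt(err_sq / n))

print(quant_error_db(16))   # roughly -101 dBFS
print(quant_error_db(24))   # roughly -149 dBFS, i.e. ~48 dB lower
```

The extra levels show up entirely as a lower error floor (smaller quantization noise), which matches the classic 6.02N + 1.76 dB figure for a full-scale sine.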

I have looked for some information on this, but everyone focuses on it lowering the noise floor. In terms of dB, how much of a decrease do you get going from, say, level 16525001 to 16525000, versus level 65321 to 65320?
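That step-to-step question is just a ratio of adjacent level numbers, 20·log10(upper/lower). Plugging in the levels above (Python, the helper name is mine):

```python
import math

def step_db(upper, lower):
    """dB change when dropping by one quantization level."""
    return 20 * math.log10(upper / lower)

# One step near the top of a 24-bit range:
print(step_db(16525001, 16525000))   # ~0.0000005 dB
# One step near the top of a 16-bit range:
print(step_db(65321, 65320))         # ~0.00013 dB
```

So near full scale, a single 24-bit step is roughly 256 times finer (in dB) than a single 16-bit step, which is exactly the 2^8 factor between the two level counts.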

Any math majors here who can clarify this?