24 bit dynamic level difference

Big Mike B

New member
So I spent a few hours today reading about 16 bit vs 24 bit and came to the conclusion (from what I understood) that 24 bit just gives a greater dynamic range at a larger file size, the difference between 16 and 24 bit being 96dB vs 144dB (a 48dB difference).
So here's my question:
To which part of the amplitude scale is this extra 48dB being added: the bottom, meaning that the noise floor will be lower, or the top, meaning that I'll have more headroom with the same gain set on the preamp?

Or just correct me if I got something wrong.
 
These things are relative, aren't they? For the most part, though, it's closest to correct to say that it's the noise floor that gets pushed down. You can more safely work with smaller signals without quantization noise/distortion. 16bit is better than most of the analog gear feeding it. 24bit is better than most analog gear on the planet.
 
With 24 bit, the noise floor is pushed down. This means you can use lower gain settings on the preamp without losing digital resolution or getting too close to the digital noise floor.
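The noise-floor point can be sketched numerically: quantize the same tone to 16 and 24 bits and measure the rounding error left behind. (A minimal Python sketch, not from the thread; the 997 Hz test tone and -6dBFS level are arbitrary choices.)

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs
signal = 0.5 * np.sin(2 * np.pi * 997 * t)   # a -6 dBFS (peak) 997 Hz test tone

def quantization_noise_db(x, bits):
    """Round x to the given bit depth; return the residual noise in dBFS RMS."""
    step = 2.0 ** -(bits - 1)               # size of one quantization step
    quantized = np.round(x / step) * step   # round each sample to the nearest step
    noise = quantized - x                   # the error introduced by rounding
    return 20 * np.log10(np.sqrt(np.mean(noise ** 2)))

print(quantization_noise_db(signal, 16))   # near -101 dBFS
print(quantization_noise_db(signal, 24))   # near -149 dBFS, about 48 dB lower
```

Same signal, same peaks; only the floor moves.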
 
Anything that anyone adds to answer your question will probably result in a flame war. There are a lot of very sharp people who have misconceptions about what is involved in bit depth. I would recommend carefully reviewing the following Wiki article: READ HERE. Also read the Discussion tab on that article. Lots of good information there.

Glen
 
Ok, so maybe I should reword the question:
If I were to convert 16 bit sound to 24 bit sound (pointless, I know, but just for the sake of argument), would I find that what was at -1dBFS goes to -49dBFS, or that the noise goes from a floor of 16dB to 64dB?
 
Ok, so maybe I should reword the question:
If I were to convert 16 bit sound to 24 bit sound (pointless, I know, but just for the sake of argument), would I find that what was at -1dBFS goes to -49dBFS, or that the noise goes from a floor of 16dB to 64dB?

The potential noise floor will drop, but the noise from the bottom of the 16 bit audio will still be present in the data.

For a newly converted audio stream it depends on the converter. Theoretically they can be calibrated arbitrarily high or low, but generally a +4dBu signal ends up around -18dBFS. I'm pretty sure that lines right up with ashcat's 0dBFS = 27.6V p-p.
 
0dBFS is at the same point either way. If you were to convert 16 bit to 24 bit, it would literally just add 8 zeros to the end of the 16 bit word. The peaks stay in the same place; all the extra dynamic range is added at the bottom.
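The "append zeros" idea is easy to verify: shifting a 16-bit sample left by 8 bits rescales it into the 24-bit range without moving it relative to full scale. (A quick Python check of the arithmetic.)

```python
import math

sample_16 = 32767              # loudest positive 16-bit value
sample_24 = sample_16 << 8     # same word padded with 8 low-order zero bits

dbfs_16 = 20 * math.log10(sample_16 / 32768)     # level vs. 16-bit full scale
dbfs_24 = 20 * math.log10(sample_24 / 8388608)   # level vs. 24-bit full scale
print(dbfs_16, dbfs_24)        # identical: the peak hasn't moved
```

The sample's position below 0dBFS is unchanged; only the smallest representable step got finer.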
 
The potential noise floor will drop, but the noise from the bottom of the 16 bit audio will still be present in the data.

For a newly converted audio stream it depends on the converter. Theoretically they can be calibrated arbitrarily high or low, but generally a +4dBu signal ends up around -18dBFS. I'm pretty sure that lines right up with ashcat's 0dBFS = 27.6V p-p.
I've often wondered whether (and why, and if it even should) the analog input actually swings all the way to the digital rails. It seems like if we want to be absolutely sure that we never write digital 0, and we know what input voltage will do that, then we can build the analog part of the converter to never quite go that far. It's something that I haven't bothered to research.

Any insight on how most home studio level converters handle this? Opinions on how you'd prefer your ADC to handle it?
 
0dBFS is at the same point either way. If you were to convert 16 bit to 24 bit, it would literally just add 8 zeros to the end of the 16 bit word. The peaks stay in the same place; all the extra dynamic range is added at the bottom.

Thank you! That's all I wanted to find out!
 
I've often wondered whether (and why, and if it even should) the analog input actually swings all the way to the digital rails. It seems like if we want to be absolutely sure that we never write digital 0, and we know what input voltage will do that, then we can build the analog part of the converter to never quite go that far. It's something that I haven't bothered to research.

Any insight on how most home studio level converters handle this? Opinions on how you'd prefer your ADC to handle it?

I don't know, but what I've heard is that some converters from early in the 16 bit era were clean on the analog side right up to 0dBFS and beyond, so overs would result in terrible noise, while later converters are designed to run out of analog headroom just before they hit 0dBFS. It's those early converters that gave rise to the fears of digital overs, but they don't really exist anymore. But, as I said, I don't actually know.

An interesting related issue is tolerance of intersample peaks by DACs. From what I've read converters that meet the standard for CD playback have no headroom to accommodate intersample peaks. Some players are designed with the extra headroom, but technically they violate the standard. I think the standard was written when digital recording and playback had been figured out but digital processing had not yet become widespread. If you just digitize a signal and play it back you never have to worry about intersample peaks since they aren't there in the original analog and the ADC can't generate them on its own. But as soon as you put digital processing between the ADC and the DAC it's possible to create digital audio data that represents voltage above the intended limit.
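The intersample peak phenomenon can be demonstrated with the classic worst case: a sine at fs/4 with a 45-degree phase offset, whose samples all land at about 0.707 of the waveform's crest. Normalize so the loudest sample sits at 0dBFS and then reconstruct; the true waveform overshoots by about 3 dB. (A Python sketch using FFT zero-padding as an approximation of ideal band-limited reconstruction.)

```python
import numpy as np

n = np.arange(64)
x = np.sin(2 * np.pi * 0.25 * n + np.pi / 4)   # samples all land at +/-0.707
x = x / np.max(np.abs(x))                      # normalize: sample peak = 1.0 (0 dBFS)

# Ideal band-limited reconstruction, approximated by 8x FFT upsampling:
X = np.fft.rfft(x)                             # 33 bins for 64 samples
X_up = np.concatenate([X, np.zeros(224)])      # zero-pad the spectrum to 257 bins
up = np.fft.irfft(X_up, 512) * 8               # 512-sample waveform, amplitude restored

peak_db = 20 * np.log10(np.max(np.abs(up)))
print(round(peak_db, 2))                       # reconstructed peak about +3 dB over 0 dBFS
```

No sample ever exceeds 0dBFS, yet the analog waveform the DAC must reconstruct does.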
 
I never understood intersample peaks. If anything happens between the samples, it would be happening at a frequency above Nyquist and would be filtered out anyway. What am I missing?
 
0dBFS is at the same point either way. If you were to convert 16 bit to 24 bit, it would literally just add 8 zeros to the end of the 16 bit word. The peaks stay in the same place; all the extra dynamic range is added at the bottom.

I believe that most converters use fractional arithmetic. If you think of the signal as being contained between -1 and +1 and the value is represented the same in 16 bit and 24 bit then there isn't anything 'more' at the top or the bottom. Just more points between zero and one. The result is improved resolution. I think that the whole 'theoretical S/N ratio' proposal is just a red herring to sell newer gear.

Glen
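The fractional (Q15/Q23-style) view described above is easy to put in numbers: full scale is ±1.0 at either bit depth, and what changes is the smallest representable step, which is exactly where the quantization floor sits. (A small Python illustration of that step size.)

```python
import math

step_16 = 2 ** -15    # smallest step between levels, 16-bit fractional (Q15-style)
step_24 = 2 ** -23    # smallest step between levels, 24-bit fractional (Q23-style)

print(20 * math.log10(step_16))   # about -90 dB below full scale
print(20 * math.log10(step_24))   # about -138 dB: same +/-1.0 range, much finer steps
```

"More points between zero and one" and "a lower noise floor" are the same fact stated two ways: the finer step is what pushes the floor down by 48 dB.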
 
I never understood intersample peaks. If anything happens between the samples, it would be happening at a frequency above Nyquist and would be filtered out anyway. What am I missing?
You misunderstand the way that sampling works. It's a very common and intuitive misunderstanding, but that ain't quite the way it works.

Does this help?
[attached image: Fig1.png]

It seems to me, though, that this is also a function of the limits of the analog crap that comes after the actual DAC itself. I could be mistaken on that. I'm sure if it was as easy as it sounds...
 
I believe that most converters use fractional arithmetic. If you think of the signal as being contained between -1 and +1 and the value is represented the same in 16 bit and 24 bit then there isn't anything 'more' at the top or the bottom. Just more points between zero and one. The result is improved resolution. I think that the whole 'theoretical S/N ratio' proposal is just a red herring to sell newer gear.

Glen
I think you're close there, but isn't each bit a fraction of the one to the left of it? If all 1s gives you the biggest possible value, then turning off a bit makes it smaller, and turning off more bits makes it smaller still, right?
 
I think you're close there, but isn't each bit a fraction of the one to the left of it? If all 1s gives you the biggest possible value, then turning off a bit makes it smaller, and turning off more bits makes it smaller still, right?

Sound is alternating current to the device. If you think of max volume as being converted to a sine wave of plus and minus 1 volt, and silence as being at zero volts, it might be easier to see how fractional representation ends up with no real enhancement of dynamic range. Of course, this all depends on how and why a conversion might happen. For example, the conversion that a DAW uses for its internal 32 or 64 bit processing may not use fractional arithmetic. In Sonar - using only internal processing effects in the Pro Channel - I can sum sounds in a bus that go way beyond 0dBFS without causing digital clipping. This suddenly becomes untrue as soon as I send to a hardware device or use most third party effects. Internally, Sonar uses the bigger numbers for additional headroom. I would imagine that other DAWs do the same.

So zero ends up being zero in both 16 bit as well as 24 bits. One (or minus one) also ends up being the same number in both bit depths. There are just more possible values between zero and one in a 24 bit sample than there are in a 16 bit sample.

Glen
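The internal-headroom behavior described above comes down to floating point: a float sample above 1.0 is still a perfectly good number, so an over-hot bus survives until it hits a fixed-point boundary (hardware output, or a plugin that clips). A minimal Python sketch of that difference, with hypothetical track levels:

```python
import numpy as np

a = np.float32(0.9)                # two hot tracks feeding a bus
b = np.float32(0.9)
bus = a + b                        # 1.8: over "full scale", but a float simply stores it
recovered = bus * np.float32(0.5)  # pull the bus fader down 6 dB: signal intact

clipped = float(np.clip(bus, -1.0, 1.0))  # what a fixed-point output stage would do
print(float(bus), float(recovered), clipped)
```

In float, the over is recoverable by turning the fader down; in fixed point, the information above 1.0 is gone the moment the sum happens.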
 
Sound is alternating current to the device. If you think of max volume as being converted to a sine wave of plus and minus 1 volt, and silence as being at zero volts, it might be easier to see how fractional representation ends up with no real enhancement of dynamic range. Of course, this all depends on how and why a conversion might happen. For example, the conversion that a DAW uses for its internal 32 or 64 bit processing may not use fractional arithmetic. In Sonar - using only internal processing effects in the Pro Channel - I can sum sounds in a bus that go way beyond 0dBFS without causing digital clipping. This suddenly becomes untrue as soon as I send to a hardware device or use most third party effects. Internally, Sonar uses the bigger numbers for additional headroom. I would imagine that other DAWs do the same.

So zero ends up being zero in both 16 bit as well as 24 bits. One (or minus one) also ends up being the same number in both bit depths. There are just more possible values between zero and one in a 24 bit sample than there are in a 16 bit sample.

Glen
Errr... Smaller values use fewer bits. Fewer bits means more noise. More bits to represent those smaller values means smaller signals for the same proportion of noise. That's as far as I'm going down this rabbit hole today. Skepticism is good, but this point has been pounded on by a number of folks who understand it a lot better than both of us, and they convinced me.
 
I never understood intersample peaks. If anything happens between the samples, it would be happening at a frequency above Nyquist and would be filtered out anyway. What am I missing?

I understand intersample peaks to be the reconstructed peaks created as part of the D/A process, and they come into play for digital audio that is pushed up to 0dBFS.
During A/D sampling, the sample points don't all land on the very top of every analog peak, so you end up with digital peaks that read 0dBFS when levels are up that far; on the way out, the reconstruction filters then push that material back above 0dBFS.

There's a more involved explanation over on the GS forum...but that's how I understand it, and it makes sense. It offers another reason not to shoot for 0dBFS with mixes, and to leave a little room to absorb potential intersample peaks.

Here's that link:
Tips & Techniques:Intersample peaks - Gearslutz.com
 
...there isn't anything 'more' at the top or the bottom. Just more points between zero and one. The result is improved resolution.

Nope. One bit equals 6dB, or a 2:1 voltage ratio, regardless of the number of bits. More bits means more dynamic range, however you want to allocate it. With 24 bit audio you get 144dB of dynamic range vs. 96dB with 16 bit (to oversimplify it a bit).
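The 6 dB-per-bit rule of thumb is quickly sanity-checked: each extra bit doubles the number of levels, and doubling a voltage ratio is 20·log10(2) dB.

```python
import math

db_per_bit = 20 * math.log10(2)    # doubling a voltage ratio = one extra bit
print(round(db_per_bit, 2))        # 6.02

print(round(16 * db_per_bit))      # 96  (dB of dynamic range, 16 bit, simplified)
print(round(24 * db_per_bit))      # 144 (dB of dynamic range, 24 bit, simplified)
```

(The often-quoted 98 dB for 16 bit adds a 1.76 dB term from the sine-wave SNR derivation; the 6 dB/bit figure is the simplified version used in this thread.)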
 