24 bit vs. 16 bit recording

ben123
Are there really big advantages to recording in 24-bit as opposed to 16-bit? I have done both, and once I put the 16-bit tracks through all the sound processors they sound like 24-bit tracks, with a full, rich sound. Your thoughts???
 
Sampling rates tend to be a personal choice, but almost everyone records at 24-bit depth these days.
 
Your thoughts???

My thoughts were that 16 bit would be a good choice because finished songs end up on CD and other lower quality formats. After reading about the difference and the extra headroom that 24 bit gives I decided to record in 24 bit. That's my choice for my music.
 
Well, 24-bit means bigger file sizes, and if you're up to large track counts your hard drive will need to be able to keep up with the higher data rate.
24-bit does give you more headroom, which is nice. Beyond that, I suppose you could run all your plugins at 24-bit and then dither down to 16-bit for the CD master at the end, so your sort of 'internal' noise floor might be lower, BUT I suspect the audible difference is negligible anyway, to the average listener at least.
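The data-rate point is easy to quantify. A quick sketch of uncompressed PCM data rates (stereo assumed; the helper name is just for illustration):

```python
def pcm_bytes_per_sec(sample_rate: int, bit_depth: int, channels: int = 2) -> int:
    """Uncompressed PCM data rate in bytes per second."""
    return sample_rate * (bit_depth // 8) * channels

print(pcm_bytes_per_sec(44100, 16))  # 176400 bytes/s (CD rate)
print(pcm_bytes_per_sec(44100, 24))  # 264600 bytes/s, 50% more per track
```

So each 24-bit track costs half again as much disk bandwidth as its 16-bit equivalent, which is where the track-count concern comes from.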
 
123456789012345678901234
is a more accurate number than
1234567890123456

Your computer is doing math calculations when you apply effects and process your sound.
With 24-bit you get more accurate calculations and more accurate sound.
 
Actually, it'd be 65,536 vs. 16,777,216

*Edit: unless we're talking about a decimal (vs. binary) system ;)
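Those two figures fall straight out of the bit counts, and a couple of lines of Python confirm them:

```python
# Each bit doubles the number of representable amplitude levels.
levels_16 = 2 ** 16
levels_24 = 2 ** 24

print(levels_16)               # 65536
print(levels_24)               # 16777216
print(levels_24 // levels_16)  # 256: 24-bit has 256x the amplitude resolution
```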
 
You should always record at 24-bit wherever possible.

This allows decent headroom before clipping and also allows you to manipulate in a DAW without compromising the final 16-bit CD.

So - record and keep at 24-bit until you have finished mastering and only then, at the final moment, down-convert to a 16-bit file.

Sampling frequency is less important, as all it does is extend the top-end frequency response. And we can only hear up to about 18-20kHz anyway.......
 

Absolutely agree with this.

I'd also add that working at 24 bit and going to 16 for the final product gives you far greater advantages than you'd get by changing the sample rate. In many cases it would probably be easier and better to choose the sample rate you're going to end up with for the final product and avoid doing a sample rate conversion later. E.g. going from 24-bit/44.1kHz to 16-bit/44.1kHz should be easier than going from 24/96 or similar to 16/44.1.

With computer speeds and storage capacity the way they are these days, it's hard to find an excuse not to use 24 bit recording.
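As a rough illustration of that final conversion step, here is a minimal TPDF-dithered float-to-16-bit reduction in Python with NumPy. The function name and the omission of noise shaping are my own simplifications for the sketch, not a description of any particular mastering tool:

```python
import numpy as np

def reduce_to_16bit(samples: np.ndarray, rng=None) -> np.ndarray:
    """Convert float samples in [-1.0, 1.0] to 16-bit PCM with TPDF dither.

    A minimal sketch of the final mastering step only; noise shaping,
    which commercial dithers add on top, is deliberately omitted.
    """
    rng = rng or np.random.default_rng(0)
    # TPDF dither: two uniform noises summed, spanning +/- 1 LSB peak-to-peak.
    dither = (rng.uniform(-0.5, 0.5, samples.shape)
              + rng.uniform(-0.5, 0.5, samples.shape))
    q = np.round(samples * 32767.0 + dither)
    return np.clip(q, -32768, 32767).astype(np.int16)

# Stand-in for a finished high-resolution mix: a sine at -2 dB or so.
mix = np.sin(2 * np.pi * np.arange(1000) / 100) * 0.8
master = reduce_to_16bit(mix)
print(master.dtype)  # int16
```

The key point from the posts above is that this step happens exactly once, at the very end, after all mixing and mastering is done at the higher depth.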
 
Wow, awesome points. In my mind I was thinking that once everything was put on CD or MP3, some of the bits from the 24-bit recording would be truncated, rendering it equivalent to 16-bit, but I see the point about more headroom. I'm looking to get an interface that records 24-bit. Right now I have the basic Mbox, which records 16-bit. I have a Korg multitrack recorder that records in 24-bit, and I'll just transfer the tracks over to my DAW. I just like the sound I get recording straight into the DAW itself as opposed to the multitrack recorder. So I'll look into an upgraded Mbox that records in 24-bit, with more inputs as well.
 
Always record at 24-bit. The extra bits give your DAW more information for your plug-ins to work with, which far outweighs the increased disk space. There is a more in-depth explanation of bit depths and sample rates here:

mixtipsblog.blogspot.com/2011/04/sample-rates-and-bit-depths.html
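To make the "more information for your plug-ins" point concrete, here is a small NumPy sketch (the signal and the -48 dB gain value are arbitrary choices of mine) showing how a 16-bit intermediate stage magnifies quantization error when a signal is attenuated and later boosted back up:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 1000)  # float "source" signal

def through_16bit(signal):
    """Round-trip a float signal through a 16-bit integer representation."""
    q = np.round(signal * 32767).astype(np.int16)
    return q / 32767.0

# Drop the level by 48 dB (a factor of ~256), store at 16-bit, bring it back up.
atten = 10 ** (-48 / 20)
degraded = through_16bit(x * atten) / atten
err_attenuated = np.max(np.abs(degraded - x))

# The same round-trip at full level keeps the error near one 16-bit LSB.
err_full = np.max(np.abs(through_16bit(x) - x))

print(err_full < err_attenuated)  # the quiet trip comes back ~256x noisier
```

This is why intermediate gain changes hurt much less when the working bit depth is well above the delivery depth.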
 
Man, Spare Dougal ! That would make a fantastic name for a band like 'Wicked Lester'{so much better than Kiss}.
 
With 16 bit, there are only 65,536 possible volume levels. With 24 bit, there are 16,777,216 levels. So imagine a volume knob with 65,536 detents/clicks/possible positions. That's 16-bit. Now, imagine a volume knob with 16,777,216 detents/clicks/possible positions. That's 24-bit. So with 16-bit, it's much easier to clip (overload) the input on a loud transient than with 24-bit. Of course, you should be tracking at fairly conservative levels anyhow to minimize the risk of clipping, but with 24-bit, the likelihood of clipping on even a loud transient is very low. So definitely choose 24-bit.
 

The bit depth doesn't affect the ease of clipping. 24-bit and 16-bit both have exactly the same volume capacity; they just divide it up into different numbers of steps. Like dividing a cake into 65,536 or 16,777,216 pieces. They still both make one whole cake.

The issue is resolution; it's analogous to pixels on a camera.
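One way to see the "same cake, more slices" point: both depths share the same 0 dBFS ceiling, and the extra bits push the theoretical noise floor down by roughly 6 dB per bit. A quick check (the helper function is just for illustration):

```python
import math

def dynamic_range_db(bits: int) -> float:
    """Theoretical dynamic range of an ideal N-bit quantizer: 20*log10(2^N)."""
    return 20 * math.log10(2 ** bits)

print(round(dynamic_range_db(16), 1))  # 96.3 dB
print(round(dynamic_range_db(24), 1))  # 144.5 dB
```

The ceiling never moves; only the floor drops.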
 
While the headroom and resolution factors of 24-bit recording are indeed advantageous, I believe we have all the amplitude resolution in 16-bit audio that we need if the right dither and noise shaping are employed.

You can effectively achieve around 18 bits of resolution (within the 16-bit audio) this way, and for modern pop/rock music this is more than enough. This is because dither allows the ear to perceive attenuating signals below the noise floor, and when noise shaping is added, this noise is virtually disposed of (or rather, shifted), depending on what type and shape you use. It's actually a natural function of the ear to be able to perceive signals below the noise floor in the natural world. This is a result of the evolution of the ear as a defense mechanism, and it is the principle on which dither is based.

Don't get hung up on the mantra that 24-bit audio is vastly superior to 16-bit audio. There is no reason why you should not be able to achieve a good recording (and subsequent mix) working in 16-bit, given the right considerations. The main argument in my mind for working at a higher bit depth is for internal DAW precision, where processing would benefit from the lowered noise floor and less quantization distortion.

And just to clear up headroom in digital audio...

Headroom can only be defined in relation to a nominal level, like +4dBu or 0VU. Without that reference, increased amplitude resolution can only be understood as more room on the bottom. IOW, a lower noise floor. 0dBFS is the same in 24-bit as in 16-bit; however, the real-world clipping point is mostly defined by the input and output sensitivities of your converters, which varies depending on your calibration and converter model/type. Granted, there is more of a risk of hearing quantization distortion in 16-bit audio but, like I said, with the right dither and noise shaping considerations, this problem becomes manageable.

Cheers :)
 

I completely understand your point about resolution/camera pixels being a better analogy than the detents on a volume knob. But isn't it true that with 24-bit you're able to record at more conservative levels with less noise than 16-bit (because of the increased dynamic range), which therefore decreases the likelihood of clipping?
 
However, for dither to work, you have to have a higher-resolution signal that you dither down to 16-bit. The dither allows you to hear the stuff that was there (to an extent). But if you recorded in 16-bit, that information was never captured in the first place, and dither won't create it.
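That last point can be demonstrated numerically. In the NumPy sketch below (amplitudes are in integer LSB units; all the specific values are illustrative), a sine wave only a quarter of an LSB tall rounds away to silence without dither, but survives as a measurable correlation in the output when TPDF dither is applied before rounding:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
t = np.arange(n)
# A sine whose peak is a quarter of one LSB. Plain rounding erases it.
sine = 0.25 * np.sin(2 * np.pi * t / 100)

plain = np.round(sine)  # quantization without dither

# TPDF dither: sum of two uniform noises, then round.
tpdf = rng.uniform(-0.5, 0.5, n) + rng.uniform(-0.5, 0.5, n)
dithered = np.round(sine + tpdf)

# Without dither the signal is gone; with dither it survives,
# buried in (but recoverable from) the noise.
print(bool(np.all(plain == 0)))                                         # True
print(bool(np.corrcoef(dithered, np.sin(2 * np.pi * t / 100))[0, 1] > 0.1))  # True
```

But note this only works because `sine` existed at full precision before the rounding step; start from an already-rounded 16-bit capture and there is nothing for the dither to preserve.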
 