24 bit vs. 16 bit recording

  • Thread starter: ben123
I completely understand your point about the resolution/camera pixels being a better analogy than the detents on a volume knob. But isn't it true that with 24-bit you're able to record at more conservative levels with less noise than at 16-bit (because of the increased dynamic range), which therefore decreases the likelihood of clipping?
24 bit gives you more dynamic range, but the added dynamic range is down at the bottom. 0dBFS is the same either way, but in 16 bit you can go down to -96dBFS and in 24 bit you can (theoretically) go down to -144dBFS.

This means that you can record at a lower level without worrying about losing resolution.
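If you want to sanity-check those figures, here's a minimal Python sketch of the usual ~6dB-per-bit rule (these are the theoretical quantization limits, not what real converters actually achieve):

```python
# Theoretical dynamic range of an N-bit quantizer: 20*log10(2^N),
# i.e. roughly 6.02 dB per bit. Real converters land somewhat below this.
import math

for bits in (16, 24):
    dr = 20 * math.log10(2 ** bits)
    print(f"{bits}-bit: {dr:.1f} dB")  # 16-bit: 96.3 dB, 24-bit: 144.5 dB
```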
 
I completely understand your point about the resolution/camera pixels being a better analogy than the detents on a volume knob. But isn't it true that with 24-bit you're able to record at more conservative levels with less noise than at 16-bit (because of the increased dynamic range), which therefore decreases the likelihood of clipping?

Noise floor and headroom are all analogue domain factors. If you calibrate your digital system to 0dBFS=+4dBu and your noise floor is at, say, -90dB, then regardless of your bit depth you'll effectively have a working dynamic range of 90dB; all your bit depth does is divide that dynamic range up into either 16,777,216 or 65,536 segments. Similar to a 1kHz tone being sampled at 44.1kHz or 96kHz: the resulting tone is still a 1kHz wave, but the definition of that wave is more accurate at 96kHz.

This debate really comes into its own when you talk about plug-ins. On a simple A/B test between 16 and 24 bit of a straight recording, most people would find it hard to tell much difference, but when you start using plug-ins and adjusting the volume on the faders in Pro Tools or Logic, it means the computer has more accuracy to work from. Adjusting the volume in Pro Tools will degrade the sound more.

I was mixing analogue once and using Pro Tools simply as a tape machine, but the recorded signals were far too hot on one song, so I just dropped all the levels by 4dB so as not to have to recalibrate the converters each time. As soon as I did this the stereo image shrunk to about 70% of what it was before and it sounded less accurate. Try it, if you can get your hands on a large format console.

When all is said and done, 24bit doesn't increase disk space dramatically, especially now you can buy a TB for threepence. You might as well record in 24 regardless; then the resolution is there if you need it.
 
... If you calibrate your digital system to 0dBFS=+4dBu and your noise floor is at, say, -90dB, then regardless of your bit depth you'll effectively have a working dynamic range of 90dB; all your bit depth does is divide that dynamic range up into either 16,777,216 or 65,536 segments.
Wouldn't 16 bit really limit you to about a 78dB dynamic range, because you only really use about 13 bits, if that?

...the definition of that wave is more accurate at 96kHz.
How is it more accurate? Isn't the precision at 20Hz-20kHz of a signal sampled at 44.1kHz, 96kHz, or even analog, exactly the same below the Nyquist limit? Granted, in between one 44.1 sample and another there are unsampled variations, but they would have to be above ~22kHz, so why would we care?

If I'm recording audio for CDs I use 24-bit/44.1.
If I'm recording audio for video I use 24-bit/48.
I don't need to understand this, lol.
 
Noise floor and headroom are all analogue domain factors. If you calibrate your digital system to 0dBFS=+4dBu and your noise floor is at, say, -90dB, then regardless of your bit depth you'll effectively have a working dynamic range of 90dB; all your bit depth does is divide that dynamic range up into either 16,777,216 or 65,536 segments.
No. One bit is approximately 6dB of dynamic range no matter how many bits you have. Yes, there are more segments to work with, but in your scenario of only 90dB of dynamic range, you would effectively have the same number of segments above the analog noise floor.


Similar to a 1kHz tone being sampled at 44.1kHz or 96kHz: the resulting tone is still a 1kHz wave, but the definition of that wave is more accurate at 96kHz.
No. 44.1k can reproduce a 1k sine wave perfectly. You can sample it at 100MHz and it would still be the same and no more accurate. The only thing that 96kHz could give you that 44.1k can't is harmonics above 20kHz. A sine wave doesn't have any harmonics, so there is nothing to capture. If you did have a signal with content above 20kHz, a 96k sample rate would capture it, but everything below 20k would be the same and no more accurate than if you had recorded at 44.1kHz.
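If you'd rather test this than take it on faith, here's a rough numpy sketch: sample a 1kHz sine at 44.1kHz, rebuild the continuous waveform by sinc (Whittaker-Shannon) interpolation, and compare it with the ideal tone. The tone frequency, capture length and evaluation window are arbitrary choices for the demo:

```python
# Sketch: a band-limited 1 kHz tone sampled at 44.1 kHz reconstructs
# essentially perfectly, so a higher rate adds nothing below Nyquist.
import numpy as np

f0, fs = 1000.0, 44100.0
n = np.arange(441)                      # 10 ms worth of samples
x = np.sin(2 * np.pi * f0 * n / fs)

# Whittaker-Shannon interpolation: x(t) = sum_n x[n] * sinc(fs*t - n)
t = np.linspace(0.002, 0.008, 500)      # stay clear of the window edges
recon = np.array([np.sum(x * np.sinc(fs * ti - n)) for ti in t])
ideal = np.sin(2 * np.pi * f0 * t)

print("max reconstruction error:", np.max(np.abs(recon - ideal)))
# Tiny, and it shrinks further as the capture gets longer: the residual
# comes from truncating the infinite interpolation sum, not from detail
# the sample rate "missed".
```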
 
Noise floor and headroom are all analogue domain factors. If you calibrate your digital system to 0dBFS=+4dBu and your noise floor is at, say, -90dB, then regardless of your bit depth you'll effectively have a working dynamic range of 90dB; all your bit depth does is divide that dynamic range up into either 16,777,216 or 65,536 segments. Similar to a 1kHz tone being sampled at 44.1kHz or 96kHz: the resulting tone is still a 1kHz wave, but the definition of that wave is more accurate at 96kHz.

Besides being a bit confusing, this portion of your post is full of weird claims. Firstly, there is no system I know of where 0dBFS is calibrated to +4dBu. Could you possibly have meant 0VU? In any case, the +4dBu standard is ALWAYS 0VU, so there's no calibration about it. Having a 0dBFS calibration to +4dBu means that you have effectively killed all of the headroom in your system, as it will overload when it reaches +4dBu. Keep in mind that most analogue gear overloads anywhere from +18dBu (and less) to +30dBu (and more) depending on the gear. It is the amount of available dBs over +4dBu (0VU) - the NOMINAL operating range of the gear - that defines how much headroom you have available.
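Just to make that headroom arithmetic concrete (the clip point here is an assumed, illustrative figure, since it varies from unit to unit):

```python
# Headroom = clip point minus nominal operating level (0 VU = +4 dBu).
clip_dbu = 24.0      # assumed clip point; real gear ranges ~+18 to +30 dBu
nominal_dbu = 4.0    # +4 dBu = 0 VU
print("headroom:", clip_dbu - nominal_dbu, "dB")  # 20.0 dB
```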

This debate really comes into its own when you talk about plug-ins. On a simple A/B test between 16 and 24 bit of a straight recording, most people would find it hard to tell much difference, but when you start using plug-ins and adjusting the volume on the faders in Pro Tools or Logic, it means the computer has more accuracy to work from. Adjusting the volume in Pro Tools will degrade the sound more.

Granted, yes, you lose bits whenever you attenuate a signal in the digital domain, but you only lose 1 bit for every 6dB of attenuation. If you find your levels need more than that, then there's something seriously wrong with the recording's gain structure. This is also why we have floating point and double precision, which can accommodate this sort of thing. Dude, I'm sorry, but I get the feeling you are grossly misinformed.
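To make the "1 bit per 6dB" point concrete, here's a tiny Python sketch (the sample values are made up): halving a 16-bit sample and re-quantizing rounds away the bottom bit's worth of detail, while a float mix bus keeps it.

```python
import numpy as np

x = np.array([32767, 101, 3], dtype=np.int16)  # arbitrary 16-bit samples
gain = 0.5                                     # -6.02 dB is a factor of 1/2

fixed = np.round(x * gain).astype(np.int16)    # re-quantized back to 16 bit
floating = x * gain                            # float64 "mix bus"

print(fixed)     # the .5s are rounded away: one bit of detail lost
print(floating)  # [16383.5, 50.5, 1.5] -> detail below the LSB preserved
```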

I was mixing analogue once and using Pro Tools simply as a tape machine, but the recorded signals were far too hot on one song, so I just dropped all the levels by 4dB so as not to have to recalibrate the converters each time. As soon as I did this the stereo image shrunk to about 70% of what it was before and it sounded less accurate. Try it, if you can get your hands on a large format console.

No offense, but I think you may have been hearing things. There's no way in my mind that attenuating the levels within a DAW (ESPECIALLY if it's being stemmed out to an LFAC, a large format analogue console) can affect the stereo image if you're mixing on an LFAC. Is it possible that you were partway through your mix and dropping the level affected saturation/processing? I am willing to bet that this was the case, rather than the blame lying with bits lost through attenuation.

When all is said and done, 24bit doesn't increase disk space dramatically, especially now you can buy a TB for threepence. You might as well record in 24 regardless; then the resolution is there if you need it.

Well, now that I can agree with.

Cheers :)
 
No. 44.1k can reproduce a 1k sine wave perfectly. You can sample it at 100MHz and it would still be the same and no more accurate.

Not true. A 1kHz tone may not show up as many discrepancies as, say, a 20kHz tone (or, even more realistically, a complex sound wave), but there would still be discrepancies. On A-D conversion a signal is first oversampled at X times the sample rate. Groups of samples at this higher rate are then averaged out to one level to represent the mean voltage over the period of time one sample must represent (1/44100th of a second for 44.1kHz or 1/96000th of a second for 96kHz). Then on D-A conversion the signal is oversampled again, but this time the extra samples are an estimation of the original oversampling. This changes the larger stepped voltage signal (44.1 or 96) into finer steps, which are then 'ironed out' using low-pass filters, effectively smoothing the voltage into a continuous one rather than a stepped one, more analogous to the original. Therefore, when averaging out the oversampling levels into a single 'sample', you're much more likely to have an accurate average if you take a smaller number of oversampling samples to create your single sample (i.e. at 96kHz).

This averaging and then re-guessing effectively changes the shape of the reproduced waveform slightly, added to further by the fact that the clocks in an audio converter (due to impurities in the crystals used) aren't 100% accurate, so timing discrepancies are introduced with the sampling and re-sampling. The argument comes into its own when you introduce harmonics, as you say, but just because you can't detect a specific frequency (i.e. above 20kHz) doesn't mean you can't detect the effect it has on the overall signal. I have experimented with a very well renowned engineer friend of mine, who can't actually directly hear frequencies above 10kHz, but has without fail detected the difference between a 44.1kHz sampled sound and a 96kHz sampled sound (with the sound coming from exactly the same source material), as well as a 1dB adjustment of a 16kHz HF shelf when recalling mixes.

The fact is that discrepancies are there, whether you can hear them or whether you decide they are insignificant enough for the lower sample rate to suffice... they are there.

Could you possibly have meant 0VU?

My apologies there. I meant -14dBFS = 0VU, but the figures were almost irrelevant. My point was that your analogue input amps to the converters have an inherent noise floor/headroom ratio. By using 16bit, each bit/quantum represents a larger proportion of the analogue signal than with 24bit. More precisely, in the 16 bit system the OR gates used in the converters to denote which bit should represent a certain voltage only trigger once a higher electric charge has been reached. If the OR gates triggered at equal levels regardless of bit depth, then when flicking between the two bit depths you would have to recalibrate your input amps each time, as, when your 24bit depth read 0VU=-14dBFS, your 16bit depth would then read a higher dBFS level. As it is, the calibration level stays the same, but the electrical charge represented by each bit differs.

Granted, yes, you lose bits whenever you attenuate a signal in the digital domain, but you only lose 1 bit for every 6dB of attenuation. If you find your levels need more than that, then there's something seriously wrong with the recording's gain structure. This is also why we have floating point and double precision, which can accommodate this sort of thing. Dude, I'm sorry, but I get the feeling you are grossly misinformed.

Firstly, one doesn't always record one's mixing material, so it can be the case that you need more attenuation/gain than 6dB. I generally record in a way that means at mixdown my fader rarely moves beyond +/-4dB (disregarding fading in and out, which happens).

Secondly, and this is the point here: attenuation/gain in the digital domain is not as simple as truncation. The algorithms used to generate attenuation/gain introduce artifacts that truncation doesn't. Fader movements are real-time calculations, rather than offline bit truncation.

No offense, but I think you may have been hearing things.

No offense, but if I was, so were some other highly experienced engineers and producers. It was done on the album 'Mandé Variations' by Toumani Diabate, with engineers Jerry Boys and Tom Leader and producer Nick Gold also present. The mix was running out of Pro Tools 7.x, through 192 converters at 96kHz, into an SSL E Series console. During the test the 192s were calibrated to -14dBFS with the faders at 0 and a mix put down to 1/2" tape, then at -18dBFS with the faders at -4dB, and a mix also put down to the same 1/2" tape. This meant that the signal to the console was the same level for each mix, and all that had effectively changed was the Pro Tools fader level. In an A/B playback at equal listening volumes, the unanimous verdict was that stereo width was compromised with the lower fader level in Pro Tools. The possibility that the 192 output amps had changed things was negated by the fact that the result of this test matched precisely the result when the fader levels alone were altered and the monitoring level was then adjusted for equal listening levels. Admittedly this was a solo kora recording, which is much more susceptible to noticeable degradation, however the artifacts were there.


Well, now that I can agree with.
 
What I find with bit depth is that it doesn't matter if it's 16bit or 24bit, as CD quality is 16bit 44.1. So why would you want to add the extra workload to the PC just to get an extra 8 bits that will be lost in the end result?

I record at 16bit 48kHz just for the sake of the high end, and I export at that too. So what I hear from start to finish is what I get. There is no downgrade at all at any point.
 
What I find with bit depth is that it doesn't matter if it's 16bit or 24bit, as CD quality is 16bit 44.1. So why would you want to add the extra workload to the PC just to get an extra 8 bits that will be lost in the end result?

I record at 16bit 48kHz just for the sake of the high end, and I export at that too. So what I hear from start to finish is what I get. There is no downgrade at all at any point.

The point is, if you're using in-the-box processing, the accuracy of the processing is higher. The higher quality you start with, the better the end result, no matter what its end quality is. If you're worrying about extra workload on a PC between 24 and 16, you really need to upgrade to a more modern PC... or preferably a Mac.

Also, if you're recording at 48kHz, you've got to sample rate convert at some point (unless you go into the analogue domain and re-sample at a lower frequency), which is the worst thing for sound degradation. Even the expensive algorithms in SADiE degrade the sound. You're far better off recording at 44.1 and never sample rate converting than doing any sample rate converting... and it's not true that going from 88.2 to 44.1 is better than 96 to 44.1. The algorithms don't work by simply getting rid of every other sample.
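To illustrate that last point (a scipy sketch, not a claim about how SADiE does it internally): 44100/96000 reduces to 147/320, so even the "nice" 96-to-44.1 conversion is a polyphase filtering job, and 88.2-to-44.1, although it is an exact factor of 2, still has to low-pass filter before decimating.

```python
import numpy as np
from scipy.signal import resample_poly

fs_in, fs_out = 96000, 44100            # 44100/96000 = 147/320
t = np.arange(fs_in) / fs_in            # one second of audio at 96 kHz
x = np.sin(2 * np.pi * 1000 * t)        # 1 kHz test tone

y = resample_poly(x, up=147, down=320)  # anti-alias filter + rational resample
print(len(x), "->", len(y))             # 96000 -> 44100
```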

When all is said and done, if 44.1 16bit is good enough quality for you, go for it. Hey, an album I mixed which was recorded at 44.1 16bit was nominated for a Grammy, so that worked for me. It was the songs that made it, not the bit depth.
 
My PC is less than a year old: quad core, 6GB RAM, 1TB hard drive. It's not my PC that's the issue; I find running at the extra bit depth takes away some resources for, say, plugins. I never change any of it at any point. I start there and I burn CDs there.

A lot of people will say that's wrong, but I haven't come across a CD player that will not play the CD, so I'm happy with this. It works for me. I don't have to worry about changing anything from start to finish, and how it sounds from start to finish never changes.
 
Wow. Fascinating thread, considering I'm completely green but coming from a long affair with high end home audio. It's interesting to hear arguments on the recording side of things regarding this topic. In home audio, the camps are divided. The high rez guys are hep cats in with the new, and their 24/96 is lighter and airier than your 44.1 dinosaur DAC, and even if they can't hear beyond 20K their 60K super tweeters provide them with sinusoidal ecstasy. Then there are the 44.1 redbook guys, who have really thrown down the gauntlet by claiming they are the only real "music lovers" and that standard 44.1 DACs have all the magic and harmonic mojo, and anything that handles high rez files (or upsamples) jacks up the X factor. NOS (non-oversampling) DACs are fetching higher prices these days... but I'm still spinning vinyl in my underwear and house slippers.

As you can see, I don't really have anything to contribute here...just eatin' popcorn and reading along. I'm going to bed now.

Drone.
 
24 bit vs. 16 bit is like driving on a highway with wide lanes and plenty of shoulder and median rather than a narrow 2-lane. It doesn't automatically make your driving better, it just lets you relax and focus on making music instead of stressing about pushing levels to the edge of clipping.

Spare Dougal, I think you're overstating the degradation of resampling. There are more important sonic issues to worry about than that. In fact, when I do it, it seems to sort of glue the mix together.
 
Not true. A 1kHz tone may not show up as many discrepancies as, say, a 20kHz tone (or, even more realistically, a complex sound wave), but there would still be discrepancies. On A-D conversion a signal is first oversampled at X times the sample rate. Groups of samples at this higher rate are then averaged out to one level to represent the mean voltage over the period of time one sample must represent (1/44100th of a second for 44.1kHz or 1/96000th of a second for 96kHz). Then on D-A conversion the signal is oversampled again, but this time the extra samples are an estimation of the original oversampling. This changes the larger stepped voltage signal (44.1 or 96) into finer steps, which are then 'ironed out' using low-pass filters, effectively smoothing the voltage into a continuous one rather than a stepped one, more analogous to the original. Therefore, when averaging out the oversampling levels into a single 'sample', you're much more likely to have an accurate average if you take a smaller number of oversampling samples to create your single sample (i.e. at 96kHz).

This averaging and then re-guessing effectively changes the shape of the reproduced waveform slightly, added to further by the fact that the clocks in an audio converter (due to impurities in the crystals used) aren't 100% accurate, so timing discrepancies are introduced with the sampling and re-sampling. The argument comes into its own when you introduce harmonics, as you say, but just because you can't detect a specific frequency (i.e. above 20kHz) doesn't mean you can't detect the effect it has on the overall signal. I have experimented with a very well renowned engineer friend of mine, who can't actually directly hear frequencies above 10kHz, but has without fail detected the difference between a 44.1kHz sampled sound and a 96kHz sampled sound (with the sound coming from exactly the same source material), as well as a 1dB adjustment of a 16kHz HF shelf when recalling mixes.

The fact is that discrepancies are there, whether you can hear them or whether you decide they are insignificant enough for the lower sample rate to suffice... they are there.
Any of the discrepancies would be above Nyquist, so they can't exist in the final analog audio.



My apologies there. I meant -14dBFS = 0VU, but the figures were almost irrelevant. My point was that your analogue input amps to the converters have an inherent noise floor/headroom ratio. By using 16bit, each bit/quantum represents a larger proportion of the analogue signal than with 24bit. More precisely, in the 16 bit system the OR gates used in the converters to denote which bit should represent a certain voltage only trigger once a higher electric charge has been reached. If the OR gates triggered at equal levels regardless of bit depth, then when flicking between the two bit depths you would have to recalibrate your input amps each time, as, when your 24bit depth read 0VU=-14dBFS, your 16bit depth would then read a higher dBFS level. As it is, the calibration level stays the same, but the electrical charge represented by each bit differs.
No. Each bit always represents the same amount of dynamic range regardless of the noise floor. Adding bit depth just pushes your digital noise floor farther down. The 16 most significant bits in a 24 bit signal represent the exact same voltages as the 16 bits in a 16 bit signal.
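A quick way to convince yourself of that last sentence (toy numbers, a positive sample, and a simple truncating quantizer assumed):

```python
# Quantize one normalized voltage at 24 and at 16 bits; the 24-bit code
# with its bottom 8 bits dropped is exactly the 16-bit code.
v = 0.3217                    # arbitrary positive sample in [0, 1)
code24 = int(v * (1 << 23))   # 24-bit quantization
code16 = int(v * (1 << 15))   # 16-bit quantization
print(code24 >> 8 == code16)  # True: the top 16 bits mark the same voltages
```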

I don't know where you are getting your info, but it is just plain wrong.
 
24 bit vs. 16 bit is like driving on a highway with wide lanes and plenty of shoulder and median rather than a narrow 2-lane. It doesn't automatically make your driving better, it just lets you relax and focus on making music instead of stressing about pushing levels to the edge of clipping.

Spare Dougal, I think you're overstating the degradation of resampling. There are more important sonic issues to worry about than that. In fact, when I do it, it seems to sort of glue the mix together.

Nice analogy about the bit depth, but I'm not overstating the resampling. If you like that effect, that's fine; I'm not saying you can't like it, as the noticeable effects will depend on the genre of music. Hell, I use 8bit sampling for some electronic sounds. The "gluing the mix together" would be down to the fact that some definition is lost.
 
... even if they can't hear beyond 20K their 60K super tweeters provide them with sinusoidal ecstasy...
How are their 60K tweeters providing anything extra when the source material is filtered above 22K?
 
How are their 60K tweeters providing anything extra when the source material is filtered above 22K?

That was the exact argument audiophiles were having when JBL came out with their UHF beryllium tweeter (AKA "the bat slayer"), but they sold 'em.
DVD-A, SACD, and even PCM are all capable of reproducing 100K. There exists some unfiltered recorded material with frequency extension far beyond 20k, but it was never popular - made mostly for manufacturers demonstrating their UHF tweeters. Some Japanese audiophile mags were claiming the unfiltered UHF extension of recorded cymbals sounded better (not that they could actually hear the extension). My opinion is that it is pointless to have a driver that goes beyond 20K. Redonkulous.

Drone.
 
Any of the discrepancies would be above Nyquist, so they can't exist in the final analog audio.

The higher the sample rate, the smaller the discrepancy. Well, this is what I've always understood to be the case, and practical experience reflects this conclusion (plus 5 years of study, reading a number of industry standard publications, plus 6 years working in and running recording studios with many experienced engineers). The discrepancies due to clock crystal impurities will be across all frequencies. They will show up more in the higher frequencies, admittedly, but they will still be there.

No. Each bit always represents the same amount of dynamic range regardless of the noise floor. Adding bit depth just pushes your digital noise floor farther down. The 16 most significant bits in a 24 bit signal represent the exact same voltages as the 16 bits in a 16 bit signal.

The pushing down of the noise floor, as you say, is to do with digital noise, which is relative to the number of bits, yes. Though, as the analogue noise floor (mic > mic amp > compressor > EQ > desk > converter input amp) is going to be higher than the digital one, even at 16bit the digital noise is pretty negligible. The analogue signal into the converters remains the same level whether you use 16 or 24 bit, so how can 24 reduce the ratio of analogue noise to signal? Therefore, in a practical capacity (i.e. dBSPL from your speakers) at the same listening volume, the digitisation of the analogue signal relates to cutting it up into more pieces, not giving you more headroom. 24bit is therefore a matter of higher resolution than 16bit.

I don't know where you are getting your info, but it is just plain wrong.

Jay... such vitriol. Nice to meet you too. I don't know where you're getting your information from either, but I'd love to read it. I'm always willing to learn new things. I understand what you're saying, but it's only part of the picture. It's not being related to the practical side of things.

When all is said and done, 24 bit is the preferred option. It's not the end of the world if you record at 16bit, but 24 is preferred. I'd be happy to continue this discussion in private if you wish, Jay, but I think this has already gone beyond the level relevant to the original post. Wouldn't you agree?
 