recording at 24 bit 48k?

I really don't think there is an audible difference between 44.1k and 48k audio. For music recording I can't see much gained from shifting up to 48k. When I work with material for video I use 48k, because that is what editors want/need. However, I wouldn't record at 16-bit; it is definitely worth recording at 24-bit. For one thing, the dynamic range of 24 bit is around 144 dB, whereas with 16-bit it is only 96 dB, which means you can record at lower levels and still stay well clear of the noise floor.
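Those 96 dB / 144 dB figures come from the standard rule of thumb that each bit of word length adds roughly 6 dB of theoretical dynamic range. A quick sketch (illustrative only, not from the thread):

```python
# Rule of thumb: an N-bit quantizer has a theoretical dynamic range
# of roughly 6.02 * N dB -- hence ~96 dB for 16-bit, ~144 dB for 24-bit.
def dynamic_range_db(bits: int) -> float:
    return 6.02 * bits

for bits in (16, 24):
    print(f"{bits}-bit: ~{dynamic_range_db(bits):.0f} dB")
# prints:
# 16-bit: ~96 dB
# 24-bit: ~144 dB
```

As noted in the replies below, these are theoretical ceilings; real converters and real rooms sit well short of the 24-bit figure.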
 
the dynamic range of 24 bit is around 144 dB, whereas with 16-bit it is only 96 dB

No converters offer a dynamic range of 144 dB. Most are around 110 dB. Further, have you ever assessed the dynamic range of a typical studio room? For most sources, you'd need to record in a million dollar anechoic chamber to get a noise floor as low as 16 bit digital audio.

--Ethan
 
No converters offer a dynamic range of 144 dB. Most are around 110 dB. Further, have you ever assessed the dynamic range of a typical studio room? For most sources, you'd need to record in a million dollar anechoic chamber to get a noise floor as low as 16 bit digital audio.

--Ethan

Ethan, I love your work with studio acoustics etc. It's great. I take plenty of note of what you say there because you have great ideas and knowledge. However, I am not going to get into a relative-merits-of-16-vs-24-bit recording or converters (Sound Blaster vs. Lynx Aurora) type debate here. Let's just say I have been involved in those kinds of threads on another forum where I, along with others, disagreed with some of your points of view. I CAN hear the difference between converters, and I can hear the difference between 16 and 24 bit. If you can't, that's fine. :)
 
With today's CPU and FPU speeds and inexpensive massive storage media, are there any compelling reasons not to record at 24 bit?

I see the main benefits of a 24 bit word length as: A) you can track with peaks safely below 0dBFS and still have ample dynamic range, and B) during processing, the higher word length means fewer rounding errors with all the math operations. Once the session is finished, you can export at 44.1kHz 16bit and be happy.
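Point B can be made concrete with a toy fixed-point simulation (illustrative only; no real DAW works exactly like this): re-quantizing after every gain change accumulates rounding error, and the error is far smaller on a 24-bit grid than a 16-bit one.

```python
import random

def quantize(x: float, bits: int) -> float:
    """Round x (assumed in -1..1) to the nearest level of an N-bit grid."""
    scale = 2 ** (bits - 1)
    return round(x * scale) / scale

random.seed(0)
signal = [random.uniform(-0.5, 0.5) for _ in range(1000)]

def worst_error(bits: int) -> float:
    # Apply a chain of gain changes, re-quantizing after each step
    # (as a fixed-point processor would), then compare against the
    # exact floating-point result.
    worst = 0.0
    for x in signal:
        y = x
        for gain in (0.7, 1.3, 0.9, 1.1):
            y = quantize(y * gain, bits)
        worst = max(worst, abs(y - x * 0.7 * 1.3 * 0.9 * 1.1))
    return worst

print(worst_error(16))  # on the order of a 16-bit LSB
print(worst_error(24))  # roughly 256x smaller
```

The per-step error is bounded by half an LSB, and a 24-bit LSB is 256 times smaller than a 16-bit one, which is why the accumulated damage stays negligible at the longer word length.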

Actually, my system (Linux OS, using an Echo Audiofire 8) doesn't even give me a choice of anything but 24 bit. Apparently, lower word lengths are possible, but they use the same packet size as 24 bit with the unused bits padded with zeros, so the bandwidth usage is exactly the same.

Paul
 
With today's CPU and FPU speeds and inexpensive massive storage media, are there any compelling reasons not to record at 24 bit?
I see the main benefits of a 24 bit word length as: A) you can track with peaks safely below 0dBFS and still have ample dynamic range, and B) during processing, the higher word length means fewer rounding errors with all the math operations.
Paul

Exactly. And why not just go for 48k as well? It is a minimal increase in your computer's processing load over 44.1k, AND you are building a body of work that is good to go for video should that future re-purposing happen.
 
Hi there
For what it's worth, recording at higher sample rates (providing your converters can handle it) increases the accuracy of how the converter tracks the incoming signals and greatly reduces the chance of potential high-frequency degradation when converting the analogue/electrical signal to a stepped digital signal.
The 24 bits, as discussed, greatly increase your overall dynamic headroom and the accuracy of assigning each sample its correct 'dynamic' value.
The higher the sample rate and bit depth, the more accurately the original sound can be represented when it gets converted into a digital stream of 1s and 0s.

During recording, the converter has to make really fast decisions (44,100 times a second for 44.1kHz sampling) regarding how it assigns the electrical audio signal based on the available sampling rate and bit depth. Every sample therefore is a 'best guess' made by the digital converter, and will never be 100% accurate compared to the original analogue signal (though our analogue ears will not be able to tell the difference most of the time). The more samples made per second, the more accurately the converter tracks the frequency changes over time, and the less likely it is to mess up the digitising of the waveforms. Hence, 48kHz samples the incoming audio 48,000 times a second and 96kHz does it 96,000 times a second! The more bits available for it to work with, the more accurately it can plot the dynamic change over time of the samples relative to other samples.

These days, computers are really fast and have more RAM and HD space than the best computer available when pro-sumer 16/44.1 digital recording first appeared, so if your system can handle the overheads of processing larger files, maintaining the highest level of data integrity within the recording kind of makes sense. Yes, you'll be dithering down to 16/44.1 for CD, but it's always better to start with a higher-quality source.

Trying to squeeze my brain around a simpler explanation, if you start with 16/44.1 as the basis for your recording settings, any degradation of the signal during tracking, processing, bouncing, editing, etc (basically, further resampling of the original signal) will potentially bring the relative quality of the signal below what 16/44.1 is capable of mapping (if that makes sense?) With higher quality source files tracked at 24bits and 48kHz, 88.2kHz or 96kHz, any degradation of the audio during resampling is (hopefully) still going to keep the overall quality of the audio in your mix session well above the 16/44.1 quality threshold until it is time to dither the tracks down to CD.

*phew* My brain needs a holiday after that!

Hope that I have been able to help you grasp a bit more about what the higher sample rates and bit depths mean, Knivez

Dags
 
I won't enter the argument about whether I can HEAR a higher bit depth but it certainly offers rather more protection against clipping when you work at 24 bit instead of 16 bit. Indeed, unless I have to exchange sessions with somebody limited to 24 bit, I now work at 32 bit floating point. This is not to sound better, just to give more flexibility in the mix--I can do processes that raise the levels into what might be clipping at 16 bit then just normalise downwards.
 
I won't enter the argument about whether I can HEAR a higher bit depth but it certainly offers rather more protection against clipping when you work at 24 bit instead of 16 bit. Indeed, unless I have to exchange sessions with somebody limited to 24 bit, I now work at 32 bit floating point. This is not to sound better, just to give more flexibility in the mix--I can do processes that raise the levels into what might be clipping at 16 bit then just normalise downwards.

So, you're saying that 16 bit clips at a different level than 24 or 32 bit????
 
I had a brain fart when I typed that and mentioned 24 bit integer then talked about the advantage of 32 bit floating point--been working with 32 bit float for so long I forgot about 24 bit integer. (Cool Edit and Audition have had 32 bit float for 12 years now.)

Anyway, the advantage I mentioned is specific to 32 bit float. The extra 8 bits form an exponent, which adds hugely to the headroom in processing. I can seriously red-line in mastering or mixing together a bunch of tracks, then just normalise downwards and, voila, no clipping.

Don't try it in simple 24 bit integer though. That clips at the same level as 16 bit.
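A minimal sketch of that difference (illustrative Python, not any DAW's internals): fixed-point PCM destroys an overshoot at the instant it clips, while a float bus keeps it intact so normalising downwards afterwards recovers the waveform.

```python
INT16_MAX = 32767

def clamp_int16(x: float) -> int:
    # Integer PCM hard-clips anything beyond full scale.
    return max(-32768, min(INT16_MAX, round(x * INT16_MAX)))

mix = [0.2, 1.6, -0.4]   # a bus that momentarily peaks ~4 dB over 0 dBFS

# Fixed-point path: the 1.6 peak is pinned at full scale and lost for good.
fixed = [clamp_int16(s) / INT16_MAX for s in mix]

# Float path: values above 1.0 are stored without damage, so normalising
# downwards afterwards puts the peak at exactly 0 dBFS with no clipping.
peak = max(abs(s) for s in mix)
floating = [s / peak for s in mix]

print(fixed)     # the overshoot is flattened to 1.0
print(floating)  # waveform shape preserved
```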
 
That processing is going to be there no matter what bit depth the audio was recorded in.

That said, I track in 24-bit --- Huge dynamic range, no dither noise at the track level, etc.

FTR, I'm no fan of exceeding FS "just because the math allows it" -- It's one of those things that "sort of works" in theory --- Then it hits the DA converters and all bets are off.
 
I can assure you it doesn't just work "in theory". It works perfectly and transparently every time. Processing in 32 bit floating point is one of the reasons I've stuck with Cool Edit and Audition all these years. A fair number of other DAWs are catching up with the advantages now and offering the same--for the same reasons.

As for "then it hits the DA converters": if you let those levels hit the converters, you're doing it wrong (other than some nastiness in monitoring if you let it get that far--but that doesn't affect the final product). The levels are normalised and the bit depth taken down to 16 by the time you're finished, if you're doing your job properly.
 
Hi there
For what it's worth, recording at higher sample rates (providing your converters can handle it) increases the accuracy of how the converter tracks the incoming signals and greatly reduces the chance of potential high-frequency degradation when converting the analogue/electrical signal to a stepped digital signal.
The digital signal is not 'stepped' like the over-simplified graphic that everyone is familiar with. It can't be, because a signal that looked like that would have harmonics in it that would have to be filtered out before the signal got converted in either direction. Technically, there is no 'digital signal'. There is only a bit stream that can be used to reconstruct an analog signal.


The 24 bits, as discussed, greatly increase your overall dynamic headroom and the accuracy of assigning each sample its correct 'dynamic' value.
More like dynamic footroom, since it pushes the noise floor down. 0 dBFS is still in the same place no matter how many bits you use. Yes, you can leave more headroom without digital consequence because the noise floor is further down.


During recording, the converter has to make really fast decisions (44,100 times a second for 44.1kHz sampling) regarding how it assigns the electrical audio signal based on the available sampling rate and bit depth. Every sample therefore is a 'best guess' made by the digital converter, and will never be 100% accurate compared to the original analogue signal (though our analogue ears will not be able to tell the difference most of the time). The more samples made per second, the more accurately the converter tracks the frequency changes over time, and the less likely it is to mess up the digitising of the waveforms. Hence, 48kHz samples the incoming audio 48,000 times a second and 96kHz does it 96,000 times a second! The more bits available for it to work with, the more accurately it can plot the dynamic change over time of the samples relative to other samples.
All a higher sample rate gives you is the ability to record higher frequencies. The only real advantage is the ability to use a more gentle slope on the anti-aliasing filter. It's not more accurate if there is no content above 20k.

Most modern converters oversample. They actually capture the signal in the megahertz range and downsample to the target rate. The higher the sample rate you use, the more likely jitter will be a factor, since there are more samples to get screwed up by it. So it ends up being a complete waste of time if there is no content above 20k to record.
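The "higher frequencies only" point can be seen directly: a tone above Nyquist is not captured more accurately, it folds back as an alias, which is exactly what the anti-aliasing filter exists to prevent. A small illustrative check:

```python
import math

fs = 44_100
f_in = 30_000          # a tone above Nyquist (22,050 Hz at fs = 44.1 kHz)
f_alias = fs - f_in    # 14,100 Hz: where the energy folds back to

# The sample values of the 30 kHz tone are identical to those of a
# phase-inverted 14.1 kHz tone, so after sampling the two are
# indistinguishable.
for n in range(16):
    assert math.isclose(math.sin(2 * math.pi * f_in * n / fs),
                        -math.sin(2 * math.pi * f_alias * n / fs),
                        abs_tol=1e-9)
print(f"{f_in} Hz sampled at {fs} Hz aliases to {f_alias} Hz")
```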


Trying to squeeze my brain around a simpler explanation, if you start with 16/44.1 as the basis for your recording settings, any degradation of the signal during tracking, processing, bouncing, editing, etc (basically, further resampling of the original signal) will potentially bring the relative quality of the signal below what 16/44.1 is capable of mapping (if that makes sense?) With higher quality source files tracked at 24bits and 48kHz, 88.2kHz or 96kHz, any degradation of the audio during resampling is (hopefully) still going to keep the overall quality of the audio in your mix session well above the 16/44.1 quality threshold until it is time to dither the tracks down to CD.
While this is all true, if you start out with 24/44.1kHz, you don't have to downsample at all. You simply shave off the least significant 8 bits, add dither and lose no quality.

One of the other advantages to using 24 bits instead of 16 is, when processing, any rounding errors will end up in the bottom bits. Those bottom bits get shaved off and all of those 'problems' go with it.
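Shaving off the bottom 8 bits with dither can be sketched like this (illustrative only; TPDF dither, no noise shaping, and the sign conventions are my assumptions, not anything from the thread):

```python
import random
random.seed(1)

LSB_16 = 1 << 8   # one 16-bit LSB, expressed in 24-bit integer units

def reduce_to_16(sample_24: int) -> int:
    """Reduce one signed 24-bit sample to 16 bits with TPDF dither.

    Triangular noise of +/- one 16-bit LSB is added before the bottom
    8 bits are dropped, which turns signal-correlated truncation error
    into benign, constant-level hiss."""
    tpdf = (random.random() - random.random()) * LSB_16
    s = round((sample_24 + tpdf) / LSB_16)
    return max(-32768, min(32767, s))

print(reduce_to_16(8388607))   # full-scale 24-bit -> 32767 (full-scale 16-bit)
```

Any rounding errors from earlier processing live in those bottom 8 bits and are discarded along with them, as the post above says.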
 
I can hear the difference between 16 and 24 bit.

I'm certain you cannot, at least for typical music recorded at sensible levels and played back at a sensible volume. But since you're too far away to visit in person for a blind test, there's no point in discussing further.

--Ethan
 
I find all of this fascinating, but understand very little about it other than the general summary of bit depth and sample rate. Having said that, my concerns, as mentioned in this thread, are with the many other aspects of recording way before I worry about sample rate. I record at 24/44.1.
 
I find all of this fascinating, but understand very little about it other than the general summary of bit depth and sample rate. Having said that, my concerns, as mentioned in this thread, are with the many other aspects of recording way before I worry about sample rate. I record at 24/44.1.

I second that ^^^^ I always record in 24 bit 44.1.

However, some people have funny ideas about what is pro. I just received a project to master to CD; the client had gone to another studio to record, was given the mixes on a CD and told that this was the master. Well, it was not mastered: all it was was a CD of the mixed songs, with no fades, dirty starts, and songs at different levels. I think he thought clicking "Normalize" was mastering. Anyway, the client got all the project files from the other studio and the files turned up as 16 bit Aiff. I rang him up and asked why they were supplied as 16 bit Aiff; the reply was that Pro Studios don't use Wav files, that 16 bit Aiff was a pro format, and that if I wanted it any other way I was not a pro studio?
:facepalm::facepalm::facepalm::facepalm:
When 1 Facepalm is never enough.

Alan.
 
I rang him up and asked why they were supplied as 16 bit Aiff; the reply was that Pro Studios don't use Wav files, that 16 bit Aiff was a pro format, and that if I wanted it any other way I was not a pro studio?
:facepalm::facepalm::facepalm::facepalm:
When 1 Facepalm is never enough.

Alan.

So THAT'S what I've been getting wrong all these years!
 
The reason for recording at 24 bit is that it provides a lower noise floor so you can record at a lower level and not ever have to worry about clipping. Recordings done at 16 bit have exactly as much resolution as those done at 24 bit but with a higher noise floor.

Wait, what? 16-bit has a lower resolution than 24-bit. 24-bit allows for 16,777,216 (as previously stated) quantisation levels, which means that when you convert back through the D/A (as you've stated), the noise added by dither is lower. It has higher resolution, and as a result, there's a lower noise floor.
Of course, this may have been what you meant... But I like to start arguments, so even if I'm wrong, I'ma fight over this until I die. Or the thread does, either way. xD
 
I've done some (actually a pretty good amount of) DSP programming and have a few things to say on this topic - I think there's some confusion (and I may well add to it, but I at least have some comments about things I know, and questions about things I doubt that I've read in this thread...so here goes)

I see the main benefits of a 24 bit word length as: A) you can track with peaks safely below 0dBFS and still have ample dynamic range, and B) during processing, the higher word length means fewer rounding errors with all the math operations.

I think this is correct. Even further: there is no 24-bit data type in C, hence DAWs having a "32 bit audio engine" or "64 bit audio engine". Whatever your source is gets converted to 32-bit floats or 64-bit doubles from the get-go.
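The missing 24-bit type is why 24-bit PCM is usually shipped as packed 3-byte samples that get widened on load. A sketch of that unpacking (the signed little-endian layout shown is my assumption about the file format, though it matches common WAV/AIFF practice):

```python
def unpack_s24le(raw: bytes) -> list[int]:
    """Sign-extend packed little-endian 24-bit PCM samples to Python ints."""
    return [int.from_bytes(raw[i:i + 3], "little", signed=True)
            for i in range(0, len(raw), 3)]

# Full-scale positive and full-scale negative 24-bit samples:
print(unpack_s24le(b"\xff\xff\x7f\x00\x00\x80"))  # [8388607, -8388608]
```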

One of the other advantages to using 24 bits instead of 16 is, when processing, any rounding errors will end up in the bottom bits. Those bottom bits get shaved off and all of those 'problems' go with it.

I think this is basically correct, too. The 16 most significant bits (or rather, the only 16 remaining bits), when reduced from whatever the internal word length in your DAW is, are what you get "out", but with the extra headroom the rounding errors don't really matter when truncated at the end (they're truncated out in the down-cast).

The reason for recording at 24 bit is that it provides a lower noise floor so you can record at a lower level and not ever have to worry about clipping. Recordings done at 16 bit have exactly as much resolution as those done at 24 bit but with a higher noise floor.

I think the lower noise floor is a side effect - I don't think the extra 8 bits are just used for noise floor purposes - the whole thing is actually recorded at a higher resolution and more accurately reflects the voltages on the convertors (whether or not the convertors have the technical capability to distinguish that many different voltages.....I'm not really sure - and I kinda doubt it).

The only time I'd bother recording at 24 bits is for a live event where there's a potential for unexpected loud volumes.

Are you sure that the extra 8 bits are used for "extra loud signals" only? I think this is an inaccurate statement for the same reason I think Boulder's was - just at the other end of the spectrum (and with the same caveat that I kinda doubt it even matters.... I just don't think you're right about what the extra bits are used for).

...I now work at 32 bit floating point. This is not to sound better, just to give more flexibility in the mix--I can do processes that raise the levels into what might be clipping at 16 bit then just normalise downwards.

I'm not sure it exactly relates to how DAWs work, but this is exactly the reason I always convert to floats or doubles (64-bit floats) when doing any DSP programming--so there's not a clamp to 16-bit values at every stage; you only have to do that once at the end of the chain. Of course, the DSP programming I've done always uses 16 bit source material, so I think there may be a difference when taking into account higher-resolution sources... but your point stands, and I agree with it--except that I would say it *does* sound better not to reduce to 16 bits, repeatedly, every step of the way. Audibly, literally, better. But I think most DAWs use 32 or 64 bit internal processing anyway, which is basically the same... in which case I still don't know if the source depth *really* mattered beyond 16 bits. I suppose some math examples might help: if you start with x[n] and do y, z, a, b to the n samples in x, do you end up with a more accurate sample if x was sampled at 16 bits or 24? Almost certainly 24... but I don't know if it *really* matters, to be honest, since it's converted to 32 or 64 bits from the beginning regardless, and we can't really distinguish more than is expressible in 16 bits.
 