Recording at 24/96K vs. 16/44.1??

  • Thread starter: WERNER 1
That's because you started out with a 16-bit recording. If you had recorded the singer in 16 bit and then recorded him again in 24 bit, you would have noticed the improvement, and it wouldn't have gone away with dithering.


When you convert a 16 bit file to 24 bits, you aren't adding any information. You still only have 16 bits of sound and an extra 8 bits of nothing.
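The "extra 8 bits of nothing" can be shown directly. A minimal sketch in Python (the sample values are made up for illustration): converting 16-bit PCM to 24-bit just appends eight zero bits below each sample, and shifting back recovers the original exactly.

```python
# Some 16-bit PCM samples (signed integers in [-32768, 32767]).
samples_16 = [12000, -4500, 32767, -32768, 0]

# "Convert" to 24 bits: shift left by 8, i.e. pad with 8 zero bits.
samples_24 = [s << 8 for s in samples_16]

# Shifting back down recovers every original sample exactly:
# the conversion added no information.
recovered = [s >> 8 for s in samples_24]
assert recovered == samples_16
```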
 
But some give the advice to "track at 16 and do post
work in 24", then dither back. I found that simply
converting to 24 changed the sound.
 
Brackish said:
But some give the advice to "track at 16 and do post
work in 24", then dither back.
Whoever said that either doesn't know what they're talking about, or their information is horribly out of date.
 
Brackish said:
FWIW, I just did an experiment where I had a singer
recorded in 16 bit. I converted to 24 bit and I
could hear a change right there. The 24 bit sounded
more "open", and trebly. Then I EQed both the
16 bit recording and the one converted to 24 bit.
I then dithered the 24 back to 16 and compared the
results. I preferred the one that stayed 16 the whole
way -- it sounded fuller/warmer. Dithering the 24 down
took away some of the airiness that was taken on
by the conversion to 24 but not all of it.

I also noticed that if I just converted the 24 back
to 16 without dithering, this would retain more
of the 24 bit airiness than if I dithered.


I think your ears are playing tricks on you. You heard a difference because you thought you should. Changing the bit depth of a 16-bit file only pads it with zeros. And if it did change the audio any other way, it wouldn't be for the better.
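The truncation-vs-dither comparison described in the quoted experiment can be sketched numerically. This is a minimal illustration, not a mastering-grade dither implementation; the sample value and trial count are arbitrary. It shows what dither is actually for: a 24-bit detail smaller than one 16-bit step vanishes entirely under plain truncation, but survives as the average level under TPDF dither, at the cost of a little benign noise.

```python
import random

random.seed(42)

SUB_LSB = 100  # a 24-bit value smaller than one 16-bit step (256)

def truncate(sample_24):
    # Plain truncation: drop the low 8 bits.
    return sample_24 >> 8

def tpdf_dither(sample_24):
    # Add triangular (TPDF) noise spanning about +/- one 16-bit LSB,
    # then round to the nearest 16-bit step.
    noise = random.randint(0, 255) + random.randint(0, 255) - 255
    return (sample_24 + noise + 128) >> 8

trials = 100_000
truncated = [truncate(SUB_LSB) for _ in range(trials)]
dithered = [tpdf_dither(SUB_LSB) for _ in range(trials)]

# Truncation erases the sub-LSB detail completely...
print(sum(truncated) / trials)   # exactly 0.0
# ...while dithering preserves it as the average level,
# close to 100/256, i.e. roughly 0.39.
print(sum(dithered) / trials)
```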
 
bryank said:
the way that i look at it, im an "at home" recording artist. I dont have a PRO set-up or equipment, but if recording at 96k gives me a small increase in fidelity/sound quality....even if its only a 1% improvement..........its still an improvement
Not meant as a loaded question, but as "at home" recordists, are we qualified to judge the "increase in fidelity/sound"? Most consumer and prosumer gear can't capture or reproduce hypersonic frequencies. Your best mic probably rolls off at 15 kHz, and your monitors not much above that. So in all likelihood, capturing sound at 96 kHz allows for frequencies that either don't exist in the source material or, worse, can't be heard in your listening environment. How is it an increase in fidelity if you add something to the mix that your equipment can't reproduce?

Our job as mix engineers (even us amateurs) is to create mixes that translate well. Adding frequencies that you don't perceive but others might is the antithesis of that. It's a bit like running your final mix through BBE or Vintage Warmer just because someone told you it sounds better. If you can't hear the difference, you're better off leaving the effect (or high frequencies) out of the mix, because at least then you have a better idea how the mix will translate to other systems.

Worded another way: If you can't actually perceive the "small increase in fidelity/sound", you might actually be damaging the sound ...
 
Brackish said:
But some give the advice to "track at 16 and do post
work in 24", then dither back.
The internal bit depth your DAW uses is probably (much) higher than 16 bits. Most modern software works at 32- or even 64-bits.

Here's an experiment: Set up a bus chain, with bus 1 feeding bus 2, which then feeds bus 3, and so on up to 5 or 6. Set each bus to reduce the signal level by, say, 20dB. Now on the last bus, add a limiter or two, as many as you need to raise the level back up by the amount the bus chain lowered it.

Feed a 16-bit track into the signal chain. Even though your bus chain reduces the overall level by 100dB or more, the final output, when the signal is brought back up, will sound the same.

What does this illustrate?

The absolute theoretical dynamic range of 16 bits is 96dB. You used your DAW to reduce the signal beyond this limit. If your DAW worked at 16 bits internally, when you raised the signal back up again, there'd be nothing there. Once you go beyond 96dB, you're out of bits.

Try it even with 24-bit files. Reduce the total level by 150dB... The sound will still be there when you re-raise the level.

So up-converting to 24 bits for post work is redundant, since your software is already working at 32 or 64 bits.
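The bus-chain experiment can be simulated numerically. This sketch (pure Python; the signal, bus count, and gain amounts are illustrative) compares float arithmetic, as a 32/64-bit DAW engine would use, against an engine that quantized to 16 bits at every stage. The float path survives a 100dB round trip intact; the 16-bit path falls below its ~96dB floor and comes back as silence.

```python
import math

# A "track": one cycle of a full-scale sine, floats in [-1.0, 1.0].
track = [math.sin(2 * math.pi * n / 64) for n in range(64)]

def gain_db(samples, db):
    g = 10 ** (db / 20)
    return [s * g for s in samples]

def quantize_16bit(samples):
    # What a fixed 16-bit engine would do at every stage:
    # snap each sample to one of 65536 levels.
    return [round(s * 32768) / 32768 for s in samples]

# Float engine: five -20dB busses, then +100dB back up.
f = track
for _ in range(5):
    f = gain_db(f, -20)
f = gain_db(f, +100)

# Hypothetical fixed 16-bit engine: quantize after every gain stage.
q = track
for _ in range(5):
    q = quantize_16bit(gain_db(q, -20))
q = gain_db(q, +100)

err_float = max(abs(a - b) for a, b in zip(f, track))
err_fixed = max(abs(a - b) for a, b in zip(q, track))
print(err_float < 1e-9)  # True: the float signal survives intact
print(err_fixed > 0.5)   # True: below -96 dBFS, the 16-bit signal is gone
```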
 
Brackish said:
FWIW, I just did an experiment where I had a singer
recorded in 16 bit. I converted to 24 bit and I
could hear a change right there. The 24 bit sounded
more "open", and trebly.

Bit depth has nothing to do with frequency content, though. How did it sound more open and trebly and airy? Those terms usually refer to frequency content... not dynamic content.
 
damn! i just got...burned! :eek: :eek: :eek: :o LOL

it was just my 2 cents...........thats all.

but riddle me this...........what if in the next 5 years, mics started to capture frequencies over 15k, and monitors were able to reproduce frequencies higher than 20k, and such...........wouldnt we want to start recording at 96/24 then? maybe the fact that higher sample rates are available now is because we are getting ready for some sort of new revolution in recording, and the way its done.

kinda like when we went from 8 track, then to analog cassette tape, then to CD, now we have DVD,...whats next? im glad we have the option of something better......even if its not 'audible' to our ears. its just preparing us for something better to come....................
 
what if in the next 5 years, mics started to capture frequencies over 15k, and monitors were able to reproduce frequencies higher than 20k

1. many mics do capture above 15k

2. many monitors do playback over 20k

3. the reason why nothing out there bothers going much higher than 15-20k is because 20k is roughly considered to be the highest frequency the human ear can hear - and that's without suffering any hearing loss, which damages one's ability to hear high frequencies before anything else. this is also why no mics or speakers/subs operate below 20hz, which is the absolute lowest limit that the ear can perceive
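The arithmetic behind point 3 is just the Nyquist limit: a sample rate can capture frequencies up to half itself. A quick sketch (rates chosen as the common ones from this thread):

```python
# Nyquist limit: a sample rate captures frequencies up to half itself.
for rate in (44_100, 48_000, 96_000, 192_000):
    print(f"{rate} Hz -> captures up to {rate / 2:.0f} Hz")

# Even plain 44.1 kHz reaches 22,050 Hz, already past the ~20 kHz
# ceiling of human hearing mentioned above.
```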

oh, and for the record...i personally track everything @ 24/96, then output my final mix as a WAV at the same resolution/bit depth...then i load the WAV into my DAW for mastering, after which it gets converted to 16/44.1

i feel as though this approach makes the most sense, because it's the same process that would be used if you were sending your mix off to an experienced mastering engineer, many of whom have made a strong argument for keeping things at 24 bits thru the mastering process
 
thats exactly what i thought! i figured, if most recording engineers and producers like to keep it at 24 bits until the final export AFTER mastering is done........i have this option of doing so also.....so ill do it that way too. :)
 
but this is a VERY strong argument.......and i better back out now......before i get beaten to death!

i dont have enough evidence to support why i do it this way......so trying to convince anyone to do it like this would be pointless for me.

ill just sit back and watch the outcome of this thread.................. :D
 
If people worried half as much about technique as they did sample rate, they'd soon discover what a minor issue sample rate is and how 96k is more often than not a high tech form of turd polishing.

G.
 
SouthSIDE Glen said:
If people worried half as much about technique as they did sample rate, they'd soon discover what a minor issue sample rate is and how 96k is more often than not a high tech form of turd polishing.

G.

A high sample rate is only one more way to improve the job. If people who have good room treatment, excellent mics, excellent preamps, and Pro Tools HD use it... why can't people without the best conditions try "the last polishing", for example, recording at 192? What's the problem? (Someone said that even if it's only 1% better, it's still an improvement, and that's nice!)
(Why can't the rule be the same for owners of expensive and cheap gear?)
 
CIRO said:
A high sample rate is only one more way to improve the job. If people who have good room treatment, excellent mics, excellent preamps, and Pro Tools HD use it... why can't people without the best conditions try "the last polishing", for example, recording at 192? What's the problem? (Someone said that even if it's only 1% better, it's still an improvement, and that's nice!)
(Why can't the rule be the same for owners of expensive and cheap gear?)
Because it's a waste of effort and resources.

Recording at high resolution doesn't make poor-sounding sources sound "better"; it keeps better-sounding things from sounding worse.

Think of it this way: recording at high resolution is like watching television on a high-definition monitor. If the TV signal you are watching is an old black-and-white re-run of "The Honeymooners" broadcast from a TV station 40 miles and one rainstorm away, it's not going to look any better on the HD TV screen than it will on a standard PAL or NTSC portable TV screen. The high-resolution "recording" of the TV signal onto the TV screen does not help, nor does playing it on the standard tube make it any worse.

On the other hand, if one is watching an HD transmission or HD DVD copy of "Star Wars III", it'll look fantastic on the HD TV, and it will still look great, though not as good as the HD, on a standard TV tube.

That little morality play is analogous to our audio situation. If the quality of content and of the signal chain *before* it gets to the recorder is not up to snuff, 96k recording will only make an extremely accurate copy of the poor signal. It won't make it any better. Nor will recording at 44.1 make it any worse.

Just as an HD TV is only better than a high-quality standard resolution TV if it is fed a high-resolution signal, high recording sample rates are only better if the source audio signal is of equal quality.

Very few of us have the type of source material or the tracking chops to provide the quality of signal that makes 96k and above worthwhile. That's why my point was that there are a lot of more important things for most of us to worry about before sample rate.

G.
 
I 100% agree with Glen, as usual (and would give rep to Glen for that, but the forum ain't lettin' me :p )
Back in the day when 44.1 (or 48) was the only sample rate to choose from for CD, there were still kick ass records. Do you think those CDs would be any better sounding if they had the ability to record at a higher sample rate? No!

Again, I go back to my original question several posts back that no one responded to. How big of a difference between 44.1 and 192kHz is it? Is the only time you can hear the difference when you sit down and A/B it yourself? Can you sit with a commercial CD and immediately tell which ones have been recorded at 192 and which ones at 44.1? I will bet you you can't. Can you tell the difference between a CD recorded in someone's basement with a behringer mic and a behringer preamp (no offense behringer) versus some studio using Neve preamps and Neumann mics?
When you hear a poor recording on this forum or even somewhere else, do you immediately ask the recording engineer "What sample rate were you recording at?? Oh, well that's your problem then...if you had tracked at 192 that would have solved everything!" No, because the obvious questions are what's your preamp, what mics, how did you position the mics, what acoustical treatment in the room do you have, etc.
 
There would be a perceptible difference with greater bit depth if the conversion process to a lower depth didn’t destroy the product. You can’t really A/B 16 and 24 if 24 has already been massacred by a conversion algorithm.

To get the most out of greater depth try analog conversion. Master to analog 2-track, and then to Red Book CD from there… or even record higher rate digital directly to CD via analog inputs. Give your light pipe or internal conversion a vacation... you may be pleasantly surprised.

All things being equal -- music, equipment, technique, etc, you will realize a greater benefit from greater bit depth if it’s not cut to pieces through digital conversion.

For the same reason one can make a good case for staying in 16/44.1 from start to finish rather than starting in 24/96 or whatever and converting down.

People talk about A/D a lot, but D/D is the root of much sonic evil… IMO, the crux of the problem.

~Tim
:)
 
With the general public listening to most of their music in mp3 format, this discussion is just silly.

Beyond that, 96k doesn't make anything sound better than it does in the first place. If your source isn't top notch, it won't make any difference.
 
Sillyhat said:
With the general public listening to most of their music in mp3 format, this discussion is just silly.

Beyond that, 96k doesn't make anything sound better than it does in the first place. If your source isn't top notch, it won't make any difference.


Depends on the audience. My customers are audio geeks and normally have expensive playback gear. Sometimes I record at 88.2.
 