24bit vs 16bit and Hz

Un-subscribed....

See you guys in 23 pages. I'll come back to make sure everyone's still going in circles with this.

I told you it was a shitty title for a shitty topic. The corpse of the horse is now decomposing.



(What Johny says about the lack of a blind test is 100% true by the way)

Oops....Just realized this is the wrong thread. I thought I was in the "debating analog and digital" thread. Not that this thread hasn't become stupid, but it's not the one I thought it was. Wrong stupid thread. :D
 
Oops....Just realized this is the wrong thread. I thought I was in the "debating analog and digital" thread. Not that this thread hasn't become stupid, but it's not the one I thought it was. Wrong stupid thread. :D

:laughings: :facepalm:
 
Oops....Just realized this is the wrong thread. I thought I was in the "debating analog and digital" thread. Not that this thread hasn't become stupid, but it's not the one I thought it was. Wrong stupid thread. :D

I resent that comment.

Jimmy and I are doing our best to derail this one with silliness, yet we seem to be only an also-ran in the stupidness stakes.

I'm sulking now.
 
You guys are funny!

I try different things in my recording chain, whether it's gear or bit depth or sampling rate, and I decide if I hear a difference and then I go with what I think sounds best.

Basically miro's method.
I'm pretty indifferent to what someone on HR says about whether I should hear a difference or not.
 
It seems likely that you are wanting to hear a difference to validate your decision to use 96kHz (sub-consciously, of course).

But that's just the thing....I don't use 96kHz.

I only said IF I wanted to, it would be my preference based on what I was perceiving. There's NO need for me to do "null tests" and "double blind tests" to have a preference. It's a subjective thing....don't you guys get it?

When you pick 2-3 mics out of the locker....do you do a "double blind null test" to decide which one to use or do you just go with your preference based on what you are hearing at the moment?
Do you do double blind null tests before making any/all decisions about anything/everything you are doing in the studio???

:facepalm: :D

I don't know about some of you guys, but I'm not running a lab experiment in my studio....I'm just trying to make some music.


Oh...I use 88.2 kHz with my converters, but I also use 44.1 and 48....depends on my mood.

;)
 
Oops....Just realized this is the wrong thread. I thought I was in the "debating analog and digital" thread. Not that this thread hasn't become stupid, but it's not the one I thought it was. Wrong stupid thread. :D

Can someone call a nurse to show RAMI the way to his room....he's on the wrong floor again.....
 
But that's just the thing....I don't use 96kHz.

I only said IF I wanted to, it would be my preference based on what I was perceiving. There's NO need for me to do "null tests" and "double blind" tests to have a preference. It's a subjective thing....don't you guys get it?

When you pick 2-3 mics out of the locker....do you do a "double blind null test" to decide which one to use or do you just go with your preference based on what you are hearing at the moment?
Do you do double blind null tests before making any/all decisions about anything/everything you are doing in the studio???

:facepalm: :D

I don't know about some of you guys, but I'm not running a lab experiment in my studio....I'm just trying to make some music.


Oh...I use 88.2 kHz with my converters, but I also use 44.1 and 48....depends on my mood.

;)

Apologies - must have been a misread on my part. Anyhoo, knowing which file is which and the conditions under which your tests were done compromise your ability to tell whether you really can hear a difference. I'm not suggesting anyone starts scientific testing in their studio. I was merely pointing out that the only way you can really test this is via blind testing in more rigorously designed studies.

In answer to your question, I don't perform scientific studies before making decisions about this kind of thing. However, when I first started out, it was useful to know that due to the Nyquist-Whatshisface theorem, I didn't need to go up to 96kHz or 192kHz to get a quality recording (my interface goes that high - it must be useful, right?!?!). A quick Google later and I was able to settle on 44.1kHz as a decent starting point without needing to break the HDD-space bank too quickly. I don't see any need to go higher - my limitations are not my sample rate (i.e. garbage in = garbage out :D).
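For anyone who wants the Nyquist reasoning spelled out, here's a rough sketch (illustrative only, assuming the commonly quoted 20 kHz upper limit of hearing):

```python
# Rough illustration of the Nyquist criterion (not from the post above):
# a band-limited signal with highest frequency f_max can be captured
# if the sample rate exceeds 2 * f_max.

AUDIBLE_MAX_HZ = 20_000  # commonly quoted upper limit of human hearing

def covers_audible_band(sample_rate_hz: float) -> bool:
    """True if the sample rate satisfies Nyquist for a 20 kHz bandwidth."""
    return sample_rate_hz > 2 * AUDIBLE_MAX_HZ

for rate in (44_100, 48_000, 88_200, 96_000, 192_000):
    print(rate, "Hz:", "covers 20 kHz" if covers_audible_band(rate) else "too low")
# 2 x 20 kHz = 40 kHz, so even 44.1 kHz already has the audible band covered.
```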
 
In answer to your question, I don't perform scientific studies before making decisions about this kind of thing. However, when I first started out, it was useful to know that due to the Nyquist-Whatshisface theorem, I didn't need to go up to 96kHz or 192kHz to get a quality recording (my interface goes that high - it must be useful, right?!?!). A quick Google later and I was able to settle on 44.1kHz as a decent starting point without needing to break the HDD-space bank too quickly. I don't see any need to go higher - my limitations are not my sample rate (i.e. garbage in = garbage out :D).

Well..I was just making a point. :)

So you made your decision based on your desire not to eat up HD space...and Google.
OK...that's your preference, though I'm curious if you've ever tried recording the same thing at all the different rates on your converter...and then playing them back at their respective rates to see if you liked any one more than the others...just by listening to them?
It's not uncommon for a converter to sound different at different rates...and that has nothing to do with Nyquist, it's just a design thing per converter. I'm not saying you should make your preference based on that and ignore the HD space concerns...but it's always interesting to compare.

We make a million decisions during a session based entirely on what we are hearing/perceiving (right or wrong)...
...so I get a little :yawn: when math becomes SO important for just 1-2 aspects of that subjective decision making process.
 
I have so much other stuff to improve before sample rate might become the dominant factor. I had enough faith in the science and the opinions of others to make a starting decision. Had my recordings sounded obviously terrible, beyond the quality I would expect for my abilities, I would have experimented with bit depth and sample rate. As the recordings sounded about right compared to the performance and quality of mics, etc., I don't consider it a limiting factor.

As I improve (assuming I do), sample rate might be something I come back to mess about with. Right now, it's not the place I'm going to get the most mileage from.
 
As I improve (assuming I do), sample rate might be something I come back to mess about with. Right now, it's not the place I'm going to get the most mileage from.

Oh I agree...there are so many other things in the studio environment that can make huge differences in the audio quality before making a big effort to change converters and rates and whatnot, but most of those other "upgrades" are chosen in a subjective manner...often by budget considerations....sometimes based on the views of others...etc...etc.
If you just go by specs-n-numbers on everything, it rarely tells the whole story. In the end you will choose what feels best to you....your preference, for whatever reason.
 
Aren't you happy? 24bit is a big improvement. The Hz does matter. Since the CD will be 44.1, it is better to set it at 88.2 than at anything not divisible by 44.1. Otherwise there is dithering to get it to 44.1. Now here's a kicker. A major engineer did an upsampling from 16 to 24bit. The sound actually improved. Even on the bench you could see it was better. He had no idea why...it just sounded better.

Remember while recording to watch your levels, make everyone play as loud as they can while setting the levels and if necessary put a limiter on the input to protect your headroom. We thrash it out for a minute while setting things just below "0" and the tracks sound sweet when we use dynamics. Good Luck
NewYorkRod

I just did some 24bit recordings (used to do 16bit) and the difference is amazing! I feel like an idiot for not going 24bit in the past. Anyways, I'm wondering if it's worth recording at a higher Hz than 44.1? I've read that Hz don't convert down as well as bits, when formatting back to CD levels.

Thanks,
-Adam
 
Aren't you happy? 24bit is a big improvement. The Hz does matter. Since the CD will be 44.1, it is better to set it at 88.2 than at anything not divisible by 44.1. Otherwise there is dithering to get it to 44.1. Now here's a kicker. A major engineer did an upsampling from 16 to 24bit. The sound actually improved. Even on the bench you could see it was better. He had no idea why...it just sounded better.

Remember while recording to watch your levels, make everyone play as loud as they can while setting the levels and if necessary put a limiter on the input to protect your headroom. We thrash it out for a minute while setting things just below "0" and the tracks sound sweet when we use dynamics. Good Luck
NewYorkRod

Individual anecdotes aside, the vast majority of pro studios run 24bits at 44.1kHz and only use higher sampling rates for "technical" reasons or if some poncey "prodooosha" insists upon it.

24bits allows levels to average at -18 or -20dBFS and thus tracking can be done "clean and dry".

"Pro" converters generally produce +4dBu for -18dBFS and thus have a maximum output/input capability of better than +22dBu, well in excess of anything available to the Home Jockey!

Pros also calibrate their monitors to an 83dB average level. A whole OTHER ball game!

Dave.
 
Aren't you happy? 24bit is a big improvement. The Hz does matter. Since the CD will be 44.1, it is better to set it at 88.2 than at anything not divisible by 44.1. Otherwise there is dithering to get it to 44.1. Now here's a kicker. A major engineer did an upsampling from 16 to 24bit. The sound actually improved. Even on the bench you could see it was better. He had no idea why...it just sounded better.

Remember while recording to watch your levels, make everyone play as loud as they can while setting the levels and if necessary put a limiter on the input to protect your headroom. We thrash it out for a minute while setting things just below "0" and the tracks sound sweet when we use dynamics. Good Luck
NewYorkRod

First off, you'd probably be better saying "sample rate" rather than Hz. Hertz are used to measure the frequencies of a lot of different things.

Second, if you can HEAR a difference between 16 bit and 24 bit, you're talking yourself into it--sort of an aural placebo effect. The only difference between 16 bit and 24 bit is the dynamic range available. The actual sound at the levels you can actually hear is identical.
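For anyone who wants the numbers behind that: theoretical dynamic range scales at roughly 6 dB per bit, as this back-of-envelope sketch (illustrative only) shows:

```python
import math

# Back-of-envelope: each extra bit roughly doubles the number of
# quantisation steps, adding about 20*log10(2) ~= 6.02 dB of
# theoretical (undithered) dynamic range.

def dynamic_range_db(bits: int) -> float:
    return 20 * math.log10(2 ** bits)

for bits in (16, 24):
    print(f"{bits}-bit: ~{dynamic_range_db(bits):.0f} dB of dynamic range")
# Prints roughly 96 dB for 16-bit and 144 dB for 24-bit; the extra
# range all sits in the noise floor, far below normal listening levels.
```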

Third, I'm highly sceptical of your un-named "major engineer" and his results. RECORDING at 24 bit doesn't provide a better sound, just more dynamic range--and upsampling from 16 bit still leaves your tracks with the dynamic range you started with even if you have more to play with during the mix.

Finally, ecc83 gives you an excellent overview of how to set and use your levels.
 
if you can HEAR a difference between 16 bit and 24 bit, you're talking yourself into it--sort of an aural placebo effect. The only difference between 16 bit and 24 bit is the dynamic range available. The actual sound at the levels you can actually hear is identical ... upsampling from 16 bit still leaves your tracks with the dynamic range you started with

Indeed. Further, this is just wrong:

Otherwise there is dithering to get it to 44.1.

--Ethan
 
I follow the "NyQuil Rule" which states, if you have a good night sleep and your sinuses are clear everything sounds better the next day. (And yes, this has been approved by at least one, "Major engineer.")
 
Well, the link that explains everything, including the testing system, is:
24/192 Music Downloads are Very Silly Indeed

For people that just want the straightforward answer:
Tweakheadz · 16 Bit vs. 24 Bit Audio
Moulton Laboratories :: 24 Bits: Can You Hear 'Em? 96 kHz: Can You Hear It?

Boom

I always liked Tweakheadz.

He says go 24bit/44.1,
but also that our stoned friends and grandparents and infants won't notice a difference at CD quality 16/44.1,
and the video gang often likes 24/48.

That's how I get it.
 
Aren't you happy? 24bit is a big improvement. The Hz does matter. Since the CD will be 44.1, it is better to set it at 88.2 than at anything not divisible by 44.1.

Sample rate conversion is not simple division. There are interpolation algorithms employed, so whether you downsample from 96 or 88.2, the main point of concern is the quality of the converter, be it hardware or software. Read this thread:

88.2 khz - The Womb

You got JJ Johnson and Bob Ohllsson piping in there along with other guys with PhDs and platinum records, so it would probably do you well to read it thoroughly.
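To make the "not simple division" point concrete, here's an illustrative sketch (not from that thread; it uses SciPy's resample_poly as a stand-in for whatever resampler your DAW or converter uses): both 88.2k -> 44.1k and 96k -> 44.1k go through the same kind of polyphase interpolation filtering.

```python
# Illustrative sketch only: neither 88.2k -> 44.1k nor 96k -> 44.1k is
# just "throwing away every other sample"; both run through a polyphase
# interpolation filter. Assumes SciPy is installed.
from fractions import Fraction

import numpy as np
from scipy.signal import resample_poly

def downsample(signal: np.ndarray, src_rate: int, dst_rate: int) -> np.ndarray:
    ratio = Fraction(dst_rate, src_rate)  # 44100/96000 reduces to 147/320
    return resample_poly(signal, up=ratio.numerator, down=ratio.denominator)

rng = np.random.default_rng(0)
one_sec_96k = rng.standard_normal(96_000)   # one second of noise at 96 kHz
one_sec_88k = rng.standard_normal(88_200)   # one second of noise at 88.2 kHz

print(len(downsample(one_sec_96k, 96_000, 44_100)))  # 44100 samples
print(len(downsample(one_sec_88k, 88_200, 44_100)))  # 44100 samples
# The audible quality of the conversion depends on the resampler's filter
# design, not on whether the ratio happens to be a tidy 2:1.
```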

Otherwise there is dithering to get it to 44.1. Now here's a kicker. A major engineer did an upsampling from 16 to 24bit. The sound actually improved. Even on the bench you could see it was better. He had no idea why...it just sounded better.

As Ethan said, this is wrong. In fact, most of the above quote is nonsense. Firstly, upsampling is when you change a lower sample rate to a higher sample rate, not a change in bit-depth, as you are describing. Secondly, converting a 16-bit file to 24-bit will not cause an improvement in measured quality, i.e. noise floor, etc. The noise floor of the 16-bit file will still be represented at the same level in the 24-bit version. What it WILL do, however, is allow better amplitude resolution for processing such as reverb or delay IF working in a 24 bit environment but the processing has to be applied POST conversion. I would wager that your buddy heard a difference because he told himself he did.

Dither is randomised low level digital noise added to a signal when LOWERING bit depth. It combats truncation distortion due to signals decaying past the LSB (Least Significant Bit).
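Here's a minimal sketch of those two points (a toy example with assumed integer/float representations, not anyone's production code): padding 16-bit up to 24-bit carries the old noise floor along unchanged, while TPDF dither is only added when reducing bit depth.

```python
# Minimal sketch of the two points above (toy example, not production code).
import numpy as np

def pad_16_to_24(samples_16bit: np.ndarray) -> np.ndarray:
    # "Converting" 16-bit to 24-bit just shifts the same values into a
    # wider word; the 16-bit quantisation noise floor comes along unchanged.
    return samples_16bit.astype(np.int32) << 8

def quantise_with_tpdf_dither(x: np.ndarray, bits: int = 16) -> np.ndarray:
    # Dither: add low-level triangular-PDF noise (about +/- 1 LSB) BEFORE
    # reducing bit depth, so quiet signals decaying past the last bit
    # become benign noise instead of truncation distortion.
    lsb = 1.0 / (2 ** (bits - 1))
    tpdf = (np.random.uniform(-0.5, 0.5, x.shape) +
            np.random.uniform(-0.5, 0.5, x.shape)) * lsb
    return np.round((x + tpdf) / lsb).astype(np.int32)

print(pad_16_to_24(np.array([1, -1, 100], dtype=np.int16)))  # [256 -256 25600]

quiet_tone = 0.001 * np.sin(2 * np.pi * 1000 * np.arange(100) / 44100)
print(quantise_with_tpdf_dither(quiet_tone, bits=16)[:8])
```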

Remember while recording to watch your levels, make everyone play as loud as they can while setting the levels and if necessary put a limiter on the input to protect your headroom. We thrash it out for a minute while setting things just below "0" and the tracks sound sweet when we use dynamics. Good Luck
NewYorkRod

You are living in the past, Rod. "Just below 0" is an old technique from a bygone era. There's a sticky around here about gain staging and digital levels. Read it. It'll help you. Also, read Bob Katz's "Mastering Audio". It will clear up most of the misconceptions you seem to be suffering from.

Cheers :)
 
We seem to have strayed off the path of discussing whether people can hear the difference between 16-bit and 24-bit, 44.1 kHz and 96 kHz. I'm willing to accept that most people can't hear a difference. I'm even willing to accept that no one can hear a difference. However, I don't think that provides a complete answer (though I'll preface this by saying that I'm just a hobbyist, and not remotely close to the competency of a recording engineer or professional producer).

I don't record a single dry stereo track that I then put on CD. My music consists of multiple tracks, frequently dozens of tracks, to which I apply a variety of effects as needed and then mix down to a stereo master. It seems to me that the greater the "granularity" of a single track, the more likely it is to distort or produce unwanted harmonics when effects are applied and it is mixed with other tracks. I have heard the difference on vocal tracks processed with Melodyne -- 16-bit, 44.1 kHz tracks absolutely will generate more unwanted, non-existent-in-the-original harmonics than 24-bit, 96 kHz tracks when pitch-shifted. I also frequently wind up playing with the length of sustained notes, both vocal and instrumental. Whether stretching or shrinking, I've found that fewer samples of less depth produce a harsher sound than more samples of greater depth. Another extreme example: I've got one song that opens with the sound of a ticking clock. I had sampled the "tick tock" sound about 20 years ago with an Ensoniq Mirage -- an 8-bit sampler. Applying reverb to the clock sound to get a sense of depth resulted in completely bizarre harmonics and distortion when I tracked it at 16-bit/44.1 kHz, but a nice, natural sound when tracked at 24-bit/96 kHz.

Finally, there are the natural harmonics generated by the instruments themselves. Pianos use 3 strings each for all notes but those in the lowest register. Anyone who knows acoustic pianos knows that a talented piano tuner does not tune each of the 3 strings exactly to pitch; leaving a minuscule amount of difference in the tuning of each string causes them to "beat" against each other and it is that which produces the characteristic piano sound. That's why professional piano tuners, the ones who tune for symphony orchestras, don't use electronic tuning aids but still rely on tuning forks that provide the pitch they match by ear. The amount of difference between each string must be minuscule -- a matter of 1 Hz or less -- or the piano will sound out-of-tune. Having played pianos all my life, I can hear the difference between a piano piece tracked at lower resolutions and one tracked higher.

I'm neither mathematician nor engineer enough to provide mathematical proof that, when mixing tracks and/or applying effects to tracks, different resolutions will result in different sounds. However, I've heard enough individual instances in my own music to be convinced that it happens. Could it be the result of my own poor recording technique and general ignorance? Absolutely, but isn't that all the more reason for me to use the highest-available sampling rate? All it costs me is disk space, and I have plenty of that. I never delete a take (you never know when you might need it, or a piece of it), I never overwrite a master, and I always keep each iteration of a mix. With all of that, my current project, about an hour of continuous music (though not all of it has wound up in what will be the final result -- I've cut about half the songs I wrote from the project), along with various sketches, experiments and accidental duplicates, occupies less than half a terabyte. As my recording drive is 3 terabytes of RAID 5, and I have two back-up NAS devices, each of which is 3 terabytes of RAID 5, storage space is a complete non-issue.

I simply can't think of any reason why I would want to work in anything other than 24-bit/96 kHz.
 
Pianos use 3 strings each for all notes but those in the lowest register. Anyone who knows acoustic pianos knows that a talented piano tuner does not tune each of the 3 strings exactly to pitch; leaving a minuscule amount of difference in the tuning of each string causes them to "beat" against each other and it is that which produces the characteristic piano sound. That's why professional piano tuners, the ones who tune for symphony orchestras, don't use electronic tuning aids but still rely on tuning forks that provide the pitch they match by ear. The amount of difference between each string must be minuscule -- a matter of 1 Hz or less -- or the piano will sound out-of-tune.
Having been a piano tuner for forty years, and one of the busiest piano tuners anywhere (3-5 a day, 6 days a week, for the last 20 years of it before I moved to Florida a few years back), and having tuned pianos for symphony and large touring acts, I can definitively say that is totally wrong.

The reason pro tuners use their ears instead of a tuner is because you have to 'stretch' the tuning towards the ends of the keyboard and the amount you need to stretch it can vary from piano to piano because of different scale design. It has nothing to do with leaving beats in a trichord.

Further .... NO good piano tuner leaves beats between the three strings in a note (or two in much of the bass). Those beats are the very thing they try to totally eliminate.
We do use beats to get the 4ths and 5ths right ...... 4ths and 5ths should have 3 beats in 5 seconds ... but there should never be any beats in unisons if possible.

The fact is that it is usually impossible to get a note totally beatless because often a single string can have beats all by itself because of variations in its diameter.
There is nothing that can be done about that.

And lastly, using a tuning fork to set your temperament is no different than using an electronic tuner to set your temperament. Yes ..... crummy amateurish tuners will tune every string by tuner and get a crummy tuning (but because of the stretch issue and NOT because of the reasons you gave), but a good tuner uses a tuning fork to set a single string by ear and goes from there ..... you can do the same thing with an electronic tuner and get the same result. Either way you're setting an individual string and the rest is by ear, so there's no difference.

I use an electronic tuner because I do lots of church pianos and, except for Hammonds which are always around A=441/442, church organs can vary a lot as to how they're tuned. I've seen big Baldwin organs that were a quarter step off! :eek:
So I like an electronic tuner because I can set it to the organ ..... it's like having a variable tuning fork.

But anyway ... your comments about pianos are wrong.
 