24 bit in and around, 16 bit out?

  • Thread starter: jedblue
Ha, I hate when I do that.
I've done it two or three times (last night, for cryin' out loud!): adjusting the wrong thing, or something in bypass. Each time I would swear I heard it.
:rolleyes:
 
Did you ever make a mix that sounds great, only to have it sound like crap the next day? Did you ever hear someone else's commercial mix sound amazing, only to have it sound not so amazing on a different day? Did you ever tweak a snare EQ to perfection only to discover later you were actually adjusting the EQ on a muted BG vocal track?
--Ethan


yeah but I was drunk.
 
But - as I understand it - when you go to record this situation, something different happens. If that sound is captured (and preamped) at a level lower than the composite analog noise level of the downstream chain, the analog noise will in effect mask the lower level stuff because it will get buried in the circuit noise itself. By the time it gets to digital, it's too late, all the digital will do is more or less faithfully reproduce the analog noise.

Of course, a digital recording should accurately record the entire analog input signal ("real" signal plus analog noise) down to its effective LSB level, since the digital recorder has no way of knowing what is the "real" signal and what is "analog noise". IOW, the real signal detail should be there down to the LSB in a 16-bit recording, which should give 4 or 5 bits of detail below Glen's figure for the analog noise level. The "masking" effect of the analog noise is the tendency of the (louder) analog noise to mask our hearing of that quieter detail on playback.

That is my interpretation of Ethan's essential point in this thread: that even 16-bit recordings have enough detail to adequately record and play back all the detail needed, because the additional detail below 16-bit resolution doesn't make an audible difference in the ultimate mix playback, since our perception of that lower level detail is masked by other, higher noise levels. Nicht wahr? :)
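Otto's point that detail below the analog noise floor still survives 16-bit quantization can be sketched numerically. This is a rough NumPy illustration with made-up levels (a -70 dBFS noise floor and a tone near -90 dBFS, about one 16-bit LSB in amplitude), not a model of any particular gear; the noise effectively dithers the quantizer, so the tone is still recoverable:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 44100
n = fs  # one second
t = np.arange(n) / fs

# Made-up levels: "analog" noise floor near -70 dBFS,
# a low-level detail tone near -90 dBFS (~1 LSB at 16 bits).
noise = rng.normal(0, 10 ** (-70 / 20), n)
tone = 10 ** (-90 / 20) * np.sin(2 * np.pi * 1000 * t)
analog = noise + tone

# Quantize to 16 bits (LSB = 1/32767 for full scale [-1, 1]).
q = np.round(analog * 32767) / 32767

# Correlate the quantized signal against the known tone:
# a value near 1.0 means the below-the-noise detail survived.
detected = np.dot(q, tone) / np.dot(tone, tone)
print(f"recovered tone fraction: {detected:.2f}")  # close to 1.0
```

The noise here does the same job dither would: it decorrelates the quantization error from the signal, so detail smaller than an LSB is preserved on average, just as Otto describes.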

Cheers,

Otto
 
However, many VSTi Synths now generate 32-bit audio directly.
That's the key here isn't it?

See, most people on this board think from the perspective of using the computer as a sort of tape recorder, where you capture some external sources. Many forget that with the computer you can generate the sounds within the computer itself, thus avoiding the room noise, mic noise, preamp noise, line-in noise and all that stuff.

As I demonstrated in an earlier thread, when you are dealing with sound sources that have been generated INSIDE THE COMPUTER, then higher bit depths AND sample rates make a vast difference.
 
A long time ago I read that if you play two recordings to a person, and the only difference is that one is slightly louder than the other, the listener will almost always say the louder one sounds better.

It must be human nature that louder is better.
 
That's the key here isn't it?

See, most people on this board think from the perspective of using the computer as a sort of tape recorder, where you capture some external sources. Many forget that with the computer you can generate the sounds within the computer itself, thus avoiding the room noise, mic noise, preamp noise, line-in noise and all that stuff.

As I demonstrated in an earlier thread, when you are dealing with sound sources that have been generated INSIDE THE COMPUTER, then higher bit depths AND sample rates make a vast difference.
That makes sense to me if you could hear unprocessed digits, but don't you have noise on the way out too (D/A)?
 
A long time ago I read that if you play two recordings to a person, and the only difference is that one is slightly louder than the other, the listener will almost always say the louder one sounds better.

It must be human nature that louder is better.
Absolutely. That's one of the basic hearing biases that an engineer should learn to recognize and account for during critical listening.

This is also a phenomenon long known and used by salespeople (I used to be one many, many years ago, before I saw the light). Various loudspeakers (including "studio monitors") operate at various efficiencies; i.e. the actual sound level per watt of juice you pump into them varies. If you wanted to sell the chump...er...customer the high-efficiency speakers, you'd always play them second, after a pair of lower-efficiency ones with the amp volume control untouched, making a point to mention that you were leaving the volume alone so that it's a "fair" test. Gets 'em every time. You switch from low- to high-efficiency speakers and the high-efficiency pair always sounds "better" to the untrained ear, simply because it's louder. If you wanted to sell the lower-efficiency speakers, you'd find an excuse not to play the higher-efficiency ones.

G.
 
That's the key here isn't it?

See, most people on this board think from the perspective of using the computer as a sort of tape recorder, where you capture some external sources. Many forget that with the computer you can generate the sounds within the computer itself, thus avoiding the room noise, mic noise, preamp noise, line-in noise and all that stuff.

As I demonstrated in an earlier thread, when you are dealing with sound sources that have been generated INSIDE THE COMPUTER, then higher bit depths AND sample rates make a vast difference.

I'd be interested to read that thread. Can you give us a link?

Cheers,

Otto
 
That makes sense to me if you could hear unprocessed digits, but don't you have noise on the way out too (D/A)?
Hearing unprocessed digits is not the point :p

The point is, when you are generating sounds through software ITB using soft synths, the soft synths have an easier time generating more accurate waveforms at higher bit depths and sampling rates. This is irrespective of any AD/DA process.
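The bit-depth half of that claim is easy to quantify. A minimal NumPy sketch (the frequency and level are arbitrary choices) measures the quantization-error floor when an "ideally generated" sine is stored at 16 versus 24 bits:

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs
x = 0.5 * np.sin(2 * np.pi * 440 * t)  # the "ideal" synth output

def quantize(sig, bits):
    """Round to the nearest step of a signed integer grid of the given depth."""
    scale = 2 ** (bits - 1) - 1
    return np.round(sig * scale) / scale

results = {}
for bits in (16, 24):
    err = x - quantize(x, bits)
    results[bits] = 20 * np.log10(np.sqrt(np.mean(err ** 2)))
    print(f"{bits}-bit quantization error: {results[bits]:.1f} dBFS RMS")
```

The 24-bit error floor lands roughly 48 dB (8 bits x ~6 dB/bit) below the 16-bit one. Whether that difference is audible after D/A and playback noise is exactly what the rest of this thread argues about.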
 
I'd be interested to read that thread. Can you give us a link?

Cheers,

Otto

Here. Although this is about using higher sample rates rather than a comparison between 16- and 24-bit audio generation.

I read an article some time ago arguing that higher bit depths come into play when generating low bass frequencies. I haven't specifically tested that hypothesis; it would be interesting to do.
 
Just to toss another fly into the ointment... ;)

I know the majority of debates about "16-bit VS 24-bit" and "44.1 VS anything higher" all center on human hearing and what we can hear consciously...and there is an implied assumption by the "lower-is-good-enough" crowd that if we can't hear it consciously, then it's not important.

Thing is...we DO hear all those sounds outside of the "conscious" human hearing range; our ears don't shut down at 22kHz...
...we may be processing those sounds differently......
 
Thing is...we DO hear all those sounds outside of the "conscious" human hearing range; our ears don't shut down at 22kHz...
...we may be processing those sounds differently......

Yes, then there is the theory about the natural resonant frequency of a woman's cli... uhh... nevermind :D
 
Well...AFAIC...it's also just a theory that nothing happens above 22kHz with human hearing..... ;)

All sound waves enter your ear....some are heard consciously, and the rest....???
 
I know the majority of debates about "16-bit VS 24-bit" and "44.1 VS anything higher" all center on human hearing and what we can hear consciously...and there is an implied assumption by the "lower-is-good-enough" crowd that if we can't hear it consciously, then it's not important.

Thing is...we DO hear all those sounds outside of the "conscious" human hearing range; our ears don't shut down at 22kHz...
...we may be processing those sounds differently......

Yeah, I actually wonder more about the impact of the bandwidth limiting than the issue of resolution and number of bits. What I wonder about is whether a properly bandwidth limited signal for a 44.1K sample rate undermines the accuracy of some phase information that our ears can detect in the location of sound. I need to reread Streicher's book on stereo sound and see if he has any good references on the issue.

Cheers,

Otto
 
As I demonstrated in an earlier thread, when you are dealing with sound sources that have been generated INSIDE THE COMPUTER, then higher bit depths AND sample rates make a vast difference.

Vast difference? Not likely. But tell you what. Please post a 10-second render of some music at 32 bits, then I'll download and convert it to 16 bits without even using dither. I'll post the results here, and everyone can tell us if they hear a "vast" difference. Deal?

--Ethan
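For anyone who wants to run Ethan's proposed test themselves, the conversion step he describes (32-bit float down to 16-bit integer, deliberately with no dither) is only a few lines. A hedged NumPy sketch; the test signal is made up, and a real test would read and write actual WAV files:

```python
import numpy as np

def float_to_16bit_no_dither(x):
    """Convert float samples in [-1, 1] to int16 by plain rounding --
    no dither, matching the conversion Ethan proposes for the test."""
    return np.clip(np.round(x * 32767), -32768, 32767).astype(np.int16)

# The truncation error is bounded by half an LSB, around -96 dBFS:
x = 0.25 * np.sin(2 * np.pi * np.linspace(0, 100, 44100))
err = x - float_to_16bit_no_dither(x) / 32767.0
peak_err_db = 20 * np.log10(np.max(np.abs(err)))
print(f"peak conversion error: {peak_err_db:.1f} dBFS")
```

That error floor sits far below the analog noise levels discussed earlier in the thread, which is the crux of Ethan's challenge.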
 
Just to toss another fly into the ointment... ;)

I know the majority of debates about "16bit VS 24bit" and "44.1 VS anything higher" all center on human hearing and what we can hear concsiously...and there is an impliead assumption by the "lower-is-good-enough" crowd that if we can't hear it conscioulsy, then it's not imporant.

Thing is...we DO hear all those sounds oustide of the "conscious" human hearing range, our ears don't shut down at 22kHz...
...we may be processing those sounds differently......

I believe that what you're saying is not only true, but key.

Decades ago I read that scientists in Russia did hearing tests where they found that people could tell whether a sound was on or off, with the sound-producing mechanism placed right against their heads, up to 200 kHz!

Over half a cymbal's energy is above 100 kHz; that's why cymbals never record as they sound.

My guess, and it's just a guess, is that we have an aura surrounding our bodies, that sound waves affect that aura well above 20 kHz, maybe up to infinity, and that those sounds have an effect on the emotional aspect of how we feel music.
 
Vast difference? Not likely. But tell you what. Please post a 10-second render of some music at 32 bits, then I'll download and convert it to 16 bits without even using dither. I'll post the results here, and everyone can tell us if they hear a "vast" difference. Deal?

--Ethan

Did you listen to the examples? If you can't hear the differences in the examples in that post, then your ears are made of wood.
 
Yeah, I actually wonder more about the impact of the bandwidth limiting than the issue of resolution and number of bits. What I wonder about is whether a properly bandwidth limited signal for a 44.1K sample rate undermines the accuracy of some phase information that our ears can detect in the location of sound. I need to reread Streicher's book on stereo sound and see if he has any good references on the issue.

Cheers,

Otto

The New Stereo Soundbook: www.stereosoundbook.com reminds me that our ears use phase differences for stereo location for long wavelengths (under 500 Hz) and head shadow intensity differences for short wavelengths (over 2K) and a mixture of both in the region where the wavelength is comparable to the diameter of the head. The fact that we use shadow effects for localizing high frequencies leads me to suspect that the absence or presence of original content over 20K may not be particularly necessary for accurate localization perception. I'll read through the references and see if any look promising for tests of localization over 20K.
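The crossover regions Otto summarizes from the book can be sanity-checked with a quick wavelength calculation. A small sketch, assuming a speed of sound of 343 m/s and a rough head diameter of 17.5 cm (both round-number assumptions, not figures from the book):

```python
# Wavelength relative to head size at the frequencies Otto mentions:
# phase cues dominate where the wavelength is much longer than the head,
# shadow/intensity cues where it is comparable to or shorter than it.
c = 343.0      # speed of sound in air, m/s (assumed)
head = 0.175   # rough head diameter, m (assumed)

for f in (250, 500, 2000, 20000):
    wl = c / f
    print(f"{f:>5} Hz: wavelength {wl:.3f} m ({wl / head:.2f} head diameters)")
```

At 500 Hz the wavelength is about four head diameters (phase cues usable); around 2 kHz it is roughly one head diameter, which matches the book's transition to shadow-based localization.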

Cheers,

Otto
 
Did you listen to the examples? If you can't hear the differences in the examples in that post, then your ears are made of wood.
Didn't know which thread to post this in :D

I'm looking at the two WAVs in WavePad Sound Editor, and the B wav has higher peaks and more detail (resolution). It looks like the A wav is more compressed. Which would sound louder? Why is that?

Don't the differences become undetectable when you have complex waveforms?
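One way to see why the "more compressed" file can sound louder: compression lowers the peak-to-RMS ratio (crest factor), so at the same peak level the compressed file carries more average energy, and the loudness bias discussed above kicks in. A rough NumPy sketch with synthetic signals (the tanh "compressor" and all levels are made up, not the actual files from that thread):

```python
import numpy as np

t = np.arange(44100) / 44100
b = np.sin(2 * np.pi * 220 * t)   # "B": uncompressed sine, higher crest
a = np.tanh(4 * b)                # "A": crudely compressed version

crest = {}
for name, s in (("A (compressed)", a), ("B (uncompressed)", b)):
    s = 0.9 * s / np.max(np.abs(s))           # normalize to equal peaks
    peak = 20 * np.log10(np.max(np.abs(s)))
    rms = 20 * np.log10(np.sqrt(np.mean(s ** 2)))
    crest[name] = peak - rms
    print(f"{name}: peak {peak:.1f} dB, RMS {rms:.1f} dB, crest {crest[name]:.1f} dB")
```

With peaks matched, the compressed signal's RMS comes out a couple of dB higher here, which listeners will typically report as sounding "better," per the loudness effect described earlier in the thread.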
 