24 bit vs 16 bits

  • Thread starter: turtle_michael
1) Signal-to-noise ratio alone is not bit depth, and there is no easy comparison between analog and digital in this regard.

I know you're a smart guy, so I'm surprised you don't understand that s/n is directly related to bit depth. Monty even proves that tape hiss and digital hiss (without noise shaping) are the same:

Monty Montgomery explains digital audio

2) With noise reduction of various types, analog tape, even many cassette-based systems can meet or exceed the practical signal-to-noise ratio of 16-bit and even 20-bit digital devices. The theoretical S/N ratios of given bit-depths do not occur in real digital devices.

Bobbsy already explained that this is incorrect. It's common, especially among audiophile types, to hold up bogeymen such as truncation distortion and jitter as damaging, even though they're 90 and 110 dB below the music. But with analog tape noise reduction, the tape hiss is never farther below the music than the inherent s/n affords. Yes, the noise pumps up and down with the signal level, so in practice tape noise reduction reduces noise subjectively. But the underlying s/n is not really improved.

3) There are many other benefits to higher bit-depth, not the least of which is more effective error correction, which reduces the number of unrecoverable errors in a digital sample.

Dude, whatever you're smoking, please pass some over this way. :D

If anything, having more bits in a data stream only increases the odds of bits being dropped. But how often does that happen between a computer sound card and the hard drive? Answer: Never. The only thing bit-depth affects is s/n. If you have any evidence to the contrary I'd love to hear it. But you'll have a very hard time disproving Shannon et al. :rolleyes:
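
[Editor's aside: the "s/n is directly related to bit depth" point can be checked numerically. Below is a minimal Python sketch, my own illustration rather than anything posted in the thread, that quantizes a full-scale sine to N bits and compares the measured SNR against the textbook 6.02N + 1.76 dB figure.]

```python
import math

def theoretical_snr_db(bits):
    # Textbook quantization SNR for a full-scale sine: 6.02*N + 1.76 dB
    return 6.02 * bits + 1.76

def measured_snr_db(bits, n=100_000):
    # Quantize a full-scale sine to an N-bit grid (no dither) and
    # measure the resulting signal-to-noise ratio directly.
    levels = 2 ** (bits - 1) - 1  # max positive code for signed N-bit
    sig_power = 0.0
    err_power = 0.0
    for i in range(n):
        x = math.sin(2 * math.pi * 440 * i / 44100)
        q = round(x * levels) / levels  # snap to the N-bit grid
        sig_power += x * x
        err_power += (x - q) ** 2
    return 10 * math.log10(sig_power / err_power)

for b in (8, 12, 16, 24):
    print(b, round(theoretical_snr_db(b), 1), round(measured_snr_db(b), 1))
```

At 16 bits both numbers land near 98 dB. Real converters fall a few dB short of the theoretical figure, which is the kernel of truth in the "theoretical vs practical" objection, but the bits-to-noise relationship itself holds.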

--Ethan
 
Bit depth discussions are Beck's Bat-light.:D

[image: BatLight2.jpg]
 

LOL Ethan, you have a way of rephrasing things people say to be exactly how they did not say it. :)

My contribution to this thread is 100% correct, even if it's above and beyond the level at which most people are accustomed to having these discussions, which I affectionately call "the wiki level." Good luck with that, but I'm not following the rest of you there. I'd have to get a lobotomy to do so.

I didn't say bit-depth didn't have anything to do with S/N. I said S/N is not all that bit-depth is about. And my reference to theoretical vs practical S/N ratios is all about how devices with resistors, capacitors, op-amps, etc., function in the real world. The theoretical (on paper) performance of a given bit-depth is not how it ends up in a given product.

As for error correction, you couldn't be more wrong, and even a beginning programmer could tell you: in the way error correction works for the applications under discussion, the greater the word length, the better.

And stop making vague references to Shannon (or Nyquist) like it's some magic incantation to settle every argument... in your favor! Or this Monty fellow, whose video made me laugh out loud when I first saw it. Debate should be so easy. LOL :D

Fact is I don't smoke anything or drink, and never have. So now I'm in the somewhat troubling position of being caught between people who used to know what they were talking about before forgetting half of what they know, and people who are just starting out and don't know anything yet. Clowns to the left of me, jokers to the right... and I'm watching it all unfold while stone cold sober!

Cheers! :drunk:
 
My contribution to this thread is 100% correct

Ah, yes, a legend in his own mind.

I didn't say bit-depth didn't have anything to do with S/N. I said S/N is not all that bit-depth is about.

Yet you seem unable to explain what else bit depth affects besides noise. :rolleyes:

So you think Shannon and Nyquist were wrong? Oh, please tell us more about that!

--Ethan
 
24/16 experiments

And because it really doesn't matter in the context of a mix.

I recorded all of my samples at 44.1/24 bit, chopped them up, and made both 16- and 24-bit sets. I could not tell the difference. The only reason my samples are sold in 24 bit is that I didn't want to add an extra step to the process, which could lead to accidentally getting the samples out of order and not being able to match the ones that go together (direct snare and overhead of the same hit, for example).

My first digital trial (analog prior) was with Tascam and a 20-voice early music vocal ensemble. I figured CDs are 16 bit @ 44.1, so I obtained a recorder and mixer at 16 bit, 44.1k sample rate. I figured the limiting factor would be the final CD, so why go 24 bit on the record/mix? All was well until the mix... artifacts making me cringe! After lots of experimentation, I copied the 16-bit originals to 24 bit. No change in sound quality unless changes in level, EQ, or compression were added in the digital post-processing. Seems that when you change level, as simple as adjusting the volume, the samples representing the waveform had to scramble for the nearest bit representing the waveform. At 16 bits, there just were not enough steps on the bit ladder to form an accurate waveform at the new level, whether up or down. By switching to 24 bit, enough rungs on the bit ladder were available to keep the waveform relatively accurate, and much of the audible artifacts went away.

So in my experience, recording and playing back 16 bit vs 24 bit makes little difference. But when changes are introduced (EQ, compression, volume change) it makes a big difference to my ear.
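
[Editor's aside: the "bit ladder" picture is loose, but there is a measurable effect behind it. A signal sitting well below full scale uses fewer of the available quantization levels, so its SNR drops by roughly 6 dB for every 6 dB of attenuation when no dither is applied. A rough Python sketch of this, my own illustration rather than Dean's actual signal chain:]

```python
import math

def snr_at_level_db(bits, atten_db, n=100_000):
    """Quantize a sine attenuated below full scale (no dither) and
    measure the resulting signal-to-noise ratio in dB."""
    levels = 2 ** (bits - 1) - 1
    gain = 10 ** (-atten_db / 20)
    sig = err = 0.0
    for i in range(n):
        x = gain * math.sin(2 * math.pi * 440 * i / 44100)
        q = round(x * levels) / levels  # snap to the N-bit grid
        sig += x * x
        err += (x - q) ** 2
    return 10 * math.log10(sig / err)

# A quiet passage 40 dB below full scale: 16-bit SNR falls to
# roughly 58 dB, while 24-bit still keeps it around 106 dB.
print(round(snr_at_level_db(16, 40), 1), round(snr_at_level_db(24, 40), 1))
```

With proper dithering the quantization error becomes benign noise rather than correlated distortion, which is the standard explanation for artifacts appearing when level, EQ, or compression changes are processed at 16 bits without dither.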

Backstage Recording (still Retro)
Dean
 
If you're hearing artefacts that severe at 16 bit/44.1 sampling then there's something wrong with your gear. Digital simply doesn't work that way and, contrary to the myth, there are no steps or samples to jump between.

THIS video (already widely distributed on HR) explains all.
 
Seeing how much action the video is getting... they could have gotten a nice-looking babe to narrate instead of Monty. :D

Absolutely. Monty gets my nomination for "worst beard of the month" for August.

I'd be happy to give Miranda Kerr tuition in digital theory if she was willing to re-do the video...
 
This old chestnut.

Uy yuy yuy. :facepalm:

Learned my lesson. The lesson:

Who gives a shit? It's not about the gear or the bit depth or the linearity or the sample rate or the s/n or the dither or the truncation distortion or the blah blah blah blah blah.

Technology is incidental to the creation, recording and mixing of music. Be the conduit. If you notice the "engineering" you're either doing it wrong or the guy listening to it is an engineer, in which case you shouldn't care what he thinks.

Today, with the converters and software we have at our fingertips - where the quality margins between products keep narrowing, by the way - it's easy to get caught in the frantic debate around the merits and demerits of the gear. Don't. Close Firefox and get on with making music. If you do it enough, you will get better at it. It's in your hands, ears and brain, not in the gear.

The end.

Cheers :)
 

There you go again, putting words in people's mouths. No, Nyquist and Shannon were not wrong about the implied theory in the context and scope in which they addressed it, but you are wrong in how you interpret, invoke, and misapply the significance and meaning of Nyquist-Shannon. You seem to think you're their representative on earth, like the Pope is (thinks he is) for Christ. LOL Whenever we disagree with Ethan, we're accused of disagreeing with Nyquist and Shannon or some other professor X, Y, or Z. That's not debate, but religion. We're talking about bit depth here (word length), not sampling frequency anyway. You seem to be confusing the two.

But if bit-depth is only about S/N and dynamic range as you say, then why do we even need 16-bit? With today's limited dynamic range, AKA the loudness wars, why not 14, 12, or even 8-bit? After all, 0 dBFS is 0 dBFS, is it not? So if we're mastering within a 3 dB range, as many people are, you tell us what the advantages of 16-bit are over something like 12-bit. In the process, who knows, you might even answer your own question. ;)
 
The only disadvantage to 12 bit is the noise floor would be around 24 dB louder.

Anyway, even if all the peaks are within 3 dB of each other, the waves still have to cross from positive to negative through that noise floor.

But you're correct, even 12 bit would be quieter than cassette was, which was quieter than the road noise in the car I was listening to it in.

It's always interesting how both artists and engineers argue over the smallest details, when the people they are attempting to serve/entertain clearly don't care or even notice.
 
But if bit-depth is only about S/N and dynamic range as you say, then why do we even need 16-bit?

Memory chips and computer buses are organized into groups of 8 bits, so it would take much more code to handle word lengths that aren't a multiple of 8.
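
[Editor's aside: the byte-alignment point can be made concrete. 24-bit PCM packs cleanly into three bytes per sample (3-byte little-endian is how WAV stores 24-bit audio), while a 12-bit word would either waste four bits of padding or force two samples to share three bytes. A minimal Python sketch, illustrative rather than taken from any particular library:]

```python
import struct

def pack_24bit(samples):
    # Each 24-bit sample fits exactly three bytes: take the low
    # three bytes of a little-endian 32-bit signed integer.
    out = bytearray()
    for s in samples:
        out += struct.pack("<i", s)[:3]
    return bytes(out)

def unpack_24bit(data):
    # Reverse the packing: sign-extend each 3-byte group back
    # to a Python int.
    return [int.from_bytes(data[i:i + 3], "little", signed=True)
            for i in range(0, len(data), 3)]
```

A 12-bit format, by contrast, would need bit-level packing or padding code on every read and write, which is the "much more code" being alluded to.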

For all your caterwauling and obfuscation, you still haven't explained what more there is to bit depth other than dynamic range. :D

--Ethan
 
I was reading on this a few minutes ago, and one comment was that 16 bit has far fewer noise issues than a 2" reel-to-reel, and QJ's Thriller was made on 2" tape.

There's a bunch of threads on this these days, and it all comes down to noise, it seems... imo

And then for even a moderately well done home recording environment, 16/44.1 is fine and the noise floor is still better than Quincy Jones' Thriller noise floor (technically)... and I'll save a bunch of storage space.
 
16/44.1 is fine and the noise floor is still better than Quincy Jones' Thriller noise floor (technically)

Yes! When I owned a professional studio in the 1980s, I would have killed to have a modern DAW setup like I enjoy now with SONAR. We had more than $100,000 worth of gear, yet SONAR with a 16-input interface running on a modern PC, for a total cost of a couple thou, kills what we had back then. Not just much better fidelity, with no need to bounce tracks (which adds yet more noise and distortion), but also the sheer convenience and capability of editing, total recall, etc.

--Ethan
 