CD Burner Question

  • Thread starter: Bonz
SouthSIDE Glen said:
Unfortunately it's the consumer that loses on this one. The use of the phrase "20-bit CD" by the record people really hacks me off.

It not only implies a total falsehood - the CDs are not 20-bit in any way - that fools fine folk like yourself, but it also makes it sound like there's something particularly special about these CDs.
Well, there can be 20 bits' worth of musical signal there, as music can exist below the noise floor of the system (96 dB in a 16-bit system). With proper noise shaping, you can achieve a 20-bit-like noise floor at 16 bits.

The reason I take issue with it is that the consumer is generally ignorant about sampling and thinks it's something that will sound better, when in reality the only difference is below -96dBFS...

I don't know about their dithering algorithm, maybe that is a fine product; but what's so damn special about using a 20-bit DAC? That wasn't even that big of a deal when they put them into ADAT/XT-20s five years ago...
Of course, there are different 20-bit DACs - some simply use 20-bit chips but have typical 16-17 bit noise performance, while some really push the envelope and achieve 20-bit dynamic range. Guess which ones are in your typical consumer player...
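
For anyone who wants to sanity-check the numbers being thrown around here, the usual rule of thumb for an N-bit linear PCM system is a dynamic range of roughly 6.02N + 1.76 dB. A minimal Python sketch (purely illustrative):

```python
# Rule-of-thumb dynamic range of an N-bit linear PCM system:
# SNR ~= 6.02*N + 1.76 dB (full-scale sine vs. quantization noise).
def dynamic_range_db(bits: int) -> float:
    return 6.02 * bits + 1.76

for bits in (16, 20, 24):
    print(f"{bits}-bit: ~{dynamic_range_db(bits):.0f} dB")
# 16-bit: ~98 dB   (the commonly quoted 96 dB is the 6.02*16 term alone)
# 20-bit: ~122 dB
# 24-bit: ~146 dB  (never reached in practice; analog noise dominates)
```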
 
bblackwood said:
Well, there can be 20 bits' worth of musical signal there, as music can exist below the noise floor of the system (96 dB in a 16-bit system). With proper noise shaping, you can achieve a 20-bit-like noise floor at 16 bits.
The "noise shaping" you're referring to is basically the dithering down to 16 bits, right? No matter how it's pushed, poked and packed, there's still only the 16 most signifigant bits making it to the CD. Calling them 20-bit CDs is just plain snake oil salesmanship. I might as well call the CD-Rs I burn "24-bit CD-Rs" because they're mixed, mixed down and pre-mastered in - and converted to the 16-bit burn from - 24-bit (some stages are even 32-bit sometimes) in my DAW software. Thats baloney no matter how it's looked at.

That in fact is what I really don't understand about that whole scam. Don't folks like you and Tom and John use 24-bit or higher converters without even thinking twice about it? What's the big deal even supposed to be about the use of the 20-bit converters?

G.
 
SouthSIDE Glen said:
The "noise shaping" you're referring to is basically the dithering down to 16 bits, right? No matter how it's pushed, poked and packed, there's still only the 16 most signifigant bits making it to the CD. Calling them 20-bit CDs is just plain snake oil salesmanship.
Well, if they approach 20-bit performance, what's wrong with it? Whether the dynamic range exists in hardware or through psycho-acoustics, it still exists.

Again, the real issue is that it makes zero difference above -96dBFS!

I might as well call the CD-Rs I burn "24-bit CD-Rs" because they're mixed, mixed down and pre-mastered in - and converted to the 16-bit burn from - 24-bit (some stages are even 32-bit sometimes) in my DAW software. That's baloney no matter how it's looked at.
Actually, that's different - they aren't calling it 20-bit because they used 20-bit conversion throughout, but because the performance supposedly rivals a 20-bit wordlength's dynamic range.

That in fact is what I really don't understand about that whole scam. Don't folks like you and Tom and John use 24-bit or higher converters without even thinking twice about it? What's the big deal even supposed to be about the use of the 20-bit converters?
More bits = greater dynamic range, that's all. What most people miss is that it's not extra 'head-room', per se, but rather more 'foot-room'.

I capture at 24 bits for two reasons:
1] further digital processing (anything from gain changes to editing to limiting) will generally be improved as quantization errors will be farther down in the noise floor, and
2] greater dynamic range for future release formats.
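
A minimal sketch of point 1, assuming a crude round-to-nearest intermediate store (everything here is illustrative, not any particular DAW's internals): drop the gain, quantize, restore the gain, and compare the leftover error at 16-bit versus 24-bit intermediates.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-0.5, 0.5, 100_000)            # stand-in "signal"

def gain_roundtrip(signal, bits, gain=0.1):
    """Cut gain, store at an N-bit word length, then restore gain."""
    step = 2.0 ** -(bits - 1)                  # quantizer step size
    stored = np.round(signal * gain / step) * step
    return stored / gain

for bits in (16, 24):
    err = gain_roundtrip(x, bits) - x
    rms_db = 20 * np.log10(np.sqrt(np.mean(err ** 2)))
    print(f"{bits}-bit intermediate: residual error ~{rms_db:.0f} dBFS")
# The 24-bit intermediate leaves the round-off error roughly
# 8 bits * 6 dB = 48 dB farther down than the 16-bit one.
```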
 
bblackwood said:
Well, if they approach 20-bit performance, what's wrong with it?
The way I see it, what's "wrong" is that it's false advertising. They're not saying "our CDs approach 20-bit performance" or "20-bit-equivalent" or anything like that. They are literally advertising them as - nay, calling them - "20-bit CDs." They are not. They are 16-bit CDs just like everyone else's. They even had someone like ektronic, who is no babe in the woods when it comes to audio technology, fooled.

bblackwood said:
More bits = greater dynamic range, that's all. What most people miss is that it's not extra 'head-room', per se, but rather more 'foot-room'.
I understand all that perfectly. The part I don't get is that, unless I'm misunderstanding the signal chain somewhere, 20-bit conversion is actually inferior to what most mastering houses use as standard for the bit width on their converters these days; i.e. standard equipment these days is at least 24 bits wide. What am I missing here?

G.
 
SouthSIDE Glen said:
The part I don't get is that, unless I'm misunderstanding the signal chain somewhere, 20-bit conversion is actually inferior to what most mastering houses use as standard for the bit width on their converters these days; i.e. standard equipment these days is at least 24 bits wide. What am I missing here?
There are no converters made that actually achieve (much) better than 20-bit performance, so on your typical high-end 24-bit ADC, the last four bits are dominated by noise. Take a 20-bit ADC feeding a 24-bit system and the system will simply add four zeros to the end of the 20-bit word, making it 24-bit. The final four bits in either case are unused by signal, as system noise is rarely lower than -120dBFS.

Heck, few 24 bit ADCs actually achieve 20 bit performance - the extra four bits are typically referred to as 'marketing bits'...
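
What that zero-padding looks like at the bit level - a hypothetical illustration, not any particular converter's behavior:

```python
# Carrying a 20-bit sample in a 24-bit word: shift left four places,
# i.e. append four zero LSBs. No information is gained or lost.
sample_20 = 0x7FFFF              # full-scale positive 20-bit value
sample_24 = sample_20 << 4       # same value in a 24-bit frame
print(f"{sample_20:020b} -> {sample_24:024b}")
# 01111111111111111111 -> 011111111111111111110000
```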
 
bblackwood said:
Heck, few 24 bit ADCs actually achieve 20 bit performance - the extra four bits are typically referred to as 'marketing bits'...
Nothing personal, Brad - I mean that - but that answer just further begs the question. Maybe I'm not doing a good enough job asking the question or something, but I feel like I'm going in circles here.

Let me cut right to the chase: Why don't the marketing bozos who are hawking "20-bit CDs", which they admit in their own documentation refers to the word length of their converters and not the actual word length on the CD or the "simulated quality" of the CD, just go ahead and use the same 24-bit converters the rest of us do and call their CDs "24-bit CDs"? Why make a big deal out of only 20 bits, which in 2005 is technology that's been obsolete for several years?

And if there's "nothing wrong with" them using the terminology they use, then why can't the rest of us make the same claim when we dither our 24 bits down to a 16-bit CD-R (regardless of whether those last 8 bits are all zeros or all ones or anything in between)? I can give the answer to that last question: because we know that it is wrong on both a technical and an aesthetic level, not to mention the subjective ethical level.

On a more analytical point: To say that a 16-bit CD has a "20-bit quality" to it because that is the width of the converter before dithering and/or because of the quality of the dithering algorithm itself is, by definition, incorrect at best, and meaningless at worst. The reduction of word length combined with the addition of dithering is a two-stage compounding of noise onto the signal (even if the last four bits are all zeros). The noise added by dithering to reduce the "unpleasantness" or "unnaturalness" of the sound caused by the mathematical curiosities involved in truncating the word length is nothing more than a "trick" that takes advantage of how the human ear and mind process sound. In strict mathematical terms it is still adding a level of noise that takes the bits one step further away from their original 20-bit value. The dithering may make it sound "better", but it does so in a way totally unrelated to the way that a wider word length would. Dithering does not add resolution or dynamic range to the signal; it just "smears" the current resolution in the last bit(s) to take the edge off the truncation.

That actually brings up another question I've had for some time now: why the powers that be in designing these systems decided to use the extra bits to increase dynamic range instead of to increase the resolution within a dynamic range. But that's a question for another thread...

G.
 
SouthSIDE Glen said:
Nothing personal, Brad - I mean that - but that answer just further begs the question. Maybe I'm not doing a good enough job asking the question or something, but I feel like I'm going in circles here.
Indeed. But hey, I've got all day - I love discussing this stuff...

Let me cut right to the chase: Why don't the marketing bozos who are hawking "20-bit CDs", which they admit in their own documentation refers to the word length of their converters and not the actual word length on the CD or the "simulated quality" of the CD, just go ahead and use the same 24-bit converters the rest of us do and call their CDs "24-bit CDs"? Why make a big deal out of only 20 bits, which in 2005 is technology that's been obsolete for several years?
I have no idea. And remember, I'm simply offering up why they may be saying it - for all I know they're simply lying out of their butts in an effort to sell more discs...

As for JVC's K2 system - it was developed at a time when 20 bit was the best thing going (this isn't 'new' technology).

And if there's "nothing wrong with" them using the terminology they use, then why can't the rest of us make the same claim when we dither our 24 bits down to a 16-bit CD-R (regardless of whether those last 8 bits are all zeros or all ones or anything in between)? I can give the answer to that last question: because we know that it is wrong on both a technical and an aesthetic level, not to mention the subjective ethical level.
Partially true, but remember, they have spent tons of time and money researching their noise-shaped dither so it yields a better dynamic range. You can't argue this - in fact it's the reason that anything besides TPDF even exists!
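
For reference, plain TPDF dither - the baseline being contrasted with the fancy noise-shaped stuff - is just the sum of two independent uniform noise sources, each one LSB wide, added before requantizing. A minimal sketch (all names here are illustrative):

```python
import numpy as np

def tpdf_dither_to_16bit(x, rng=None):
    """Requantize float samples in [-1.0, 1.0) to 16-bit values
    using TPDF (triangular probability density function) dither."""
    rng = rng or np.random.default_rng()
    step = 2.0 ** -15                          # one 16-bit LSB
    d = (rng.uniform(-0.5, 0.5, np.shape(x)) +
         rng.uniform(-0.5, 0.5, np.shape(x))) * step
    return np.round((np.asarray(x) + d) / step) * step
```

Dither at this level makes the quantization error statistically independent of the signal; noise shaping then spends its effort pushing that now-benign noise out of the ear's most sensitive band, which is where the "better than 16-bit" perceived noise floor comes from.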

On a more analytical point: To say that a 16-bit CD has a "20-bit quality" to it because that is the width of the converter before dithering and/or because of the quality of the dithering algorithm itself is, by definition, incorrect at best, and meaningless at worst.
You are suggesting it is impossible to have a useable dynamic range that exceeds the system noise? It's been done for years...

The reduction of word length combined with the addition of dithering is a two-stage compounding of noise onto the signal (even if the last four bits are all zeros). The noise added by dithering to reduce the "unpleasantness" or "unnaturalness" of the sound caused by the mathematical curiosities involved in truncating the word length is nothing more than a "trick" that takes advantage of how the human ear and mind process sound. In strict mathematical terms it is still adding a level of noise that takes the bits one step further away from their original 20-bit value. The dithering may make it sound "better", but it does so in a way totally unrelated to the way that a wider word length would. Dithering does not add resolution or dynamic range to the signal; it just "smears" the current resolution in the last bit(s) to take the edge off the truncation.
Actually this is incorrect - dithering doesn't 'cover up' or 'smear' the resulting truncation distortion - it avoids it entirely! And if you don't believe that noise-shaped dithers can actually reduce the noise floor in the audible band, then we need to go back to square one and break out Pohlmann's 'Principles of Digital Audio'.
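
This point is easy to check numerically: truncate a very low-level sine to 16 bits and the error is correlated with the signal (harmonic distortion); add TPDF dither first and the harmonics vanish, leaving only broadband noise. A rough sketch, assuming a -80 dBFS, 1 kHz test tone:

```python
import numpy as np

fs, f = 44_100, 1_000
t = np.arange(fs) / fs                         # exactly one second
x = 10 ** (-80 / 20) * np.sin(2 * np.pi * f * t)

step = 2.0 ** -15                              # one 16-bit LSB
rng = np.random.default_rng(0)
d = (rng.uniform(-0.5, 0.5, x.shape) +
     rng.uniform(-0.5, 0.5, x.shape)) * step

truncated = np.floor(x / step) * step          # bare truncation
dithered = np.round((x + d) / step) * step     # TPDF-dithered

for name, y in (("truncated", truncated), ("dithered", dithered)):
    spec = np.abs(np.fft.rfft(y))
    ratio_db = 20 * np.log10(spec[3 * f] / spec[f])
    print(f"{name}: 3rd harmonic vs fundamental = {ratio_db:.0f} dB")
# Truncation shows a distinct 3rd harmonic; with dither that bin
# holds nothing but flat noise.
```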

That actually brings up another question I've had for some time now: why the powers that be in designing these systems decided to use the extra bits to increase dynamic range instead of to increase the resolution within a dynamic range. But that's a question for another thread...
The short answer is "math won't allow it"...
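
One way to unpack that short answer: in linear PCM the quantizer step is uniform across the whole scale, so "resolution inside the range" and "range above the noise floor" are literally the same number - each added bit halves the step everywhere. Trading range for local resolution is what nonlinear codings like mu-law do instead. A tiny illustration:

```python
# In linear PCM the step size is constant over the full scale, so each
# extra bit simultaneously halves the step ("more resolution") and
# lowers the noise floor ~6 dB ("more dynamic range") -- one knob.
for bits in (16, 20, 24):
    step = 2.0 / 2 ** bits       # full scale spans -1.0 .. +1.0
    print(f"{bits} bits: step = {step:.2e}, floor ~ -{6.02 * bits:.0f} dB")
```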
 