Why must CDs be 16-bit?

Thread starter: Kasey (New member)
Why must I dither if I have enough space left on the CD to not dither? Is there a technical problem here, like it won't play in typical CD players or what? Why can't there be 24-bit CDs?
 
It won't play in a normal CD player. The standard that was set up for CDs and CD players is 16-bit/44.1k. That's just the rules.
 
Kasey said:
Why must I dither if I have enough space left on the CD to not dither? Is there a technical problem here, like it won't play in typical CD players or what? Why can't there be 24-bit CDs?

That's just the standard that was set long ago. You can try to burn a 24-bit .wav onto a CD, but that ain't Red Book standard, so unless your CD player can read the .wav format, it won't play. If you want a CD to be playable on every CD player in existence, you need a Red Book standard CD, which is 16/44.1.

(PS some older CD players don't like some types of CD-Rs either)

Having said that, there are newer formats: SACD and DVD-A. SACD is a little tougher for the home recordist, but there are several programs that will burn DVD-As; Wavelab is one. Many DVD players will play the DVD-A format; that gets you up to 24/192 in stereo and I believe 24/48 in 5.1 surround.
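
Back to the dithering part of the question: dither isn't about disc space at all. It's a tiny bit of noise added when you truncate 24-bit audio down to the 16 bits Red Book requires, so the rounding error becomes benign noise instead of distortion that tracks the signal. A minimal sketch of the idea in Python with NumPy (the TPDF approach shown here is one common choice, not any particular mastering tool's method):

```
import numpy as np

def to_16bit_with_tpdf_dither(x, rng=np.random.default_rng(0)):
    """Quantize float audio in [-1, 1] to 16-bit with triangular (TPDF) dither."""
    # TPDF dither: sum of two uniform noises, spanning +/-1 LSB of the target depth
    tpdf = rng.uniform(-0.5, 0.5, x.shape) + rng.uniform(-0.5, 0.5, x.shape)
    y = np.round(x * 32767.0 + tpdf)        # dither first, then round
    return np.clip(y, -32768, 32767).astype(np.int16)

# A very quiet tone: plain truncation would turn it into correlated distortion;
# dithered rounding turns the error into low-level noise instead.
t = np.arange(44100) / 44100.0
quiet_tone = 0.0005 * np.sin(2 * np.pi * 1000 * t)
print(to_16bit_with_tpdf_dither(quiet_tone)[:8])
```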
 
Actually, most DVD players don't play DVD-A.
You're better off just encoding in DTS if you're doing the 24/96 DVD route. Then it actually will play in every DVD player.
 
The CD audio standard only supports 44.1 kHz at 16 bits per sample. If you were providing content for computers, you could provide higher-quality audio in the form of an audio file (i.e. a data or hybrid audio/data disc), but no CD player in the world could play that.

Basically, the people who designed the CD audio format were hindered by the same problems that bite a lot of consumer electronics companies now---they are too closely tied to the recording industry. They locked themselves into 44.1 kHz in large part to prevent compatibility with consumer DAT decks because of paranoia over copying. Some consumer DATs eventually got 44.1 kHz support, but by then they had already thoroughly killed consumer DAT's popularity through obscenely high costs and laws that forced SCMS down our throats....

Oh, the thing about 44.1 kHz being a convenient rate relative to the NTSC video frame rate... well, that's true, but there was plenty of 48 kHz audio hardware out there by the time the CD format was standardized. That's not really an excuse for the rate that was chosen for CDs...

*grumbles*
 
dgatwood said:
Basically, the people who designed the CD audio format were hindered by the same problems that bite a lot of consumer electronics companies now---they are too closely tied to the recording industry. They locked themselves into 44.1 kHz in large part to prevent compatibility with consumer DAT decks because of paranoia over copying.


Uhhh.....CDs were around for a few years before DAT players. There were restrictions put in place, but they applied to the DAT player/recorders.
 
easychair said:
Uhhh.....CDs were around for years before DAT players.
And 48k is the one that works out well with video. You're kind of right, just backwards.
 
44.1 kHz = the minimum rate to achieve "accurate" 20 kHz (top of the human hearing range) reproduction, with a little room left over to filter out everything above it.

16-bit = a decent-sounding bit depth that allowed CDs to be long enough (74 minutes) to remaster most tapes and records onto only one CD. Thus most existing albums would fit on one CD and double albums on two. It also allowed adequate reproduction with relatively inexpensive hardware.

Then, once it was decided upon, it was locked down and hasn't changed since.
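
A quick sanity check of the sample-rate half of that, sketched in Python (the 2 kHz of filter headroom is the commonly cited figure, not an official spec number):

```
# Nyquist: sample at >= 2x the highest frequency you want to keep
top_of_hearing_hz = 20_000    # nominal top of the human hearing range
filter_headroom_hz = 2_000    # room for a realizable anti-alias filter to roll off
print(2 * (top_of_hearing_hz + filter_headroom_hz))   # 44000 -- nudged to 44100
```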

Still sounds better than cassettes. :)

-Chris
 
Farview said:
And 48k is the one that works out well with video. You're kind of right, just backwards.


Is 48 kHz a standard video sample rate? I was referring to this, which I found on some site, and have found a ton of the same elsewhere. It suggests 44.1 was chosen because it worked using digital video tape as a storage medium, and 44.1 was the lowest number that fit the needs of 20-20k audio stored digitally.


"44.1kHz was chosen to fit a digital audio signal onto video tape,
in the area used to store the picture. Video was the digital audio
storage medium before we had CD, and the rate of 44.1
is a logical result of that and the need for a safe rate
that could include up to 20kHz, which was considered to be
the human threshold of hearing back then. The first rate
that simply worked (and was interchangeable with video,
since CD-mastering was done on video) was 44.1 kHz.
The 44100 Hertz comes from the calculation
using video-frames, where you can have
3 samples per field of 490/2 lines;
3 x 245 x 60 Hz = 44100 Hz"
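
For what it's worth, that arithmetic checks out, and the same rate happens to fall out of PAL timings as well (a quick check in Python; the PAL figures of 294 lines x 50 fields are the commonly cited ones, not from the quote above):

```
# NTSC PCM adaptor: 3 samples per line, 245 usable lines per field, 60 fields/sec
print(3 * 245 * 60)   # 44100

# PAL (assumed figures): 3 samples per line, 294 lines per field, 50 fields/sec
print(3 * 294 * 50)   # 44100
```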
 
You could be right. The last time I was worried about video was a long time ago. I can't remember why I was under this impression, so I will defer to your quote.
 
Farview said:
You could be right. The last time I was worried about video was a long time ago. I can't remember why I was under this impression, so I will defer to your quote.

LOL. One of those things you'd prefer to forget, maybe? :p

I just switched from PC to Mac. There are many things I look forward to forgetting, myself. Spyware updates, the cool websites with trojan removers, "plug and pray", etc.
 
easychair said:
Is 48 kHz a standard video sample rate? I was referring to this, which I found on some site, and have found a ton of the same elsewhere. It suggests 44.1 was chosen because it worked using digital video tape as a storage medium, and 44.1 was the lowest number that fit the needs of 20-20k audio stored digitally.
That's kind of sort of backwards revisionist history.

The oversimplified story in proper order:

Engineers knew from the Nyquist theorem that the sample rate would have to be at least 40kHz if they wanted good reproduction up to 20kHz without artifacts. The catch was that they'd have to put low-pass filters in the chain to remove everything above the top desired frequency; otherwise you'd get unwanted digital artifacts called "aliasing". And there's no such thing as a low-pass filter that throws up a brick wall at a certain frequency; they couldn't stop everything right at 20kHz so that 20kHz got through but 20.1kHz didn't. All filters have a slope to them. For that reason, to fully let 20kHz signals through, they had to use filters that also let some content above 20kHz through. They found that an agreeably steep low-pass filter needed at least an extra 2kHz before it reached full attenuation. That put the actual frequency response the system had to handle at around 22kHz, which bumped the sample rate up to around 44kHz.
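
A small illustration of the aliasing being described, sketched in Python with NumPy (the 25 kHz test tone is just an arbitrary example of content above the Nyquist limit):

```
import numpy as np

fs = 44_100                     # CD sample rate; Nyquist limit is fs/2 = 22050 Hz
f_in = 25_000                   # content above Nyquist, sampled with no filter
n = np.arange(1000)             # sample indices
x = np.sin(2 * np.pi * f_in * n / fs)

# The samples are mathematically identical to a 19.1 kHz tone (fs - f_in),
# phase-inverted -- the 25 kHz content has "folded down" into the audio band.
alias = -np.sin(2 * np.pi * (fs - f_in) * n / fs)
print(np.allclose(x, alias))    # True: that's aliasing
```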

Refining that number to 44.1 in order to fit the math of NTSC video may have been a serendipitous tweak made after the above was worked out. It perhaps contributed the .1 part of 44.1kHz, but it had nothing to do with the decision to go to the realm of 44kHz; that decision was made by pure math theory and component physics alone.

At least that's the version of history that I remember living through ;). But one must always remember that "history" and "the past" are usually two different things altogether. ;)

G.
 
16 bits gave 96 dB of dynamic range, way more than you could ever need in a final product, and lots more than the standard LP playback system, which was around 45 dB (only 7.5 bits! :eek: ) on a good day. 24 bits is handy for recording because you should never have to worry about overloads or recording at too low a level.
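
The rule of thumb behind those figures is roughly 6 dB of dynamic range per bit (a quick check in Python):

```
import math

print(round(20 * math.log10(2 ** 16), 1))    # 96.3 dB for CD's 16 bits
# Going the other way: ~45 dB of LP dynamic range in equivalent bits
print(round(45 / (20 * math.log10(2)), 1))   # ~7.5 bits
```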
 