mixsit said:
Just to confirm, a cd-a file is just a 16/44.1 wav file with some extra tags on it for cd play-back? No difference quality wise for these purposes?
Wayne
Not exactly. The data starts out as stereo 16-bit/44.1 kHz PCM, essentially the same as a .wav file, but it's not represented on the CD the same way it's represented on, say, your hard drive.
The audio is first divided into groups of six samples from each channel, giving twelve 16-bit samples per frame, which are then split into a total of (24) 8-bit words: each 16-bit sample contributes two bytes, its "first half" (high byte) and its "second half" (low byte).
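A minimal sketch of that first step: packing one frame's worth of audio (six stereo samples) into 24 eight-bit symbols. The sample ordering here is just one plausible arrangement; the real standard prescribes a specific pattern.

```python
# Hypothetical sketch: split each 16-bit sample of one CD frame
# (6 left + 6 right samples) into two 8-bit symbols, giving 24 bytes.

def frame_symbols(left, right):
    """left, right: six 16-bit samples each -> list of 24 bytes."""
    assert len(left) == len(right) == 6
    symbols = []
    for l, r in zip(left, right):
        for sample in (l, r):
            symbols.append((sample >> 8) & 0xFF)  # high byte ("first half")
            symbols.append(sample & 0xFF)         # low byte ("second half")
    return symbols

syms = frame_symbols([0x1234] * 6, [0xABCD] * 6)
print(len(syms))  # 24 symbols per frame
```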
These words are first scrambled by delaying the even-numbered samples by two frames, forming a new 24-byte word. 4 parity bytes are added to the 24-byte word, and the 28 bytes of the result are cross-interleaved, each byte delayed by a different multiple of four frames, so that any one physical strip of data on the disc holds bytes from many different words interleaved together. The bit stream is now divided into new 28-byte frames.
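The cross-interleave is easier to see in code. Here's a toy convolutional (delay-line) interleaver in the spirit of the one described above: symbol position i of each word passes through a FIFO that delays it by i × D frames, so each output frame mixes bytes from several input words. The real thing uses D = 4 across 28 positions and Reed-Solomon parity, both omitted here.

```python
from collections import deque

# Toy delay-line interleaver: position i is delayed by i * unit_delay frames.
# Widths and delays here are illustrative, not the real CIRC parameters.
def make_interleaver(width, unit_delay):
    lines = [deque([0] * (i * unit_delay)) for i in range(width)]
    def step(word):
        out = []
        for line, sym in zip(lines, word):
            line.append(sym)         # push the new symbol in
            out.append(line.popleft())  # pop the (delayed) symbol out
        return out
    return step

step = make_interleaver(width=4, unit_delay=1)
print(step([1, 2, 3, 4]))  # [1, 0, 0, 0] -- later positions still flushing zeros
print(step([5, 6, 7, 8]))  # [5, 2, 0, 0] -- output now mixes two input words
```

The payoff is error resilience: a scratch that wipes out one physical frame damages only one byte of many different logical words, which the parity codes can then repair.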
These 28-byte frames get another 4 parity bytes added at this step, resulting in a 32-byte word. Another interleaving is performed, this time just delaying alternate bytes by one frame. At this point all the parity bytes are inverted.
An eight-bit subcode is added to the front of the 32-byte word, which is then modulated using EFM (eight-to-fourteen modulation); a modulation which actually inflates every eight bits to occupy fourteen bits. The 14-bit patterns are chosen to control the spacing of pit-to-land transitions, because the disc uses an NRZI scheme (non-return to zero, inverted), which represents a binary "zero" as "no change" and a binary "one" as "change" (pit to land, or land to pit). EFM guarantees at least two zeros between any two ones, keeping the pits and lands long enough for the laser to read reliably.
A 24-bit synchronization word is added to the beginning of each frame, and three merging bits are inserted after the sync word and after each 14-bit group, forming the final 588-bit frame (containing the original 192 bits of data... well, actually containing 192 bits of total original information, by now gathered from several different "original" frames thanks to the interleaving).
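The arithmetic above checks out:

```python
# Bit budget of one physical CD frame, as tallied above.
sync = 24                 # synchronization word
merge_after_sync = 3      # merging bits following the sync word
symbols = 1 + 32          # one subcode byte + 32 data/parity bytes
per_symbol = 14 + 3       # EFM group + its three merging bits

total = sync + merge_after_sync + symbols * per_symbol
print(total)    # 588 channel bits per frame
print(24 * 8)   # 192 bits of original audio information per frame
```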
Pretty straightforward, isn't it?