Justus Johnston
New member
This is a two-part question.
I often read conflicting viewpoints about digital resolution, and I've seen the pages upon pages of argument for both sides.
On the one hand, some people say that if you're recording for CD, it's not worth recording or processing at any resolution higher than 44.1 kHz, 16-bit, since that's the target resolution, and all that recording at a higher rate does is introduce the possibility of quantization error or alias frequencies, or, if you take steps to eliminate those, anti-aliasing filter artifacts and unwanted dithering noise.
On the other hand, it's also said that it's worth doing everything at the highest resolution possible, since this keeps the signal closer to analog through all the digital processes, sounds clearer, gives more headroom, and that darned dithering noise is negligible anyway. This is also my preference, based on my own experience having tried both (the higher-resolution mixes sound way better, even when downsampled for CD).
OK, here's the thing. When I master, I use a lot of outboard analog gear, so the audio is already going through a D to A and then an A to D, unless the mix comes in on tape, which still happens occasionally. But I'm talking about the situation where the audio comes in as 96 kHz, 24-bit files and the target is CD audio.
If you use one interface to play the audio and do the D to A, then send it through the analog part of the signal path, you could record it with a second, completely separate audio interface acting as the A to D, and there should be no way for that second interface to know the signal was ever digital. It'll treat it just like any other analog signal.
So the question is: if I use a D to A to play back the 96 kHz, 24-bit signal, and record it with an A to D clocked at 44.1 kHz, 16-bit, doesn't that more or less bypass aliasing, quantization errors, artifacts, AND dithering noise? And isn't that a better way to master than recording the processed audio on the same clock and then downsampling?
Of course, you're adding some low-level noise via the analog signal path anyway, but that's the (small) price I pay to get the sound I want.
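To be concrete about the path I'm comparing against, here's a rough sketch of the "same clock, then downsample" approach: sample-rate conversion from 96 kHz to 44.1 kHz, followed by TPDF dither and truncation to 16 bits. This is just an illustration (Python with numpy/scipy assumed), not my actual chain:

import numpy as np
from scipy.signal import resample_poly

def src_and_dither(x_96k, bits=16):
    """x_96k: mono float signal in [-1, 1] at 96 kHz. Returns 16-bit samples at 44.1 kHz."""
    # Sample-rate conversion 96000 -> 44100 (ratio 147/320);
    # resample_poly applies its own anti-alias/anti-image filter.
    y = resample_poly(x_96k, up=147, down=320)

    # TPDF dither: sum of two uniform randoms, scaled to +/- 1 LSB peak.
    lsb = 1.0 / (2 ** (bits - 1))
    dither = (np.random.uniform(-0.5, 0.5, y.shape)
              + np.random.uniform(-0.5, 0.5, y.shape)) * lsb

    # Quantize to 16-bit integers.
    full = 2 ** (bits - 1)
    return np.clip(np.round((y + dither) * full), -full, full - 1).astype(np.int16)

In the box, the anti-alias filtering and the dither are explicit steps like these; my thinking is that with the two-interface setup, the second converter's own input filtering and quantization do that job instead.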
OK, the other part of the question. I'd like to get two extremely high-quality A to D converter inputs for a laptop that only has CardBus and USB. What are my options? I'd like something on par with the RME HDSP9632 in my desktop.