Using two interfaces for mastering

  • Thread starter: Justus Johnston

Justus Johnston

New member
This is a two-part question.

I often read conflicting viewpoints about digital resolution, and I've seen the pages upon pages of argument for both sides.

On the one hand, some people say that if you're recording to a CD, then it's not worth recording or processing at any resolution higher than 44.1 kHz, 16-bit, since that's the target resolution, and the only thing recording at a higher rate does is introduce the possibility of quantization error or alias frequencies, or, if you take steps to eliminate those, anti-aliasing artifacts and unwanted dithering noise.

On the other hand, it's also said that it's worth doing everything at the highest resolution possible, since this keeps the signal closer to analog through all the digital processes, sounds clearer with more headroom, and that darned dithering noise is negligible anyways. This is also my personal preference based on my own experience having tried both (the higher resolution mixes sound way better, even when downsampled for a CD).
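To put rough numbers on the two positions (a quick numpy sketch of my own, not something from either camp): here's the raw quantization error left behind at 16-bit versus 24-bit.

import numpy as np

fs = 44100
t = np.arange(fs) / fs
x = 0.5 * np.sin(2 * np.pi * 1000 * t)      # a -6 dBFS 1 kHz tone in floating point

def quantize(sig, bits):
    # Round onto the grid of a signed integer format with the given bit depth
    step = 2.0 ** -(bits - 1)               # size of one LSB when full scale is +/-1.0
    return np.round(sig / step) * step

for bits in (16, 24):
    err = quantize(x, bits) - x
    print(bits, "bit error:", round(20 * np.log10(np.sqrt(np.mean(err ** 2))), 1), "dBFS RMS")
# Expect roughly -101 dBFS at 16-bit and roughly -149 dBFS at 24-bit for this tone.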


OK, here's the thing. When I master, I use a lot of outboard analog gear, so the audio's already going through a D to A and then an A to D, unless the mix comes in on tape, which still happens occasionally. But I'm talking about the situation where the audio comes as 96 kHz, 24-bit audio files, and the target is CD audio.

If you use one interface to play the audio and perform D to A, and then send it through the analog part of the signal path, you could record it with a second, completely separate audio interface, acting as A to D, and there should be no way for the second interface to know that the signal was digital. It'll consider it just like any other analog signal.

So the question is: if I use a D to A playing a signal that's 96 kHz, 24-bit, and record using an A to D clocked at 44.1 kHz, 16-bit, doesn't that more or less effectively bypass aliasing, quantization errors, artifacts, AND dithering noise? And is this not a superior way to master compared to recording the processed audio on the same clock and then downsampling?

Of course, you're adding some low-level noise anyways via the analog signal path, but that's the (small) price I pay to get the sound I want anyways.
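For contrast, here is the purely in-the-box sample rate conversion this two-interface idea would sidestep, sketched with scipy (the filenames are made up for illustration, the source is assumed to be integer PCM, and a real chain would add dither before any final 16-bit step):

import numpy as np
from scipy.io import wavfile
from scipy.signal import resample_poly

sr, x = wavfile.read("mix_96k_24bit.wav")            # hypothetical 96 kHz source file
x = x.astype(np.float64) / np.iinfo(x.dtype).max     # scale integer samples to +/-1.0

# 44100/96000 reduces to 147/320; resample_poly applies its own anti-alias
# filter as part of the rational-rate conversion.
y = resample_poly(x, up=147, down=320, axis=0)

wavfile.write("mix_44k1_float.wav", 44100, y.astype(np.float32))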


OK, the other part of the question. I'd like to get two extremely high-quality A to D converter inputs for a laptop that only has CardBus and USB. What are my options? I'd like something on par with the RME HDSP 9632 in my desktop.
 
Seriously, no takers? Or is it just that obvious an idea?
 
I run different sample rates all the time.

Slight clarification: A lot of people (70-80% of full-time professionals) record at the target *RATE* -- Study after study, white paper after white paper, yada, yada, etc., etc. But not in 16-bit. 24-bit, no matter the rate, all the time, every time, no reason not to. That "more analog" has much (MUCH) more to do with the resolution than the sample rate.

Me? I don't even load in using 16-bit. I bring in at 24 (which gives me *and* the client access to the higher resolution files if they're ever needed) and dither before authoring the disc only.
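A minimal sketch of that last dither step, assuming the 24-bit/float material is sitting in a numpy array scaled to +/-1.0 (this is my illustration of plain TPDF dither, not any particular plugin):

import numpy as np

def dither_to_16bit(x_float):
    """x_float: float samples in +/-1.0. Returns TPDF-dithered int16 samples."""
    scaled = x_float * 32767.0
    # TPDF dither: the sum of two independent uniform sources, each one LSB wide
    noise = np.random.uniform(-0.5, 0.5, scaled.shape) + np.random.uniform(-0.5, 0.5, scaled.shape)
    return np.clip(np.round(scaled + noise), -32768, 32767).astype(np.int16)

# e.g. final_16bit = dither_to_16bit(master_float)   # hypothetical variable name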
 
Yes, I've seen this a lot, getting 44.1 KHz, 24-bit files.

If given 16-bit files, upsampling to 24-bit makes sense to me.

But after the analog signal path, doesn't recording the final digital print at 16-bit eliminate the need for dither altogether?

Or are you saying I should still print to 24-bit, and then dither anyways? A lot of the time, the only digital processing I have left to do after the analog signal path is just a digital limiter to catch that last dB or two that I can't ever seem to count on an analog limiter to catch right. Is it really worth the dither noise to do that at 24-bit resolution? I'm guessing yes, since dither noise is practically inaudible. I just want to make sure I understand that's what you're saying.

And when you say you run different sample rates, you're saying that you use this trick with using two different converters with different clocks? Because I've honestly never read or heard about that anywhere before. It's just something I kind of thought of the other day.
 
But after the analog signal path, doesn't recording the final digital print at 16-bit eliminate the need for dither altogether?
One drawback I see to that idea is that you are never actually saving a 24-bit premaster anywhere; i.e. you are pretty much printing to a 16-bit master on-the-fly and assuming there's nothing more to be done.

What happens if you or a separate client decide there's something they want to tweak? Or if you are working on a CD, in which case processing fadeouts/cross-fades, assembling track order, matching track levels, etc. still need to be done? In those cases you're stuck doing the final processing on 16-bit versions.

Dithering is such a minor variable (and when it's audible at all, it's usually for the better, assuming the ME has the right tools and knows what he's doing) that I personally would not want to sacrifice having a 24-bit safety print just to avoid dithering on the downsample.

G.
 
Glen:

Yes, that makes a lot of sense. The dithering is really not that bad, and dithering algorithms are getting better all the time. In fact, I'm relatively certain that the noise from dithering is far less audible than the noise the signal picks up going through my analog components (which is already all but negligible). It's not worth stressing over it when there might still be a world of benefit to be had from any number of digital processes at a higher resolution that may need to be done before writing the disc.

But you think it might be worth thinking about as a viable method of downsampling, since changing sample rates wreaks so much more havoc on audio? Like, if I'm given a 96 kHz, 24-bit file, ideally I should print the analog portion of this through at 44.1 kHz, 24-bit, with a plan to drop down to 16-bit using dither just before writing to CD, right? Assuming, of course, that CD is the target medium (which it most often is for me right now).

Also, I've been on the lookout, and I've found some FireWire interfaces to suit my needs, but nothing that uses USB or CardBus. I guess that doesn't really surprise me. It's looking like my most economical option for matching my HDSP on this laptop is to get the HDSP CardBus with a Multiface, even though that's way more I/O than I would need. I guess I could always look for a CardBus card with FireWire. Maybe that would be the better option.
 
How about thinking of it this way - try to set up the signal and mix path with a conscious intent to minimize the number of A/D/A conversions? Adobe Audition can work with different bit depths simultaneously in the same project, so conversion isn't always necessary in any event until the final product. (Either that or it converts automatically without announcement).

The higher the bit depth you can work with, the better. Bit depth determines amplitude resolution, so a higher bit depth, such as 24-bit over 16, can capture a much greater dynamic range: more headroom and less truncation of low-level signal. It also lowers the apparent noise floor, because each piece of gunk is a smaller component of the whole. Noise reduction and compression are more effective and less violent.
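Putting rough numbers on the dynamic range point (my arithmetic, not the poster's): each bit buys roughly 6 dB.

import math
for bits in (16, 24):
    print(bits, "bit ->", round(20 * math.log10(2 ** bits), 1), "dB of theoretical dynamic range")
# 16-bit -> about 96.3 dB; 24-bit -> about 144.5 dB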

So I try to keep it simple - focus on the highest bit depth I can attain while tracking and seek to avoid unnecessary D/A conversions throughout the process, converting down only once to the CD. I also save the projects in their original forms on my hard drive, but I expect I may live to regret doing that. I ought to be archiving them in 24 bit on DVDs.
 
Sure, of course. However, I get it about bitrate. I'm talking sample rate now.

Also, I'm going to use my outboard gear for mastering, so there WILL be one D to A and one A to D involved. That is going to remain true whether I'm changing sample rates or not. In fact, right now I am NOT changing sample rates, because I'm using only one interface, and it has to record at the same clock it uses for playback.

Since I'm doing that anyways, my question is this: in the event I'm mastering a 96 kHz file with the intent to write a CD (at 44.1 kHz), would it be a good idea to use a second interface (with a separate clock) in order to capture the resultant print at 44.1 kHz directly, thereby bypassing completely the issues inherent in digital sample rate reduction?

Or are you telling me in an indirect way that I need to ditch my analog gear and do everything with Waves plugins? :eek:
 
But you think it might be worth thinking about as a viable method of downsampling, since changing sample rates wreaks so much more havoc on audio?
IMHO, this entire discussion really hinges the most on the actual quality and characteristics of the converters themselves.

First, keep in mind that with most modern-design oversampling converters, there's a whole lot going on inside the converter itself, including one or more stages of its own internal dither and filtering. Second, different makes and models of converters tend to have unique characteristics as far as which sample rates and bit depths they sound best at. One may work better in one way or another at 44.1/16 than it does at 96/24, whereas another one's "sweet spot" may be elsewhere. Third would be how this all applies to the quality of the converters in your CD burner; only you can tell if the quality of your last stage of conversion is better than the resulting quality of your sample rate conversion software.

The way I personally look at it - and others may have valid disagreements with this - such variables can so totally wash over the details and affect the outcome that it's impossible to generalize a yes/no answer. Add to that the idea that we're talking about such a small factor in the overall chain of quality to begin with, and I'd be tempted to say, just keep it simple. If one way seems to actually produce better results, go with it. If not, then don't worry about it and pick the most efficient workflow.
Like, if I'm given a 96 kHz, 24-bit file, ideally I should print the analog portion of this through at 44.1 kHz, 24-bit, with a plan to drop down to 16-bit using dither just before writing to CD, right?
I'm not sure what you mean by "print the analog portion". If you are given 96/24 files, in a perfect world it would be best to just leave them there and not SRC anything until you have to.

If you want to run them through outboard analog iron, then I'd fall back on the above paragraph: which sounds better at the output of your DAC, info that's been downsampled to 88.2, 48 or 44.1 and then converted to analog from there, or a digital stream that's converted to analog direct from its original 96/24? It can go either way.
Also, I've been on the lookout, and I've found some FireWire interfaces to suit my needs, but nothing that uses USB or CardBus. I guess that doesn't really surprise me. It's looking like my most economical option for matching my HDSP on this laptop is to get the HDSP CardBus with a Multiface, even though that's way more I/O than I would need. I guess I could always look for a CardBus card with FireWire. Maybe that would be the better option.
It depends upon how many channels of I/O you want and how much you have set your budget to. If you want more than 2 channels, you won't find much, if anything at all, in the way of USB.

I'm not all that sure what's out there for CardBus; I haven't really done much looking there. There's not as much available in general because it's more expensive to develop and support versus just relying on native USB or FireWire hardware and drivers, and unlike USB or FW, it means a product that limits its own market pretty much to laptop users only. Not an attractive combo for many manufacturers.

G.
 
just trying to help

I had asked myself this same question:
why record at 24-bit, using more resources and significantly more HD space, when I will be burning a CD at 16 bits anyway?
A podcast called Inside Home Recording finally gave me the answer.

While recording, roughly every 6 dB of unused headroom costs you about one bit of resolution.
Imagine that if you are recording at 16-bit and your peaks sit 6 dB below full scale, you are really only getting about 15-bit quality.
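In rough numbers (my arithmetic, just illustrating that rule of thumb):

bits = 16
for peak_dbfs in (0, -6, -12, -18):
    print("peaks at", peak_dbfs, "dBFS -> about", round(bits + peak_dbfs / 6.02, 1), "effective bits")
# Tracking 16-bit with peaks around -12 dBFS leaves you roughly 14 usable bits.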

There is another thing!!!
Some people here said they did not see the problem with recording in 16 bits and then "dithering" up to 24 bits.
THAT IS NOT POSSIBLE!!!

It's just like taking a photograph at a certain resolution and then trying to turn that lower-resolution picture into a higher-resolution one.

If you record at 24 bits, you have room for mistakes, mishaps, and whatever.
You make sure that, at the end of the game, you will have at least REAL 16-bit resolution.

Hope it helps!!
 
Isn't what you're proposing actually the reason why we have dither in the first place??

If you're running 24/96 out to analog and then back into digital at 16/44, you're going to have a completely different signal. The new format won't have room for the dynamic range it's being sent, so you're going to lose low-level information.

As I understand it, this is called simple truncation. You have to lose bits (obviously), so it just loses the lowest levels of data.

Dither uses very low-level noise to boost this data above the truncation point so that it is not lost when we chop the bits off. This noise (in today's modern dither plugins/hardware) is essentially inaudible because it gets chopped off anyways.

Right?
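Putting numbers on that (a numpy sketch of my own, not anything from a plugin): a tone sitting below half a 16-bit step vanishes if you just round, but survives inside the noise once TPDF dither is added first.

import numpy as np

fs = 44100
t = np.arange(fs) / fs
tone = 1e-5 * np.sin(2 * np.pi * 440 * t)      # about -100 dBFS, under half a 16-bit LSB

lsb = 1.0 / 32768.0
plain = np.round(tone / lsb) * lsb             # straight (undithered) 16-bit rounding
tpdf = (np.random.uniform(-0.5, 0.5, fs) + np.random.uniform(-0.5, 0.5, fs)) * lsb
dithered = np.round((tone + tpdf) / lsb) * lsb # TPDF-dithered 16-bit rounding

print("undithered result is pure silence:", not plain.any())           # True
print("tone still present after dither:", np.dot(dithered, tone) / np.dot(tone, tone))
# The last number comes out near 1.0: the tone is preserved, riding on the dither noise.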

But let your ears be the judge. Run a mix your way, and then run a mix bouncing through your dither plugin as the last step. Listen to the low-level information (e.g. reverb tails) and see if you can hear a difference.
 
Yes, I've seen this a lot, getting 44.1 KHz, 24-bit files.

If given 16-bit files, upsampling to 24-bit makes sense to me.
I'm not saying to upsample to 24 bit -- if you have 16-bit files, your DAW is very likely already doing its calculations in (at least) 32-bit float.
But after the analog signal path, doesn't recording the final digital print at 16-bit eliminate the need for dither altogether?
Sure - But as mentioned, you have no high-res if they need it for something else later. Or if a fade needs to be adjusted or what not - Which is very common - If you capture in 16-bit, just adding fades is going to drag it back to 32BFP and then it's going to need to dither *again* to go back to 16.

Or are you saying I should still print to 24-bit, and then dither anyways? A lot of the time, the only digital processing I have left to do after the analog signal path is just a digital limiter to catch that last dB or two that I can't ever seem to count on an analog limiter to catch right. Is it really worth the dither noise to do that at 24-bit resolution? I'm guessing yes, since dither noise is practically inaudible. I just want to make sure I understand that's what you're saying.
There's the issue again - If you're doing *ANYTHING* at all - Limiting, fades, volume adjustments of any kind - you're going back to 32BFP and then it's dithering again. Sure, dithering ONCE is usually almost silent...
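A toy example of that round trip (my illustration, with made-up sample values): even a simple gain change pushes 16-bit integers off the 16-bit grid, so the result has to be requantized, and ideally dithered, again.

import numpy as np

x16 = np.array([20000, -12345, 777], dtype=np.int16)   # a few 16-bit samples
faded = x16.astype(np.float32) * 0.5                   # a -6 dB fade step, done in float
print(faded)                                           # prints 10000.0, -6172.5, 388.5
print(faded == np.round(faded))                        # not everything lands back on a 16-bit step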

And when you say you run different sample rates, you're saying that you use this trick with using two different converters with different clocks? Because I've honestly never read or heard about that anywhere before. It's just something I kind of thought of the other day.
It's not a trick - at least 50% of the projects I work on are in different sample rates than the target. Every converter in the chain here can run the DA's and AD's at different rates. The DA is clocked by the source, the AD is decided by the target. But the bit depth (DEPTH - not "bit rate" - totally different thing) stays high-res pretty much up until the laser turns on.
 
Glen:

If I'm reading your response correctly, you're saying that on some converters, it's possible that a lower sample rate could yield a higher fidelity signal. I hadn't considered that. I can think of a number of scientific reasons that would be possible, but I don't necessarily want to derail the conversation with all of that. Like you said, it's such a small factor, maybe science can sit this one out and I'll just use my ears and decide what sounds best.

My RME converters definitely perform best at the higher resolutions and sample rates.

I still somewhat fancy the idea of using a separate interface to record than to play back. If nothing else, I would think this would spread the workload out across two different converters and clocks, and might make for a steadier print. I suppose it's worth a shot. I'll get an interface off eBay, and if I don't care for it, I'll sell it to someone else.

For other people stumbling onto this subject, I think I should stress that this ONLY makes sense if you're involving an analog stage. If there's no analog stage, then you shouldn't be involving converters in the signal path at all!

leandrogonza said:
I had asked my self this same question.

That's not the question I was asking, but thanks for playing. I'm a huge advocate of recording at higher bitrates.

Dax:

I don't think converting from A to D and D to A is the purpose of dither. When you speak of "truncating bits", you're speaking of dropping LSBs from each digital word. When you convert from D to A, there ARE no bits to drop. It's analog! When you convert from A to D, it requantizes the audio at whatever bitrate it's recording, so there would be no truncation that takes place.

Dither is, like you say, a compensation for LSB truncation, but I think it has more to do with the LSBs that get dropped when digitally down-converting from a higher bitrate to a lower one.

EDIT: Oh, and I don't need a lot of I/O at all for the laptop interface. All I want is two really high quality analog inputs attached to really high quality A to Ds. It doesn't need any digital I/O; it doesn't need MIDI; it doesn't even need any analog output. But I was thinking a more realistic interface to look for would have 2 ins, 2 outs, and probably S/PDIF and MIDI. USB is fine, but I don't want some cheapo M-Audio, Lexicon, or E-MU interface. Again, the big thing is that it has to have converters at least as good as my RME HDSP.
 
If I'm reading your response correctly, you're saying that on some converters, it's possible that a lower sample rate could yield a higher fidelity signal. I hadn't considered that. I can think of a number of scientific reasons that would be possible, but I don't necessarily want to derail the conversation with all of that. Like you said, it's such a small factor, maybe science can sit this one out and I'll just use my ears and decide what sounds best.
In this case it's not necessarily the science or the theory, but rather simply the manufacturer's design and build quality. Some manual transmission cars have their power band in 2nd gear. Some don't hit it until 3rd gear.
I still somewhat fancy the idea of using a separate interface to record than to play back. If nothing else, I would think this would spread the workload out across two different converters and clocks, and might make for a steadier print. I suppose it's worth a shot. I'll get an interface off eBay, and if I don't care for it, I'll sell it to someone else.
Knock yourself out, Justus. It sounds like you've got the bug to do it, so go ahead and try it out. :) Whatever works for you, works for you :).

Personally, I'd feel much more comfortable with everything slaved to a master clock, even if I used separate converters. There's no "workload" to spread - a clock signal is a clock signal - and having everything moving along synced to the same timing makes sense to me. Then again, having spent a significant part of my career in video studios working by timecode, that just kind of got into my blood.
EDIT: Oh, and I don't need a lot of I/O at all for the laptop interface. All I want is two really high quality analog inputs attached to really high quality A to Ds. It doesn't need any digital I/O; it doesn't need MIDI; it doesn't even need any analog output. But I was thinking a more realistic interface to look for would have 2 ins, 2 outs, and probably S/PDIF and MIDI. USB is fine, but I don't want some cheapo M-Audio, Lexicon, or E-MU interface. Again, the big thing is that it has to have converters at least as good as my RME HDSP.
If you got the bucks, UA 2192. Or, a bit less, and in a more portable form factor, check out the Mini products from Apogee and Benchmark.

G.
 
That UA looks inviting. But probably still just a bit too rich for my blood at the moment...but someday soon maybe.

The Apogee Mini-Me looks a bit more my speed for right now. It looks like there's a USB option for it, so maybe that'll work.

Thanks for your responses. I might figure this mastering thing out yet.
 
I don't think converting from A to D and D to A is the purpose of dither. When you speak of "truncating bits", you're speaking of dropping LSBs from each digital word. When you convert from D to A, there ARE no bits to drop. It's analog! When you convert from A to D, it requantizes the audio at whatever bitrate it's recording, so there would be no truncation that takes place.

Dither is, like you say, a compensation for LSB truncation, but I think it has more to do with the LSBs that get dropped when digitally down-converting from a higher bitrate to a lower one.

Makes sense. Try it out and let us know how it sounds! :)

The way I was thinking about it was that, yes, even though dither has nothing to do with AD-DA conversion, when you go from 24 to analog, you'd get a signal with more dynamic range than your 16 bit conversion on the other end will be able to see, so you'd be losing stuff. Whether you want to call it truncation, or sampling error or whatever, aren't you going to be losing stuff?

I mean, if I wanted to master stuff through analog gear (if I owned anything worth using for this), I'd record back into 24/48 and then dither. That way, the stuff you re-record in will be able to use the advantages of higher bitrate.
 
The way I was thinking about it was that, yes, even though dither has nothing to do with AD-DA conversion, when you go from 24 to analog, you'd get a signal with more dynamic range than your 16 bit conversion on the other end will be able to see, so you'd be losing stuff. Whether you want to call it truncation, or sampling error or whatever, aren't you going to be losing stuff?

I mean, if I wanted to master stuff through analog gear (if I owned anything worth using for this), I'd record back into 24/48 and then dither. That way, the stuff you re-record in will be able to use the advantages of higher bitrate.

Well all of that is basically true. Yes, you'd be recording with a lower resolution than what you played back at, thus losing dynamic information.

Here's what I was thinking: your ultimate goal is a CD with audio captured at 44.1 kHz, 16-bit. In my mind, the (completely unrealistic) ideal way to record something in this format would be for the band to play their music, and the audio to go through the shortest signal path possible to attain the desired sound and be delivered directly to a CD burner in the desired format. I'm not proposing to try doing that (I do understand the benefits of post-production ;) ), but I was thinking this might be a way to get one step closer to that ideal. If you read this thread, I've changed my mind about doing this with the bit depth, but I still think it might be a viable method for downsampling. Like Glen said, though, it's all a matter of what resolution the converters sound best at.

So it remains to be seen, after I get the Apogee. Will a master that started at 96 kHz sound better if recorded after the analog stage at 44.1, or at 96 with a digital downsample to 44.1? I'm fully expecting the Apogee to sound better when recording at 96k, of course, but the question is whether or not that will offset the disadvantage of using a digital downsampling process later on. Digitally downsampling audio is far more damaging than reducing bit depth, and it's kind of cool to think that there might be a way around that - a way to reap the benefits of audio recorded and mixed at higher sample rates without the detriments of downsampling.
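When the Apogee shows up, one way to put numbers next to the listening test is a rough null comparison of the two 44.1 kHz prints (a sketch only; the filenames and the 16-bit assumption are mine, and it still takes careful level matching for the residual to mean much):

import numpy as np
from scipy.io import wavfile
from scipy.signal import correlate

_, a = wavfile.read("print_captured_at_44k1.wav")           # hypothetical file names,
_, b = wavfile.read("print_captured_at_96k_then_src.wav")   # both assumed 44.1 kHz / 16-bit
a = a.astype(np.float64) / 32768.0
b = b.astype(np.float64) / 32768.0
a = a.mean(axis=1) if a.ndim > 1 else a                     # fold to mono for the measurement
b = b.mean(axis=1) if b.ndim > 1 else b

# Estimate the timing offset between the two prints from their first second
lag = int(np.argmax(correlate(a[:44100], b[:44100], method="fft"))) - (44100 - 1)
a, b = (a[lag:], b) if lag > 0 else (a, b[-lag:])
n = min(len(a), len(b))
diff = a[:n] - b[:n]
print("residual:", round(20 * np.log10(np.sqrt(np.mean(diff ** 2)) + 1e-12), 1), "dBFS RMS")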
 
even though dither has nothing to do with AD-DA conversion...

It really does. Like REALLY does.

Dither is invariably added before the converter quantises the samples (in A-to-D conversion). This is done to randomise the spectrum of the quantisation error, so the quantisation error is no longer correlated with the input signal and has a noise-like (rather than distortion-like) characteristic.
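That decorrelation is easy to see in a quick numpy check (my sketch, not from Watkinson): quantise a very low-level tone with and without TPDF dither and look at how the error relates to the signal.

import numpy as np

fs = 44100
lsb = 1.0 / 32768.0
x = 2 * lsb * np.sin(2 * np.pi * 440 * np.arange(fs) / fs)   # a tone only ~2 LSBs tall

err_plain = np.round(x / lsb) * lsb - x                      # quantisation error, no dither
tpdf = (np.random.uniform(-0.5, 0.5, fs) + np.random.uniform(-0.5, 0.5, fs)) * lsb
err_dith = np.round((x + tpdf) / lsb) * lsb - x              # error with TPDF dither added first

print("error vs signal correlation, undithered:", round(np.corrcoef(x, err_plain)[0, 1], 3))
print("error vs signal correlation, dithered:  ", round(np.corrcoef(x, err_dith)[0, 1], 3))
# The undithered error clearly tracks the signal; the dithered error hovers near zero.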

I suggest reading John Watkinson's "The Art of Digital Audio".
 