Resolution of sound decreases by pulling down faders... is there any such concept?

Was OP talking about the channel slider in the DAW or the pre-amp pot on the interface?
If it's the former - as has been said several times - it doesn't matter.
If it's the latter, turning down his pre-amp would affect his signal's noise floor, wouldn't it?
 
Yes and no. Chances are, as you turn down the preamp level on the interface, you will be turning down the self noise of the preamp. It will bring the peaks closer to the digital noise floor, but at 24 bit, it is so low that it isn't a practical problem.
 
"Things are much worse in analog." ;)

It ultimately depends on the preamp and exactly where in the path the worst of its noise is injected. Some have a baseline noise level that seems to come after the gain element. In others, the noise is either before or in the gain stage. In the former, more gain would mean less noise; in the latter, it's kind of the opposite. You can't usually know until you plug something in and try it. In most real world situations, it's a bit of both, and you have to find the best balance.

All that said, I don't often bother with analog gain, and it's not really a problem because the noise in the source is usually loud enough to make the whole thing moot, and in most mixes the noise is masked sufficiently to not be an issue. But then, I started on cassette tape, so my tolerance for a little hiss is probably greater than some. :)
 
When we listen to anything but dueling Pippers on nearfields, we can get 10-percent distortion. But that has nothing to do with the resolution. You lose resolution from overcoming the motor's resistance to movement. Overprinting to tape, generally, increases the resolution if you have usable headroom. Small signal is a reduction of resolution. As we record digital tracks closer and closer to hospital flatline, what is it we can expect to do with them?
 
When we listen to anything but dueling Pippers on nearfields, we can get 10-percent distortion. But that has nothing to do with the resolution. You lose resolution from overcoming the motor's resistance to movement. Overprinting to tape, generally, increases the resolution if you have usable headroom. Small signal is a reduction of resolution. As we record digital tracks closer and closer to hospital flatline, what is it we can expect to do with them?

:)
 
ashcat_lt said:
It's dynamic range, which is the only real meaningful definition of the term "audio resolution".


Gotta respectfully disagree. In digital audio, resolution is largely based on the bit depth. To a small extent it can also be based on sample rate, but that has mostly to do with Nyquist frequency limits and the "stretch", if you will, of a reconstruction filter's slope. Which is not even really the point. The simple observation by many people is that plugins often sound better at certain sample rates. The big tradeoff with sample rates is drive space and overall processing power.

Bit depth is easier on both, and often used as a synonym of resolution with PCM audio.

In theory, 16 bit fixed point provides 96 dB range. This is better than an Ampex or Studer. With a super clean system and good dithering practice the range can be extended to around 108 dB. This would be equivalent to the theoretical range of 18 bit. At 20 bits you have 120 dB range. And a physical barrier at the thermal limit. Nothing we can make can surpass this range.
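The dB figures above follow from the size of the quantization step: each extra bit doubles the number of levels, adding about 6.02 dB of range. A quick sketch of that arithmetic (illustrative only):

```python
import math

def fixed_point_range_db(bits: int) -> float:
    """Ratio of full scale to one quantization step for an
    N-bit fixed-point word, in dB (~6.02 dB per bit)."""
    return 20 * math.log10(2 ** bits)

for bits in (16, 18, 20, 24):
    print(f"{bits} bit: {fixed_point_range_db(bits):.1f} dB")
# 16 bit -> ~96 dB, 18 bit -> ~108 dB, 20 bit -> ~120 dB, 24 bit -> ~144 dB
```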

And yet we have converters that typically record in 16 or 24 bit. Why would we need the resolution of 24 bits when the range is outside the thermal limit? There is more going on than dynamic range, perhaps.

Add to that, the ADC/DAC processes typically ONLY operate at 16 or 24 bit fixed depths. This is different from processing.

I can get a 16 bit mix going with a bunch of compression, some reverb, maybe some EQ or M/S processing or whatever, and throw a dither plugin on the 2 buss. We know that quantization errors in 16 bit (or any bit depth for that matter) cause distortion, and dither kills the distortion but adds noise. But when I toggle the dither on and off, I can't hear any noise or distortion at either setting. What I can hear is the snare. With dither on, it sounds full and I can hear the whole reverb tail. It has its own distinct place in the mix. As does everything else. When I bypass the dither, half the decay of the snare is gone. The stereo image collapses. Things sound less distinct and more like cardboard. This is the onset of harsh digital crap from artifacts and the only difference is a plugin at the end of the chain that you barely have to pay attention to.
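For what it's worth, the distortion-vs-noise tradeoff is easy to show numerically. A toy sketch (not any plugin's actual algorithm): quantize a sub-LSB value, like the tail of that snare reverb, to a 16-bit grid with and without TPDF dither. Undithered, the rounding error is deterministic and correlated with the signal, so the tail simply vanishes; dithered, the error is decorrelated into noise and the average output tracks the true value.

```python
import random

BITS = 16
SCALE = 2 ** (BITS - 1)          # 16-bit grid: one LSB = 1/32768 of full scale

def quantize(x: float) -> float:
    """Round a sample to the 16-bit grid, no dither."""
    return round(x * SCALE) / SCALE

def quantize_tpdf(x: float, rng: random.Random) -> float:
    """Add triangular (TPDF) dither, 2 LSB wide, before rounding."""
    dither = rng.random() - rng.random()   # triangular PDF on (-1, 1) LSB
    return round(x * SCALE + dither) / SCALE

rng = random.Random(42)
x = 0.4 / SCALE                  # a signal sitting below 1 LSB, e.g. a decaying tail
plain = quantize(x)              # always rounds to exactly 0.0: the tail is gone
dithered = sum(quantize_tpdf(x, rng) for _ in range(20000)) / 20000
print(plain, dithered)           # 0.0 vs. an average close to x
```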

It's been described as laziness to avoid it. The Pro Tools manual I have doesn't help either. It says don't dither 24 bit tracks. What it doesn't say is that you might not be able to notice for a while, but the artifacts left behind could stack up after multiple processes and come back to bite you in the ass.

At the extreme end of the scale you have MP3s that have been invaded by the swarming space goblins. Same problem, left unchecked through multiple stages. The sonic decimation remix.

A DAW will run its internal processes at a higher resolution than both the source and target formats for processing. Typically 32 bit float, but it could also be 48 bit fixed, like Pro Tools HD systems before version 10, or 64 bit double precision as is becoming more popular now. This helps a lot with applying gain changes. If all you do in a DAW mix engine is change a level, you're forcing the audio to be requantized. 32 bit float doesn't matter so much; it's basically the same as 24 fixed, but with 8 bits that can scale the resolution up or down as needed for volume changes. If the change in volume is precisely 1 bit or 6 dB, dither isn't necessary because the samples will line up in the same spot. No quantization error. Anything else will cause problems unless your DAC can work with 32 bit float.

Add to that, any plugins you might be running will probably not be using a floating point system for processing, as it becomes difficult to implement. So they might change the incoming signal to 64 bit fixed on the fly. When it's done, dither should be applied when it goes back to the mix engine. It just happens (or not); they don't tell you. It's just that some plugins sound better than others.

If I want I can quantize Pi to an integer value of 3. Couldn't possibly affect the circumference of a circle, could it?

Data resolution from bit depth is real, and it has a real effect on the processing capability of your audio. It's EASY to have errors in the math downstream and while in any practical sense the dynamic range is still governed by the thermal limits, there is just more going on than dynamic range. I've never heard of a 20 bit fixed point mix engine, but that's the practical limit of dynamic range we have to deal with.

There's a reason.
 
I disagree that you're actually disagreeing with me. :)

I do think that some of that information is old, that most decent plugs nowadays run floating point just like the DAWs that host them, but that's kind of an ancillary point.
 
If resolution is largely based on bit depth, and bit depth is linked to dynamic range, then you are actually agreeing with him.

Sometimes plugs will sound better at higher sample rates because of things like pitch shifting and time stretching, which manipulate the speed of, or the number of, samples. So more samples will help in that regard. Either way, the audio ends up truncated post processing.
 
Farview said:
If resolution is largely based on bit depth, and bit depth is linked to dynamic range, then you are actually agreeing with him.

Sometimes plugs will sound better at higher sample rates because of things like pitch shifting and time stretching, which manipulate the speed of, or the number of, samples. So more samples will help in that regard. Either way, the audio ends up truncated post processing.


Sometimes plugs will sound better at higher sample rates because of attention to detail in the code. Could be that if they're written to work at 96 kHz or whatever, then you're eliminating an extra scaling factor by running that rate. Above that you hit a wall of diminishing returns. I'm inclined to think of 192 kHz rates as marketing fluff that does more harm than good.

Given a 20 bit practical limit of dynamic range and real world benefits of higher level processing, I'd say there's more going on with resolution than simply dynamic range.

You say you wanna resolution, well, you know...

(ba-oom shoo be doo wap, ba-oom shoo be doo wap)
 
This is more of a problem in the A to D conversion process.
Lower volumes (gain) have fewer bits in conversion to work with. The faders themselves should not have anything to do with it, unless perhaps they are digital faders that present fewer bits of resolution at lower levels.
The idea of recording as close as possible to 0 dB is to a) lower the noise floor (improve the signal to noise ratio), and b) have the maximum bits available for the conversion process.
Your process should be:
1) Record individual tracks as close to max (0 dB if it is a digital recorder)
2) Mixdown to taste, with the total completed project output volume near max level - but leave a little room if you are going to master the project after your mixdown to give the mastering engineer room to work his magic.

Another tip: when you mixdown, first bring in tracks that have the highest sound energy levels, e.g. Bass and Drums. Set faders for these to allow room for the rest. Then bring in Guitars and keys, then vocals. On top of all that put "sweetening" tracks, such as strings, minor sound effects, lower level stuff, etc. Make the totality of all your tracks come close to maximum. If you are using a digital recorder, NEVER go over 0 dB in tracking or mixdown! Digital distortion is not forgiving like analog recording is.
This is just basic stuff and sometimes rules are made to be broken.
 
This is old and terrible advice, based on a misunderstanding of how the entire system works.

I really wish this old wives' tale would die already.

(I'm specifically referring to the "record as close to zero as possible" advice)
 
This is more of a problem in the A to D conversion process.
Lower volumes (gain) have fewer bits in conversion to work with. The faders themselves should not have anything to do with it, unless perhaps they are digital faders that present fewer bits of resolution at lower levels.
The idea of recording as close as possible to 0 dB is to a) lower the noise floor (improve the signal to noise ratio), and b) have the maximum bits available for the conversion process.
Your process should be:
1) Record individual tracks as close to max (0 dB if it is a digital recorder)
2) Mixdown to taste, with the total completed project output volume near max level - but leave a little room if you are going to master the project after your mixdown to give the mastering engineer room to work his magic.

Another tip: when you mixdown, first bring in tracks that have the highest sound energy levels, e.g. Bass and Drums. Set faders for these to allow room for the rest. Then bring in Guitars and keys, then vocals. On top of all that put "sweetening" tracks, such as strings, minor sound effects, lower level stuff, etc. Make the totality of all your tracks come close to maximum. If you are using a digital recorder, NEVER go over 0 dB in tracking or mixdown! Digital distortion is not forgiving like analog recording is.
This is just basic stuff and sometimes rules are made to be broken.

Nope...
 
Yeah? How would you control those 100 steps? The fader smoothly goes through those steps.
How would it be handled if hardwired?
Lol.
Actually, some audiophile gear does use individual resistors for a volume control. They usually use a rotary knob and there is no smooth transition between resistors... the level changes in discrete steps.
Big money per knob though.

Here's one... I guess you have to load it with resistors of your choice, so even more money, because if you're going for this you're gonna use high grade resistors.
GOLDPOINT PRECISION STEREO V24 STEPPED ATTENUATOR POTENTIOMETER, 50K
 
Well, I use the "fade" as an effect doing the music room live synth rave-up - also tremolo. I don't know of all that many people messing with hardware faders as a routine while recording a track, or two at a time. The stepped level on mic preamps isn't supposed to be a mixing thing, and that's often the case with slider levels - mixing happens in the DAW, anyway.
 
Actually, some audiophile gear does use individual resistors for a volume control. They usually use a rotary knob and there is no smooth transition between resistors... the level changes in discrete steps.
Big money per knob though.

Here's one... I guess you have to load it with resistors of your choice, so even more money, because if you're going for this you're gonna use high grade resistors.
GOLDPOINT PRECISION STEREO V24 STEPPED ATTENUATOR POTENTIOMETER, 50K

Yeah, I've seen those. Pricey indeed.
 
Yeah... and a totally useless thing for recording or mixing... strictly for audiophile playback gear.

Since when? Lab gear, maybe.

"This special 2-channel version of the Jensen Twin Servo 990 Mic Preamp was originally developed for Sony Classical to provide high resolution switch-selectable gain control. It uses a 22 position rotary switch with gold plated contacts, providing a gain adjustment range of 18-60dB in steps of 2dB".
 
AsciiRory said:
This is more of a problem in the A to D conversion process.

Not so much since delta sigma converters have become the norm within the past 20 or 30 years. Some are implemented better than others but pretty much everything is delta sigma now.

AsciiRory said:
Lower volumes (gain) have fewer bits in conversion to work with.

Yes, and the resolution goes down as a result. Experience has shown that applying triangular or TPDF dither at 2 bits wide upon capture and afterwards any time the fixed point word length gets changed will eliminate the distortion that results from truncating the data. Even if you have to do it 100 times.

AsciiRory said:
The faders themselves should not have anything to do with it, unless perhaps they are digital faders that present fewer bits of resolution at lower levels.

The only way faders would do that is if they're running on a fixed point mix engine. I believe Pro Tools HD 9 and older systems ran on 48 bit fixed point. People on the internet began discussions about how the audio would degrade as the mix progressed, so Digidesign unveiled their new, self-dithering 48 bit mix engine, which is now obsolete. Non-hardware accelerated versions of Pro Tools up to 9, and HD at version 10, used 32 bit floating point mix engines that don't have these problems.


AsciiRory said:
The idea of recording as close as possible to 0db is to a) lower the noise floor (signal to noise ratio), and b) have the maximum bits available for the conversion process.
Your process should be:
1) Record individual tracks as close to max (0db if it is digital recorder)
2) Mixdown to taste, with the total completed project output volume near max level - but leave a little room if you are going to master the project after your mixdown to give the mastering engineer room to work his magic.

It's a bad idea because of the design limits of operating at a standard "line level", typically +4 dBu or some such, which works out to about -18 dBFS nominal. There is no formal standard in this regard, but it's a common implied standard, a common calibration point for many converters on the market, and not a bad place to start to keep you out of trouble down the road. Line level measurements are meant to be taken as an RMS value, so peak measurements that reach higher are of no consequence because of headroom. If you gain stage properly, the peaks should be nowhere near 0 dBFS.
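For anyone who wants the numbers: with 0 dBu defined as 0.7746 V RMS, and a converter calibrated so that +4 dBu sits at -18 dBFS, digital full scale corresponds to +22 dBu. A quick conversion sketch (the calibration point is the common convention described above, not a formal standard):

```python
def dbu_to_vrms(dbu: float) -> float:
    """0 dBu is defined as 0.7746 V RMS (1 mW into 600 ohms)."""
    return 0.7746 * 10 ** (dbu / 20)

line_level = dbu_to_vrms(4)        # nominal +4 dBu line level, ~1.23 V RMS
full_scale = dbu_to_vrms(4 + 18)   # 0 dBFS, if +4 dBu is calibrated to -18 dBFS
print(f"line level ~{line_level:.2f} V RMS, full scale ~{full_scale:.2f} V RMS")
```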

Headroom is good.

If you overcook your levels there's no headroom so there's a chance of clipping and aliasing artifacts. If you run steady state signals over line level through cheap preamps you'll distort the preamp and possibly the analog input stage of the converter in a very unflattering way. The system was designed to run with headroom to avoid these things, same as an analog system. These problems can become more apparent than truncation.

Analog tape had much more serious signal to noise concerns and a sweet spot at around 0 VU (= line level, or -18 dBFS) depending on tape formulation and bias settings. Digital has no such sweet spot, so there's little consequence to recording a little bit under this level. It may actually help to reduce distortion with cheap gear.


AsciiRory said:
Another tip is when you mixdown, first bring in tracks that have the highest sound energy levels, i.e. Bass and Drums. Set faders for these to allow room for the rest. Then bring in Guitars and keys, then vocals. On top of all that put "sweetening" tracks, such as strings, minor sound effects, lower level stuff etc. Make the totality of all your tracks should come close to maximum. If you are using a digital recorder, NEVER go over 0db in tracking nor Mixdown! Digital distortion is not forgiving like analog recording is.

Well, we work in sound design so there are certain methodologies that everyone likes. People have to settle on their own mix style. Yes, tracks will sum so the total energy of a mix with a high track count can start to climb up. It's a very good idea to keep things in check.

But it should be more about the song than technical geekery. Good gain staging and dithering habits are easy enough to develop and allow everyone to get on with it. Developing an ear for truncation, aliasing and bad digital artifacts can help people to avoid the pitfalls they might run into with certain settings on their own system or if they happen to run into a plugin with bad code.

The only reason left to get as close to zero as possible is the loudness wars, and that should be left to the mastering engineer. If the LUFS broadcast standards take hold as they're supposed to, this will be a thing of the past.
 