The dirty secret of Digital Adders...

  • Thread starter: donpipon
Also, a lot of 'studio in a box' things like the Roland VS-2480 have fixed point summing busses. It is really easy to overload the summing buss, and turning down the master fader doesn't help in that situation.
 
I don't see it as making a whole lot of sense to finalize the mix volume until you have an actual mix to finalize and a target finalization "sound" to actually shoot for.

That.
+1
Bravo!
ding-ding-ding

(I need to hit the minimum character count for my post to be accepted)
 
Let's please remember that peak levels don't relate to perceived loudness. The meat of the tone that we hear relates to the average level and that is how we should monitor for setting levels.

You should set your gain staging to work with the I/O structure of the gear and leave enough headroom in the digital world above that average reference level, but don't worry about peak levels first.

No, I disagree. In digiland you don't have to concern yourself with average levels during tracking, so long as your gain staging maximizes dynamic range. That tends to occur at a low level.

Let's accept this as the ideal gain staging:

Acoustic noise > microphone noise > preamp noise > converter noise

That's an ideal world; it doesn't always happen. But the last two stages of that ordering will normally hold (unless the source is superloud, in which case we don't care about the noise floor anyway). If we have a converter with a dynamic range of 110dB (not too tough), then we just need to make sure the combined noise from the previous stages sits at least 10dB above the converter's noise floor. If it does, then further gain from the preamp is technically unnecessary*. That gain figure can be calculated for a given mic-preamp-converter combination. If the converter is clipping at that gain setting, then reduce the gain until the converter doesn't clip. Average levels need never even be considered.

Here's an example (ignoring acoustic noise):

Mic: -35dBV/Pa, 14dBA** self-noise
Preamp: -120dBA EIN
Converter: 110dBA dynamic range, 0dBFS = +6dBV (set at -10dBV) or +20dBu/+18dBV (set at +4dBu)

Microphone noise floor is -115dBA. Preamp is only 5dB lower; that is suboptimal but unless we shop for a quieter preamp that's life (there are plenty of quieter preamps, this is just an example). Combined noise is thus -114dBA.

Converter set at -10dBV has peak level of +6dBV; that means noise floor is -104dBA. We need to get our noise floor 10dB above that; that's 20dB of gain. A nominal 94dBSPL signal (that's an average level) will result in -15dBV from the preamp, or -21dBFS. Same thing at +4dBu, we just need 32dB of gain instead of 20dB.
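If it helps to see the arithmetic laid out, here's a minimal sketch of that calculation (the numbers are just the ones from this example, and the noise sum assumes uncorrelated sources):

```python
import math

def db_sum(*levels_db):
    """Power-sum uncorrelated noise sources given in dB."""
    return 10 * math.log10(sum(10 ** (l / 10) for l in levels_db))

mic_sensitivity_dbv = -35.0   # dBV output at 94 dB SPL (1 Pa)
mic_self_noise_dba  = 14.0    # equivalent acoustic self-noise, dBA SPL
preamp_ein_dbv      = -120.0  # preamp EIN, A-weighted
conv_dr_db          = 110.0   # converter dynamic range
conv_fs_dbv         = 6.0     # 0 dBFS = +6 dBV at the -10 dBV setting

mic_noise_dbv      = mic_sensitivity_dbv - (94.0 - mic_self_noise_dba)  # -115 dBV
combined_noise_dbv = db_sum(mic_noise_dbv, preamp_ein_dbv)              # about -114 dBV
conv_noise_dbv     = conv_fs_dbv - conv_dr_db                           # -104 dBV

# Gain needed to put the analog noise floor 10 dB above the converter's:
gain_db = (conv_noise_dbv + 10.0) - combined_noise_dbv                  # about 20 dB

# Where a nominal 94 dB SPL source then lands relative to full scale:
avg_dbfs = (mic_sensitivity_dbv + gain_db) - conv_fs_dbv                # about -21 dBFS

print(f"gain: {gain_db:.1f} dB, nominal level: {avg_dbfs:.1f} dBFS")
```

Swap in conv_fs_dbv = 18.0 for the +4dBu calibration and the same math returns the ~32dB gain figure, with the nominal source still landing around -21dBFS.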

That should be plenty of headroom for most sources, but if our source does indeed have more than 20dB of crest factor, then we need to reduce gain. If our source has a higher average level, then we *may* need to reduce gain. We may not. It depends on the peak level, not the average level. If we can avoid it, we shouldn't reduce gain just because the average floats above -20dBFS. If the peaks aren't clipping, we will lose dynamic range if we reduce gain.

What if the source is quieter than 94dBSPL? Do we need to increase gain to get the average back to our nominal -20dBFS? We may if we like, but we don't have to, because we can't add any dynamic range--we are already limited by the noise floor. Changing the level with a digital gain change during mixing will result in the same dynamic range.

So you can see we aren't pegged to an average level; instead we just have to be mindful of the noise floor. That was true in tapeland as well, except one had to be mindful in the other direction--if you let average levels fall below nominal, the noise penalty was severe, but excessive levels weren't a big deal. It's the opposite in digital, and that makes the concept of a nominal average level not particularly useful during tracking.


* Contrary to popular belief, many if not most preamps have lower equivalent noise at higher gain settings, so one might need to further increase preamp gain to realize the lowest possible analog noise floor.

** Some people don't like A-weighting. It has its limits; but too bad, this is my example! Write your own example, and you can use whatever measure you like!


In the box, you can use a plug-in with metering that shows both peak and average, but I stay out of that mode because it's not how I prefer to work.

http://www.naiant.com/vst/vu.html

:D
 
So you can see we aren't pegged to an average level; instead we just have to be mindful of the noise floor
I think Otto and I just went through something similar to this in another thread just the other day, and Otto was taking your position, more or less. You're both right in your fashion, but I think maybe the language - or maybe the intent - is getting muddied a bit.

I don't intend to speak for Otto - and I welcome his response, even if he calls me to the woodshed for this post :o - but I don't think anybody is "pegging" things to average level, any more than an astronomer "pegs" the location of the planets to the location of the sun (if that were the case, I'd need a WAY better air conditioner and some SPF 3,000,000,000 sun block :D ). However, the sun does provide the attractor around which the planets dance, and on a grand scale defines the general location of the solar system. And without that sun, the planets would all just go flying away in their own directions and I'd need to exchange my air conditioner for a nuclear powered space heater.

In this way, line level provides the attractor around which (post-preamp, of course) the solar system of signal chain revolves and the point where the majority of mass of the audio (average levels) will tend to gravitate. That doesn't mean that the RMS level *has to* be pegged specifically to line level any more than it means that the planets need to go crashing into the sun.

Gain structure should be gamed, playing the signal level down the line to get the best combination or balance of low noise/high dynamic range and amount of desirable/undesirable distortion, and these numbers along the way could well deviate the average level off of +4dBu (let's not even bother talking about -10dBV, shall we?), just as for us humans, it's best to keep the Earth some 93 million miles or so away from the sun.

But this is homerecording.com; a forum where most of the folks asking questions have no idea what the difference between dBVU and dBFS is (not to mention all the rest of the dB types out there), or that they even exist as different measuring scales, are taught from the womb that they need to record as hot as possible without clipping, understand nothing about gain structure strategy or converter calibrations, and so forth. They have about as much proper concept of the solar system as the ancient Greeks did.

It only makes sense to start describing the solar system of audio and to build understanding of gain structure by stating that the sun is at the center and all the planets, including Earth, revolve around it, and that the intended design center of the audio chain is the line level around which the I/O of every piece of gear (save the microphone) is meant to operate. Without it as the guiding attractor, none of our gear would work together properly, and our signal levels would go flying off into the aether.

It sure gets people off of the Ptolemaically wrong idea of 0dBFS being the center of the solar system, and even if the oversimplified gain strategy of "pegging" RMS to peri-line level is not ideal to those who can properly play the gain game, it yields results that are far superior to what they've been doing, usually more than good enough for rock n' roll, and lays the foundation (in more than one way) for the concept and *practice* of optimizing gain structure.

G.
 
way could well deviate the average level off of +4dBu (let's not even bother talking about -10dBV, shall we?),

No, we need to be talking about -10dBV, because it's much closer to what the true physical average level fed to a converter should be. It's also more efficient than +4dBu if the circuit is designed for only the -10dBV operating level.

Of course, I am advocating -10dBV with professional connectors and balanced lines, don't get me wrong. But the higher level of +4dBu is an anachronism, just as the historical reasons for the "+4" and "u" parts of it are no longer relevant.

One time an objection was made that -10dBV was "wimpy" in comparison with +4dBu; the hotter level was needed to maximize SNR in transmission (where the noise was induced). Better not ever use a microphone then . . . . :rolleyes:

+4dBu is about 12dB hotter than -10dBV. 12dB is a factor of 4 in voltage, which is also roughly the factor of power wasted in a +4dBu circuit that is intended to feed a converter.
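A quick back-of-the-envelope check of those figures (my own arithmetic, nothing from a spec sheet):

```python
# Checking the gap between +4 dBu and -10 dBV.
# dBu is referenced to 0.7746 V, dBV to 1.0 V.
import math

v_plus4dbu   = 0.7746 * 10 ** (4 / 20)     # about 1.23 V
v_minus10dbv = 1.0    * 10 ** (-10 / 20)   # about 0.316 V

ratio   = v_plus4dbu / v_minus10dbv        # about 3.9x in voltage
diff_db = 20 * math.log10(ratio)           # about 11.8 dB
print(f"{diff_db:.1f} dB difference, {ratio:.1f}x voltage")
```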
 
No, we need to be talking about -10dBV, because it's much closer to what the true physical average level fed to a converter should be. It's also more efficient than +4dBu if the circuit is designed for only the -10dBV operating level.

Of course, I am advocating -10dBV with professional connectors and balanced lines, don't get me wrong. But the higher level of +4dBu is an anachronism, just as the historical reasons for the "+4" and "u" parts of it are no longer relevant.
But in this case, aren't "should be" and "is" two different things? How many non-PC-soundcard converters have you come across that use balanced XLR ins and are spec'd solely for -10dBV?

Whether +4dBu is an anachronism or not is like arguing whether "clockwise" is an anachronistic term because most clocks are digital these days. It still has relevant meaning. I'd argue that the same is true for +4dBu "line level". Whether it's really necessary to today's technology or not, it's still the "standard" around which the vast majority of our gear is calibrated.

ADDENDUM: You're right that dBu is anachronistic. For the audio engineer, so pretty much are dBv and dBm. Those all really should remain relegated to the realm of EE, and intersect with audio engineering only to the degree that the engineer's expertise intersects with electronic engineering.

For the audio engineer, there are only two calibration scales that primarily matter these days; dBVU on the analog side and dBFS on the digital side. And the calibration standard on the analog side is 0dBVU, or just plain 0VU. It just so happens that most (not all, but certainly the consensus majority) pro-grade audio gear is calibrated (when they actually *do* have meters, which too many boxes woefully don't) for 0VU=+4dBu.

Now, this does NOT mean that one should by dogma (stealing from another thread ;) ) be recording with RMS levels at 0VU. You're absolutely right that in many - if not most - stages along the gain structure they may easily be best at -7 to -10dBVU RMS. When measuring to analog tape calibration, these levels can easily jump higher. And so forth. 0VU is not a magic point at all when it comes to actual gain levels.

But it is a calibration reference. And it's only when understanding that reference - i.e. what 0VU really means - that one can make the transition to understanding digital dBFS recording levels via the calibration of the converter in use.

0VU/+4dBu is not a magic recording level, peak, average or RMS. But it is the Rosetta stone connecting the different dB scales, and the point around which any analog metering that does exist in the chain (not counting some ancient analog tape gear that may still use even more anachronistic dBm metering and whatnot) revolves.

G.
 
Also, a lot of 'studio in a box' things like the Roland VS-2480 have fixed point summing busses. It is really easy to overload the summing buss, and turning down the master fader doesn't help in that situation.

Very true Farview. I did this experiment on my BR1600 last night with a wave generator, voltmeter and oscilloscope to try and see just what it did with various voltages input into its channels. It didn't take much to clip the waveform somewhere in its structure.

And you're right, once it clipped, turning down the master fader did nothing, just created a lower voltage clipped wave.

I stuck it up on the Roland / Boss thread here with the pictures.

Boss BR1600 gain experiment

GGGGGeoff:)

PS, I liked MS's point in the other thread that the converter ICs are set to 1VRMS anyway, so any voltage requirement over that is just padded down and amp'd back up again.
 
But in this case, aren't "should be" and "is" two different things? How many non-PC-soundcard converters have you come across that use balanced XLR ins and are spec'd solely for -10dBV?

Probably none, because it's commercial suicide. Has nothing to do with actual technical requirements though. How many V8s stay stuck in traffic every day? Same thing.
 
How many V8s stay stuck in traffic every day?
Do people still drink V8 these days? ;) :D.

Jon, educate me a bit on this technical level when it comes to what you were referring to about the input levels of the converter ICs, if you'd be so kind. There are a few things I don't understand or am not 100.0% sure of about that. A few questions:

- When you talk about the input voltage equated to 0VU on the chip, I don't understand. The only metering I've ever noticed on converters is on the dBFS side. I'm not sure how VU measurements even relate to the IC?

- And related to that, what relation is there between the input voltage on the input pin of the XLR connector of the converter's box and the input voltage on the input pin of the IC? Is it 1:1? Or are you saying that there is intermediate solid-state circuitry between the two that steps down the line level coming into the box to the voltage "wanted" by the IC (which would probably answer the first question :) )?

- What is the typical relation of the rated maximum input voltage on the converter circuit (usually rated in dBu) and an output of all ones (0dBFS)? And how, if at all, would this relate to the nominal input voltage calibration asked about above?

As you probably know, I (and many others of published repute greater than I) have always gone under the working assumption that the maximum input voltage of the device would convert to digital saturation (0dBFS). I know there's always some play in electronics and electrical values, but more or less within that kind of margin of error, this has been an assumed starting point. It doesn't seem to make sense that a converter would purposely allow further analog gain beyond the point where it maxed out on the digital side, nor that the maximum input voltage would purposely result in something less than 0dBFS.

Assuming that were the case, this is the basis for the calculation of the converter's conversion calibration, as I have always understood it. By taking the published maximum voltage rating, almost always rated in dBu, subtracting 4 from it to indicate the difference between it and +4dBu, and then changing the sign on the value to negative, you wound up with the rough dBFS equivalent to +4dBu analog coming into the box. I have checked a couple of specs along the way (several years ago now) which seemed to more or less verify this method, though I've never actually run physical experiments to verify it in more than a cursory spec-check way.
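For what it's worth, that rule of thumb reduces to a one-liner; here's a sketch of it, using a hypothetical +24dBu full-scale rating rather than any particular converter's spec:

```python
# Estimate where +4 dBu lands in dBFS, given a converter's published
# maximum input level (the dBu value that produces 0 dBFS).
def plus4_in_dbfs(max_input_dbu: float) -> float:
    return -(max_input_dbu - 4.0)

# Hypothetical box rated for 0 dBFS = +24 dBu: +4 dBu comes in around -20 dBFS.
print(plus4_in_dbfs(24.0))
```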

With this method in mind, I'm not sure if or how the whole idea of the input voltage of the IC plays into it, since what really seems to matter there is the comparison against what is being fed into the box before it gets to the IC. +4dBu on the XLR input pin is +4dBu being fed by the line into the box, and the maximum voltage in dBu is the maximum voltage. I don't get how any intermediate steps would change that relationship?

G.
 
No, I disagree. In digiland you don't have to concern yourself with average levels during tracking, so long as your gain staging maximizes dynamic range. That tends to occur at a low level....

I think everything you say is accurate. However, I'm much more primitive than you. When I'm recording with my digital standalone in 24-bit mode, I'm not primarily interested in maximizing dynamic range on each track. I'm interested in a simple system of tracking that ensures I record with enough headroom to avoid overs and keeps levels reasonably consistent from track to track with regard to real music power and perceived loudness, while coming reasonably close to maximizing dynamic range.

The exact peak level is a curious artifact of the particular track and take. I could rerecord with the same settings and the same real loudness and get a slightly different result each take. Does that mean I should change the gain somewhere? I don't have time for that, nor is there any real sense in it. It won't meaningfully affect the sound of the track or the mix.

Besides, the real beauty of using the averaging meter to set gain is that you end up producing tracks with consistent loudness, so when you mix, your levels are all comparable. Setting up a rough mix is usually a matter of setting all the faders at the same level, about 1dB below unity for every basic track to be mixed. I also like to mix with plenty of headroom. You can always squash things a few dB more or raise the final level in the mastering stage, if need be.

And of course, I work the same way as far as monitoring and setting levels when mixing... I'm paying attention to the overall average level, which relates to how loud the mix is, and keeping it around the target operating level, making sure that avoids any overs, but otherwise not worrying about exactly what the peak levels are, whether it's -2 or -5.

That's simply how I like to work. I may not quite maximize dynamic range on every track. I might squander a few dB here and there by tracking with average levels at -24 dB, where sometimes the peaks are -10, sometimes -12, sometimes -4. As long as they aren't over, I simply don't care. Heck, I don't even use noise reduction on my tape machines, why would I care whether the recorder noise floor is down 95 dB or 105 dB? :)

At the same time, I do appreciate your comments about how lower levels would work well in the digital world. I hadn't really thought that much about it, but I can see that that opens up a world of possibilities for really good sounding recording with very small, very light, very low power gear! Small is beautiful in my world... it leaves more room for instruments! :)

Cheers,

Otto
 
Do people still drink V8 these days? ;) :D.

Jon, educate me a bit on this technical level when it comes to what you were referring to about the input levels of the converter ICs, if you'd be so kind. There are a few things I don't understand or am not 100.0% sure of about that. A few questions:

- When you talk about the input voltage equated to 0VU on the chip, I don't understand. The only metering I've ever noticed on converters is on the dBFS side. I'm not sure how VU measurements even relate to the IC?

- And related to that, what relation is there between the input voltage on the input pin of the XLR connector of the converter's box and the input voltage on the input pin of the IC? Is it 1:1? Or are you saying that there is intermediate solid-state circuitry between the two that steps down the line level coming into the box to the voltage "wanted" by the IC (which would probably answer the first question :) )?
...
-G.
Wow. I had a chat quite a long time ago with the US RME tech support guy, I believe when I first got my ADI-8s, about the three input range selections and which one was 'optimum' and closest to 'straight in'. It's been long enough that I've actually forgotten which one it is :o, but the upshot was that it was way down on the 'impact' scale.
Funny thing is, if it hadn't had the switch there in the first place I wouldn't have even questioned it. Fun geeky stuff though.. :)
Carry on.
 
I think everything you say is accurate. However, I'm much more primitive than you. When I'm recording with my digital standalone in 24-bit mode, I'm not primarily interested in maximizing dynamic range on each track. I'm interested in a simple system of tracking that ensures I record with enough headroom to avoid overs and keeps levels reasonably consistent from track to track with regard to real music power and perceived loudness, while coming reasonably close to maximizing dynamic range.
...snip
Cheers,

Otto
Is this not the beauty of our 24bit wide palette. :)
In Sonar I set the track rec meters ‘peak + rms’, range -24 – 0.

From eight feet away setting up the pres, running phones mix during the tracking, whatever- don’t even need the little numbers.
As soon as the RMS part pokes into view you're there: -18 to -20 or so record level, and the peaks go.. to 'who gives a rat's axx' land :p and ..
Rough initial alignment done.
This is fairly cool IMHO. :)

Let's accept this as the ideal gain staging:

Acoustic noise > microphone noise > preamp noise > converter noise.
..snip

.. which was not to imply otherwise; this, however, is also a very powerful concept to have under one's belt. :)
 
Is this not the beauty of our 24bit wide palette. :)
In Sonar I set the track rec meters ‘peak + rms’, range -24 – 0.

From eight feet away setting up the pres, running phones mix during the tracking, whatever- don’t even need the little numbers.
As soon as the RMS part pokes into view you're there: -18 to -20 or so record level, and the peaks go.. to 'who gives a rat's axx' land :p and ..
Rough initial alignment done.
This is fairly cool IMHO. :)

I agree completely with this as well as ojafen; it shouldn't be that critical. I was trying to demonstrate that but I guess I failed. Although again it's possible to have a source with >20dB crest factor (drums, live jazz, classical), so it's not quite as easy as -20dBFS RMS and ignore peak.

I have the same RME unit; technically it is slightly quieter at the highest input setting, something like 2dB. That isn't because of the converter though; it would probably be a limitation of its analog input stage. Probably the switch activates a pad that follows the analog input buffer, such that higher levels into the buffer keep the buffer's noise floor to a minimum. Although I think the RME uses 4580s as buffers, which should yield -120dBA noise floor or so; not sure why the noise would change.

Anyway, IC converters max out at 0dBFS between 0dBV and +6dBV. So any higher input into an ADC must be padded, and a DAC output amplified.
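To put a rough number on that padding, here's a sketch; the +24dBu full-scale figure is just an assumption for illustration, and the +6dBV IC ceiling is the top of the range mentioned above:

```python
# How much the analog front end must pad if the box is calibrated so that
# 0 dBFS = +24 dBu but the converter IC itself clips at +6 dBV.
DBU_TO_DBV = -2.2    # 0 dBu = 0.7746 V, which is about -2.2 dBV

box_max_dbu = 24.0   # hypothetical box calibration: 0 dBFS = +24 dBu
ic_max_dbv  = 6.0    # converter IC full-scale input, from the post above

pad_db = (box_max_dbu + DBU_TO_DBV) - ic_max_dbv   # about 15.8 dB of padding
print(f"pad needed: {pad_db:.1f} dB")
```

The DAC side is the mirror image: the IC's output gets amplified back up by roughly the same amount to hit the box's rated output level.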
 
I agree completely with this as well as ojafen; it shouldn't be that critical. I was trying to demonstrate that but I guess I failed. Although again it's possible to have a source with >20dB crest factor (drums, live jazz, classical), so it's not quite as easy as -20dBFS RMS and ignore peak.

I have the same RME unit; technically it is slightly quieter at the highest input setting, something like 2dB. That isn't because of the converter though; it would probably be a limitation of its analog input stage. Probably the switch activates a pad that follows the analog input buffer, such that higher levels into the buffer keep the buffer's noise floor to a minimum. Although I think the RME uses 4580s as buffers, which should yield -120dBA noise floor or so; not sure why the noise would change.

Anyway, IC converters max out at 0dBFS between 0dBV and +6dBV. So any higher input into an ADC must be padded, and a DAC output amplified.
Failed? Wait I know this one.. ‘Umm, What is ‘No’? ;):)


It's exactly going over this ...stuff (or through..:)) that gets it into our heads so’s it sticks and becomes the basis of our working—
(vs various other assorted bs.. :D
 
Anyway, IC converters max out at 0dBFS between 0dBV and +6dBV. So any higher input into an ADC must be padded, and a DAC output amplified.
Then I confess I don't get the relevance of even talking about this when it comes to talking about gain structuring and recording levels and all that. The padding and amping on each side of the IC - it seems to me - simply make the actual intermediate IC voltages an all-but-invisible housekeeping step that, as long as the pads and amps are evenly calibrated (which they should be), does not play into gain structure. Maybe I'm missing a point. Wouldn't be the first time... ;)

G.
 
Then I confess I don't get the relevance of even talking about this when it comes to talking about gain structuring and recording levels and all that. The padding and amping on each side of the IC - it seems to me - simply make the actual intermediate IC voltages an all-but-invisible housekeeping step that, as long as the pads and amps are evenly calibrated (which they should be), does not play into gain structure. Maybe I'm missing a point. Wouldn't be the first time... ;)

G.

The point is that's not how gear should be engineered. It's not how tape was engineered. +4dBu existed as a useful nominal level presumably because it was a reasonable level to feed a tape (back me up here, I'm not a tape guy).

+4dBu continues to exist as a standard in digital studios because . . . we need extra heat? I'm not really sure. Nobody designed gear back in the days of tubes to waste heat. Tubes need a lot of heat, therefore most gear didn't have that many tubes. Many people don't understand that the abrupt clipping behavior of transistor gear exists mainly because there are a whole bunch of transistors compensating for other transistors, such that distortion is very low until it suddenly gets very high. One can do that because transistors are cheap and efficient.

Do a preamp design with a half-dozen dual triodes and you can do the same thing (alternatively, do a design with two transistors and no feedback, and it might sound somewhat "pleasantly" distorted). But there are few tube designs like that because of cost, heat, and power.

Because of the transistor, the scale of power being wasted is small. If we accept that two-thirds of power in a digital studio is currently being wasted, maybe we will save 40W on the front-end. Doesn't sound like much until you try to run off of batteries . . .

Of course computers have gotten drastically hungrier and let's not even talk about graphics cards.

My point is this doesn't have to be. If you optimize the design around your recording medium, as was done with tape, you end up with gear that looks rather different and costs a bit less than it does today.

Other things puzzle me . . . I sell pads to people that are using preamps with output transformers that they want to saturate. Most high-end audio transformer manufacturers strive for low saturation at nominal levels. So you see people driving output transformers well above +20dBu, more like +30dBu from the sound of it. That takes quite a lot of power.

Instead, reverse what the transformer manufacturers have done and use a smaller transformer. It will saturate maybe at +4dBu, maybe lower. It will weigh less and cost less, and you won't have to buy an external pad to compensate for your excessive operating level.

Sorry, I'm a bit of an iconoclast when it comes to stuff like this . . .
 
Now I understand where you're going with that.


You are calling for a complete re-think of how the world works in a thread created by someone that doesn't know how it works now. In doing so, you seem to have gone over the heads of people who actually do know how the world works now.

What are we talking about again?
 
+4dBu continues to exist as a standard in digital studios because . . . we need extra heat?
LOL, OK, I misunderstood the why of where you were coming from.

I'll back you up. I don't know if you remember (I think it was you that contributed well to the conversation at the time), but a couple of years ago I raised the question in one of the forums about the possibility of "greening" the recording industry. Using today's technology to reduce standard voltages would help. At the time you had raised some caveat having to do with low-power solid state device types (I don't remember if it was actual transistors or not) being too noisy or something like that, but simply dropping overall line voltage standards would probably help, at least somewhat.

While not quite the same thing, there is one interesting and promising trend starting to happen in many brick studios, largely because of the troubled economy for now, at least according to an article I read in (I think) Pro Sound News just a few weeks ago; and that is that many studio managers are coming to their senses and realizing they don't really need those big coal-burning mixing desks so large that they get their own zip code.

They are actually starting to ask themselves why they need 64 or 72 or 96 channels of channel strips when 90% of the time they might be lucky to use 16 or 24 of them at any given time in any given session. That's the kind of common sense, screw-the-fancy-gear-list-used-mostly-for-advertising thinking that five years ago I would have thought happened only in my wet dreams.

And apparently that new downsized market demand has companies like Neve and SSL actually responding with top shelf gear in much smaller formats. It's all still using the old standard, unfortunately. But it's at least a proof of concept that there may be a pragmatic future for your iconoclasm :).

Yeah, this is all way OT, but I think the OT had pretty much dead-ended of its own volition a few pages ago. And I'll be happy to return to it on the turn of a post, if need be.

G.
 
You are calling for a complete re-think of how the world works in a thread created by someone that doesn't know how it works now. In doing so, you seem to have gone over the heads of people who actually do know how the world works now.

Yeah, I'm a bit radical. But hey, here's a not-too-radical idea for Glen's giant console problem. A console manufacturer has to support a +4dBu environment with lots of headroom, because they don't know the end-user application.

However, just like my RME has a calibration switch, so too could any other device, except make that not only a pad but also change the power rails to low voltage. That's too expensive for a small bit of rack gear because the cost of the multitap transformer would outweigh the savings. But for a large format console, it's a no-brainer.

The other thing that's going on is that most of the analog audio IC development in progress is for low-power stuff, because portable applications rule the day. Not so much for pro audio, but that market isn't big enough to attract major R&D. Still, the benefits filter down to us. There are much better and quieter low power ICs now or coming soon. TI announced an opamp where I almost wet my pants (I get excited easily); it was truly quiet enough to run as an inline amp for ribbon mics without causing a noise problem, and its current draw was shockingly low, like 2mA or so. Heck, put the thing in a ribbon mic!

I saw Jim Williams comment the other day on GS that if CA electric rates kept going up, he would replace every IC in his studio with low-power stuff. That's pretty radical from a guy whose thirst for fast video opamps is legendary!

What are we talking about again?

I dunno :confused:
 