The dirty secret of Digital Adders...

  • Thread starter: donpipon
They are actually starting to ask themselves why they need 64 or 72 or 96 channel strips when 90% of the time they might be lucky to use 16 or 24 of them at most in any given session.

That's a pretty strong argument for a digital console; you can scroll through channel groupings across your 24 channels of control surface, most of the board runs on 5V, and only the analog interfaces need to run at higher voltages. I don't know from experience, but I bet their power consumption is drastically lower than the analog option.
 
That's a pretty strong argument for a digital console; you can scroll through channel groupings across your 24 channels of control surface, most of the board runs on 5V, and only the analog interfaces need to run at higher voltages. I don't know from experience, but I bet their power consumption is drastically lower than the analog option.
They'll still want the analog gear on the business end and not just digital emulations, of course. (They may want a scaled down Neve desk right now, but they still want Neve.)

Which is why I've been waiting for a loooong time for the powers that be to get off their asses with multi-point/multi-contact LCD or OLED touch screens. It's finally happened. Now what I envision is that any and all hardware will be racked out of sight and all networked to a central large-format touch screen which will provide digital control over the analog boxes, including dynamic power management. You want a 73-track Neve? Fine: you have rack hardware modules providing those 73 strips, and your touch screen is now a facsimile of a large Neve control surface that dynamically and automatically powers up and pulls in analog channel strips only as required. Every compressor, reverb, etc. will still be a hardware rack mount, but will also be networked to the mixing station for device control, and will have its own popup or sidebar display on the touchscreen "desk", much like digital plugs do now, except the displays will be providing dynamic digital control over a real analog box, with automatic power management.

You'll still have "real" Neve or SSL or whatever analog circuitry, but only those modules needed by the project/session will be sucking up any power at any given time, automatically managed by the digital control software, with only one central configurable control surface screen for the whole ball of wax.

Of course, this control desk would/could also work like a standard digital DAW computer, running PT or Cubase or whatever you like, eliminating the need for separate systems and power requirements for the analog and digital mixing sides. In fact, one could seamlessly intermix the two technologies: with your one mixing surface you could be blending your analog gear and your digital software on a single desk. Want to strap that analog UA box across your DAW master bus? You got it. And the UA won't even power up until/unless it's called up on the desk, and vice versa.

G.
 
TI announced an opamp that almost made me wet my pants (I get excited easily): it was truly quiet enough to run as an inline amp for ribbon mics without causing a noise problem, and its current draw was shockingly low, like 2mA or so. Heck, put the thing in a ribbon mic!
But is it as "warm" as toobs :confused: We want TOOOOBS! :o
 
Which is why I've been waiting for a loooong time for the powers that be to get off their asses with multi-point/multi-contact LCD or OLED touch screens. It's finally happened. Now what I envision is that any and all hardware will be racked out of sight and all networked to a central large-format touch screen which will provide digital control over the analog boxes, including dynamic power management.


Oh, yeah! I'm totally waiting for that. Think Jazz Mutant Lemur on a bigger scale! You could have modules for every kind of eq, processing, faders, synths, etc. just pop up in the desired places. Yeah, that will be cool indeed. I really don't want it to be too big. Probably about 30" wide would be good enough. Connections to I/O and storage and extra processor units via Gigabit Ethernet or something faster.

Cheers,

Otto
 
Perhaps this is a well-known issue amongst the engineering community... but some home-studio owners might not be completely aware of it...

I've recently discovered (because I'm a newbie after all...) an amazing bug when mixing down several tracks of digital audio on a desktop-computer-based DAW:

Unless you're using the latest HD Accel core... the ALU system on most (each and every...?) audio cards is rather shitty... I had doubts in the past, but now that I've gained some experience I'm absolutely certain of this..


What does this mean??

That when you're mixing multiple tracks in mixing software on a PC (and it doesn't matter if you have the best quadruple-octuple-core super Power Mac...), the summing of those audio tracks to a single 2-track master fader produces a hard, brickwall digital compression effect, very audible and harmful to the final audio quality. ALWAYS. The only thing you can do to avoid this is to perform a mixdown peaking at -10dB (or less!!) on the final .WAV or .AIFF file or whatever...

Do the test for yourself!!! Take any of your mixes in any multitrack software and perform 2 mixdowns: one peaking near the ceiling (-1, -2 dB) and the other at -6 or -10, and hear the difference!!! The reason is simple: there's no concept of headroom in digital audio like there is on analog consoles. Yes, even at 24- or 32-bit resolution... the problem is the same: the budget on digital adders is always poor on desktop computer sound cards. And if you're planning to take your mix to a professional mastering studio, the final level of that mixdown can really make the difference!
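If you want to run that comparison as a measurement rather than by ear, here is a minimal numpy sketch of the level-matched version of the test (the noise tracks are hypothetical stand-ins; substitute your own audio loaded as float32 arrays). It sums the same tracks at two gains, brings the quiet mix back up, and measures the residual; in straight floating-point summing that residual sits down at rounding error, well below -100 dB:

```python
import numpy as np

# Hypothetical stand-ins for 24 recorded tracks.
rng = np.random.default_rng(0)
tracks = [0.1 * rng.standard_normal(48000).astype(np.float32)
          for _ in range(24)]

hot_mix = np.sum([t * 0.9 for t in tracks], axis=0)    # the "hot" mixdown
safe_mix = np.sum([t * 0.09 for t in tracks], axis=0)  # same mix, 20 dB down

# Level-match the quiet mix back up and subtract.
residual = hot_mix - 10.0 * safe_mix
peak = np.max(np.abs(hot_mix))
print("residual relative to peak:",
      20 * np.log10(np.max(np.abs(residual)) / peak), "dB")
```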

I've recently been to some mastering studios to clear up any doubts: they ask for digital mixdowns peaking at -6dB.

Did you know this secret?
Here's one of the responses from the 'Cake guy I was referring to earlier. Post #3-ish.
http://forum.cakewalk.com/tm.aspx?m=913060&high=lower+buss,faders

He is mostly addressing the OP's (there, but here also :)) first point.
 
+4dBu continues to exist as a standard in digital studios because . . . we need extra heat? I'm not really sure.

I think you answered your own question two posts later:

A console manufacturer has to support a +4dBu environment with lots of headroom, because they don't know the end-user application.

If all the analog tape decks and large format consoles and "vintage" (sic) outboard gear suddenly -- or eventually -- disappeared, then one could certainly make the case for retiring +4dBu as a standard. Until that day, one needs to accommodate as many possible end users as practicable in order to keep one's product viable.
 
I think you answered your own question two posts later:



If all the analog tape decks and large format consoles and "vintage" (sic) outboard gear suddenly -- or eventually -- disappeared, then one could certainly make the case for retiring +4dBu as a standard. Until that day, one needs to accommodate as many possible end users as practicable in order to keep one's product viable.


Doesn't work for me. Not every home studio, project studio, and professional studio is running tape. Maybe 5% are. Most of these won't have large-format consoles. This is like saying every car must be engineered to drive 150mph because you might suddenly, magically be transported to the German Autobahn or, if you are really lucky, the Nürburgring.

Most of the 95% who aren't running tape think +4dBu is better than -10dBV because "it's 14dB more". Really more like 12dB, but never mind that. Why not +40dBu as an operating level? Just rectify the wall AC and feed it directly to high-voltage MOSFETs. Anything less and you're just a big sissy :p
 
Doesn't work for me. Not every home studio, project studio, and professional studio is running tape. Maybe 5% are. Most of these won't have large-format consoles.
No, but many if not most of the big-box studios will. Are you suggesting a third, hybrid "prosumer" format for home recorders or smaller studios that runs, say, -10dBV balanced, and is therefore not quite directly compatible (though it could be fudged) with either the +4 balanced or the -10 unbalanced formats that exist today? Are you that much of a newb forum masochist that you want to deal with that, too? ;) :D
Most of the 95% who aren't running tape think +4dBu is better than -10dBV because "it's 14dB more"
I'm not sure I completely agree with that, at least not based upon my personal experience. It could be that yours is different, of course. But for me, most of the beginners who are aware enough to know that the difference exists, but not enough to understand what it truly means on the technical level, simply know them as the "pro" level and the "consumer" level, and believe that "pro" is better not necessarily because of the extra dBs, but simply because that typically is what the better gear uses, and because such gear tends to use balanced lines and better connectors.

G.
 
Doesn't work for me.

It's not for you exclusively. It's for all of us. Some of us are in that 5% (or, frankly, much more than 5%, ime) who need to interface existing +4dBu equipment with whatever else they procure subsequently. "Backwards compatibility" isn't just Luddism; it's a stab at compatibility, at coexisting with an established method.

Why not +40dBu as an operating level?

Because nothing else on the planet (to the best of my knowledge) runs at that level. Think of it this way: if you were going to bring a new beverage to market that offered 900% more health benefits than any OJ, Jamba Juice, vegi-blend or what have you on the planet...but it tasted exactly like human feces...how successful do you think your market penetration would be?

You can't pretend there's not an existing customer base when you talk about changing "standards".
 
It's not for you exclusively. It's for all of us. Some of us are in that 5% (or, frankly, much more than 5%, ime)

There is no way that pro tape gear comprises more than 5% of the studio market. Giant pro studios, yes. But home studios? No way. The Tascam Porta stuff doesn't run on +4dBu. If you disqualify A Reel Person, I'll bet less than 5% of the stuff in the Clinic was tracked to tape.


Backwards compatibility at what cost? What % of new preamplifiers are sold to customers running pro tape decks? Call me crazy, but most of what I see the Analog board boys interested in is old mixers, too: the vintage Tascam stuff, etc. I don't see many of them leaping to buy the latest PreSonus rack pre to drive their decks.

So if 1% of new preamp purchases go to people running pro tape decks, then that's still a waste of two-thirds of the power the other 99% draw. There might be an entire power plant dedicated to supplying the worldwide excess voltage for gear that just feeds converters.

Might as well keep on selling leaded gasoline; it's exactly the same point. Except for the cars that have to have unleaded: make them buy an equivalent amount of lead fumes and release them directly into the air.
 
The point is that's not how gear should be engineered. It's not how tape was engineered. +4dBu existed as a useful nominal level presumably because it was a reasonable level to feed a tape (back me up here, I'm not a tape guy).

+4dBu continues to exist as a standard in digital studios because . . .

I really don't think the use of tape recorders dictated any particular nominal level, nor even the use of balanced lines. As old as I am, I probably need to contact some folks with longer beards to be sure... but my impression is that the whole +4 dBu (+4 dBm back in the day, referenced to 1 mW and 600 ohm circuits) level thing hearkens back to early telecommunications circuitry standards, when circuits were balanced and transformer-coupled and moved signal by power transfer with load and output impedance matching, versus the voltage transfer method we have now with low output impedances and higher input impedances. Indeed, I think a lot of that gear operated at +8 dBm. I suspect that standard engineering practice and inertia led to continuing the higher levels, while the need to send signals all over studios and concert halls and whatnot led to the continued use of balanced lines.

There is really nothing intrinsic about even pro-level tape recorders that mandates high-level audio signal I/O. There are tape recorders capable of excellent results that operate at lower levels, and of course many pro units have both input and output level controls that allow recording at normal operating level with low-level inputs and the option of a low-level output. I had an Ampex AG-440 for a while that had a little plug-in module that allowed me to record directly into the unit with a mike signal. My Teac 4-track offers a similar type of input.

Perhaps the best tape machines of all were made by John Stephens, and his machines used unbalanced I/O (to save a lot of weight and expense and make the sound more accurate). That's a big part of why he sold so many mobile recorders for a while... his machines were small, compact, modular, lightweight, sounded great and could run all day on two car batteries. They were also capable of putting higher signal levels on tape than other machines of their day, all without using balanced I/O.

Bottom line is that tape was not really much of a factor in requiring higher levels. I strongly suspect it was just the inertia of standard engineering practice in telecommunications being translated into the audio realm. Tape recorders were merely designed with I/O to let them thrive in that environment, but they could have been designed to work with lower levels and performed just as well.

Cheers,

Otto
 
News flash: it's not just tape decks that run at +4

Dude, you're a 34-post newb; don't news flash me. Did you read this whole thread? How about this: explain to me what NEEDS to run at +4dBu, and why. Then we'll talk more. All I can come up with is a tube compressor where a large amount of gain reduction is desired. There are probably a few others, but they don't represent the majority of pro audio requirements (vs. desires) in 2009.

ofajen said:
I really don't think the use of tape recorders dictated any particular nominal level, nor even the use of balanced lines

Of course that's true; whatever level the deck actually needs, it is free to do the last bit of amplification itself. Converters are different: they are required to attenuate. Either way, the operating level in the entire chain before the final stage is excessive.

Of course live sound is going to require some voltage out of a sufficiently low-impedance power amp to drive whatever speakers are required. Again, that does not place a mandatory level requirement on the preceding gear. There is no law of science that says a power amp may only be a current amplifier (and I suspect most aren't, but I don't spend a lot of time staring at power amp schemos).

My understanding of +4dBu is that the dBu part is based upon 1mW of power into a 600 ohm load (the telecom thing), and the +4 is the extra bit of voltage required for the necessary power drop into an analog VU meter.
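For reference, the 0.775 V figure falls straight out of that 1mW/600-ohm definition; a two-line sketch:

```python
import math

P = 0.001   # watts: the 1 mW dBm reference
R = 600.0   # ohms: the old telecom load
print(math.sqrt(P * R))  # -> 0.7746 V, the 0 dBu reference voltage
```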

At a minimum, the standard ought to be 0dBV, just so educating newbs on operating levels doesn't involve an 80-year history lesson.
 
Wait, I just realized: you're not Bob Ross, you're the anti-Bob Ross! Bob Ross lived one town over from my hometown. Everybody watched his show! I still do, every Saturday morning! Bob Ross broke down the edifice of fine art and told people just to have a good time painting, make some happy accidents and experience the joy of painting. OK, he sold his own line of paints, and you know you wanted phthalo blue and Van Dyke brown, or you just couldn't be like Bob Ross! But last time I was in an art store, his paints were pretty much the same price as the others, and his had his picture on them! Why would you buy any other paints? OK, I painted with acrylics, because I liked to impasto until I had haut-relief, but other than me, you're buying the Bob Ross paints!

You are defending the edifice of 50 years of accumulated pro audio gunk. I am telling people it doesn't have to be so; we can toss out all of that historical baggage and paint happy trees with our low voltage audio fanbrushes! I am Bob Ross! :cool:
 
Well, if you can be Bob Ross, then I get to be Bob Bell.

Heeey, that's Meeee! Who-hwa-ha-ha-ha-ha-ha!

:D

G.
 
My understanding of +4dBu is that the dBu part is based upon 1mW of power into a 600 ohm load (the telecom thing), and the +4 is the extra bit of voltage required for the necessary power drop into an analog VU meter.

At a minimum, the standard ought to be 0dBV, just so educating newbs on operating levels doesn't involve an 80-year history lesson.

Probably we should go ahead and define the terms, so we are clear about the different descriptions of level.

0) dB of course is a dimensionless ratio of power (intensity) levels: dB (power level) = 10 log (P2/P1). dB is also approved for use with field quantities (voltage, etc.) where dB (voltage level) = 20 log (V2/V1)

1) dBm - the generally old school unit measured as dB referenced to 1 mW into the (then) normal 600 ohm load. Under those conditions, this produced a reference voltage of 0.775 V

2) dBV - dB referenced to 1 V.

3) dBv - dB referenced to an unloaded voltage, with that reference voltage picked based on 1 mW into 600 ohms = 0.775 V. Due to confusion between dBV and dBv, dBv morphed into...

4) dBu - same as dBv, it is dB referenced to an unloaded voltage of 0.775V, which is the holdover reference to the old school dBm world.

Note that 0 dBV = +2.21 dBu (equivalently, +4 dBu = +1.79 dBV). I'd argue for a new voltage standard no higher than 0 dBu, and indeed no one should wait around for gear to change. Unless everything you use plays nicely outputting and inputting peaks at +28 dBu, I'd suggest you operate at no more than 0 dBu if at all possible without producing (additional) chaos and mayhem in the studio. :)
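These definitions are easy to sanity-check in a few lines of Python (just a sketch; the constants are the reference voltages defined above):

```python
import math

DBU_REF = 0.775  # volts: sqrt(0.001 W * 600 ohms), the old dBm reference
DBV_REF = 1.0    # volts

def volts_to_dbu(v: float) -> float:
    return 20 * math.log10(v / DBU_REF)

def dbu_to_volts(level: float) -> float:
    return DBU_REF * 10 ** (level / 20)

print(volts_to_dbu(DBV_REF))                         # 0 dBV in dBu -> +2.21
print(dbu_to_volts(4.0))                             # +4 dBu in volts -> 1.228
print(20 * math.log10(dbu_to_volts(4.0) / DBV_REF))  # +4 dBu in dBV -> +1.79
```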

Indeed, switching to digital gear has created a possible need to reduce studio operating levels, because the possibility now exists for higher crest factor recordings. When tracking and mixing to tape were the norm, a crest factor of 14 dB was about the most you'd expect to see due to the MOL (maximum output level) of most tapes, which left a nice cushion from +4 dBu + 14 dB = +18 dBu up to the +28 dBu maximum balanced level of typical older studio gear.

Especially when you have gear with 15V power rails or less, with maximum levels of +22 dBu or maybe lower, providing say 6 to 8 dB of cushion while allowing the headroom needed for a crest factor of 20 dB means that operating levels could justifiably be lowered to -4 dBu, and I know I have gotten good results operating down there. I find it maximizes the sound quality of gear that lacks the capacity for really high levels, like my little Mackie mixer.
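That arithmetic in one place, as a sketch (the figures are the ones assumed above):

```python
max_level_dbu = 22     # clipping point of +/-15 V rail gear, more or less
cushion_db = 6         # safety margin kept below clipping
crest_factor_db = 20   # peaks above the average (RMS) level

operating_level_dbu = max_level_dbu - cushion_db - crest_factor_db
print(operating_level_dbu)  # -> -4 (dBu), the operating level argued for above
```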

Cheers,

Otto
 
Especially when you have gear with 15V power rails or less, with maximum levels of +22 dBu or maybe lower, providing say 6 to 8 dB of cushion while allowing the headroom needed for a crest factor of 20 dB means that operating levels could justifiably be lowered to -4 dBu, and I know I have gotten good results operating down there. I find it maximizes the sound quality of gear that lacks the capacity for really high levels, like my little Mackie mixer.

The case gets even stronger, and may call for even lower levels, when you note that lots of 9V-powered gear has unbalanced ins, outs, sends and returns that max out at +16 dBu or so. In that case, the -10 dBV "consumer" level makes a lot of sense, or maybe even -10 dBu (2.21 dB lower).

Otto
 
You're all going to be forever embarrassed that you posted in a thread called The Dirty Secret of Digital Adders.

I know I am.
 
Good posts. Let's talk a minute about voltage and levels. Let's presume we have gear that can go rail-to-rail. Dangerous assumption, but it will make the discussion simpler. Rail-to-rail means an amplifier is capable of passing a signal with peak voltage right up to its voltage supply, or very very close.

So if we have gear with a +/-15V power supply (which is typical), that means our peaks can be +15V or -15V maximum. That is a peak-to-peak voltage of 30V. But operating levels are calibrated to RMS voltage, not peak. RMS for a sine wave is (p-p / 2) / sqrt(2). So our 30V p-p becomes 10.6V RMS. Using ofajen's handy formula, 20 * log(10.6/0.775) = 22.7dBu. Give back a bit because the amp probably isn't rail-to-rail, and that's +22dBu.
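That chain of steps as a sketch you can replay with other rail voltages (ideal rail-to-rail numbers; real amps land a bit lower):

```python
import math

def max_level_dbu(rail_volts: float) -> float:
    """Ideal max sine level for a rail-to-rail stage on +/-rail_volts supplies."""
    v_pp = 2 * rail_volts              # full rail-to-rail swing
    v_rms = (v_pp / 2) / math.sqrt(2)  # sine RMS from peak
    return 20 * math.log10(v_rms / 0.775)

print(max_level_dbu(15))  # -> 22.7 dBu, before giving a bit back
```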

But wait! We have one more filthy trick up our sleeve! If we feed that output into an inverting amplifier, and use both amps to drive an active-balanced output, we get a free extra +6dB! Clever, eh? So now our gear has +28dBu headroom.

Or does it? The problem is that poor differential amplifier in the next bit of kit has to be able to cope with that input. So it would need a +/-30V power supply. Those aren't real common. So they would have to provide the ability to pad their input, which is what the differential amp would be designed to do. So realistically our operating level isn't going to exceed +22dBu because that's the "internal" level at which the gear is probably going to max out.

We can increase headroom further with another trick: a step-up output transformer. Basically, have a very low output impedance amplifier into a transformer that trades current for voltage. That can get the output up to stupid levels (while wasting power), so long as transformer saturation doesn't become too much of a problem. But the only way for the next bit of kit to handle that signal is to have a step-down transformer at its input, or a ridiculously high voltage supply (which wastes still more power).
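For the curious, the transformer trade works out like this: a 1:n step-up adds 20*log10(n) dB of swing while reflecting the load back to the driving amp as R/n^2, which is where the wasted power goes. A tiny sketch with a hypothetical 1:2 ratio:

```python
import math

n = 2.0            # hypothetical 1:2 step-up turns ratio
load_ohms = 600.0  # load hung on the secondary
print(20 * math.log10(n))  # -> +6 dB more output swing
print(load_ohms / n**2)    # -> 150 ohms seen by the driving amp
```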

OK, that is how we get headroom and lots of it. Why do we need it? Well, as I said very early in the thread, it's important to maintain proper gain staging by ensuring, to the extent possible, that the accumulated noise of a prior stage exceeds the input noise of the next stage. Back in the olden days gear was noisier, so higher operating levels were more important. These days, opamps with -122dBV (-120dBu) unweighted noise are jellybeans, so it's hard to imagine why +28dBu is necessary.

I saw an argument once that +4dBu levels were "manlier" in that they minimized induced noise with respect to signal, and while that is true per se, if that's a practical problem in a particular studio then dynamic microphones would be completely unusable. Mic cables are very commonly longer than line level cables. Is this a real problem for a proper balanced line? I don't think so.

What happens to the world if we standardize on +/-5V power rails rather than +/-15V? First off, our power transformers shrink greatly in size and cost. So do the filter capacitors in our power supply. Our power consumption is 1/3 of what it was before. Maximum level is +10dBu/+8dBV. Remember, that's still 4dB more than any IC converter is looking for, and plenty of headroom for -10dBV level. A dynamic range of 130dB is still possible with common opamps, which exceeds the dynamic range of any recording device and in all likelihood the actual dynamic range of any source.
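Same back-of-the-envelope check for the +/-5V case (the 3 dB I give back for a not-quite-rail-to-rail stage is an assumed figure; the -120 dBu noise floor is the jellybean-opamp number from above):

```python
import math

def max_level_dbu(rail_volts: float) -> float:
    v_rms = rail_volts / math.sqrt(2)  # sine RMS at full rail swing
    return 20 * math.log10(v_rms / 0.775)

ideal = max_level_dbu(5)            # ~ +13.2 dBu if truly rail-to-rail
practical = ideal - 3               # give back ~3 dB -> ~ +10 dBu
print(practical, practical - 2.21)  # ~ +10 dBu / ~ +8 dBV, as above
print(practical - (-120))           # ~ 130 dB range over -120 dBu opamp noise
```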

So why again +4dBu in the 21st century?
 