Bus compression

  • Thread starter: Noah Nelson
What I'm actually getting at is reducing the volume isn't reducing the voltage swing from positive to negative - It's reducing the output. Compression reduces the voltage swing.

As much as I generally don't agree with using your eyes, take a track (SOURCE) -- Anything -- a vocal track, bass, an entire mix (whatever) and control the volume with a volume curve and render it out to a new track. Then compress the (SOURCE) to give more or less the same volume output and render that to a new track.

They won't look the same -- They won't sound the same.
 
But why are you doing this? By itself this move does nothing but turn the signal down and bring it closer to the noise floor.
Skip the 'noise floor'. It moves up and down with the content in either method.

snip..
It is exactly manual compression. I would maybe call it "manual RMS compression."

Lately I've just been using ReaJS with a rather long RMS time and a lookahead time somewhere around half the RMS. I kinda feel guilty about it. I slap it on early as a way to kick the can of automation down the road a bit. Then I work on the more fun parts of the mix (read: every other damn thing!) and by the time I get back around to automating the vocal level I realize that the compressor is doing exactly what I need, and sounds great, and I've pretty much built the rest of the mix around it, so I leave it. I will automate that compressed signal sometimes to get it above a louder section of the arrangement or whatever, but the comp does a fine job of handling most of the nitpicky busywork of word-to-word and syllable-to-syllable leveling.
Sounds kind of similar to the various vocal or gain rider plugs.
The thing that needs to get nailed in these threads about 'comp vs fader moves', 'changing the dynamic range' and such is recognizing the time windows we might be thinking of, and the differences in the scale of how and where 'wave shaping' is taking place.
Typically (or maybe exactly, per the definition of wave shaping?), the action of relatively fast compression is automatic, static in its response, and alters the waveform in the way that's normally thought of.
You can also go in and do specifically placed gain automation at the word or syllable level. Wave shaping again. And a short side trip, why would you do this (instead of a comp)? Because it does a specific shaping the compressor can’t -- It sounds different!
By the same token, a specifically tailored slow compression, or our 'gain riders', can act like fader moves. It can be made to not touch the waveform (would 'in the micro' be the way to differentiate this?).
But then again, automation perhaps more often does wide moves like that as well, while bringing with it the targeted, specific sonic results.
Is doing these slower, longer moves technically not changing 'the dynamic range'? (IDK exactly. The dynamic range of the track?)
"What I'm actually getting at is reducing the volume isn't reducing the voltage swing from positive to negative - It's reducing the output. Compression reduces the voltage swing. "
Suffice to say, this makes perfect sense and differentiates it well enough for me.

At any rate I'd offer it's better to get past the narrow time/window definitions. Once there, it's easier to see the remaining constant is in choosing the sound of compression vs the sound of fader/gain moves.
 
What I'm actually getting at is reducing the volume isn't reducing the voltage swing from positive to negative - It's reducing the output. Compression reduces the voltage swing.

OK, my mastering buddy has talked a lot about gain voltage, and I've heard some people say that reducing or increasing the main output of a project reduces or increases the bit depth of the project. What is all that about? Do they actually mean gain voltage or what? Does that matter?
 
Does that matter?
So little that it's not even worth mentioning. 24-bit audio is capable of a theoretical 144dB dynamic range. 16-bit is 96dB. I'd submit that the vast (VAST) majority are dealing with signals nowhere near that sort of range.
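Those figures come from the usual rule of thumb that each bit buys about 6.02 dB of theoretical dynamic range, i.e. 20·log10(2^bits). A quick back-of-the-envelope check (function name is mine, just for illustration):

```python
import math

def theoretical_dynamic_range_db(bits: int) -> float:
    """Theoretical dynamic range of an ideal fixed-point quantizer: 20*log10(2**bits)."""
    return 20 * math.log10(2 ** bits)

print(round(theoretical_dynamic_range_db(16), 1))  # 96.3
print(round(theoretical_dynamic_range_db(24), 1))  # 144.5
```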

That was the beautiful thing about digital at the beginning -- And that was the reason that 24-bit was celebrated even more than digital audio itself -- having a dynamic range that exceeded the end user's ability to take advantage of it -- a dynamic range that exceeded basically all the available gear and the human capability to hear it. A range so incredibly wide that the full potential of every signal could be captured with near perfection -- where the dynamic range of every possible sound in a recording would be faithful to the source (assuming the gear was capable of capturing it with that same fidelity).

And look what we've done with it. :(

If it wasn't for the occasional fade-out and reverb trail, we could be using 4-bits half the time.
 
Skip the 'noise floor'. It moves up and down with the content in either method.
I agree that it's not what we're talking about right now, but this statement is false. The noise in the vocal track will move up and down, the actual floor of the mix system stays where it's at, and attenuating by any method brings you closer to it. In digital, the floor is so low that we can usually ignore it unless we're doing something extreme, but in analog you pretty much always have to keep it in mind.

And while we're on that for a brief moment, by strict definition "dynamic range" is the distance between the noise floor and the distortion ceiling. Turning down a track (by whatever means) usually reduces the dynamic range because it brings you closer to the noise floor and later (final mix, or at mastering) you will turn the whole thing (noise and all) back up to get it closer to the distortion ceiling. What we're actually talking about here, though, is better described as crest factor - the difference between the average RMS of the track and its highest peaks. But this is a "common usage" of the term dynamic range, so I'm gonna just leave it.
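Crest factor is easy to measure yourself: it's just the peak-to-RMS ratio expressed in dB. A minimal sketch, assuming the signal is a plain list of samples (a full-scale sine works out to sqrt(2), about 3 dB):

```python
import math

def crest_factor_db(samples):
    """Crest factor: highest peak relative to RMS level, in dB."""
    peak = max(abs(s) for s in samples)
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(peak / rms)

# One full cycle of a sine: crest factor should be ~3.01 dB.
sine = [math.sin(2 * math.pi * n / 1000) for n in range(1000)]
print(round(crest_factor_db(sine), 2))  # 3.01
```

A heavily compressed track would read much lower; a raw drum track much higher.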

Massive Master said:
What I'm actually getting at is reducing the volume isn't reducing the voltage swing from positive to negative - It's reducing the output. Compression reduces the voltage swing.
I know what you're getting at, but this is just wrong. If you take a signal that swings from -1V to 1V and reduce the volume by 6 dB, it will now swing from -0.5V to 0.5V. It has changed the voltage swing.
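And the arithmetic does check out: a dB change maps to a linear voltage multiplier via 10^(dB/20), and -6 dB comes out at about 0.501, i.e. very nearly half. A quick sketch (function name is mine, just for illustration):

```python
def db_to_gain(db):
    """Convert a dB change to a linear voltage multiplier: 10**(dB/20)."""
    return 10 ** (db / 20)

gain = db_to_gain(-6.0)              # ~0.501, near enough to half
swing = tuple(round(v * gain, 2) for v in (-1.0, 1.0))
print(swing)  # (-0.5, 0.5)
```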

What you're trying to say is that compression can change the actual shape of the wave, and reduce the crest factor of the signal. This is true to an extent. But keep in mind that most of the time the compressor doesn't really change gain fast enough to act as a true wave shaper. If it did, you'd hear distortion. That is, it's usually not just turning down the very top peak of the wave that pokes up above the threshold, it turns down the whole wave for a whole wiggle or two depending on attack and release times. Try turning attack and release all the way down to 0 on any ITB compressor and you'll definitely hear the difference!
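The "whole wiggle or two" point can be seen in a toy feed-forward compressor (everything here - names, parameter values - is mine for illustration, not any particular plugin): with attack and release at zero, the gain tracks the rectified signal sample by sample and literally reshapes the wave; with non-zero times, the smoothed detector makes the gain ride over whole cycles instead.

```python
import math

def compress(samples, threshold=0.5, ratio=4.0, attack=0.0, release=0.0, sr=48000):
    """Toy feed-forward peak compressor with one-pole attack/release smoothing.
    attack/release are in seconds; 0 means the gain jumps instantly."""
    a = math.exp(-1.0 / (attack * sr)) if attack > 0 else 0.0
    r = math.exp(-1.0 / (release * sr)) if release > 0 else 0.0
    env = 0.0
    out = []
    for x in samples:
        level = abs(x)
        coef = a if level > env else r   # rising -> attack, falling -> release
        env = coef * env + (1.0 - coef) * level
        if env > threshold:
            gain = (threshold + (env - threshold) / ratio) / env
        else:
            gain = 1.0
        out.append(x * gain)
    return out

# One cycle of a 100 Hz sine at 48 kHz:
sine = [math.sin(2 * math.pi * 100 * n / 48000) for n in range(480)]
instant = compress(sine)                              # 0/0: per-sample waveshaping
smooth = compress(sine, attack=0.010, release=0.050)  # gain rides the whole cycle
```

With zero times, the peak of the sine gets flattened to threshold + (1 - threshold)/ratio; with the smoothed detector, the envelope lags the waveform and the first cycle passes nearly untouched.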

I also think that you're a bit stuck on the idea of relatively fast, peak detecting compression, and rightfully so I guess seeing that it's what most people think of when they're talking about compression. With long RMS detection (even better with look ahead), and careful setting of threshold and (especially) very low ratios, you can in fact get pretty damn transparent and natural sounding leveling. No, it won't sound or look exactly the same as if a human automated it, but it will actually preserve the "short term" crest factor - the attack peak will have the same relationship to the sustain average - while bringing the "long term" crest factor into a more reasonable range - the track-long average will be closer to the track-long maximum peak.
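A minimal sketch of that long-RMS, look-ahead, low-ratio idea (names and values are mine, not ReaComp's): centring the RMS window on the current sample amounts to half a window of look-ahead, and the low ratio nudges the long-term level without touching the short-term shape of the wave.

```python
import math

def rms_leveler(samples, window=256, threshold=0.2, ratio=1.5):
    """Sliding-window RMS leveler. The window is centred on the current
    sample (half a window of look-ahead); low ratio = gentle leveling."""
    half = window // 2
    n = len(samples)
    out = []
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half)
        seg = samples[lo:hi]
        rms = math.sqrt(sum(s * s for s in seg) / len(seg))
        if rms > threshold:
            gain = (threshold + (rms - threshold) / ratio) / rms
        else:
            gain = 1.0
        out.append(samples[i] * gain)
    return out

# A quiet passage followed by a loud one, same sine underneath:
sig = ([0.2 * math.sin(2 * math.pi * n / 64) for n in range(1024)]
       + [0.8 * math.sin(2 * math.pi * n / 64) for n in range(1024)])
levelled = rms_leveler(sig)
```

The quiet passage comes through untouched while the loud one is pulled down, but within each passage the waveform keeps its shape - exactly the "long term vs short term crest factor" distinction above.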

Now we can argue which way is best. Is compression quicker and easier or is it lazier? Does manual automation show more attention to detail, or does it waste time you could spend on other aspects of the mix? Do you get paid by the hour? ;) All that really matters is how it sounds, and as with everything else that depends on the skills and taste of the person applying the technique. It is at least as easy to fuck up a fader ride as it is to fuck up a compressor setting. Nor are these methods mutually exclusive. The way I use ReaComp only really works with extremely low ratio settings. If that small ratio still leaves too much swing, I may need to get in and automate a few spots pre-comp.
 
I know what you're getting at, but this is just wrong. If you take a signal that swings from -1V to 1V, and reduce the volume by 6db, it will now swing from -0.5V to 0.5V. It has changed the voltage swing.
Yes, some days I can never have quite enough coffee. I should have added "relative to the original" or something of that sort.
 
Just to give my take on it, compression and "moving the fader" are two different tools in the arsenal and both have their place since the effects are also different.

"Moving the fader" is, if we're honest, mixing. I use that to control the balance among multiple channels to make things sound good. If I'm honest, I'll sometimes also use it to reduce the overall level of multiple tracks so that, when mixed together, I still have adequate headroom at the master fader (though this becomes less necessary/relevant with 32 bit floating point mixing). Moving the fader doesn't change the overall sound of an individual channel; it just changes the balance between that channel and others.

Compression, on the other hand, makes changes (sometimes subtle, sometimes blatant) in the way your source channel sounds. We've been talking like the peaks and quiet bits can be viewed (or listened to) in isolation. They can't. If I've miked, say, an acoustic guitar there will be the loudish fundamentals but, going on at the same time, lots of harmonics that give the instrument its full rich sound. These are at different levels but occur at the same time on the same track. If I apply compression, it brings the fundamentals and harmonics closer together in level terms, thereby changing the character of the sound.

This is not to say that compression is evil or bad. It's not and, in both multitrack mixing and live sound environments, it's often necessary. As soon as you add other sounds (be they other tracks in a recording session or just air conditioning, moving lights and crackling sweet packets in a live setting) you start to lose any sounds below a certain level. Compression that narrows the dynamic range, coupled with some make-up gain, makes sure the subtleties can still be heard. The trick is to apply the compression with sensitivity, being aware of the effect you're having on the overall sound.

A final comment is that, for me at least, fader moves controlled by ear, not by watching meters, are absolutely essential to the mixing process. Yeah, in my time in broadcast sound I've used ducking amps and the like but they're no substitute for the human input manual mixing gives. Similarly, using compression to control dynamic range can also be essential, especially in the noisy world we now inhabit.

Although I enjoy them from time to time, discussions of voltage swings and RMS levels are good for background understanding, but they don't tell you how something actually sounds.
 