Bus compression

  • Thread starter: Noah Nelson

Noah Nelson

New member
Say I have 3 tracks of vocals,
and I bus all three tracks to one aux lane with a compressor on it.
Will the compressor affect the tracks differently than if I put a compressor on each of the three tracks individually?
 
Compression will affect the volume of the mix on the bus as a whole. If one voice gets louder than the others it will be the one triggering the compressor and the other voices will be momentarily pushed down in volume, like a ducking effect.
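That ducking behaviour can be sketched in a few lines. This is a hypothetical illustration, not anything from the thread: the numbers are made up, and a real compressor also has attack and release times that this static model ignores.

```python
import math

# Hypothetical sketch of why a bus compressor "ducks": the gain
# reduction is computed from the summed bus level and then applied
# equally to every track feeding the bus.

def gain_reduction_db(bus_level_db, threshold_db=-10.0, ratio=3.0):
    """dB of attenuation a downward compressor applies above threshold."""
    if bus_level_db <= threshold_db:
        return 0.0
    over = bus_level_db - threshold_db
    return over - over / ratio

def bus_db(levels):
    """Rough power sum of the track levels, in dBFS."""
    return 10 * math.log10(sum(10 ** (l / 10) for l in levels))

steady = [-18.0, -18.0, -18.0]   # three vocals at equal level
loud   = [-10.0, -18.0, -18.0]   # voice 1 momentarily jumps up 8 dB

for levels in (steady, loud):
    gr = gain_reduction_db(bus_db(levels))
    # The same reduction hits all three tracks, so the two quieter
    # voices are pulled down whenever voice 1 spikes.
    print([round(l - gr, 2) for l in levels])
```

With these numbers the steady case stays under the threshold (no gain reduction), while the spike pushes the bus over it and pulls all three voices down together.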
 
Listen to your aux bus. If you find the relationship between the 3 vocal tracks varies in a way you do not like, I would suggest compressing each track.
If the balance is good then compressing the aux bus is fine.
Brad
 
Some of the worst mixes I ever heard used bus compression on panned background vocals. It gave a completely undefined stereo image (even though the singers sang quite precisely), because every time one voice was a little louder, the other one was ducked. When listening to the mix over headphones, I had the impression of a filled but not closed balloon, with the singer on it flying randomly across the stage...
OTOH, if you use three completely different voices (e.g. spoken, narrative stuff) it may be a different thing, as the ear and brain do some 'compression' by themselves (if the loudest voice is the most important one).
 
If you do bus compression they will sound more "glued". If you compress them individually they will sound more separate.
 
Some of the worst mixes I ever heard used bus compression on panned background vocals. It gave a completely undefined stereo image ... because every time one voice was a little louder, the other one was ducked. ...

I do this all the time with no issues. FWIW, I also know of some pros who do this. Maybe the balance was drastically off in the mix(es) you heard? Or there was too much compression? If you get the balance right it shouldn't be an issue but I think the main idea is to just supply a little glue to the voices.

Another approach is to put your favoured bus compressor on the BVs and mix "into" it. It will often give a totally different result than just slapping it on afterwards, because the "sound" and reaction of the compressor will guide your balancing decisions.

Cheers :)
 
Maybe the balance was drastically off in the mix(es) you heard? Or there was too much compression?

Or someone thought it was a substitute for individual channel compression, which it's not.
 
.. Maybe the balance was drastically off in the mix(es) you heard? Or there was too much compression? If you get the balance right it shouldn't be an issue but I think the main idea is to just supply a little glue to the voices. ...
Yes...
Listen to your aux bus. If you find the relationship between the 3 vocal tracks varies in a way you do not like I would suggest compressing each track. ..
.. or perhaps some mixing (leveling) of the tracks before reaching for yet more compressors.
 
You could use compression on each track to tame the peaks and then bus all three to help them sit in the mix. This is a little more work, but I usually do "manual compression" with volume automation on individual tracks before I do any compression. It gives me more control without losing dynamic range and preserves the natural rises and falls of the song.
 
Does anyone else do this to their tracks? I mostly specialize in classical and jazz, so maybe it's not as common with engineers who hyper-compress their pop music, but I still manually compress this way when mixing pop groups.
 
I definitely edit first if needed, then compress tracks if needed, then compress a bus if needed.
 
I definitely edit first if needed, then compress tracks if needed, then compress a bus if needed.

This. Mix first. Plus, you always have the choice of where the leveling (edit moves) happens: it can be pre track compression (clip automation) or post. (That goes for bus compression and leveling too, for that matter.)
 
I usually do "manual compression" with volume automation on individual tracks before I do any compression. It gives me more control without losing dynamic range. ...

Does anyone else do this to their tracks?
That's not manual compression - That's volume control. And that should be 95% of the process of leveling out the audio.

Not trying to get caught up in semantics - And no doubt, you're doing (IMO/E) "right" what so many people do "wrong" (by using compression when they should be adjusting the level). One controls the level of the signal - The other changes the dynamic range of the source.

Many think it's the same, but it's not.
 
MM, you say that a lot, but it doesn't make sense to me.

If you manually automate the fader up during quiet parts and down during loud parts, the dynamic range is now smaller. Because you're changing the level of the signal.

Compression is the same thing but with set attack and release times instead of a human deciding how fast or slow to bring up or down any given syllable.

Help me understand your thought process. (This isn't me being a sarcastic ass, I honestly want elaboration.) =P
 
MM is correct and it's not so much a thought process as a difference in how the two techniques work.

A compressor only affects the part of the signal that is above the threshold, thereby changing the balance between the quiet and loud parts of a track. It does NOT change the level of the sound below the threshold, and that is exactly what makes the range between the quiet bits and the loud bits smaller.

Moving the fader moves all the levels up and down equally...the sound gets quieter but the relationship between the loud and quiet bits stays the same. Say your track peaks at -3 and the quietest bit is at -21, i.e. an 18dB difference. If you reduce the fader by 6dB the loud bits will be at -9 and the quiet at -27, the difference (as in the dynamic range) stays 18dB.

Hope this helps.
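The two behaviours can be put in numbers, using the -3 dBFS peak / -21 dBFS quiet example from the post above (the threshold and ratio below are illustrative values, not anyone's settings):

```python
def fader(level_db, gain_db):
    """A fader shifts every level by the same amount."""
    return level_db + gain_db

def compress(level_db, threshold_db=-12.0, ratio=4.0):
    """Downward compressor: only levels above the threshold are reduced."""
    if level_db <= threshold_db:
        return level_db
    return threshold_db + (level_db - threshold_db) / ratio

peak, quiet = -3.0, -21.0

# Fader down 6 dB: both bits move equally, the 18 dB range is unchanged.
print(fader(peak, -6.0) - fader(quiet, -6.0))    # 18.0

# Compressor at -12 dB threshold, 4:1: only the peak is touched,
# so the range shrinks from 18 dB to 11.25 dB.
print(compress(peak) - compress(quiet))          # 11.25
```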
 
Does anyone else do this to their tracks? I mostly specialize in classical and jazz, so maybe it's not as common with the engineer who hyper compress their pop music, but I still manually compress this way when mixing pop groups.

depends on if it actually needs it or not.

in live mixing I might do this. but the quantity of compression is small on the bus compared to the channels (usually). the amount depends on my mix. and what board I'm using.

take for instance a theater production where there are 16 channels of actors and 24 channels of symphony orchestra. the actors have their own compression, then their bus has the compressor set as more of a peak limiter (threshold high but really high ratio and quick release and attack)

the orchestra sections might not be compressed individually, but bussed, and each section gets compressed as a whole. I might dedicate a bus just for solos, which may be compressed more than the section bus, so I can punch them over the others in the pit. usually solo mixes are about the same level as actors, the rest lower. how far apart depends on what type of scene is going on.

in recording, I just don't see the use of it in smaller bands. maybe in a mastering process, but that would depend on special circumstances and the M.E.'s call.
 
Moving the fader moves all the levels up and down equally...the sound gets quieter but the relationship between the loud and quiet bits stays the same. Say your track peaks at -3 and the quietest bit is at -21, i.e. an 18dB difference. If you reduce the fader by 6dB the loud bits will be at -9 and the quiet at -27, the difference (as in the dynamic range) stays 18dB.
But why are you doing this? By itself this move does nothing but turn the signal down and bring it closer to the noise floor. In the context of this thread, you'd be turning down one word or syllable or whatever. This is presumably because most of the rest of the vocal track is already sitting down around -27, right? So where before the average level was down around -27 with one peak up at -3, it now sits at -27 average and -9 peak. The crest factor of the track as a whole has been reduced. It is exactly manual compression. I would maybe call it "manual RMS compression."

Lately I've just been using ReaJS with a rather long RMS time and a look-ahead time somewhere around half the RMS. I kinda feel guilty about it. I slap it on early as a way to kick the can of automation down the road a bit. Then I work on the more fun parts of the mix (read: every other damn thing!) and by the time I get back around to automating the vocal level I realize that the compressor is doing exactly what I need, and sounds great, and I've pretty much built the rest of the mix around it, so I leave it. I will automate that compressed signal sometimes to get it above a louder section of the arrangement or whatever, but the comp does a fine job of handling most of the nitpicky busywork of word-to-word and syllable-to-syllable leveling.
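The crest-factor point above can be sketched with rough numbers (made-up levels, using peak-minus-average as a crude stand-in for the real peak-to-RMS crest factor):

```python
# One loud word in an otherwise quiet vocal track, levels in dBFS.
word_levels = [-27.0, -27.0, -3.0, -27.0]

def crest_db(levels):
    """Peak minus average level: a rough crest factor in dB."""
    return max(levels) - sum(levels) / len(levels)

# "Manual compression": automate only the loud word down by 6 dB.
ridden = [l - 6.0 if l > -20.0 else l for l in word_levels]

print(crest_db(word_levels))   # 18.0
print(crest_db(ridden))        # 13.5 -- the crest factor shrinks,
                               # just as it would under a compressor
```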
 
That's not manual compression - That's volume control. And that should be 95% of the process of leveling out the audio.

Not trying to get caught up in semantics - And no doubt, you're doing (IMO/E) "right" what so many people do "wrong" (by using compression when they should be adjusting the level). One controls the level of the signal - The other changes the dynamic range of the source.

Many think it's the same, but it's not.

This is exactly the response I expect from a mastering engineer!

This might help explain the differences of why I do it this way. Compression only attenuates the peaks, but I may not be focusing on just the loudest peaks. There might be a very soft section, well below the apex, that is "peaking." I'm just reducing a note, sound or word that is popping out a little too much for what that section needs. That will not reduce the dynamic range at all.

In fact, I also do the exact opposite sometimes. I'll take a very soft section that isn't soft enough and make it even quieter. That increases the dynamic range even more!

Most of us will hear this in pop songs, where they'll do it between verses and choruses or between sections (to create what I think is a false sense of contrast or emotional excitement). There's a Filter or Jane's Addiction song that comes to mind where I think it's unreasonably present.

Does this make sense as to why "level/volume control" vs. compression is different? If they both only touched the peaks I'd say they're the same. I just call it "manual compression," but it's really "level control."
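The two fader moves described above can be sketched numerically (made-up word levels, with dynamic range taken simply as loudest minus quietest):

```python
levels = [-30.0, -18.0, -24.0, -3.0]   # word levels in dBFS

def dyn_range(ls):
    """Loudest minus quietest level, in dB."""
    return max(ls) - min(ls)

# Ride a bump in a soft section down (-18 -> -22): the overall
# -3..-30 range is untouched, so no dynamic range is lost.
softened = [-22.0 if l == -18.0 else l for l in levels]

# Make the quietest moment even quieter (-30 -> -36): the
# dynamic range actually grows.
widened = [-36.0 if l == -30.0 else l for l in levels]

print(dyn_range(levels), dyn_range(softened), dyn_range(widened))
# 27.0 27.0 33.0
```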
 