Automating the threshold...

timvracer

Member
Ok, this goes into the "new guy trying to use common sense" department.

As I get more and more samples back from mastering houses that offer a free preview, I swear, they all seem to simply be EQ'd and then limited to death. While some offer a tradeoff between "loudness and quality", I wonder if all that is really going on here is some EQ plus deciding how far to slide the Threshold control on the limiter. (Note: one mastering house that gave me a sample did not do this, offered various samples at different settings, and overall they have been great.)

Again... I'm new to this, but I think I'm a pretty smart guy...

What I did in my own master is simply automate the Threshold (I also automated the priority of which freq bands the threshold is applied to in the L3-Multi). This allows me to maximize the softer parts of the song, and then back off the gain for the louder parts so I keep the limiting to a minimum. The song *IS* a loud, in-your-face song, so I am not shy about getting the thing loud, but in the samples I got back from some of the mastering houses, they slam the hell out of it and the big crescendo ending is pumping and heaving all over the place, and the kick drum is getting mashed.
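
For what it's worth, the threshold-automation idea can be sketched in a few lines of numpy. This is only an illustration: a hard clip stands in for a real lookahead limiter, and the section boundaries and threshold values here are made up, not anything from the actual song.

```python
import numpy as np

def limit(signal, threshold):
    # Crude stand-in for a limiter: hard-clip anything past the threshold.
    return np.clip(signal, -threshold, threshold)

def limit_with_automation(signal, sections):
    # sections: list of (start_sample, end_sample, threshold) tuples,
    # i.e. the automated threshold per song section.
    out = signal.copy()
    for start, end, thr in sections:
        out[start:end] = limit(out[start:end], thr)
    return out

verse = 0.30 * np.sin(np.linspace(0, 50, 1000))    # quiet intro
ending = 0.90 * np.sin(np.linspace(0, 50, 1000))   # loud crescendo
song = np.concatenate([verse, ending])

# Push harder (lower threshold) on the quiet verse, back off
# (higher threshold) on the loud ending so it isn't crushed.
mastered = limit_with_automation(song, [(0, 1000, 0.25), (1000, 2000, 0.85)])
```

The point is just that the limiting depth becomes a per-section decision instead of one global setting.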

Or should I be volume equalizing my mix? However, it seems that really is for the mastering phase, so if my mix will have variance (i.e. first verses are just one instrument and a vocal, last verses are all fire-and-brimstone, in-your-face rage), then it seems I need to boost the first verses to fill the radio. If I boost the first verses to get to -0.3 dB, then of course, my last verses will be crushed unless I automate the threshold.

So... this is what I am doing. It sounds great honestly, but what am I missing here, or doing wrong?
 
"Or should I be volume equalizing my mix? However, it seems that really is for the mastering phase, so if my mix will have variance (i.e. first verses are just one instrument and a vocal, last verses an all in fire and brimstone in your face rage), then it seems I need to boost the first verses to fill the radio. If I boost the first verses to get to -0.3db, then of course, my last verses will be crushed unless I automate the threshold."

I'll sometimes automate thresholds for various reasons.
Your situation can be solved in the mix (at the tracks or on the main mix bus), or on the mixed song in a mastering phase. The main difference is more options and different potential results doing things back at the tracks and mix.
It would seem that if the song builds towards the end and you don't want it to be 'crushed', then you would have to reduce/automate the level (down) rather than raise the threshold, assuming the level is up near the top.
Just as often you could end up using a combo of all three.

Not sure about your comment 'fill the radio'. Old rock 'n' roll that has some dynamics does just fine, for example. 'In your face always' isn't necessarily the only flavor in town.
 
You really should be doing this in the mix. Actually preferably in the arrangement, performance, and tracking but...

You should be controlling the dynamics a bit better at the mix stage. If the chorus is too loud compared to the verse, or the crescendo is actually too much louder than the quiet intro, you've got to figure out why and fix it. A good mastering engineer can help get you there, and if they actually gave a damn they would find a way to manage without completely destroying it, but like most things, it really can be done better and easier earlier in the process.

You expected this answer, I'm sure. It's become a cliche because it's true. Anytime you've got nothing but a stereo mix, the question becomes "How much do I want to fuck that up in order to fix this?"

Then again, I'm not sure you really have to use all the bits all the time. Maybe it's really just natural and fine and that end part is supposed to be absurdly loud compared to the rest of the thing. Most of the time we'd rather not have the listener reaching for the volume knob for every different section of a piece, but sometimes we do, and anyway we do get a little bit of wiggle room and get to set the perspective to a certain extent. "This is what we're calling quiet, and THIS is loud." I do really mean things to my listeners sometimes, but there aren't many of them. :)

All that said, I use long RMS compression with lookahead for this. It evens out the average level and drags the peaks around with it. A little bit goes a long way, but it's kind of scary how well it works. In my case, most mastering is exactly that with a pre- and post- EQ and a curvy clipper. Of course, I will have handled most of my dynamics way before this point, so it doesn't have to work all that hard.
 
Ash,

Thanks for the feedback. The reason the verses are quiet is because it is only a guitar and vocal, but later we get bass, drums, violin, lead guitars, etc. However, to your point, the mix itself represents how it would be heard in performance. It sounds just fine to me without the artificial volume pumping of the early verses... but... the song does have to compete out there. That is the key really... and it is reality: when the song first comes on, if it is quiet, it just won't compete in the musicsphere... so we boost the verses so that when the song comes on the radio (in the artist's dreams, I might add) it doesn't seem really quiet in the early verses.

It creates all kinds of stuff I hate... like if I jump from "verse 2" to the "final chorus", well, the vocals are waaaaay softer of course in the final chorus, really highlighting the artificial nature of it all... but if you listen all the way through, you don't really notice.

I know some folks hate this reality, as it causes us to do some artificial things, but for driving rock, it seems that is required to be competitive (and frankly, how all the commercial stuff I hear seems to work).

I will try and apply your advice regarding working more on the mix to make mastering have to do less work. No doubt, this kind of focus will really help me just get the mix better anyhow, as it seems too many things are still competing for frequency space.

Thanks!
 
Three things:
1. An actual radio station will compress and limit the hell out of it, which will keep the verse and chorus volume in check. So there is no need to make the master sound like it's on the radio, because being on the radio causes that effect.

2. The difference in volume between the verse and chorus doesn't have to be that much to create the appropriate dynamic effect. It sounds like you could use some automation to set each section where it should be, and smooth the transitions.
For example, if the entire song was the acoustic guitar and vocal, would it be as quiet as you have it now?

3. If the vocal is way out front in the verses and buried in the choruses, automate it so that doesn't happen.

You are in control of the mix, make it what it needs to be at every point of the song. Once you get that right, there won't be much for mastering to do/fix.
 
3. If the vocal is way out front in the verses and buried in the choruses, automate it so that doesn't happen.
Yes, but this is happening because we're pushing into the limit to begin with, and then trying to get louder. You can't really turn up the vocals without smashing the whole thing even worse. I would try to keep the vocals (and the drums) around the same level through the whole thing and then don't let the instruments get quite so loud. You might need to bring that one guitar down a bit when the rest of the stuff comes in. Sometimes you can automate an EQ to take out some of the low end when low end instruments come in. It can work in other areas too.

But again this really is an arrangement/performance issue first, and a mix problem next. You should not kick this can any further down the road.

I actually use the long RMS compression trick at multiple stages along the way. Some instrument tracks like kick, bass guitar, always vocals. On bus tracks so that when guitar 2 comes in, guitar 1 gets jostled down a bit, and then eventually on the mix bus.
 
Ash,
Can you explain in a bit more detail how I'd do the long RMS compression approach? It would be great to learn a new technique, whether it works in this particular case or not (certainly would love to try it).

What do I need for this in a compression plugin (recommendation?) and can you just explain a little how you set it up?
 
What do I need for this in a compression plugin (recommendation?) ...
Something that lets you set a long RMS time. ;)

But actually, for what I do, you need it to have a lookahead or pre-comp parameter that can go pretty long as well. I only ever use ReaComp, which comes with Reaper, but I think it can be downloaded for free to use in other DAWs. Part of the trick is to set the pre-comp halfway through the RMS time, and since ReaComp's pre-comp maxes out at 250 ms, the longest RMS I can use is about half a second. I'd actually like that to be a little longer, but it works well enough in most cases.

And really, that's the whole trick. I whack the pre-comp all the way up, put RMS up to 500ms, turn attack and release down to 0. On a mix I usually keep the ratio down as low as it can go (1.1:1), but on other sources I'll sometimes go higher. Almost never as high as 2:1, though. ReaComp has a variable knee, and softer knees are generally better for this. What I usually do is leave knee at 0 (hard as it gets), lower the threshold til it just barely triggers the reduction on the louder parts and then I drag the knee out so that it's almost always doing something, but never actually gets to the full ratio. Adjust by ear and as necessary from there.
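
My reading of that recipe, as a rough numpy sketch. Caveats: a centred RMS window stands in for the "pre-comp at half the RMS time" trick (they're equivalent, since centring the detector on each sample is the same as delaying the audio by half the window), there's no knee, and the threshold/ratio numbers are placeholders, not ReaComp's actual parameters.

```python
import numpy as np

def long_rms_compress(x, sr, rms_ms=500, ratio=1.1, threshold_db=-20.0):
    win = max(1, int(sr * rms_ms / 1000))
    # Centred sliding RMS: the window is centred on each sample, so
    # detection effectively looks ahead by half the RMS time -- same
    # effect as setting pre-comp halfway through the RMS window.
    rms = np.sqrt(np.convolve(x ** 2, np.ones(win) / win, mode="same"))
    level_db = 20 * np.log10(rms + 1e-12)
    over_db = np.maximum(level_db - threshold_db, 0.0)
    gain_db = -over_db * (1.0 - 1.0 / ratio)   # gentle 1.1:1 squeeze
    return x * 10 ** (gain_db / 20)

sr = 1000
t = np.linspace(0, 1, sr, endpoint=False)
quiet = 0.01 * np.sin(2 * np.pi * 5 * t)   # well below threshold
loud = 1.00 * np.sin(2 * np.pi * 5 * t)    # well above threshold
```

Because the gain rides the half-second average rather than individual peaks, it evens out note-to-note level and "drags the peaks around with it," as described above.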

Now, this does tend to respond to low frequency content a bit more than the highs, which is why I wrap it with EQs. Before the comp, I'll shelf down the low end a couplefew dB, and then shelf it back up by about the same amount after the compressor. This helps quite a bit in getting the note-to-note smoothing that I'm looking for while still letting low-frequency transients retain some of their thump.
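
The shelf sandwich could look something like this. It's a naive sketch: a one-pole shelf rather than anything studio-grade, the down and up shelves only cancel approximately around the crossover, and the 150 Hz / 3 dB figures are invented for illustration.

```python
import numpy as np

def low_shelf(x, gain_db, fc, sr):
    # One-pole lowpass splits off the lows; scale them, keep the highs.
    a = np.exp(-2 * np.pi * fc / sr)
    low = np.empty_like(x)
    state = 0.0
    for i, s in enumerate(x):
        state = (1 - a) * s + a * state
        low[i] = state
    return (x - low) + 10 ** (gain_db / 20) * low

def eq_sandwich(x, process, sr, cut_db=3.0, fc=150.0):
    # Shelf the lows down, run the dynamics process, shelf back up
    # by the same amount so the tonal balance is (roughly) restored.
    ducked = low_shelf(x, -cut_db, fc, sr)
    return low_shelf(process(ducked), +cut_db, fc, sr)
```

The compressor in the middle then "sees" less low end, so a bass note doesn't trigger more reduction than a vocal note at the same perceived level.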

For me, this is a much more natural and transparent way to inflate the "loudness" of very dynamic material. It allows the peaks to hit the limiter or really any other dynamic process after it a lot more consistently. A lot of people talk about two-stage compression where you limit the peaks first and then do a slower compression to even out the note-to-note stuff, but that has never made sense to me. When you do it that way, the attack/sustain ratio is very different for louder hits than for softer. In the extreme, you'd have some notes distorted and others not even though they're all the same volume. Worse yet, since you didn't smash down the quieter peaks, they could end up louder than the peaks that used to be loud! How does any of that ever make sense to anybody?
 
I get that the vocal is lower because the other instruments got louder/thicker, but I don't get why the vocal needs to be the same throughout the mix.

What I do on songs that swing wildly, dynamics-wise, is mix each section as if it were its own thing, then work on the transitions.

This assumes that you perceived the vocal sitting lower in the loud parts as a problem.

I also tend to get the loud parts of the song where they need to be, then deal with the quiet parts. If you do it the other way around, you can run out of headroom because you started with the quiet part too loud.

But either way, the loud parts don't have to be that much louder than the quiet parts. Even if the quiet parts aren't that much quieter level-wise, they will seem quieter because of the lack of extra instrumentation.
 
...but I don't get why the vocal needs to be the same throughout the mix...
Well, I guess because the vocal is usually the focal point of the mix. In general, I feel like it should stay consistent or even get a little louder in the louder parts of the song. It just seems like it will be kind of unnatural if the vocal gets noticeably quieter in the crescendo part of the song. I think it's okay to let it get buried - to be quieter compared to the rest of the mix - when things are supposed to be loud because that's how you know those other things are loud, but if the vocal itself gets significantly quieter, it kind of ruins that whole thing. Even when mastering a set of different songs for an album, I try to get the vocals to sit around the same level all the way through, and kind of let the rest of the mixes fall where they may around that. It's not any kind of rule, of course.
 
Well, I guess because the vocal is usually the focal point of the mix. In general, I feel like it should stay consistent or even get a little louder in the louder parts of the song. It just seems like it will be kind of unnatural if the vocal gets noticeably quieter in the crescendo part of the song. I think it's okay to let it get buried - to be quieter compared to the rest of the mix - when things are supposed to be loud because that's how you know those other things are loud, but if the vocal itself gets significantly quieter, it kind of ruins that whole thing. Even when mastering a set of different songs for an album, I try to get the vocals to sit around the same level all the way through, and kind of let the rest of the mixes fall where they may around that. It's not any kind of rule, of course.
Who said anything about the vocal getting quieter in the loud parts?

I guess that since I mix things part to part, I don't really consider leaving anything to get buried or jump out because I left it a consistent volume for the whole song.

I also tend to keep my vocals a little further back in the mix than most pop-type tunes would be, so I have to stay on top of where they sit, since they are just about to be buried in the first place.
 
Ummm....
It creates all kinds of stuff I hate... like if I jump from "verse 2" to the "final chorus", well, the vocals are waaaaay softer of course in the final chorus, really highlighting the artificial nature of it all...
But I guess this...
...but if you listen all the way through, you don't really notice.
Is all that really matters most of the time.
 
Well... quick update. Ash, based on your advice, I went back to the mix and scrutinized it, and that was absolutely what was needed. It was not ready for mastering. I spent the day really working on it (did the trick of bouncing the mix, importing the waveform, and looking for peaks, etc.). It led to lots of fixes that made the base mix much better. Once I felt the mix sounded great, I tried mastering it again. I am now actually running my limiter at the same threshold throughout (except for one awesome bass solo where I give it more gain). I found lots of little issues where I needed to better balance the instruments, which gave me more headroom in critical areas, meaning the limiter is not limiting in those sections like it was before (and squashing them).

Of course, things like the vocal level varying are still there, but now there are no areas where the mix seems to "compress" or pulse, which is what I was trying to fix with the variable threshold. That was the wrong thing to do in this case.

I don't have a lookahead compressor, so instead I went in and manually edited the offending snare drum hits (which I found using the waveform and my ear). I was able to tame the drums this way (yeah, manual... but it works). I only reduced the gain on the transient edge and kept all the punch after, so it did not hurt the feel too much.
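
That manual edit amounts to multiplying just the transient region by a short gain envelope with crossfades at each end. A sketch of the idea, where the hit position, cut depth, and fade length are all placeholders you'd set by ear:

```python
import numpy as np

def tame_transient(audio, start, length, gain_db=-4.0, fade=32):
    # Turn down only the leading edge of a hit (samples start..start+length),
    # with short linear crossfades so the edit itself is inaudible.
    # Everything after the edge keeps its full level (the "punch").
    out = audio.copy()
    g = 10 ** (gain_db / 20)
    env = np.full(length, g)
    env[:fade] = np.linspace(1.0, g, fade)    # fade into the cut
    env[-fade:] = np.linspace(g, 1.0, fade)   # fade back out
    out[start:start + length] *= env
    return out
```

Most DAWs let you do the same thing with clip gain or item volume envelopes, which is effectively what's being described above.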

So part of my learning/viewpoint based on this:

a) When you think your mix is done, master it yourself (or try to anyhow) so you can discover issues, and fix it in the mix. While several mastering engineers will offer this iteration, there is no way I could have iterated with someone remotely to find all the issues I found today. I will still use a pro mastering house (most likely), but this exercise has helped me make the mix a ton better.

b) When trying to get it louder, make it softer!

--------------------------
Interestingly... I am about 70% deaf in my left ear, which makes all this extra challenging. I have to use mono mode to really test out the balance of my instruments. I recently got a hearing aid for my left ear, which may work out (the problem is that the frequency profile of my left ear is jacked up too... so I get too much HF in that ear relative to normal). I do a lot of head turning as well while listening.
 