recording vox without a compressor

Thread starter: paresh (Member)
Is there a way to get more consistent levels without an outboard compressor? I can add vst compression afterward but it doesn't help much. Thanks.
 
Sure. Mic technique.
You ever watch the really good singers on stage when they "work the mic"? They'll get up close to it when they're singing a soft line then back off when they hit the loud passages.

Takes some practice but do-able. ;)
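To get a rough sense of the leverage involved in working the mic: for an idealized point source in free field, level falls off with the inverse of distance, so doubling the distance costs about 6 dB. Real mics, rooms, and proximity effect complicate this, but a quick sketch (distances are just example values) shows why a few inches of movement matters:

```python
import math

def level_change_db(d_near, d_far):
    """Inverse-distance (free-field point source) level change when
    moving from d_near to d_far. Negative = quieter."""
    return 20 * math.log10(d_near / d_far)

# Backing off from 5 cm to 20 cm: roughly -12 dB, a big dynamic swing
# from mic position alone.
level_change_db(0.05, 0.20)
```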
 
Is there a way to get more consistent levels without an outboard compressor? I can add vst compression afterward but it doesn't help much. Thanks.

Forget about using a compressor to even out vocals. Most vocal tracks I work with need fader automation or manual fader rides at mix time. Give that a shot to even it all out. If you use enough compression to even out a vocal take, you'll squeeze the life out of the whole thing.

*Fader rides to even out levels
*Compressor to shape the tone, density, and color of the sound
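A toy illustration of the first bullet (the levels are made up): a fader ride is just a per-phrase gain offset toward a target level, applied by hand rather than by a detector circuit.

```python
# Hypothetical measured levels of two phrases in a take.
loud_db, soft_db = -6.0, -18.0
target_db = -12.0

# The ride simply offsets each phrase toward the target:
ride_for_loud = target_db - loud_db   # -6 dB: pull the loud phrase down
ride_for_soft = target_db - soft_db   # +6 dB: push the soft phrase up
```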

Sure. Mic technique.
You ever watch the really good singers on stage when they "work the mic"? They'll get up close to it when they're singing a soft line then back off when they hit the loud passages.

Takes some practice but do-able. ;)
That too.
 
In addition to the above suggestions, try giving the performer a really good monitor mix. Singers will automatically respond to what they hear and track any unevenness. Put some time into building a monitor mix that's as close as is practical to a finished sound.
 
Sure. Mic technique.
You ever watch the really good singers on stage when they "work the mic"? They'll get up close to it when they're singing a soft line then back off when they hit the loud passages.

Takes some practice but do-able. ;)

+100000000000

And as somebody else said, you should rely on a compressor to color the sound more than to even it out.
 
I gotta say that while I back the "work the mic" idea one hundred percent as the #1 answer to the question, I think the whole idea of not using the compressor to limit the dynamics of the vocal waveform is rather unrealistic.

While one should definitely not count on compression alone to hold back vocal dynamics, and it's definitely not a complete replacement for proper mic and vocal technique, even with the best vocalist an outboard tracking compressor will often have a major effect on the recorded waveform's dynamics.

For example, it's not unusual for us to track vocals with a good 3:1 - 4:1 ratio and a threshold set to a good half of the crest factor, even with an experienced vocalist. Those kinds of settings are most certainly going to have a major and very noticeable effect on the dynamics of the recording. With a less experienced vocalist that can sometimes climb to 5:1 with an even deeper threshold, explicitly because the dynamics need controlling.
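For a rough feel of how hard those settings bite, here's the static compressor curve in isolation (threshold and ratio only; real units also have attack, release, and knee behavior this ignores, and the numbers below are hypothetical):

```python
def compress_db(level_db, threshold_db, ratio):
    """Static compressor curve: below threshold, unity gain; above it,
    the overshoot is divided by the ratio."""
    if level_db <= threshold_db:
        return level_db
    return threshold_db + (level_db - threshold_db) / ratio

# Hypothetical peak at -4 dBFS, threshold at -14 dBFS, 4:1 ratio:
# the 10 dB overshoot becomes 2.5 dB, i.e. 7.5 dB of gain reduction
# on the loudest notes. That is a very audible change in dynamics.
out = compress_db(-4.0, -14.0, 4.0)   # -11.5 dB
```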

Ignoring those truths does not make them untrue. If all one needed were the sound of a compressor's circuits without taming the dynamics, we could just run the signal through uncompressed. But the fact is, it's the act of compression itself that adds most of the "coloration" that people so admire. The "coloration" is mostly in the *way* the circuit compresses, whether it's the type of harmonic distortion introduced by the tube or tubes at differing voltages or the "ballistic" response slope of the opto-electronic sensor, etc., and not just in the passive sound of the circuitry at 1:1.

Should compression be used to substitute for good vocal technique and mic technique? Absolutely not. There's no excuse for abandoning or ignoring good technique.

Does it sometimes need to be used to help make up for inexperienced technique? Yes, in the real world it unfortunately does.

Does it have an effect on the dynamics even on vocalists with perfectly good technique? Absolutely. That's why it's called "compression" ;), and we should plan on it.

I am not a witch.

G.
 
Thanks to the power of DAWs, besides mic technique and compression, you have a third option to "even out" the vocals...

...editing. :)

I use mic technique, and sometimes even apply very mild/gentle limiting just to catch the wayward spikes...but I still edit the vocals in the DAW because I can get EXACTLY the level of each word that's needed. It's totally under my control...unlike the first two techniques.

Yeah...it takes a little more time/effort than just slapping a compressor across the whole track, but the results are more accurate (and more predictable ;) ).
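The word-by-word editing described above amounts to applying a separate gain to each sliced segment. A minimal sketch, with invented segment boundaries and gains (a DAW does this non-destructively with clip/object gain; the array here just stands in for the audio):

```python
import numpy as np

sr = 48000
vocal = np.random.default_rng(0).normal(0, 0.1, sr * 3)  # stand-in 3 s take

# (start_sample, end_sample, linear_gain) per sliced word/phrase.
segments = [(0, sr, 1.5), (sr, 2 * sr, 0.8), (2 * sr, 3 * sr, 1.2)]

edited = vocal.copy()
for start, end, gain in segments:
    edited[start:end] *= gain  # exact, repeatable level per segment
```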
 
Ignoring those truths does not make them untrue. If all one needed were the sound of a compressor's circuits without taming the dynamics, we could just run the signal through uncompressed.
I think you misunderstood what I was saying. The color doesn't come from the circuits. The color comes from the actual compres...
But the fact is it's the act of compression itself that adds most of the "coloration" that people so admire.
Ah. You did understand.

Yeah, vocals need compression. My point was I compress until I hear "now this is dense enough". I don't compress until I hear "now this take is even".
 
but I still edit the vocals in the DAW because I can get EXACTLY the level of each word that's needed. It's totally under my control...unlike the first two techniques.
Forget about using a compressor to even out vocals. Most vocal tracks I work with need fader automation...
:( There were three techniques up there.

But yeah, I'm with you, miro. :)
 
Yeah, vocals need compression. My point was I compress until I hear "now this is dense enough". I don't compress until I hear "now this take is even".
I didn't mean to make it sound like I was responding to you specifically, Chibi; it was more of a general response to the topic.

What's interesting to me though is that, for me anyway, it's almost impossible to separate the two. I agree that I don't compress the vocal tracking with the expressed intention of leveling the waveform, it's all about the sound. But at the same time, it's easy after a while to see the correlation between the sound and the general dynamic character of the signal. I can pretty easily set the meters on my VLA to "output" and "see" pretty much when the vocals are going to sound right based upon the ballistics of the signal on the meters.

One still has to use their ears as the judge, because every vocal is different and there will be differences in the vocal's dynamics from session to session with or without the compression. The meters are not an absolutely perfect indicator in that regard, but they sure do work well for ballparking the signal. With practice, on a good quality compressor with good VU meters on the output, one could dial in the sound to within a good, say, 80-90% just by getting a good look at the character of the dynamics.

I kinda hate to say that here, because it may cause some newbs to be tempted to "mix by sight" instead of by their ears, which would be a huge mistake. Don't do it, guys. Not until you can, that is ;).

G.
 
"Work the Mic" is the correct answer.

However, it's not as easy as it sounds for singers, and you have to consider the proximity effect of the mic in question. Singers need to practice this technique in a way that doesn't create major sonic differences beyond level, as they back off the mic for loud passages and "eat it" for quiet passages.

If those differences are extreme, it will be difficult to work the mic without having pronounced proximity effect for quiet passages, then losing that effect and introducing lots more room sound with loud passages. The effect can be good, in some cases, but generally sounds weird.

Alternatively, for a good singer with consistent delivery, you can have the engineer "ride the fader" and compensate for known predictable changes in delivery.

Just my $.02
 
:( There were three techniques up there.

But yeah, I'm with you, miro. :)

Yeah...I saw that...but I was not talking specifically about track fader automation when I said I edit the vocal levels.

Maybe it's two sides of the same coin...but what I do is actually slice up the track into sections (or Objects as they are called in Samplitude)...and then I adjust the level for each Object as needed, which is not quite the same thing as track level automation.
Samplitude also has track level automation...but I never use it because working with the Objects is more detailed, and I can also adjust other things (EQ, FX, or any processing) just for a given Object rather than across the whole track.
It's all non-destructive...so I can always Undo if needed.

"Objects" are unique to Samplitude (though some other apps now have their own version of the concept), and until you work with Objects it's kinda' hard to appreciate that approach vs. track processing or other methods. I'm sure you can still slice up a track in most apps and kinda do the same thing I'm talking about...I just don't know how their options compare to Samplitude Objects...?
 
If those differences are extreme, it will be difficult to work the mic without having pronounced proximity effect for quiet passages, then losing that effect and introducing lots more room sound with loud passages. The effect can be good, in some cases, but generally sounds weird.

Yeah...that's a concern...which is why I now (thanks to DAW capability) prefer to just edit the individual words/phrases for level (and other stuff too as needed)...though when I sing, I will still "work the mic" a bit as needed, but that's not always easy for singers who are only used to singing live in a cover band where they basically "eat the mic" for the entire night! :D
 
Yeah...that's a concern...which is why I now (thanks to DAW capability) prefer to just edit the individual words/phrases for level (and other stuff too as needed)...though when I sing, I will still "work the mic" a bit as needed, but that's not always easy for singers who are only used to singing live in a cover band where they basically "eat the mic" for the entire night! :D

But be very careful!!! ...... You are what you eat. :D

:cool:
 
Maybe it's two sides of the same coin...but what I do is actually slice up the track into sections (or Objects as they are called in Samplitude)...and then I adjust the level for each Object as needed, which is not quite the same thing as track level automation.
One of the engineers I work with (Jay) does basically the same thing in Cubendo. He just splits the take where needed to cut it into separate chunks and adjusts the overall volume for each chunk, occasionally fading or cross-fading between chunks to smooth the transition.

It works great and he does it nice and fast and all that. I personally don't do it because I personally work around an automation-centric method. For me, it's just the digital version of fader jockeying; the automation track is just the digital version of riding the fader - in fact, in Nuendo you can draw the automation curve by riding a fader as the track is playing if you wanted to. So for me, basing my digital mix methods around automation is just the digital continuation of good old-fashioned analog methodology, which is right in the wheelhouse of my comfort zone. The "divide and conquer" method used by you and Jay is more of a pure NLE digital construct.
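The chunk-splitting method described above can be sketched as follows, including the crossfade that smooths the seam between two chunks at different ridden levels. This is a simple equal-gain linear crossfade with made-up levels and fade length; DAWs offer various fade shapes:

```python
import numpy as np

sr = 48000
fade = int(0.01 * sr)            # 10 ms crossfade (arbitrary choice)
a = np.full(sr, 0.5)             # chunk A at its adjusted level
b = np.full(sr, 0.9)             # chunk B, pushed up

# Linear ramp from A's level to B's level across the seam,
# so the level jump doesn't click.
ramp = np.linspace(0.0, 1.0, fade)
seam = a[-fade:] * (1 - ramp) + b[:fade] * ramp
out = np.concatenate([a[:-fade], seam, b[fade:]])
```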

One method is not better than the other; it's just two different ways of skinning the same cat. Jay does just as good and as fast of a job with his method as I do with mine, and I'm sure you're right there with both of us. It's just choosing which tools and techniques one personally prefers.

G.
 
"Cubendo" :D

The main reason I do the Objects method is because I usually slice up a track anyway into Objects for a multitude of other reasons -- comping, spot EQ/Processing, etc --- so at that point, the level adjustment of Objects easily becomes part of all of that, and there's no need for me to then switch gears to a global track perspective do track level automation.

When I do my stereo mix OTB...that's when I consider the global track adjustments, while all my DAW adjustments are done only on the individual Objects, never the whole track...but because I've already done all those Object level adjustments, I then don't need any track level automation even when I get to the OTB mix.

Yeah...just different ways of getting to the same place. :)

One thing I wonder about when working with global track adjustments in any DAW: what if you pooch the track settings and can't Undo them back? You kinda mess up the whole track, whereas with Objects the damage is always limited to individual elements rather than the whole track.
 
One of the engineers I work with (Jay) does basically the same thing in Cubendo. He just splits the take where needed to cut it into separate chunks and adjusts the overall volume for each chunk, occasionally fading or cross-fading between chunks to smooth the transition.

It works great and he does it nice and fast and all that. I personally don't do it because I personally work around an automation-centric method. For me, it's just the digital version of fader jockeying; the automation track is just the digital version of riding the fader - in fact, in Nuendo you can draw the automation curve by riding a fader as the track is playing if you wanted to. So for me, basing my digital mix methods around automation is just the digital continuation of good old-fashioned analog methodology, which is right in the wheelhouse of my comfort zone. The "divide and conquer" method used by you and Jay is more of a pure NLE digital construct.

One method is not better than the other; it's just two different ways of skinning the same cat. Jay does just as good and as fast of a job with his method as I do with mine, and I'm sure you're right there with both of us. It's just choosing which tools and techniques one personally prefers.

G.

When I want the level changes to happen before the inserted compressor I slice up the block and adjust there, driving the signal into the compression. When I want to adjust the level after the compressor I use fader automation. The first is "correction" and the second is "expression".
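That pre/post distinction can be shown with a toy static compressor curve (threshold and ratio are hypothetical): the same 4 dB applied before the compressor gets partly eaten by extra gain reduction, while applied after the compressor it passes through at full value.

```python
def compress_db(level_db, threshold_db=-14.0, ratio=4.0):
    """Static compressor curve: overshoot above threshold is divided by ratio."""
    if level_db <= threshold_db:
        return level_db
    return threshold_db + (level_db - threshold_db) / ratio

peak = -6.0
# "Correction": +4 dB of clip gain BEFORE the compressor drives it harder,
# so the output only rises ~1 dB (from -12.0 to -11.0 dB).
pre = compress_db(peak + 4.0)    # -11.0 dB
# "Expression": +4 dB of fader AFTER the compressor lifts the output
# by the full 4 dB (from -12.0 to -8.0 dB).
post = compress_db(peak) + 4.0   # -8.0 dB
```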
 
"Work the Mic" is the correct answer.

However, it's not as easy as it sounds for singers, and you have to consider the proximity effect of the mic in question. Singers need to practice this technique in a way that doesn't create major sonic differences beyond level, as they back off the mic for loud passages and "eat it" for quiet passages.

This is one reason I like the RE20: its Variable-D design minimizes proximity effect.
 
working with the Objects is more detailed, and I can also adjust other things (EQ, FX, or any processing) just for a given Object rather than across the whole track.
It's all non-destructible...so I can always Undo if needed.

This is exactly what I do. i.e., you can apply EQ to just one word or phrase if proximity has caused boom. Lower peaks, raise dropouts.
 