A series of small questions

Maddox

New member
Hello,
A series of questions:

QUESTION 1
When recording an acoustic guitar with a two-mic technique, some tutorials tell you to hard pan one mic to the right and the other to the left. This results in two tracks.

But what if, in my mix, I want to pan the entire acoustic guitar to, let's say, 70% right?

Should I pan each track while keeping its original left-right orientation?

For instance, if I want the acoustic guitar at 70% right, and I have two tracks, one 100% right and the other 100% left, I would:

- set the 100%-right track to somewhere like 70 or 80% right,
- and the 100%-left track to somewhere like 30% or 20% left.

And in this case, would it be better to apply EQ, reverb, and compression to both tracks in equal amounts, or to treat each one individually?


Or should I export both tracks to a single hard-panned left-right stereo track, pan that new track to the intended 70% right, and only then apply EQ, reverb, etc.?

I know that the settings for EQ, reverb, compression, etc. almost always depend on the mix and on what you want to achieve, but can anyone give me some basic direction? I'm just looking for guidelines.


QUESTION 2
This question is about annoying frequencies, de-essing and equalizing:

Sometimes my vocals sound uneven: in some parts they are piercing, as opposed to smooth and pleasant.
This is almost never due to sibilance - the "s" sounds fine, but at times the vocals almost hurt my ears.

And even though the compression seems to be working for the dynamics of almost the entire vocal, I still find these piercing, annoying frequencies. If I try to change the compression, I end up with a different setting that isn't as good as the previous one.

I tried dealing with this using equalizers: I would sweep the EQ bands, find those piercing frequencies, and make narrow cuts, usually in the highs and sometimes in the low mids. Even though this works for the annoying parts, I noticed it also makes it hard - sometimes impossible - to get the vocal sound I'm looking for in the mix after cutting those frequencies.

Then I read somewhere that compression can help with annoying frequencies caused by sibilance. However, I have never read of people applying de-essers or compression to treat annoying frequencies that aren't a result of sibilance.

So my first thought was that these annoying frequencies come from either:

- bad mic technique - maybe pulling away from the mic on parts with more energy would help;

- bad singing technique - maybe the problem is uneven dynamics and poor voice control, and compressors can only do so much; or

- poor mixing skills - which seems very possible.


But then (oh yes, there's more), after hearing pre-mix, demo, and bootleg versions of songs that don't seem to have this problem in their original album versions (Lou Reed, David Bowie, Nirvana, Michael Jackson, and even The Beatles), I began to wonder whether these piercing frequencies in certain parts of a song - unrelated to the letter "s" and sibilance - aren't simply something natural that has to be treated.

Since I'm hoping that this is the case, I ask:

Is it common to use de-essers to remove annoying frequencies that don't come from sibilance?

Should I deal with these annoying frequencies by EQing them out, bouncing the result to a new track, and only then trying to achieve the sound I'm looking for?

Am I using the term "sibilance" wrong in assuming it applies only to the phonetics of certain letters like "s" and "f", and to high-pitched sounds like cymbals?



QUESTION 3
This question is kinda related to the first one:

I'm using Sonar 8.5 PE with Reason 4.0. I record the "analog instruments" I have (such as guitar, acoustic guitars, and vocals) in Sonar and then, via ReWire, experiment with Reason for instruments I don't have.

Then, when I think the result is satisfactory, I export the Reason tracks to Sonar to do the mixdown.

But most of the time, even before I export the Reason tracks to Sonar, I end up having to clone some Sonar tracks because their levels are too low compared with the Reason tracks.

So I sometimes end up with three copies of the same vocal, and each track has its own EQ, compressor, and other plugins - whether with the same or different settings. This eats a lot of CPU.

I wonder if this happens to anyone else and how they deal with it. My question is about the results each method gives.

Do they clone the tracks, bounce them to a new track without any plugins or pan/volume settings (everything at 0 dB and panned center), and only then add effects and volume/pan treatment to the single bounced track?

Do they use the Process Audio > Gain function to even out the level of the single low-level track?

Or would it be risky to treat each track individually (with its own effects and so on) and then, to save CPU, bounce those treated tracks down to a single track?

Is there any other way to deal with this?



QUESTION 4
This question is about Mono and Stereo:

Is there any problem with working with stereo and mono tracks in the same project?

Because after I export the Reason tracks to Sonar, they will be stereo, while the Sonar tracks will be mono.

I've heard that we should check the final mix to see whether it works on both mono and stereo systems - although I really can't see the point, since 99% of the time we listen to music on stereo systems, whether a CD player, TV, car radio, iPod, etc. (but I might be wrong about this).

How should this be handled?

All tracks mono, or all tracks stereo?
Or should I just leave mono tracks mono and stereo tracks stereo in the mixdown?



QUESTION 5
This question is more like a curiosity:

How often do you guys check for phase inversion?

I'm really new at this, and I'm asking because, so far, I don't think I've ever had problems with phase inversion or comb filtering. I suspect that's because I still have little experience and might be missing something.

How important is it to check for phase issues if, at first sight (or first listen, for that matter), you aren't picking up any sign of them?

And how do you guys do it: do you have some kind of routine for dealing with this?



--------------------

I know this is a really long post, and I apologize for it.

I really am a newbie, and I don't know anyone outside the internet I can talk to about this stuff, hence the length and digressions of the questions.

I really hope you guys can help me.

Thanks.
 
whoa. lot of text here :D
Q1: there aren't any rules for how much to pan your guitars; tutorials are just pointers, and it also depends on your song. The same goes for effects - you could add one kind of reverb to the left channel and another kind to the right channel.
Q2: I would try a combination of EQ and a de-esser. Don't cut too much with the EQ - just 2-3 dB.
Q4: just leave mono tracks mono and stereo tracks stereo. As for checking in mono: some clubs play music in mono, and, more importantly, those little kitchen radios are mono too.
Q5: if you don't hear any phasing, you're probably fine (if you don't know what it sounds like, just load a phaser effect onto one of your tracks and you'll find out :P).
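
(If you want a way to actually measure what I mean by checking in mono and checking phase, here is a minimal Python sketch. It's only an illustration: the file names are made up, and it assumes both mic tracks were exported from your DAW as mono 16-bit WAV files at the same sample rate.)

```python
import numpy as np
from scipy.io import wavfile

# Made-up file names -- substitute whatever you exported from your DAW.
rate_a, mic_a = wavfile.read("acoustic_mic_a.wav")
rate_b, mic_b = wavfile.read("acoustic_mic_b.wav")
assert rate_a == rate_b, "both tracks need the same sample rate"

# Trim to a common length and convert to float so we can sum without clipping.
n = min(len(mic_a), len(mic_b))
a = mic_a[:n].astype(np.float64)
b = mic_b[:n].astype(np.float64)

def rms_db(signal):
    """RMS level in dB relative to full scale of a 16-bit file."""
    return 20 * np.log10(np.sqrt(np.mean(signal ** 2)) / 32768.0)

# Mono fold-down: roughly what a club PA or a little kitchen radio plays.
mono_sum = a + b
# Polarity-flip (null) test: if this comes out much quieter than either track,
# the two mics largely agree; if the mono sum instead loses a lot of level or
# sounds hollow, you are hearing phase cancellation / comb filtering.
null_test = a - b

print(f"mic A     : {rms_db(a):6.1f} dB")
print(f"mic B     : {rms_db(b):6.1f} dB")
print(f"mono sum  : {rms_db(mono_sum):6.1f} dB")
print(f"null test : {rms_db(null_test):6.1f} dB")
```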
 
I think Seidy has most of the answers to your questions. I'm just going to add to what he said on a couple questions:

QUESTION 1: I would go with the second option. With the first option, the right-panned track would put about 30% of its volume in the left channel and 100% in the right, while the left-panned track would put 100% in the left channel and 70% in the right.

LEFT CHANNEL:
30% + 100% = 130%

RIGHT CHANNEL:
100% + 70% = 170%

You would still hear a little bit less volume in the left channel, but not the volume difference you are looking for; panning a mono track to the left doesn't take away from the volume in the left channel, only the right channel. Same for panning to the right, if all this makes any sense. I hope I explained it well.

You can experiment with some of these different outcomes by recording a scratch mono track of something in a simple DAW such as Audacity and duplicating it so you have two identical mono tracks. You can then begin panning them to see what I mean.
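
If you'd rather just see the numbers, here's a tiny Python sketch of the same arithmetic. It uses the simple pan model described above, where panning a mono track only turns down the opposite channel; many DAWs use a constant-power pan law instead, so treat the exact figures as illustrative.

```python
def pan_gains(pan_percent):
    """Simple 'attenuate the opposite channel' pan model.
    pan_percent: -100 = hard left, 0 = center, +100 = hard right.
    Returns (left_gain, right_gain) as fractions of full level."""
    p = pan_percent / 100.0
    left = 1.0 - max(p, 0.0)   # panning right only reduces the left channel
    right = 1.0 + min(p, 0.0)  # panning left only reduces the right channel
    return left, right

# Option 1: keep each mic track on its own side, just less extreme.
tracks = [+70, -30]  # right mic at 70% right, left mic at 30% left
left_total = sum(pan_gains(p)[0] for p in tracks)
right_total = sum(pan_gains(p)[1] for p in tracks)
print(f"Option 1 -> left: {left_total:.2f}, right: {right_total:.2f}")
# Option 1 -> left: 1.30, right: 1.70  (the 130% / 170% from above)

# Option 2: bounce both mics to one stereo track, then pan/balance it 70% right.
# A balance control just turns that track's whole left channel down to 30%.
left_total, right_total = 1.0 * 0.30, 1.0 * 1.00
print(f"Option 2 -> left: {left_total:.2f}, right: {right_total:.2f}")
# Option 2 -> left: 0.30, right: 1.00  (a much clearer shift to the right)
```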


QUESTION 3: It is usually a good rule to never push the gain on a track over 0dB and to only duplicate it if you are purposely going to slide the duplicate forward a few milliseconds to give it a "doubled" effect. To solve this problem you can either turn down the Reason tracks or add compression to the quiet tracks. Inside your compression plug-in there should be an option to turn up the gain. I would use this option to get those tracks louder.
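
If it helps to see what that output/makeup gain knob is doing numerically, here is a very small Python sketch of just the gain stage (the dB figures are made-up examples, not settings from Sonar or Reason):

```python
import numpy as np

def apply_gain_db(samples, gain_db):
    """Scale a track by a gain expressed in decibels (positive = louder).
    This is the same math as a compressor's makeup/output gain control,
    applied after the gain-reduction stage."""
    return samples * (10.0 ** (gain_db / 20.0))

# Example: a quiet vocal peaking around -18 dBFS, pushed up by 6 dB of makeup gain.
quiet_peak = 10.0 ** (-18.0 / 20.0)                  # linear value of a -18 dBFS peak
boosted = apply_gain_db(np.array([quiet_peak]), 6.0)
print(f"new peak: {20.0 * np.log10(boosted[0]):.1f} dBFS")   # about -12.0 dBFS
```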
 
The point of using two mics on an acoustic is usually to get a big full sound that sits across a certain amount of the stereo field, depending upon your needs.

It gives it a bit of spaciousness, since the two mic tracks will be different. I rarely do this, and I track a lot of acoustic guitars in acoustic-guitar-heavy music.

More likely, I will record the part twice with two mics each time and put both mics from one take on one side - one mic hard left/right, the other halfway - and do the same on the other side... now that gives you a big sound.

If you want an acoustic guitar at 70% one side, try using a single mic, or just place the tracks on top of each other and blend volumes to suit. I assume you're using two mics because you can't get the sound you want with one.

If you can get the sound you want with one, use one.
 
On Question 2: what are you using to listen to the recordings where you hear these "piercing" frequencies? Are they good flat-response monitors?
 
To solve this problem you can either turn down the Reason tracks or add compression to the quiet tracks. Inside your compression plug-in there should be an option to turn up the gain. I would use this option to get those tracks louder.

and

Q number 2: i would try to use combination of eq and de-esser. don't cut too much with eq.. just 2-3 db.

Here's a thing: I tried using heavy compression (around a 10:3 to 15:3 ratio) and turned up the compressor output to compensate for the lost volume, and I think it really helped.

I was afraid of losing the dynamics (because I've read many times that we should apply compression carefully, otherwise the song might end up "flat", as in "with no emotion").

But as I said, I think it really helped. The vocals came out a lot smoother, and even with more definition.

For instance, before this heavy compression, some trills I tried to add to the vocal melody sounded flat in places, not as "marked" as I wanted (I really hope I'm expressing myself clearly here).

But the heavy compression helped the transitions between adjacent notes in the trills sound a lot more like trills, and less like poor attempts at trills.
Anyway, just thought I'd share this.


panning a mono track to the left doesn't take away from the volume in the left channel, only the right channel. Same for panning to the right, if all this makes any sense.

Yes, that actually makes a lot of sense.

More likely I will record the track twice, with two mics each and put both mics on one side - one mic hard left/right, the other half way, and the same on the other side... now that gives you big sound.

Yes, I'm looking for a fuller sound. But if I record the same part twice, especially with a fingerpicking style, unless I can duplicate the first take exactly (or almost exactly), wouldn't that hurt the definition of the strings?

I was thinking about using a single mic, as you suggested, and adding some delay.
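
For what it's worth, here is a rough Python sketch of that single-mic-plus-delay idea, sometimes called a Haas-style double. The file name and the 18 ms delay are just placeholders, and the delayed copy will comb-filter a bit if the result is summed to mono (which ties back to Question 5).

```python
import numpy as np
from scipy.io import wavfile

rate, mono = wavfile.read("acoustic_single_mic.wav")  # assumed mono 16-bit file
mono = mono.astype(np.float64)

delay_ms = 18                                 # roughly 10-30 ms: heard as width, not an echo
delay_samples = int(rate * delay_ms / 1000.0)

dry = np.concatenate([mono, np.zeros(delay_samples)])
wet = np.concatenate([np.zeros(delay_samples), mono]) * 0.9   # delayed copy, slightly quieter

# Dry take hard left, delayed copy hard right (swap or narrow to taste).
stereo = np.stack([dry, wet], axis=1)
stereo /= np.max(np.abs(stereo))              # normalize so the file doesn't clip

wavfile.write("acoustic_widened.wav", rate, (stereo * 32767).astype(np.int16))
```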


On Question 2: what are you using to listen to the recordings where you hear these "piercing" frequencies? Are they good flat-response monitors?

They're OK, I guess. And I don't think they're the problem, because I don't hear these frequencies when listening to other songs (except those pre-mixes and demos I mentioned).
 
Question 1:
If the acoustic guitar will be the feature instrument with no (or few) others, record it in stereo and pan wide. (Sometimes I duplicate the tracks, via copy and paste, and pan them...LEFT=100% left, RIGHT=center, RIGHT=100% right, and LEFT=center.)

However, if the acoustic will be "filler" along with a bunch of other instruments then I will record it in mono and pan it where needed.

In both cases I keep the mics about 3 to 4 feet away from the guitar.

(BTW, the discussion would be far less confusing if you had separated these questions into different threads.)
 
Sometimes I duplicate the tracks, via copy and paste, and pan them...LEFT=100% left, RIGHT=center, RIGHT=100% right, and LEFT=center.)

Hey Raw, doesn't that just give you a louder mono clump? Copying and pasting tracks doesn't accomplish anything other than making them louder.

(I hope I didn't open THAT same can of worms again... I really hope I misunderstood your post.) :cool:
 
You can definitely conserve processing power by bouncing down effects-laden tracks to a new track, then archiving the old track.

As far as EQ goes, perhaps you could address the problem with mic choice/discipline/placement. I feel like EQ is an attempt to fix a problem. Frequently the best solution is to avoid the problem in the first place.
 
Hey Raw, doesn't that just give you a louder mono clump? Copying and pasting tracks doesn't accomplish anything other than make them louder.

Well, it makes a mono spot in the center with a wide stereo pair nested around it. Otherwise, with the left mic panned full left and the right mic panned full right, there is nothing in the center, because each mic is playing something different. So I am filling in the hole with a mono field. It sounds fuller to me, as opposed to just turning it up. Try it!

I suppose if you left each duplicate track panned exactly the same as the original, then yes, it would just be louder.
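
Here is a toy numeric sketch of what I mean, with made-up sample values and a simple equal split for the centered duplicates (not any particular DAW's pan law):

```python
import numpy as np

# Stand-ins for a few samples of the two mic tracks; they differ slightly,
# which is what makes the hard-panned pair sound wide in the first place.
mic_l = np.array([0.50, 0.20, -0.30, 0.10])
mic_r = np.array([0.40, 0.30, -0.20, 0.00])

# Hard-panned pair only: left channel is mic L, right channel is mic R.
left, right = mic_l.copy(), mic_r.copy()

# Duplicates of both mics panned dead center: each feeds both sides equally.
center = 0.5 * (mic_l + mic_r)
left += center
right += center

print("left :", left)
print("right:", right)
# The two channels still differ (the wide image survives), but they now also
# share a common mono component -- the "mono spot in the center".
```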
 
each mic is playing something different..

OK, you lost me. I thought we were talking about copying and pasting the same part. How could each mic be playing something different?

If you take 2 identical mono tracks (copy and paste the same track) and pan each one wide, you still only have a louder mono track.
 
OK, you lost me. I thought we were talking about copying and pasting the same part. How could each mic be playing something different?

If you take 2 identical mono tracks (copy and paste the same track) and pan each one wide, you still only have a louder mono track.

No, no, no! I start with two mics and record them as a wide-spaced stereo pair. Then I pan them respectively hard L & hard R.

Next I make dupes of those tracks and pan them both dead center. (Same as summing them together into a single mono track.) Now I have a mono center point nested within a stereo field. Sorry if I confused you.

It's OK if you feel this doesn't change anything. I was just saying how I like to experiment with different configurations like that.
 
No, no, no! I start with two mics and record them as a wide-spaced stereo pair. Then I pan them respectively hard L & hard R.

Next I make dupes of those tracks and pan them both dead center. (Same as summing them together into a single mono track.) Now I have a mono center point nested within a stereo field. Sorry if I confused you.

It's OK if you feel this doesn't change anything. I was just saying how I like to experiment with different configurations like that.

OK, sorry - it was me who misunderstood. That makes more sense. I don't know if I agree or not, honestly. From what I think I understand, it doesn't do anything. But I've never tried it with panned stereo tracks, so I won't pretend to be an expert on this one. :cool:
 