Maddox
New member
Hello,
A series of questions:
QUESTION 1
When recording an acoustic guitar with a two-mic technique, some tutorials say to hard-pan one mic to the right and the other to the left, which results in two tracks. But what if, in my mix, I want to pan the entire acoustic guitar to, let's say, 70% right?
Should I pan each track individually, keeping their left-right orientation?
For instance, if I want the acoustic guitar at 70% right and I have two tracks, one panned 100% right and the other 100% left, I would:
- set the 100%-right track to somewhere around 70 or 80% right,
- and set the 100%-left track to somewhere around 20 or 30% left.
And in that case, would it be better to apply EQ, reverb, and compression to both tracks in the same amounts, or to treat them individually?
Or should I export both tracks to a single hard-panned stereo track, pan that new track to the intended 70% right, and only then apply EQ, reverb, and so on?
I know that the settings for EQ, reverb, compression, etc. will almost always depend on the mix and on what you want to do, but can anyone give me some basic directions? I'm just looking for guidelines.
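To make the pan arithmetic concrete, here is a rough sketch of how a pan position turns into left/right gains. It assumes a constant-power (-3 dB) pan law; DAWs (Sonar included) offer several pan laws, so treat the exact curve, and the function name, as my own assumptions:

```python
import math

def pan_gains(pan):
    """Constant-power (-3 dB) pan law (an assumption -- DAWs differ).
    pan: -1.0 = hard left, 0.0 = center, +1.0 = hard right.
    Returns (left_gain, right_gain)."""
    theta = (pan + 1.0) * math.pi / 4.0   # map [-1, 1] onto [0, pi/2]
    return math.cos(theta), math.sin(theta)

# A hard-panned mic pair, then the same pair nudged toward the right
# (roughly the "80% right / 20% left" idea from the question):
for pan in (-1.0, +1.0, -0.6, +0.8):
    left, right = pan_gains(pan)
    print(f"pan {pan:+.1f}: L gain {left:.3f}, R gain {right:.3f}")
```

The choice between the two approaches is mostly about stereo width: moving each mic inward narrows the image, while in many DAWs the pan control on a bounced stereo track acts as a balance control, attenuating one side rather than repositioning anything.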
QUESTION 2
This question is about annoying frequencies, de-essing, and equalizing:
Sometimes my vocals sound uneven: at some parts they are piercing, as opposed to smooth and pleasant.
But this is almost never due to sibilance: the "s" sounds fine, yet sometimes the vocals almost hurt my ears.
And even though the compression seems to work for the dynamics of almost the entire vocal, I still get these piercing, annoying frequencies. If I then try to change the compression, I end up with a different compression that isn't as good as the previous one.
I tried dealing with this using equalizers: I would sweep the EQ bands, find those piercing frequencies, and make narrow cuts, mostly in the highs and sometimes in the low mids. Even though this works for the annoying parts, I noticed it also makes it hard, sometimes impossible, to get the vocal sound I'm looking for in the mix after cutting those frequencies.
Then I read somewhere that compression can help with annoying frequencies caused by sibilance. However, I have never read about people applying de-essers or compression to treat annoying frequencies that aren't a result of sibilance.
So my first thought was that these annoying frequencies come from either:
- bad mic technique: maybe pulling away from the mic on parts with more energy would help;
- bad singing technique: maybe the problem is in the dynamics of the singing, like uneven, poor voice control, and compressors can only do so much;
- or poor mixing skills, which seems very possible.
But then (oh yes, there's more), after hearing pre-mix, demo, and bootleg versions of songs that don't seem to present this type of problem in their original album versions (by Lou Reed, David Bowie, Nirvana, Michael Jackson, and even The Beatles), I came to wonder whether these piercing frequency problems in some parts of a song, unrelated to the letter "s" and sibilance, aren't something natural that's simply meant to be treated.
Since I'm hoping that this is the case, I ask:
Is it common to use de-essers to remove annoying frequencies that don't result from sibilance?
Should I deal with these annoying frequencies by equalizing them out, bouncing the result to a new track, and then trying to achieve the sound I'm looking for?
Am I using the term "sibilance" wrong in assuming it applies only to the phonetics of letters like "s" and "f", and to high-pitched content like cymbal frequencies?
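One rough way to locate where a vocal gets piercing, before sweeping an EQ by ear, is to look at the spectrum of just the harsh syllable. Below is only a sketch with a synthetic signal; the function name and the 2-8 kHz "presence" search band are my own choices, not an established tool:

```python
import numpy as np

def harshest_frequency(samples, sr, lo=2000.0, hi=8000.0):
    """Return the frequency (Hz) with the most energy in the
    band [lo, hi] -- a rough starting point for a narrow EQ cut
    or for tuning a band-limited de-esser."""
    windowed = samples * np.hanning(len(samples))  # reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(samples), 1.0 / sr)
    band = (freqs >= lo) & (freqs <= hi)
    return freqs[band][np.argmax(spectrum[band])]

# Synthetic example: a 440 Hz tone plus a harsh 3.2 kHz resonance.
sr = 44100
t = np.arange(sr) / sr
clip = np.sin(2 * np.pi * 440 * t) + 0.4 * np.sin(2 * np.pi * 3200 * t)
print(f"harsh peak near {harshest_frequency(clip, sr):.0f} Hz")
```

Since a de-esser is essentially a compressor keyed to a frequency band, pointing one at a band found this way is mechanically no different from de-essing; whether that counts as "de-essing" is partly terminology.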
QUESTION 3
This question is kinda related to the first one: I'm using Sonar 8.5 PE with Reason 4.0. I record the "analog instruments" I have (such as guitar, acoustic guitars, and vocals) in Sonar, and then, via ReWire, I experiment in Reason with instruments I don't have.
Then, when I think the result is satisfactory, I export the Reason tracks to Sonar to do the mixdown.
But most of the time, even before I export the Reason tracks to Sonar, I end up having to clone some Sonar tracks because their levels are too low compared with the Reason tracks.
So I sometimes have three copies of the same vocal, and each track has its own EQ, compressor, and other plugins, whether with the same or different settings. This is very CPU-intensive.
I wonder if this happens to anyone else, and how they deal with it. This question is about the results each method gives.
Do they clone the tracks, bounce them to a new track without any plugins or pan/volume settings (everything at 0 dB and center-panned), and only then add effects and volume/pan treatment to the single bounced track?
Do they use Process > Audio > Gain to even out the level of the single low-level track?
Or would it be risky to treat each track individually (you know, with its own effects and so on) and then, to save CPU, bounce the treated tracks to a single track?
Is there any other way to deal with this?
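One detail worth noting about the cloning trick: playing n identical copies just sums the same signal with itself, which is a fixed amplitude multiplication, so two copies give about +6 dB. A destructive gain change or the volume fader produces the same level change without the extra plugin instances. The arithmetic, as a quick sketch (the function name is my own):

```python
import math

def db_gain_from_copies(n):
    """Level increase when n identical copies of a signal are summed.
    Identical (fully correlated) signals add in amplitude, so the
    gain in dB is 20 * log10(n)."""
    return 20.0 * math.log10(n)

print(f"2 copies: +{db_gain_from_copies(2):.2f} dB")  # amplitude doubled
print(f"3 copies: +{db_gain_from_copies(3):.2f} dB")
```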
QUESTION 4
This question is about mono and stereo: Is there any problem with working with stereo and mono tracks in the same project?
Because after I export the Reason tracks to Sonar, they will be in stereo, while the Sonar tracks will be in mono.
I've heard that we should check whether the final mix works on both mono and stereo systems, although I really can't see the point, since 99% of the time we listen to music on stereo systems, whether a CD player, TV, car radio, iPod, etc. (but I might be wrong about this).
How should this be handled?
All tracks mono, or all tracks stereo?
Or just leave mono tracks in mono and stereo tracks in stereo in the mixdown?
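For context on why the mono check still matters: many playback paths (a phone's single speaker, some club and broadcast chains) sum left and right into one channel, and anything out of phase between the channels drops in level or disappears in that sum. A minimal sketch of the check with synthetic signals (the function and its normalization are my own, not a standard meter):

```python
import numpy as np

def mono_fold_loss_db(left, right):
    """RMS level change (dB) when a stereo signal is folded to mono.
    A large negative number means content is cancelling in the sum."""
    stereo_rms = np.sqrt(np.mean(left**2) + np.mean(right**2)) / np.sqrt(2)
    mono = (left + right) / 2.0
    mono_rms = np.sqrt(np.mean(mono**2))
    return 20.0 * np.log10(mono_rms / stereo_rms)

sr = 44100
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
quad = np.cos(2 * np.pi * 440 * t)   # same tone, 90 degrees out of phase

print(f"in phase:       {mono_fold_loss_db(tone, tone):+.1f} dB")
print(f"90 deg shifted: {mono_fold_loss_db(tone, quad):+.1f} dB")
```

Mixing mono and stereo tracks in one project is itself harmless; the sum is what needs checking.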
QUESTION 5
This question is more of a curiosity: How often do you guys check for phase inversion?
I'm really new at this, and I'm asking because, until now, I don't think I've ever had any problems with phase inversion or comb filtering. I suspect this is because I still have little experience and might be missing something.
How important is it to check for phase issues if, at first sight (or first listen, for that matter), you aren't picking up any sign of them?
And how do you guys do it: do you have some kind of routine for dealing with this?
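As far as a routine goes, one number worth watching is the normalized correlation between two mics (or between L and R): near +1 they reinforce, near 0 they are unrelated, and near -1 one of them is polarity-flipped or delayed by about half a cycle. A sketch of such a meter with synthetic signals (function and variable names are my own):

```python
import numpy as np

def phase_correlation(a, b):
    """Normalized correlation between two signals.
    +1: identical/in phase, 0: unrelated, -1: polarity-inverted."""
    denom = np.sqrt(np.sum(a**2) * np.sum(b**2))
    return float(np.sum(a * b) / denom) if denom else 0.0

sr = 44100
t = np.arange(sr) / sr
mic1 = np.sin(2 * np.pi * 220 * t)
mic2_flipped = -mic1              # e.g. a mis-wired cable (inverted polarity)
mic2_delayed = np.roll(mic1, 25)  # a later arrival at a more distant mic

print(f"flipped polarity: {phase_correlation(mic1, mic2_flipped):+.2f}")
print(f"delayed copy:     {phase_correlation(mic1, mic2_delayed):+.2f}")
```

Two-mic sources (like the acoustic guitar in question 1) are the usual place to run this: flip the polarity of one mic, keep whichever setting sounds fuller, and spot-check the mix in mono.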
--------------------
I know that this is a really long thread, and I apologize for it.
I really am a newbie, and I don't know anyone outside the internet I can talk to about this stuff, hence the length and digressions of the questions.
I really hope you guys can help me.
Thanks.