getting things to 'sit right' in the mix without fighting each other


skiz

New member
Been messing around with this for a little while now, and I just wanted to know: what techniques can be used to put different instruments into the best frequency ranges so they don't start fighting each other and getting muddy?

I've been trying to use the C4 multiband compressor to do this, and I'm not sure the results are great.

I've been compressing the bass at around 100-150 Hz and dipping it from about 150-700 Hz to make space for the drums and guitars.

The snare and kick I'll generally compress between 150 and 250 Hz and dip at 100-150 and 300-700. The guitars I'm not really sure what to do with, as I don't want to cut away at their midrange, but I do dip them below 150 Hz.

Is this a valid technique for getting the instruments not to fight each other and create mud? I didn't want to do this with EQ, as I thought it'd sound too unnatural.
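
Just to make the ranges concrete, here's a rough sketch of the carve-up I've been trying, as described above (values in Hz; the names are just labels for illustration, and this is only a restatement of the approach being asked about, not a recommendation):

# Rough restatement of the band "carving" described above (values in Hz).
# Only an illustration of the approach in question, not an endorsement.
carve_plan = {
    "bass":       {"compress": (100, 150), "dips": [(150, 700)]},
    "kick_snare": {"compress": (150, 250), "dips": [(100, 150), (300, 700)]},
    "guitars":    {"compress": None,       "dips": [(0, 150)]},  # "dip below 150 Hz"
}

for instrument, bands in carve_plan.items():
    print(instrument, bands)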
 
Try the following:

* Mute tracks that don't contribute, or that mask others.

* Use tones and arrangements that complement one another, rather than compete with one another.


It all starts in the tracking and arrangement. If the tones and arrangements are muddying things up during that phase, then they will probably continue to do so in the mixing phase.

Mixing is not some magical time where you start sculpting the sound of everything and "make things fit." And certainly not with something like a multiband comp, which really should be used only for problematic tracks or for correcting minor things (vocal pops and de-essing ... or fret scraping on an acoustic guitar, for example).

Mixing is a time to take elements that already "sit right" - and refine them a bit to make them sit even righter. :D You're going about this backwards. You don't take stuff and "make it fit" after the fact. You have to track it that way.
 
I understand that, Daisy, but that's not the case with this recording at all.

I covered a song that was beautifully recorded, where all the instruments fit into their right space and there's no muddiness to the recording at all.

Now, I recorded the exact same guitar, bass, and drum parts, tuned to the same key and everything.

So all the arranging was identical, etc.

But I find the guitars fighting the kick and, at times, each other.

So I wanted to know if there's a way I can get them to sit back into their spots and work together rather than against each other.

I want to give each instrument its own frequency space so they don't fight each other.

So no, I don't think it has anything to do with the arranging and composing.
 
I want to give each instrument its own frequency space so they don't fight each other ... so no, I don't think it has anything to do with the arranging and composing.
It's not all about frequency when you mix. Also try using reverb and panning to get things to sit and work better together.
 
So no, I don't think it has anything to do with the arranging and composing.


Yes, it does. But that's only half of it.

The other half involves a lot of little things you're going to have to learn to be cognizant of.

The tone of the guitar, for example; what kind of guitar you're using ... which pickup setting ... what kind of amp you're using, how it's miked, what kind of strings you're using, strumming technique ... even what kind of pick (thin, medium, or extra thick).

Consider the tone of the bass; is it going direct, or is it miked? What kind of amp (if any)? How new are the bass strings, and what gauge? Pick or fingers? Are you using a newer, American-made Jazz or Precision ... or is it a Sears Catalog special? :D And how long has it been since it was set up?

Consider the drums; the acoustic environment will play a large role here. The wrong tracking environment can create a nightmare of early reflections or uneven bass response, which can result in massive low-mid frequency build-up that probably won't mix well with other tracks ... and EQ'ing won't necessarily correct it without creating other problems (cut a large enough hole out of the low mids, and listen to your snare drum thin out and your cymbals become harsh).

Consider how the kick drum and bass are interacting. Are they complementary, or are they creating a bunch of mud? Did you know that emphasizing the beater slap (by loosening the beater head) ... along with miking the bass up (as opposed to going DI) ... can go a long way towards making each occupy its own space in the mix? Probably not, but now you do. You're welcome. :D

Are the guitars and bass fighting one another? Again, another thing to consider is miking the bass up to give it enough midrange to stand out against the guitars. You can also split the signal and go DI on another track ... and bring that track up if you need more bottom end to help round out and complement the guitars. Speaking of which, you might also consider using a smaller amp, and switching pickups to cut some of the lowest frequencies from the guitar. Consider that, while the guitar may sound great and full on its own, anything below, say, 150 Hz is going to start fighting with the bass and kick. And again, the best way to avoid this is by adjusting pickups or turning the bass knob down a hair on the amp ... rather than EQ'ing after the fact.

These are all just examples of the type of mindset it takes to make great recordings that are easy to mix. If you're not prepared to be cognizant of how the tones used during tracking are going to affect one another within the context of a mix ... then you're not ready to make great recordings yet. :D Just keep EQ'ing and Multibanding stuff, and keep being unhappy with the results. It's really up to you.

 
Holy crap, Daisy... +major rep for the most informative response I've ever gotten on this board. I've never taken half that stuff into account. I have no idea how I would go about thinking about how to shelve the instruments before even starting to record. How do you personally go about doing it? I suppose there's always guess and check, right? Record the guitar and, if it's not right, go back and change it... Wow, I'm going to have to work on my patience level though, because all those little things will probably take me ages to sort out. I suppose after a lot of experience it starts becoming more natural? Thanks a lot for that post though - it's given me a ton to think about. Also made me kind of sad though... just hope I can get it right in time!
 
I have no idea how I would go about thinking about how to shelve the instruments before even starting to record. How do you personally go about doing it? I suppose there's always guess and check, right? Record the guitar and, if it's not right, go back and change it...


You got it.

I mostly record other people, but a great many of my sessions have gone something like this: the guitarist lays down a few bars just for sound checking. We listen back. Doesn't sound right -- stuff just isn't sitting. First thing we do is try switching pickups. Easiest thing -- nine times out of ten, a few switches and we're there. If not ... then I go over to the amp and start twiddling knobs.

Sounds better? Good. But still not quite there yet. "Say, can you try playing that same chord, but higher up on the fretboard?" Hmmm. I think we have our tone. Let's lay it down.
 
The quick and dirty rules.

1. Garbage in, garbage out. No way around it. If you record crap, it will always sound like crap. Yes, you may be able to fudge it enough or process the hell out of it, but it's just easier to record it right the first time and then tweak that good recording a little bit for a better mix.

2. Don't even think about any kind of compression until you know about transient levels. I'd give you a link to somewhere you can go to learn about that, but I can't seem to find anything good, simple, and informative.

3. Equalisers can only do so much... See rule one.

4. If you can hear the reverb, you're using too much - unless that's what you're going for. When adding reverb, go easy; you don't usually need much to achieve the desired results. A little bit goes a long way.

5. 3:1 (said as "3 to 1"). When recording with multiple mics, your second mic needs to be three times as far away from the source as the first. Example: you have two mics recording a guitar, one getting the close sound and another getting the further-away room sound. If your first mic is 1 ft away, then the second should be 3 ft away. If the first is 2 ft away, the second should be 6 ft away. This is to prevent phase cancellation, and it only applies when using more than one mic.

6. Stereo pans... You don't need a hard pan; just a little will work, unless a hard pan (full left or right) is the intended effect. If the instrument would be center stage live, it should be centered in the mix. Imagine the band playing in front of you, and tweak your pans left or right until everything sits about where it would sound like it was in that live situation.

7. The last rule. Everyone is full of opinions, and few know the facts. What is right for someone else may not always be right for you. Read every last bit you can, and everything will start to come into perspective.

This is a tip, not really a rule: when recording distorted electric guitars, take it easy on that distortion. It's much like reverb - a little bit goes a long way.

happy mixing.
 
5. 3:1 (said as "3 to 1"). When recording with multiple mics, your second mic needs to be three times as far away from the source as the first. Example: you have two mics recording a guitar, one getting the close sound and another getting the further-away room sound. If your first mic is 1 ft away, then the second should be 3 ft away. If the first is 2 ft away, the second should be 6 ft away. This is to prevent phase cancellation, and it only applies when using more than one mic.
Nope. This is a common misunderstanding and mis-statement of the 3:1 rule. It does not apply to multi-miking the same source at all; when multi-miking the same source, a 3:1 distance ratio between mics has no significance whatsoever. There will be phase to deal with - both good and bad - regardless of mic distances.

The 3:1 rule applies specifically - and ONLY - to the miking of multiple sources, and can be stated as, "the distance between two mics should, when possible, be at least three times the longest distance of the mics from their individual sources." That is, for example, if you have one mic a foot away from a sax and another 16" from a horn, you should try to have the two mics be at least 48" (4ft) away from each other (16x3=48).

The reason for this "rule" is to reduce phase issues from incidental and unwanted bleed from the other instruments by keeping the relative volume ratio large enough, and by keeping a shallower angle of incidence on the microphone direction that tends to put the bleed source nearer the dead areas of a cardioid mic.
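
If it helps to see the arithmetic, here's a tiny sketch (my own illustration, nothing official) that just checks a two-mic, two-source setup against the guideline as stated above:

# A minimal sketch of the 3:1 guideline described above: when miking two
# DIFFERENT sources, the spacing between the two mics should be at least
# three times the longer mic-to-source distance. Units just need to match.

def satisfies_three_to_one(mic_a_to_source, mic_b_to_source, mic_to_mic):
    """True if the mic spacing meets the 3:1 guideline."""
    longest = max(mic_a_to_source, mic_b_to_source)
    return mic_to_mic >= 3 * longest

# The sax/horn example from the post: 12" and 16" from their sources,
# so the two mics should be at least 16 x 3 = 48" apart.
print(satisfies_three_to_one(12, 16, 48))  # True
print(satisfies_three_to_one(12, 16, 30))  # False - expect more audible bleed/phase trouble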

G.
 
* Mute tracks that don't contribute, or that mask others.
not in all cases...MORE TRACKS MOREEEEE TRAAAAACKS! ARHAERHARHARHARHARHAR!

* Use tones and arrangements that complement one another, rather than compete with one another.

Yes... that is right on... the best way to mix anything is always in the arrangement. You can get things to sit fairly well without even all that much mixing if you have the right musical arrangement.
 
Nope. This is a common misunderstanding and mis-statement of the 3:1 rule. It does not apply to multi-miking the same source at all; when multi-miking the same source, a 3:1 distance ratio between mics has no significance whatsoever. There will be phase to deal with - both good and bad - regardless of mic distances.

Hmm... am I insane, or is that incorrect? Of course if you mic one single source with more than one mic there can be phase issues. Different distances mean slightly different times, and those time delays can cause phase cancellation. But then again, you could always phase-align it all; if it's the same source, it should be just fine =D
 
EQing will be a much easier fix than trying to use a multiband compressor on everything. I consider a multiband compressor more of a mastering tool anyway. As far as getting things to sit right in the mix, the best ways to deal with it are: ask whether it sounds better without one of the tracks, pan things away from each other, and EQ them to make it clearer.
 
Hmm... am I insane, or is that incorrect? Of course if you mic one single source with more than one mic there can be phase issues. Different distances mean slightly different times, and those time delays can cause phase cancellation. But then again, you could always phase-align it all; if it's the same source, it should be just fine =D
You are both insane and incorrect :D (just funnin'! ;) )

What people keep forgetting when it comes to phase in recording is that it's not a simple matter of time alignment like it is in the computer after the recording has been made. The problem in real life is that you are forced to ask the question, "which frequency or frequencies do I want to align, and which ones am I willing to sacrifice, or perhaps even want to cancel?" This is what's behind the "good phase and bad phase" concept.

When double-miking a single source, it's not so much a question of phase issues caused by time delay, especially if we're talking 1' and 3' distances or something like that, though that is a variable that plays into the result. It's an even bigger matter of matching to wavelength. Changing the difference in distance between the mics (think of it as sliding one mic back and forth) mainly serves to change the wavelengths, and therefore the frequencies, that will be captured in phase or out of phase. Picking a relative distance between the mics is, to a degree, selecting which frequencies will be more coherent and which ones will be more incoherent. And there is nothing special or magical about 3:1 in that regard.

Let's say - to keep it schematically simple - that mic A is placed 1/2 wavelength (at a certain frequency) from the source. 3:1 may look special there, because then mic B would be at 1 1/2 wavelengths and the phases would coincide (with only a delay of one cycle). But put it at more than 3:1 - at, say, 4:1 - and now the two mics will be a half-wavelength out of phase at that frequency, and there will be phase incoherence (not quite cancellation, only because there will be a difference in amplitude between the two; match the amplitude, and they will cancel at that frequency).

OTOH, for the frequency at which mic A sits 1/4 wavelength from the source, the 3:1 distance sucks wind, because the path to mic B is then a half-wavelength longer at that frequency and we have incoherence. Slide the second mic to a different ratio and that frequency can line back up while other frequencies fall out of phase instead - which removes all the magic from the 3:1 idea.

So the question we have to work out - even though we may not be thinking of it in that respect; we're just looking for what sounds good - is which frequencies do I want to emphasize and which ones do I want to de-emphasize, because I do have some small range of control over that via the mic distance when multi-miking a single source.
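
To put rough numbers on that, here's a quick sketch (my own illustration, with example distances assumed) of how the extra path length to the far mic turns into a frequency-dependent phase offset:

# Sketch: phase offset between two mics on the SAME source, caused by the
# extra path length to the far mic. The offset depends on frequency, which
# is why no single distance ratio is "magic" when double-miking one source.
# The distances below (1 ft and 3 ft) are just example values.

SPEED_OF_SOUND = 343.0  # metres per second, roughly, at room temperature

def phase_offset_degrees(freq_hz, near_mic_m, far_mic_m):
    """Phase difference at freq_hz due to the longer path to the far mic."""
    delay_s = (far_mic_m - near_mic_m) / SPEED_OF_SOUND
    return (360.0 * freq_hz * delay_s) % 360.0

# Mics at roughly 1 ft (0.305 m) and 3 ft (0.914 m) from the source:
for f in (100, 282, 565, 1000):
    print(f, round(phase_offset_degrees(f, 0.305, 0.914)))
# About 64 degrees at 100 Hz, about 180 degrees (worst case) near 282 Hz,
# back near 0 degrees (coherent) around 565 Hz, and so on up the spectrum.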

Of course there are other complications that enter in like room reflections, room bass modality, different microphone frequency responses, etc. that keep it from being just that simple, but the point remains valid.

G.
 
That all makes sense to me. My experience with using multiple mics is limited, so I can't exactly argue with someone who has more information to offer than I do.
 
Geez, we're going to cause him to never touch an EQ knob again. :)

You can do everything right - arrangement, gear, a nice room, etc. - and still need to EQ to clear up *some* inevitable mud, especially when you obviously don't have ideal conditions. But yes, there's a difference between fixing mud (if it's gotten to that point, there is something severely wrong with the transparency of your monitors, gear, performance, or room... don't pick on the helpless li'l mix) and making things "righter."

*You* may still need to use panning and reverb to reinforce the space of the mix, for one, regardless of how amazing the arrangement is and how particular the space, performance, and equipment are. No matter how "true to sound" your input chain and then your output is, you still have some work to do in the mix to make it match the space and feeling of being there, which may never truly be duplicated save for a change in one or more of the aforementioned.

So yes, while there are a lot of truisms in this thread, there are also plenty of generalizations to go along with them. ;)


That is clearly light years beyond covering a well-made song, as you're setting out to do.


Not everyone has access to a space that doesn't affect the sound, monitors that are completely flat and transparent, high-quality this or that, etc. So either you can't make music until you "are ready to make good recordings" - meaning specific gear, treated rooms, monitors, etc. - or you can roll up your sleeves and try to mitigate what is going on with your particular mix.


The reality here, and what Daisy is guiding you towards, is this: if you're standing in a room and you hear the band play live, why would you then try to fight those sounds with aggressive effects or EQing if you mean to capture and replicate that sound? There are reasons to do it intentionally if you're going for something special, but you're either grabbing the sound or not.


Just because you're hearing a dampened version of the sounds, or hearing them individually, doesn't mean you can simply overlap them in a mix and tweak knobs to get them friendly...

...one of the most telling things when I was first starting out was looking at people's sample mixes that sounded great, their source files, and how very, very minor their EQing was. When someone says to cut this or that to bring something else up, it is so slight that it really is a simple shifting of a layer in the mix at that frequency. Really, most of the time I couldn't tell much of a difference between the two, but you really can hear it across a wide variety of listening devices.

Sure, there is always room for notching out certain bits, and of course low and high passes where needed, but the less, the better. Some folks hardly EQ at all until the "mastering phase," which is really mixing the stereo track to make room for some extra loudness, or bringing a bunch of tracks to a similar feel/level.


It cannot be emphasized enough that you begin with the actual sounds made by your gear, which are embellished by their environment and then further embellished by the microphone, preamps, and hardware until they are recorded. If you had masterful control over placement, and over how each of those segments in your chain affected the sound, you could easily mitigate them.

An SM57 colors a snare differently than a Beyer 201. Do you just leave how they alter the sound of the snare alone? That's where you come to understand that microphone selection is one of the first choices (after where the instrument is placed in the room and how it is played) in terms of 'EQing your song.' Next would be the position of the mic itself. The SM57 boosts the high end. The Beyer adds a midrange bump. If you had the choice, you'd pick one or the other not only for their specific coloration, but also for what EQing choices they leave you in terms of bleed, isolation, and taming transients. Making use of the proximity effect of the Beyer, as well as its superior off-axis rejection, is why people use it on snare.
 