How do you all EQ your midrange stuff?

soundchaser59
Spending the last few hours toying with this idea of creating an "EQ niche" for each instrument in a mix got me wondering.

I started by running a kick drum through a low-pass filter while running my bass through a bass EQ pedal, and watching both of them at the same time through a spectrum analyzer. I put the kick spectrum on top and the bass spectrum just below it, so I could see whether they were peaking in the same EQ range or not. I got it to where the kick peaks in the 30-60 Hz range, while the bass peaks in the 50-200 Hz range. It seems like this has made it easier to hear the kick and the low bass notes at the same time.
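For anyone who wants to try the same comparison outside the DAW, here's a minimal Python sketch of that kick-vs-bass spectrum overlay, assuming two mono WAV files named kick.wav and bass.wav (hypothetical filenames) and numpy/scipy/matplotlib installed:

```python
# Minimal sketch: overlay the kick and bass spectra to see where each peaks.
# Assumes two mono WAV files, kick.wav and bass.wav (hypothetical names).
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile
from scipy.signal import welch

fs_k, kick = wavfile.read("kick.wav")
fs_b, bass = wavfile.read("bass.wav")

# Average each file's spectrum with Welch's method.
f_k, p_k = welch(kick.astype(float), fs=fs_k, nperseg=8192)
f_b, p_b = welch(bass.astype(float), fs=fs_b, nperseg=8192)

plt.semilogx(f_k, 10 * np.log10(p_k + 1e-12), label="kick")
plt.semilogx(f_b, 10 * np.log10(p_b + 1e-12), label="bass")
plt.xlim(20, 2000)
plt.xlabel("Frequency (Hz)")
plt.ylabel("Power (dB)")
plt.legend()
plt.show()

# The goal described above: kick peaking ~30-60 Hz, bass ~50-200 Hz.
print("kick peak: %.0f Hz" % f_k[np.argmax(p_k)])
print("bass peak: %.0f Hz" % f_b[np.argmax(p_b)])
```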

The article I read also said to use a fast compressor attack on the bass so that the bass note's attack would not hide or obscure the attack of the kick drum. This works because the kick drum basically has only its attack to make itself heard, since it has little or no audible decay or sustain in a mix. At the same time, the bass notes usually sustain, or at least have a much longer duration than the kick notes, so the bass can afford to sacrifice its initial attack (just 1 or 2 ms is more than enough) in order to make room for the kick to be heard. This makes the kick and bass play together better in the same EQ sandbox. All of this, so far, seems to be working.
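As a rough illustration of the fast-attack idea, here's a sketch of a simple feed-forward compressor where the attack time is around 1 ms, so the detector clamps the bass transient almost immediately. The threshold, ratio, and release numbers are made-up starting points, not values from the article:

```python
# Sketch of a fast-attack compressor: the short attack means the gain
# reduction kicks in on the bass transient, leaving room for the kick.
# Threshold/ratio/release values are illustrative only.
import numpy as np

def compress(x, fs, threshold_db=-20.0, ratio=4.0,
             attack_ms=1.0, release_ms=100.0):
    x = np.asarray(x, dtype=float)
    # One-pole smoothing coefficients for the level detector.
    a_att = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    a_rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    env = 0.0
    out = np.zeros_like(x)
    for n, s in enumerate(x):
        level = abs(s)
        # Fast attack: the envelope jumps up almost instantly on a transient.
        coeff = a_att if level > env else a_rel
        env = coeff * env + (1.0 - coeff) * level
        level_db = 20.0 * np.log10(max(env, 1e-9))
        over = level_db - threshold_db
        gain_db = -over * (1.0 - 1.0 / ratio) if over > 0 else 0.0
        out[n] = s * 10.0 ** (gain_db / 20.0)
    return out

fs = 44100
bass = np.random.randn(fs)  # stand-in signal; use your bass track here
squeezed = compress(bass, fs)
```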

So I got to wondering if I might shorten this learning curve some by asking all of you if you have any special tricks for EQ'ing the midrange instruments and voices so as to allow them each to be heard individually within a mix, rather than as the mass of midrange mud that some of my mixes turn into.

I figure most of the instruments that give a song its "identity" by creating chord progressions and melodies are midrange sounds that have most of their energy in the low-mid, mid, and upper-mid EQ ranges. In particular, I notice the guitars, the pianos/organs, and the vocals all tend to be most powerful in the same range, and hence they end up fighting with each other to be heard in my mixes. Most of the fundamentals of these three timbres fall within approximately the 100 to 500 Hz range. If the guitar is playing chords and the keys are playing chords while the voices are singing chords, the mix gets blurry there fairly quickly. However, listen to a handful of great classic rock albums and we know it can be and has been done. How did they do it?

Some of the reading suggests that an engineer need not EQ the fundamentals of an instrument or a voice in order to give that sound its own space in the mix. It is logical to assume that I cannot EQ those three parts all within the 100-500 Hz range or else they will never stop fighting each other. The reading suggests that there are ways to EQ each of these parts differently so that each one can seem to have its own space without fighting the other two.

So instead of spending countless hours experimenting with EQ and spectrum analyzers by trial and error, I figured I would try to shorten the learning curve some by asking all of you how you EQ guitars, keys, and voices in a mix in order to make each one sound like it has its own aural space.

Thanks in advance for any info!
 
So instead of spending countless hours experimenting with EQ and spectrum analyzers by trial and error, I figured I would try to shorten the learning curve some by asking all of you how you EQ guitars, keys, and voices in a mix in order to make each one sound like it has its own aural space.

I know this is gonna sound crazy, but I do it by LISTENING to the tracks. :D
 
Put them all up in a reasonably rough mix level-wise, in mono, and then EQ whatever needs it however it's needed.

Turn off the spectrum analyzer...
 
Yeah, I was going to answer with two words: "By ear".

Put the spectrum analyzer away, it's the wrong tool for this job. Instead, spend some time with your EQ and your ears. Learn what the different frequency ranges and key frequencies *actually sound like*, and that will be the most powerful shortcut you can find. It's amazing how fast things go and how good the results are when you can listen to a guitar track and actually tell just by listening that it's (just for a hypothetical example) too muddy in the low mids somewhere around 250-500, but that you want to save and polish that chug you're getting around 2k.

It's a lot faster than using your eyes, and the results - unless you just plain have a tin ear - will always be a HELL of a lot better. A good place to start is by running parametric sweeps on all your tracks. Not only will this help to sharpen the tracks, it'll also help you get your ear used to listening to and recognizing frequencies.
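If it helps to see what a parametric sweep is doing under the hood, here's a rough sketch using the standard RBJ peaking-EQ biquad: boost a narrow band hard, step it across the spectrum, and listen for where the track turns muddy or harsh. The center frequencies, gain, and Q below are just typical sweep settings, not prescriptions:

```python
# Sketch of a parametric sweep: a narrow +12 dB peaking boost (RBJ
# "audio EQ cookbook" biquad) stepped across common trouble frequencies.
# Audition the output at each step and note which centers sound ugly.
import numpy as np
from scipy.signal import lfilter

def peaking_eq(x, fs, f0, gain_db=12.0, q=4.0):
    a = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2.0 * q)
    b = np.array([1 + alpha * a, -2 * np.cos(w0), 1 - alpha * a])
    den = np.array([1 + alpha / a, -2 * np.cos(w0), 1 - alpha / a])
    return lfilter(b / den[0], den / den[0], x)

fs = 44100
track = np.random.randn(fs * 2)  # stand-in; use your guitar/vocal take
for f0 in [100, 160, 250, 400, 630, 1000, 1600, 2500, 4000]:
    boosted = peaking_eq(track, fs, f0)
    # ...play 'boosted' here; the nasty-sounding centers are cut candidates.
```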

G.
 
Last edited:
So instead of spending countless hours experimenting with EQ and spectrum analyzers by trial and error, I figured I would try to shorten the learning curve some by asking all of you how you EQ guitars, keys, and voices in a mix in order to make each one sound like it has its own aural space.

Thanks in advance for any info!
Honestly, step back from this and ask yourself: if it were so easy, wouldn't everybody do it? If there's one thing I've learned from doing this, it's that each engineer develops his/her own style and ability, based on what he/she considers to be a "good mix". You can't fake it, you can't short-cut it, there's nothing you can do but sit down and track/mix a shit ton of stuff. Do it over and over and over again, make detailed notes of what worked and what didn't work, and don't make the same mistakes twice!
 
A couple of points/tips:

  • Don't EQ with the instruments in solo; it's the interaction between the instruments that's critical. Always make EQ decisions while listening to the entire mix. A track can sound like crap in solo but may be perfect when you hear it in the mix.

  • Know what frequency range the track should contribute to the mix. For example, you may want to filter out some of the extreme highs in a distorted electric guitar to make room for cymbals, or cut some of the extreme lows in a kick or bass to let them fit better. High-pass and low-pass filters are your friend (see the sketch after this list).

  • Listen to the fundamental frequency(ies) of the tracks and listen for tracks that are competing for the same space. EQ is all about the art of compromise; use complementary EQ where needed.

  • Don't forget to use EQ for depth. Brilliance makes a track seem more "forward"; darker EQ moves a track back. Background vocals, for example, are usually slightly darker than the lead vocal.
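For the filtering tip in the list above, here's a minimal scipy sketch; the corner frequencies are illustrative starting points (e.g. high-pass a bass in the 30-40 Hz region, low-pass a distorted guitar somewhere around 8-10 kHz), not rules:

```python
# Sketch of the high-pass/low-pass tip: 2nd-order (12 dB/oct) Butterworth
# filters, run as second-order sections. Corner frequencies are examples.
import numpy as np
from scipy.signal import butter, sosfilt

fs = 44100
guitar = np.random.randn(fs)  # stand-in signals; use your own tracks
bass = np.random.randn(fs)

hpf = butter(2, 35.0, btype="highpass", fs=fs, output="sos")
lpf = butter(2, 9000.0, btype="lowpass", fs=fs, output="sos")

bass_cleaned = sosfilt(hpf, bass)      # cut sub-fundamental rumble
guitar_cleaned = sosfilt(lpf, guitar)  # tame fizz that fights the cymbals
```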
 
All great tips here. I'd only add that the best way to get sounds to work together is at the source. I know this always sounds like a cop-out, but it's absolutely true, particularly in the midrange. If you can play with guitar amp settings, for example, until the guitar pops, it will always work better than EQ.

The spectrum analyzer thing can be used to get your bearings a little in the bass/subbass range since the fundamentals are visible, but it's really not very useful in general. By the time you get to the midrange there are so many harmonics that it becomes useless.

I think we all fight with this stuff constantly, so just keep practicing.
 
I just spent several hours learning about the blessings of the high pass filter.

I needed the visual of the analyzer to see all the low-freq "rumble" that was going on in these tracks. I had no idea there would be stuff going on below the fundamental. Hence the new love affair with the high-pass filter!! :D

Yeah, I realized soon after that it was useless to use the analyzer to try and figure out midrange and upper-mid curves for the rhythm instruments.

I also had a hard time trying to figure out how to use the canned plugs to do de-essing. It seems nobody likes including a 4-band parametric EQ as a standard plug with the software. But I was surprised how attenuating the "ssssssss" freqs (mostly between 5k and 9k?) did not really change the overall sound of the vocal track that much. It seemed to remove some of the "air" but not really any of the vocal warmth.
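For anyone stuck with the same plugin shortage, here's a rough sketch of the de-essing idea described above: key a gain reduction off the sibilance band (roughly 5-9 kHz, per the post) and duck the vocal briefly when that band gets hot. All the numbers are illustrative:

```python
# Rough sketch of a keyed (broadband) de-esser: measure level in the
# "ssss" band only, and turn the whole vocal down when it gets hot.
# Band edges, threshold, ratio, and block size are illustrative.
import numpy as np
from scipy.signal import butter, sosfilt

def deess(vocal, fs, lo=5000.0, hi=9000.0, threshold_db=-30.0, ratio=4.0):
    vocal = np.asarray(vocal, dtype=float)
    band = butter(2, [lo, hi], btype="bandpass", fs=fs, output="sos")
    key = sosfilt(band, vocal)          # sidechain: just the sibilance energy
    out = np.copy(vocal)
    block = int(fs * 0.005)             # crude 5 ms block-based detector
    for i in range(0, len(vocal), block):
        rms = np.sqrt(np.mean(key[i:i + block] ** 2)) + 1e-9
        level_db = 20.0 * np.log10(rms)
        if level_db > threshold_db:
            over = level_db - threshold_db
            gain = 10.0 ** (-over * (1.0 - 1.0 / ratio) / 20.0)
            out[i:i + block] *= gain
    return out

fs = 44100
vocal = np.random.randn(fs)             # stand-in; use your vocal track
deessed = deess(vocal, fs)
```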

Good info, thanks everybody!
 
I needed the visual of the analyzer to see all the low-freq "rumble" that was going on in these tracks. I had no idea there would be stuff going on below the fundamental. Hence the new love affair with the high-pass filter!! :D .....

Yep!...the second I got hooked on looking for that nasty low-end stuff my recordings seem to generate, and cleared it out, that was a jump in magnitude for me!...I didn't realize how much power useless low end can suck out of a mix!!
 
....The article I read also said to use a fast compressor attack on the bass so that the bass note's attack would not hide or obscure the attack of the kick drum. This works because the kick drum basically has only its attack to make itself heard, since it has little or no audible decay or sustain in a mix. At the same time, the bass notes usually sustain, or at least have a much longer duration than the kick notes, so the bass can afford to sacrifice its initial attack ....

Good ideas and points to have in mind, but in the end you'll still want to do first what's needed for the individual situation at hand. It may well be that if the bass isn't locked in well with the kick, messing with the bass's envelope and sacrificing its punch could be a way out. In other cases, you're also losing the punch of the two working together. (What about when the style is 'bass forward', and notes where there's no kick?) It's always context-specific tools.

..Some of the reading suggests that an engineer need not EQ the fundamentals of an instrument or a voice in order to give that sound its own space in the mix.

A few things to consider..
-The first line of getting things to 'fit' (hopefully) is in the arrangement; second, tracking choices - granted, this would qualify as 'pre-EQ' (tone and miking choices), up front.

-Not all have to be in separate tone slots. Overlap is OK.
The denser the instrumentation, or the denser you want the mix, the more cleaning out is needed, or overlap left in.

-Go after the trouble hot spots first, identifying which instruments are in what shape - on their own and in combination.

Very often just going through a rotation, tackling the worst first, leaves the mix open to work without getting too clinical about the rest of it.

By the way.. while I feel most of this stuff is pretty much true.. don't get the idea that I'm coming on like 'it's easy'.. still struggling my way through too. :)
 
I figure most of the instruments that give a song its "identity" by creating chord progressions and melodies are midrange sounds that have most of their energy in the low-mid, mid, and upper-mid EQ ranges. In particular, I notice the guitars, the pianos/organs, and the vocals all tend to be most powerful in the same range, and hence they end up fighting with each other to be heard in my mixes. Most of the fundamentals of these three timbres fall within approximately the 100 to 500 Hz range. If the guitar is playing chords and the keys are playing chords while the voices are singing chords, the mix gets blurry there fairly quickly. However, listen to a handful of great classic rock albums and we know it can be and has been done. How did they do it?

A lot also has to do with arrangement; a good arrangement will translate into a good mix, or at least it will be easier to mix. I'd say that if you have two guitars, voice, piano, etc. performing the same chords, or the same notes, within very similar rhythm patterns, you have a bad arrangement. Instead of trying to fix it in the mix, why not try to make a better arrangement (for example: make the piano play an octave lower or higher, or try something like a fifth up, or change the rhythm pattern on the guitar so as to leave space for the piano and voice).

So when you hear your classic rock albums, listen to the arrangement of the song: are the piano/organ and guitar playing the same thing at the same time? What do the instruments do on choruses? Verses? Is there any space left for the voice? What's the interaction of voice vs. instruments regarding rhythm and melody (or the harmony of the instruments)? Is the bass guitar playing the same pattern as the kick drum? I try to ask myself these questions first; then, after I have a very good idea, I'll start EQ'ing stuff...
 
Get a Distressor. That thing brings anything you run through it to the front and adds a whole lot of punch and clarity. Then EQ. www.sterlingsoundstudios.com

I think there are a lot of easier, cheaper ways than buying a $2k compressor, and still it won't solve anything if the arrangement is wrong or the instruments are not EQ'ed correctly, or even worse, not recorded properly. By the way, don't you think sterlingsound is a little bit of a rip-off name from the famous Sterling Sound Studios in NYC?
 
-Not all have to be in separate tone slots. Overlap is OK.
The denser the instrumentation, or the denser you want the mix, the more cleaning out is needed, or overlap left in.

Well said. This is an addendum to my tip above about complementary EQ: use complementary EQ where you hear a frequency range becoming "lumpy" or resonant.
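As a rough illustration of what a complementary cut can look like, here's a sketch with the same RBJ peaking biquad used for sweeping earlier: cut one track a few dB right where the competing track's energy sits. The 300 Hz center, -4 dB depth, and Q are made-up starting points:

```python
# Sketch of a complementary EQ move: a gentle cut on the guitar right
# where the vocal's energy sits, so the two stop stacking up. The
# 300 Hz center, -4 dB depth, and Q=1.4 are illustrative, not rules.
import numpy as np
from scipy.signal import lfilter

def peaking_eq(x, fs, f0, gain_db, q=1.4):
    a = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2.0 * q)
    b = np.array([1 + alpha * a, -2 * np.cos(w0), 1 - alpha * a])
    den = np.array([1 + alpha / a, -2 * np.cos(w0), 1 - alpha / a])
    return lfilter(b / den[0], den / den[0], x)

fs = 44100
guitar = np.random.randn(fs)  # stand-in; use your own tracks
guitar_cut = peaking_eq(guitar, fs, f0=300.0, gain_db=-4.0)
```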

-Go after the trouble hot spots first, identifying which instruments are in what shape - on their own and in combination.

Very often just going through a rotation, tackling the worst first, leaves the mix open to work without getting too clinical about the rest of it.

The 80/20 rule once again.
 
What is that????
It seems like there are a million variations and definitions of the 80/20 rule, in just about every industry there is. I used to know it in tech support as "80% of your support calls come from 20% of your customers."

It sounds like Tom is using it in the same way, by saying something like "80% of your mix can be fixed by removing the biggest 20% of its problems."

Am I close, Tom, or am I part of the 20% ;)? (And no fair tailoring your answer based upon the results of tonight's game... :D )

G.
 
You gotta go by ear... who cares what the numbers say.

What sounds good? That's what matters.

Funny how a lot of home recording questions could be answered with that one specific answer. But I think it's important everyone understands that.
 
It seems like there are a million variations and definitions of the 80/20 rule, in just about every industry there is. I used to know it in tech support as "80% of your support calls come from 20% of your customers."

It sounds like Tom is using it in the same way, by saying something like "80% of your mix can be fixed by removing the biggest 20% of its problems."

Am I close, Tom, or am I part of the 20% ;)? (And no fair tailoring your answer based upon the results of tonight's game... :D )

G.

Yep, G. has it. Or 80% of your time is spent on fixing 20% of the problems.
 
I EQ it according to the sound that needs to be EQed. :P It varies.
 