EQ Question: Any Mid-Frequency Adjustment Messes Every Instrument Up

Mike Freze

New member
Hi! I've played around with EQ settings and here's what I seem to notice. Whether it's on a car stereo system (with basic low, mid, and high adjustments), my amp (same thing), or my computer software program (more specific editing capabilities, parametric settings, Q, etc.), it all seems to boil down to one thing.

It always seems that if I increase certain frequencies (or a band of them) in the low to low-mid area, super: I get that bottom-boost adjustment and it sounds great. When I need more high frequencies or a bit of presence, raising the low or mid highs a bit really works, and the low end is still there.

Anytime I adjust the mid frequencies, it ALWAYS messes up the lows I liked (cuts them out or lessens them) and the highs get screwed up, too. I find that if I leave the mids alone (centered, no gain or cut on my EQ graph), I'm fine when I adjust anything lower or higher.

Why is this? I thought mid EQ adjustments add "presence" or "forwardness" to a track. Maybe they do, but they mess up the lows and highs if you change them. Even on vocals (supposedly a mid-range type of audio signal), it ruins the overall sonic balance of the mic recording.

So is it advisable to ignore the mids whenever possible and concentrate on all frequencies to the left or right? What's the point of EQ adjustments in the mid area if it just screws up the lows and highs anyway??

Mike Freze
 
When you adjust EQ you can introduce phase issues and cancel some frequencies out. You might be able to time-shift them ever so slightly to get a better average. But the mids are right there in that trouble zone. IMO you'd be better off bringing down the lows and highs to bring out the mids, depending on sample rates, editing software, and other things.
 
When you adjust EQ you can introduce phase issues and cancel some frequencies out.

There have to be two versions of the signal with different eq settings for the phase changes to cause destructive or constructive interference beyond the desired boost or cut of the eq itself. One track with eq on it can't interfere with itself.
 
If the mid eq is affecting the highs and lows, then look at the filter width. Most plugin eqs are parametric, so you should be able to narrow the filter (increase the Q) so it doesn't affect as much of the spectrum.
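To put numbers on how Q controls spill into neighboring bands, here is a small stdlib-only sketch using the common "RBJ cookbook" peaking-filter formula (an assumption: the thread never says which filter shape the plugins use). It measures how much a +6 dB boost at 1 kHz leaks down to 100 Hz at various Q settings:

```python
import cmath, math

def peaking_eq_response(f, f0, gain_db, Q, fs=48000):
    """Magnitude (dB) of an RBJ-cookbook peaking biquad at frequency f."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * Q)
    b = [1 + alpha * A, -2 * math.cos(w0), 1 - alpha * A]   # numerator coeffs
    a = [1 + alpha / A, -2 * math.cos(w0), 1 - alpha / A]   # denominator coeffs
    z = cmath.exp(-1j * 2 * math.pi * f / fs)
    H = (b[0] + b[1] * z + b[2] * z * z) / (a[0] + a[1] * z + a[2] * z * z)
    return 20 * math.log10(abs(H))

# A +6 dB boost at 1 kHz: how much spills down to 100 Hz at each Q?
for Q in (0.5, 2.0, 8.0):
    print(f"Q={Q}: {peaking_eq_response(100, 1000, 6.0, Q):+.2f} dB at 100 Hz")
```

The wider the filter (lower Q), the more of that 1 kHz boost shows up two or three octaves away, which is exactly the "mids mess up my lows" symptom.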

Vocals are not "mid-range" audio signals, they are broad range. They can have important content from below 100Hz to above 10kHz.
 
There have to be two versions of the signal with different eq settings for the phase changes to cause destructive or constructive interference beyond the desired boost or cut of the eq itself. One track with eq on it can't interfere with itself.

I'm assuming that we're mixing multiple tracks with different EQ settings. But even if you're doing it to the final result, there can be conflicts between Left and Right channels if it's not mono and/or you do different EQ to each.

Or it could be something weird, like one of the monitors being phase-reversed, or other less-than-intuitive things.
 
Changing one area can affect our perception of the other areas. But might you be using too broad a bandwidth for the cut?

Also, and I don't know if this'll help: try taming what/where you have too much first.

I find I do far more trimming and shaping cuts on the bottom octaves than boosts. (Not sure why, or what the difference is there.)

Someone mentioned 'mix/fix/get the mid range right first then the bottom is easier'.
Can't say I've had this work out for me in practice- still sort of checking the idea out though 'cause it seems like an interesting tack. :)
 
I generally do the following edits, more or less in this order.

- record in stereo at 24/192

- de-interleave stereo track into left and right mono files.
- concat any multi-part files (field recorder auto breaks at 1GB / must keep below 4GB)
- noise removal if any
- EQ if any
- change speed (adjust clocks to match distinct and otherwise unsyncable devices / speed increase)
- resample to a more manageable rate at the highest quality available. (takes longer than HD video conversions)
- interleave back into a stereo track

- adjust gain and trim
- resample to deliverable formats (from 24/96 to 16/44.1 or 16/48)

- fiddle with EQ to find a better way and repeat from first EQ step if a better way is found.

- generate any optical discs or lower quality / sharable formats.
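As a back-of-envelope check on the 1 GB auto-break in the workflow above, here is a quick calculation (assumptions: packed 24-bit stereo WAV at 192 kHz, the break happening at 2^30 bytes, and ignoring the WAV header):

```python
# Rough estimate of how long a field recorder can run before its 1 GB
# auto-break, assuming packed 24-bit stereo WAV at 192 kHz.
SAMPLE_RATE = 192_000
BYTES_PER_SAMPLE = 3          # 24-bit
CHANNELS = 2

bytes_per_second = SAMPLE_RATE * BYTES_PER_SAMPLE * CHANNELS   # 1,152,000 B/s
seconds_per_gib = 2**30 / bytes_per_second

print(f"{bytes_per_second:,} B/s -> ~{seconds_per_gib / 60:.1f} min per 1 GiB file")
```

So at these settings each file part covers roughly a quarter hour, which is why the concatenation step comes up at all for longer takes.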

-----

Doing any noise removal first is key IMO. The filter depends on repeated sounds, and many edits add stuff to break up repeated sounds to make them less harsh, which renders most noise-removal algorithms impotent if you still need that step. My main pair of mics has an almost 30 dB noise floor (A-weighted).

I tend to want to do a low-end bump, but in reality I've found that doing a high-end bump helps bring out the lower part, because you can hear the attacks of notes better instead of just the sustains. And my mics are getting old, I guess. Plus that whole fake-fur thing that attenuates the high end.
 
I'm assuming that we're mixing multiple tracks with different EQ settings. But even if you're doing it to the final result, there can be conflicts between Left and Right channels if it's not mono and/or you do different EQ to each.

Or it could be something weird, like one of the monitors being phase-reversed, or other less-than-intuitive things.

Multiple tracks of the same thing copied with different eq applied would be a problem, but that's not how I understood the OP's problem. Eq on one track that has no copies will not result in cancellation although phase is affected. If someone is using different eq on left and right of a mix they have serious problems that should be corrected upstream of that stage. But even then the eq won't cause cancellation unless the two channels are summed to mono.
 
But even then the eq won't cause cancellation unless the two channels are summed to mono.

The two tracks ARE summed to mono at the point of reproduction. It depends on how far you are from the speakers, how far the speakers are from each other, the room, and other things. Suffice to say that if they don't do MONO well, then they will always sound better on headphones than on speakers. But not if you push the mono button on your headphone preamp.
 
The two tracks ARE summed to mono at the point of reproduction.

Not really. Even though the left ear hears the right speaker and vice versa, our binaural hearing uses various cues to allow us to hear the two speakers as separate. A measurement mic makes no distinction based on the direction sounds arrive from and just sums them all together at the one point, but human hearing is much more sophisticated and capable of filtering by direction.

Depending on how far you are from the speakers, how far the speakers are from each other, the room and other things.

Generally, yes. More specifically, what matters is the angle formed by the speakers with the head as the vertex. If the angle is too small the brain can't separate the sources and will perceive comb filtering if there are phase interactions.

Suffice to say that if they don't do MONO well, then they will always sound better on headphones that on speakers. But not if you push the mono button on your headphone preamp.

True.
 
If you record in stereo, i.e. with two mics, you'll always have phase issues, simply because each mic captures basically the same content at the same time. They might even be the same brand and model, and be very close to each other (17 cm). The difference in space is large enough in many cases to introduce issues related to phase. Our brains are equipped (within limits) to deal with it, and even expect it in some cases. Too much and you've blown the effect. Too often and you've annoyed more than entertained. Without it you can't pick out certain noises in noisy environments. With it, it's just noise if done poorly. There's a reason that certain techniques have names and have withstood the test of time (a historically short time, but still).
 
If you record in stereo, i.e. with two mics, you'll always have phase issues, simply because each mic captures basically the same content at the same time. They might even be the same brand and model, and be very close to each other (17 cm).

That's a change in subject, but I'll address it.

You won't always have phase issues. Using a coincident pair with the elements an inch apart, any significant interaction will be restricted to 17kHz and above. Putting the elements one above the other the distance between them is essentially zero in the horizontal plane, and phase interactions essentially go away.
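A rough way to see where mic spacing starts to matter: when two mics a given distance apart are summed, the first comb-filter null lands where the path difference equals half a wavelength. A quick sketch (assuming the speed of sound is about 343 m/s; the worst case is sound arriving in line with the spacing, and the null moves higher for shallower angles):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s at roughly 20 C (assumption)

def first_null_hz(spacing_m, angle_deg=90.0):
    """Frequency of the first comb-filter null when two mics spaced
    spacing_m apart are summed; worst case is sound arriving in line
    with the spacing (angle_deg=90)."""
    path_diff = spacing_m * math.sin(math.radians(angle_deg))
    return SPEED_OF_SOUND / (2 * path_diff)

for d in (0.01, 0.0254, 0.17):   # 1 cm, 1 inch, 17 cm
    print(f"{d * 100:>5.2f} cm spacing -> first null ~{first_null_hz(d):,.0f} Hz")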

But this all stems from the discussion about eq on one track. Eq does cause phase shifts, but unless there's a second copy of the same sound at about the same level with different eq, phase is not an audible problem.
 
In most cases it's not audible on most equipment to most people. But if you're doing several edits and EQ is just the first, the problem can be cumulative to the point of being audible to most people on most gear. Maybe not on the first listen, but if you're trying to make a hit that will endure repeated listening, then every little distraction matters.
 
In most cases it's not audible on most equipment to most people. But if you're doing several edits and EQ is just the first, the problem can be cumulative to the point of being audible to most people on most gear. Maybe not on the first listen, but if you're trying to make a hit that will endure repeated listening, then every little distraction matters.

I will repeat, if there is only one copy of the track the phase shifts from eq do not result in phase interference because there is nothing to interfere with. It's the interference, seen as comb filtering in a frequency response plot, that is the audible effect of phase shift. Paranoia about phase shift from eq is a waste of time when there are other far more audible issues to deal with.
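The comb-filtering point can be sketched with stdlib Python: summing a signal with an equal-level delayed copy of itself creates nulls at odd multiples of 1/(2 × delay), while a single track with nothing to interfere with has no such nulls. (The 1 ms delay below is an arbitrary illustration, roughly a 34 cm path difference.)

```python
import math, cmath

def summed_magnitude_db(f, delay_s):
    """Level (dB) of a tone at f Hz after summing the signal with an
    equal-level copy delayed by delay_s -- the classic comb filter."""
    H = 1 + cmath.exp(-2j * math.pi * f * delay_s)
    return 20 * math.log10(abs(H)) if abs(H) > 1e-12 else -math.inf

delay = 0.001  # 1 ms between the two copies
for f in (250, 500, 750, 1000, 1500):
    print(f"{f:>5} Hz: {summed_magnitude_db(f, delay):+7.2f} dB")
```

With a 1 ms delay the sum is +6 dB at 1 kHz but a complete null at 500 Hz and 1.5 kHz, which is why the interference, not the phase shift itself, is the audible part.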
 
Boulder is correct. Any phase shift of a single track that is not mixed with other copies of itself or other mics picking up the same source will not be noticeable. In fact, it would only be possible to detect it by comparing it to the original.

Phase is a relationship, one thing can't be out of phase by itself.
 
But how often are you going to use one mic for one source in isolation? Especially for drums? Little drummer boy for the church in December? Beyond that, you have at least a snare and a bass drum, probably being recorded in one take with more than one mic, in the same room. No, it's not a case of "if I mix the two I get complete phase reversal and a silent mix track." But it is a case of "this sounds odd mixed, but doesn't in the original or individually." And it can be annoying enough to bother converting the .WAV to .MID and synthesizing the drums in MIDI. Even if you have a decent set of drums.
 
...this sounds odd mixed, but doesn't in the original or individually.

Exactly. As you say, it's only when you combine the tracks that the phase becomes a problem. Individual tracks sound fine. Eq won't have phase problems on one track unless there's a copy with different eq.
 
Exactly. As you say, it's only when you combine the tracks that the phase becomes a problem. Individual tracks sound fine. Eq won't have phase problems on one track unless there's a copy with different eq.

But most gear, even a lowly camcorder records more than one audio track (stereo / 5.1 / ...).

I'm trying to recall anything other than a voice-over that I've recorded a cappella. Even when I do that, I use two mics so I can have two levels and better odds of getting it in one take. Not that I mix those tracks, but still.
 
The phase shift caused by EQ is not going to make anything sound that screwed up. It only shifts the phase around the frequency you are messing with, and even then it doesn't change it that much. You are probably experiencing the timing difference between the different mics being accentuated by the tonal difference you are applying with EQ. EQ couldn't cause that much broadband phase difference; otherwise it would always be totally useless.
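The "only shifts the phase around the frequency you are messing with" claim can be checked numerically. A stdlib-only sketch, again assuming the common RBJ-cookbook peaking shape (the thread doesn't specify the actual filter): the phase shift of a +6 dB boost at 1 kHz is zero at the center frequency and falls toward zero well above and below it.

```python
import cmath, math

def peaking_eq_phase_deg(f, f0, gain_db, Q, fs=48000):
    """Phase shift (degrees) of an RBJ-cookbook peaking biquad at frequency f."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * Q)
    z = cmath.exp(-1j * 2 * math.pi * f / fs)
    num = (1 + alpha * A) + (-2 * math.cos(w0)) * z + (1 - alpha * A) * z * z
    den = (1 + alpha / A) + (-2 * math.cos(w0)) * z + (1 - alpha / A) * z * z
    return math.degrees(cmath.phase(num / den))

# +6 dB boost at 1 kHz, Q=1: phase shift is localized around 1 kHz
for f in (100, 500, 1000, 2000, 10000):
    print(f"{f:>6} Hz: {peaking_eq_phase_deg(f, 1000, 6.0, 1.0):+6.1f} deg")
```

Nothing broadband happens: the shift is modest and concentrated near the band being adjusted, consistent with the point above.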
 
But most gear, even a lowly camcorder records more than one audio track (stereo / 5.1 / ...).

I'm trying to recall anything other than a voice-over that I've recorded a cappella. Even when I do that, I use two mics so I can have two levels and better odds of getting it in one take. Not that I mix those tracks, but still.

The stereo mics in a camcorder, or other X/Y mic, are so close there won't be any real difference in arrival time, so no phase problems if you mix them. If the two mics in your voice over setup are close enough together the same applies. If you put the mics at different distances and mix them you could easily have a phase problem.
 