spectral balance


tl32

New member
I wasn't sure if this should go in the newbie forum, but I haven't been able to find any in-depth posts on this subject, so I'm posting it here.

First, could we get a solid definition of spectral balance? I understand you want a nice curve and no rigid peaks. You want to add distinction and clarity to individual components by finding a space in the spectrum for each instrument to dominate.

Do I have this right?

Are there rules of thumb when it comes to where specific components typically sit in the spectrum? I know every mix is different. I just want to know about the process, and where you start etc.
 
It's like any other art project: you have a base and you build from there.
 
Is it more about listening and making changes by ear, or more about analyzing technically and adjusting according to some kind of model?
 
It's sound, use your ears - the most valuable piece of mixing advice you'll ever hear.
 
Yeah, you shouldn't really be looking for spectral balance, you should be listening for it. That said, a spectrum analyzer can be extremely helpful in confirming something that you think you are hearing and can help you figure out how to solve it.

If you feel like there is too much high end, but you can't put your finger on where or from what, a spectrum analyzer can help you pinpoint it. But you shouldn't look at a spectrum analyzer and say "hey, it looks like I have too much high end, I better bring it down."
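To make that concrete, here's a minimal sketch (Python with numpy/soundfile/matplotlib; the file name and FFT settings are placeholder assumptions, not anything from this thread) of the kind of averaged spectrum an analyzer shows, which you could use to confirm a suspected high-end buildup:

```python
# Minimal sketch: average a mix's magnitude spectrum over the whole
# file, the way an analyzer's "average" mode does, to help confirm a
# buildup you think you already hear. "mix.wav" is hypothetical.
import numpy as np
import soundfile as sf
import matplotlib.pyplot as plt

audio, sr = sf.read("mix.wav")          # hypothetical file
if audio.ndim > 1:
    audio = audio.mean(axis=1)          # fold stereo to mono for analysis

n_fft, hop = 4096, 2048
window = np.hanning(n_fft)
starts = range(0, len(audio) - n_fft, hop)

# Averaging keeps one loud moment from dominating the picture.
spec = np.zeros(n_fft // 2 + 1)
for i in starts:
    spec += np.abs(np.fft.rfft(audio[i:i + n_fft] * window))
spec /= max(len(starts), 1)

freqs = np.fft.rfftfreq(n_fft, 1 / sr)
plt.semilogx(freqs[1:], 20 * np.log10(spec[1:] + 1e-12))
plt.xlabel("Frequency (Hz)")
plt.ylabel("Magnitude (dB)")
plt.show()
```

The point above stands, though: the plot is for confirming what your ears already flagged, not for deciding what to change.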
 
I agree 100% with everyone who posted before me.

Spectral balance is already in the relationship between the vocals, drums, guitar, bass, piano, or whatever. Each instrument sits in a different frequency range. When instruments overlap, you lose some of that spectral balance, because their combined frequencies reinforce or cancel one another.

Use filters and subtractive or complementary EQing to create space.

Learn how to separate the bass and kick drum and your mixes will become much better.
 

I've heard that a lot, and it's never made sense to me. Most instruments and sounds take up pretty much the entire spectrum. Has anyone ever actually done this successfully? If so, can you show some examples and explain how you did it?
 
Ever done what successfully? I don't understand the question...
 
Tried to get instruments to not overlap. I think I just don't understand what the process is. What aspect of each instrument do you not want overlapping?

I understand as far as low end goes. I high-pass filter pretty much everything that is not a bass or kick drum, but other than that, I pretty much let everything be and adjust EQ to my taste, and don't ever really think about overlapping instruments.

I'm not saying it's a wrong thing to do, I'm saying that there is a fundamental idea behind it that I don't understand.
 
Well, I was mainly talking about the low to low-mid frequencies. But, for example, if I have a synth and guitar fighting over a frequency, then I'll cut a little here, boost a little there, and voila, now they fit. When you EQ, don't you listen to how what you're doing affects the way the instruments sound together? That's really all I'm saying here. My intention wasn't to tell the OP to put a piano at frequency range xhz-xxkHz and never let a guitar touch that range.
 
First, could we get a solid definition of spectral balance? I understand you want a nice curve and no rigid peaks.
Sorry, this is incorrect. There are no rules or guidelines or any other such thing as to how a spectral graph should look. There will be peaks, sometimes more, sometimes less, and the idea of a "nice curve" only works if you're talking about pink noise, which most of us don't like to listen to.

Spectral analysis cannot be used to determine if something is right or correct. It can only be used by trained eyes to help diagnose some very specific causes of problems one already hears.
You want to add distinction and clarity to individual components by finding a space in the spectrum for each instrument to dominate.
This is kind of closer to the truth, but still is somewhat oversimplified.

First, there are many ways to add distinction and clarity to individual components that have little to do with which part of the spectrum they dominate. Sometimes, in fact, one needs to cut bad stuff away from the section of the spectrum that an instrument dominates in order to increase clarity and definition.

Second, there may often be more than one instrument dominant in a specific part of the spectrum, but because of the song's arrangement or the instruments' locations in the stereo soundscape (or both), they cooperate with each other instead of fighting each other for their part in the mix.

That said, though, it is generally correct that you want to use the frequency spectrum available to you in a fairly democratic way by letting the instruments stake out their own sections and their own roles in the mix.

A huge chunk of this should be handled in the music composition and arrangement, and the tracking and mix should be designed to support this arrangement. Where there is no arrangement - e.g. a newb garage band where every single player wants to play full-bore lead through the entire song - the mix engineer has to have the discipline to know when less is more and when to use gain or mute automation to arrange the song on the fly to make spectral sense.
Are there rules of thumb when it comes to where specific components typically sit in the spectrum? I know every mix is different. I just want to know about the process, and where you start etc.
Like already said, you start with the music composition and arrangement. Just like Beethoven with an orchestra, the instrument lines should be written/performed to give each instrument its role spectrally as well as emotionally. Then you sound design the tracking to support that. For example, you might tune the guitar amps and select the guitars and guitar pickups to support the idea of a high guitar vs. a low guitar, or select bright and dark microphones/preamps for different instruments to support the roles in a similar way.

There are no rules as to which instruments to use for which roles. There are obvious trends, but there are so many exceptions to these trends that they cannot be considered rules. The obvious ones would generally be to have bass and kick drum dominate the low frequencies and cymbals and shakers and such up on the high end, but everything else - guitars, keyboards, horns, vocals, etc. - tends to fall somewhere in the middle and can be used for just about any spectral role you desire (of course a baritone sax will be lower than an alto sax and so forth).

Follow the arrangement. If the arrangement sucks or is non-existent, then use your ears to figure out what makes sense. If git 1 sounds like it has a slightly lower timbre than git 2, then you might EQ to reinforce that difference. If the left hand on the piano sounds really cool, you might make room for that more on the low end; but if that's boring or muddy and you have a great big tom, then you might let the left hand of the piano fade just a bit and emphasize the noodling on the right hand instead. And so forth. This is what is often called "letting the mix tell you what it wants".

G.
 
Tried to get instruments to not overlap. I think I just don't understand what the process is. What aspect of each instrument do you not want overlapping?

I think, broadly speaking, it comes down more to arrangement and trying to avoid having too many clashing elements in the same registers. Don't have the vocals, pianos, and guitars all on the same notes, etc., but place them in different octaves to support each other without fighting for the space, and then use EQ, panning, etc. to cut out unnecessary overlap where it occurs, to stress the fundamental of the instrument.

But if you are looking specifically at kick and bass guitar, it's a tougher one, because they are both inherently bass instruments. I have found that using sidechained compression to slightly duck the longer bass guitar notes while the kick is hitting its initial oomph is a good way to allow both to exist in the same space and be somewhat distinct. Personally, the way I set up my bass lines, there is rarely any action below 80-100 Hz, so I can shelve or roll off below that to allow some room for the kick to hit. I'm also experimenting with separating the action above, say, 250 Hz and panning the two slightly (very slightly) apart and off center, to see if that helps open up a space in the middle. The really low stuff I leave alone, since it has a tendency to be directionally hard to place anyway.
But then again, I'm only really starting to delve into this stuff more deeply, now that I have had the chance to mix some stuff and then step away from it and come back with fresh ears and say "Yikes."
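For anyone curious what that sidechain ducking amounts to, here's a rough sketch (Python with numpy/soundfile; the stem names, threshold, and times are illustrative guesses, not the poster's actual settings): follow the kick's level with an attack/release envelope and pull the bass down a few dB while the kick's transient hits.

```python
# Rough sketch of sidechain ducking: an envelope follower on the kick
# keys a gain reduction on the bass. All file names and numbers are
# hypothetical; mono stems assumed for simplicity.
import numpy as np
import soundfile as sf

kick, sr = sf.read("kick.wav")
bass, _ = sf.read("bass.wav")
if kick.ndim > 1:
    kick = kick.mean(axis=1)
if bass.ndim > 1:
    bass = bass.mean(axis=1)
n = min(len(kick), len(bass))
kick, bass = kick[:n], bass[:n]

# One-pole attack/release coefficients (fast attack, slower release).
a_att = np.exp(-1.0 / (sr * 0.001))     # ~1 ms attack
a_rel = np.exp(-1.0 / (sr * 0.080))     # ~80 ms release

env = np.zeros(n)
level = 0.0
for i in range(n):                      # slow in pure Python, fine for a sketch
    x = abs(kick[i])
    c = a_att if x > level else a_rel
    level = c * level + (1.0 - c) * x
    env[i] = level

# Duck the bass by up to ~6 dB while the kick envelope is hot.
threshold, max_duck_db = 0.1, 6.0
over = np.clip((env - threshold) / (1.0 - threshold), 0.0, 1.0)
gain = 10.0 ** (-(max_duck_db * over) / 20.0)

sf.write("bass_ducked.wav", bass * gain, sr)
```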
 
I've heard that a lot, and it's never made sense to me. Most instruments and sounds take up pretty much the entire spectrum. Has anyone ever actually done this successfully? If so, can you show some examples and explain how you did it?

For separation of bass and kick I usually start with high-pass filters for the low end. I tend to set the kick at around 60 Hz with a somewhat steep rolloff. I set the bass at 80 Hz with a steep rolloff. That way the kick is pretty much the only thing living in the 50 to 80 Hz range. I then push up 4k to get the kick click that I like. Less for mellow songs, more for hard rock.
At that point it's just a matter of making sure that the bass has enough mids to come through. You can get away with scooping some mids on the kick too if you need to. Usually I have plenty of separation at this point and I adjust the kick mids to taste.
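As a sketch of those moves (Python with scipy; the stem names are hypothetical, and the corner frequencies and gain are just the starting points described above, not rules): high-pass the kick around 60 Hz, high-pass the bass around 80 Hz with a steeper slope, then push the kick around 4 kHz for click.

```python
# Sketch of the kick/bass separation described above. The peaking
# filter is the standard audio-EQ-cookbook biquad; all settings are
# starting points, not rules.
import numpy as np
import soundfile as sf
from scipy.signal import butter, sosfilt, lfilter

def peaking_eq(x, sr, f0, gain_db, q=1.0):
    """Audio-EQ-cookbook peaking biquad (boost/cut centered on f0)."""
    a = 10.0 ** (gain_db / 40.0)
    w0 = 2 * np.pi * f0 / sr
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * a, -2 * np.cos(w0), 1 - alpha * a])
    den = np.array([1 + alpha / a, -2 * np.cos(w0), 1 - alpha / a])
    return lfilter(b / den[0], den / den[0], x, axis=0)

kick, sr = sf.read("kick.wav")   # hypothetical stems
bass, _ = sf.read("bass.wav")

# Kick: high-pass ~60 Hz, somewhat steep (4th order, 24 dB/oct).
kick = sosfilt(butter(4, 60, "highpass", fs=sr, output="sos"), kick, axis=0)
# Bass: high-pass ~80 Hz, steeper (8th order, 48 dB/oct), leaving
# 50-80 Hz mostly to the kick.
bass = sosfilt(butter(8, 80, "highpass", fs=sr, output="sos"), bass, axis=0)
# Kick: a few dB around 4 kHz for click (less for mellow songs).
kick = peaking_eq(kick, sr, 4000, gain_db=4.0, q=1.2)
```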


You're hitting on it. I try to avoid what I call frequency stacking: multiple instruments with a heavy presence in a certain range. 250 Hz is a real common area, because many people like it in many instruments to make them big. By itself this might be fine, but if your guitars, bass, and drums are all making themselves known there, it might get you a really ugly bottom end. 250 Hz may sound fine until you drop the vocals in and it's the straw that broke the camel's back, so to speak. That does not mean you necessarily have to pull 250 from the vocals. It may only sound muddy or resonant when several things chime in at once, all of them contributing. See what can most easily lose the offending frequency and how much it can lose. Maybe you will end up pulling a bit from several tracks.

Don't boost two guitars at 2.5k to make them stand out unless they are already as separated in the mix as you want them. Boost one there and another at 2k; or if they are both coming through fine but are blending with each other, maybe you cut a bit of 2.5k on one and boost a tad on the other. That's just a common sense thing. Please don't take any frequency I list as a steadfast rule. You may have way too much or too little 2.5k in your guitars; I don't have any clue.
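A quick sketch of that complementary move (same cookbook peaking biquad as the earlier sketch; the stems, frequencies, and gains are only examples, per the poster's own caveat):

```python
# Complementary EQ sketch: cut a couple of dB at 2.5 kHz on one guitar
# and nudge 2 kHz up on the other, so each owns its own presence
# region. File names and settings are hypothetical examples.
import numpy as np
import soundfile as sf
from scipy.signal import lfilter

def peaking_eq(x, sr, f0, gain_db, q=1.0):
    a = 10.0 ** (gain_db / 40.0)
    w0 = 2 * np.pi * f0 / sr
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * a, -2 * np.cos(w0), 1 - alpha * a])
    den = np.array([1 + alpha / a, -2 * np.cos(w0), 1 - alpha / a])
    return lfilter(b / den[0], den / den[0], x, axis=0)

gtr1, sr = sf.read("guitar1.wav")   # hypothetical stems
gtr2, _ = sf.read("guitar2.wav")

gtr1 = peaking_eq(gtr1, sr, 2500, gain_db=-2.0, q=2.0)  # make room at 2.5k
gtr2 = peaking_eq(gtr2, sr, 2000, gain_db=+1.5, q=2.0)  # fill in at 2k
```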

You are right that everything takes up most of the spectrum and that everything needs a home. That's why you focus on the fundamentals of each instrument and don't mix things while soloed unless you are looking for a specific problem. If you listen with the whole mix up, that will eliminate most issues. You are looking for how things sound in the mix, so mix that way. The other thing is just experience: knowing what frequencies make what sound when there is too much or too little (boomy, honky, dull, sparkly, etc.).

Hope somewhere in that ramble you learn something ;)

F.S.
 
I don't think Bozmillar was actually asking for advice on this...

I think Boz had an issue with my first post.

It's interesting to hear about some other approaches to mixing though.
 
I actually was looking for advice. I understand your feelings though, since I do try to spark debate a lot, but I wasn't this time.

I feel like the replies were actually exactly what I was looking for. Thanks.
 
I'll be honest...I've never adjusted EQ of individual tracks based specifically on what EQ range other tracks are occupying, thereby trying to give each track a specific "slice" of the bandwidth.

What I do is listen to the total mix...and if the sum of instruments is say...real heavy in the 300-500 range...I will adjust ALL of them so that they, as a WHOLE, are not overloading that range...but I will not cut say…all the 300-380 of one instrument and cut all the 380-460 of another...etc...etc.
To me, when all those instruments play live...you don't shape their EQ in that manner, they all are allowed to play throughout their entire frequency range.
So at most, I would say you just need to adjust some "areas" of the total bandwidth to better accommodate the recording mediums and playback systems...but not to “reserve” specific bands for specific instruments/tracks.
I think the better way to achieve some separation is through panning…and then use EQ to subtly fine-tune, but without heavy cuts of any natural frequencies the given tracks/instruments have.

AFA manipulating the bandwidth through specific arrangement…yes, that is the better approach than EQ adjustments during the mix…though I will point out that it's not always better to arrange so that individual instruments "avoid" each other in certain frequency bands. Sometimes it's the summing of similar bands of different instruments with different timbres that actually works positively…and not always negatively.
So it’s really a song by song basis.
I’m not sure why some people have very specific EQ SOPs that seem to be decided upon in advance of the actual music and/or how it sounds when the individual tracks are mixed….things like, always cutting xxx on the Kick…or adding xxx on the guitar…etc…etc…???
 
I’m not sure why some people have very specific EQ SOPs that seem to be decided upon in advance of the actual music and/or how it sounds when the individual tracks are mixed….things like, always cutting xxx on the Kick…or adding xxx on the guitar…etc…etc…???

I don't know about in advance of the music, but you need to have an idea of the shape of the song and what you need to do to get there. If you're mixing for someone else and have never heard the piece before, then you're going to run it through faders-up to get an impression of the thing, then talk to the artist(s) about their vision and build a plan to achieve it. If you are the home-recording writer, performer, and engineer, then you should already have an idea of what the desired outcome is.

If I want the acoustic guitar to back away a little when the vocal comes in, then I'm going to low-pass some of the high end and drop the volume a little for that section. I don't know exactly how much until I'm actually mixing, but that's my general plan. If I have a strong walking bassline that I really want to emphasize in the bridge, then I don't want a lot of low-end mud from the acoustic going on at the same time; again, nothing wrong with knowing that ahead of time and knowing I'll high-pass the acoustic in that section (exact settings to be determined). I see nothing wrong with having an overall plan for the mix beforehand, with the flexibility to experiment for effect as the mix develops. And there's nothing to say my original plan will sound 100% right when I put it into the mix; I'll have to change and adapt.
Now, that is not the same as using exactly the same preset or settings on every track for every song. That's a foolish approach, but going into a mix with no idea of what you want the outcome to be and no idea what you are going to do is equally foolish IMO; that's just pissing into the wind and hoping you don't get wet.
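As a rough illustration of that section-based plan (Python with scipy; the stem, section times, corner frequency, and gain are placeholders, and in a DAW this would just be clip EQ and volume automation):

```python
# Sketch: during the section where the vocal enters, gently low-pass
# the acoustic and drop it ~2 dB. Times and settings are hypothetical;
# a real version would crossfade the edit points to avoid clicks.
import numpy as np
import soundfile as sf
from scipy.signal import butter, sosfilt

ac, sr = sf.read("acoustic.wav")              # hypothetical stem
start, end = int(12.0 * sr), int(40.0 * sr)   # vocal section, placeholder times

section = ac[start:end]
section = sosfilt(butter(2, 6000, "lowpass", fs=sr, output="sos"),
                  section, axis=0)            # shave some top end
section = section * (10.0 ** (-2.0 / 20.0))   # ~2 dB down
ac[start:end] = section

sf.write("acoustic_automated.wav", ac, sr)
```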

To me, when all those instruments play live...you don't shape their EQ in that manner, they all are allowed to play throughout their entire frequency range

That's true, but when listening to live instruments in, say, a bluegrass band, I don't listen through only one ear 3-6" from every instrument on the stage with every instrument having a similar level of volume/power. But that's how the mics generally capture sound in the home studio. So in these cases there is no sense of distance and the effect of distance on higher freqs, or resonance in a nice room that could reinforce or cut certain freqs that you would hear in a "natural setting," so EQing could be appropriate to re-insert that sense of space and three dimensions that is lost in close-mic'd instruments, which really don't sound much like they do in a real live setting.
 
I’m not sure why some people have very specific EQ SOPs that seem to be decided upon in advance of the actual music and/or how it sounds when the individual tracks are mixed….things like, always cutting xxx on the Kick…or adding xxx on the guitar…etc…etc…???


It should always be as needed.
Of course, if you always use a D112 on your kick drum, you are always EQing it the same way as a matter of course ;) The mic manufacturer has already taken into account that the vast majority of the time kick drums sound better with some pretty drastic EQing, and so has pretty much every other kick drum mic manufacturer.

F.S.
 
If I want the acoustic guitar to back away a little when the vocal comes in, then I'm going to low-pass some of the high end and drop the volume a little for that section.

Yeah...I know some people like doing EQ in that manner, changing it for different sections of a song, but I tend not to use that approach.
IOW...the overall EQ of an instrument or voice doesn't normally change as they are played or as people sing...so I don't like doing that to them during a mix.
I like to treat my mixes (for typical Pop/Rock/Country/etc) as though the music is being played by a real band. For "head music" (synth/electronic/trance)...the nature of the music is often based on more "artificial" sounds to begin with and often not something to be performed by a typical 5-6 piece (sometimes it can be), so in those cases, mixing decisions are more flexible and left to any/all interpretations.

...when listening to live instruments in, say, a bluegrass band, I don't listen through only one ear 3-6" from every instrument on the stage with every instrument having a similar level of volume/power. But that's how the mics generally capture sound in the home studio.

Yes and no.
I don't usually close mic…I like a little air in-between the source and mic.

So in these cases there is no sense of distance and the effect of distance on higher freqs....

Room ambience is easily added during mixing, as needed...so yes, you CAN create a large room sound, as when a band is playing live.

Again...it's a song by song basis, but I generally reach for overall EQ settings as the very last step of the mix. I'm more focused on setting the panning and level balance first, as I find you can get most of the way there before hitting the EQs...in most cases.
But I'm mostly working on my own stuff, so my pre-production and tracking are already aimed toward a "sound" (in most cases). It's not something I consider only at the mixing stage...but even so, I don't think in advance how much and which EQ I'm going to apply for this or that.
I just wait to see how it sounds after it's all working together and the panning and levels are set.

I guess everyone has their "method"....
 
It should always be as needed.
Of course, if you always use a D112 on your kick drum, you are always EQing it the same way as a matter of course ;) The mic manufacturer has already taken into account that the vast majority of the time kick drums sound better with some pretty drastic EQing, and so has pretty much every other kick drum mic manufacturer.

Why don't they just build that drastic EQ curve right into the mic...? :)

I agree there are some SOPs that get used over and over, from session to session...we all do that...I'm just saying that I try not to have the same EQ settings planned for things right from the start as some automatic SOP.
If I know a certain mic/instrument combination is going to have a certain sound...and that's the sound I really want...OK…
….but to just apply it in advance, as though that's always THE sound...
…mmmmm, not too often, as it doesn't always work out.

That's like that Har-Bal crap...where you copy EQ settings from one song and then apply them to a different song. :D

I zero all my EQs and then set them one by one for everything...per song...even if it's the same instrument, same mic, same overall recording setup...etc.
 