Mastering out the muddiness

  • Thread starter: fdbk (New member)
Hey all. I always seem to have this problem when using digital audio workstation (DAW) programs. I can get the low end of my songs sounding pretty good on headphones, but on speakers it always ends up muddy, loses crispness, etc. It's not just my speakers either... my friends all agree. Does anyone have some mastering recommendations I could try in order to reduce the muddy "thudding" low end? Thanks! Attached is something I made using Reason 4.0.

 
I have two suggestions: 1) Don't mix with your headphones; use monitors in an acoustically treated room. Seriously, this is where your mix is falling apart, because you aren't actually listening to how it sounds. Headphones give you a false representation of the mix, and you make the wrong adjustments to compensate. You want an acoustically treated room because an untreated room will do the same thing as headphones. The term "translation" means how your mix sounds on different sound systems: studio monitors, car stereo, home theatre, iPods, etc. If it sounds good on all other types of systems, it translates well.

2) Don't wait until the mastering stage to fix the muddiness. Get it sounding good at the mixing stage.
 
Chili has said it all, really.

You can do preliminary mixing and detailed editing on headphones . . . but EQing and final mixing are best done on speakers.
 
Ok - so I should edit on speakers, got it. Any tips on what causes muddiness, so I can prevent it? Or should I post this in the newbs section? I'm kind of new to digital recording... thanks :D
 
Muddiness is caused by a build-up of low frequencies. Take any instrument that is not a bass instrument and start cutting from, say, 400 Hz and down. You want the low end from the kick drum and bass guitar; you can lower this range on pretty much everything else, and your mixes will clean up a lot.
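The advice above is, in DSP terms, a high-pass filter on the non-bass tracks. Here's a minimal pure-Python sketch of a second-order high-pass using the well-known RBJ Audio EQ Cookbook coefficients; the 400 Hz cutoff and the test tones are just illustrative, and in practice you'd use your DAW's EQ rather than roll your own:

```python
import math

def highpass_biquad(cutoff_hz, fs, q=0.7071):
    """RBJ Audio EQ Cookbook 2nd-order high-pass coefficients."""
    w0 = 2 * math.pi * cutoff_hz / fs
    alpha = math.sin(w0) / (2 * q)
    cos_w0 = math.cos(w0)
    b0 = (1 + cos_w0) / 2
    b1 = -(1 + cos_w0)
    b2 = (1 + cos_w0) / 2
    a0 = 1 + alpha
    a1 = -2 * cos_w0
    a2 = 1 - alpha
    # Normalise so the feedback side effectively has a0 = 1
    return [c / a0 for c in (b0, b1, b2, a1, a2)]

def apply_biquad(coeffs, samples):
    """Direct-form I difference equation, sample by sample."""
    b0, b1, b2, a1, a2 = coeffs
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in samples:
        y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        x2, x1 = x1, x
        y2, y1 = y1, y
        out.append(y)
    return out

fs = 48000
coeffs = highpass_biquad(400, fs)  # cut everything below ~400 Hz

def tone(f):
    return [math.sin(2 * math.pi * f * n / fs) for n in range(fs)]

low = apply_biquad(coeffs, tone(100))    # in the "mud" region
high = apply_biquad(coeffs, tone(2000))  # upper mids, should pass through
# Look at steady-state peaks (skip the first half second of settling)
print(max(abs(s) for s in low[fs // 2:]))   # heavily attenuated
print(max(abs(s) for s in high[fs // 2:]))  # close to unity
```

The 100 Hz tone comes out an order of magnitude quieter while the 2 kHz tone is essentially untouched, which is exactly the "clean up everything that isn't kick or bass" move described above.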
 
+1 on Chili -

What to do? Stop using headphones, first and foremost. Otherwise it ain't gonna happen.
 
Muddiness in what? Is this a stereo mix, or separate tracks?

If it's only a stereo mix, this ain't good.

If it's separate tracks, figure out which track has the muddiness in it.

Whether it be drums, bass, or vocals, muddiness is usually found in the range of 250 Hz–800 Hz.

That's where you will find the offending sounds.
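If you want to confirm that 250–800 Hz build-up objectively rather than by ear, you can measure what fraction of a track's energy sits in that band. A rough pure-Python sketch (a naive DFT, so only usable on short clips; a real spectrum analyser or an FFT library is the practical tool):

```python
import cmath
import math

def band_energy_fraction(samples, fs, lo_hz, hi_hz):
    """Fraction of a real signal's energy inside [lo_hz, hi_hz].
    Naive DFT: fine for a short diagnostic clip, far too slow for real use."""
    n = len(samples)
    total = sum(x * x for x in samples)
    if total == 0:
        return 0.0
    band = 0.0
    for k in range(math.ceil(lo_hz * n / fs), int(hi_hz * n / fs) + 1):
        xk = sum(x * cmath.exp(-2j * cmath.pi * k * i / n)
                 for i, x in enumerate(samples))
        band += abs(xk) ** 2
    # Double the positive-frequency band to count its negative-frequency
    # mirror; Parseval gives the total spectral energy as n * sum(x^2).
    return 2 * band / (n * total)

fs, n = 8000, 2000  # short 0.25 s clip at a low sample rate to keep it fast
muddy = [math.sin(2 * math.pi * 300 * i / fs) for i in range(n)]   # mud-band tone
clean = [math.sin(2 * math.pi * 2000 * i / fs) for i in range(n)]  # upper-mid tone
ratio_muddy = band_energy_fraction(muddy, fs, 250, 800)
ratio_clean = band_energy_fraction(clean, fs, 250, 800)
print(round(ratio_muddy, 3), round(ratio_clean, 3))  # near 1.0 vs near 0.0
```

A track whose ratio is high compared with the rest of the mix is a good first suspect for the offending mud.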
 
It's good (essential) to get an understanding of the harmonic series. It really helps to take some basic trumpet or trombone lessons!

You've got your fundamental, then the second harmonic (an octave up) and the third harmonic (an octave plus a fifth up).

Muddiness is caused by too much second and third harmonic compared to the fundamental.
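If you like numbers, the interval of each harmonic above the fundamental falls out of a one-line formula, 12·log2(n) equal-tempered semitones. A quick sketch:

```python
import math

def harmonic_interval_semitones(n):
    """Interval of the nth harmonic above the fundamental (n = 1),
    in equal-tempered semitones: 12 * log2(n)."""
    return 12 * math.log2(n)

for n in range(1, 6):
    print(n, round(harmonic_interval_semitones(n), 2))
# n=2 lands exactly an octave up (12 semitones); n=3 lands near 19.02,
# an octave plus a (slightly wide) fifth; n=4 is two octaves; and so on.
```

So a harmonic series really is built from octaves and fifths first, which is why brass lessons make the idea so tangible.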
 
I dunno how exactly to describe the muddiness I'm experiencing. Basically, it's like the subwoofers on my speakers are rumbling way more than they should be. Maybe they're just bad speakers?
 
Ok - so I should edit on speakers, got it. Any tips on what causes muddiness, so I can prevent it? Or should I post this in the newbs section? I'm kind of new to digital recording... thanks :D
OK, this may come out harsh, but I am back from a b-day party and have finished a bottle of cognac with a buddy of mine, so I am a bit tipsy. Take that into consideration...

Dude, stop blaming it on digital. Your mix will likely sound even more muddy in analog :p

Plus, you never, ever fix muddiness issues or any "bad sound" issue in mastering.

You get your tracking right, and you take care of your EQ and sound-fitting in mixing. Mastering should never be about corrective surgery, although that's what it is being turned into for some odd reason.

Stop mixing on headphones. However, I can understand if you HAVE to mix on headphones... I am in a similar boat... whiny neighbors, wife, kid... issues. So you end up using the headphones 90% of the time and then things sound like shit.

If you HAVE to mix on headphones, get ones that are accurate enough, especially when it comes to the low end. Then learn the shit out of them by listening to as many CDs as you can. Even then, you will never get the bottom end right, you'll likely not get the midrange right either, and it's possible you won't get the high end right and will end up with muffled mixes.

So, eventually you will have to fire up the monitors and at least check your mixes on them. The problem is, if you listen to your headphones most of the time, you won't be learning your monitors as well as you should, so it's likely you'll start "fixing" things that you shouldn't and making poor decisions. Also, there is this misguided attitude that nearfield monitors don't need much room acoustic treatment, which is far from the truth. You need good acoustics. I used to have weird phasing and comb-filtering issues in the mid to high range due to reflections. You move your head slightly and the timbre changes... good luck getting the midrange right in that kind of environment.

So... bottom line, treat your room, try to spend some time with your monitors and learn them, and try not to mix on the headphones.
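On the comb-filtering point: a single reflection arriving behind the direct sound carves notches at predictable frequencies, which is why untreated reflections wreck the midrange. A small sketch of where the nulls land (the 1 m path difference is just an example, and equal direct/reflected levels is the worst-case assumption):

```python
def comb_null_frequencies(extra_path_m, count=4, speed_of_sound=343.0):
    """Null frequencies (Hz) from one reflection whose path is extra_path_m
    longer than the direct sound, assuming equal levels (worst case).
    Nulls fall at odd multiples of 1 / (2 * delay)."""
    delay_s = extra_path_m / speed_of_sound
    return [(2 * k + 1) / (2 * delay_s) for k in range(count)]

# Example: a desk or wall reflection travelling 1 m further than the direct path
nulls = comb_null_frequencies(1.0)
print([round(f, 1) for f in nulls])  # first null near 171.5 Hz, then odd multiples
```

Note how the first few nulls sit squarely in the low mids, right where mud and timbre judgments live; move your head and the path difference (and every null) shifts.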
 
I dunno how exactly to describe the muddiness I'm experiencing. Basically, it's like the subwoofers on my speakers are rumbling way more than they should be. Maybe they're just bad speakers?
Nope. They're just bad mixes ;)
 
You get your tracking right, and you take care of your EQ and sound-fitting in mixing. Mastering should never be about corrective surgery, although that's what it is being turned into for some odd reason.

I'll take this one step further and say you should not be fixing EQ problems or doing any sound-fitting in mixing.

The single most important use your monitors have is for the initial recording. If there is mud coming through your monitors when you're micing something up, don't even hit record. By the time you are mixing any EQ or tone changes should be artistic, not corrective.


noisewreck nailed the rest of it.
 
And to help ya with fundamental frequencies for the instruments etc...
Check out Southside Glen's Goods
http://www.independentrecording.net/irn/

Toward the bottom right you'll see the interactive frequency doo dad.

Study that for a bit and it'll help guide you on where you can do hi pass (lo cut) EQ on your tracks.

Muddiness is one of the things that kicked my ass for a while. A big part of it is just cutting out the low to lower-mid frequencies that aren't part of the fundamentals for that instrument or vocal.

It takes a little practice and patience. And monitors...LOL!
Headphones lie dude. :D
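Glen's interactive chart is the real resource; as a quick stand-in, lowest fundamentals come straight from the equal-temperament formula (A4 = 440 Hz). The instrument picks below are illustrative, not taken from his chart; a high-pass set a little below an instrument's lowest note is a sane starting point:

```python
import math

NOTE_INDEX = {"C": 0, "C#": 1, "D": 2, "D#": 3, "E": 4, "F": 5,
              "F#": 6, "G": 7, "G#": 8, "A": 9, "A#": 10, "B": 11}

def note_to_hz(name, octave):
    """Equal-temperament pitch via the MIDI note number, A4 = 440 Hz."""
    midi = 12 * (octave + 1) + NOTE_INDEX[name]
    return 440.0 * 2 ** ((midi - 69) / 12)

# Lowest fundamentals in standard tuning (illustrative examples)
for inst, (note, octave) in [("bass guitar", ("E", 1)),
                             ("acoustic guitar", ("E", 2))]:
    print(f"{inst}: lowest note {note}{octave} ~ {note_to_hz(note, octave):.1f} Hz")
# bass E1 is around 41 Hz; guitar E2 is around 82 Hz
```

So a high-pass around 70–80 Hz on a standard-tuned guitar track removes nothing but rumble and bleed, while the same filter on a bass track would eat its fundamentals.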
 
I'll take this one step further and say you should not be fixing EQ problems or doing any sound-fitting in mixing.

The single most important use your monitors have is for the initial recording. If there is mud coming through your monitors when you're micing something up, don't even hit record. By the time you are mixing any EQ or tone changes should be artistic, not corrective.


noisewreck nailed the rest of it.

That's a pretty interesting idea but I don't know if it's practically possible.

I can't change how my voice resonates within the confines of my head and chest without some kind of surgery. Sure, I can engage the filter on my SM7B to cut the low end, but then it's gone for good when I record, and if I want it later I can't have it back.

I want to mic for as good a core sound as I can get, and if I do a good job with that, the track will take EQ better than a poorly mic'd track.

A good case in point is an acoustic guitar track I'm using on a song I'm working on. I want a good, wide-spectrum, full sound when the guitar is solo; however, when the bass, electric guitar, and drums come in, I cut the EQ completely below 250 Hz and low-shelf a couple of dB between 250 and 400 Hz on that acoustic guitar track. On its own the acoustic sounds weird at that point, but in the mix it's not fighting for space and keeps a good, essential "strumminess" that showcases the acoustic aspect in the mix.
If I mic'd it way off-axis with EQ on the way in, it might not have any mud at the points where it's mixed with other parts, but I wouldn't be able to have the full sound when the guitar is solo, because that sound would simply not be present in the recording.
 
That's a pretty interesting idea but I don't know if it's practically possible.
Sure it is.

I can't change how my voice resonates within the confines of my head and chest without some kind of surgery,
But you can change the distance from the mic, the angle you sing into the mic, where the mic is placed in the room, what room the mic is in. All make drastic tone changes.
sure I can engage the filter on my SM7B to cut the low end but then it's gone for good when I record and if I want it later I can't have it back.
That's why producers get the big bucks. You have to make vital sonic decisions and live with them at the tracking stage. Know what the sound should be so that it fits without wrecking anything else, make that sound, record that sound and nothing else, and move on.

I want to mic for as good a core sound as I can get and if I do a good job with that the track will take the EQ better than a poorly miced track.

A good case in point is an acoustic guitar track I'm using on a song I'm working on. I want a good, wide-spectrum, full sound when the guitar is solo; however, when the bass, electric guitar, and drums come in, I cut the EQ completely below 250 Hz and low-shelf a couple of dB between 250 and 400 Hz on that acoustic guitar track. On its own the acoustic sounds weird at that point, but in the mix it's not fighting for space and keeps a good, essential "strumminess" that showcases the acoustic aspect in the mix.
So record it that way in the first place instead of using EQ to fix it in the mix later. Move the mic around. Twiddle the amp knobs. Change the pickup selector. Get that sound.
If I mic'd it way off-axis with EQ on the way in, it might not have any mud at the points where it's mixed with other parts, but I wouldn't be able to have the full sound when the guitar is solo, because that sound would simply not be present in the recording.
So you track the guitar one way for its backing tracks and a second way for its solo tracks. Two different sounds deserve two different tracking methods.
 
Sure it is.

But you can change the distance from the mic, the angle you sing into the mic, where the mic is placed in the room, what room the mic is in. All make drastic tone changes. That's why producers get the big bucks. You have to make vital sonic decisions and live with them at the tracking stage. Know what the sound should be so that it fits without wrecking anything else, make that sound, record that sound and nothing else, and move on.

So record it that way in the first place instead of using EQ to fix it in the mix later. Move the mic around. Twiddle the amp knobs. Change the pickup selector. Get that sound.
So you track the guitar one way for its backing tracks and a second way for its solo tracks. Two different sounds deserve two different tracking methods.

Those are all good points.
Either way you look at it, you're EQing the track. Your method, you EQ at the recording stage; my way, I EQ at the mixing stage. Neither way is the definitive right way.

Personal preference for me is to generally record the track in one go. For me it doesn't make sense to record two bars mic'd one way for when the acoustic is front and center leading the instruments, two more bars mic'd another way for when the drums come in, then mic'd yet another way for when the whole band comes in, and then differently mic'd again when the acoustic and electric are working together alone in the mix for a couple of bars.
For me the music doesn't flow as well as just playing the whole section through and then EQing the mix afterward. I find it too chopped-sounding if I have to record the part twenty different times twenty different ways and then glue it together after the fact.
Also, if I decide later to drop an instrument or change the orchestration, I've then got to re-record to get the EQ I want, versus just changing my EQ automation on a track that I like. That's why I suggest the non-destructive method.
 
For me it doesn't make sense to record two bars mic'd one way for when the acoustic is front and center leading the instruments, two more bars mic'd another way for when the drums come in, then mic'd yet another way for when the whole band comes in, and then differently mic'd again when the acoustic and electric are working together alone in the mix for a couple of bars.
For me the music doesn't flow as well as just playing the whole section through and then EQing the mix afterward. I find it too chopped-sounding if I have to record the part twenty different times twenty different ways and then glue it together after the fact.
Also, if I decide later to drop an instrument or change the orchestration, I've then got to re-record to get the EQ I want, versus just changing my EQ automation on a track that I like. That's why I suggest the non-destructive method.
This is where arrangement comes in. Yes, it doesn't make sense to change the tone that often and chop an instrument up. So carefully arrange the song in such a way as you don't need to do that. Make decisions and stick with them. What is driving your song? Is it the acoustic guitar? Is it the electric? The vocal? Is there a reason to change the "driver" at a point in the song? What is the best way to change "drivers" if you have to? Most times the best way is to simply leave it up to the dude playing the instrument. He hangs back when he needs to, digs in when he's the show.

What if an acoustic keeps driving after electric stuff comes in? If the acoustic is the focus, don't make it concede to the backing guys. Track the backing guys in such a way that they keep their place.

All about arrangement. All about a plan. All about making tough choices up front.
 
Chibi is being perhaps a touch too reductionist, but overall he's pretty much right. I'll use EQ to do a little bit of tweaking here or there, or occasionally for special effect, but by and large if you don't have something that sounds pretty much like a mix when you're done setting levels and panning then the problem isn't the mixing, it's the tracking.

I don't know if this is true or not, but I've heard it alleged that the last Tool album, 10,000 Days, was recorded entirely without EQ, and the engineer did everything he needed to with mic choice and positioning. It's worth thinking about.
 
My last couple of songs I tracked for my bro were mostly "done" in the tracking stage.

Mic placement was a big part but a bigger part was gain staging starting at the very first pre.
I set all my faders at unity and brought my trims up to around -18 to -15 when I tracked it.
Left a lot more headroom and was much easier to mix. Basically just tweaks.
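The -18 trim idea is easy to sanity-check with a dB conversion. A tiny sketch, treating the numbers as peak levels in dBFS (that reading, and the 0.9 example peak, are my assumptions, not the poster's exact setup):

```python
import math

def dbfs(linear_peak):
    """Level of a 0..1 full-scale peak value, in dBFS."""
    return 20 * math.log10(linear_peak)

def gain_to_target(linear_peak, target_dbfs=-18.0):
    """Linear gain that trims a measured peak to the target level."""
    return 10 ** (target_dbfs / 20) / linear_peak

peak = 0.9  # made-up example: a hot take peaking just under full scale
trim = gain_to_target(peak)
print(round(dbfs(peak), 1))         # about -0.9 dBFS before the trim
print(round(dbfs(peak * trim), 1))  # -18.0 dBFS after
```

Tracking everything around -18 to -15 leaves roughly 15-18 dB of headroom above each track before clipping, which is why the mix stage becomes "basically just tweaks".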
 