Question on Mixing Heavy Rhythms and the Stereo Spectrum

So, for instance, I mentioned that the Axe FX contributes a lack of clarity (among other things). Now, that is due to multiple things, but I'm going to cover a few aspects of it. The first one is that the timbre compared to the real thing is damaged by the digitization and FX processing when running the signal through it, which basically reduces all the nice qualities of the sound source (relative to the real thing) and its impact on the mix.

Translation: Real amps might sound better.


The other thing is that the Axe FX specifically cannot be configured to run at any sample rate other than 48 kHz, which means it limits the whole project's sample rate to 48 kHz. You can do various workarounds, but it puts a clear limitation on your platform, your sound and your workflow. Now, what that limitation means is what you make of it (I'll describe why later), but in the context of the issues that OP is having, it's definitely essential, but it's also essential in general, at least if you ask audiophiles like me.

Translation: Maybe try using a higher sample rate plug.

Now, the reason why the Axe FX issue might seem to be a minor issue, but is not, especially in this case, is due to this:

All of a sound source's frequencies bring a set of qualities to the mix as a whole. These are found in the timbre of each sound source, the product of how the signal was created/captured and processed (and also, of course, a natural property of the sound source). The less mix signal you assign to it, the less dominant that set of qualities will be on the mix as a whole.

In this case, OP is doing rock/metal mixes where one of the qualities is to have big/fat guitars that also contribute to a high-quality stereo image. That is not possible to achieve without assigning a certain amount of mix signal to it, in fact quite a lot of mix signal. Now, going back to what I explained, that each sound source brings a set of qualities to the mix: these are not in equal "doses" relative to each other; the doses sit on a logarithmic scale, because the underlying voltage RMS is logarithmic. In other words, the more dominant you make a sound source in the mix, the more dominant all its qualities will be on the mix as a whole, cumulatively more dominant the more of the mix signal it consumes.

That's an aha-moment. Why? Because it is something that is not commonly understood like that; it's understood to be something linear, but it's not. For this reason, it is important to understand what qualities each sound source brings to the table, because even small undesired qualities related to individual sound sources can become very dominant on the mix as a whole.

Translation: If the guitars are too loud in the mix, then turn them down...if they are too low, turn them up.
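Whatever one makes of the surrounding argument, the logarithmic point itself is standard and easy to demonstrate. A minimal Python sketch (the 20·log10 relationship between voltage ratio and decibels is textbook; everything else here is illustrative):

```python
import math

def ratio_to_db(voltage_ratio):
    """Convert a voltage (amplitude) ratio to decibels."""
    return 20.0 * math.log10(voltage_ratio)

def db_to_ratio(db):
    """Convert decibels back to a voltage (amplitude) ratio."""
    return 10.0 ** (db / 20.0)

# Doubling a track's voltage only adds about 6 dB...
print(round(ratio_to_db(2.0), 2))   # ~6.02
# ...while a 12 dB boost means roughly 4x the voltage:
print(round(db_to_ratio(12.0), 2))  # ~3.98
```

Equal-looking dB steps on a fader therefore correspond to multiplicative voltage changes, which is the non-linear behavior being described.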

What happens is that when OP adds the signal he/she wants in order to create that big rhythm guitar section and big mix sound, its negative qualities are brought along with it; of course, this is not entirely "blamed" on the natural limitations of the device but also on how it has been set up and recorded. For instance, you can record two separate guitar sounds on each channel that you hard pan L or R, or you can record a significantly inefficient stereo signal and then give it too much signal as well. It all adds up to the issues OP is facing.

For this reason, as soon as you have any issues with any mix, it's important to look at what the product consists of.

Translation: If the guitar rhythm tones don't sound right, you should record again with better guitar tones.


This would make a great episode for:

[attached image: science_of_stupid0nkop.jpg]

It's a show about ordinary people coming up with their abstract/layman's versions of "scientific solutions and explanations" to certain processes. :D
 
So, for instance, I mentioned that the Axe FX contributes a lack of clarity (among other things). Now, that is due to multiple things, but I'm going to cover a few aspects of it. The first one is that the timbre compared to the real thing is damaged by the digitization and FX processing when running the signal through it, which basically reduces all the nice qualities of the sound source (relative to the real thing) and its impact on the mix.
So you are saying that the pristine signal from the Duncan Distortion pickups is being ruined by digitization before it even gets distorted to death? How is it that the distorted guitar isn't further ruined by digitizing it into the DAW and/or onto the CD that it is eventually burned to? If digitization damages everything it touches, what are we even doing here?

The other thing is that the Axe FX specifically cannot be configured to run at any sample rate other than 48 kHz, which means it limits the whole project's sample rate to 48 kHz. You can do various workarounds, but it puts a clear limitation on your platform, your sound and your workflow. Now, what that limitation means is what you make of it (I'll describe why later), but in the context of the issues that OP is having, it's definitely essential, but it's also essential in general, at least if you ask audiophiles like me.
You can easily record the analog outputs of the Axe FX into the DAW at any sample rate you want. Of course, if your guitar has any significant signal past 20 kHz, which would be the only reason to record any higher than 48k, that might be part of the problem with the guitar sound. It might also be why the paint is falling off the walls.
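And even in the digital domain, a fixed 48 kHz source isn't a hard wall: sample-rate conversion is routine. Below is a toy linear-interpolation resampler in Python, purely to illustrate the rate-mapping idea; real converters use band-limited polyphase/sinc filtering, so don't mistake this for production-quality SRC.

```python
def resample_linear(samples, src_rate, dst_rate):
    """Naive sample-rate conversion by linear interpolation.

    Real SRC uses band-limited (polyphase/sinc) filtering; this toy
    version only shows how output positions map back into the source.
    """
    if src_rate == dst_rate:
        return list(samples)
    n_out = int(len(samples) * dst_rate / src_rate)
    out = []
    for i in range(n_out):
        pos = i * src_rate / dst_rate        # position in source samples
        j = int(pos)
        frac = pos - j
        a = samples[j]
        b = samples[min(j + 1, len(samples) - 1)]
        out.append(a + (b - a) * frac)       # linear interpolation
    return out

# One second of 48 kHz audio becomes one second of 44.1 kHz audio:
one_second = [0.0] * 48000
print(len(resample_linear(one_second, 48000, 44100)))  # 44100
```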

Now, the reason why the Axe FX issue might seem to be a minor issue, but is not, especially in this case, is due to this:

All of a sound source's frequencies bring a set of qualities to the mix as a whole. These are found in the timbre of each sound source, the product of how the signal was created/captured and processed (and also, of course, a natural property of the sound source). The less mix signal you assign to it, the less dominant that set of qualities will be on the mix as a whole.
Translation: The lower it is in the mix, the less of it there is in the mix.

In this case, OP is doing rock/metal mixes where one of the qualities is to have big/fat guitars that also contribute to a high-quality stereo image. That is not possible to achieve without assigning a certain amount of mix signal to it, in fact quite a lot of mix signal. Now, going back to what I explained, that each sound source brings a set of qualities to the mix: these are not in equal "doses" relative to each other; the doses sit on a logarithmic scale, because the underlying voltage RMS is logarithmic. In other words, the more dominant you make a sound source in the mix, the more dominant all its qualities will be on the mix as a whole, cumulatively more dominant the more of the mix signal it consumes.
Translation: The louder the guitars are in the mix, the more you hear them.

That's an aha-moment. Why? Because it is something that is not commonly understood like that; it's understood to be something linear, but it's not. For this reason, it is important to understand what qualities each sound source brings to the table, because even small undesired qualities related to individual sound sources can become very dominant on the mix as a whole.
Translation: If the guitars are kind of annoying sounding, the louder they are in the mix, the more annoying sounding the mix will be.

What happens is that when OP adds the signal he/she wants in order to create that big rhythm guitar section and big mix sound, its negative qualities are brought along with it; of course, this is not entirely "blamed" on the natural limitations of the device but also on how it has been set up and recorded. For instance, you can record two separate guitar sounds on each channel that you hard pan L or R, or you can record a significantly inefficient stereo signal and then give it too much signal as well. It all adds up to the issues OP is facing.
Translation: By the time you turn up the annoying sounding guitars loud enough to be big, the mix sounds annoying because the guitars that sound annoying are really loud in the mix.

For this reason, as soon as you have any issues with any mix, it's important to look at what the product consists of.
Translation: Because when you have an issue with a mix, you have to listen to it to see what you have in the mix that could be causing the issue.
 
I'm a pretty good mixer. I usually start off with a joke or two to lighten the mood. Then I bring up a current event to get the ball rolling.

Usually by 9 pm, I have a girl to take home. Well, not all the time, merely 95% of the time, or so.

Roofies and rape vans aren't really "taking a girl home".
 
Hmm. It might be that a point-by-point explanation can add some of the clarity that you feel is not there. I'll try to extract a few of my points and hopefully anchor this in something that makes sense from your point of view, because this is not something I want to make sense only from my point of view; the reason I'm sharing is really to spread that insight so that you might also find that it makes sense, and why it does, and hence maybe give it a try, or at least a chance, as something that can have a positive impact on mixes.

So, for instance, I mentioned that the Axe FX contributes a lack of clarity (among other things). Now, that is due to multiple things, but I'm going to cover a few aspects of it. The first one is that the timbre compared to the real thing is damaged by the digitization and FX processing when running the signal through it, which basically reduces all the nice qualities of the sound source (relative to the real thing) and its impact on the mix. The other thing is that the Axe FX specifically cannot be configured to run at any sample rate other than 48 kHz, which means it limits the whole project's sample rate to 48 kHz. You can do various workarounds, but it puts a clear limitation on your platform, your sound and your workflow. Now, what that limitation means is what you make of it (I'll describe why later), but in the context of the issues that OP is having, it's definitely essential, but it's also essential in general, at least if you ask audiophiles like me.

Now, the reason why the Axe FX issue might seem to be a minor issue, but is not, especially in this case, is due to this:

All of a sound source's frequencies bring a set of qualities to the mix as a whole. These are found in the timbre of each sound source, the product of how the signal was created/captured and processed (and also, of course, a natural property of the sound source). The less mix signal you assign to it, the less dominant that set of qualities will be on the mix as a whole.

In this case, OP is doing rock/metal mixes where one of the qualities is to have big/fat guitars that also contribute to a high-quality stereo image. That is not possible to achieve without assigning a certain amount of mix signal to it, in fact quite a lot of mix signal. Now, going back to what I explained, that each sound source brings a set of qualities to the mix: these are not in equal "doses" relative to each other; the doses sit on a logarithmic scale, because the underlying voltage RMS is logarithmic. In other words, the more dominant you make a sound source in the mix, the more dominant all its qualities will be on the mix as a whole, cumulatively more dominant the more of the mix signal it consumes.

That's an aha-moment. Why? Because it is something that is not commonly understood like that; it's understood to be something linear, but it's not. For this reason, it is important to understand what qualities each sound source brings to the table, because even small undesired qualities related to individual sound sources can become very dominant on the mix as a whole.

What happens is that when OP adds the signal he/she wants in order to create that big rhythm guitar section and big mix sound, its negative qualities are brought along with it; of course, this is not entirely "blamed" on the natural limitations of the device but also on how it has been set up and recorded. For instance, you can record two separate guitar sounds on each channel that you hard pan L or R, or you can record a significantly inefficient stereo signal and then give it too much signal as well. It all adds up to the issues OP is facing.

For this reason, as soon as you have any issues with any mix, it's important to look at what the product consists of.

I suppose this thread is so far derailed from its topic that it won't be presumptuous to add my A$0.02. Don't know if it's pure trolling, but you could have said it like this:

P1: I want you to understand my point of view

P2: The Axe FX sounds like crap compared to real amp micing because it uses DSP. It also limits the project sample rate to 48 kHz, which is a problem for me because I consider myself an audiophile.

P3: If the source sound of one channel in a mix sounds like crap, you will hear that crap in the mix. Guitars are loud in rock and metal, and Axe FX guitar sounds are crap. Because the dB scale is logarithmic, the crap sound of the guitar is not just louder, but exponentially louder. This will make the whole mix sound crappier, exponentially crappier. You have to train your ears to hear crap or it will end up in your mixes, and they'll sound crap.

P4: You could also mic an amped guitar in stereo and it could still sound like crap in the mix if you screw it up.

Last Sentence: If your mix sounds crap you have to look at the individual elements to find the crap.

Don't know if this helps, or is just trolling in its own right, but that is surely the long way round to say some pretty self-evident and obvious things. I don't endorse/disapprove of any of MW's points of view and have never been near an Axe FX as long as I've lived; I was simply paraphrasing. I'm not trying to be an ass about it to anyone; I've done enough of that IRL to bother with attempting it with strangers@interwebs. I just saw it all as a hilarious episode of miscommunication. And after my head exploded from 10 pages of trying to decipher some of what was said, I feel the need for, and think I've earned the right to, some payback :cursing:

Edit: LOL, just saw the other translations - sorry to add more dross - I was slow... but it is pretty hilarious
 
I'm a pretty good mixer. I usually start off with a joke or two to lighten the mood. Then I bring up a current event to get the ball rolling.

Usually by 9 pm, I have a girl to take home. Well, not all the time, merely 95% of the time, or so.

I knew this circus was missing something.....and then they send in the clown.

Perfect. :)
 
"Talk to me, you never talk to me anymore".... Pure cinemascope!

I see you're still stuck on that line......I knew it was gold when I wrote it. :)

Now....if you would just follow its sentiment, that would certainly bring it on home for everyone here.
 
I always read the comments first. Everyone has good suggestions. One is on the money, and that is that panning gives a false sense of space. Without a center speaker there is no isolation; the drums and bass just end up in both speakers, mixed into the guitars. Here's your real problem, and this is not a comment about your choice of music or genre: when you have this much distortion and so little dynamics, all this work is really a waste.

So let's deal with what the experts do. First, run each track through a parametric EQ. What you're looking for are the "bad" sounds that are irritating. Narrow the Q all the way, turn up the gain, and run a slow sweep from bottom to top, noting the offending frequencies. Now slowly widen the Q to see how far on either side the problem is. Isolate it and then apply a full cut at that frequency. Do that on each track and try a mix. If it sounds a little too sterile, add a little back in here and there until you are satisfied.

Now do a mix in mono. Get it right: focus on the bass and drum mix first. Reverse the process, looking for the good thud and boom of the bass and drums, and make sure they don't cancel each other out. (Cut the bass where the drum is best and the drum where the bass is best. Where they overlap, make an educated guess.) Now add the guitars, but don't put them in front. (Tell me the last time you saw all the pretty girls dancing to a guitar. Girls dance to the drums. Put them in front.) There's a lot of air in them, so the guitar will be just fine.

Now for panning. Leave the drums in mono (that's the way the audience hears them) and the bass as well. Bass frequencies are omnidirectional; they seem to come from everywhere, so you can never tell if they are left or right anyway. Now drop out the guitars and add any singer. Get the mix of the bass, drums and singer right before pulling the guitars up. Everyone always adds way too much guitar and destroys the song for the sound of metal. Remember, it is all about the song.

(A band should never be involved in the mix, even though they are paying for it, unless they are professional enough to listen for the whole song. Everyone always wants "more me," and they just waste their time and yours.) Good luck, I like your music.
Rod Norman
Engineer
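For readers who want to experiment with Rod's sweep technique outside a DAW: it needs a parametric (peaking) EQ, and one can be built in a few lines from the well-known RBJ audio-EQ-cookbook coefficients. A minimal sketch in Python (the frequency/gain/Q values in any call are just example settings, not recommendations):

```python
import math

def peaking_eq(samples, fs, f0, gain_db, q):
    """Apply an RBJ-cookbook peaking EQ (boost/cut centered at f0)."""
    a_lin = 10.0 ** (gain_db / 40.0)          # sqrt of linear peak gain
    w0 = 2.0 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2.0 * q)
    b0 = 1.0 + alpha * a_lin
    b1 = -2.0 * math.cos(w0)
    b2 = 1.0 - alpha * a_lin
    a0 = 1.0 + alpha / a_lin
    a1 = -2.0 * math.cos(w0)
    a2 = 1.0 - alpha / a_lin
    # Direct-form I biquad
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in samples:
        y = (b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2) / a0
        x2, x1 = x1, x
        y2, y1 = y1, y
        out.append(y)
    return out
```

Sweeping `f0` with a large positive `gain_db` and a high `q` is exactly the "boost and hunt" move described above; once the ugly spot is found, call it again with a negative `gain_db` to cut it.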

Hello all! I had been a member a long time ago, and finally joined back.
I wanted to know what you guys listen for, what you do, and what you use in order to create open sounding mixes where your rhythm guitars sound like they're pushed far out to the sides and the center of the spectrum left clear for kick, snare, and bass to punch right through in near perfect isolation. You can check my mix tests on the link in my signature.
I currently use an AxeFx II for guitars and bass into Cubase. On the more recent mixes I use two different rhythm tones hard panned left and right. I high- and low-pass the guitars, with minor EQ cuts around 3.5 kHz, maybe 800ish Hz and 250ish Hz, to sit better in the mix. I multiband-compress the low end so the palm mutes don't trigger my buss compressor when necessary. The rhythm bus goes into Slate VTM. I monitor through Adam A7Xs in a completely untreated bedroom :(

So, let's share some tips and tricks and get these mixes to open up wide! I don't like that I can't get the guitars out wide, but maybe I need to look somewhere else.

All comments appreciated.

Thanks!

Carlos.
 
One thing though: when you run a parametric with a narrow Q and the gain cranked up, pretty much every frequency it sweeps through will sound bad.
You need to kinda *know* in advance where the offending stuff is, and just focus there. Though of course, even the harmonics on either side of the "bad" frequency can often add to the total offending sound... so it's not always as simple as finding one frequency that you can slice out.

Not sure what you're talking about "without a center speaker" .....???
Drums and bass can be panned dead-center, even without a center speaker. They won't end up on the sides with the guitars.
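The dead-center phantom image works because a center-panned source is fed equally to both speakers. A common convention is the constant-power (sin/cos) pan law, which puts roughly -3 dB in each speaker at center; the sketch below is just an illustration of that convention, not any particular DAW's implementation:

```python
import math

def constant_power_pan(sample, pan):
    """Pan a mono sample with a constant-power (sin/cos) law.

    pan: -1.0 = hard left, 0.0 = center, +1.0 = hard right.
    Returns (left, right).
    """
    angle = (pan + 1.0) * math.pi / 4.0   # maps pan to 0..pi/2
    return sample * math.cos(angle), sample * math.sin(angle)

# Dead center: equal level in both speakers (~ -3 dB each), heard as a
# phantom image between them rather than "in the guitars" on the sides.
l, r = constant_power_pan(1.0, 0.0)
print(round(l, 4), round(r, 4))  # 0.7071 0.7071

# Total power stays constant across the whole pan sweep:
for p in (-1.0, -0.5, 0.0, 0.5, 1.0):
    l, r = constant_power_pan(1.0, p)
    assert abs(l * l + r * r - 1.0) < 1e-9
```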
 
I always read the comments first. Everyone has good suggestions. One is on the money, and that is that panning gives a false sense of space. Without a center speaker there is no isolation; the drums and bass just end up in both speakers, mixed into the guitars. Here's your real problem, and this is not a comment about your choice of music or genre: when you have this much distortion and so little dynamics, all this work is really a waste.

So let's deal with what the experts do. First, run each track through a parametric EQ. What you're looking for are the "bad" sounds that are irritating. Narrow the Q all the way, turn up the gain, and run a slow sweep from bottom to top, noting the offending frequencies. Now slowly widen the Q to see how far on either side the problem is. Isolate it and then apply a full cut at that frequency. Do that on each track and try a mix. If it sounds a little too sterile, add a little back in here and there until you are satisfied.

Now do a mix in mono. Get it right: focus on the bass and drum mix first. Reverse the process, looking for the good thud and boom of the bass and drums, and make sure they don't cancel each other out. (Cut the bass where the drum is best and the drum where the bass is best. Where they overlap, make an educated guess.) Now add the guitars, but don't put them in front. (Tell me the last time you saw all the pretty girls dancing to a guitar. Girls dance to the drums. Put them in front.) There's a lot of air in them, so the guitar will be just fine.

Now for panning. Leave the drums in mono (that's the way the audience hears them) and the bass as well. Bass frequencies are omnidirectional; they seem to come from everywhere, so you can never tell if they are left or right anyway. Now drop out the guitars and add any singer. Get the mix of the bass, drums and singer right before pulling the guitars up. Everyone always adds way too much guitar and destroys the song for the sound of metal. Remember, it is all about the song.

(A band should never be involved in the mix, even though they are paying for it, unless they are professional enough to listen for the whole song. Everyone always wants "more me," and they just waste their time and yours.) Good luck, I like your music.
Rod Norman
Engineer

Thanks Rod.
I use the sharp-Q technique on snares and room mics to eliminate ringing. I also use it on bass to reduce some piercing mid frequencies caused by the overdrive pedal, but haven't really done it on anything else. I haven't checked any mixes in mono, but I guess I could try this as you suggested.

As far as the posts by MW go, what I took away (aside from all the stuff that went way over my head) was the mid/side EQ and mid/side compression, with density meaning how much compression was applied to each.

I'm not sure what he meant by "how much mix signal" to put through the axe fx.

The axe fx is also my only solution right now. I can't justify the purchase of an amp and cab plus microphones/mic pre-amps. The axe is capable of a lot. I don't see how limiting me to 48K sample rate affects me negatively. I also don't get the "timbre" argument.

I know the reverbs and panning of the center guitars are an issue I need to address, as well as their volume and mid frequencies mud.

Thanks again.
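For anyone else lost on the mid/side part: M/S EQ and compression rest on a simple sum/difference transform of the stereo pair. A minimal encode/decode sketch in Python (the 0.5 scaling is one common convention; processors differ):

```python
def ms_encode(left, right):
    """Split a stereo pair into mid (sum) and side (difference) signals."""
    mid = [(l + r) * 0.5 for l, r in zip(left, right)]
    side = [(l - r) * 0.5 for l, r in zip(left, right)]
    return mid, side

def ms_decode(mid, side):
    """Reconstruct left/right from mid/side; exact inverse of ms_encode."""
    left = [m + s for m, s in zip(mid, side)]
    right = [m - s for m, s in zip(mid, side)]
    return left, right

# EQ or compress mid and side independently, then decode back to L/R.
left, right = [1.0, 0.5, -0.2], [1.0, -0.5, 0.4]
mid, side = ms_encode(left, right)
l2, r2 = ms_decode(mid, side)  # round-trips back to the original pair
```

Content that is identical in both channels (a center-panned guitar, for instance) lives entirely in the mid signal, which is why M/S processing can treat the center and the sides separately.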
 
Thanks Rod.
As far as the posts by MW go, what I took away (aside from all the stuff that went way over my head) was the mid/side EQ and mid/side compression, with density meaning how much compression was applied to each.

Yes, also how you dial it in, for instance a shorter release time will kind of create more holes in the compressed layer of frequencies allowing transients from other sound sources (also on the same sound source) to cut through more easily. Hence, in order to create more rhythm/groove you might need to lower the compressor's release time significantly. A common issue is that the center panned sound sources have too long release time dialed in on the compressors, which makes the sides suffer. A very important quality about the sides is in their rhythm quality. You want the sides to "communicate" with each other in as high quality as possible in terms of rhythm, usually you want that because of the kind of excitement it brings to the mix. Delay is typically also used to enhance this.
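Whatever one makes of the phrasing, release time really does govern how fast a compressor's gain recovers after a loud hit. A bare-bones peak compressor sketch in Python makes the effect visible; threshold, ratio, and time constants here are made-up illustrative values, not anyone's actual settings:

```python
import math

def compress(samples, fs, thresh_db=-20.0, ratio=4.0,
             attack_ms=5.0, release_ms=100.0):
    """Very simple peak compressor. A shorter release_ms lets the gain
    recover faster between hits, leaving more room for what follows."""
    atk = math.exp(-1.0 / (fs * attack_ms / 1000.0))
    rel = math.exp(-1.0 / (fs * release_ms / 1000.0))
    env = 0.0
    out = []
    for x in samples:
        level = abs(x)
        coeff = atk if level > env else rel
        env = coeff * env + (1.0 - coeff) * level      # envelope follower
        env_db = 20.0 * math.log10(max(env, 1e-12))
        over = max(0.0, env_db - thresh_db)
        gain_db = -over * (1.0 - 1.0 / ratio)          # gain reduction
        out.append(x * 10.0 ** (gain_db / 20.0))
    return out
```

With a short release the gain reduction "lets go" quickly after each transient; with a long release the ducking lingers over whatever comes next, which is the effect being described for center-panned sources sitting on long release times.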

Thanks Rod.

I'm not sure what he meant by "how much mix signal" to put through the axe fx.

Thanks again.

It's simply how much of the available mix signal consists of those particular guitar frequencies. Your converter's input and output stages might have their full scale at +24 dBu; a certain amount of that available mix signal is addressed/assigned by each track in the mix.
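The +24 dBu figure translates to voltage in the usual way, since dBu is referenced to 0.7746 V RMS. A quick, purely illustrative Python check:

```python
DBU_REF_VOLTS = 0.7746  # 0 dBu reference level (sqrt(0.6) V RMS)

def dbu_to_volts(dbu):
    """Convert a dBu level to RMS volts."""
    return DBU_REF_VOLTS * 10.0 ** (dbu / 20.0)

# A converter whose full scale sits at +24 dBu clips at about 12.3 V RMS:
print(round(dbu_to_volts(24.0), 2))  # ~12.28
# A track peaking 12 dB below full scale occupies ~3.1 V of that range:
print(round(dbu_to_volts(24.0 - 12.0), 2))  # ~3.08
```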

The axe fx is also my only solution right now. I can't justify the purchase of an amp and cab plus microphones/mic pre-amps. The axe is capable of a lot. I don't see how limiting me to 48K sample rate affects me negatively. I also don't get the "timbre" argument.

I know the reverbs and panning of the center guitars are an issue I need to address, as well as their volume and mid frequencies mud.

Thanks again.

In this kind of situation, when you have a limitation like the Axe FX in the mix, it's really the other qualities in the mix that are going to indirectly tell you what kind of issue the Axe FX really is; the style and your goals will also color this decision. I have an Axe FX II, so I know its true capability and potential, as well as how I use it (when it can be used, when it cannot be used, etc.); in other words, all I'm writing is a result of personal experience. Had it been a pop mix, where you would have had a number of other delicate sound qualities and your goal would maybe be not to make the guitars so dominant, then just lowering their dominance in the mix would be the solution. In your case you are almost building the whole mix around the guitars; in that case it's a little different. It can work, but as you see, you start having various mix quality issues, such as a muddy center, on your particular guitar/Axe configuration as soon as it reaches certain signal levels.

Try this: temporarily mute the guitar in the center as well as the bass, then lower the guitars in 0.1 dB steps until it "clicks" in terms of center clarity. Stop there, then unmute the center guitar and the bass and see what you've really ended up with. If the center is still not clear enough, pan the center guitar slightly to one side; try out which side feels best.

I know the reverbs and panning of the center guitars are an issue I need to address, as well as their volume and mid frequencies mud. Thanks again.

Great! I hope I've been of help, please post your new version of your mix where you have dealt with the issues in the ways proposed. :thumbs up:
 
I think the Axe FX is not the problem. A large percentage of all the heavy music coming out in the last year or two has one of those things in it somewhere, if not everywhere. Don't get me wrong, you can make them sound bad, but that is user error, not a failing of the unit.

I diagnosed the problem with the mixes early on. The guitar sound is fine, even if it's a little too processed. The trouble is the bass and all the rest of the instruments have the exact same tone dialed in.

The mix would sound huge if the bass had some low end to it and had a different midrange than the guitar (especially in the parts where the bass is being played in the same octave as the guitar).

There is no need for any fancy tricks, special processing , amp purchases, a higher sample rate, delays, mid/side processing, multi-band compression, parallel compression, etc...

Simply make the other instruments sound different than the guitar, which they should anyway, and the mix will sound huge just having those guitars panned wide.

If you do have a center guitar, mute it. It will add to the thickness, but it will take away from the stereo spread of the guitars, unless it is playing a different part than the main rhythm.
 
.... a shorter release time will kind of create more holes in the compressed layer of frequencies allowing transients from other sound sources (also on the same sound source) to cut through more easily.

"a shorter release time will kind of create more holes in the compressed layer" ........?

No doubt that release times make a difference....but once again, your terminology makes no real application sense.
Once again, you're using your own contrived phrases to explain something as only YOU understand it.

Maybe your net results are quite good (we can only guess, until we actually hear some)....but I think you should step away from "my writing is a result of personal experience" and study a bit how most audio pros would describe application processes.

Don't just try to pick up the "key word" lingo (like what/how Pensado says stuff on his videos).....instead, step back to the fundamental audio language and descriptions, and then I think everyone would better understand you. Some audio examples would certainly help.
 