Phasing--Let's Talk About This

  • Thread starter: crawdad
Middleman--while all you surmise may indeed be true, I think that frequency conflict is the prime suspect. If you've ever seen a chart of frequency response for common instruments, you know that there is a lot of overlap between things like guitars, pianos and bass, not to mention synths and other acoustic instruments. When the same frequencies build up in certain areas, the ear doesn't know what to focus on--it's a sort of trainwreck.

The answer is partly getting the sound right at the source and using the right mic placement for the situation. That helps. Still, there are going to be areas where two guitars and a piano are fighting for the same space--thus making for a muddy, murky sound. If you doubt this--just pull all the midrange stuff and hear the bass and drums alone. All of a sudden you hear timbres in the bass and drums that you didn't know were there!

That's where EQ comes in--and that's why I have been asking all these crazy questions! EQ can shape each sound so that it has its own spot in the mix without walking all over everything else.

My other rule of thumb is the old cliche that less is more. It's far easier to mix bass, drums, guitar, piano and a vocal than it is to mix all that with 8 more parts on top. The more there is, the less separation there tends to be.

I notice it most in the 100-300 Hz range--all that low-mid stuff. I think that is something that every engineer deals with--no matter how good the mics and pres are--at least if you are recording bands that lay down lots of tracks.

Just some thoughts.
 
Middleman said:

When I am working on multiple tracks, after about the 10th track I start to hear phasing, or maybe it's frequency conflict, I am not sure. Things just start to sound not as defined, for lack of a better phrase. Is this the result of too many EQs and too much compression on various tracks? Do poorly written or perhaps too many plugs tend to build up a kind of digital distortion?

Perhaps this is a function of the AD/DA conversions being overloaded, and it would improve with a more high-end sound card. Or is this the result of digital recording in general--all those 1s and 0s being pushed through software, plugins and converters?

It's a bit of 'all of the above'. The mix bus in software mixers is generally the weak link, and some software just doesn't mix as cleanly as others. Using a lot of compression, EQ and effects can definitely take the zip and sparkle out of your tracks.

When in doubt, turn off all your effects, listen to the tracks raw, and decide if you really need that compression or EQ on a track. If using the effect isn't noticeably better, then get rid of it.

When in doubt do without. (that was lame, I know)
 
All that has been said is pretty good, but I would add one more thing, Middleman. If you have that many instruments going, you need to be creative with your panning, which to my mind means you need to start with mono tracks. If you start layering a whole bunch of stereo tracks, you will just run into problems. The theory, when mixing stuff like that, is to give everything its own place in the mix, both in terms of panning and EQ. Where exactly you put things--well, that is the creative part of it.

Most importantly, keep practicing, and keep listening. This is a skill and it can only be developed with time, and hard work. If this stuff was easy, then it wouldn't be fun when you got it right.

Light

"Cowards can never be moral."
M.K. Gandhi
 
Thanks guys, great feedback.

I am pretty much on top of the frequency collision thing and can define each instrument's sonic space. I agree with Crawdad that the 100-300 zone is a bear and is where I spend a lot of time between tracks. Still, there are other harmonics that are just as difficult and that create the fuzziness in the 2K-and-up space. This is where clarity goes to hell. I am finding I have to pick and choose the tracks that will dominate the upper end, around the vocal. Cymbals, piano, strings, synths--only one or two can own the upper end, and the rest have to suffer, as I am finding. Or I can notch out distinct spaces for each, but this is not always optimal.

Texasroadkill's comments are key. If you can get the sound right prior to hitting record, you are better off with less EQ. Many others have said this before, and I am taking it more to heart.

Regarding software quality, I wish there was a site similar to the soundcard comparison site--can't remember its name now--that rated recording software on its output quality, similar to comparison reviews of mixers and other hardware. This has got to be one of the worst oversights of the magazine industry. Same sound card, same systems, record and playback the results--which one does a better job? I have been running down the Cakewalk path for almost 5 years but am beginning to think there may be better audio results in the same price range elsewhere.

Light, the panning thing is an area I have not explored enough. Vocals center, bass center, kick center--I have this down. I have not spent a lot of time with placement of other tracks, so I will give this a try. Also, I am moving away from recording stereo tracks; as you say, things get cloudy with that approach.
 
One other thing is the whole area of psychoacoustics, which is something I am still trying to study. For example, the ear gets familiar with a sound pretty quickly. If you have a guitar part, say, and it's mixed up for the first few bars and then turned down, the ear will still hear that part and lock onto it even if it's 3-6 dB quieter later on. And what about the effect that one part with a lot of high-frequency energy has on the perception of other parts?

I've heard tracks where there is a chunky, sizzly hi hat and I swear that it makes the guitars sound like they have more high end than they really do! Bells and certain ride cymbals do similar things. In other words, you are on the right track (pun intended!) when you pick and choose which instruments will be dominant. Not everything has to be super clear and bright. In fact, I've heard a lot of recordings where the only bright part was some percussion--but it gave me the perception that the whole recording was real clear.

I guess the general idea is that we listen to music as a whole, but when we mix, it's real easy to focus on one small part, as I'm sure you know. The trick is to be able to dive into the details but also come out and hear the mix as a single whole. This whole psychoacoustic thing is amazing to me and I know very little of it. Maybe there's a book someplace? I'd love to read that one!
 
Yes, absolutely. The apparent clarity in the final product comes from the one or two well-defined instruments in the upper frequencies. Sometimes this will come from a delay or reverb over-hyped in that region too. This gives the impression of an open, airy effect.

I maintain, however, that the integrity of the upper end is difficult to preserve in digital as multiple tracks build up. I am going to spend some time trying to understand this. Or I can slap a multiband on the stereo buss and just color it to sound good....ouch.
 
Does the direction of the microphones affect phasing issues? If they are pointing sort of towards each other, does it matter? Or is it just the physical distance between them, regardless of which direction they are pointing?

And with the XY configuration I've seen using 2 small-diaphragm condensers, it looks like they are almost touching. Is this exempt from the rule because of the special setup?
 
Yes, of course this matters;
for example, the bottom snare mic has reversed polarity relative to the top one.
 
Keep in mind that the EXACT SAME time delay will cause drastically DIFFERENT amounts of phase shift at different frequencies - for example, at 15 kHz, one cycle takes 66.6 microseconds to complete - so a time delay of 33.3 microseconds would put two identical 15k tones 180 degrees out of phase with each other, causing them to cancel. In terms of wavelength, that same 15k sound takes .903 inches to complete one cycle.

This means that two mic capsules placed .4515 inches apart, would completely cancel each other out at 15k, assuming they were pointed at the same source. In the case of the XY pair, these are used to get a stereo image and are NOT pointed the same way - the capsules are placed as close as possible to each other to minimise phasing problems, and it's normal to place the capsules OVER and UNDER each other but the SAME distance from the sound source to AVOID high freq cancellations - This because it's IMPOSSIBLE to place the capsules close enough together to be in the SAME SPACE, which is what it would take to avoid phase cancellations at high freq if you DIDN'T go the over/under route.

Even so, you still need to "cut and try" when using an XY pair, usually while listening in closed cans for the HF cancellation to minimise.

At 1500 Hz, it would take ten times the distance to cause the same problem--at 150 Hz, it would take 100 times the distance, or about 45 inches, to put two mics 180 degrees out of phase at 150 Hz.
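Steve's numbers can be double-checked in a few lines. This is just a sketch of the arithmetic above, assuming the usual ~1130 ft/s speed of sound; the function names are mine:

```python
SPEED_OF_SOUND_IN_PER_S = 1130.0 * 12.0  # ~13,560 inches per second

def phase_shift_deg(path_difference_in: float, freq_hz: float) -> float:
    """Phase shift between two mics whose distances to the source
    differ by path_difference_in inches, for a tone at freq_hz."""
    delay_s = path_difference_in / SPEED_OF_SOUND_IN_PER_S
    return 360.0 * freq_hz * delay_s

def cancel_distance_in(freq_hz: float) -> float:
    """Path difference (inches) that puts freq_hz 180 degrees out of
    phase -- half a wavelength."""
    return SPEED_OF_SOUND_IN_PER_S / freq_hz / 2.0

print(phase_shift_deg(0.4515, 15000))  # ~180 degrees: full cancellation at 15 kHz
print(cancel_distance_in(150))         # ~45 inches, matching the 150 Hz figure
```

Same math, just automated--handy for sanity-checking any capsule spacing against any frequency.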

This is why a finite amount of TIME delay has a much bigger effect on higher frequencies than it does on lows, especially since the amount of time delay in a circuit, unless it's INTENDED as a delay, is usually small enough to only affect higher frequencies.

One reason software mixers start to lose highs when multiple tracks are mixed, can be this minute (small, not 60 seconds) time delay between stereo tracks. If you figure the time delay of just one sample at 44.1 kHz, it comes out to 22.6 microseconds - this is enough to cause a phase shift of approximately 120 degrees in a 15 kHz signal, which can cause phase cancellations/stacking that is very noticeable. If the summing algorithm of the audio software isn't very carefully coded, this minute time delay is going to screw things up with multiple tracks - like I said, it doesn't take much of a time shift at those higher freqs to really blow it.
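The one-sample figure above is easy to verify. A two-line sketch, using the post's own 44.1 kHz sample rate and 15 kHz tone:

```python
sample_delay_s = 1 / 44100                # one sample period at 44.1 kHz
print(sample_delay_s * 1e6)               # ~22.7 microseconds

phase_deg = 360 * 15000 * sample_delay_s  # phase shift that delay causes at 15 kHz
print(phase_deg)                          # ~122 degrees
```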

Even in a hardware mixer, if the two busses have tiny time delay differences, anything summed to mono would also suffer phase problems - for example comb filtering, which can be caused by a single constant time delay in one channel. As the frequency of the audio signal changes, that one fixed time delay causes a different phase shift for every frequency you put through the circuit. This is what causes comb filtering - so called because the frequency response graph looks like a "comb", with the peaks and valleys of the graph looking like the teeth of the comb. The peaks are where this finite time delay causes that particular frequency to be IN PHASE in both channels and therefore ADDITIVE, and the valleys are where the frequency has changed so that the SAME DELAY causes a phase shift to OUT of phase, so the two signals CANCEL. In between, there are varying amounts of phase shift at other frequencies.
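The comb shape falls straight out of the math: summing a signal with a copy of itself delayed by τ gives a gain of 2·|cos(π·f·τ)|, alternating between +6 dB peaks and deep nulls as frequency sweeps. A minimal sketch (the 1 ms delay is an arbitrary example of mine, not a figure from the thread):

```python
import math

def comb_gain_db(freq_hz: float, delay_s: float) -> float:
    """Gain (dB) when a signal is summed with a copy of itself delayed
    by delay_s seconds: |1 + e^{-j 2*pi*f*tau}| = 2|cos(pi*f*tau)|."""
    g = abs(2.0 * math.cos(math.pi * freq_hz * delay_s))
    return 20.0 * math.log10(g) if g > 0 else float("-inf")

tau = 1e-3  # 1 ms delay -> nulls every 1 kHz, starting at 500 Hz
for f in (500, 1000, 1500, 2000):
    print(f, round(comb_gain_db(f, tau), 1))  # null, +6 dB peak, null, peak...
```

Plot that over frequency and you get the "teeth" Steve describes.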

All this from ONE FINITE delay between channels - from the math at the beginning, you can see that even a few microseconds difference in the delay of left and right channels of a mixer, hardware or software, can cause big problems with the sound.

This is why people who want the best sounding mix from an analog board have it re-capped at the least, when they buy it used - every cap in the signal path can cause differing amounts of delay, so if the caps in two adjacent channels aren't as close to identical as they can be, voila - phase problems. Ditto the caps in the busses, the stereo bus, EQ, etc.

Assume two adjacent channels of a board, set up with an XY pair of small diaphragm condensers, used as drum overheads - assume slightly different distances from each mic to the source (remember, it's a drum KIT, not a single hit from a stick on a single component of the kit) - assume slightly different capacitor values in the EQ of these two channels, and that you decided to BOOST the EQ at 15K to bring out the "shimmer" - Now, assume you ALMOST boosted the same amount on the two channels, by looking at the pot positions - Can you imagine the number of places this stereo signal can get "phased" to death under those circumstances? Makes it hard to believe ANYBODY can EVER record a drum kit, huh?

Just some more food for thought, nobody had mentioned some of this yet... Steve
 
Good info Steve, thanks. How often does a mixer need to be recapped? I don't recall ever hearing of anyone getting a Mackie recapped. Isn't that more of a Neve or SSL kinda thing?
 
knightfly

Great information. Most of the tracks I am having a problem with were recorded in stereo--a big no-no, as that is where the phasing is being introduced, I gather from your and Light's comments.

I need to move my tracks to mono, and I think I can reduce some of the problems. I still have one question, though. Say I have reduced the phasing problem by miking correctly and recording computer tracks in mono to reduce the time delay--can independently recorded tracks/instruments with similar frequencies introduce phase conflicts by competing with each other?

Hopefully that question was clear.
 
It is my practice to invert the polarity of every instrument when I am mixing, just to see if there is an improvement in the relationship between parts or not. This is often quite effective when you have several instruments playing similar parts which are panned similarly. Sometimes it helps, sometimes it makes things worse, and sometimes it has no effect, but I always know that I have at least tried it. I record a lot of acoustic guitar, and I like to layer many parts (I'm a guitar player, what can I say). I frequently find that two parts will interact in ways I did not expect.
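Light's flip-and-compare habit can be sketched as a crude numeric check, with RMS of the summed signals standing in for your ears. The function names and the RMS criterion are my own illustration, not anyone's actual tool--in practice the ears make the final call:

```python
import math

def rms(samples):
    """Root-mean-square level of a list of samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def best_polarity(track, rest_of_mix):
    """Try a track both ways against the rest of the mix; return +1 or -1
    for whichever polarity sums with more energy (less cancellation)."""
    normal  = rms([a + b for a, b in zip(track, rest_of_mix)])
    flipped = rms([-a + b for a, b in zip(track, rest_of_mix)])
    return 1 if normal >= flipped else -1

# Toy example: a track that is an inverted copy of the rest of the mix
# cancels completely unless you flip it.
print(best_polarity([1.0, 2.0, 3.0], [-1.0, -2.0, -3.0]))  # -1: flip it
```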

Just one more thing to make the process more complex.

Light

"Cowards can never be moral."
M.K. Gandhi
 
Tex - "How often does a mixer need to be recapped? I don't recall ever hearing of anyone getting a Mackie recapped. Isn't that more of a Neve or SSL kinda thing?" -

That's because those boards are just too big and heavy to feel good about throwing away :=) (JK)

Every piece of electronic gear that has capacitors in it is subject to component drift - other components too, but not as bad as caps. The main reason you don't hear of anyone re-capping a Mackie is that it's gonna be cheaper, by the time you research which caps, buy them, and install them, to just get a part-time job flipping burgers and save for a NEW Mackie.

Good caps aren't cheap, and there are several per channel strip, several more in the master section, more yet in the tape return section, even more in the power supply, etc - you could easily spend $40 or more per channel - times 32 channels, you're at $1280 just for caps, not to mention getting rid of carbon resistors in places where they'll generate noise (anywhere there's current flowing) (although there are some who claim carbon resistors sound different than metal film, I'm not willing to claim either way, I just know carbon resistors generate more noise, which I'm not overly fond of)

Anyway, at maybe $60 a channel for a 32 channel board, you're already at around $1900 for parts, and that's assuming you HAVEN'T found a pin-compatible, lower noise, faster slew rate op amp for several of the chips - if you don't do this as a labor of love, but instead have someone else do it, the labor costs will be far more than the parts cost - if you hire it done, you could easily be $5k into a 32 channel board for basic clean-up. This is assuming that the inexpensive board wasn't achieved by using surface mount components, which can't even be replaced unless you have $300-400 minimum invested in SM soldering/desoldering gear, and the right type of soldering paste, etc.

Also, finding a local tech that knows which types of caps are better for which circuits can be a real crapshoot in itself - Some caps have a high ESR, or Equivalent Series Resistance, which can change the Q of an EQ from what was intended to something with a lower Q (adding resistance to a pure LC circuit will lower the Q)

Likewise, if you get wirewound resistors, you've just introduced inductors into a circuit that wasn't intended for them. Even leaving the old carbon high-tolerance resistors in place is better than that.

Electrolytic caps used in power supply filters tend to dry out with age, lowering their capacitance, which BTW can vary as much as 50% from the label and still meet spec on some caps - lower capacitance here decreases filtering of the raw DC, increasing ripple on the power supplies and causing hum -
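That ripple effect is easy to ballpark with the textbook reservoir-cap approximation Vpp ≈ I / (f_ripple · C). The values below are hypothetical, just to show the trend Steve describes:

```python
def ripple_vpp(load_current_a: float, cap_f: float,
               mains_hz: float = 60.0, full_wave: bool = True) -> float:
    """Approximate peak-to-peak ripple on a rectifier's reservoir cap."""
    f_ripple = mains_hz * (2 if full_wave else 1)  # full-wave doubles ripple freq
    return load_current_a / (f_ripple * cap_f)

fresh = ripple_vpp(1.0, 4700e-6)  # a healthy 4700 uF cap at a 1 A load
aged  = ripple_vpp(1.0, 2350e-6)  # same cap dried out to half its value
print(fresh, aged)                # ripple doubles as the capacitance halves
```

Halve the capacitance and the ripple doubles--which is exactly why a dried-out supply cap shows up as hum.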

Middleman - "Say I have reduced the phasing problem by miking correctly and recording computer tracks in mono to reduce the time delay, can independently recorded tracks/instruments with similar frequencies introduce phase conflicts by competing with each other?" -

Sure they can, but it's nowhere NEAR as likely as two simultaneously mic'd stereo tracks of the SAME performance by the same instrument, and since any interference will be a passing thing, it most likely will just add some random "shimmer" to the mix. If you find two tracks fighting for space in the freq spectrum, you could dip the EQ slightly in each, at slightly different freqs, just enough to give each of them their own space, and pan each slightly to one side or the other, maybe use less 'verb on one to bring it forward a bit, things like that...

Light, know what you mean about being a guitarist - I helped a buddy (long distance) with his first CD a few years ago, and he still had a hard time leaving enough room for his wife's vocals on a couple of the tunes - since I play (at) keys, bass, drums, guitar, some strings and some brass, I think it helps me balance the "power struggle" a little...

BTW, the phasing comments on my earlier post, if you draw the same conclusions I have, are all the more reason to trust your EARS when mixing and tracking - even if you have ALL these updates done to an analog board, I can almost GUARANTEE that you will NOT get the exact same signal path quality thru two adjacent channels just because "all the knobs are set the same" - You can get close that way, then the "cute little wavy things" (silly little cilia) inside your ears need to take over... Steve
 
knightfly said:

One reason software mixers start to lose highs when multiple tracks are mixed, can be this minute (small, not 60 seconds) time delay between stereo tracks. If you figure the time delay of just one sample at 44.1 kHz, it comes out to 22.6 microseconds - this is enough to cause a phase shift of approximately 120 degrees in a 15 kHz signal, which can cause phase cancellations/stacking that is very noticeable. If the summing algorithm of the audio software isn't very carefully coded, this minute time delay is going to screw things up with multiple tracks - like I said, it doesn't take much of a time shift at those higher freqs to really blow it.

Sounds to me like the more one can anticipate the desired sound for a track as it is incoming, the less these software mixers will screw things up. In other words, a great EQ section coupled with a preamp used to tailor the source sound may be a good way to approach things in the digital domain. If I add a bit too much, it's better to use subtractive EQ at that point than to boost anything. Yes?

I certainly am not into the idea of recapping a console. I don't own a huge console, so I guess it's a moot point, but I know that caps do drift with time. Sounds to me like phase issues are a fact of life, and that good ears and engineering skills are the best medicine to minimize the degree of phase problems inherent in recording audio to begin with.

Great posts, by the way! I am enjoying all of this!
 
It's all about the ears. Some of the best engineers I know have no clue about any of the technical stuff, and barely know how to use their boards, but they put out great stuff. I have one friend who cannot for the life of him figure out how to wire his studio; he just cannot do it. Even so, his tiny little one-room studio in his basement is busier than almost anyone's I know. The only real selling point of his studio (aside from his $25.00 hourly rate) is him. He gets more acoustic artists, and a fair number of small light-jazz groups, than you could imagine, all because of his ears.

Light

"Cowards can never be moral."
M.K. Gandhi
 
My sediments perzackly - after 38 years in the various technical fields, I tend to say,"Nice specs, now what does it SOUND/LOOK like?" (Depending on whether it's audio or video gear - BOTH if it's a Video Recorder)

Now, if we can just get some of these so-called high end audio manufacturers to learn just WHEN to label things Polarity, instead of phase... (maybe it's just cost-prohibitive to silk screen those extra 3 letters on each channel... :=)
 