Phase and Mastering

I started watching and didn't see anything about phase, so I skipped through and still didn't see anything about phase. Maybe you could link to that part.
 
Very interesting video!
Coming from the physical chemistry community, I ran into a similar phenomenon in one of my lectures on the Fourier transform.
If you mix two waves, where one wave has double the frequency of the other (i.e. one octave higher), you have to take their relative phase into account, because you might get exactly what this guy is talking about:

Here we have two waves with frequencies 2 and 4 (arbitrary units). You can see the individual oscillations as grey lines. They both start at a maximum, i.e. their phase difference is 0. As you can see, the resulting sum (blue line) peaks at +2 amplitude (again arbitrary units) but reaches only about -1 in the negative direction.
[Attachment: fft-phase-1.png]

Here is a second example, just as above, but now the oscillations are phase-shifted by pi/2.
As you can see, the maxima of the individual oscillations now sit at different positions, and the resulting wave is symmetric around zero, peaking at about +1.71 and -1.71.
Looking at the frequency spectrum (right side), the peak positions and heights are identical in both examples, though the peaks in the second spectrum look a bit more symmetric.
[Attachment: fft-phase-2.png]
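The two examples are easy to reproduce numerically. Here is a minimal sketch (assuming NumPy; the frequencies and phase shift match the plots above) that sums the two cosines with and without the pi/2 shift and confirms that the magnitude spectra are identical even though the waveforms peak very differently:

```python
import numpy as np

# Two cosines one octave apart, sampled over one second.
t = np.linspace(0, 1, 4096, endpoint=False)

in_phase = np.cos(2 * np.pi * 2 * t) + np.cos(2 * np.pi * 4 * t)
shifted = np.cos(2 * np.pi * 2 * t) + np.cos(2 * np.pi * 4 * t + np.pi / 2)

# In phase: the positive peak (+2) is much larger than the negative excursion.
print(in_phase.max(), in_phase.min())

# Shifted by pi/2: positive and negative peaks are equal in size.
print(shifted.max(), shifted.min())

# Yet the magnitude spectra cannot tell the two apart: |FFT| discards phase.
mag_a = np.abs(np.fft.rfft(in_phase))
mag_b = np.abs(np.fft.rfft(shifted))
print(np.allclose(mag_a, mag_b))  # True
```

This is exactly the point of the video as I read it: the magnitude spectrum alone does not determine the waveform, so two signals with identical spectra can have very different peak levels.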
 
I don't think it is a matter of "bad"; to your point, it is a matter of information. One more thing to look at in order to understand what is going on. That was my takeaway from the video.
 
Good morning. I use a lot of stereo micing techniques and I always zoom in on each track and check the alignment between the two. That prevents offsets creeping in between them and tells you how accurately you have placed the mics. In other words, correction at the beginning, before mixing (and therefore mastering as well), reduces the effect before you have multiple tracks in the mix and are sitting in front of the mastering engineer. It's also good practice with multiple drum mics, before you start EQ'ing the mix, to use a solid snare hit to align everything. As the frequency goes up it becomes less meaningful in the mix, so you will hear it first in the clarity or tightness of bass and drum hits. IMHO DM60, I guess I'm not the only one who attaches the wrong thing once in a while? Ha ha. Welcome to my world. :-)
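The "zoom in and align on a snare hit" step can be automated with cross-correlation. A small sketch (my own illustration, not any poster's actual tool, assuming NumPy) that estimates the sample offset between two drum-mic tracks from a shared transient and then shifts one track to match:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a snare transient: a short burst of windowed noise.
snare = rng.standard_normal(256) * np.hanning(256)
offset = 37  # true misalignment in samples (unknown in practice)

mic_a = np.zeros(4096)
mic_b = np.zeros(4096)
mic_a[1000:1256] = snare
mic_b[1000 + offset:1256 + offset] = 0.8 * snare  # later and quieter

# Full cross-correlation; the peak position gives b's lag behind a.
corr = np.correlate(mic_b, mic_a, mode="full")
lag = corr.argmax() - (len(mic_a) - 1)
print(lag)  # 37

# Advance mic_b by the measured lag to line the transients up.
# (np.roll wraps around; a real tool would pad or trim instead.)
aligned_b = np.roll(mic_b, -lag)
```

Dividing the lag by the sample rate gives the offset in milliseconds, which is the number you would otherwise read off by zooming in on the waveforms.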
 
Surely with stereo techniques, differing time alignment is exactly what some of the popular techniques rely on? Either amplitude differences, or time differences, or mixtures of both. It's never occurred to me to tinker with time alignment on M/S, X/Y or their variants, or on A/B. With drums and separate mics I get it, but a stereo technique must be treated as a combined capture, and adding delay to one channel or the other would produce big swings in the apparent image locations?
 
When you zoom in and look you should find only relatively minor differences unless something is wildly wrong. As per your example of M/S: I can't perfectly align the capsules, but I can apply a very fine offset to one to correct it. And as you observed, there is a much larger difference across multiple drum mics than in any of the stereo techniques. I should also note that I use stereo pairs on most acoustic instruments, as well as the more classic stereo pairs on complete groups such as chamber quartets. With multiple stereo pairs "live off the floor", such as a jazz quartet, I check them against each other: I record a hand clap on all mics from the position I consider the best live listening position. Maybe I'm a little bit of a nitpick?
 
If it works for you that's fine - a new technique is always worth discovering. I guess you are using the 'stereo pair' as two different tonal sources on solo instruments, so they're not subject to the usual rules. We use the clap sometimes when trying to blend in spot mics for orchestras or larger choirs, where there's a big distance between the stereo cluster and the spot mic.
 
Well, it is a bit more complicated than tonal differences. It's more like replicating how we hear the instrument, unless someone is deaf in one ear. Musicians understand it when I tell them to cover one ear while I strum a guitar in front of them. AIX Records has used the technique for a long time, as have others. Cheers, T
 
With a spaced pair, anything to the sides will arrive at the mics at different times. If you "fix" the alignment of sources on one side, it will make the alignment of sources on the other side worse.
 
For certain. You have to decide what you want for the centre, or the mid point. For solo classical guitar I have used M/S in front of the guitar, where I like the balance, and added a spaced pair of omnis 45 cm each side of the M/S.
 
What for? A classical guitar has very limited width; four mics in less than 3 ft? I can't get my head around close-miking near-point-source instruments like this. Surely your right channel would just be noisy fingerboard stuff and your left channel a hand-blocked soundhole tone?
 
Good morning Rob. "What for?" Well, I try to record as many acoustic performances as I can in a great-sounding environment, and this allows me to capture a lot of the interaction with the room and place it in a ratio that represents what it sounds like in that environment. It is not about close-miking and then creating the environment through artificial reverbs.
As I said above, when you get a chance please check out AIX Records and Chesky, as well as most classical concert recordings. Please see https://sonicscoop.com/2015/06/15/o...-recording-the-business-of-specialized-sound/

As one of my old profs said, "the more I learn, the less I know." I use this quote often.

Have a great day and stay healthy.
 
I appreciate your effort to make this kind of recording, but I'm afraid the basic philosophy goes against all my training, technical education and understanding. I do understand the requirements and desires of the audiophile community, but I'm too old to re-learn the old ways. The interview was about everything except technical detail. In fact, the AIX site seems to lack the same thing.

I am not saying you are wrong - I'm sure this close-mic technique (because it certainly isn't the classic use of any of the usual techniques) gives clinical precision, but it isn't realistic, and the absence of the usual 'sweeteners' leaves me wondering what AIX listeners are actually trying to achieve?

I appreciate your viewpoint and purpose - but it passes me by.
 
Well I usually learn something "audio" related every day. And that is what I enjoy very much. But I'm only 63.

Alan Blumlein is usually given credit for most of the development, but Mr. Olson was certainly there too, plus lots of Decca and RCA people from the 1930s onward. I enjoy listening to the RCA classical recordings from the 1950s in L/C/R, transferred to SACD.

I found this nice history of stereo techniques; it is very interesting to see the flip side of stereo playback.


This DPA link gives examples of the relationship between microphone spacing, positioning and the resulting playback image. A very nice link.

If you would ever like samples of a solo acoustic guitar recording with three mixes (mid only, M/S, and then M/S with omnis), I could send them to you if you supplied an email address.

Best regards,

Tom
 
rob@earsmedia.co.uk will find me - I'd certainly be interested to hear them. I studied Blumlein's papers in the late 90s and they were very heavy on maths, and the DPA info is well presented and has the details. But what I don't really get is the notion of near-field capture: closer than what? 1 m? It gives exaggerated width and is low on room sound. My own experience with grand pianos, probably the instrument with the biggest physical size and separation between bass and treble, is that closer than a metre you are just getting a close-miked sound, which I don't think is realistic on speakers spaced wider apart than the instrument is wide?

The time adjustments you mentioned earlier now sort of fall into place, because you are creating a phase-coherent recording where you can. But doesn't this go against the purist approach of no EQ, no compression and no tweaking? It's very similar to the experiments with binaural sound in the 90s, where the mic positions would be adjusted to get the 'best' headphone image - but this never caught on, because on speakers there was no stereo image at all, or a badly skewed one?

A couple of figure-8s in a Blumlein-inspired cluster work wonderfully in free space - 3 m or so from a small ensemble, or further from a big orchestra - but they sound horrible close in, absolutely needing a centre mic to be blended in, and then the stereo image trails away?

The idea of this intrigues me, I must admit, but trying to time align four mics in close proximity to an instrument with limited width in real life makes me wonder what is happening?
 