Phase and Mastering

Yes, I absolutely love the Blumlein figure eights in a great room.

I also have great respect for ORTF and tweaking the angle in anticipation of the speaker playback sound field. I have used it very successfully on string quartets and string quartets with piano: one pair for the quartet and another for the piano (live concert recordings as well).

The last time I did a live concert recording of this situation, I used an ORTF pair in front of the traditional semicircle formation of the string quartet so I could use the L/R pan positions to create a realistic sound field width on the playback speakers, then place the piano, using the same technique, where it had actually been in the sound field. I also used another ORTF pair at the front of the stage, dead center, to "hear" for comparison the combined sound field of quartet and piano (panned hard left and right).

Now on to the "time alignment": yes, you can undo the real time delay between the arrival of the piano (in this case about 15 feet behind the string quartet) and the arrival of the quartet, relative to the listening position out front.

I approach it as reconstructing the sound field from the best seat in the hall, usually the front row, dead center, for this purpose. I measure the hall's width and stage depth and plot out the quartet's and piano's positions, as well as my reference listening position (laser measurement has made this easy).

Then the arrival times are easy to calculate.
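For what it's worth, that calculation is just distance over the speed of sound. A minimal sketch (the stage coordinates below are hypothetical, chosen so the piano sits 15 feet behind the quartet on the center line):

```python
import math

SPEED_OF_SOUND_FT_PER_S = 1128.0  # roughly, at room temperature

def arrival_delay_ms(source_xy, listener_xy):
    """Propagation time from a source position to the listening position, in ms."""
    distance_ft = math.hypot(source_xy[0] - listener_xy[0],
                             source_xy[1] - listener_xy[1])
    return 1000.0 * distance_ft / SPEED_OF_SOUND_FT_PER_S

# Hypothetical stage plot (feet): listener at front row center,
# quartet centered 8 ft upstage, piano 15 ft behind the quartet.
listener = (0.0, 0.0)
quartet = (0.0, 8.0)
piano = (0.0, 23.0)

offset_ms = arrival_delay_ms(piano, listener) - arrival_delay_ms(quartet, listener)
print(f"piano arrives {offset_ms:.1f} ms after the quartet")  # → 13.3 ms
```

Off-center positions shorten the path difference slightly, which is exactly why verifying the calculated numbers against recorded hand claps is worthwhile.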

The third set of ORTF mics at the front center edge of the stage was a great way to verify this. I recorded on all three sets hand claps from the piano position and from the center of the quartet's position. The sharp impulse makes it easy to verify the actual arrival times against the calculated ones.

Now to the practical side, where the rubber meets the road.
1. The third set (the reference set, at the front center of the stage) gave me a very realistic sound of the room.
2. The hall wasn't a great venue: a heavily carpeted stage that the strings don't like, and about a 2.2-second RT60. So the third set was useful for the arrival-time check and for knowing the "hall sound."
3. Both the piano pair (about 5' high, 3' back, looking into the midpoint of the open lid) and the quartet pair (a fairly tight quartet with about a 4' radius, the pair about 3.5' out front and centered) gave a pretty dry sound (little hall content).
4. So I decided to create the sound field as if the listener were 10' back from the quartet, with the quartet spanning speaker to speaker (first violin left, viola slightly left of center, cello slightly right of center, and second violin right), panned hard left and right. Then I placed the piano at around 42L and 42R, which made it sound slightly back from, or behind, the quartet.
5. The only arrival-time or phase alignment that anyone would possibly hear (but not know why) would be the piano and cello landing on the same lower bass note at the same time. I used the difference between the arrival times when I clapped at the piano position (arrival at the quartet mics minus arrival at the piano mics) to offset the stereo quartet so it was time-aligned with the piano.
6. Sound travels at close to a millisecond per foot, so the offset came out close to 15 ms.
7. The other useful thing about the hand claps is that I could add stereo room reverb to a dry hand clap recorded in my dead studio until it matched the clap from the hall.
8. Then I added a very small amount of stereo room reverb to the mix, with the matching 2.2-second RT60. It sounded like the hall, but not muddy the way the third mic set did.
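One way to pull that clap-based offset out of the recordings automatically is cross-correlation. A minimal sketch with synthetic signals (the 96 kHz rate matches a typical session, but the clap waveform and the 15 ms shift below are made up for illustration):

```python
import numpy as np

FS = 96_000  # assumed session sample rate

def estimate_offset_samples(ref, delayed):
    """Lag (in samples) at which `delayed` best lines up with `ref`,
    found via full cross-correlation. Positive means `delayed` arrives late."""
    corr = np.correlate(delayed, ref, mode="full")
    return int(np.argmax(corr)) - (len(ref) - 1)

# Synthetic hand clap: a short decaying noise burst
rng = np.random.default_rng(0)
clap = rng.standard_normal(256) * np.exp(-np.arange(256) / 40.0)

shift = int(round(0.015 * FS))                # 15 ms late = 1440 samples at 96 kHz
near = np.zeros(FS // 10)
far = np.zeros(FS // 10)
near[1000:1000 + 256] += clap                 # clap as heard at the nearer mics
far[1000 + shift:1000 + shift + 256] += clap  # same clap at the farther mics

offset = estimate_offset_samples(near, far)
print(offset, offset / FS * 1000.0)           # → 1440 15.0
```

Shifting the later track earlier by `offset` samples then time-aligns the two captures of the shared source.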

Relative to the beginning of this thread, I try to identify the elements that add up to create the offset in the final stereo mix that the gentleman in the video was negating through software, to understand what contributes to it and whether it can be avoided or even heard. He said he is creating more headroom, but with digital capture and easy scaling it is simple to offset the complete mix. So is it detectable? I still think only in the lower bass, but by the end of the final mix, was it a natural combination? What are your thoughts?

It appears to me that, once again, the only place we might hear this is when instruments in the lower bass region hit the same note: the classic kick drum versus bass guitar, double bass, or piano. If they are totally isolated when recorded and are playing with good timing, a simple check of inverting one of them should get you there, like reversing an out-of-phase speaker. If they are recorded together with mic bleed, then I pick the reference spot, clap, check the arrival times, and then check the phase inversion.
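That inversion check can be sketched numerically: if flipping one track's polarity makes the summed level drop sharply on a shared note, the pair was phase-coherent to begin with. The 41.2 Hz sines below are synthetic stand-ins for a bass and kick landing on the same low-E fundamental:

```python
import numpy as np

def rms_db(x):
    """RMS level in dB (tiny floor avoids log of zero on a perfect null)."""
    return 20.0 * np.log10(np.sqrt(np.mean(x ** 2)) + 1e-12)

fs = 96_000
t = np.arange(fs) / fs                       # one second
bass = np.sin(2 * np.pi * 41.2 * t)          # low-E fundamental
kick = 0.8 * np.sin(2 * np.pi * 41.2 * t)    # same note, same phase

normal = rms_db(bass + kick)
inverted = rms_db(bass - kick)   # polarity-flip one track and re-sum
print(f"{normal:.1f} dB vs {inverted:.1f} dB")  # inverted sum drops by ~19 dB
```

A deep null on the inverted sum means the tracks were already aligned; if the normal sum is the quieter one, the pair started out inverted, like the reversed speaker.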

I'm just hanging around my house while my garage door is being replaced, then I'll be heading back to the studio to continue mixing two projects. I'll try to remember the samples and get them to you. It is interesting how we hear the stereo pairs and the sum of the information.


I'm not sure this really is the way to do it. I have never been able to blend more than one true stereo source: adjust for one thing and something else goes astray. Single spot mics with the right panning, delay, and treatment I've got away with, but the main array does the hard work. In acoustic spaces, with the reflections and larger dimensions, I'm very unsure about this maths approach. The concept of a laser measurement really worries me; I'm really wary of making decisions by the physics alone.

I've been at many sessions at quite prestigious studios and I've never seen any of these techniques used. I was a lowly assistant, my job to place microphones under direction, but every time the key feature for the orchestral stuff was the capture area: the orientation of the mic and the distance from the player, designed to reduce spill and capture the right sound, as in no direct path to a French horn bell, only the diffused sound. Measuring a three-dimensional orchestra and trying to squeeze it into two dimensions by tinkering with timing and distance? I think I'll stick to my system: put the mics in the first-choice position, record, listen, adjust position, add an extra track for anything weak. In the studio it's balance and blend.

The clap with an orchestra also worries me. From the leader to the percussion at the back, the timing is different. From the organ in a big space to the conductor is a sizeable delay. From basses to violins is a delay. When is "now"? This is why I resist the urge to adjust: were those players actually playing a little ahead to compensate for being behind due to distance? Do the musicians sync to what they hear or to what the conductor suggests? Every musician is time-delayed from the others, so are you fixing a problem that doesn't exist? There is no time coherence in an orchestra or choir.
Yes, the best is still to have a great hall/sound, place a stereo pair or Decca tree (another pre-engineered setup: spacing, angles, and mic combination) in the best-sounding, most representative position, and capture at good resolution.

My objective is this first, but I usually have little control over the "good hall," physical positioning, stage cosmetics (we don't want to see those mics), and budget. Ha ha.
Then I have to start planning to get the best that I can. So far, so good.

However, engineers are now dealing with integrating spot microphones with their Decca trees or surround arrays.

It is an interesting world.

I'm still waiting for my garage door while programming a new fibre optic interface for my TV.


They have been for years, to be fair. It's just using wide-area techniques on small sound sources that seems flawed. Most close so-called 'stereo' techniques on guitars or string instruments aren't stereo at all; they're just twin-channel.

If your results work and you're happy, that's great. I've been listening to some of the stuff on the AIX Records site, and even through YouTube audio, I'm left with the impression (using Laurence Juber as an example) that what we're hearing is a close-miked recording with reverb that does not match what we can see: a very close perspective, with a double bass quite artificially prominent. No idea about the surround, but why would you actually want it when it does not sound like what you can see in the video? It's clean, well recorded, and nice musicianship, but not remotely sounding like the space.

Looking at that guitar array, it's also got two mics, one pointing at around the 12th fret and the other at the space between the bridge and the strap end: the one place virtually all guitars sound thin and weedy, not a critical point for tone. Based on my ears, these mics have been panned much closer together and are just two different blended sounds. Not 'stereo' in any real sense. The room is way down, but the reverb sounds like a cathedral? How is this an example of high-definition audio in anything other than a purely technical sense, with higher sample rates and bit depth, etc.?

The double bass sounds more realistic, but the spaced 87s(??) again are NOT stereo; one of them would have done the same job. Two mics like that on MY double bass fight like mad, so again I'm left with the conclusion they were just one up, one low. The drum kit sounds awful: just echoey bangs and crashes that sound like they are in a different space. Don't get me wrong, I like the quality I hear, but it's mixed so artificially. Not badly, but it sounds artificial and simulated. If they'd cut the mic count down by 50% it would have sounded like every studio album, or they could have recorded them all in stereo and it would have sounded very different. That's my viewpoint, I think. I don't like what I hear from AIX; it's just not to my taste.
Just got home to enjoy a bite of dinner and saw that you had left a note.

I don't have that LJ recording, but I will get it, and I have never listened to the YouTube samples, but I will.

LJ's Guitar Noir was recorded here:
"Zipper Auditorium at the Colburn School for Performing Arts on Grand Avenue in Los Angeles has terrific acoustics — and a really fantastic sounding 9-foot Model D Steinway piano." A concert hall. I haven't found a quote for the reverb time, but you said the "reverb sounds like a cathedral." I usually see specs on cathedrals of > 5 seconds.

Here are the tech notes from the recording:
"1.2 Tech Notes All of the AIX Records tracks on the disc were recorded at 96 kHz/24-bit PCM to a Euphonix R-1 multichannel digital audio recorder or a PC running Nuendo DAW software (depending on the year of the production). Multiple stereo pairs of microphones were connected to Benchmark microphone preamps, converted by Euphonix ADCs (Crystal Semiconductor), and recorded on the workstation. During the postproduction phase, the multitrack master was digitally mixed through a Euphonix System 5 console and monitored through 5 B&W 801 Matrix III speakers powered by a Bryston 9B power amplifier, and a Bryston 4B-powered TMH Labs Profunder subwoofer. The mixes were captured digitally on Sound Blade by Sonic Studio at 96 kHz/24-bits. There was no analog or digital signal processing (other than amplitude) used on any of the recordings. No equalization, no dynamics compression or limiting, and no artificial reverberation were used in the preparation of these tracks. As a result, the frequency response and dynamic range of these tracks exceeds the fidelity of standard definition compact discs, analog tape, and vinyl LPs. When a performer plays a loud transient, it is recorded and preserved. This is what music can sound like when optimal fidelity is the goal. Listen for the rich variety and real world amplitude variations that an uncompressed vocal or percussion instrument possess. The emotional connection of the music can be enhanced by a purist approach to audio engineering."

As I understood it, with AIX recordings there is no reverb added (beyond that of the concert venue selected), no compression, and no EQ.

It will be interesting to get your view on my guitar recording example. I look forward to it.


Tom eh
Get you - but there is some very odd reverb audible on the dry guitar. It certainly doesn't sound like anything natural. Maybe you have to get into the acoustics, but it doesn't sound like a real space at all; it sounds mega-produced. And I just can't take in the notion that there's no EQ, treatment, or processing, because it sounds like a lot of effort has gone into it.

Did the garage get fixed?
Rob, I just sent the sample guitar tracks via WeTransfer. You will get an email invitation to download them.

Tom eh