Wow, I step away from the computer for a couple of days...
Morningstar is absolutely right about the 3:1 rule. It has ZERO to do with double miking the same source; it's only a general guideline for reducing bleed from other sources.
However, BlackCircle's erroneous belief, namely:
If mic 1 is 0.5 inches away from source, mic 2 needs to be placed directly behind mic 1 by a distance of 0.5in x 3. Otherwise it will phase.
is really the same kind of error folks were making on the first page of this thread, just in a different flavor.
BlackCircle, the reason that statement of yours is incorrect is that it ignores the fact that every frequency has a different wavelength. This causes two problems with your example:
First, extending the mic out to 0.5 * 3 inches (or simply 1.5") only keeps things in phase for frequencies whose wavelengths fit a whole number of times into that 1" gap - the only one of which, BTW, that lands in the audible spectrum is a fundamental frequency of approx. 13.5kHz. That spacing, however, at, say, 10kHz (wavelength 1.35") means that the 1" gap would actually be a phase shift of 1/1.35 or 74%, or about 266°. At, say, 1kHz, with a wavelength of about 13.5", a 1" gap would mean a phase shift of 1/13.5, or 7.4%, or about 27°.
In other words, with that miking everything is actually thrown out of phase by different amounts, except for the one frequency of about 13.5kHz, which just so happens to coincide with that spacing.
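If anyone wants to check that arithmetic, here's a quick Python scratch calculation. It assumes the speed of sound is roughly 13,500 inches per second and uses the hypothetical 0.5"/1.5" mic positions from BC's example; the numbers are approximate:

```python
C = 13_500.0        # speed of sound, ~13,500 inches/second (approx.)
gap = 1.5 - 0.5     # path difference between the two mics, in inches

shifts = {}
for freq in (13_500, 10_000, 1_000):
    wavelength = C / freq                          # inches
    shifts[freq] = (gap / wavelength) * 360 % 360  # phase shift, degrees
    print(f"{freq:>6} Hz: wavelength {wavelength:5.2f} in, "
          f"shift {shifts[freq]:5.1f} deg")
```

Only the 13.5kHz line comes out at 0°; everything else lands somewhere else on the circle.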
A second, more minor point, is that with a spacing of 0.5" and 1.5", the one frequency that remains in phase between the mics happens to be a frequency that you are capturing at its inverted phase (wavelength = 1", mic distance 0.5", i.e. a 180° phase shift). They may be in phase with each other, but at that frequency it would be almost as if you flipped polarity for that frequency alone (minus any DC offset effects).
But this is really the exact same problem I was trying to describe on page one, except looking at it from the standpoint of frequency rather than of wavelength. One cannot just slide a complex waveform, like one from a miked guitar amp, down the timeline and throw the whole waveform in or out of phase by a set amount, because every component frequency in that waveform will have a different relation to the amount of time offset you introduce. Sure, you may be able to shift the timeline by about 0.074 milliseconds and bring the 13.5kHz component back into phase, but it won't do the same for 10kHz or 1kHz or any other audible frequency.
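Same scratch-calculation treatment, from the time side (again, my own rough numbers: the slide is one full period of 13.5kHz, which is also the travel time of the 1" gap):

```python
dt = 1.0 / 13_500      # seconds; one period of 13.5kHz (~0.074 ms)
print(f"time slide: {dt * 1000:.4f} ms")

offsets = {}
for freq in (13_500, 10_000, 1_000):
    # degrees of rotation that slide produces: 360 * freq * dt,
    # written as an exact ratio to dodge float round-off
    offsets[freq] = (360 * freq / 13_500) % 360
    print(f"{freq:>6} Hz shifts by {offsets[freq]:6.2f} deg")
```

One time offset, three different phase rotations. The 13.5kHz component comes back around to 0°; 10kHz and 1kHz are left hanging at other angles.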
And as far as the time vs. no time argument, I think we're getting hung up on the following:
When one adjusts their snare mic to the OH by sliding it on the timeline (just for one real-live example), all they are doing is lining up the main attack transients; i.e. they are bringing the spacing of the main drum hits "into phase". A close examination of the waveforms, however, will show that this does not bring the actual full waveform into phase; rather, there will still be plenty of phase conflicts in the details of the waveform between the loudest transients. This is because the spacing of the two microphones is such that they cannot be in phase for all frequencies, and will, just like with BC's guitar amp example, be out of phase by a differing amount for every frequency comprising that snare hit.
Two waveforms that are out of phase, however, do not need to have a time delay between them; they simply have differing values at each moment in time, values that correspond to a change in phase.
Look one more time at reel's original sin/cos chart. Let's assume that sonixx is right, and that the x axis does indicate time. Even still, there is no time shift here, because both waveforms start at x=0. They start at the same time! The second one is not pushed down the timeline; it is in the exact same position along the x axis as the first one. Or, as I have been trying to say all along: there is no time shift there, and there is no time shift required to have a phase shift.
In fact, if you took one of those drum tracks and changed the phase (and phase only) on it consistently across all frequencies (by, say, 90°), it would not move down the timeline; it would stay in place but change its shape, and would barely resemble the original wave, because the amplitude changes at each frequency would differ.
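For the skeptical, here's a numpy sketch of that claim (a toy signal of my own, nothing to do with anyone's actual tracks): rotate every frequency component by the same 90° and see what happens.

```python
import numpy as np

# Toy signal: two sine components placed on exact FFT bins
N = 1024
n = np.arange(N)
x = np.sin(2 * np.pi * 20 * n / N) + 0.5 * np.sin(2 * np.pi * 75 * n / N)

# Rotate EVERY nonzero frequency bin by the same -90 degrees
X = np.fft.rfft(x)
X[1:] *= np.exp(-1j * np.pi / 2)
y = np.fft.irfft(X, n=N)

# Same frequency content - the magnitudes are untouched...
assert np.allclose(np.abs(np.fft.rfft(y)), np.abs(np.fft.rfft(x)))

# ...but y is NOT x slid along the (circular) timeline by any amount,
# because a fixed 90° means a different time offset at each frequency
mismatch = min(np.abs(y - np.roll(x, s)).max() for s in range(N))
assert mismatch > 0.05
```

No slide, forward or backward, ever reproduces the rotated wave; it stays where it was and simply takes on a new shape.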
Go back to the drum mics example; if one ignores the transients and looks in detail at the rest of the waveform, the two tracks will, in fact, look like quite different waveforms, even though they are capturing the same source (let's pretend we're in an anechoic chamber, so there is no room reverb or bleed). The reason is that, even though it is the same source creating just one waveform, the two mics are capturing each frequency at a different phase, bending the waveform around itself. Even if one adjusts the timeline for the delay, that alone will never bring the two waves into phase, because the phase difference is different for each frequency.
OTOH, to go back to reel's example; if you take the original wave, copy it as is, and just move it down the timeline, sure, it will no longer be phase coherent with how it was in its original position, but that's NOT because the phase of the waveform itself has been altered. The waveform in fact has not been altered at all; it has simply been moved down the timeline - i.e. it has been delayed. The fact that it's no longer going to line up with the original, and that there will indeed be incoherency, does not mean that there's been an actual phase change to the waveform; it simply means that the two are acting like any other pair of incoherent waves that are put together and clashing with each other.
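And the flip side of the previous sketch (same toy signal, my own numbers): a pure delay leaves every sample of the wave untouched, it just arrives later. In frequency terms that shows up as a phase offset that grows with frequency - a linear phase ramp - which is exactly why a delayed copy clashes by a different amount at every frequency even though nothing about the wave itself changed.

```python
import numpy as np

N = 1024
n = np.arange(N)
x = np.sin(2 * np.pi * 20 * n / N) + 0.5 * np.sin(2 * np.pi * 75 * n / N)

d = 10                 # delay in samples (circular shift for simplicity)
y = np.roll(x, d)      # the "delayed copy": identical samples, just later

X, Y = np.fft.rfft(x), np.fft.rfft(y)
assert np.allclose(np.abs(Y), np.abs(X))    # spectrum magnitudes untouched

for k in (20, 75):     # the two bins that actually carry signal
    measured = np.angle(Y[k] / X[k])
    predicted = -2 * np.pi * k * d / N                  # linear-phase ramp
    wrapped = (predicted + np.pi) % (2 * np.pi) - np.pi  # wrap to (-pi, pi]
    print(f"bin {k}: phase offset {np.degrees(measured):7.2f} deg")
    assert np.isclose(measured, wrapped)
```

One delay, two different phase offsets - the higher bin gets rotated further. That's incoherence from a delay, not an alteration of the waveform's own phase.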
G.