WARNING: One of my looooong posts ahead
Believe me mate, I'm not in the habit of collapsing my piano to mono. Piano is normally the main ingredient of my music. I like my bass C to be slightly to the left and my highest C to be to the right. I just borrowed a piano track from one of my mixes for the experiment
Ah, OK. *Whew*, that makes much more sense!
I think though that it's the artificial widening that causes the 'out of phase' peaks. Surely if we didn't widen or use reverb, there would be no dropouts and this thread wouldn't exist?
Keep in mind that most (if not all?) phase coherence meters are simply looking at the left and right channels and calculating how much and (more or less) how often the peaks and troughs of the left side waveform happen to complement or at least to some degree coincide with the peaks and troughs on the right side. The only time they will line up perfectly is when the waveforms in both channels are identical. The only time they will be identical is when they are duplicate tracks - which is in effect just a single mono track panned down the middle.
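Just to make the idea concrete, here's a minimal sketch of the kind of calculation such a meter performs - a normalized correlation between the left and right channels. The function name and details are my own illustration, not any particular meter's actual algorithm:

```python
import numpy as np

def phase_correlation(left, right):
    # Zero-mean both channels, then compute the normalized dot product.
    # Result ranges from +1 (identical, i.e. dual-mono) to -1 (anti-phase).
    left = left - np.mean(left)
    right = right - np.mean(right)
    denom = np.sqrt(np.sum(left ** 2) * np.sum(right ** 2))
    return np.sum(left * right) / denom

t = np.linspace(0.0, 1.0, 44100, endpoint=False)
mono = np.sin(2 * np.pi * 440 * t)

print(phase_correlation(mono, mono))   # identical channels: correlation is +1
print(phase_correlation(mono, -mono))  # polarity-flipped copy: correlation is -1
```

Any genuinely stereo signal - where the two channels differ at all - will land somewhere between those two extremes, which is the point above: the meter is just reporting waveform similarity, nothing more.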
This means that any truly stereo signal - where there is any difference whatsoever between the left and right channel information - is going to measure as less than perfectly phase coherent most of the time, and is also bound to swing to the "anti-phase" side - i.e. register phase interference and even phase cancellation - at least part of the time.
This will be true regardless of what the actual content of the two channels is. It doesn't have to be a time-based effect like delay or reverb; pumping a DI guitar through one side and a didgeridoo in a dead room on the other side is also going to produce some measure of coherence and incoherence on the meters, simply because the two waveforms are not going to be identical.
I bring this up just to illustrate that the meters don't know and don't care what the source is or what kind of processing you're doing to them, it's just reporting the result. Just like a VU meter doesn't care what instrument you're playing, it just impartially reports the amplitude regardless of what it sounds like.
Now, with delay, for example, the amount of incoherence (or "anti-phase", as your display calls it) depends greatly upon the dominant frequency of the signal as compared to the length of delay. As an over-simplified example, just to illustrate this point, let's say you have a pure 100Hz sine wave, meaning that the wave cycles from peak to peak and from trough to trough 100 times a second. If you throw a 5ms delay at a copy of it, it will throw the copy 180° out of phase with the original, and if you sum the two together they will perfectly cancel each other out (full phase incoherence or full "anti phase"). If you increase the delay to 10ms, however, the two waves will once again line up - i.e. they will be fully coherent and complementary. In-between delay values will result in in-between phase relationships and a mix of complements and cancellations.
Interestingly enough, if we change the frequency of the sine to 1kHz, the amount of phase coherence at 5ms and 10ms delay will be identical; they will both result in full 100% coherence. In fact any delay that is a whole number of milliseconds - 1ms, 3ms, 7ms, etc. - will give that result, since they all will wind up re-aligning the peaks with the original peaks and the new troughs with the original ones. It's only when we add fractions of milliseconds to the delay value (e.g. 10.5ms, 7.3ms, etc.) that we'll wind up throwing the new wave "out of phase" with the original.
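The arithmetic behind those two paragraphs boils down to one line: the phase shift a delay imposes on a pure tone is 360° × frequency × delay, taken modulo 360. A quick sketch (the function name is mine, just for illustration):

```python
def phase_shift_degrees(freq_hz, delay_s):
    # Phase shift a fixed delay imposes on a pure tone of a given frequency.
    # 180 means full cancellation when summed; 0 means full reinforcement.
    return (360.0 * freq_hz * delay_s) % 360.0

print(phase_shift_degrees(100, 0.005))    # 100 Hz, 5 ms   -> 180.0 (cancels)
print(phase_shift_degrees(100, 0.010))    # 100 Hz, 10 ms  -> 0.0   (realigned)
print(phase_shift_degrees(1000, 0.005))   # 1 kHz, 5 ms    -> 0.0   (5 whole cycles)
print(phase_shift_degrees(1000, 0.0105))  # 1 kHz, 10.5 ms -> 180.0 (the half-ms bites)
```

Which is exactly why whole-millisecond delays leave a 1kHz tone untouched while the fractional ones flip it around.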
In the real world outside of a simple oscillator, however, with real-world instruments like pianos and didgeridoos, the resulting waveform is going to be a mix of frequencies. The more complex the waveform, the harder it is for the dominant frequency (like the fundamental note) to dominate the phase coherence calculations, because there are so many other frequencies mucking up the purity of tone and adding to the unique timbre of that instrument. Predicting how increased or decreased delay times will affect the coherence calculations in such cases requires increasingly difficult math, and the result becomes correspondingly harder to predict. It's much harder to determine just how a given amount of delay is going to affect a saxophone than it is a triangle strike.
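You can see the effect with even a slightly complex tone. In this sketch (my own toy example, with a made-up two-component "instrument"), a 5ms delay completely cancels a pure 100Hz sine when summed with itself, but only partially attenuates a tone made of 100Hz plus a 330Hz overtone, because the same delay lands each component at a different point in its cycle:

```python
import numpy as np

sr = 48000                      # sample rate chosen so 5 ms is a whole 240 samples
t = np.arange(sr) / sr          # one second of audio
delay_samples = int(0.005 * sr) # 5 ms delay

pure = np.sin(2 * np.pi * 100 * t)
# Toy "complex" tone: same fundamental plus a quieter 330 Hz component.
complex_tone = pure + 0.5 * np.sin(2 * np.pi * 330 * t)

def summed_rms(signal, d):
    # Sum the signal with a (circularly) delayed copy of itself
    # and measure the loudness of the result.
    return np.sqrt(np.mean((signal + np.roll(signal, d)) ** 2))

print(summed_rms(pure, delay_samples))          # ~0: the 100 Hz sine fully cancels
print(summed_rms(complex_tone, delay_samples))  # clearly non-zero: 330 Hz survives
```

With more components at more frequencies, each delay value cancels some, reinforces others, and the meter reading becomes that much harder to reason about in advance.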
With that in mind, reverb can indeed muddy up the coherence, because it involves both multiple reflections - each with its own amount of delay - and a shift in frequency response over time (the high end tends to decay faster than the low end). Thus it's often easier for heavily reverbed sounds to introduce phase problems than the identical non-reverbed signal.
This is very similar to what happens with heavily distorted electric guitar, and why the headbangers on this board often seem to have more issues with phase than many of us geriatrics are used to seeing. When one is doubling and quadrupling Gibsons with heavily sustained power chords run through over-driven amps driving the tubes into non-linear distortion, it's a formula that can just scream "phase issues". (This is not an anti-genre point, just a statement of the physics that have to be dealt with when dealing with that stuff.)
But yea, I think that I should start only widening things to the point where they drop out, and perhaps I'm right in thinking that if I can avoid the anti-phase... then I should, regardless of sound, either in mono or stereo.
Well, to sum up (weak pun intended), you'll never completely get rid of it, nor should you necessarily. But keeping them from building up to the point where they significantly degrade the sound of the mix - mono or stereo - is the best any of us can hope for.
Cheers, Mart! Sorry about the looooong post (you should know that I'm prone to those by now.) Good luck with your ivory tickling!
G.