what they do to the mix is. It's hard for me to completely judge what it is that they're doing. It feels like reverb, but I can't put my finger on it.
I wonder if we'll ever get to a point where the room CAN be simulated in this way. It would be a huge change in the studio world if the "studio" weren't necessary anymore.
Here's what it says on the HDPHX page:
It takes advantage of the Haas Effect to provide a delayed and attenuated cross-channel mix in each ear to present the sound that would be heard with a stereo speaker system with speakers placed at +/-30 degrees azimuth from front-center.
It sounds like what it's roughly doing is introducing a combination of delay and amplitude changes, applied differently to each channel, with some mixing of the two channels' signals, to synthesize the sound of two speakers 60° apart from each other. That would theoretically make it sound like you were at the third apex of an equilateral triangle, with the two "speakers" being the other two.
My guess - though it doesn't explicitly say this - is that they use the average distance difference from the stereo speakers to each ear (i.e. the average width of the human head) as the basis for calculating the delay and amplitude differences for each ear of the headphones.
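Just to put rough numbers on that guess: with a head something like 0.18 m wide and a speaker 30° off-center, the extra path to the far ear works out to only a fraction of a millisecond. Here's a quick back-of-the-envelope sketch (the 0.18 m head width and 343 m/s speed of sound are my assumed round numbers, not anything from the HDPHX page):

```python
import math

# Assumed round numbers, not taken from the HDPHX documentation:
head_width_m = 0.18      # approximate ear-to-ear distance
speed_of_sound = 343.0   # m/s at room temperature
azimuth_deg = 30.0       # speaker angle off front-center

# Simple "flat head" approximation: the path to the far ear is longer
# by roughly head_width * sin(azimuth).
extra_path_m = head_width_m * math.sin(math.radians(azimuth_deg))
delay_ms = extra_path_m / speed_of_sound * 1000.0

print(f"extra path: {extra_path_m * 100:.1f} cm, delay: {delay_ms:.2f} ms")
# -> roughly 9 cm of extra path, about 0.26 ms of delay
```

So the interaural delay being simulated would be somewhere in the quarter-millisecond range, which is squarely Haas-effect territory.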
IOW, when we listen to normal speakers, our brain localizes the direction/location of a sound source by subconsciously comparing the delay and amplitude differences between the same sound hitting the two ears. The left channel sounds a little bit left because it hits the left ear slightly louder and slightly earlier than it hits the right ear, and vice versa. So my guess is this software just figures that if you want the left channel to sound like it's 30° off-center, it needs to be x dB quieter and y milliseconds later in the right ear. So it takes a copy of the left channel's information, quiets that copy by x dB, delays it by y milliseconds, and then sends that delayed and quieted copy to the right channel. It does the same kind of thing with the right channel info and the left phone, and
voilà! What was hard-panned in the phones now sounds like it's only 60° apart.
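If that guess is anywhere near right, the core of it is only a few lines of DSP. Here's a minimal sketch of that kind of delayed/attenuated crossfeed; the 0.26 ms delay and 3 dB attenuation are just my placeholder numbers, not the values HDPHX actually uses:

```python
import numpy as np

def crossfeed(left, right, sample_rate, delay_ms=0.26, atten_db=3.0):
    """Mix a delayed, attenuated copy of each channel into the other,
    per the guessed Haas-style algorithm. delay_ms and atten_db are
    placeholder values, not anything published for HDPHX."""
    delay_samples = int(round(delay_ms / 1000.0 * sample_rate))
    gain = 10.0 ** (-atten_db / 20.0)

    def delayed(x):
        # Shift the signal later in time by delay_samples, zero-padding the start.
        return np.concatenate([np.zeros(delay_samples), x[:len(x) - delay_samples]])

    out_left = left + gain * delayed(right)    # right channel bleeds into the left ear
    out_right = right + gain * delayed(left)   # left channel bleeds into the right ear
    return out_left, out_right
```

At 44.1 kHz that 0.26 ms delay is only about 11 samples, which is part of why the effect is so hard to put your finger on by ear.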
It's an intriguing idea, but it has to be lacking compared to reality, because there's so much more spatial information in a real room than there is in that simple closed calculation. Plus, because of the shape of the outer ears and the directionality of hearing and such, there's slightly more than just amplitude and timing differences between the two ears; there's also going to be some (admittedly small, but still naturally important) difference in the frequency response at each ear due to the directionality of our ears. Put another way, I think the assumed HDPHX algorithm acts like we have two ears facing forward with no head in between, instead of two ears on the sides of our head facing sideways.
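If you wanted to crudely account for that head-shadowing, the usual trick (again, my speculation about what a fancier version might do, not anything HDPHX claims) is to roll off the highs on the crossfed copy before mixing it in, since the head mostly blocks the high frequencies on the way to the far ear:

```python
import numpy as np

def head_shadow_lowpass(x, sample_rate, cutoff_hz=700.0):
    """One-pole low-pass for the crossfeed path, roughly mimicking the way
    the head attenuates highs more than lows at the far ear.
    The 700 Hz cutoff is an assumed ballpark figure, not a measured one."""
    # Standard one-pole coefficient from the RC-filter analogy.
    dt = 1.0 / sample_rate
    rc = 1.0 / (2.0 * np.pi * cutoff_hz)
    alpha = dt / (rc + dt)

    y = np.zeros_like(x, dtype=float)
    acc = 0.0
    for i, sample in enumerate(x):
        acc += alpha * (sample - acc)
        y[i] = acc
    return y
```

You'd run the delayed copy through something like this before adding it to the opposite channel in the crossfeed sketch above; it still wouldn't be a real room, just a slightly less "two ears floating in space" version of the same trick.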
I think it comes down to the same thing it always comes down to whenever we're talking about any kind of studio monitoring: whether one finds it useful or not depends entirely upon who you ask, i.e. what their ear/brain wiring can adapt to better. I almost wonder if trying to make headphones sound like "not-headphones" might be just as confusing to some heads as standard headphones are to others.
G.