Optimizing for Headphone Mixing

Horkin My Lunch

New member
I currently don't have a pair of monitors, just a pair of DT 770s. I'm not sure if such a thing even exists (other than Redline), but are there any stereo or EQ tweaks that would give a better picture of everything? Basically, something that puts the mix in front of you rather than inside your head.
 
Yes, there is.

It's called getting speakers.

You will never achieve that effect with headphones.
 
I'm well aware of that, but you're missing the point. Mostly, I'm curious how programs like Redline Monitor work. What exactly are they doing?
 
If you're talking about those "realism" algorithms for headphones, all they're doing is "faking it" -- Summing a portion of the signal, adding some light early reflections to mimic a small room, etc.

It can make headphone listening "more pleasant" and "more realistic sounding" - But not "more accurate" by any stretch.

There is no substitute.
 
Space Simulator

The four that I know of are Redline Monitor, Isone Pro, HDPHX (free) and the Virtual Room Monitoring (VRM) system that's on the Focusrite Saffire PRO 24 DSP. There's another program that does something similar called Crossfeed, but it doesn't work as a VST.

The way most of these things work is by attempting to simulate the way your ears hear a set of speakers. When you listen to monitors, your right ear hears the sound from the right monitor first, and then the somewhat quieter sound from the left monitor a fraction of a millisecond later (because the left monitor is farther away from your right ear). You can't consciously notice this effect, but your brain uses those tiny timing and level differences (spatial recognition) to work out the distance and rough direction that the sound is coming from. The other part of this is the natural reverb that the room adds to the sound coming from the monitors, which can vary greatly depending on where you are.
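
Just to put a rough number on that delay: with an average head about 18 cm wide and speakers at the usual +/-30 degrees, the difference works out to roughly a quarter of a millisecond. Here's a quick back-of-the-envelope Python sketch (the head width and the simple straight-line geometry are assumptions, not measured HRTF data):

import math

HEAD_WIDTH_M = 0.18         # assumed average ear-to-ear distance
SPEED_OF_SOUND_M_S = 343.0  # speed of sound at roughly room temperature
AZIMUTH_DEG = 30.0          # typical stereo speaker angle off-center

# Extra distance the sound travels to reach the far ear
# (simple straight-line approximation, ignoring diffraction around the head).
path_difference_m = HEAD_WIDTH_M * math.sin(math.radians(AZIMUTH_DEG))
itd_seconds = path_difference_m / SPEED_OF_SOUND_M_S

print(f"path difference: {path_difference_m * 100:.1f} cm")
print(f"interaural time difference: {itd_seconds * 1e6:.0f} microseconds")
# prints roughly 9.0 cm and ~262 microseconds -- about a quarter of a millisecond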

The most common way to simulate this is to take some of the signal that's coming from the stereo right and mix it (slightly delayed, a bit quieter, and probably with some kind of reverb, filtering and layering) into the signal that's coming from stereo left, and vice versa for the right side.
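
To make that concrete, here's a bare-bones Python/numpy sketch of that kind of crossfeed. To be clear, this is not the actual Redline/Isone/HDPHX code (none of that is public); it's just the general idea, and the delay and attenuation values are illustrative guesses:

import numpy as np

def crossfeed(left, right, sample_rate=44100,
              delay_ms=0.26,         # roughly the interaural delay at 30 degrees
              attenuation_db=-3.0):  # crossfed copy is quieter than the direct signal
    """Return a new (left, right) pair with a simple delayed/attenuated crossfeed."""
    delay_samples = int(round(sample_rate * delay_ms / 1000.0))
    gain = 10.0 ** (attenuation_db / 20.0)

    def delayed(x):
        # Push the signal later in time by padding silence at the front.
        return np.concatenate([np.zeros(delay_samples), x[:len(x) - delay_samples]])

    out_left = left + gain * delayed(right)   # "right speaker" leaking into the left ear
    out_right = right + gain * delayed(left)  # "left speaker" leaking into the right ear
    return out_left, out_right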

How well this works is up for debate. Obviously there are TONS of factors that go into how you hear sound coming from a real set of speakers, compared to how you hear sound coming from a set of headphones. But that is (as I understand it) the way that these programs typically work.

I've tried Redline and Isone Pro. I can't imagine that using either would significantly improve my ability to mix with headphones. They also both add a significant "color" (for lack of a better term) to the sound. Redline in particular adds a lot of reverb, filtering and layering "effects" that you have no control over, so using it on a mix means adding an essentially unknown variable, which is typically not a good thing.

Redline had a two-month demo last I checked, so give it a shot. HDPHX is free, so you can try that one too. I think you'll find that it's basically impossible to tell whether it would improve your mixes. In short, I don't think they're worth the price, and I think they change the sound enough that they add too many unknowns to let you hear the mix objectively.

I'll end by referring to the above (and likely hundreds to follow) posts saying that you'll never get headphones to sound like monitors. Whether you can mix on headphones is a subject for much debate, but it's pretty safe to say that they'll never sound like each other.
 
Headphones work much better for mixing if you cut off your outer ears.

'Cause they skew the EQ balance and mess with the dynamics.

Just sayin'.

JK :D
 
binaural?

If you're talking about those "realism" algorithms for headphones, all they're doing is "faking it" -- Summing a portion of the signal, adding some light early reflections to mimic a small room, etc.

It can make headphone listening "more pleasant" and "more realistic sounding" - But not "more accurate" by any stretch.

There is no substitute.

Are you saying that true binaural would be completely accurate?

It plays into each ear exactly what it would have heard live at that ear -- records it, then later plays that back into the earphone. Why wouldn't that be realistic?
 
I recommend HDPHX (mentioned above). You can find it here:

http://refinedaudiometrics.com/products-hdphx.shtml


It's a pretty subtle thing, but it does a good job of making extreme stereo panning more "speaker-like", in that the extremes don't sound as wide or as fatiguing to listen to. I also find it very transparent in terms of sound coloration: your headphones still sound like they have the correct frequency response.
 
Are you saying that true binaural would be completely accurate?

It plays into each ear exactly what it would have heard live at that ear -- records it, then later plays that back into the earphone. Why wouldn't that be realistic?
Let's detach "accurate" from "realistic" --

It's not realistic to have an immobile room attached to your head. It's not realistic to expect accuracy from something that changes drastically with a movement of less than a millimeter.
 
Let's detach "accurate" from "realistic" --

It's not realistic to have an immobile room attached to your head. It's not realistic to expect accuracy from something that changes drastically with a movement of less than a millimeter.

Massive, have you used any of these plugins? I'd be curious to hear your professional opinion of what they do to the mix. It's hard for me to completely judge what it is that they're doing. It feels like reverb, but I can't put my finger on it.

I wonder if we'll ever get to a point where the room CAN be simulated in this way. It would be a huge change in the studio world if the "studio" weren't necessary anymore.
 
I've tried a few - Actually thought one or two of them made listening more pleasant to some extent - But certainly didn't make headphones more suitable for mixing...
 
It's hard for me to completely judge what it is that they're doing. It feels like reverb, but I can't put my finger on it.

I wonder if we'll ever get to a point where the room CAN be simulated in this way. It would be a huge change in the studio world if the "studio" weren't necessary anymore.
Here's what it says on the HDPHX page:
It takes advantage of the Haas Effect to provide a delayed and attenuated cross-channel mix in each ear to present the sound that would be heard with a stereo speaker system with speakers placed at +/-30 degrees azimuth from front-center.
It sounds like what it's doing, roughly, is introducing a different combination of delay and amplitude to each channel, with some mixing of the two channels' signals, to synthesize the sound of two speakers 60° apart from each other. This would theoretically make it sound like you were sitting at the third apex of an equilateral triangle, with the two "speakers" at the other two.
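
A quick sanity check on that geometry, under the assumption that both virtual speakers sit the same distance from the listener (just basic trigonometry, nothing from the HDPHX page itself):

import math

listener_to_speaker = 1.0  # any distance; the shape depends only on the angles
half_angle_deg = 30.0      # +/-30 degrees azimuth from front-center

# Separation between the two speakers as seen from the listening position.
speaker_separation = 2 * listener_to_speaker * math.sin(math.radians(half_angle_deg))
print(speaker_separation)  # 1.0 -> all three sides are equal, i.e. an equilateral triangle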

My guess - though it doesn't explicitly say this - is that they use the average distance difference from the stereo speakers to each ear (i.e. based on the average width of the human head) as the basis for calculating the delay and amplitude differences for each ear of the headphone.

IOW, when we listen to normal speakers, our brain localizes the direction/location of the sound source by subconsciously comparing the delay and amplitude differences between the same sound hitting the two ears. The left channel sounds a little bit left because it's hitting the left ear slightly louder and slightly earlier than it's hitting the right ear, and vice versa. So my guess is this software just figures that if you want to make the left channel sound like it's 30° off-center, it needs to sound x dB quieter and y milliseconds later in the right ear. So it takes a copy of the left channel information, quiets that copy by x dB and delays it by y milliseconds, and then sends that delayed and quieted copy to the right channel. It does the same kind of thing with the right channel info and the left phone, and voilà!, what was hard-panned in the phones now sounds like it's only 60° apart.

It's an intriguing idea, but it has to be lacking compared to reality, because there's so much more spatial information in a real room than there is in that simple closed calculation. Plus, because of the shape of the outer ears and the directionality of hearing and such, there's slightly more than just amplitude and timing differences between the two ears; there's also going to be some (admittedly small but still naturally important) difference in frequency response between the two ears due to the directionality of our ears. Put another way, I think the assumed HDPHX algorithm acts like we have two ears facing forward with no head in between, instead of two ears on the sides of our head facing sideways.
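
If you wanted to fake that missing head-shadow effect in a simple crossfeed like the sketch earlier in the thread, one crude trick is to low-pass the crossfed copy, since the head blocks highs on their way around to the far ear much more than lows. Again, this is purely an illustration under assumed values (the one-pole filter and the ~700 Hz cutoff are guesses), not what any of these plugins actually does:

import numpy as np

def one_pole_lowpass(x, sample_rate=44100, cutoff_hz=700.0):
    """Very crude one-pole low-pass, standing in for the head's shadowing of highs."""
    a = np.exp(-2.0 * np.pi * cutoff_hz / sample_rate)  # standard one-pole coefficient
    y = np.zeros(len(x))
    prev = 0.0
    for i, sample in enumerate(x):
        prev = (1.0 - a) * sample + a * prev
        y[i] = prev
    return y

# In the crossfeed sketch above, the crossfed copy would then become
#   gain * one_pole_lowpass(delayed(right))
# instead of plain gain * delayed(right).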

I think it comes down to the same thing it always comes down to whenever we're talking about any kind of studio monitoring: whether one finds it useful or not depends entirely upon who you ask, i.e. what their ear/brain wiring can adapt to better. I almost wonder if trying to make headphones sound like "not-headphones" might be just as confusing to some heads as standard headphones are to others.

G.
 