
Mo Facta
Farts of Nature
Just because it's not immediately perceivable doesn't mean it's not registered in the brain.
The localization cues that the ear/brain has evolved to decode are a result of our millions of years as primitive humans, hunting and being hunted in the wilderness. It is very useful, for instance, to be able to hear the direction from which your enemy is approaching, be it from in front of you, to either side, or behind you. Over time, humans survived and evolved into the creatures we are today in part because they could hear where their predators were coming from; the ones that couldn't died out, and the survivors passed on their strong aural genes. The ear/brain complex essentially decodes two streams of information from two sensory organs (the ears) and infers the location of a source from level and phase disparities, frequency content, and so on. This is why early reflections in reverbs are so effective: it is within those first few milliseconds that the information is gathered and a location is perceived. It is also how stereo sound works, and roughly how 3D video works, by exploiting the stereoscopic nature of having two eyes. (We have two nostrils, so why don't we smell in stereo?)
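If you want to put a rough number on the "phase disparity" part, here's a minimal sketch of the interaural time difference (ITD) the brain decodes, using the classic Woodworth spherical-head approximation. The head radius and speed of sound are just nominal assumed values, not anything specific to a particular listener:

```python
import math

def interaural_time_difference(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    """Approximate ITD (seconds) for a source at the given azimuth,
    using the Woodworth spherical-head model: ITD = (a/c) * (theta + sin(theta))."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound) * (theta + math.sin(theta))

# A source 90 degrees off to one side arrives roughly 0.65 ms earlier at the near ear;
# a source dead ahead arrives at both ears at the same time.
for az in (0, 30, 60, 90):
    print(f"{az:3d} deg -> ITD = {interaural_time_difference(az) * 1e3:.2f} ms")
```

Sub-millisecond differences like these are all the brain needs to place a source on the horizontal plane, which is why those first few milliseconds of a reverb matter so much.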
Interestingly, most mammals, including us, are much better at localizing sound on the horizontal plane than vertically, because our ears sit symmetrically on either side of the head. Ever have a plane fly overhead really low and it scares the shit out of you? We're not wired to localize it well, since most of our predators were terrestrial and on the horizontal plane. Handy to know if you're going to ambush someone from above.
Another thing is that the ear's response is not linear. Trained listeners are able to dissect complex audio material and isolate specific elements, but for the most part the ear/brain integrates what it hears and focuses on whatever is louder and more upfront. Frequencies and sounds that are similar in spectral content and level will be masked by the louder sound. This is why you don't consciously hear the reverb while the music is playing: reverb is largely a mid-range-focused process, and most of a song's harmonic content sits in that same range. However, even though the reverb is masked during playback, the ear still picks up the spatial cues it provides because of the minute delays involved in producing a reverberated sound.
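To make the "minute delays" concrete, here's a minimal sketch of an early-reflection stage: a mono signal fed through a handful of short delay taps, with the left and right channels offset very slightly so the inter-channel timing carries a sense of space. The tap times, gains, and channel offset are made up purely for illustration, not taken from any particular reverb design:

```python
import numpy as np

def early_reflections(dry, sr, taps_ms, gains, width_ms=0.6):
    """Add a few early reflections to a mono signal and return a stereo array.
    Each tap lands slightly later in the right channel than the left, so the
    tiny inter-channel delays act as spatial cues even when the taps themselves
    are masked by the dry signal."""
    n = len(dry)
    out = np.zeros((n, 2))
    out[:, 0] = dry  # dry signal in both channels
    out[:, 1] = dry
    for t_ms, g in zip(taps_ms, gains):
        for ch, offset in enumerate((0.0, width_ms)):
            d = int(sr * (t_ms + offset) / 1000.0)  # tap delay in samples
            if d < n:
                out[d:, ch] += g * dry[: n - d]
    return out

# Example: a single click through an invented pattern of four reflections.
sr = 44100
dry = np.zeros(sr // 2)
dry[0] = 1.0
wet = early_reflections(dry, sr, taps_ms=[12, 19, 27, 41], gains=[0.5, 0.4, 0.3, 0.2])
```

Even though each tap is quieter than the dry signal and sits in the same frequency range, the ear still reads the pattern of arrival times as "room".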
Cheers
