What sound is in the 15K and up range?

  • Thread starter: Projbalance
giraffe said:
i'd be willing to bet that in the 15K and up area the differences between a U47 and a B1 (i'm assuming SP) are much greater than half a dB.
Ok, at some specific frequencies that could be a couple of dB. Frankly, I just pulled a couple of condenser model numbers out of my ass to try and illustrate the greater point. To pick at that is to miss the forest for the trees.

When you have a difference between two overall curves that varies from -0.5 dB here to +1 dB there, to -0.75 dB there and back to -0.5 dB over there, it's not a difference of just one dB, even though that's the maximum single deflection. It's a difference that spans a range of as much as 1.75 dB between neighboring regions, and greater than that overall.

The sum of those differences builds across the spectrum into a much larger difference in perceived response overall. I picked the microphone example to point out that even though the differences in the curves' values are not large (they may be larger than half a dB, but rarely more than a few dB at any given frequency), the difference in overall sound is much greater than the charts make it appear. The same can be said, perhaps even more so, for monitors. The vast majority of quality monitors manage to show an extremely flat response from 40 Hz to 15 kHz. You'd think they all sounded the same. We all know how blatantly untrue that is. :)
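To put a quick number on that point, here's a sketch with made-up response offsets (not taken from any real mic chart) showing how the spec-sheet "maximum deflection" understates the total swing between regions of the spectrum:

```python
# Hypothetical frequency -> deviation-in-dB tables for two mics.
# No single point differs by more than 1 dB, yet the overall swing is larger.
curve_a = {100: 0.0, 1000: 0.0, 5000: 0.0, 10000: 0.0, 15000: 0.0}      # flat reference
curve_b = {100: -0.5, 1000: 1.0, 5000: -0.75, 10000: -0.5, 15000: 0.5}  # made-up offsets

diffs = [curve_b[f] - curve_a[f] for f in curve_a]
max_deflection = max(abs(d) for d in diffs)  # what the spec sheet suggests
overall_range = max(diffs) - min(diffs)      # closer to what the ear compares

print(max_deflection)  # 1.0 dB at any single frequency
print(overall_range)   # 1.75 dB swing across the spectrum
```

The `overall_range` figure is the "as much as 1.75 dB in a single locale" idea: the ear hears the relative tilt between regions, not just the worst single point.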

This effect is even more critical at the high and ultrasonic frequencies. A 1 dB difference at 15 kHz is a much larger deflection, in terms of percentage of the total amplitude, than a 1 dB difference at 80 Hz. Put another way, the difference between a 70 dB SPL and a 71 dB SPL signal at 15 kHz will be much more audible than one at the same levels at 80 Hz.
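For reference, here's the standard dB-to-linear conversion (my sketch, not from the post). The ratio itself is the same at any frequency; how audible a given step is then depends on where it lands on the ear's equal-loudness contours, which are far from flat up near 15 kHz:

```python
def db_to_amplitude_ratio(db):
    """Convert a level difference in dB to a linear amplitude ratio."""
    return 10 ** (db / 20)

# A 1 dB step corresponds to roughly a 12% change in linear amplitude.
ratio = db_to_amplitude_ratio(1.0)
print(round((ratio - 1) * 100, 1))  # 12.2 (percent)
```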

The bottom line is that what appear to be small differences in the overall shape of the content above 15 kHz are actually quite audible (assuming, of course, one has the gear to reproduce it properly), more so than at the more conventional frequencies. And it doesn't necessarily take a 3 dB+ bump anywhere to do that.

This is the stuff that makes the difference in both cost and sound between a $500 preamp, microphone, or monitor and a $2,000 one. It's not all just penis extension...unless the engineer behind the knobs is a knob. :D

G.
 
timboZ said:
I have to find a new hobby.

I'm going to give BINGO a try, or make some stupid crafts out of pipe cleaners and beads to sell at craft shows.

I'm going full-time into macaroni pictures and hiring an assistant to handle the macaroni jewelry. Our target season is right around Mother's Day.
 
Relax guys, I was just having some fun. I read somewhere that the smallest change in volume a human can hear is 1 dB; it's probably not true. I don't even know wtf half a decibel is on the fader anyway.

If the original poster hasn't yet gotten a sufficient answer, maybe they could post some screenshots of what they are doing?
 
mshilarious said:
mp3s will typically cut everything over 16 kHz. I don't know exactly why the algorithm does that, but it doesn't enhance the sound quality any.
It actually does indirectly enhance the overall quality of the encoding. And it's usually a gentle roll-off at 18.5 kHz, not 16 kHz.
The idea is that frequencies that high pack so many changes per second into the file that storing them takes up a lot of room. So if you just get rid of them, because many speakers and most ears won't observe any difference anyway, that leaves more space available for lower, audible frequencies. That phasey shimmer sound in the 12 kHz range that we all love to hate about mp3s can be significantly reduced just by encoding properly, and one trick to help this process is to make sure a low-pass is engaged. The lower you can go without noticing a difference, the better for the rest of the spectrum.
 
bleyrad said:
The idea is that frequencies that high pack so many changes per second into the file that storing them takes up a lot of room. So if you just get rid of them, because many speakers and most ears won't observe any difference anyway, that leaves more space available for lower, audible frequencies

Shelving out the higher frequencies does not affect the size of the audio file, and it does not leave more room for lower frequencies. Just as much data storage is applied to the higher frequencies after they've been shelved out as if they hadn't been. Data storage is not pitch-related or pitch dependent.

It's kind of an interesting idea though.
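The point above about uncompressed audio is easy to check: a PCM (WAV-style) stream's size depends only on sample rate, bit depth, and duration, never on what frequencies the samples contain. A quick sketch (the signals here are arbitrary stand-ins):

```python
import math
import struct

sample_rate = 44100
n = sample_rate  # one second of mono audio

def pcm_bytes(samples):
    """Pack floats in [-1, 1] as 16-bit little-endian PCM."""
    return struct.pack(f"<{len(samples)}h", *[int(s * 32767) for s in samples])

# One signal with 17 kHz content, one with that content "shelved out"
bright = [0.5 * math.sin(2 * math.pi * 440 * i / sample_rate)
          + 0.3 * math.sin(2 * math.pi * 17000 * i / sample_rate) for i in range(n)]
dull = [0.5 * math.sin(2 * math.pi * 440 * i / sample_rate) for i in range(n)]

print(len(pcm_bytes(bright)), len(pcm_bytes(dull)))  # 88200 88200 -- identical
```

Lossy formats like mp3 behave differently: their psychoacoustic model reallocates a fixed bit budget, which is where the earlier "more room for lower frequencies" argument applies.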
 
SonicAlbert said:
Shelving out the higher frequencies does not affect the size of the audio file, and it does not leave more room for lower frequencies. Just as much data storage is applied to the higher frequencies after they've been shelved out as if they hadn't been. Data storage is not pitch-related or pitch dependent.

It's kind of an interesting idea though.

I think Bleyrad's quote definitely applies to lossy compression... frequencies that most people can't hear, as well as frequencies that aren't reproduced well on most consumer (read: absolute shit) listening devices, are tossed out. Instead of operating on the Nyquist theorem that the sampling rate should be double the maximum frequency desired, lossy codecs just flat-out throw the information away. Lossy compression also kills anything it deems too quiet compared to the rest of the program material.

But all of that is immaterial in WAV or broadcast files, which have a fixed sampling rate and actually subscribe to the Nyquist theorem for the most accurate response through the hearing range.
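The Nyquist relationship mentioned above, in one line (a trivial sketch, but it shows why CD audio's 44.1 kHz covers the ~20 kHz hearing range with margin):

```python
def min_sample_rate(f_max_hz):
    """Nyquist: capturing frequencies up to f_max requires at least 2 * f_max."""
    return 2 * f_max_hz

print(min_sample_rate(20000))  # 40000 -- CD's 44100 Hz sits comfortably above this
```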

Or so I heard. ;)
 