24 bit in and around, 16 bit out?

  • Thread starter: jedblue
Well...AFAIC...it's also just a theory that nothing happens above 22kHz with human hearing..... ;)
There's all sorts of problems with that:

First, there's far more behind that than just theory, almost all of which has to do with the anatomy of the ear, including the ballistic properties of the eardrum responding to the vibrations entering the ear, the size and radius of the cochlea which is designed to bend and focus frequencies in well-measurable ways, and the positioning and response characteristics of the hairs within the cochlea which actually transfer the vibrations into nerve impulses.

Add to this the scores of truly scientific hearing tests over the past century, starting with Fletcher and Munson and refined and repeated by many other scientists and international organizations, not to mention the millions of standard hearing tests run by physicians and audiologists all over the world every day since then. The occasional pseudo-scientific-at-best test purporting to find something super-audible in anything other than maybe a tiny percentage of golden ears can be set aside.

Then importantly, let's not forget that one needs to both capture and reproduce those frequencies in order for it to even make a difference. Easy to do in the digital realm, perhaps. But at the interfaces with the Real World - where we're using things like microphones, preamps, loudspeakers, ADCs, etc. - the reality is that most of the gear is neither designed for nor able to handle that EHF stuff with any accuracy, if at all.

But wait, there's more! The higher the frequency, the lower the energy levels. By the time you get to 20k, any energy that does remain for anything beyond some dog whistles or hyper events such as lightning attacks is so minimal that even if one could hear that high, they'd still miss most if not all of it simply because of the very low audible level to begin with.

G.
 
I wish there was a way to calculate how many man hours have gone into this thread so I could compare it to how many songs I've written and recorded in that same time frame.
 
I wish there was a way to calculate how many man hours have gone into this thread so I could compare it to how many songs I've written and recorded in that same time frame.

Three. No matter the question... the answer is always three. After all, ∇ · r = 3 in Cartesian coordinates. :)

Cheers,

Otto
 
Didn't know which thread to post this in :D

I'm looking at the two WAVs through a Wavepad Sound Editor and the B wav has higher peaks and more detail (resolution). It looks like the A wav is more compressed. Which would sound louder? Why is that?

Don't the differences become undetectable when you have complex wavs?

Not a loudness issue. They were both recorded at the same amplitude. In the example, the only difference is that one was done at 44.1kHz and the other at 88.2kHz.

If you listen carefully, you will hear some "dud" notes in the one done at 44.1kHz, not that either example sounds pretty anyway :D. The effect of aliasing, BTW, doesn't sound too dissimilar to what you get with ring modulation.

This was an example of aliasing at lower sample rates. Definitely different from this 16bit vs. 24bit example.

Anyway, this is more of an audio example than a visual example. Although visually you can see a strong presence of subharmonics, as evidenced by the amplitude changes in each consecutive ramp of the sawtooth. With some notes the subharmonics are at a higher amplitude and lower frequency compared with other notes.
 
I track at 24bit/48kHz and mix through a big analog console to CD.

I avoid all these wars about digital stuff 'cause it all works for me. My stuff on tape sounds like my stuff on an HD recorder. I never think about it and neither do any of the people I record (including myself).

Get good equipment and all this stuff should be a non-issue. All the wars should be about the music and not about a few extra bits.
 
Did you listen to the examples?

What examples, where? That's why I asked you to render both ways and post the results here.

If you can't hear the differences in the examples in that post, then your ears are made of wood.

Are you 12 years old? If you can't have a discussion about science without being insulting, I have no interest in participating. I think the saying is, "Insults are the last refuge of the incompetent."

Or maybe that's incontinent? :D

--Ethan
 
What examples, where? That's why I asked you to render both ways and post the results here.
Ethan, you guys are talking about two different things at this point; I know, because I was in your place just a few days ago in that thread that George is referring to ;).

What he's talking about is the difference in quality of the sounds that the VSTis themselves make when they are run at the differing sample rates, apparently due to weaknesses in the lowpass anti-aliasing filtering in their waveform generation software (or at least in Reaktor, used in the test). He gave a test which pretty definitively shows the VSTi aliasing at 44.1 in a *very* audible way, but keeping the aliasing to an inaudible minimum at 88.2. Again, to stress the point: not because of the sample rate of the converter, or even of the editor (at least not directly), but because of artifacting within the VSTi itself.
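Reaktor's internals aren't public, so this is only a sketch of the mechanism being described: a naively generated (non-band-limited) sawtooth contains harmonics above Nyquist, and those fold back into the audible band when rendered at 44.1k but largely stay out of harm's way at 88.2k. The function names here (`naive_saw`, `bin_mag`) are illustrative, not from any actual VSTi:

```python
import math

def naive_saw(f0, fs, n):
    """Naive (non-band-limited) sawtooth 2*frac(f0*t) - 1, sampled at fs.
    Harmonics above fs/2 fold back into the band as aliases."""
    return [2.0 * ((f0 * i / fs) % 1.0) - 1.0 for i in range(n)]

def bin_mag(x, freq, fs):
    """Amplitude of the component at freq, via a single-bin DFT.
    freq must land on an exact bin (freq * len(x) / fs an integer)."""
    n = len(x)
    re = sum(s * math.cos(2 * math.pi * freq * i / fs) for i, s in enumerate(x))
    im = sum(s * math.sin(2 * math.pi * freq * i / fs) for i, s in enumerate(x))
    return 2.0 * math.hypot(re, im) / n

# One second of a 1 kHz saw. Its 23rd harmonic sits at 23 kHz: above Nyquist
# at fs = 44100, so it reflects down to 44100 - 23000 = 21100 Hz.
mag_44 = bin_mag(naive_saw(1000, 44100, 44100), 21100, 44100)
mag_88 = bin_mag(naive_saw(1000, 88200, 88200), 21100, 88200)
print(mag_44, mag_88)  # audible alias energy at 44.1k; essentially none at 88.2k
```

Well-behaved soft-synth oscillators avoid this by band-limiting their waveforms (or oversampling internally), which is presumably why running the plugin at the higher rate sidesteps the problem.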

HERE is the whole thread, with the links to his files in Post #1. There is unquestionably a difference, and visually one can see the telltale fingerprints of audible levels of aliasing in the lower-rate sample (see post 18 in that thread for a sample FFT analysis of both).

So, George's point - and one well taken, I believe - is that while esoterica like sample rate and bit depth may have only wispy phantoms of difference when it comes to the capturing and reconstruction of analog sources, they can be an important decision point when it comes to the use of some VST/VSTi software such as Reaktor, because that software itself can behave audibly differently because of its own design.

G.
 
There's all sorts of problems with that:

First, there's far more behind that than just theory, almost all of which has to do with the anatomy of the ear, including the ballistic properties of the eardrum responding to the vibrations entering the ear, the size and radius of the cochlea which is designed to bend and focus frequencies in well-measurable ways, and the positioning and response characteristics of the hairs within the cochlea which actually transfer the vibrations into nerve impulses.

Add to this the scores of truly scientific hearing tests over the past century, starting with Fletcher and Munson and refined and repeated by many other scientists and international organizations, not to mention the millions of standard hearing tests run by physicians and audiologists all over the world every day since then. The occasional pseudo-scientific-at-best test purporting to find something super-audible in anything other than maybe a tiny percentage of golden ears can be set aside.

Then importantly, let's not forget that one needs to both capture and reproduce those frequencies in order for it to even make a difference. Easy to do in the digital realm, perhaps. But at the interfaces with the Real World - where we're using things like microphones, preamps, loudspeakers, ADCs, etc. - the reality is that most of the gear is neither designed for nor able to handle that EHF stuff with any accuracy, if at all.

But wait, there's more! The higher the frequency, the lower the energy levels. By the time you get to 20k, any energy that does remain for anything beyond some dog whistles or hyper events such as lightning attacks is so minimal that even if one could hear that high, they'd still miss most if not all of it simply because of the very low audible level to begin with.

Only point I'm making is that JUST 'cuz you can't *hear* it...the sound wave is STILL acting upon our bodies....we don't just shut off those waves that are outside of the fundamental human hearing range.

I know this gets into all that "sixth sense" kinda stuff, but it's too early for science to have complete answers...or even know how our brains fully process all those sound waves we can't *hear*.
The tests and conclusions so far are based on our limitations of measurement and understanding.
 
What examples, where? That's why I asked you to render both ways and post the results here.



Are you 12 years old? If you can't have a discussion about science without being insulting, I have no interest in participating. I think the saying is, "Insults are the last refuge of the incompetent."

Or maybe that's incontinent? :D

--Ethan

Hummm... and judging from that reply up there, you must be a 5 year old and shit for brains? ;)

I am sorry Ethan, I shouldn't assume that people read all the posts in the thread. :) Had you read all the posts in this thread... or at least Ofajen's, then you would have seen that I had linked to the thread I was referring to.

I will play around with the bit-depth test, as I too am curious what will come of it, as the findings in an article I had read some time ago (unfortunately don't remember who wrote it or where I read it) were quite interesting....

So, is that an adult answer enough for you? :p
 
Only point I'm making is that JUST 'cuz you can't *hear* it...the sound wave is STILL acting upon our bodies....we don't just shut off those waves that are outside of the fundamental human hearing range.

I know this gets into all that "sixth sense" kinda stuff, but it's too early for science to have complete answers...or even know how our brains fully process all those sound waves we can't *hear*.
The tests and conclusions so far are based on our limitations of measurement and understanding.
While such speculation can be fun, and may be something for theoretical physicists and biologists to ponder, it is useless to us *as engineers*.

I could just as easily speculate that there is a magic frequency at ten thousand MHz that, when modulated properly, turns the 11 dimensions of spacetime inside out. (Some might argue that all it really takes is a couple of shrooms and a Pink Floyd album played backwards ;))

I have no real evidence of it, our current science gives no indication of it, but it's a fun belief I have and have managed to talk a few others looking for religion into. Does that mean I should give that even a whit of consideration when I cut my next track? Of course not.

It's called "fiction".

Christ, we all have a hard enough time just juggling the audible stuff between 20 and 20k; let's not start worrying about stuff we can't even hear, doesn't make it through our gear anyway, and science doesn't even validate.

I have made the argument in the past that a good compromise - just to keep the zealots quiet - would be a sample rate of 66.15k, but the more I think about it - and the more I experience the huge mounds of bullshit called the public internet - the more I realize just how counterproductive, and therefore stupid, such a concession would be.

G.
 
Hummm... and judging from that reply up there, you must be a 5 year old and shit for brains? ;)

LOL, no, I'm 12 years old. :D

you would have seen that I had linked to the thread I was referring to.

I'm glad Glen chimed in, because I thought you were talking about the superiority of 24 bits for soft-synths. I thought that's what this thread is about, not about aliasing artifacts.

So, is that an adult answer enough for you? :p

Almost. :eek:

(Kidding. But there's still no need to ever toss insults.)

--Ethan
 
I could just as easily speculate...

I'm not looking just to debate for the sake of argument… :) but one can just as easily say that we are also speculating that anything outside our range of conscious hearing is "useless information"...because that speculation is based on the current limitations of our scientific understanding and measurement capabilities.
I really don’t think science has uncovered all there is about hearing and how our brains interpret sound, and what effect sound has on us at a more cellular or even molecular level, and not just what is hitting our ear drums….
…has it?

I don’t think it’s too wild a speculation to suggest that sound may have a greater impact/effect on us than just what we consciously hear. If there are definitive studies that prove to the contrary…I would love to read them. If anything, most studies have focused on just what we consciously hear with our ears.
Did you ever notice how some frequencies can actually “tickle your bones”…or…cause goose bumps down your spine. I’m talking about harmonic excitation of our full bodies…not just the sound waves entering our ears, though has anyone proven that the above-22k frequencies that even enter our ears do nothing…???...are completely ignored by our brains/cells???

AFA "we all have a hard enough time just juggling the audible stuff between 20 and 20k"...yeah, OK, we do, but I don't really see how extending reasonably beyond 22k or using 24/32bit depth is really taxing our current engineering capabilities any more so…???
I’m just saying…what does it REALLY hurt to hedge our bets until there is at least more definitive proof…which is what I think a lot of guys are doing anyway in recording studios.

Bottom line..there are 3 camps right now.
1.) Those that are convinced that 16/44.1 is all that is really needed.
2.) Those that are convinced there is more to it than has been clearly defined.
3.) Those that are not sure either way.

While we can end the debate for the sake of, or lack of, “tangible” evidence at this time…science has only proven things to a certain point, and while that point may be adopted as “good enough for R&R” (I don’t have a real problem with that)…I don’t see how anyone can be so absolutely sure that anything above or below that limited, defined range is just useless information….???
When a full symphony orchestra plays…is there or isn’t there harmonic frequency information outside the 20-20kHz range? Those are natural occurrences…I see no reason to unnaturally limit them if it is possible not to.
It was/is bad enough with the limitations of some other recording technologies…so if digital has the greater dynamics & frequency range capability, why argue against using it…??? :D
 
one can just easily say that we are also speculating that anything outside our range of conscious hearing is "useless information"...

I'd take it even further than that and say that much of what is inside of our range of conscious hearing is useless information.
 
I'd take it even further than that and say that much of what is inside of our range of conscious hearing is useless information.

Yeah...I felt that way during some of the "Grunge/Alt" years. ;)
 
I don't see that, what am I missing?
It can help when looking for aliasing if you think of the Nyquist frequency like a mirror. Frequencies above the Nyquist frequency that sneak past the anti-aliasing filter will be "reflected back" into the audible spectrum. These reflected frequencies are called "aliases" (hence the terminology :) ). Here's an example of a typical idealized graphic representation of this effect (this one from SOS mag):
[Image: sos_aliasing.jpg]
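The mirror analogy boils down to a few lines of arithmetic. As a quick sketch (the folding math is standard; the function name `alias_freq` is just illustrative), this folds any input frequency into the band a given sample rate can actually represent:

```python
def alias_freq(f, fs):
    """Fold frequency f (Hz) into [0, fs/2], the band representable at
    sample rate fs. Anything above Nyquist 'reflects off the mirror'."""
    f = f % fs                        # the sampled spectrum repeats every fs
    return fs - f if f > fs / 2 else f

# A 25 kHz component that sneaks past the anti-aliasing filter:
print(alias_freq(25000, 44100))  # 19100 -- folded back into the audible band
print(alias_freq(25000, 88200))  # 25000 -- below Nyquist at 88.2k, left alone
```

This is why ultrasonic harmonics from a synth running at 44.1k land back down among the audible partials, while at 88.2k the same components stay above the audible band.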

In the FFTs from George's samples, the two charts are taken from the same part (same note) of each sample rate example, with the 44.1k one on top (A) and the 88.2k one on the bottom (B). I put the arrows in to show the most visible of the aliased frequencies, which you'll see are far more subdued in the 88.2k version:
[Image: reaktor_test_2.jpg]


G.
 