Digital sampling and stair-stepping explained

That's a fabulous video, and everyone who is interested in audio and music at any level should watch it.

--Ethan
 
Great video! Very interesting. Would someone mind answering a question? How can you add dither to a recording in a DAW such as Cubase? Or is it sort of built in nowadays?
 
I have analogue stairs in my building. They're smoother... :D

(good vid... watched it in the Analogue forum thread.. except the guy's a bit beardy...)
 
Excellent video! I thought I understood the relationship between bit depth and sample rate, as well as their relative effects on sound, but I was dead wrong! This video explained everything clearly.

Thanks for sharing!
 
Maybe the title of this thread should be changed from "explained" to "de-bunked".
One thing I still don't quite understand.
If there is only one response curve that goes through all the sample points, WHAT exactly is calculating that curve? A puter chip or program in the D/A converter of some kind?
Isn't it possible for it to calculate wrongly?
 
One thing I still don't quite understand.
If there is only one response curve that goes through all the sample points, WHAT exactly is calculating that curve? A puter chip or program in the D/A converter of some kind?
Isn't it possible for it to calculate wrongly?
The thing that is hardest to wrap your head around is the part about the signal being band limited. Any 'detail' that would exist between the samples would be at a frequency above Nyquist (and above our ability to hear), which rules out all of the other, seemingly possible, paths.

For example: at a 44.1k sample rate, if the original waveform had a squiggle in it that fell between two samples, the frequency of that squiggle would have to be higher than the sample rate, which is twice the Nyquist frequency and far higher than anyone claims to be able to hear. So that squiggle couldn't really exist in the first place; it would be filtered out in the conversion process, so it could not possibly be part of the reconstruction. That's why, even though the possibilities seem limitless, there really is only one possible waveform that intersects those points.
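If it helps to see that uniqueness in numbers, here is a minimal numpy sketch (the sample rate, test frequencies, and names are picked purely for illustration). It keeps only the samples of a band-limited signal, then uses the textbook Whittaker-Shannon sinc sum to evaluate the waveform halfway between two samples; the result matches the original signal at that instant, give or take the small error from truncating the sinc sum to a finite window.

```python
# Sketch: a band-limited signal is fully determined by its samples.
# Reconstruct a value *between* two samples with the Whittaker-Shannon
# sinc sum and compare it to the original continuous-time signal.
import numpy as np

fs = 44100.0                             # sample rate
n = np.arange(4410)                      # 100 ms worth of sample indices
t_samp = n / fs                          # the sample instants

def bandlimited(t):
    # all content well below Nyquist (22.05 kHz)
    return (np.sin(2 * np.pi * 1000 * t)
            + 0.5 * np.sin(2 * np.pi * 7300 * t + 0.3)
            + 0.2 * np.sin(2 * np.pi * 15000 * t + 1.1))

x = bandlimited(t_samp)                  # the stored samples

def reconstruct(t):
    # sum of sinc pulses, one per sample, evaluated at time t (seconds)
    return np.sum(x * np.sinc(fs * t - n))

t_mid = t_samp[2000] + 0.5 / fs          # halfway between two samples
print("original     :", bandlimited(t_mid))
print("reconstructed:", reconstruct(t_mid))   # agrees to several decimal places
```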
 
One thing I still don't quite understand.
If there is only one response curve that goes through all the sample points, WHAT exactly is calculating that curve? A puter chip or program in the D/A converter of some kind?
Isn't it possible for it to calculate wrongly?
The only place the curve is actually "calculated" is inside your computer, for display on the screen. I guess you might say that the reconstruction filter is what does the final "calculation" to generate the smooth curve we hear coming out of the speakers, but that is a bit of a stretch.

At the beginning of the video he says it is a response to some debate he stirred up in something else he had posted. I would imagine that has the answer you're looking for. I recall having read something that explained it pretty well a while back, but I have no idea where to start looking for it, and I know I don't understand it quite well enough to describe it any better than I just did.
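For what it's worth, the digital half of the reconstruction in a modern oversampling DAC can be modelled very roughly as "insert zeros between the samples, then low-pass at the original Nyquist frequency". A sketch of that idea, assuming scipy is available; the oversampling factor and filter length are arbitrary choices, not any particular chip's design:

```python
# Rough model of an oversampling reconstruction (interpolation) filter:
# zero-stuff the samples, then low-pass filter at the original Nyquist.
import numpy as np
from scipy.signal import firwin, upfirdn

fs, up = 44100, 8                               # 8x oversampling, arbitrary
t = np.arange(0, 0.02, 1 / fs)
x = np.sin(2 * np.pi * 3000 * t)                # band-limited test tone

taps = firwin(255, cutoff=fs / 2, fs=fs * up)   # low-pass at 22.05 kHz
smooth = upfirdn(taps * up, x, up=up)           # zero-stuff, filter, compensate gain

# 'smooth' now has 8 points per original sample and no stair-steps:
# it traces the one band-limited curve through the original samples
# (give or take the filter's delay and its finite-length approximation).
```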
 
The thing that is hardest to wrap your head around is the part about the signal being band limited. Any 'detail' that would exist between the samples would be at a frequency above Nyquist (and above our ability to hear), which rules out all of the other, seemingly possible, paths.

For example: at a 44.1k sample rate, if the original waveform had a squiggle in it that fell between two samples, the frequency of that squiggle would have to be higher than the sample rate, which is twice the Nyquist frequency and far higher than anyone claims to be able to hear. So that squiggle couldn't really exist in the first place; it would be filtered out in the conversion process, so it could not possibly be part of the reconstruction. That's why, even though the possibilities seem limitless, there really is only one possible waveform that intersects those points.
Note that he said "squiggle" there, not "peak". A squiggle implies more than one peak between samples. As the video demonstrates, it is perfectly possible for the DAC to produce single peaks that fall between samples, as long as they are part of "squiggles" with frequencies below Nyquist. And it is perfectly possible for these "intersample peaks" to be higher than the adjacent samples that imply them. That is what causes "intersample overs", and why we need to be a little careful about slamming the signal right up to 0 dBFS. It is possible (maybe even likely) that no single sample quite reaches "all bits on", but when the signal is reconstructed it ends up louder. At that point you're at the mercy of the headroom of the analog circuitry downstream.
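A quick way to convince yourself of that is the textbook worst case: a sine at exactly a quarter of the sample rate, phased so every sample lands 45 degrees away from the true peaks. A small numpy sketch (the reconstruction here is a plain sinc sum, not any particular converter's filter): the samples never exceed 0 dBFS, but the reconstructed waveform peaks roughly 3 dB higher.

```python
# Intersample overs: samples that never exceed 0 dBFS can imply a
# reconstructed peak that does. Worst case: an fs/4 sine sampled
# 45 degrees away from its peaks.
import numpy as np

fs = 44100
n = np.arange(1024)
x = np.sin(2 * np.pi * (fs / 4) * n / fs + np.pi / 4)   # every sample is +/-0.7071
x /= np.max(np.abs(x))                                  # samples now hit exactly +/-1.0 (0 dBFS)

# reconstruct on a 16x finer time grid with a sinc sum
t_fine = np.arange(0, len(n), 1 / 16)                   # in units of the sample period
recon = np.array([np.sum(x * np.sinc(t - n)) for t in t_fine])

print("max sample       :", np.max(np.abs(x)))          # 1.0  (0 dBFS)
print("max reconstructed:", np.max(np.abs(recon)))      # ~1.41 (about +3 dB over full scale)
```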
 
Informative post, and the video was well done too. I understand 16/44.1 vs. 24/48 and the whole AD/DA process much better now. Dither was well explained too.
 
You add a dither plugin in the bottom insert slot on the master output channel. Cubase comes with a dither plugin.

Isn't this usually done at rendering or capture?

Really good video, and the guy explained it well. It might be too much information for some, but it seems like it would make a good sticky.
 
Isn't this usually done at rendering or capture?

It should be done as late as possible, preferably as the last step when you render to your final version's sample rate and bit depth, like 16-bit/44.1 kHz for CDs.

Hmm, let me rephrase that: it should be done whenever you convert from a higher bit depth to a lower bit depth, say 24-bit to 16-bit. Usually you will do all your mixing and processing at the higher bit depth, then down-convert to the lower bit depth when you render your final version.
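In code terms the whole operation is just "add about one 16-bit LSB of triangular noise, then round". A minimal sketch, assuming numpy; the function name and parameters are chosen purely for illustration, this is not how any particular DAW implements it, and it ignores noise shaping:

```python
# Minimal sketch of a dithered bit-depth reduction: float mix -> 16-bit.
# TPDF dither: two independent rectangular noises summed, about +/-1 LSB peak.
import numpy as np

def to_16bit_dithered(x, seed=0):
    """x: float signal in -1.0..1.0 (e.g. a 24-bit or 32-bit float mixdown)."""
    rng = np.random.default_rng(seed)
    scaled = x * 32767.0                              # scale to 16-bit full scale
    tpdf = rng.uniform(-0.5, 0.5, x.shape) + rng.uniform(-0.5, 0.5, x.shape)
    return np.clip(np.round(scaled + tpdf), -32768, 32767).astype(np.int16)

# usage: dither once, as the very last step before writing the 16-bit file
fs = 44100
t = np.arange(0, 1.0, 1 / fs)
quiet = 0.001 * np.sin(2 * np.pi * 440 * t)           # a very quiet tone, worst case for truncation
out = to_16bit_dithered(quiet)
```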
 
Something made me think of this thread, so I'm giving it a necro-bump!

I really do think this video should be a sticky.
 
Well then I'll go ahead and reply to this finally.

It should be done as late as possible, preferably as the last step when you render to your final version's sample rate and bit depth, like 16-bit/44.1 kHz for CDs.

Hmm, let me rephrase that: it should be done whenever you convert from a higher bit depth to a lower bit depth, say 24-bit to 16-bit. Usually you will do all your mixing and processing at the higher bit depth, then down-convert to the lower bit depth when you render your final version.
It's usually best to add the dither when you are done messing with the levels. If you add just barely enough noise to mask the quantization error and then turn the whole thing down, the noise basically disappears and stops doing what it's supposed to be doing. If you add the noise and then turn the whole thing up, you've now got more noise than you really need, and it might even kind of "un-dither" again in a way. I suppose that's what this post says, but just to put a finer point on it: don't fuck with it after you dither.
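The back-of-envelope numbers make the point (the 12 dB gain change is just an illustrative figure): the dither is sized to roughly one step of the target word length, and any gain change afterwards resizes it.

```python
# Why gain changes after dithering defeat the purpose: the dither is sized
# to about one LSB of the target word length, and gain rescales it.
lsb = 1 / 32768                               # one 16-bit step, relative to full scale
dither_peak = 1.0 * lsb                       # TPDF dither sized for a 16-bit quantizer

print(dither_peak * 10 ** (-12 / 20) / lsb)   # down 12 dB -> ~0.25 LSB, too small to do its job
print(dither_peak * 10 ** (12 / 20) / lsb)    # up 12 dB   -> ~4 LSB, more noise than needed
```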
 