I thought it was best to record around -18dbfs

  • Thread starter: djclueveli
Not really. Remember that when we digitally increase the volume, all we're doing is moving bits over. The LSBs are just getting filled in with zeros, so the precision of the value is not really changing.

It's rather like in regular digital math. If you have a value of, say, 5, and you multiply that by 10 to try and get a 10x "closer look" at that value, you wind up with 50. No more real resolution has been added to the "5" by putting that 0 on the end.
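To make that concrete, here's a minimal sketch of the same idea in base two (plain Python, nothing more than the 5-becomes-50 example in binary):

```python
# Doubling a 16-bit sample value just shifts its bits one place to the left;
# the vacated low bit is filled with a zero, not with new information.
sample = 0b0000_0101_1011_0010        # some arbitrary 16-bit sample value (1458)
louder = sample << 1                  # one bit of digital gain, roughly +6 dB
print(f"{sample:016b} -> {louder:016b}")
# 0000010110110010 -> 0000101101100100
```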
G.

OK, to me the value thing works better with my brain than the bits thing. Yes, we are talking about 16 bits, but binary is just the system we use to encode our voltage values, of which we have +/-32767 in 16 bit. So that said, we are not only moving the bits over, we are spreading them out over a wider range of values. As we go up in db toward 0, our voltage range gets much larger, but our values always come in .00030518509476V steps.

So I guess I'm saying, a db isn't a db isn't a db. It depends what part of the scale we are talking about. At -20db you are working with a smaller range of possible voltages (from 0V out to +/-1V), but the chunk of voltage each step represents doesn't scale down with the range. It is still the same big fat .00030518509476V chunk. And when we are talking about shifting a 6db range up to a new 6db range higher up the scale, there are a lot of possible values that will get "skipped" over, because we have to take a small voltage range and spread it into a larger range of voltages.

I'm not sure I'm making sense anymore, but it feels like it to me anyway...;) Also, I'm starting to think it doesn't really matter...
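For what it's worth, here is the arithmetic behind that step size, assuming the mapping described above (the top 16-bit code, +32767, standing for +10V):

```python
positive_full_scale_volts = 10.0   # assumed: +10 V at the top positive code
positive_codes = 32767             # 16-bit positive codes
lsb_volts = positive_full_scale_volts / positive_codes
print(lsb_volts)                   # 0.00030518509476... volts per step, at any level
```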
 
OK let's try this:

Would it be true that a signal going into an A/D approaching full scale would be able to represent much smaller changes in db than a signal approaching 0VU could? So, like, a signal coming in at -18 could only vary in steps as fine as .02db, whereas a signal coming in at -6dbFS could be represented in steps as fine as .0055db?
 
OK let's try this:

Would it be true that a signal going into an A/D approaching full scale would be able to represent much smaller changes in db than a signal approaching 0VU could? So, like, a signal coming in at -18 could only vary in steps as fine as .02db, whereas a signal coming in at -6dbFS could be represented in steps as fine as .0055db?
*Assuming* that the actual physical precision of the converter circuitry could match the theoretical precision of the binary math, then yes, that would theoretically be true.

(It would, however, only be true if the higher digital range were reached during conversion, as you state it here. Once converted, the precision - and therefore also the resolution - are set. Digitally changing the volume will not increase resolution.)
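For the curious, a rough sketch of that idealized arithmetic (pure 16-bit math, ignoring the converter hardware entirely; the exact figures depend on the assumptions, but the trend is the point):

```python
import math

def db_per_code(level_dbfs, top_code=32767):
    """Approximate size, in dB, of a one-code change at a given signal level."""
    code = top_code * 10 ** (level_dbfs / 20)
    return 20 * math.log10((code + 1) / code)

for level in (-6, -18, -40, -65):
    print(f"{level} dBFS: one code is about {db_per_code(level):.4f} dB")
# -6 dBFS: ~0.0005 dB, -18 dBFS: ~0.0021 dB, -40 dBFS: ~0.0265 dB, -65 dBFS: ~0.459 dB
```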

But in the real world, that was one of the things that Roger Nichols was talking about in the article you cited a couple of posts ago; that when you hit the higher registers of the converter, non-linearity in the circuitry itself becomes a problem, and the non-linear distortion introduced most likely swamps any theoretical resolution differences.

Hmmm...I wonder if this is the same thing that Pipeline was reporting on with his testing of the Pipeline Effect a few pages back....

G.
 
Reggie, yes.

But you'd have to overdrive your preamp to do it. So it's still not a good thing to do. It keeps coming back to the same thing.

I'm really starting to think that "digital resolution" is a crappy way of looking at the bit phenomenon. Seems like it would be more appropriate to call it voltage.

I can't remember anyone ever thinking that more voltage resulted in better "analog resolution"...

Also, remember that tracks sum when you mix. Whether or not you blow it in step 1, you're probably going to have to lower that value (the signal level) back to where it should have been in the first place, or lower. It's going to counteract any imaginary advantage you might have had.

Something that might help out a little, which goes back to the original article about recording vocals, would be a small amount of compression. Again, you'd have to have a good understanding of compression, attack & release settings, etc., to be able to get it right. Just like with wrong gain staging, the damage can't be undone.


sl
 
It won't get represented more accurately, but it doesn't matter. This all gets taken care of in the reconstruction process. Any of those 'lost' discrete steps would represent something happening at a higher-than-Nyquist frequency.
I'm not sure I follow that one, Jay. Even if one is sampling at the Nyquist frequency and no more, the reconstruction is based upon a manipulation of the values of the samples that are indeed there, is it not? So wouldn't A) the exact value of the samples affect the accuracy of the reconstruction, and B) the fact that we're talking about steps of amplitude in samples that are indeed sampled at Nyquist, and not an increase in the frequency of samples beyond Nyquist, mean that we're not talking about high-frequency, Nyquist-forbidden events here, but rather a property that *does* have meaning to the reconstruction process?

G.
 
Reggie, yes.

But you'd have to overdrive your preamp to do it.

Quite likely, yes. Just trying to figure out some concepts. I have no problem with the idea that our gear and even our converters are set up to operate around 0VU. And I'm not having problems keeping my levels in check on the way in either. I am starting to think there is a slight flaw in the way this whole PCM thing works, though. When are those cheap multichannel DSD interfaces gonna come out already? :D
Thanks for providing a good brain workout
 
I'm not sure I follow that one, Jay. Even if one is sampling at the Nyquist frequency and no more, the reconstruction is based upon a manipulation of the values of the samples that are indeed there, is it not? So wouldn't A) the exact value of the samples affect the accuracy of the reconstruction, and B) the fact that we're talking about steps of amplitude in samples that are indeed sampled at Nyquist, and not an increase in the frequency of samples beyond Nyquist, mean that we're not talking about high-frequency, Nyquist-forbidden events here, but rather a property that *does* have meaning to the reconstruction process?



G.
The exact value of a single sample is not very important because of the samples around it. At the frequencies we are talking about, the rise time of the voltage is slow enough that it wouldn't matter if a sample was 1/1000th of a db off. The next sample might be off by that amount the other way and so on. Over time, this all gets smoothed over and averaged out. Any deviation of a single sample would represent a hump on the wave that is happening at a frequency above Nyquist.

The sample error is not cumulative. One sample being off does not lead to the next sample being off, so even if one sample was off by 12db, that would be a 12db spike at 44.1k. Well above Nyquist. All of the errors will be above Nyquist during reconstruction.

I guess what I'm trying to point out is that all the 'detail' that everyone is so worried about losing is stuff that can't be captured, reproduced, or heard anyway.

There is a lot of hand-wringing over minute little details like 'resolution' in a 24 bit world, while proper gain-staging gets tossed out the window when that actually will make a big difference.
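For anyone who wants to see that point in numbers, a minimal sketch (plain Python, idealized 16-bit rounding of a made-up 1 kHz test tone):

```python
import math

fs, freq, n = 44100, 1000.0, 441                  # 10 ms of a 1 kHz tone
errors = []
for i in range(n):
    x = math.sin(2 * math.pi * freq * i / fs)     # the "analog" value, -1..+1
    q = round(x * 32767) / 32767                  # what 16-bit quantization stores
    errors.append(q - x)

print(max(abs(e) for e in errors))                # never more than half a code (~1.5e-5)
print(sum(errors) / n)                            # hovers near zero: nothing accumulates
```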
 
I'm really starting to think that "digital resolution" is a crappy way of looking at the bit phenomenon. Seems like it would be more appropriate to call it voltage.

Agreed!

I can't remember anyone ever thinking that more voltage resulted in better "analog resolution"...

I seem to remember the guy from Spinal Tap said you get better resolution at 11.:p



Maybe Walters can step in and straighten this all out for us!!!!!:p:p


:D
:D:D
:D:D:D
 
The exact value of a single sample is not very important because of the samples around it. At the frequencies we are talking about, the rise time of the voltage is slow enough that it wouldn't matter if a sample was 1/1000th of a db off. The next sample might be off by that amount the other way and so on. Over time, this all gets smoothed over and averaged out.
...
I guess what I'm trying to point out is that all the 'detail' that everyone is so worried about losing is stuff that can't be captured, reproduced, or heard anyway.

There is a lot of hand-wringing over minute little details like 'resolution' in a 24 bit world, while proper gain-staging gets tossed out the window when that actually will make a big difference.

Ah come on, let's not take the easy way out just yet, let's explore a little more...
So at what point does it matter how far off the amplitude of a sample is? Would an inaccuracy of .5db matter to you? 'Cause I think that is about where it is once you get around -65. You try to feed -64.4 into your 16 bit converter, but all it registers is sample value #21 = -63.85. I think.
;)
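For reference, the dbfs values of a few neighboring 16-bit sample codes down at that level, from the idealized math (no dither, no converter hardware):

```python
import math

for code in (19, 20, 21, 22):
    print(code, round(20 * math.log10(code / 32767), 2), "dBFS")
# 19 -64.73, 20 -64.29, 21 -63.86, 22 -63.46  ->  roughly 0.4-0.5 dB between codes
```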
 
Ah come on, let's not take the easy way out just yet, let's explore a little more...
So at what point does it matter how far off the amplitude of a sample is? Would an inaccuracy of .5db matter to you? 'Cause I think that is about where it is once you get around -65. You try to feed -64.4 into your 16 bit converter, but all it registers is sample value #21 = -63.85. I think.
;)
I edited my post after you quoted it. I said:

The sample error is not cumulative. One sample being off does not lead to the next sample being off, so even if one sample was off by 12db, that would be a 12db spike at 44.1k. Well above Nyquist. All of the errors will be above Nyquist during reconstruction
 
Ah come on, let's not take the easy way out just yet, let's explore a little more...
So at what point does it matter how far off the amplitude of a sample is? Would an inaccuracy of .5db matter to you? 'Cause I think that is about where it is once you get around -65. You try to feed -64.4 into your 16 bit converter, but all it registers is sample value #21 = -63.85. I think.
;)
There are two answers to this question.
1. You can't do anything with an individual sample. The samples are turned into vectors that recreate the wave. It's not the stair-step thing that everyone seems to imagine; it doesn't work that way. Any signal under Nyquist will have at least 3 samples to define it. Any small deviation will not be reproduced.

2. I don't think I would notice the difference between a -63.85dbfs signal and a -64.4dbfs signal. If there is other stuff going on in the recording, I doubt I would hear either.
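A sketch of the "not stair-steps" point, assuming ideal (Whittaker-Shannon) reconstruction: each sample scales a sinc pulse, and the sum of those pulses is a smooth, band-limited wave that passes through every sample.

```python
import math

fs = 44100.0

def sinc(x):
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def reconstruct(t, samples):
    """Value of the ideally reconstructed wave at time t (seconds)."""
    return sum(s * sinc(fs * t - n) for n, s in enumerate(samples))

# Sample a 1 kHz sine, then evaluate the reconstruction *between* two sample instants:
samples = [math.sin(2 * math.pi * 1000 * n / fs) for n in range(200)]
t = 100.5 / fs
print(reconstruct(t, samples))                    # close to...
print(math.sin(2 * math.pi * 1000 * t))           # ...the original wave at that instant
```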
 
The samples are turned into vectors that recreate the wave.

Interesting, we need more ways to visualize or try to imagine what's going on during reconstruction. That's a good way to put it; why do all the tutorials always show the damn steps (continuous to discrete, quantization I suppose)? Maybe this thread isn't so bad after all:p:p:p

Thanks for a new way to look at it!


:D
:D:D
:D:D:D
 
The steps help you visualize how the information is stored. They say nothing about reconstruction.

Remember that all the information between each 'step' would be super-high-frequency content that would get thrown away.
 
The next sample might be off by that amount the other way and so on.
I get what you're saying...well, probably not, otherwise I'd simply be saying, "OK, thanks, I get it", and be moving on with my life :D.

I understand that any single sample is irrelevant, and that even if one single full cycle within Nyquist were off, it would probably be audibly irrelevant.

What still trips me up, though, is how the probability of the whole thing works. While any errors will indeed average out over the course of the entire recording, because half will be as far over as the other half are under, all that seems to guarantee is that the overall RMS will not be affected.

But at Nyquist, like you say, it takes slightly over 2 samples per frequency cycle to accurately reconstruct that cycle later on. While the probability is that the errors will "balance out" over time, I don't see any guarantee that two or three samples or even eight samples in a row will even come close to balancing out. One could easily have 8 samples in a row that are undervalued - equalling a full 3-4 cycles that are "wrong" - followed by 8 samples and 3-4 cycles that are overvalued and equally wrong in the other direction. So we have 6-8 cycles in a row that are reproduced wrongly, even though over those cycles the overall average is balanced and accurate. Then repeat those 8-sample couplets over again, and we have now doubled the number of cycles that are inaccurately reproduced. And so on ad nauseam.

That could just as easily be stretched out over a string of 1,000 or 10,000 or 50,000 samples - not just 16 - before the probability distribution between overs and unders actually breaks even. I just don't get how one can guarantee any kind of audible accuracy from probability distribution under those circumstances.

There's probably some kind of mathematical "magic filter" built into the Nyquist reconstruction algorithms that takes care of that issue. But I'll be damned if I can find any explanation or discussion of Nyquist whatsoever that actually explains in fairly lay terms what is actually going on. Our choices seem to be either that basic but inaccurate "stairstep" baloney, or an engineering thesis that's nothing but pages upon pages of equations.

They can explain quantum physics in books for lay people that, while maybe not scientifically 100% accurate, explain things well enough to get a handle on without having to do the calculus. You'd think someone could do this with Nyquist information theory, which is MUCH simpler mathematically.

And yeah, I completely and totally agree that from a practical standpoint, all this talk about voltage/decibel resolution is much ado about nothing - except for the excess of misinformation out there. I really would just like to have a better understanding of the underlying mechanisms behind it all anyway, at the least for my own edification, and at the most to be able to try and correct the misinformation at the core and be able to understand and explain WHY.

Not everybody cares about the why or the how, they just want to know the what; I know that. "Don't tell me how a compressor works, just tell me what to set it to" and all that crap. But I'm wired in a way where the "whys" and "hows" are fundamental to understanding and mastering the subject.

So, sorry for that mini-rant, and sorry for not asking the easy questions :o. If this makes someone's brain hurt or strikes them as irrelevant, they can always change the channel...especially considering that the questions in the OP of this thread were already answered back on the first page ;) .

G.
 
But at Nyquist, like you say, it takes slightly over 2 samples per frequency cycle to accurately reconstruct that cycle later on. While the probability is that the errors will "balance out" over time, I don't see any guarantee that two or three samples or even eight samples in a row will even come close to balancing out. One could easily have 8 samples in a row that are undervalued - equalling a full 3-4 cycles that are "wrong" - followed by 8 samples and 3-4 cycles that are overvalued and equally wrong in the other direction. So we have 6-8 cycles in a row that are reproduced wrongly, even though over those cycles the overall average is balanced and accurate. Then repeat those 8-sample couplets over again, and we have now doubled the number of cycles that are inaccurately reproduced. And so on ad nauseam.

That could just as easily be stretched out over a string of 1,000 or 10,000 or 50,000 samples - not just 16 - before the probability distribution between overs and unders actually breaks even. I just don't get how one can guarantee any kind of audible accuracy from probability distribution under those circumstances.
G.
OK, I'm getting a little over my head as well.
First, audio is not random. It will average over very few samples because the vector can only do so many things in the audio band.

Go back to the stair-step analogy, only redraw it with a real-world signal instead of a 20k sine wave. Even simple averaging across several samples will be able to perfectly reconstruct a 1k sine wave. The errors are too small and go by so quickly that they don't matter.

Second, the audio that has the least precision is also the audio that is really, really quiet. Could you tell the difference between -65dbfs and -64.5dbfs in a mix of other audio that's averaging around -20dbfs? By the time the audio gets up to a usable level, it has enough precision.
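Some rough numbers behind "it averages over very few samples" - how many samples land on each cycle of a tone at 44.1k:

```python
fs = 44100
for freq in (100, 1000, 10000, 20000):
    print(freq, "Hz:", round(fs / freq, 1), "samples per cycle")
# 100 Hz: 441.0, 1000 Hz: 44.1, 10000 Hz: 4.4, 20000 Hz: 2.2
```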
 
Could you tell the difference between -65dbfs and -64.5dbfs in a mix of other audio that's averaging around -20dbfs?

Maybe. For example there's definitely a difference between dithering algorithms at an even lower level. It depends on what the audio at that level is doing in comparison to the other audio and if we subjectively mask it out or have no other frame of reference to compare it to.

There are a lot of very good questions in this thread, keep 'em coming.

Quite a few of the questions brought up in this thread are answered here, see section on quantization:
http://www.headwize.com/tech/dharma_tech.htm
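On the dither point, a hedged little illustration (plain Python, a generic TPDF dither of +/-1 LSB, not any particular product's algorithm): a tone whose peak is under half a code simply vanishes when quantized without dither, but survives, buried in noise, when dither is added first.

```python
import math
import random

random.seed(1)
lsb = 1 / 32767
tone = [0.4 * lsb * math.sin(2 * math.pi * n / 100) for n in range(10000)]   # ~ -98 dBFS peak

plain = [round(x / lsb) for x in tone]
dithered = [round((x + (random.random() - random.random()) * lsb) / lsb) for x in tone]

print(any(plain))                                        # False: every sample becomes code 0
print(sum(d * x for d, x in zip(dithered, tone)) > 0)    # True: the tone is still in there
```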
 
Second, the audio that has the least precision is also the audio that is really, really quiet. Could you tell the difference between -65dbfs and -64.5dbfs in a mix of other audio that's averaging around -20dbfs? By the time the audio gets up to a usable level, it has enough precision.

Depends on how much I want to smash and boost that particular instrument track after it has been recorded. Maybe I want those tails down around -60 to be brought up nearer to the -20 average. ;)
 
Depends on how much I want to smash and boost that particular instrument track after it has been recorded. Maybe I want those tails down around -60 to be brought up nearer to the -20 average. ;)
30db of boost?

I wish I was better at graphics programs. Make a graph where amplitude is the Y and time is the X, and make the Y a logarithmic scale representing the possible voltages (the positive side should be enough to prove the point). Now superimpose an audio signal on top of it. You will see that no matter what you put on it (in the audio band), you won't have hundreds of errors in the same direction in a row. It doesn't happen, because of the sample rate vs. the frequency of the signal.
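For anyone who doesn't want to fire up a graphics program, here's a rough matplotlib sketch along those lines (not exactly the plot described, but it makes the same point): a 1k sine sampled at 44.1k, with the sign of each sample's quantization error plotted underneath. The signs flip constantly rather than running hundreds in a row one way.

```python
import math
import matplotlib.pyplot as plt

fs, freq = 44100, 1000.0
n = list(range(200))
analog = [math.sin(2 * math.pi * freq * i / fs) for i in n]
quantized = [round(x * 32767) / 32767 for x in analog]
err_sign = [1 if q > x else -1 if q < x else 0 for q, x in zip(quantized, analog)]

fig, (top, bottom) = plt.subplots(2, 1, sharex=True)
top.plot(n, analog)                      # the signal itself
top.set_ylabel("signal")
bottom.step(n, err_sign, where="mid")    # which way each sample's quantization error points
bottom.set_ylabel("error sign")
bottom.set_xlabel("sample number")
fig.savefig("quantization_error_signs.png")
```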
 
OK, I'm getting a little over my head as well.
First, audio is not random. It will average over very few samples because the vector can only do so many things in the audio band.
The idea that it can average out or smooth out the quantization errors correctly after only a few samples would seem to me to be dependent upon a balanced distribution of over-values and under-values in the quantization errors. For example, just to pull a number out of the air, if it took only four samples to guarantee good reconstruction, what happens when all four samples err on the low side, which seems to me to be an entirely feasible event?

Yeah, I'm having a hard time keeping my nose above the water level at this point too. I'm sure my question comes out as ignorant to those who truly understand Nyquist on the schematic level, and not just the conceptual level. I have many of the general concepts down, but this question goes to the mechanics of it.

And, yeah, it is a relevant topic, IMHO, even if it does cut rather deep into the engineering. The answers to these questions provide the definitive answer as to just how much difference, if any, "precision" or "resolution" in the amplitude of the sample actually makes.

It's actually quite amazing how little real info is out there on Nyquist theory itself. Everybody dances around its ramifications, and folks like Lavry hint at much of it while explaining something related, but nobody - that I have found yet, anyway - actually explains how it works at anything other than a math-formula, PhD-thesis level.

Anybody know of any books or websites that can do for Nyquist sampling and reconstruction what someone like Timothy Ferris can do for science?
masteringhouse said:
Quite a few of the questions brought up in this thread are answered here, see section on quantization:
http://www.headwize.com/tech/dharma_tech.htm
I read the whole thing, and it really is quite a good article in the flavor of the kind of thing I'm looking for (I understand the concept of 1-bit DSM for the first time now ;) ). Thanks a bunch for that link, Tom :).

But he still didn't quite scratch my itch. It sounds like he's simply dismissing the quantization error as noise that's fudged via the use of cheats like oversampling, filters and dithering. Which, of course, it is. Not exactly headline news there :D.

But I still don't get how the waveform can be reconstructed from inaccurate sampling that's biased either over or under. It would seem to me not that it would create a slightly inaccurate sine wave that can be dithered smooth, for example, but rather that it would - at least potentially - create an entirely different sine wave of different amplitude and slope altogether.
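One bit of back-of-the-envelope arithmetic that may help with the "entirely different sine wave" worry: whatever shape the per-sample errors trace out, that error wave can never be bigger than half a code, which in 16 bit caps it at roughly:

```python
import math
print(20 * math.log10(0.5 / 32767))   # about -96 dBFS: the loudest any error "wave" can be
```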

G.
 