Bit-depth info / question

So you didn't start with a sine wave and sample it at 8 bit, you told your program to generate one and this is what it gave you?

Yes. There is no converter in the process until you try to play the output of the wave file. When I previewed the sine waves before rendering they sounded like sine waves at all levels. After rendering to the file, the damage was done and the D/A reproduced the distortion artifacts. The first one obviously sounds like a DC square wave. The only parameter I changed in the tone generator was level. I don't see how recording an externally generated analog sine wave through a converter could make a difference, but it's easy enough to set up if you have the tools.
 
FYI, I've tried to duplicate Snow Lizard's waveforms in the current version of Audition (Cool Edit is 10+ years ago) and can't get anything other than a good looking sine wave, even zoomed into a point where individual samples (marked by dots) are visible. I've also taken the level down to around -70dBFS. This was with the internally generated tones.

I've also done similar with an external tone generator--with the gear I have, this moves the noise floor to around -85dBFS but I can still get down to very close to that without "funnies" on the wave form.

If I go digging in boxes in the garage I may be able to find a copy of Cool Edit 2.0. If I find it--and can get it to load on any computer I have--I may try again tomorrow.
 
I've tried to duplicate Snow Lizard's waveforms in the current version of Audition (Cool Edit is 10+ years ago) and can't get anything other than a good looking sine wave, even zoomed into a point where individual samples (marked by dots) are visible. I've also taken the level down to around -70dBFS. This was with the internally generated tones.

Same here. Another clue that SL's waveforms are "suspect" (I'm being kind) is the ringing at each transition on the 8-bit sine wave at -42 dBfs. I've never seen an audio editor generate a waveform like that.

--Ethan
 
I couldn't make Sound Forge do that either. It's a rather ancient version, too.
 
You guys are rendering these to 16 bit fixed point files, right? 32 bit float looked fine and sounded okay at -80 dBFS, which is as low as it would let me go.
 
Same here. Another clue that SL's waveforms are "suspect" (I'm being kind) is the ringing at each transition on the 8-bit sine wave at -42 dBfs. I've never seen an audio editor generate a waveform like that.

--Ethan

The waveform generated was a sine wave. The amplitude it was set at was -46.6 dBFS, not -42.

It came back as -42 because I did this at the least significant bit, and the only information it has to store dynamics is 1 bit: either there is sound or there is no sound, zero or one. Each bit covers a 6 dB range, so it put the wave 6 dB up from the bottom of 8-bit's 48 dB range. There is no other way for 8-bit audio to do this. It also explains the DC offset.

The frequency did come back at the same pitch.
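
Here's a quick numpy sketch of what's happening (my assumed parameters and a plain floor quantizer, not Cool Edit's actual internals):

```python
import numpy as np

# One second of a 440 Hz sine at -46.6 dBFS (assumed test frequency).
fs = 44100
t = np.arange(fs) / fs
amp = 10 ** (-46.6 / 20)                  # dBFS to a linear factor
sine = amp * np.sin(2 * np.pi * 440 * t)

# 8-bit WAV stores unsigned samples 0..255 with silence at 128 (ideally
# 127.5, which no code can represent -- hence a built-in half-LSB offset).
codes = np.clip(np.floor(sine * 128 + 128), 0, 255).astype(np.uint8)

print(np.unique(codes))        # [127 128]: a 1-bit square wave with offset
print(20 * np.log10(1 / 128))  # one 8-bit LSB is about -42.1 dBFS
```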
 
The theory about sampling frequency is that two samples is enough for the DAC to accurately reconstruct the waveform. More than that isn't necessary for more accuracy at that frequency; it's to extend the frequency response. Actually I think you need just slightly more than two samples, which is why the sampling frequency has to be more than double the highest frequency.

Yes, two samples is the MINIMUM a DA reconstruction filter needs to reproduce the highest desired frequency. Nyquist doesn't specify how ACCURATE the reproduction of that frequency may be at other sample rates. Regardless of the theory, we still end up with more samples per waveform cycle for the higher frequencies at higher sample rates.

And no, it's not more than two samples at 44.1. The extra samples (44.1 as opposed to 40kHz) are there to accommodate anti-aliasing filters. The only difference between 44.1 and 48kHz is a usable bandwidth of 20,000Hz versus 22,000Hz respectively.
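
A one-line sanity check on those numbers (just fs/2; the gap above the passband is the anti-alias filter's transition band):

```python
# Nyquist frequency is simply half the sample rate.
for fs in (40000, 44100, 48000):
    print(fs, fs / 2)   # 20000.0, 22050.0, 24000.0 Hz
```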

Farview said:
Besides the fact that this is a thread about bit depth and you seem to be talking about sample rate...

The thing that everyone seems to get tripped up on with sample rate and Nyquist is that there is no more resolution to be gotten out of a 20kHz sine wave. Once you can express the wave, that's it.

If there were any more to capture with a higher sample rate, it would be at a frequency above 20kHz.

The only thing a higher sample rate allows you to do is record higher frequency waveforms. That's it.

Well, I was responding to a previous post that asserted the sample rate argument was similar to the resolution argument and claiming kind of what you're claiming above.

I'm not getting tripped on sample rate or Nyquist. I understand them both very well. I'm just reporting my own findings. What I think everyone needs to realise is that Nyquist specifies (again) the MINIMUM amount of samples to represent the upper limit of the frequency bandwidth. My point is that either way you slice it (again), there are MORE samples at 20kHz (and across the board, for that matter) at higher sample rates. I'm not stating what that actually MEANS, however, but it is definitely a notable difference worth stating from 44.1 or 48.

I've attached three sine waves generated in Wavelab representing 20kHz at three different sample rates: 44.1, 96kHz, and 192 kHz. You can clearly see that there are more samples per waveform cycle and therefore more "information" for the DA reconstruction filter to reconstruct the waveform from.

44.1:

20 kHz 24-bit 44.1.webp

96:

20 kHz 24-bit 96 kHz.webp

192:

20 kHz 24-bit 192 kHz.webp


Now, I just want to clarify here I'm not saying that higher sample rates are better, but that my point was merely that there are more samples to represent the higher frequencies IN ADDITION to extending the upper limit of the bandwidth.
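
The arithmetic behind those screenshots is nothing more than fs divided by the tone frequency:

```python
# Samples available per cycle of a 20 kHz tone at each sample rate.
for fs in (44100, 96000, 192000):
    print(fs, fs / 20000)   # 2.205, 4.8 and 9.6 samples per cycle
```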

Take it how you will.

Cheers :)
 
That's what confused me, you were stating the obvious and I assumed you were trying to argue something. My bad.

Btw, what the waveform looks like in the DAW is not necessarily what an o-scope would read at the output. It's just whatever graphical representation the developer decided to give the GUI.

Bottom line is: sampling only works inside the limits of frequency and dynamic range that the sample rate and bit depth can capture. As soon as you go over the edge, everything falls apart rather quickly.

Even if SL's DAW is working correctly and ours aren't, it makes perfect sense that attempting to create a wave 2 dB above the noise floor is going to be a mess.
 
Yes, what we're witnessing there is truncation distortion affecting the waveform at a very low level.

In any case, I'm glad my point is taken now. I didn't mean to derail things so much and was merely addressing an earlier post.

And yes, I realise an oscilloscope won't see the waveform as it's represented in the editor, but the GUI does give a good REPRESENTATION of what INFORMATION the DA is being fed. More dots, so to speak.

All's well that ends well.

Cheers :)
 
And no, it's not more than two samples at 44.1. The extra samples (44.1 as opposed to 40kHz) are there to accommodate anti-aliasing filters. The only difference between 44.1 and 48kHz is a usable bandwidth of 20,000Hz versus 22,000Hz respectively.

It just needs the tiniest fraction over two samples to qualify as "more than" two. That takes effectively none of the bandwidth used by the filter.

But I think the important question is, does higher sampling frequency above 44.1 or 48 sound better?
 
It just needs the tiniest fraction over two samples to qualify as "more than" two. That takes effectively none of the bandwidth used by the filter.

But I think the important question is, does higher sampling frequency above 44.1 or 48 sound better?

That's a hard question to answer because some converters sound better at certain sample rates than others do. That is obviously down to some design thing and not the sample rate on its own. This is one of the reasons why there are so many different opinions. If it were a night and day difference, everyone would agree. (No one argues that recording drums with the internal mic on a cassette deck sounds better than a pair of U87s into a 2-inch deck.)
 
allrite ..... I'll ask.
I understand the basics of digital even though I do little work in it specifically, so I've had little reason to go into depth on some of the theories and fine details, but I've always wondered about the following.
The D/A does the converting of bits to analog, right?
And it takes those samples, individual specific moments of time along the waveform, and reconstructs the original waveform. I've also read that some early, poorly designed converters could have errors as severe as playing wrong pitches.

So there's, I'm guessing, some algorithm the D/A uses to reconstruct the original.
Now .... there are many many places along that waveform that it didn't sample and it has to fill in the blanks, so to speak.
But doesn't that mean it has to assume, for instance, that the overall wave is a sine wave?
What if it isn't?
And it's not cut and dried that specific digital samplings will sound a definite way since different D/As do sound different.

And also, since the D/A is literally producing every sound we hear doesn't that make it essentially a VERY sophisticated synthesizer?
 
Any part of the wave that is small enough not to get sampled is at a frequency above Nyquist. Therefore it would be filtered out and you couldn't hear it anyway.
 
Mo Facta said:
Yes, what we're witnessing there is truncation distortion affecting the waveform at a very low level.

That's pretty much where I was trying to go to get to the resolution thing. The papers I've read on it call it quantization error or quantization noise. Definitely a truncation effect.
 
Lt. Bob said:
The D/A does the converting of bits to analog, right?
And it takes those samples, individual specific moments of time along the waveform, and reconstructs the original waveform. I've also read that some early, poorly designed converters could have errors as severe as playing wrong pitches.

Now .... there are many many places along that waveform that it didn't sample and it has to fill in the blanks, so to speak.
But doesn't that mean it has to assume, for instance, that the overall wave is a sine wave?
What if it isn't?

I've never heard of a converter that got the pitches wrong. To me that would indicate that a sample rate was out of sync, maybe trying to play a 44.1 file at 48 (which I've done before - my bad...) or else the clock was defective.

As for the shape of the sine wave, it's pretty much a pure tone with no distortion. I chose it for the examples I posted to try to get the limitations of the file format to distort the wave so it would be readily apparent and highly visible. Presumably, if you can get a pure tone to come through without distorting, the bit depth should be sufficient not to worry about. It might also be dependent on the frequency of the wave - lower ones take longer to develop.

How do I get there?

For us to hear anything we need something to vibrate. If it vibrates fast enough, say 20 cycles per second, this is the bottom threshold where we can hear a pitch. This is the horizontal axis of the sound sampler, determined by sample rate. Assuming a rate of 44.1 kHz (it really doesn't matter so much) a lot of the accuracy of the sound has to do with the vertical axis which is going to give the sound enough power or volume for us to hear the vibration. The shape of what happens on the vertical axis doesn't affect the speed of how it oscillates. If we hear 440 cycles per second as a sine wave, it's an A note. If we hear 440 cycles per second of a badly distorted sine wave, it's still an A note.

When there's something that happens where it didn't sample, it's the vertical axis, or the bit depth, at fault that causes distortion. If we can get the horizontal AND vertical axes to line up properly to the input signal, the sound should come back clean.

Relatively.

Put it this way: the scale of the sample rate should be sufficient to capture the upper limit of frequencies you want to capture. After that it's up to the bit depth.
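
A quick sketch to back up the "still an A note" point (assumed 440 Hz tone, crushed to just seven levels):

```python
import numpy as np

fs = 44100
t = np.arange(fs) / fs
sine = np.sin(2 * np.pi * 440 * t)
crushed = np.round(sine * 3) / 3        # only 7 levels: badly distorted

# The spectrum still peaks at the fundamental despite the distortion.
spectrum = np.abs(np.fft.rfft(crushed))
freqs = np.fft.rfftfreq(fs, 1 / fs)
print(freqs[np.argmax(spectrum)])       # 440.0 Hz -- still an A
```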

With PCM encoding using linear quantization, the "linear" part (this is for PCM intended for audio recording) means equal step size.

The A/D converter reads the voltage coming in and assigns it a value depending on its amplitude and polarity. This gets coded to a binary number. The D/A takes the number and converts it back into voltage.

We know that each bit adds 6 dB of range. 16 bit gives 96 dB of range. 24 bit gives 144 dB of range.

The voltage steps from the A/D that get converted into binary place markers are all an equal fraction of a volt.

If we want a relative 6 dB increase we have to double the voltage. If we want a relative 6 dB decrease we have to cut the voltage in half.

So at 0 dBfs in 16 bit, we have 65,536 levels to quantize the strength of the signal. At -6 dBfs we have 32,768 levels. Effectively we can quantize to very, very minute fractions of time and power within this range, so the resolution or accuracy should be very good. The system should be able to handle changes in level much better than our ears can.

The resolution cuts in half every time we reduce the level by 6 dB. On a simple sine wave test it started to fall apart for me at -60 dBfs at 16 bit with a 440 cycle wave. This is having the system operate on around 6 bits, or 64 discrete quantization levels. Approximately, anyway. The errors start coming in where the A/D starts to get values like, say, 30.98, 31.65, 31.79, 31.93, 32.06, 31.94, 31.8, 31.66, 30.97.

PCM stores the numbers as 30, 31, 31, 31, 32, 31, 31, 31, 30. Regardless of the shape of the wave, the numbers are close, the frequency is correct and the result is distortion. For practical purposes this is only going to concern someone recording the most dramatic, dynamic, uncompressed symphonic performance.

24 bit resolution can help because it parks most of the junk at the extreme bottom beyond the reach of preamps, mics, amps, speakers and ears. -60 dBfs just became 14 bit audio instead of 6 bit.
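
Here's a small numpy sketch of that arithmetic (assumed 440 Hz test tone, plain rounding, no dither):

```python
import numpy as np

fs = 44100
t = np.arange(fs) / fs
sine = 10 ** (-60 / 20) * np.sin(2 * np.pi * 440 * t)   # -60 dBFS

q = np.round(sine * 32767).astype(np.int16)             # 16 bit, no dither

print(q.max() - q.min() + 1)    # about 67 codes in use: roughly 6-bit audio
print(20 * np.log10(2))         # 6.02 dB per bit, hence ~96 dB for 16 bits
```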


Does this make any sense?
 
The reconstruction filter at the DA employs a low pass filter and an interpolation function based on the Whittaker-Shannon interpolation formula to "fill in the blanks" in order to reproduce a smooth analog signal.
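
A brute-force toy version of that formula (assumed parameters; real DACs implement this as an efficient filter rather than a direct sinc sum):

```python
import numpy as np

# 64 samples of a 3 kHz sine at fs = 8 kHz: about 2.7 samples per cycle.
fs = 8000
n = np.arange(64)
x = np.sin(2 * np.pi * 3000 * n / fs)

# Whittaker-Shannon: x(t) = sum_n x[n] * sinc(t*fs - n). Evaluate on a
# fine grid away from the edges of the finite sample window.
t = np.linspace(24 / fs, 40 / fs, 400)
recon = np.array([np.sum(x * np.sinc(tk * fs - n)) for tk in t])

err = np.max(np.abs(recon - np.sin(2 * np.pi * 3000 * t)))
print(err)    # small; the residual comes from truncating the infinite sum
```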

More info here:

Reconstruction filter - Wikipedia, the free encyclopedia

Cheers :)
 
cool .... thanks ..... I guess my thing for the New Year will be to study up and finally get a handle on the details of the digital process so I can be positive about some of this stuff when I reply to threads.
 
If you ignore antialiasing needed to comply with the Nyquist requirements, you'll have trouble.

If you ignore reconstruction filtering at the output, you'll have trouble.

If you use dither, you get rid of quantization distortion and can easily resolve signals below the noise floor.

That Dither Thing - [English] (nice demo of the last point)

This is signal processing 101. 16 bits is more than enough to exceed any possible dynamic range that can be reproduced in a quiet room. 44.1k is more than enough to perfectly reproduce signals with frequency content to 20kHz.
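
A small sketch of that last point (assumed: ±1 LSB TPDF dither, 16 bit, a tone at -100 dBFS, i.e. below the 16-bit noise floor):

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 44100
t = np.arange(fs * 4) / fs
sine = 10 ** (-100 / 20) * np.sin(2 * np.pi * 440 * t)   # below one LSB

plain = np.round(sine * 32767)                  # undithered: all zeros
tpdf = rng.uniform(-.5, .5, t.size) + rng.uniform(-.5, .5, t.size)
dithered = np.round(sine * 32767 + tpdf)        # TPDF dither, then round

# Correlate each result against the tone: gone without dither, clearly
# present (buried in noise, but recoverable) with it.
ref = np.sin(2 * np.pi * 440 * t)
print(np.dot(plain, ref), np.dot(dithered, ref))
```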
 
pcnj50a said:
If you ignore antialiasing needed to comply with the Nyquist requirements, you'll have trouble.

If you ignore reconstruction filtering at the output, you'll have trouble.

Sort of. These functions are built into the converters. If I understand the reconstruction filter correctly now, it's essentially the same as the anti-aliasing filter. If you don't remove the frequencies above Nyquist there are artifacts. I don't think the audio would be listenable, but I haven't seen a converter that lets you bypass these filters.

My point to Ethan is that permanent data loss from numeric truncation at the bit level isn't countered by removing harmonics above Nyquist. You can't draw a line between two points that aren't connected.

pcnj50a said:
If you use dither, you get rid of quantization distortion and can easily resolve signals below the noise floor.

Understanding bit depth is the cornerstone of understanding dither and noise shaping. Dither doesn't resolve signals in a way that restores the data lost to truncation; it adds noise to cover up the square wave with something much more pleasant. Where it becomes relevant is at any point in the process that can truncate data: very low signal levels, reducing from 24 bit to 16, and running plugins. Distortion is cumulative, so post processing is going to magnify the issue. And apparently there are some plugins that automatically dither their own output and some that don't.

Getting back to 16 vs. 24, I've seen a lot of references on the audio forums that 24 bit improves the performance of plugins. I'm not sure this is really an issue in a DAW environment because the mix engine will be running at even higher resolutions again: 32 bit float, 48 fixed, 64 or what have you. 24 bit still has less truncation as the file is written. I'd be interested in seeing other people's opinions on this, but mine is that dither and/or noise shaping becomes necessary at the last stage when you render the output file.
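
As a rough illustration of the cumulative part (an assumed toy chain of ±3 dB gain stages, not any particular plugin):

```python
import numpy as np

fs = 44100
t = np.arange(fs) / fs
x = 0.1 * np.sin(2 * np.pi * 440 * t)     # a -20 dBFS sine

fixed = np.round(x * 32767)               # 16-bit working copy
for _ in range(30):
    fixed = np.round(fixed * 10 ** (-3 / 20))    # -3 dB, re-truncate
    fixed = np.round(fixed * 10 ** (+3 / 20))    # +3 dB, re-truncate

floated = x * (10 ** (-3 / 20) * 10 ** (+3 / 20)) ** 30   # float path

print(np.max(np.abs(fixed / 32767 - x)))  # error grows with every rounding
print(np.max(np.abs(floated - x)))        # float stays near machine epsilon
```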

That Dither Thing - [English]

This is signal processing 101. 16 bits is more than enough to exceed any possible dynamic range that can be reproduced in a quiet room. 44.1k is more than enough to perfectly reproduce signals with frequency content to 20kHz.

Very nice link. Thank you for that.
 