bleyrad said:
Whoa, hold it there. The only smoothing going on is from the antialiasing filter. This is not a form of noise.
Erroneous or misunderstood on both points.
The purpose of the DAC is transforming quantized data into a continuous waveform. That, by definition, is a form of smoothing. Your own description of a rounding error is an inadvertent form of smoothing.
Using the classic definition of noise as any modification of the original data that does not result in a net gain of informational content, A/D conversion is by definition adding noise to the signal through approximation. You'd need an infinite number of bits sampling at an infinitely fast rate for the process to be noise-free.
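To make that approximation error concrete, here's a minimal sketch (my own illustration, in Python; the function name is made up) that rounds an "analog" sine to N bits and measures how far the rounded copy strays from the original:

```python
import numpy as np

# Quantize an idealized analog sine to a given bit depth and report
# the RMS of the residual (the "noise" of approximation) in dB.
def quantization_error_db(bits, n=48000):
    t = np.arange(n) / n
    analog = np.sin(2 * np.pi * 1000 * t)           # idealized analog signal
    step = 2.0 / (2 ** bits)                        # quantizer step size
    digital = np.round(analog / step) * step        # nearest-step approximation
    err = digital - analog                          # error left by rounding
    return 20 * np.log10(np.sqrt(np.mean(err**2)))  # RMS error in dB

for bits in (8, 16, 24):
    print(bits, round(quantization_error_db(bits), 1))
# The error floor drops roughly 6 dB per added bit, but it never
# reaches zero for any finite bit count.
```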
While it could be argued that the intra-sample interpolation performed during D/A conversion is adding information, and is therefore not noise by the strict definition, the fact remains that such information is a synthesized "guess" at how to fill in the gaps. Compared to the original analog signal, it could be called noise in that it falsely represents itself as a copy of the original, which it is not. And turning a digital representation of a triangle wave into an analog sine wave is most certainly noise.
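For what it's worth, the usual mathematical model of that "guess" is Whittaker-Shannon (sinc) interpolation; here's a minimal sketch assuming an ideal reconstruction filter (all the names here are my own illustration):

```python
import numpy as np

# Ideal (Whittaker-Shannon) reconstruction: each point between samples
# is a weighted sum of shifted sinc kernels. Illustrative only.
fs = 8.0                                  # samples per cycle of the test tone
k = np.arange(-64, 64)                    # sample indices
samples = np.sin(2 * np.pi * k / fs)      # the stored digital values

t = np.linspace(-2.0, 2.0, 1000)          # fine "analog" time axis, in cycles
recon = np.array([np.sum(samples * np.sinc(fs * ti - k)) for ti in t])
# recon now closely tracks sin(2*pi*t) between the original sample points.
```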
I'm not even talking about things like filtering off aliasing. I'm talking about the error (read: noise) inherent in the conversion from analog to digital and back again. If those errors infest the data even one bit beyond the least significant bit of the digital word, then those errors sit at a noise level greater than the theoretical noise floor defined by the word length itself.
Put bluntly, it is possible for the A/D-to-D/A conversion process to operate at a higher noise level than what is theoretically (read: mathematically) possible given the size of the digital data stream itself. In the real world a 16-bit scheme would never truly attain its full 96 dB range if the conversion process itself injects a few dB of noise into the stream.
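For reference, that 96 dB figure follows from the usual rule of thumb relating bit depth N to theoretical dynamic range (a back-of-the-envelope sketch of the arithmetic):

\[
\mathrm{DR} \approx 20\log_{10}\!\left(2^{N}\right) \approx 6.02\,N\ \mathrm{dB},
\qquad N = 16 \;\Rightarrow\; \mathrm{DR} \approx 96.3\ \mathrm{dB}.
\]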
My point was simply that this does not render discussion of the "resolution" of any given digital scheme meaningless. The higher the bit width, the higher the overall signal resolution, regardless of any noise induced in the conversion process.
In fact, dynamic range and resolution are two descriptions of the same property, much like electricity and magnetism. I'll grant you that, unlike electricity and magnetism, the fact that they are two faces of the same head perhaps renders discussion of resolution somewhat redundant, and therefore discardable. But I only understand that now, after this conversation; I didn't grasp it 24 hours ago.
However, I do still have a bit of (perhaps semantic) disagreement with this passage of the conversation:
bleyrad said:
gullfo said:
and if we take low level 16 bit and try to raise it, guess what? we're effectively raising the distortion level because we're taking less resolution data and moving up into a more audible region.
Ah see, you've helped me explain my point right here.
You are effectively raising the distortion level relative to the signal, because the signal is now closer to the level of quantization error (-96 dB).
You are not "raising" the distortion level at all when you do this. All you are doing is increasing the number of bits by which the original digital value is represented. In essence, the "resolution" of the data has increased, but the amount of error/noise/distortion has not.
By moving it "up" within a larger word you are increasing the volume of the signal and therefore making it more audible to the human ear. That is of course ultimately important, but it should be understood that it happens because of a shift in position (effectively a change in represented volume), not because of an increase in noise in the data. A quick numerical check of that claim follows below.
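Here's the check (my own sketch; the variable names are made up): quantize a quiet signal to 16 bits, shift it into a 24-bit word, and compare the error before and after:

```python
import numpy as np

# Promoting a 16-bit signal into a 24-bit word by a left shift scales
# the signal and its quantization error by the same factor, so the
# error-to-signal ratio is untouched; only the represented level moves.
fs = 48000
t = np.arange(fs) / fs
analog = 0.001 * np.sin(2 * np.pi * 1000 * t)    # a quiet 1 kHz tone

q16 = np.round(analog * 32767).astype(np.int32)  # quantize to 16 bits
q24 = q16 << 8                                   # "move it up" into 24 bits

err16 = q16 / 32767.0 - analog                   # error in the 16-bit word
err24 = q24 / (32767.0 * 256) - analog           # error after the shift
print(np.allclose(err16, err24))                 # True: no noise was added
```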
G.