Farview said:
I wish this myth would die already. This isn't true. Any time something is resampled, it is upsampled into the MHz range and then downsampled.
You're right that it's not as simple as throwing out every other sample, because of the need to avoid aliasing (the Shannon-Nyquist limit). However, you don't have to upsample to downsample unless the original sample rate is not evenly divisible by the new rate.
Downsampling to an uneven rate is done by applying a low-pass filter to prevent aliasing, boosting the rate to a common multiple of the two rates (interpolating to generate the intermediate sample points), and then decimating down to the target rate. Downsampling to a rate that divides the original evenly can skip that middle step entirely, and so is as simple as applying a low-pass filter and then throwing out every other sample. If you upsample to any common multiple of the two rates first, you will always get the same result back as if you had just filtered and thrown out every other sample.
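For the even-divisor case, the whole thing is short enough to sketch. A minimal Python version using scipy (the 101-tap filter length is just an illustrative choice):

```python
from scipy.signal import firwin, lfilter

def downsample_by_2(x):
    # 2:1 downsample (e.g. 96 kHz -> 48 kHz): low-pass filter below
    # the new Nyquist, then keep every other sample.
    taps = firwin(101, 0.5)        # cutoff at half the old Nyquist = new Nyquist
    filtered = lfilter(taps, 1.0, x)
    return filtered[::2]           # throw out every other sample
```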
If you'd prefer a more accurate set of values, you could weight each output sample as 1/2 S[k] + 1/4 S[k-1] + 1/4 S[k+1] to smooth things. It probably isn't a bad idea to do that, really. Either way, the math is a lot simpler than doing an uneven frequency reduction.
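In numpy terms, that 1/4, 1/2, 1/4 weighting is just a three-tap FIR filter applied before the decimation, something like:

```python
import numpy as np

def smooth_and_halve(s):
    # Weight each sample as 1/4 S[k-1] + 1/2 S[k] + 1/4 S[k+1],
    # then keep every other one. mode="same" keeps the output
    # aligned with the input.
    smoothed = np.convolve(s, [0.25, 0.5, 0.25], mode="same")
    return smoothed[::2]
```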
If you have a theoretically perfect interpolation, then an uneven division is just as good as an even division. In reality, an even division is slightly more precise, though the difference is so minimal that it would be way below the noise floor of even the best converters. If you did it a few thousand times in a row, you might notice a difference...
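If you want to play with both cases, scipy's polyphase resampler takes the up/down factors directly. A quick sketch, assuming a 96 kHz source (the 1 kHz test tone is made up for illustration):

```python
import numpy as np
from scipy.signal import resample_poly

# One second of a 1 kHz test tone at 96 kHz (illustrative input).
x_96k = np.sin(2 * np.pi * 1000 * np.arange(96000) / 96000)

# Even division: 96 kHz -> 48 kHz is a plain 1:2 decimation.
y_48k = resample_poly(x_96k, up=1, down=2)

# Uneven division: 96 kHz -> 44.1 kHz needs a rational ratio.
# 44100/96000 reduces to 147/320: conceptually upsample by 147,
# low-pass filter, then keep every 320th sample.
y_44k = resample_poly(x_96k, up=147, down=320)
```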
Farview said:
You've got to remember that your converters are actually sampling a lot faster than the sample rate (oversampling). That gets pared down to the sample rate that gets stored on the computer. There is a theory that super high sample rates (192k) are actually less accurate because each sample is based on less information.
If that's really a theory, then it is complete crap. A 192 kHz sample is an average across a shorter period of time, and is thus, by definition, a more precise approximation of the signal's value at a single point in time.
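You can sanity-check that with a toy calculation. Assuming a made-up 10 MHz internal rate standing in for the converter's oversampling clock, average a moving signal over one 192 kHz sample period versus one 44.1 kHz sample period and compare each to the true value at the center instant:

```python
import numpy as np

fs_mod = 10_000_000                       # stand-in oversampling rate, 10 MHz
t = np.arange(200_000) / fs_mod
x = np.sin(2 * np.pi * 5000 * t + 1.0)    # a 5 kHz tone, arbitrary phase

t0 = len(t) // 2                          # the instant we care about
true_value = x[t0]

for rate in (192_000, 44_100):
    half = int(fs_mod / rate) // 2        # half of one sample period
    estimate = x[t0 - half : t0 + half].mean()
    print(f"{rate} Hz: error = {abs(estimate - true_value):.6f}")
```

The shorter averaging window lands much closer to the instantaneous value, which is the whole point.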
Consider a survey conducted by random telephone polling. You want to know the typical opinion of a person in San Francisco on some subject. If you interview people in SF, San Jose, and Sacramento and average the results, you might get a more accurate representation of the opinion of Northern California, or a less accurate one if Sac and SJ are not representative of the other areas. In any case, it is a less accurate representation of the opinion of San Francisco.