No. 44.1k can reproduce a 1k sine wave perfectly. You can sample it at 100MHz and it would still be the same and no more accurate.
Not true. A 1kHz tone may not show up as many discrepancies as, say, a 20kHz tone (or, more realistically, a complex sound wave), but there would still be discrepancies. On A-D conversion the signal is first oversampled at many times the target sample rate. Groups of these oversampled points are then averaged down to a single level representing the mean voltage over the period of time one sample has to cover (1/44100th of a second at 44.1kHz, or 1/96000th of a second at 96kHz). On D-A conversion the signal is oversampled again, but this time the extra samples are an estimation of the original oversampling. This turns the coarsely stepped voltage signal (44.1 or 96) into finer steps, which are then 'ironed out' by low-pass filters, effectively smoothing the stepped voltage into a continuous one that is more analogous to the original. So when you average the oversampled levels down into a single sample, you're much more likely to get an accurate average if each sample covers a shorter stretch of time, i.e. fewer oversampled points per sample (96kHz).
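To put a number on the averaging-window point, here's a rough Python sketch. It's deliberately crude: real converters use proper decimation filters rather than a plain block average, and the oversampling rate is just a figure I've chosen so that both output rates divide into it evenly. Even so, it shows that averaging a 20kHz tone over one 44.1kHz sample period loses noticeably more of the signal (around 3dB) than averaging over one 96kHz sample period (well under 1dB):

```python
import numpy as np

# A 20 kHz test tone at an artificially high "oversampling" rate. 14,112,000 Hz
# is just a convenient number that divides evenly by both 44,100 and 96,000 --
# it is not the modulator rate of any real converter.
f_tone = 20_000.0
f_os = 14_112_000.0
n_points = 141_120                      # 10 ms worth of oversampled points
t = np.arange(n_points) / f_os
x = np.sin(2 * np.pi * f_tone * t)

for f_out in (44_100, 96_000):
    n = int(round(f_os / f_out))        # oversampled points per output sample
    blocks = x[: (len(x) // n) * n].reshape(-1, n)
    y = blocks.mean(axis=1)             # one averaged level per sample period
    loss_db = 20 * np.log10(y.std() / x.std())
    print(f"{f_out:>6} Hz: averaging over one sample period costs "
          f"{loss_db:.2f} dB of the 20 kHz tone")
```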
This averaging and then re-guessing effectively changes the shape of the reproduced waveform slightly, compounded by the fact that the clocks in an audio converter (due to impurities in the crystals used) aren't 100% accurate, so timing discrepancies are introduced in the sampling and re-sampling. The argument comes into its own when you introduce harmonics, as you say, but just because you can't detect a specific frequency (i.e. above 20kHz) doesn't mean you can't detect the effect it has on the overall signal. I have experimented with a very well renowned engineer friend of mine, who can't actually directly hear frequencies above 10kHz, but has without fail detected the difference between a 44.1kHz-sampled sound and a 96kHz-sampled sound (both coming from exactly the same source material), as well as a 1dB adjustment of a 16kHz HF shelf when recalling mixes.
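On the clock-accuracy point, a timing error shows up as an amplitude error of up to 2*pi*f*A*dt (the signal's maximum slope times the timing error), so the same jitter does far more damage to high frequencies than to a 1kHz tone. The 1ns figure below is an arbitrary assumption for illustration, not the spec of any particular converter:

```python
import math

jitter_s = 1e-9                    # assumed worst-case timing error: 1 ns
for f in (1_000, 20_000):          # the 1 kHz and 20 kHz tones discussed above
    worst_error = 2 * math.pi * f * 1.0 * jitter_s   # full-scale amplitude A = 1
    print(f"{f:>6} Hz: worst-case sampling error about "
          f"{20 * math.log10(worst_error):.0f} dBFS")
```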
The fact is that discrepancies are there, whether you can hear them or whether you decide they are insignificant enough for the lower sample rate to suffice... they are there.
Could you possibly have meant 0VU?
My apologies there. I meant -14dBFS = 0VU, but the figures were almost irrelevant. My point was that the analogue input amps feeding the converters have an inherent noise floor/headroom ratio. At 16bit, each bit/quantum represents a larger proportion of the analogue signal than at 24bit. More precisely, in the 16bit system the OR gates used in the converters to denote which bit should represent a certain voltage only trigger once a higher electrical charge has been reached. If those gates triggered at equal levels regardless of bit depth, then when flicking between the two bit depths you would have to recalibrate your input amps each time, because where your 24bit depth read 0VU = -14dBFS, your 16bit depth would read a higher dBFS level. As it is, the calibration level stays the same, but the electrical charge represented by each bit differs.
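To put rough figures on 'each bit representing a larger proportion of the analogue signal': with the same analogue calibration, one 16bit step spans roughly 256 times the voltage of one 24bit step. The +/-10V full scale below is purely an assumption for the sketch, not the input range of any particular converter:

```python
import math

full_scale_volts = 10.0     # assumed +/-10 V full scale, for illustration only

for bits in (16, 24):
    volts_per_step = 2 * full_scale_volts / 2 ** bits
    lsb_dbfs = 20 * math.log10(1 / 2 ** (bits - 1))  # smallest step vs 0 dBFS
    print(f"{bits}-bit: {volts_per_step * 1e6:8.2f} uV per step, "
          f"LSB sits at about {lsb_dbfs:.1f} dBFS")
```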
Granted, yes, you lose bits whenever you attenuate a signal in the digital domain, but you only lose 1 bit for every 6dB of attenuation. If you find your levels need more than that, then there's something seriously wrong with the recording's gain structure. This is also why we have floating point and double precision, which can accommodate these sorts of things. Dude, I'm sorry, but I get the feeling you are grossly misinformed.
Firstly, one doesn't always record the material one ends up mixing, so it can be the case that you need more than 6dB of attenuation/gain. I generally record in a way that means at mixdown my faders rarely move beyond +/- 4dB (disregarding fading in and out, which happens).
Secondly, and this is the point here: attenuation/gain in the digital domain is not as simple as truncation. The algorithms used to apply attenuation/gain introduce artifacts that truncation doesn't. Fader movements are real-time calculations, rather than offline bit truncation.
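The 6dB-per-bit figure itself is just maths (each bit is a factor of two in amplitude, 20*log10(2) ≈ 6.02dB), but a fader move is rarely an exact power of two. Here's a minimal sketch of what I mean, assuming a plain 24bit fixed-point path with no dither; the gain and sample values are arbitrary:

```python
gain = 10 ** (-4.0 / 20)     # a -4 dB fader move: not a power of two
sample = 5_000_002           # an arbitrary 24-bit sample value

exact = sample * gain        # the mathematically exact result
print(f"exact result: {exact:.6f}")
print(f"truncated:    {int(exact)}   (error {exact - int(exact):+.6f} steps)")
print(f"rounded:      {round(exact)}   (error {exact - round(exact):+.6f} steps)")
# Either way the result has to be squeezed back onto the quantisation grid
# (ideally with dither), which is where the extra artifacts come from --
# it is a real-time calculation, not a clean one-bit-per-6dB truncation.
```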
No offense, but I think you may have been hearing things.
No offense, but if I was, so were some other highly experienced engineers and producers. It was done on the album 'Mandé Variations' by Toumani Diabate, with engineers Jerry Boys and Tom Leader and producer Nick Gold also present. The mix was running out of Pro Tools 7.x, through 192 converters at 96kHz, into an SSL E Series console. During the test the 192s were calibrated to -14dBFS with the faders at 0 and a mix put down to 1/2 tape, then calibrated to -18dBFS with the faders at -4dB and a mix put down to the same 1/2 tape. This meant the signal hitting the console was at the same level for each mix, and all that had effectively changed was the Pro Tools fader level. In an A/B playback at equal listening volumes, the unanimous verdict was that stereo width was compromised with the lower fader level in Pro Tools. The objection that the 192's output amp level had also changed was negated by the fact that the result of this test matched precisely the result when the fader levels alone were altered and the monitoring level was then adjusted to give equal listening levels. Admittedly this was on a solo kora recording, which is much more susceptible to noticeable degradation, however the artifacts were there.
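For anyone wanting to check the level matching, the arithmetic is below. The +4dBu = 0VU reference is my assumption for the sketch (the exact reference doesn't change the comparison); the point is that both calibrations deliver the identical analogue level to the console, with only the Pro Tools fader differing:

```python
def console_level_dbu(signal_dbfs, fader_db, cal_dbfs, vu_ref_dbu=4.0):
    """Analogue level leaving the converter for a given digital signal level,
    fader setting, and calibration point (cal_dbfs maps to 0 VU)."""
    return (signal_dbfs + fader_db) + (vu_ref_dbu - cal_dbfs)

# An arbitrary -20 dBFS signal in both cases:
print(console_level_dbu(-20.0, 0.0, cal_dbfs=-14.0))    # -14 dBFS cal, fader at 0
print(console_level_dbu(-20.0, -4.0, cal_dbfs=-18.0))   # -18 dBFS cal, fader at -4
# Both print the same dBu figure; only the fader level inside Pro Tools differs.
```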
Cheers
Well, now that I can agree with.