If you have a high-quality audio interface, ideally it should make no difference whether you record at 44.1 kHz or 88.2 kHz (or whatever). In the real world, it isn't that easy. To avoid aliasing, the analog-to-digital converters (ADCs) in your audio hardware have a low-pass filter. It rolls off the sound at a particular rate, with essentially a complete cut from the Nyquist point (half the sample rate) upwards.
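A few lines of Python make the aliasing problem concrete: a tone above the Nyquist point, sampled with no filtering at all, produces exactly the same samples as a tone that has folded back down into the audible band. (The names and the 44.1 kHz rate here are just for illustration.)

```python
import math

RATE = 44100                 # assumed sample rate; Nyquist = 22050 Hz

def tone(freq_hz, n_samples, rate=RATE):
    """Sample a sine tone with no anti-alias filtering at all."""
    return [math.sin(2 * math.pi * freq_hz * n / rate)
            for n in range(n_samples)]

# 30 kHz is above Nyquist, so its samples are identical to those of a
# -14.1 kHz tone (30000 - 44100 = -14100): the energy folds back in-band.
above_nyquist = tone(30000, 1000)
folded = tone(-14100, 1000)
```

This is why the filter has to be there: once the samples are taken, there is no way to tell the 30 kHz tone apart from the folded one.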
Since there is no such thing as a perfect filter (particularly in analog electronics), the roll-off may begin significantly below the Nyquist point. Most people won't notice the difference, but if you have good ears, you very well may.
Thus, with lesser-quality filter hardware, it can often be better to record at a higher sample rate and then down-sample to something more sane. Also, I'm told that if you're doing any frequency or time alteration, higher sample rates result in less artifacting. Your mileage may vary.
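The simplest version of that down-sampling step looks something like this. It's only a sketch: averaging pairs of samples is a very gentle low-pass filter, and a real converter would filter much more steeply before throwing samples away.

```python
def downsample_2to1(samples):
    """Naive 2:1 down-sample (e.g. 88.2 kHz -> 44.1 kHz): average each
    pair of input samples.  The averaging acts as only a mild low-pass
    filter, so this is illustrative, not production-quality DSP."""
    return [(a + b) / 2.0 for a, b in zip(samples[0::2], samples[1::2])]
```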
As for the question of dithering and sample rate conversion, there's no reason you couldn't add a little bit of time-domain noise (a random fraction of the sample preceding or following the one you actually use) to make factor-of-two rate conversions (e.g. 88.2 kHz to 44.1 kHz) sound better. In theory, it might give the perception of better high-frequency performance in much the same way that dithering gives the appearance of higher bit depth. I doubt humans could perceive it, though.
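That time-domain-noise idea might be sketched like this, with each retained sample blended with a random fraction of its neighbor. Again, a toy illustration only (a real converter would apply a proper anti-alias filter first); the function name and seeding are my own.

```python
import random

def downsample_2to1_dithered(samples, rng=None):
    """Halve the sample rate (e.g. 88.2 kHz -> 44.1 kHz), blending each
    kept sample with a random fraction of the following sample -- a toy
    sketch of adding time-domain noise during rate conversion."""
    rng = rng or random.Random(0)    # seeded so results are repeatable
    out = []
    for i in range(0, len(samples) - 1, 2):
        frac = rng.random()          # random fraction of the next sample
        out.append((1 - frac) * samples[i] + frac * samples[i + 1])
    return out
```

Whether this buys any audible high-frequency benefit is exactly the open question above.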
What might be nice would be something similar to dithering to help mask the artifacting inherent in non-factor-of-two rate conversions like 96 kHz to 44.1 kHz. That might actually not be a bad idea.... At best, those sorts of conversions hurt my head to think about.
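To show why those conversions are headache-inducing: 96 kHz to 44.1 kHz is a 320:147 ratio, so almost every output sample lands between two input samples and has to be interpolated. Here's a deliberately naive sketch using linear interpolation; real resamplers use polyphase filter banks instead, because linear interpolation aliases audibly on real material.

```python
def resample_linear(samples, src_rate, dst_rate):
    """Naive arbitrary-ratio resampler via linear interpolation
    (e.g. 96000 -> 44100, a 320:147 ratio).  Toy sketch only."""
    n_out = int(len(samples) * dst_rate / src_rate)
    out = []
    for i in range(n_out):
        pos = i * src_rate / dst_rate      # fractional input position
        j = int(pos)
        frac = pos - j
        nxt = samples[min(j + 1, len(samples) - 1)]
        out.append((1 - frac) * samples[j] + frac * nxt)
    return out
```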