Resolution of digital audio goes down as the signal level goes down.
This has absolutely nothing to do with gain staging or where your faders are set or how loud something should be. Line level is a happy place for recording and mixing. Especially tracking. Track too hot and things will run out of headroom and get crunchy.
What happens in PCM audio formats with low bit depth (8 bit or 16 bit) is that as the audio approaches the low end of the scale, there is the potential for something called quantization error.
In basic theory: an analog signal that passes through an A/D converter has to be sampled and quantized. The sample rate of the target format, e.g. 44.1 kHz, means that 44,100 snapshots of your signal are taken per second; that mostly determines the upper limit of the frequencies that can be captured. Quantization is the digitized information about how much power (volume) each sample has. When the incoming signal cannot be represented with reasonable accuracy, it gets rounded. The rounding errors, often just called truncation, result in something called "digital noise".
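If it helps to see that rounding step in code, here's a minimal sketch (my own, in Python, assuming normalized float samples between -1.0 and 1.0; the quantize() helper is just for illustration, not any particular converter's behavior):

```python
def quantize(sample, bits=16):
    """Round a float sample to the nearest step on a signed 16-bit grid."""
    steps = 2 ** (bits - 1)            # 32,768 steps per polarity at 16 bit
    return round(sample * steps) / steps

x = 0.000007                           # a very quiet sample
q = quantize(x)
print(q)                               # 0.0 -- the sample rounds away entirely
print(q - x)                           # the rounding error is the whole sample
```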
"Digital noise" is a bit misleading. Noise in general usually refers to a sound other than your signal. White noise, pink noise, etc. Static. Hiss. "Digital noise" or "quantization noise" is actually distortion. It generates harmonics. A lot of harmonics. A veritable spew of nasty sounding harmonic distortion. If you want to hear what it sounds like up close and personal, there are several videos on youtube that demonstrate this. If you have a signal with real noise and the signal stops, you can still hear the noise. If you have a signal with quantization noise and it stops, you get nothing. The noise stops with the signal. Digital noise is correlated to the signal - it IS the signal.
A sound wave will oscillate between positive and negative values. In order to get from one side to the other, it has to pass through zero. That point is called the zero crossing.
Pheww...
So anyway, if you're in 16 bit audio or something, the distortion from quantization error only really shows up at very low signal levels. 16 bit has a range of about 96 decibels, so the quietest signal you can represent is -96 dBFS, and things only really start to distort audibly at around -80 or so. But any sample that lands close to the zero crossing of a waveform has a tiny instantaneous value, so it's a candidate for quantization error. This can happen hundreds of times per second, at any overall signal level, since every waveform has to cross zero over and over.
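That 96 dB figure is just the usual rule of thumb of roughly 6 dB of range per bit; you can check the arithmetic quickly:

```python
import math

# dynamic range rule of thumb: 20 * log10(2 ** bits)
print(20 * math.log10(2 ** 16))   # ~96.3 dB for 16 bit
print(20 * math.log10(2 ** 24))   # ~144.5 dB for 24 bit
```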
Getting closer to the top doesn't help the problem at all, because it happens at the bottom.
The solution is dither. Dither is noise that controls the LSB (least significant bit) of the audio. So the lowest bit in 16 bit audio or whatever is now being nudged by some kind of random probability generator instead of by the signal itself. It creates a sound much like broadband noise or hiss at a really low level. The quantization errors are still there, but dither traps the harmonics they generate inside the noise it creates. Essentially it decorrelates (removes) the distortion from your signal. The result is your actual signal, at the correct level, even a level that in theory shouldn't exist. Our ears are not really drawn much to broadband noise at infinitesimally small levels. They are drawn to harmonic distortion that sings the very same song.
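Here's a bare-bones sketch of the idea in code, using TPDF (triangular) dither, which is the common flavor. Again this assumes normalized float samples, and real dithering stages (with noise shaping and so on) are more involved than this:

```python
import random

def quantize_with_dither(sample, bits=16):
    steps = 2 ** (bits - 1)
    lsb = 1.0 / steps                                 # size of one quantization step
    # TPDF noise: two uniform randoms summed, spanning about +/- 1 LSB
    noise = (random.random() - 0.5 + random.random() - 0.5) * lsb
    return round((sample + noise) * steps) / steps    # the rounding decision now depends on the noise too
```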
As an example, if you were to record a 1 kHz sine wave at a level of -100 dBFS in 16 bit audio without dither, you would get nothing. Digital black. No signal. The range only goes down to -96, so you're on the outside. Add dither, though, and you'd hear white noise at a very low level. And somewhere underneath that, a 1 kHz sine wave.
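If you want to poke at that example yourself, here's a toy version using the same kind of helpers sketched above (illustrative code only, not how your converter or DAW actually implements it):

```python
import math, random

def quantize(s, bits=16):
    steps = 2 ** (bits - 1)
    return round(s * steps) / steps

def quantize_with_dither(s, bits=16):
    lsb = 1.0 / 2 ** (bits - 1)
    tpdf = (random.random() - 0.5 + random.random() - 0.5) * lsb
    return quantize(s + tpdf, bits)

sr = 44100
amp = 10 ** (-100 / 20)                               # 1 kHz sine at -100 dBFS
tone = [amp * math.sin(2 * math.pi * 1000 * n / sr) for n in range(sr)]

print(any(quantize(s) for s in tone))                 # False: every sample rounds to zero
print(any(quantize_with_dither(s) for s in tone))     # True: nonzero output -- noise, with the tone buried in it
```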
Dither should be applied any time the signal gets quantized. It's mostly automatic (unless you're running plugins with bad code or something), so the only time you really have to pay attention to it is when you mix down and render a project. Some workstations will have it as an option when you render a mix. Others might require you to put a dither plugin on the master buss.