To amplify what TexRoadkill said, in the analog realm the signal is passed along as a continuous representation of the voltage of the incoming signal. When it hits the limits of the recording medium (in the case of analog tape, the physical limit of the magnetic particles' ability to align to the signal), it starts to distort, but the onset of the distortion is gradual, not a sudden on/off, so the "clipped" waveform is still rounded off. It's subtle right at the boundary where the signal is just right or a bit too hot, and less subtle as you push it further with a hotter signal. This is what "tape saturation" is, and some amount of it is desirable, as it's a warm and aesthetically pleasing distortion.
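If it helps to see the difference in code, here's a rough sketch of that kind of smooth saturation using NumPy, with a tanh curve standing in for the tape's transfer characteristic (real tape doesn't actually follow tanh, it's just a convenient stand-in that rounds peaks off gradually):

```python
import numpy as np

# A rough stand-in for tape-style saturation: a smooth transfer curve (tanh)
# that rounds peaks off gradually instead of chopping them flat.
def soft_saturate(x, drive=2.0):
    return np.tanh(drive * x)

t = np.linspace(0, 0.01, 441, endpoint=False)    # 10 ms of audio at 44.1 kHz
hot_sine = 1.5 * np.sin(2 * np.pi * 440 * t)     # a 440 Hz tone pushed past full scale

saturated = soft_saturate(hot_sine)
print(hot_sine.max(), saturated.max())           # ~1.5 vs ~0.995: peaks rounded, never squared off
```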
However, in a digital system, the sound is represented by "snapshots" of the waveform's amplitude, 44,100 of them every second on a standard audio CD, for example. Each of these samples can only hold a discrete value within the range defined by the number of bits used to represent it. A standard audio CD has 16-bit resolution, meaning each sample has 65,536 possible values; on a CD these are stored as signed integers from -32,768 to +32,767, with 0 representing silence and the extremes representing the loudest level the format can hold.
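Here's a minimal sketch of that quantization step, assuming the usual convention of mapping float amplitudes in the range -1.0 to 1.0 onto signed 16-bit integers:

```python
import numpy as np

# Minimal sketch of CD-style quantization: each sample becomes a signed 16-bit
# integer in [-32768, 32767], so a float amplitude in [-1.0, 1.0] is scaled and rounded.
def quantize_16bit(samples):
    return np.clip(np.round(samples * 32767), -32768, 32767).astype(np.int16)

t = np.linspace(0, 1, 44100, endpoint=False)   # one second at 44,100 samples/sec
tone = 0.5 * np.sin(2 * np.pi * 440 * t)       # a half-scale 440 Hz tone

pcm = quantize_16bit(tone)
print(pcm.dtype, pcm.min(), pcm.max())         # int16, roughly -16384 .. 16384
```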
If the signal is too hot for the analog-to-digital converters, the samples it generates get pinned at the maximum value, and every amplitude hotter than that point produces the same maximum value too. So there can be no smooth transition into the distortion; a sample is either clipped or it's not. When this sound file is played back, the waveform is severely distorted because of the sudden flat top of the waveform, a condition that does not occur in nature. The resulting sound is far from pleasant, and there is no gray area where there's a "little" distortion that sounds good.
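And here's a quick sketch of what that hard clipping looks like at the sample level; any amplitude past full scale just gets pinned at the maximum code, so each peak turns into a run of identical samples (the flat top):

```python
import numpy as np

# Sketch of what an overdriven A/D stage effectively produces: anything past
# full scale pins at the maximum sample code, so each peak becomes a run of
# identical samples, the flat top that makes digital clipping sound so harsh.
def clip_to_16bit(samples):
    return np.clip(np.round(samples * 32767), -32768, 32767).astype(np.int16)

t = np.linspace(0, 0.01, 441, endpoint=False)    # 10 ms at 44.1 kHz
too_hot = 1.5 * np.sin(2 * np.pi * 440 * t)      # 50% hotter than full scale

pcm = clip_to_16bit(too_hot)
flat_tops = np.count_nonzero((pcm == 32767) | (pcm == -32768))
print(f"{flat_tops} of {pcm.size} samples are stuck at full scale")
```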