There are far more thorough technical explanations out there, but they take multiple pages. As briefly as possible, here are some points -
When you record digitally, you are storing x number of bits per sample, at whatever frequency the samples are taken. If you can control levels BEFORE digitization so that you use nearly all the available headroom of the system, and ONLY if you then simply burn that EXACT file to a CD-R, then until CDs have a real standard higher than 16-bit/44.1 kHz you would not see any improvement from using higher than 16-bit conversion for recording (ignoring differences between manufacturers and converter quality).
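A quick sketch of why that headroom matters, assuming the usual rule of thumb that roughly 6 dB of dynamic range corresponds to one bit (the function name and peak levels here are just made up for illustration):

```python
def effective_bits(peak_dbfs, bit_depth=16):
    """Rough estimate: every ~6.02 dB of unused headroom costs about one bit."""
    return bit_depth - (-peak_dbfs) / 6.02

# A recording that peaks at -1 dBFS uses nearly the full 16 bits...
print(round(effective_bits(-1.0), 1))   # ~15.8 bits
# ...while one that peaks way down at -18 dBFS only really uses about 13 bits.
print(round(effective_bits(-18.0), 1))  # ~13.0 bits
```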
However, in a digital workstation of any kind, the minute you change any track in any way, including level, you are performing MATH. In any math operation it is rare for there NOT to be a remainder. If you do the math on a digital audio signal at 16 bits, you are rounding off a lot more information than if you carry it out to 24 or 32 bits, or to 32-bit float, which is effectively a 24-bit mantissa with an 8-bit exponent. This means that everything you do to a 16-bit track loses more information, because you are rounding every answer off to 16 bits before using that truncated answer for yet another calculation.
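Here is a minimal sketch of that rounding effect, assuming an arbitrary made-up chain of gain changes (the gain values and names are purely illustrative, not any particular DAW's processing): quantizing back to 16 bits after every operation accumulates more error than doing the math in high resolution and quantizing only once at the end.

```python
import numpy as np

rng = np.random.default_rng(0)
signal = rng.uniform(-0.5, 0.5, 48000)   # one second of test audio, full scale = +/-1.0

def quantize(x, bits=16):
    """Round to the nearest step of a signed grid at the given bit depth."""
    step = 1.0 / (2 ** (bits - 1))
    return np.round(x / step) * step

recorded = quantize(signal)              # the 16-bit file as it came off the converter
gains = [0.83, 1.21, 0.67, 1.05, 0.91, 1.33, 0.72, 1.18]  # arbitrary chain of level changes

# Path A: round back to 16 bits after every level change.
a = recorded.copy()
for g in gains:
    a = quantize(a * g)

# Path B: do all the math in double precision, quantize once at the end.
b = recorded.copy()
for g in gains:
    b = b * g
b = quantize(b)

# Reference: same 16-bit source, full-precision math, never re-quantized.
ideal = recorded * np.prod(gains)

err_a = np.sqrt(np.mean((a - ideal) ** 2))
err_b = np.sqrt(np.mean((b - ideal) ** 2))
print(f"RMS error, 16-bit math at every step:        {err_a:.2e}")
print(f"RMS error, high-res math, one final quantize: {err_b:.2e}")
```

Path B's error is just the single final rounding step; Path A stacks a fresh rounding error on top of the last one at every operation, which is exactly the loss described above.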
I'm running out of time for now, but basically the higher the resolution you do the MATH in, all the way through a project, the less error you introduce into the final 16-bit product and the closer to a REAL 16 bits of information you end up with on the CD. That's why the same bit depth/sample rate (supposedly) on two different CDs can sound so much better or worse. One CD may really be the equivalent of maybe 10-bit/22 kHz, while the other could actually be using ALL the available quality inherent in 16/44.1 -
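To put a rough number on that comparison (just a back-of-the-envelope sketch using the standard ideal-quantizer formula, not a measurement of any particular CD): a disc that only carries about 10 bits of real information sits roughly 36 dB closer to the noise floor than one that uses the full 16 bits.

```python
def ideal_snr_db(bits):
    """Theoretical SNR of an ideal quantizer: 6.02 * bits + 1.76 dB."""
    return 6.02 * bits + 1.76

print(f"16-bit: ~{ideal_snr_db(16):.0f} dB")  # ~98 dB
print(f"10-bit: ~{ideal_snr_db(10):.0f} dB")  # ~62 dB
```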
Hope that helped some... Steve