So I appreciate and agree with what Kenny’s saying there, but...
In that gain staging video he keeps saying things like "it's distorting here, but we're fixing it over here." That's not actually true. If it were actually distorting at one stage, then turning it down further down the line wouldn't fix anything; it would just turn down the already-distorted signal. The real point he's making is that it's NOT distorting at the track, FX, or bus. It's going over 0 dBFS at that point, but the internal processing can handle that. It won't actually distort until you try to push it out your DAC or render it to a fixed-point file format. 0 dBFS is literally defined as the loudest a fixed-point file (and your DAC) can get, but until you actually hit one of those, the signal can get a hell of a lot louder without distorting.
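If you want to convince yourself, here's the idea as a tiny sketch in plain Python (made-up sample values, obviously not Reaper's actual engine code): boost a signal well past full scale, turn it back down later in the chain, and nothing is lost.

```python
samples = [0.5, -0.8, 0.9]          # a signal already near full scale

boost = 10 ** (12 / 20)             # +12 dB as a linear multiplier (~3.98x)
hot = [s * boost for s in samples]  # peaks well above 1.0, i.e. above 0 dBFS

cut = 10 ** (-12 / 20)              # -12 dB applied later in the chain
restored = [s * cut for s in hot]   # back below full scale

# The round trip nulls against the original (to within float rounding):
print(max(abs(a - b) for a, b in zip(samples, restored)))
```

Nothing in the float engine cared that the intermediate values went "over"; the distortion only happens if you force those hot values through something with a hard ceiling.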
If you were to look under the hood of that JS Volume plugin, you'd find that the individual samples aren't really "talking" in dB. The actual sample values are represented on a linear scale, pretty much like a voltage. In fact, they are exactly the ratio of the voltage the DAC would generate to the maximum voltage that DAC can produce. A sample that would make the meter show 0 dBFS will be either -1 or 1, and most normal signals live between those two extremes. If you add 100 dB of gain, what the plugin actually does is multiply each sample by 100,000 (the linear ratio for a dB value is 10^(dB/20), and 10^(100/20) = 10^5). That's A LOT, and obviously the result is a heck of a lot more than ±1. But a 64-bit floating point engine can handle values up to around 10^308 in either direction from 0, which works out to over +6000 dBFS of headroom. Fixed-point files and your DAC only go up to 1, but Reaper has no problem handling numbers way bigger.
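The dB-to-linear conversion is the whole trick, and it's one line of math. A sketch (these helper names are mine, not the plugin's):

```python
import math
import sys

def db_to_linear(db):
    """Convert a dB gain value to the linear multiplier applied to each sample."""
    return 10 ** (db / 20)

def linear_to_db(ratio):
    """Convert a linear sample ratio back to dB."""
    return 20 * math.log10(ratio)

print(db_to_linear(100))   # 100000.0 -- +100 dB multiplies each sample by 100,000
print(db_to_linear(0))     # 1.0 -- 0 dB leaves samples untouched
print(linear_to_db(1.0))   # 0.0 -- a sample value of 1.0 is exactly 0 dBFS

# The theoretical ceiling of a 64-bit float, expressed in dB:
print(20 * math.log10(sys.float_info.max))   # roughly 6165 dB
```

So "over 0 dBFS" in the float engine just means "a sample bigger than 1.0," and there's a staggering amount of room above that before the format itself runs out.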
BUT he chose plugins which don't artificially impose their own limits. Many popular plugins do, though. Real analog hardware has hard limits on how much voltage it can pass, and anything emulating such hardware will also emulate those limits. If you try to push a +100 dBFS signal through one of those, it's going to distort, and turning it down afterwards will definitely not "fix" that distortion.
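Here's what that kind of plugin effectively does, sketched as a hard clip at ±1 (a real emulation would be fancier, but the limit is the point):

```python
def hard_clip(samples):
    """Emulated analog ceiling: nothing gets past +/-1."""
    return [max(-1.0, min(1.0, s)) for s in samples]

signal = [0.5, -0.8, 0.9]
hot = [s * 4 for s in signal]          # pushed ~12 dB over full scale
clipped = hard_clip(hot)               # [1.0, -1.0, 1.0] -- waveform flattened
quieter = [s * 0.25 for s in clipped]  # turn it back down afterwards

print(quieter)   # [0.25, -0.25, 0.25] -- NOT the original [0.5, -0.8, 0.9]
```

The relative shape of the waveform is destroyed at the clip stage, so no amount of downstream gain can un-flatten it. That's the difference between "over 0 dBFS" and actually distorting.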
It's perhaps worth mentioning that this works in the other direction, too. That is, fixed-point files (and DACs) have a limit on the smallest number they can represent (think absolute value: how close to 0 we can get, not how far negative). If a signal is too quiet, the area around the zero crossing starts to get chopped out and we get crossover distortion, sometimes called "quantization" distortion. With 24 or even 16 bit fixed point, that's still so quiet that it'll be lost in the analog noise floor and you'll never hear it. But if you were, say, to render a very quiet file to 16 bit and then try to amplify it back up, you'll definitely start to hear it. In the floating point engine, that's not a thing either. You can turn your signal down by 100 dB and then turn it back up, and (again, as long as no plugins are imposing artificial limits) you won't have lost any information near those zero crossings. There will be no crossover distortion, and it will null with the original.
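A sketch of both round trips, again in plain Python with made-up values. The 16-bit path snaps each sample to one of 32,768 steps between 0 and full scale, which is exactly what kills a very quiet signal:

```python
def to_16bit_and_back(s):
    """Simulate a 16-bit fixed-point round trip: snap to the nearest step."""
    return round(s * 32768) / 32768

quiet = [x / 1_000_000 for x in (0.5, -0.8, 0.9)]   # roughly a -120 dB signal

# Round-trip through 16 bit: everything this quiet snaps to zero.
rendered = [to_16bit_and_back(s) for s in quiet]
print(rendered)   # [0.0, 0.0, 0.0] -- the signal is simply gone

# Round-trip in the float engine: down 120 dB, back up 120 dB.
down = 10 ** (-120 / 20)
up = 10 ** (120 / 20)
restored = [s * down * up for s in (0.5, -0.8, 0.9)]
print(restored)   # effectively [0.5, -0.8, 0.9] again, to within float rounding
```

In the float case the exponent just shifts down and back up; the waveform's shape near the zero crossings is never thrown away.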
Now, we've pretty much been talking about "real-time" playback, but what happens if we do something like Glue items, Apply Track FX, or Render stems? All of these actions create new audio files, and what happens there depends on the format of those files. If those files are fixed point, then they have limits in both directions. Render too hot and it clips. Render too quiet and you get crossover distortion. Either way, you've permanently lost information and there's nothing you can do to get it back. That distortion is baked in and can't be fixed.
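Both failure modes together, sketched as a toy "render to 16-bit" step (a real renderer also dithers, which this deliberately skips):

```python
def render_16bit(samples):
    """Toy fixed-point render: clamp to full scale, then snap to 16-bit steps."""
    out = []
    for s in samples:
        s = max(-1.0, min(1.0, s))            # anything over 0 dBFS clips
        out.append(round(s * 32767) / 32767)  # everything else quantizes
    return out

too_hot = [1.5, -2.0, 0.3]
print(render_16bit(too_hot))   # first two samples are pinned at +/-1.0 for good
```

Once that file is written, the samples really are 1.0 and -1.0; the original 1.5 and -2.0 exist nowhere, which is why turning the file down later can't recover them.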
If those files are floating point, though, then it's again just not an issue; it's almost as though you never left the floating point mix engine to begin with. You can render a floating point file that peaks at +100 dBFS, bring it back in, turn it down 100 dB, and there will be no distortion. Now, for some stupid reason, the default for these kinds of renders is "automatic," which I think uses the (fixed-point) resolution of your interface, but it seems arbitrary and is never actually ideal. I strongly suggest you go find that setting (in Project Settings), change it to a floating point format, and then save that as your default project settings. (Note that this won't help any previously saved projects.) I personally think 32 bit is plenty. This way you don't have to worry if whatever you render is a little too hot or too quiet. You can just render and know that all the information you need will be there.
In fact, ANY time you're rendering ANYthing other than your final distribution file, it should go to floating point. Well, OK, test mixes and the like will have to be fixed point if you're going to play them on other devices, but the mix file you're going to bring back into a mastering session should be floating point.