Since, AFAICT, nobody really answered the original question....
RMS:
Root
Mean
Square
This is the same thing as the quadratic mean. To calculate this correctly, take a period of audio (choose the period as desired) and do the following:
- Take every sample within the period and compute the square of the value.
- Add those squares together into a single total value---the sum of the squares.
- Divide that total by the number of samples. This is the mean (average) of the squares.
- Take the square root of the result to get the root of the mean of the squares.
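The steps above can be sketched in a few lines of Python. This is a minimal illustration, not a library function; it assumes samples arrive as a plain list of numbers (the helper name `rms` is just for this example):

```python
import math

def rms(samples):
    """Root-mean-square (quadratic mean) of a block of audio samples."""
    if not samples:
        return 0.0
    # 1-2. Square every sample and sum the squares.
    sum_of_squares = sum(s * s for s in samples)
    # 3. Divide by the number of samples to get the mean of the squares.
    mean_of_squares = sum_of_squares / len(samples)
    # 4. Take the square root of that mean.
    return math.sqrt(mean_of_squares)
```

For example, `rms([1.0, -1.0, 1.0, -1.0])` is `1.0`: the squares are all `1`, so the mean is `1` and its root is `1`, even though the plain average of the samples is zero.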
It's that simple. The RMS value for a whole track just involves a very long period and a heck of a lot of summing.
Now if you mean for a meter, that's a little different. RMS metering is really rolling-average metering. That means you choose a period---maybe 1 ms, for example---perform an RMS calculation over that entire period, and display the result. Every time you get a new sample, you update your mean to include the new sample and discard the oldest existing sample.
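One way to sketch that rolling update in Python is to keep a running sum of squares, adding the newest square and subtracting the oldest, so each new sample costs O(1) instead of re-summing the whole window. The class name `RmsMeter` and the window size are illustrative, not from any particular API:

```python
import math
from collections import deque

class RmsMeter:
    """Rolling RMS over the most recent `window` samples."""

    def __init__(self, window):
        self.window = window          # e.g. samples per 1 ms at your sample rate
        self.samples = deque()
        self.sum_of_squares = 0.0

    def push(self, sample):
        """Add one sample; return the current RMS of the window."""
        self.samples.append(sample)
        self.sum_of_squares += sample * sample
        if len(self.samples) > self.window:
            # Discard the oldest sample and remove its square from the sum.
            old = self.samples.popleft()
            self.sum_of_squares -= old * old
        return math.sqrt(self.sum_of_squares / len(self.samples))
```

One caveat with this add-and-subtract approach: floating-point error can accumulate in `sum_of_squares` over very long runs, so a real meter might periodically recompute the sum from scratch.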
As a result of doing things this way, short peaks are reduced in magnitude, short troughs are increased in magnitude, etc. The whole thing acts kind of like an amplifier with a horribly slow slew rate.
The key question is "What period should I use?" I have no good answer to that question.... The nice thing is that because you are squaring everything, all your values are positive, so you don't have to worry about any frequency being cancelled out entirely by a poor choice of period. The question is how long a signal has to stay at a given level before you consider it to no longer be a transient that you want the filter to diminish. *shrugs*