What's wrong with using "Normalize"?

sjfoote
I see that a lot of folks are not in favor of using the normalize function on a recorded track. I find that when I record an audio track in Cakewalk and import it into another program like Sony Acid or MAGIX Music Maker, its level is always a lot lower than the pre-recorded loops I am using in those programs. In that case, I usually use normalize on my Cakewalk track, and it then appears a lot closer to the looped tracks. I shoot for a level of -6 to -10 on the Cakewalk audio meters when recording. This level may be too low, but I want to avoid clipping when recording a new track.

So what's wrong with using normalize in a case like this?
 
The main problem with using normalize is that it fosters bad tracking habits. The normalize function itself just brings up the track's gain (and noise floor) to the level it should have been tracked at in the first place.

It's easy to get lazy when tracking and say "I'll just track it low to keep from clipping and then I'll normalize it later". This brings the noise floor way up and makes for generally crappy sounding tracks.
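The point about the noise floor coming up can be sketched numerically. A rough Python illustration with assumed example levels (a take peaking at -20 dBFS over a -60 dBFS noise floor):

```python
import math

# Hypothetical example: a quiet take with a constant noise floor.
signal_peak = 0.1    # recorded peak, linear (about -20 dBFS)
noise_floor = 0.001  # preamp/recorder noise, linear (about -60 dBFS)

def dbfs(x):
    return 20 * math.log10(x)

# Peak normalization applies one gain factor to everything in the file.
gain = 1.0 / signal_peak          # bring the peak up to 0 dBFS
norm_peak = signal_peak * gain    # 1.0  (0 dBFS)
norm_noise = noise_floor * gain   # 0.01 (-40 dBFS): the noise came up too

print(f"before: peak {dbfs(signal_peak):.0f} dBFS, noise {dbfs(noise_floor):.0f} dBFS")
print(f"after:  peak {dbfs(norm_peak):.0f} dBFS, noise {dbfs(norm_noise):.0f} dBFS")
# The signal-to-noise ratio is 40 dB in both cases: normalizing
# cannot recover the SNR you gave up by tracking too low.
```

The ratio between signal and noise never changes, which is exactly why normalizing a low take "makes for generally crappy sounding tracks" rather than fixing them.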
 
So what's wrong with using normalize in a case like this?
Because it's unnecessary, and it implies a wholesale misunderstanding of the mixing process.

First, it doesn't matter if your raw tracks are not the same volume. Hell, even with good tracking, the chances that two tracks from two different instruments playing two different parts will have the same apparent volume are slim. There's nothing that says that they should be - let alone have to be - the same volume.

Second, there's nothing anywhere that says the tracks will come together in the mix at the same volume anyway. In fact, a mix may call for one or more of your loops to actually come down in volume (pretty common with loops, actually). If that's the case, then there's no point in using those track volumes as a standard by which to judge other track volumes. There is no "right" track volume.

Third, peak normalization doesn't make them the same volume anyway. Perceived volume is determined more by track density and average signal level than it is by peak signal level.

Fourth, the higher you push your raw tracks, the higher you push your overall noise level.

Fifth, the higher you push your raw tracks, the less headroom you're giving yourself for plugin effects and for actual mixing. Don't spend headroom until you need to.

There are two or three more reasons along these lines, but I think the point is made.
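The third point (peak level versus perceived volume) is easy to demonstrate. A small Python sketch with two made-up signals that share the same 0 dBFS peak but have very different average (RMS) levels:

```python
import math

def rms_db(samples):
    """Average (RMS) level of a buffer, in dB relative to full scale."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms)

n = 48000
# A sustained full-scale sine: peaks at 0 dBFS, dense signal.
sine = [math.sin(2 * math.pi * 440 * i / n) for i in range(n)]
# A sparse click track: the same 0 dBFS peak, but almost all silence.
clicks = [1.0 if i % 4800 == 0 else 0.0 for i in range(n)]

print(f"sine  : peak 0 dBFS, RMS {rms_db(sine):6.1f} dBFS")    # about -3 dBFS
print(f"clicks: peak 0 dBFS, RMS {rms_db(clicks):6.1f} dBFS")  # far lower
# Identical peaks, wildly different average levels: the sine sounds
# much louder, which is why peak normalization doesn't equalize
# perceived volume.
```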

G.
 
I thought that most plugins and audio software use 32-bit float maths, so could you name those which don't and have to be treated carefully?
 
I see that a lot of folks are not in favor of using the normalize function on a recorded track. I find that when I record an audio track in Cakewalk and import it into another program like Sony Acid or MAGIX Music Maker, its level is always a lot lower than the pre-recorded loops I am using in those programs. In that case, I usually use normalize on my Cakewalk track, and it then appears a lot closer to the looped tracks. I shoot for a level of -6 to -10 on the Cakewalk audio meters when recording. This level may be too low, but I want to avoid clipping when recording a new track.

So what's wrong with using normalize in a case like this?

-6 to -10dB recording input is totally fine. Nothing wrong there.
The thing is samples and loops are usually normalised to 0dB.

There is no point in normalising your recorded tracks to 0dB, as it will mean bringing the faders right down so you don't clip the separate tracks and the master out.
Think about it: if every track you had was peaking at 0dB, your master out would most likely be rocketing way past 0dB!
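eck's summing point can be put in numbers. A quick Python sketch, assuming a hypothetical eight-track mix with every track peak-normalized to 0dB:

```python
import math

# Hypothetical mix: eight tracks, each peak-normalized to 0 dBFS (1.0 linear).
tracks = 8
track_peak = 1.0

# Worst case, the peaks line up and simply add on the master bus:
bus_peak = tracks * track_peak
print(f"correlated worst case: {20 * math.log10(bus_peak):+.1f} dBFS")  # ~ +18 dBFS

# Even with uncorrelated material, the level still grows roughly with
# the square root of the track count:
uncorrelated = math.sqrt(tracks) * track_peak
print(f"uncorrelated estimate: {20 * math.log10(uncorrelated):+.1f} dBFS")  # ~ +9 dBFS
```

Either way the master bus ends up well past 0dB, so the faders have to come right back down, undoing the normalization.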

Eck
 
Thanks for the feedback guys...

I don't want to increase the noise on a recorded track, and it looks like that will happen using the normalize function. I was concerned by the visual differences between loops and my recorded audio, but as you guys have mentioned, it will all come together in the mix anyway, so why worry about it?

I will continue to shoot for -6 on the Sonar recording meters because at this level I don't hear any distortion or noise in my tracks (although sometimes my external guitar processor adds a little bit of high-end hiss).

As always, I knew I would get good answers/advice here....
 
I thought that most plugins and audio software use 32-bit float maths, so could you name those which don't and have to be treated carefully?
It doesn't matter. If you have audio that is peaking at -1 dBFS and you insert an EQ and add 5 dB at 100 Hz, you will clip.
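A quick Python sketch of that arithmetic (the -1 dBFS and +5 dB figures are the ones from the example above):

```python
# Audio peaking at -1 dBFS, then an EQ adds 5 dB of gain around 100 Hz.
peak_dbfs = -1.0
eq_boost_db = 5.0

new_peak = peak_dbfs + eq_boost_db  # +4 dBFS in the boosted band
print(f"worst-case peak after EQ: {new_peak:+.1f} dBFS")

# In linear terms the samples now exceed full scale:
linear = 10 ** (new_peak / 20)
print(f"linear peak: {linear:.2f}")  # ~1.58; anything over 1.0 clips at a fixed-point stage
```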
 
I thought that most plugins and audio software use 32-bit float maths, so could you name those which don't and have to be treated carefully?
I think you're asking about the fifth point in my reply?

Floating point math doesn't change the fact that one still has an ultimate ceiling to the signal level. The less headroom one has in a track, the less space there is for things such as EQ and multiband compression (MBC) boost.

And, as eck also pointed out, the higher the individual track levels, the higher the summing level will be when mixing the tracks together, and the harder it will be to avoid clipping.

Sure, one can always turn the output gain down on a plug or pull down the faders when mixing, etc. to keep the levels in line, but that's just introducing extra steps and extra processing (with potential extra quantization errors creeping in as well). Keeping the levels sane to begin with not only automatically eliminates one unnecessary processing step of normalization, but potentially reduces the need for extra such reverse steps later on.

EDIT: Oops, I didn't see Jay sneak in there with the answer already :)

G.
 
I don't want to increase the noise on a recorded track and it looks like that will happen using the normalize function.

Yeah, the noise is increased, but it's increased by the same amount as the recorded source.
So normalizing doesn't add noise relative to the signal; all it does is bring up the level of the track, noise floor included.
 
I think you're asking about the fifth point in my reply?

Floating point math doesn't change the fact that one still has an ultimate ceiling to the signal level. The less headroom one has in a track, the less space there is for things such as EQ and MBC boost.

And, as eck also pointed out, the higher the individual track levels, the higher the summing level will be when mixing the tracks together, and the harder it will be to avoid clipping.

G.

Yes, but the ultimate ceiling is very very far away with 32bFP.

My question is: when I have a project with all data in 32-bit FP, and the program claims to use floating point summing, do the plugins get data in floating point? I think they do, if the plugins are capable of handling floating point data, and they output it as well.
So my original question was more about which programs and plugins don't accept floating point?

Also, one thing came to my mind about normalization. I use Cubase SX, and if I used the tracks at the levels I recorded them, there would be a problem because the faders can only go up +6dB, so the output level to the monitors would be way too low. And when I turn them all up, there's not enough room for adjustment. So for me it is not really possible to have tracks at -18dB peak or something like that.
 
So my original question was more about which programs and plugins don't accept floating point?
I don't have a list, but you can easily test your plugins with Adobe Audition. Just take any float wave, apply the effect you wish to test, and click on "statistics..." in the "analyze" menu. It either tells you the actual quantization, or "float" in case the effect did not quantize.
There are probably more tools available to test this.
 
Yes, but the ultimate ceiling is very very far away with 32bFP.
No matter how many bits of float you have, there's still no such thing as +1dBFS.
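A tiny Python sketch of that distinction, using made-up sample values: 32-bit float happily carries values above 1.0 inside the mix engine, but any fixed-point stage (a converter, or a 16- or 24-bit render) clamps them:

```python
# Made-up float samples, some exceeding full scale (0 dBFS = 1.0):
over = [1.6, -1.2, 0.5]

def to_fixed(sample):
    """Model conversion to a fixed-point output: clamp to [-1.0, 1.0]."""
    return max(-1.0, min(1.0, sample))

print([to_fixed(s) for s in over])  # [1.0, -1.0, 0.5] -- the overs are hard-clipped
# Float gives you headroom inside the mix engine, but the converter's
# output still has a hard 0 dBFS ceiling: "+1 dBFS" never leaves the box.
```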
So my original question was more about which programs and plugins don't accept floating point?
I don't have the answer to that one, it hasn't become important yet. Either a plug sounds good and fits the task at hand or it doesn't.
Also, one thing came to my mind about normalization. I use Cubase SX, and if I used the tracks at the levels I recorded them, there would be a problem because the faders can only go up +6dB, so the output level to the monitors would be way too low. And when I turn them all up, there's not enough room for adjustment. So for me it is not really possible to have tracks at -18dB peak or something like that.
Nobody here said anything about -18dBFS peaks. That's a reference to the average level, as indicated by the conversion of 0VU to -18dBFS performed by the converter. We've been talking peaks in the -6dBFS area (give or take).

Furthermore, your monitor level is completely independent of the record or mix level. You should be adjusting the volume on your monitoring chain if your loudspeakers are too quiet, not increasing the recording level.

Finally, if you have a problem where one or more tracks are so hot that you can't mix in your quieter tracks because there's more than a 6dB difference in apparent volume between the two, there's a simple solution: pull down the faders on the hotter tracks a few dB while you pull up the quieter ones. That's what mixing is all about.

I'd also speculate that if there's THAT much of a difference between your raw track volumes, you have made some gain mistakes in tracking that you should probably look at fixing next time.

G.
 
Because it's unnecessary, and it implies a wholesale misunderstanding of the mixing process.

First, it doesn't matter if your raw tracks are not the same volume. Hell, even with good tracking, the chances that two tracks from two different instruments playing two different parts are going to have the same apparent volume is slim. There's nothing that says that they should be - let alone have to be - the same volume.
G.

I wholeheartedly concur with this statement. First, using the normalize function assumes that all frequencies at equal visual volume (in the waveform) are heard equally by the ear, which is not true. Lower frequencies take up more space in a mix, so two different instruments side by side (a bass guitar wave versus a keyboard wave) can appear to be the same volume, yet one will sound louder than the other (usually the bass, I believe).

Secondly, normalizing tempts you to mix with your eyes instead of your ears. What you see isn't necessarily what you hear.

I've found the hotter I track/record into the board, the better the overall quality of the track sounds. Plus I usually capture harmonics that I would have missed recording at a lower volume (especially true for acoustic guitar). And I want those harmonics in a mix, because the outlying frequencies (aka timbre) are what make it sound so gooooood.
 
I've found the hotter I track/record into the board, the better the overall quality of the track sounds. Plus I usually capture harmonics that I would have missed recording at a lower volume (especially true for acoustic guitar). And I want those harmonics in a mix, because the outlying frequencies (aka timbre) are what make it sound so gooooood.
This would depend entirely on the board you are using. Record hot through a Mackie VLZ and it will just get screechy and thin.
 
Also, one thing came to my mind about normalization. I use Cubase SX, and if I used the tracks at the levels I recorded them, there would be a problem because the faders can only go up +6dB, so the output level to the monitors would be way too low. And when I turn them all up, there's not enough room for adjustment. So for me it is not really possible to have tracks at -18dB peak or something like that.

That sounds like a problem with your monitoring system.
Turn your monitors up. :)
Or if you can't then you can always use a brick-wall limiter on the master out if need be to boost the volume.

Eck
 