Let's talk about normalization.

Eurythmic

majordomo plasticomo
I'm currently using my computer for recording, to bridge the gap between my Tascam analog recorder and the next machine I'm able to buy. I'm still working on transferring what analog knowledge I have over to the digital platform, as well as learning about the software (Cakewalk 9), and I have a question.
How much normalization is too much? Normalization is basically a cheap answer for compression, right? The concepts behind the two techniques seem fairly similar to me. I've done a couple of just-for-fun recordings in Cakewalk, and I've normalized every track. For those of you who also work with computers for recording, is this what you'd recommend? I do not have access to compression, as this is part of a separate effects package in Cakewalk (which I haven't purchased).
 
I've never used Normalization (aka "Peak Normalization", usually) in Cakewalk, but in most digital audio applications it just raises a track by a constant amount so that the highest peak sits at whatever level you want. In order to get your final mix up to a level where it can compete level-wise w/ pro CDs, you'll need to use compression *somewhere* to bring down peaks so the whole mix can rise w/o the highest peak clipping digitally. SoundForge has a nice RMS normalization feature that combines both quite nicely.

If you're *peak* normalizing every track before mixing, you're just adding unnecessary calculation and processing to your tracks.
 
Normalization isn't cheap compression because it isn't compression at all. It is a form of limiting, however, in that it scans the waveform for the highest dB peak and raises it to 0dB (the limit) without clipping. The rest of the waveform's amplitude is raised by the same ratio.
This is why compression is much more important to get right. If you have a bass track or vocal waveform with big dB peaks and valleys, then normalizing ain't gonna do nothing. You've still got the same vocals, just with louder peaks and valleys. To me, normalizing helps maintain consistent volume dynamics in songs that have been mixed well. If you do a crappy job recording and mixing, normalizing is worthless. And if you've left yourself no headroom and already have a near-0dB peak, it won't make a difference anyway.
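In rough Python terms (a sketch I'm making up with numpy, not anything Cakewalk or SoundForge actually ships), the whole peak-normalize operation is just one scan and one constant gain:

```python
import numpy as np

def peak_normalize(samples, target_db=0.0):
    # samples: float array scaled to -1.0..1.0; target_db: desired peak in dBFS
    peak = np.max(np.abs(samples))
    if peak == 0.0:
        return samples                      # pure silence: nothing to raise
    gain = 10 ** (target_db / 20.0) / peak  # one constant ratio for every sample
    return samples * gain                   # the highest peak lands at target_db
```

Notice there's no threshold and no gain *reduction* anywhere in there, which is why it can't do what a compressor does.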

Hope this helps.

Regards,
PAPicker



 
Check out Sonusman's referrals at www.digido.com

The Myth of "Normalization"

Digital audio editing programs have a feature called "Normalization", a semi-automatic method of adjusting levels. The engineer selects all the segments (songs), and the computer grinds away, searching for the highest peak on the album. Then the computer adjusts the level of all the material until the highest peak reaches 0 dBFS. This is not a serious problem esthetically, as long as all the songs have been raised or lowered by the same amount. But it is also possible to select each song and "normalize" it individually. Since the ear responds to average levels, and normalization measures peak levels, the result can totally distort musical values. A compressed ballad will end up louder than a rock piece! In short, normalization should not be used to regulate song levels in an album. There's no substitute for the human ear.
 
This is also good stuff at the same site.

The Secrets of Dither or

How to Keep Your Digital Audio Sounding Pure from First Recording to Final Master
 
Wow!!! Somebody actually has read that stuff on www.digido.com !!! Cool.... :D

Normalizing is a joke. Digital compression is not anything like analog compression. Keeping the audio somewhat pure with digital compression requires at least 20-bit, but preferably 24-bit, files. Dithering is a must with ANY digital processing that changes amplitude.

Ed
 
Normalizing suggests that you can make a group of sources all the same volume. In the strictest technical sense, a Normalizer will do that: it will increase the loudest signal in a source to 0dB on a digital recording, the highest you can go.

Many people believe that by Normalizing, they are getting the most volume possible out of the track. But what they are really getting is the maximum PEAK volume out of the track, and this has nothing to do with average volume. It also does nothing to make a track with a lot of low and high frequency content sound as loud as a track with a lot of midrange information at the same average volume, because the ear perceives different frequencies to be different volumes, even though they have the same sound pressure level in real life. The ear just does this, and there is no way to change your ears (except for the worse.... :D)

If all you want to do is make sure that your track's highest volume actually reaches 0dB, then normalization will do that. But you may never really perceive this increase in volume, because the audio track could have transient signals (very fast jumps in the volume of the audio, usually very-hard-to-hear high frequencies) that can be 6dB or more louder than any other part of the audio.

Let's say that your audio has one transient that peaks at -1dB on the recording. A normalizer would detect this and turn all the audio up 1dB. Hardly a noticeable difference at all!!! What have you gained from the processing? Nothing, really. You have done little to improve the average level of the music, which can be wildly different from the peak level.

Now, let's say you have another track that has hardly any difference between the PEAK level and the average level. The peak level is at -6dB, and it is only 1dB higher than most of the rest of the track. By applying a normalizer, you would get an overall +6dB gain in the average level! +6dB is a lot of increase. Technically, that is twice the sound pressure.

So, you have increased a track with very high transients but very low average volume, and a track with little to no transients and very low average volume. Let's even say that the average volume of both tracks is about the same. The track with fewer transients will enjoy a huge increase in volume, while the track with many hot transients will get little benefit. So in effect, you really haven't "normalized" anything. You've just increased the peak level of the recording to the highest level possible.
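You can verify this with a quick numerical experiment (a sketch with made-up signals, numpy assumed): both tracks below get peak-normalized to 0dB, but their average (RMS) levels come out very different.

```python
import numpy as np

sr = 44100
t = np.arange(sr) / sr

# Track A: quiet sine with one hot transient (one sample spiked to -1 dBFS)
track_a = 0.1 * np.sin(2 * np.pi * 220 * t)
track_a[1000] = 10 ** (-1 / 20)

# Track B: steady sine peaking at -6 dBFS, tiny peak-to-average gap
track_b = 10 ** (-6 / 20) * np.sin(2 * np.pi * 220 * t)

def rms_db(x):
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

for name, x in (("A", track_a), ("B", track_b)):
    x = x / np.max(np.abs(x))              # peak-normalize to 0 dBFS
    print("track %s RMS after normalizing: %5.1f dBFS" % (name, rms_db(x)))
# Track A's average barely moves (+1 dB); track B jumps a full +6 dB.
```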

People talk about using a Normalizer plugin for mastering their recordings. This is not a very good idea at all. Mainly, it does not really work except in the crudest ways. It is seldom what the person is actually after, which is a better average level for the tracks.

Part of the job of mastering is to make each song be "perceived" as being the same volume. To accomplish this, mastering engineers have to make many decisions about how they will make a song that contains a lot of transients but a low average level sound as loud as a song that has little difference between the highest transient and the average volume. Generally, the mastering engineer is going to use some kind of compression or limiting on the song with hot transients so that the overall level can be brought up without creating digital distortion from the transients being pushed too high. So the limiter would be set to engage only at a level just above the average volume of the song; thus, the transients would be the only parts that get limited. Then, the make-up gain on the limiter can be turned up to make the overall volume of the song louder, because the gain reduction tames the hotter signals only, making more room for an overall volume increase.
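Here's a brutally simplified sketch of that limit-then-make-up-gain move (the threshold and make-up numbers are made up, and a real limiter uses attack/release envelopes instead of this instantaneous hard limit; this just shows the gain math):

```python
import numpy as np

def limit_and_boost(samples, threshold_db=-6.0, makeup_db=5.0):
    threshold = 10 ** (threshold_db / 20.0)
    # Gain reduction touches only samples above the threshold (the transients)
    limited = np.clip(samples, -threshold, threshold)
    # Make-up gain raises the whole song into the headroom we just reclaimed
    boosted = limited * 10 ** (makeup_db / 20.0)
    return np.clip(boosted, -1.0, 1.0)      # never let anything pass 0 dBFS
```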

There are problems with doing this, though, depending upon how much higher the loudest peak is over the average level. Too much compression and/or limiting can create distortion. It can also tend to make the song sound dull and lifeless. It can, in effect, make the audio sound too muddy.

Transients are a normal part of music. It is when the transients are too far above the average volume that they become a problem. This holds true regardless of whether you are recording analog or digital. With digital, overly hot transients make your average record level come down quite a bit; thus, you use fewer bits on the A/D/A converters, creating that boxy, kind of edgy, dull digital sound that everyone seems to complain about. With analog, transients would cause lower record levels too, but the adverse effects are tape hiss and equipment noise. Dolby helps some, but not totally. All in all though, overly hot transients just make the recording too low in volume. Not really a problem until you start comparing your mixes to mixes where the transients were tamed properly.

Most people are looking to get an overall volume increase in their recordings. Normalization can help, but there is one other little problem with using it after a track has been recorded.

Normalization cannot make up for the lack of detail in the sound that results from low bit resolution. When you record something too low in digital, the sound will lack much of the original detail, because the A/D converters did not get enough level to capture enough data on the recording. So normalization just turns up the already bad-sounding audio. Since digital enjoys a pretty damn good signal-to-noise ratio, this is not really a big deal. It is not like a digital mixer is going to add more equipment noise because you had to turn a low-volume track up a bit to get it as loud as you need it in the mix. So really, the gains of applying a normalizer to an audio track are minimal.
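A toy demonstration of that point (made-up numbers, numpy assumed): a tone recorded 40dB too low exercises only a few hundred of the 65,536 steps a 16-bit converter has, and normalizing afterwards boosts the rounding error right along with the signal.

```python
import numpy as np

t = np.arange(441) / 44100.0
quiet = 0.01 * np.sin(2 * np.pi * 1000 * t)      # peak at -40 dBFS: way too low

recorded = np.round(quiet * 32767.0) / 32767.0   # what a 16-bit converter keeps
normalized = recorded / np.max(np.abs(recorded)) # peak-normalize after the fact

ideal = quiet / np.max(np.abs(quiet))            # what a hot recording would give
err = normalized - ideal
print("residual error: %.1f dBFS" % (20 * np.log10(np.sqrt(np.mean(err ** 2)))))
# The detail lost at the converter never comes back; it just gets louder.
```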

The last and final point that needs to be addressed about this whole thing is relative loudness.

Like I said earlier, mastering is where you usually make all the songs of a CD "seem" to be the same volume.

If you were to analyze many recordings, you would find that they do not sport the same average volume. They can differ widely from song to song. Why, you ask? It has to do with the relative perceived volume of frequencies to the human ear.

I have suggested many times that any audio engineer become very familiar with the Fletcher/Munson relative loudness curves. These two scientists provided a very detailed and usable graph of how the human ear perceives equal sound pressure levels at different frequencies.

Without going into too much detail here (I am trying to get you all to read it, damnit!!! :D), the relative loudness curve suggests that the human ear hears midrange frequencies much better than low and high frequencies. The Fletcher/Munson relative loudness curve graphs this phenomenon against a 1kHz test standard. Basically, it shows how much louder a certain frequency needs to be to be perceived as being as loud as 1kHz at any given volume.

It also shows that as audio gets louder, the difference between the perceived volumes of frequencies starts to even out. But at low volumes, the low and high frequencies need to be much louder than 1kHz to sound the same volume.
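The Fletcher/Munson contours themselves are measured data, but the standard A-weighting curve (loosely derived from their low-level contour) captures the same midrange bias in one formula. A little calculator, as a sketch:

```python
import math

def a_weighting_db(f):
    # Standard IEC A-weighting, roughly tracking the ear at low listening
    # levels; result is in dB relative to 1 kHz.
    f2 = f * f
    ra = (12194.0 ** 2 * f2 ** 2) / (
        (f2 + 20.6 ** 2)
        * math.sqrt((f2 + 107.7 ** 2) * (f2 + 737.9 ** 2))
        * (f2 + 12194.0 ** 2))
    return 20 * math.log10(ra) + 2.0

for f in (50, 100, 500, 1000, 4000, 15000):
    print("%6d Hz: %+6.1f dB" % (f, a_weighting_db(f)))
# 50 Hz comes out around -30 dB: it takes far more sound pressure down there
# to be perceived as loud as the same tone at 1 kHz.
```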

I am assuming that you are all smart enough to figure out how this could apply while mixing music. It REALLY applies to mastering, because one of the aims in mastering is to make all the songs sound about the same volume. Some songs have more midrange information than others, so these songs will tend to sound louder at the same average volume. Usually, the song with more midrange info needs to be turned down a dB or two to make it seem the same volume as a song with more low and high end information, especially when the average level of the song is pretty much as loud as it can get without distorting because of the transients on the track.

So, study up, guys. Some of you may think that this kind of stuff is not relevant to producing clean recordings, but it is.
 
Whew! Ed...you're going to have to start posting these long ones as mp3's. We're all gonna go blind reading them.
 
Damn, Ed, I'll be studying this one for a while. It's not just an answer, it's an essay! There's something immediate that pops up though - why try to get all the tracks to sound as if they're the same volume? Is it because that's the way it's always been done? I don't mind it if somebody playing live goes loud on one song and soft on the next, so why homogenize recorded volumes?

Actually, I don't care if you don't answer this one now. I can ask again later after I've absorbed more information. After all, the last time I asked you a question, I got more than I bargained for (great big dobro grin goes here). All the best, dudeman.
 
First of all, Sonusman: Thanks for suggesting digido.com. I've since read every article on the site (and boy, do I feel humbled! :) ).
I do want to clarify my question, though. I had no idea until reading the articles at digido.com that making mixes sound hotter/louder is so in vogue. That wasn't my intention at all. Ear fatigue sucks. All I'm looking for, through reading every article I can find, and asking questions (I'm really happy to have found this resource!), is every little tip I can possibly find that will help me make my recordings better.
Keep in mind, I'm only a college student - and a poor one, at that. It'll be years before I'll be able to work with the home recording budget that some of you seem to have. I can only hope that by then I'll have the knowledge to make the most of it.
But I'm also a firm believer in "working with what you have", and I definitely don't think that I can say my knowledge has exceeded what can be done with my equipment, yet.
So my question was based on this: I've always been under the impression that two of the main keys in making your home recordings sound more professional and less like demos are as follows:
1. (And this is THE most important, no?) Do a good job of capturing the music in the first place.
2. Tame the more wild instruments. Every instrument is potentially pretty hard to control, volume-wise - especially acoustic instruments like the guitar, violin - and voice!
I always thought more even volumes on individual tracks contribute greatly to a more professional sounding recording, and that this was the point of using compression. Since compression isn't available to me at the moment, I thought normalization would be the next best route.
Are my preconceptions wrong?
 
OK, Ed, you've convinced me that I'm wasting my time with normalization . . . your novel above and the digido.com material. But, since I won't be dithering with Cakewalk anytime soon, is the best answer for us Sound Blaster and SM57 users to learn proper input settings and EQ techniques . . . sonic spacing of the instruments, etc.?
BTW, the munchkin :D curve info is interesting and made a lot of sense.

Thanks for the informative and time consuming post.

Regards,
PAPicker
 
You are not too far off of the beat and riff.... :D

Capturing as close to the exact sound you want while recording produces the best results. The second you start applying EQ, compression, etc., you gain byproducts in the audio that may or may not sound good. Sometimes the gains of processing outweigh the somewhat negligible loss of audio purity.

What should concern you more about any kind of digital processing is the extended bit depth that is created as a result of the processing. If you are processing 16-bit files and do not have a killer dithering application to sort of "average out" the extra digits of info that digital processing creates, your 16-bit converters will do it for you, and it won't sound as good as proper dithering will. There is an article on www.digido.com that deals with this issue, and you should study it closely. Also, working with 24-bit audio definitely increases the accuracy of your final product, even though that product will only be 16 bit.
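For what it's worth, the core of a word-length reducer with dither fits in a few lines. A sketch using garden-variety TPDF dither (this is not any particular product's algorithm):

```python
import numpy as np

def dither_to_16bit(x):
    # x: float samples in -1.0..1.0 (e.g. a 24-bit or floating-point mix)
    lsb = 1.0 / 32768.0                          # one 16-bit quantization step
    # Triangular (TPDF) noise ~1 LSB wide: sum of two uniform random values
    noise = (np.random.uniform(-0.5, 0.5, x.shape) +
             np.random.uniform(-0.5, 0.5, x.shape)) * lsb
    # The noise decorrelates the rounding error from the music, so truncation
    # artifacts become steady, benign hiss instead of grunge on quiet passages
    y = np.round((x + noise) * 32767.0)
    return np.clip(y, -32768, 32767).astype(np.int16)
```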

Anyway, I am not in total agreement with Bob Katz's views on compression. Better put, I think he is easy to misunderstand on what he thinks about compression. He is a smart man writing articles that us dummies can (hopefully) understand, so I think he tends to side with caution. His references to compression being bad for audio are technically true, but in real life it's not as bad as he may make it seem. I know that he knows this, but the issues involved with compression are deep and varied. I believe he is just trying to keep from having to delve into the really deep stuff, so he recommends that you avoid using compression too much. This is easier than trying to describe all the ins and outs of compression and bit length.

Also, much like me, I don't think he likes to reveal too many of the "tricks" that make him a living. I am sure he is a fine mastering engineer and mostly makes huge improvements to his clients' audio. But if he were to start saying "do this, and do that" to solve one particular problem, many not in the know would assume that his suggestion is more or less a rule, instead of a solution that fits the goal of that production.

I keep making this point: you need a clear idea of what you want out of your audio, and a clear idea of how your tools will help you achieve it, before you can have much success making it work all the time. There are no real "standard" things you do with audio. There are many things that work in many cases, but that is not a standard, and they usually work best for the person who uses them, because it is an approach that user likes.

This is no bull: when I started learning sound engineering, nobody was willing to help. I would ask questions and get vague and sometimes rude answers. But often, I was just told to go read about it. So I did. And now I know why the answers were vague, and why people were not that willing to give me this knowledge for free. It would have made it too easy, and I would have missed out on trying new approaches. Learning seems to work better when you experiment. Now, while this may not accomplish your goals in the near term, I think that later on you'll find you didn't really know what you wanted anyway, because as you start to learn more, you start hearing more, and you start to realize that many of your past wishes for audio would not have worked at all. Does this make sense?

So, I am glad those people challenged me to learn on my own. For one, I proved to myself that this is something I really wanted to excel in. I wanted to excel enough to take the time to REALLY get into audio. Now I am a better engineer for it. But had I just gotten the answers for free, I would never have explored the important stuff, which often is the little obscure things like what you find on digido.com.

So, read. Read a lot. Read everything. But don't take too much of it as law. After a time, you will start to learn which information is good and which is mostly misinformed. Only experience and patience will gain this for you.

Ed
 
I've read this whole column, and haven't seen anyone with the same view of normalization as I have. So, whether you agree or not, here's my take:

By far, my primary use of normalization is to eke the most dynamic range possible out of my digital tracks. If I recorded a track at pretty good levels, but not optimal, I normalize the track to 0dB or very close to it. How does this increase dynamic range? Well, by itself, it doesn't. But once your levels are at optimum, you can use a very low-level noise gate to make the track's 'silent' parts truly silent. That way, you have excellent levels without bringing the noise floor up with them (not an exacting breakdown, but I realize readers in here are at many different levels of experience and knowledge).
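In sketch form (frame-based, with a made-up threshold; a real gate adds attack/hold/release so the cuts don't click), the gate half of that recipe looks something like:

```python
import numpy as np

def simple_gate(samples, threshold_db=-60.0, frame=512):
    out = samples.copy()
    gate = 10 ** (threshold_db / 20.0)
    for start in range(0, len(samples), frame):
        chunk = samples[start:start + frame]
        if np.sqrt(np.mean(chunk ** 2)) < gate:   # frame sits below threshold
            out[start:start + frame] = 0.0        # make it truly silent
    return out
```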

I'm surprised to see that a column on normalization has become a discussion of compression. The two are really not interchangeable in any way.

If this helps even one of you, it was worth the post.
 
Regarding compression: compression is one of the most powerful tools we have in recording sound. I think warning people against using it may be a mistake, but I do understand your reasons. Especially when people start out recording, their idea of effects is things like reverb, delay, chorus ... things you can 'hear'. Compression, on the other hand, save for certain situations, should really never be 'heard.' I think we'd all agree on that. The problem is, this makes the use of compression a hard concept to get one's mind around.
There's not a lot you can learn about the application of compression without experimentation, and even then I think it takes a certain 'ear.'
I'm far from a recording god, but I think this needed to be said here.

As has been said a few times here, there are very few all-encompassing 'answers' to most questions in this field. It's been my experience that the best way to learn is to do it. Use what you know, but most of all, use your ears! Don't be afraid of doing things wrong the first time ... or the second ... or sometimes even the tenth. Wrong is a completely subjective thing, especially when it comes to the arts ... and recording is definitely an art.
 
Well, I read the Bob Katz article on compression, dithering, and his PDF document "The Secret of the Mastering Engineer", and then sonusman's piece above. The dithering article, where digital word length and normalizing were discussed, really got me thinking. I always thought that normalizing was the best way to raise gain in unity, meaning it's a one-time calculation rather than a little here, a little there . . . but what I didn't realize was that doing this to a stereo track, from a mathematical standpoint, kills my mix more than if I normalized, say, just a guitar track . . . which I never felt a need for.

Maybe some of you see it already, and correct me if I got this wrong, Ed: when I normalize a stereo track that is one waveform (or stream of data), it means that at any given point along the waveform, where I might have lows, mids, and highs fighting to get heard, the final fate of the audio is based on a culmination of computer number crunching, truncating, and number generation. In a nutshell, the data I had . . . I lost. On the other hand, if I normalize just a guitar track (limited frequency variation within a range), I would at least end up with a clone more like the original. That being said, I can't ever see the need to normalize a single instrument, or frequency for that matter. I think I'm abnormal now! :eek:

Gabriel > Why do you have to normalize before using the noise gate to remove floor noise? I didn't quite catch that concept. Unless you have, like, a steady guitar (pause) guitar, and the paused area has some hiss or a string pluck or something.

Regards,
PAPicker
 
A very good point, Gabriel, about trying things to learn. That is the point I am trying to make. Try it all, try it many different ways, try it in ways you "shouldn't" do it. If anything, you'll prove that it doesn't work in many cases. But don't expect cheapo solutions to produce big-time results. Normalization is, in my opinion, a cheapo solution that is misunderstood and has a very limited application in music production.

I hope that you are using at least 20-bit original .wav files when you apply normalization, though, because the volume change affects the bit length a lot, and 16-bit converters will not be able to deal with this the way that 20- or 24-bit converters can. That is why I strongly suggest that people don't use normalization. It is really an unnecessary step in digital. You will end up applying yet another volume change to the original audio, thus increasing the errors associated with volume changes. Then add any dynamics processing or effects processing. Damn, pretty soon your track doesn't have any life left on a 16-bit converter. The more processing you do, the worse it gets.
Working strictly in the 24-bit realm helps a lot, and applying a good dithering algorithm to the .wav file will create the necessary noise in the quiet parts of the file so that when you convert to 16 bit, you have noise on the track at about the same volume as the converter's own noise. Any processing you do will have its bad effects at the lowest volumes on the track, so by working with 24-bit files, these errors will end up not being on the 16-bit file after conversion, because the dithering you applied creates the noise necessary to hide them.

Basically, Normalizers were intended to be the poor (and uninformed) man's way of mastering a bunch of .wav files. But because of the relative loudness curve, it really doesn't work as an effective mastering tool. Certainly it does not come close to offering any increase in bit resolution, except possibly at the D/A converter, and the errors on the file may outweigh the slight fidelity gained from better bit resolution at the D/A converter. Also, it could be said that you are just taking an already kind of digital-sounding track and turning up all the bad qualities of low-bit-depth digital sound even more. Make sense? :)

So, if your .wav files are all drastically different in volume, and you just want to get them as loud as possible without distortion, or without learning how to use a good EQ and compressor to increase the volume, then use a Normalizer. But don't expect all the tracks to "sound" the same volume to your ear. They won't. Only creatively used EQ and compression will do the job of "optimizing" digital audio. In fact, they are the only tools for optimizing an audio level. (That even applies while recording: the right EQ on a guitar amp can really make a difference in how much level you can get to the digital A/D converter, and compression is a fact of life in all recording. What do you think slamming the tape in analog is? That's right, a form of compression.)

But Gabriel, you do have some good points. You just have to do this stuff you learn. A lot of what I have to say on here is not fully understood until a person gets enough time behind the console to start getting their "studio ears", as I like to call them. Then they really start to hear what the big deal is all about... :D

Ed
 
Hmm.

Well, I use digital normalisation all the time. If you're going to cut a CD, then it's nearly always necessary, unless you've done a fantastic job of keeping up all the levels during recording/mixing, whatever - I tend to avoid ramming all the levels up, as I am mortally terrified of digital clipping.
Anyway, I agree that you cannot use normalisation in a creative way, but when you have a series of tracks which sound good relative to each other, and you want to write them to CD, you need to normalize to get them to the optimum level. I find that when I rip CDs, they're actually normalised to what appears to be more than 0dB! ???

But I think it's basically a useful cop-out to get things optimised for the digital domain - soundcards have optimum digital output levels for D/A conversion, and if you have to crank up your mixer pre-amps, you're going to generate a whole lot of extra noise. By normalizing, you can't turn up the relative noise level like you can by compressing. I just look upon it as a simple amplification that lets me run things easily!

That probably doesn't make any sense, but anyway...

matt
 