Normalizing and Noise Floor

mystonicrecords

New member
Normalizing is no worse than raising the gain at any point in the process.

The problem I have is with anyone using software in any way to raise the level of something that was initially recorded. Period. Ideally it should not be done. Reduction, yes. Raising, no. What am I missing?

If I have a recording booth with an actual noise floor of -70 dB, my understanding is that from the 0 level I have 70 dB of room for my signal. (For instance, I should be able to watch my level meter before I start singing or playing, and it will be hovering at -70 dB.) So if you are recording your track into the computer and the peaks of that signal are coming in around -20, you are only getting 50 dB of room for your signal, as the noise is still sitting there at -70. So to increase volume post-recording, in any fashion, is to increase your noise level. A 10 dB increase post-recording and your noise floor now sits at -60 dB.
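The arithmetic here can be sketched in a few lines (a toy model using the example numbers above, not measurements from any real booth):

```python
# Simple model of the headroom / noise-floor arithmetic described above.
noise_floor_db = -70.0   # booth noise floor as it reads on the meter, dB
peak_db = -20.0          # peaks of the recorded track, dB

# Room between the signal's peaks and the noise:
signal_range_db = peak_db - noise_floor_db   # 50 dB

# Boosting the whole track 10 dB after recording boosts the noise too:
gain_db = 10.0
boosted_noise_db = noise_floor_db + gain_db  # noise now sits at -60 dB

print(signal_range_db, boosted_noise_db)
```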

If that's true, then why would you not always bring in all material at just below the 0 level during recording?

I thought this was a basic, Recording 101 point, but I have no high degree on the subject. Please straighten me out as needed, you masters of mastering. And I know this is not necessarily a mastering issue, but rather a whole-process issue. IMO mastering is a whole-process issue.
 

Not this again...:eek:
 
MystonicDude...
"If that's true, then why would you not always bring in all material at just below the 0 level during recording?"

In a word...headroom. :cool:

Noise isn't the issue it used to be because of the gear available these days. Unless you've got some really crappy noisy gear, that is. :D

No point in pushing the converters, and the rest of the gear, to their limits on the first go-round. You lose the punch, clarity, and even the end volume of your goodz when it's already been pushed to its max.

Just turn up your monitors if it ain't loud enough. Slam it at the end of the process if ya feel ya have to. Not the beginning or even the middle.
 
I brought these out of the other thread and started a new one. Thanks Myst!! Hey, it was easier than I thought. Don't get to do this stuff that often. :o

So, back to the topic at hand. Are you saying you don't think it's a good idea to adjust faders in your DAW after you've recorded the track?? That's basically the same thing as:

using software in any way to raise the level of something initially recorded.

What I'm really curious about is how 32-bit floating-point processing plays into tracking levels. Most DAWs use it for doing the math.

We always read to track with plenty of headroom to allow for further processing down the road. But doesn't 32-bit floating point kind of take care of that for us?? I'm assuming it means the software moves the processing window around the bits that change, rather than the whole word length. To me that would allow us to track to just under 0 dB and not worry about headroom later on. IDK. Anyone got a clue what 32-bit processing does for us??
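As a rough illustration of the difference (a toy model, not any particular DAW's engine): a fixed-point path clips at full scale, while a floating-point path carries values past full scale and lets you pull them back down intact later:

```python
def db_to_gain(db):
    # Convert a decibel value to a linear amplitude multiplier.
    return 10 ** (db / 20)

# A peak near full scale (full scale = 1.0) boosted 6 dB overshoots:
sample = db_to_gain(-1.0)              # ~0.891
hot = sample * db_to_gain(6.0)         # ~1.778, above full scale

# Fixed point clips the overshoot at full scale -- the info is gone:
fixed = max(-1.0, min(1.0, hot))       # 1.0

# Floating point keeps the value; undo the gain and the peak is intact:
restored = hot * db_to_gain(-6.0)      # back to ~0.891
print(round(fixed, 3), round(restored, 3))
```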
 
Normalizing is no worse than raising the gain at any point in the process.

The problem I have is with anyone using software in any way to raise the level of something that was initially recorded. Period. Ideally it should not be done. Reduction, yes. Raising, no. What am I missing?
Well, you're right about peak normalizing, and that there's really no need for it in music mixing, but to say that a track's level should never be boosted in software "period" is a bit of a stretch. Sure, you want to avoid doing that when possible, but there are times when it can be not only allowed but proper. I wouldn't state it quite as absolutely and finally as you did.
If I have a recording booth with an actual noise floor of -70 dB, my understanding is that from the 0 level I have 70 dB of room for my signal. (For instance, I should be able to watch my level meter before I start singing or playing, and it will be hovering at -70 dB.) So if you are recording your track into the computer and the peaks of that signal are coming in around -20, you are only getting 50 dB of room for your signal, as the noise is still sitting there at -70. So to increase volume post-recording, in any fashion, is to increase your noise level. A 10 dB increase post-recording and your noise floor now sits at -60 dB.
We'll ignore the mixing of decibel formats you're doing when you express an analog noise floor in digital dBFS, as well as the probable decrease in dynamic range you'll get when inserting your analog gear between the booth and the digital recorder; I nevertheless understand your point, and it's correct.

When an analog noise floor is recorded to digital at, say, -70 dBFS (which would mean a very quiet analog signal chain, BTW), you're right that if you peak-normalize the signal up 10 dB, the noise floor will also rise by 10 dB. And you're also right that that is not a wonderful thing to have happen.

What can't be said, though, is that the dynamic range of the track's signal has changed, because it hasn't. The range of the signal remains untouched at 50 dB. This is the point that trips up those who think you're incorrect: because everything gets louder by the same amount, the noise remains just as masked as it was before, so (the argument goes) such gain adjustments don't "really" make things noisier-sounding.

The problem with that thinking, though, is twofold. First, the masking will not always be there. During rest beats in the arrangement, reverb tails, and fades, the increased noise floor will indeed be more audible to the human ear, by 10 dB. Its effect on the music may be no different, but it will indeed be intrinsically louder.

Second, if you do this to an individual track before mixdown, it *does* raise the noise floor to -60 dBFS for the overall mix, leaving less potential room at the bottom for the other tracks included in the mix.
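The bookkeeping in the paragraphs above can be checked with a few lines (levels are the example values from the thread):

```python
peak_db = -20.0    # track peak, dBFS (example value from the thread)
noise_db = -70.0   # recorded noise floor, dBFS (example value)

gain_db = 10.0                     # the post-recording boost in the example
new_peak = peak_db + gain_db       # -10 dBFS
new_noise = noise_db + gain_db     # -60 dBFS: the raised noise floor

# The boost made the noise 10 dB louder in absolute terms...
# ...but the track's dynamic range is untouched by the gain move:
print(peak_db - noise_db, new_peak - new_noise)   # 50.0 50.0
```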
If that's true, then why would you not always bring in all material at just below the 0 level during recording?
What's the point? That's no different in effect than peak normalizing to that same level. Whether you boost the gain before or after the A/D converter, either way you're boosting the gain, including the gain of the noise floor.

But maybe even more to the point, what's the point of boosting the tracks to near zero dBFS? You're only going to have to turn them down again when mixing them together.

G.
 
Thanks for the switcharoo, Chili.

On your two points:

Are you saying you don't think it's a good idea to adjust faders in your DAW after you've recorded the track??

1. Not at all. I was saying that raising the level in the DAW, after the recording, is not the way to go, because then all your available headroom, or room above noise as I think of it, is diminished. Rather, you should always record initially with your peaks ideally just below the 0.


2. 32-bit processing is a higher-definition form of processing, but I think what you are talking about is the fact that today's tech can allow processing and manipulation, digitally, above the 0 level without degradation, in certain situations. I was of the understanding that this comes from the way the software is written, and not the bit amount one way or the other.

Noise isn't the issue it used to be because of the gear available these days. Unless you've got some really crappy noisy gear, that is.

No point in pushing the converters, and the rest of the gear, to their limits on the first go-round. You lose the punch, clarity, and even the end volume of your goodz when it's already been pushed to its max.

Just turn up your monitors if it ain't loud enough. Slam it at the end of the process if ya feel ya have to. Not the beginning or even the middle.

This troubles me. On a couple levels.

1. What are you pushing by keeping peaks below 0 from microphone/instrument to the recorded WAV in your DAW? You lose punch and clarity by recording with peaks below the 0 level? You lose the end volume of your goodz?

2. Turning up monitors because levels are lower to equal the same perceived volume is like filling up your swimming pool by taking a bucket of water out of the shallow end and dumping it into the deep end.

3. "Just slam it at the end." I really don't want to comment here. Unless the discussion has left the studio and is now in the bedroom, "slamming it at the end" can't be good. Your initial noise floor hasn't changed, and will only be increased. Now, yes, we are talking "ideal" here, but that being said, isn't the mastering process all about what is ideal?

I think Chili was hitting the important point that once you're into proper software, these things can be modified near 0, within reason, with no concern for distortion (provided output levels are set appropriately).
 
What's the point? That's no different in effect than peak normalizing to that same level. Whether you boost the gain before or after the A/D converter, either way you're boosting the gain, including the gain of the noise floor.

Hey, thanks for a great reply. I should have said more clearly: if your gear allows for a max of -70 dB. For instance, not micing properly, recording at a really low level, and then having to raise it way up in the DAW. I agree that for the most part gain is gain. But I'm still absolutely disagreeing with having 70 dB of headroom and deliberately cutting the intake down to -16 or whatever "because that's what I want it at to mix it".

And you're right about me talking all absolute and whatnot. I learn the black-and-white stuff first, and then descend into the gray areas. :)
 
But I'm still absolutely disagreeing with having 70 dB of headroom and deliberately cutting the intake down to -16 or whatever "because that's what I want it at to mix it".
You're not losing any "headroom" by throttling back the digital level. You have more than enough room on the bottom end to push things down (some 140 dB in 24-bit), and just like normalizing, pushing down the gain does not change the dynamic range at all.

It's not just that -16 is where one "wants to mix at". It's the fact that you're going to need to pull things down to avoid clipping when you mix the tracks together. Why push them into the converter high when you just have to pull 'em back down again when mixing? It's just extra gain manipulation for no reason.
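The "room on the bottom" figure is just arithmetic on the word length; here is a quick check of the theoretical fixed-point range (ignoring dither and real-world converter performance):

```python
import math

def dynamic_range_db(bits):
    # Theoretical dynamic range of an n-bit fixed-point word:
    # 20 * log10(2^n), about 6.02 dB per bit.
    return 20 * math.log10(2 ** bits)

print(round(dynamic_range_db(16), 1))   # 96.3 dB for 16-bit
print(round(dynamic_range_db(24), 1))   # 144.5 dB for 24-bit
```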

(And no discourses about floating point here; that's just cheating. There's no reason to go above 0dBFS even if floating point lets you. Neither the 'drop the master bus' dodge; many summing engines will clip *before* they get to the master bus faders. Tip: the only time I ever use the master faders for anything is a master fade at the end of a song. Otherwise they collect dust.)

G.
 
You're not losing any "headroom" by throttling back the digital level. You have more than enough room on the bottom end to push things down (some 140 dB in 24-bit), and just like normalizing, pushing down the gain does not change the dynamic range at all.
G.

I'm with you on that, for sure. I meant the whole mindset of recording low with the intention of beginning the mixing process at -16 or whatever. I wasn't suggesting that there would be a loss by reducing in the DAW.

My obvious next question, then, is: what is the ideal number you shoot for with a raw WAV file prior to manipulation? I have been at -6 to -3, and the whole high-teens discussion threw me for a loop.
 
My obvious next question, then, is: what is the ideal number you shoot for with a raw WAV file prior to manipulation? I have been at -6 to -3, and the whole high-teens discussion threw me for a loop.
Well, there's another thread in this forum about recording too hot or something like that, that I personally just dropped out of after SEVEN pages of nothing but a merry-go-round of a debate going nowhere between two opposing sides to that question. You might want to look that thread up and - if you have the testicular fortitude - read through the whole nightmarish thread.

The short synopsis though is that one side comprised mostly of tactical thinkers says it just doesn't matter what level you record at. The gear on the analog side is robust enough to handle just about any reasonable signal level, and on the digital side, one bit register is the same as the next, so the digital recording level is virtually irrelevant as long as you avoid clipping.

The other side, taking a more strategic view, says that you're best off to play the gain structure game all the way through the signal chain, and that just like there's a reason for using 0VU analog as a reference level, there's a reason to continue that reference line all the way through digital mixing as well; that doing so fairly automatically and easily avoids clipping and fairly automatically and easily results in mix levels comparable to historic analog norms and highly conducive to further quality digital mastering.

It's the second approach that yields the "teens" numbers you're referring to. It has to do with the "exchange rate" that the analog-to-digital converters are calibrated to. Most converters are calibrated so that 0VU analog converts to somewhere between -14 and -20dBFS on the digital scale. As most seasoned engineers generally keep their average (not peak) signal levels in analog somewhere within a few dB of 0VU (usually a couple of dB lower than that, but not always), letting that signal pass through the converter to digital with unity gain at that point results in a digital signal level that averages somewhere between the mid teens and mid twenties negative dBFS on the digital side. What's attractive about that is that it usually happens to be just about right for getting the maximum digital signal level while having enough room for the peaks not to clip (often in the -6 +/- 3 range you referred to) and still leaving some good room for mixing using traditional balanced track mixing methods.
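The "exchange rate" is easy to put in numbers. The -18 dBFS calibration below is just one common choice within the -14 to -20 range mentioned above, not a universal standard:

```python
def analog_to_dbfs(level_re_0vu_db, calibration_dbfs=-18.0):
    # Map an analog level, expressed in dB relative to 0 VU, onto the
    # digital scale, given where the converter pins 0 VU in dBFS.
    return calibration_dbfs + level_re_0vu_db

# Average program material riding around 0 VU lands in the negative teens:
print(analog_to_dbfs(0.0))    # -18.0 dBFS average
# Peaks 12 dB above that average still clear full scale comfortably:
print(analog_to_dbfs(12.0))   # -6.0 dBFS peak
```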

It's the first approach that says that whole 0VU thing is simply a bunch of baloney over-thinking of the whole thing, and that one does not need to worry so much about gain structuring on the analog side, and not at all on the digital side. The 0VU reference idea may be pretty to some, but it's pretty useless when you look at the fine print.

I'm not going to argue it one way or another any more for now; that last thread burned me out on it for a while ;). You can decide for yourself which way of looking at the whole process works best for your own personal sensibilities; whether you're more of a tactical guy or a strategic guy. Whichever one works for you.

G.
 