peaking question

Thread starter: Nathan1984 (New member)
Ok, this is probably a stupid question, and one I never asked before because I thought I had a fair grasp on what's going on, but maybe I'm wrong. When recording anything, where should my signals sit on the VU meter in my DAW? I always aim to peak around -12 dB. I don't know if I should go higher, or if I was right for trying to peak at -12 dB. Should I be aiming for the 0 mark and putting a limiter/compressor on it? I don't know what made me think about this; I guess I was just curious whether I've been doing things smart, or being an idiot for not asking sooner.
 
During tracking you don't want your peaks to exceed around -12dBFS. Hope that answers your question.
 
Yeah, I am. I do everything digitally, and I have been getting decent results. I just wanted to make sure that's what I should be doing. Sometimes when I do vocals, after I master my tracks, I get clipping noise. Any idea why?
 
Do you have the faders pushed up a little too high? Or you have some transients that are peaking you out; if that's the case you can use a limiter in the track inserts to flatten some of that out. Are you peaking all the time or just once in a while? Just when the vox come in? You can pull the volume of the music back when the vox are on via a sidechained compressor. Just a little subtle drop; nobody ever notices, cuz the volume of the vox makes up for the lowering of the music volume.

Say you have 16 tracks at -12dBFS; they might sum up to > 0, i.e. clipping. So shoot for -15dB instead. If you only have 4 or 6 tracks, you can get away with -10 or so, cuz there aren't that many tracks contributing to the final signal. But that's still pushing it kinda loud; most converters run optimally at -12 to -18. On the other end of the spectrum, if you had like 80 tracks going at once or something ridiculous, you'd probably have to do -21 or something.
 
Perhaps it's just me, but I wouldn't take the number of tracks I'm planning to record into account in deciding on peak levels when tracking. When you mix, the mixer (be it hardware or software) is perfectly capable of keeping the summed tracks within the limits of your system. I wouldn't throw away resolution at the tracking stage, particularly since the data path in your software has more resolution anyway.

If you were trying to avoid forcing the mixing algorithm to reduce the overall level, you'd have to keep the peaks way down. Four tracks, each at -12 dBFS would sum to 0 dBFS if your DAW were so stupid as just to add up the values. Sixteen tracks (2^4), at -12dBFS would sum to +12 dBFS.
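The arithmetic is easy to check by converting dBFS to linear amplitude and back (a sketch of the worst case, i.e. all peaks landing in phase at the same instant):

```python
import math

def dbfs_to_amp(db):
    """Convert a dBFS level to linear amplitude (full scale = 1.0)."""
    return 10 ** (db / 20)

def amp_to_dbfs(amp):
    """Convert linear amplitude back to dBFS."""
    return 20 * math.log10(amp)

# Worst case: N tracks peaking at the same instant, fully in phase.
for n_tracks in (4, 16):
    summed = n_tracks * dbfs_to_amp(-12)
    print(f"{n_tracks} tracks at -12 dBFS -> {amp_to_dbfs(summed):+.1f} dBFS")
```

(Real DAWs mix at higher internal precision, so the sum doesn't actually clip until it hits a fixed-point output stage.)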

Using a limiter or compressor will only help you with the potential clipping problem when tracking if it's in front of the converter, i.e. it needs to be an analog/hardware device.

sometimes when I do vocals, after I master my tracks, I get clipping noise, any idea why?
You're pushing the level of the mix too high in the course of mixing (pull down the master fader), or one of the processors used in mixing or mastering is producing clipping distortion (be careful with your processors).
 
I will definitely take all you guys advice, everything helps.
 
Oh, and as for my peaking issue, here's what it is. I had my music mixed, then I added vocals. Only when the vocals come in on my mastered mix does it peak, and only on parts. I didn't use a de-esser, for one. I used a compressor and a gate, but I think maybe I used too much gain on my vocals and it caused it to peak. I don't know, but it only happens when the vocals come in; once there isn't any singing, it cleans back up.
 
Perhaps it's just me, but I wouldn't take the number of tracks I'm planning to record into account in deciding on peak levels when tracking. When you mix, the mixer (be it hardware or software) is perfectly capable of keeping the summed tracks within the limits of your system. I wouldn't throw away resolution at the tracking stage, particularly since the data path in your software has more resolution anyway.

I'm sorry but I don't agree with this. Unless he's recording classical music, or music that requires 120dB of dynamic range, it is not going to help him to maximize resolution during tracking with 24-bit audio. I am assuming by saying you "wouldn't throw away resolution" you mean that he should record as close to 0dBFS as possible?

This is advice from a bygone era, I'm afraid. Tracking with levels approaching 0dBFS will only serve to create a gain staging nightmare later on down the line and will almost definitely cause overloading somewhere, particularly when processing. I can list MANY reasons why you should keep your levels conservative, the big one being that if he tracks hot, he will have to attenuate each signal within the software mixer anyway to combat clipping at the master bus, which will produce inter-sample distortion at the DA converter. He will also lose bits (resolution) after attenuation anyway. Also, plugins operate more efficiently at more conservative levels. So will his CPU. Ever summed 100 tracks digitally? Try it with slammed levels. You will almost certainly experience a smearing and blurring effect, particularly with transients. And I'm sorry, but merely turning down the master bus is not going to solve any distortion that manifested as a result of plugins or processing elsewhere in the mixer, either. It will only serve to scale down the output levels of each channel post-fader before the master bus.

If you were trying to avoid forcing the mixing algorithm to reduce the overall level, you'd have to keep the peaks way down. Four tracks, each at -12 dBFS would sum to 0 dBFS if your DAW were so stupid as just to add up the values. Sixteen tracks (2^4), at -12dBFS would sum to +12 dBFS.

Yes, for pure sine waves. However, we do not mix pure sine waves. We mix sources that have varied spectral responses that will exhibit varied results when summing based on level alone.

You're pushing the level of the mix too high in the course of mixing (pull down the master fader), or one of the processors used in mixing or mastering is producing clipping distortion (be careful with your processors).

Wouldn't it be more practical to just record at a more conservative level and audition all processing at the same perceived level? That way you would never have to touch a fader, provided your gain staging is correct, and thus never lose resolution?

In most of my mixes, if I've scheduled my gain during tracking correctly, the faders stay at unity both at the channel and master bus level. It is very seldom that I make a level change at a fader. Ever since I started making sure that my tracking levels were conservative and related well with the rest of the recording, I found a MARKED improvement in the results.

I do agree, however, that you should be careful with your processors.

Cheers :)
 
What Mo Facta said. If the first thing I have to do is pull every one of my faders back to -3 to stop the main mix from clipping, then IMO I simply recorded all the tracks 3dB too hot. My preamps are low end, and I have no SNR problems with tracks that peak in the -20s.

3 db louder is twice as loud, so theoretically,

2 tracks at -3 sum to 0
4 tracks at -6 sum to 0
8 tracks at -9 sum to 0
16 tracks at -12 sum to 0
32 tracks at -15 sum to 0
64 tracks at -18 sum to 0
128 tracks at -21 sum to 0

which is kinda how I gauge my tracking volumes. And yes, if I know I have only 4 tracks I WILL record em extra loud. Maybe not -6, but I wouldn't feel bad about -9 or -10.. And you want a little headroom anyway, especially if you're gonna master the mixes later. I don't have any projects with 64+ tracks in em, but if I did, I'd track at -18 or even lower.
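For what it's worth, that table works out almost exactly if the tracks combine as uncorrelated power (10·log10 terms) rather than as worst-case in-phase peaks; a quick sanity check:

```python
import math

# Each doubling of track count is paired with a 3 dB drop per track.
# Under uncorrelated (power) summing, every row lands at ~0 dB.
for doublings in range(1, 8):
    n_tracks = 2 ** doublings          # 2, 4, 8, ... 128
    per_track_db = -3 * doublings      # -3, -6, ... -21
    total_power = n_tracks * 10 ** (per_track_db / 10)
    print(f"{n_tracks:3d} tracks at {per_track_db} dB -> "
          f"{10 * math.log10(total_power):+.2f} dB")
```

(The small residue comes from 3 dB being an approximation of 10·log10 2 ≈ 3.01 dB.)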
 
Oh, and as for my peaking issue, here's what it is. I had my music mixed, then I added vocals. Only when the vocals come in on my mastered mix does it peak, and only on parts. I didn't use a de-esser, for one. I used a compressor and a gate, but I think maybe I used too much gain on my vocals and it caused it to peak. I don't know, but it only happens when the vocals come in; once there isn't any singing, it cleans back up.

Don't know if the word "mastered" is being misused here, but if you're dynamically compromising a mix before adding the vocals to that mix, you're likely to have all sorts of negative issues.
 
I am assuming by saying you "wouldn't throw away resolution" you mean that he should record as close to 0dBFS as possible?
You are assuming something I didn't say.

What I was talking about was the specific topic of taking the track-count into account in determining optimum levels.

The rest of your post appears to be littered with things that don't make a lot of sense, including:

will almost definitely cause overloading somewhere, particularly when processing.
will produce inter-sample distortion at the DA converter.
"Inter-sample distortion"? Do you mean quantization noise?
He will also lose bits (resolution) after attenuation anyway.
plugins operate more efficiently at more conservative levels.
So will his CPU.
I don't know what "efficiently" means exactly. In any event, both your software and your hardware are going to process each sample at its full bit-depth, even if you don't use all the bits. It's not easier or more efficient for a CPU or software to do 0x00F1 + 0x00F5 (say) than to do 0x0F13 + 0x0F54.

You will almost certainly experience a smearing and blurring effect, particularly with transients.
You're ignoring a number of things, particularly that mathematical operations reduce precision; whenever you do any processing of a set of values, you want to carry as much precision as far as you can, and not truncate precision until you have to. Obvious (and perhaps too simple) example: Say you want to find the sum of two integers, and (for some reason) you need to round the result to the nearest ten. Now say the integers are 4 and 3. If you round them to the nearest ten before the operation, you'll get 0+0=0. If you maintain the precision until the end, you'll get 4+3=7, which rounds to 10 and is, obviously, considerably more accurate. The same happens in your computer, only you're not rounding by tens but by bits, and you don't do one calculation but thousands.

Your analysis seems to be based on the assumption that rounding at the end produces more problems than rounding at the beginning. The opposite is the case.
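The round-early versus round-late difference is easy to demonstrate with that toy example (rounding to the nearest ten standing in for truncating bits):

```python
def round_to_ten(x):
    """Round to the nearest multiple of ten."""
    return round(x / 10) * 10

a, b = 4, 3
early = round_to_ten(a) + round_to_ten(b)   # round the inputs first
late = round_to_ten(a + b)                  # keep precision, round once

print(early)  # 0  (4 and 3 both round down to 0)
print(late)   # 10 (7 rounds up to 10)
```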

Yes, for pure sine waves. However, we do not mix pure sine waves. We mix sources that have varied spectral responses that will exhibit varied results when summing based on level alone.
Nope. I don't think you understand the math. Actually, it's exactly the opposite: if you were to sum two identical sine waves (same frequency, same level), the result could peak anywhere between double the level and zero, depending on phase. If the "spectral response" is varied, you're going to wind up peaking at double the level of each.
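That's straightforward to verify numerically; a sketch in pure Python (the sample count and frequencies are arbitrary illustration values, not anything from the thread):

```python
import math

N = 100000
t = [i / N for i in range(N)]

def peak(samples):
    """Peak absolute level of a sample list."""
    return max(abs(s) for s in samples)

sine = [math.sin(2 * math.pi * 100 * x) for x in t]
inverted = [-s for s in sine]                         # 180 degrees out of phase
other = [math.sin(2 * math.pi * 137 * x) for x in t]  # a different frequency

print(peak([a + b for a, b in zip(sine, inverted)]))  # full cancellation
print(peak([a + b for a, b in zip(sine, sine)]))      # coherent doubling
print(peak([a + b for a, b in zip(sine, other)]))     # still close to double
```

Two different frequencies still line up in phase at some instant, so the combined peak approaches double the per-track level regardless of spectral content.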

The real advice:

With a 24-bit AD converter, there's no overwhelming reason to get pushy about getting super close to 0 dBFS. If you track at (say) -12 dBFS, you're throwing away 2 bits, and you still have 22 bits of precision, which - in the real world - is plenty. You're not going to hear the difference between a 22-bit recording and a 24-bit recording. The downside of pushing for 0 dBFS is that you're either going to go too far, or you're going to spend a lot of time fiddling with levels and retracking stuff that peaked a little more than you expected.

That advice has nothing to do with how many tracks you're planning to include in your finished product.
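The "throwing away 2 bits" figure follows from each bit of a linear PCM word being worth about 6.02 dB (20·log10 2):

```python
import math

db_per_bit = 20 * math.log10(2)   # ~6.02 dB of dynamic range per bit
print(f"dB per bit: {db_per_bit:.2f}")

# Tracking 12 dB below full scale leaves the top ~2 bits unused...
bits_unused = 12 / db_per_bit
print(f"bits left unused at -12 dBFS: {bits_unused:.1f}")

# ...but a 24-bit word still has ~22 bits, i.e. ~132 dB of range.
print(f"remaining range: {(24 - bits_unused) * db_per_bit:.0f} dB")
```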
 
3 db louder is twice as loud, so theoretically
This is confused, for two reasons:

The less important one: "twice as loud" isn't the right concept, as "loud" doesn't have any meaning, other than a subjective description of perceived volume (and, if you've heard a 3 dB increase, it almost certainly doesn't sound "twice as loud:" for most people, it's barely perceivable). What would be correct to say is that "3 db is twice the power."

More important: We're not talking about power, we're talking about voltage. Six decibels is twice the voltage, because power varies with the square of the voltage (or, to put it simply, when you double the voltage, you quadruple the power).

If you were, say, to have 64 tracks (which I've never had, either), you'd need to track at -36 dBFS to be able to sum "dumbly" without exceeding 0 dBFS (not that your DAW actually does sum dumbly ... it doesn't). Unless you have some pretty fancy equipment, doing that would bring the noise floor so high, you'd wind up with something that sounded like it was recorded in a windstorm (or something).
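Both numbers fall out of amplitude (voltage) decibels, where each doubling costs 20·log10 2 ≈ 6 dB:

```python
import math

# Doubling a voltage adds 20*log10(2) ~ 6.02 dB; doubling power adds ~3.01 dB.
print(f"voltage doubling: {20 * math.log10(2):.2f} dB")
print(f"power doubling:   {10 * math.log10(2):.2f} dB")

# 64 tracks summed "dumbly" (worst-case in-phase peaks) give 64x the
# amplitude, so each track must sit 20*log10(64) ~ 36 dB below full scale.
headroom_needed = 20 * math.log10(64)
print(f"per-track level for 64 coherent tracks: -{headroom_needed:.0f} dBFS")
```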
 
I will explain what I mean by master; maybe I misused the term. What I did was record my guitars, double tracked. I use an e-kit with EZdrummer for my drum tracks. I mastered them with Ozone using the CD master exciter and widener preset. I brought the mixed-down track back into Sonar, recorded two tracks of vocals on top of that, and EQ'ed and maybe threw an Ozone preset on just the vocals. Exported it as a WAV file. It didn't clip while it was in Sonar; once exported, I get the crackling only on vocal parts.
 
I will explain what I mean by master; maybe I misused the term. What I did was record my guitars, double tracked. I use an e-kit with EZdrummer for my drum tracks. I mastered them with Ozone using the CD master exciter and widener preset. I brought the mixed-down track back into Sonar, recorded two tracks of vocals on top of that, and EQ'ed and maybe threw an Ozone preset on just the vocals. Exported it as a WAV file. It didn't clip while it was in Sonar; once exported, I get the crackling only on vocal parts.

Don't "master" the track until the mixing is done. You need the headroom throughout the process. But now that it's done, you could simply bring the "mastered" backing track down 12dB or so and mix the vocals to that, then re-"master" the whole mix.
 
Well, I wouldn't see why it would matter that I mastered the music; it only clips when the vocals come in, and once the vocals aren't in the mix, it stops. Shouldn't I just lower the vocals in the mix?
 
Well, I wouldn't see why it would matter that I mastered the music; it only clips when the vocals come in, and once the vocals aren't in the mix, it stops. Shouldn't I just lower the vocals in the mix?

No, you should put the vocals where they sound good, but since you're mixing them to a "mastered" backing track they end up clipping the output. Just turn down the backing tracks so they don't peak above -12dBFS and mix the vocals to that, then do your "mastering" on the resulting mix.
 
You were right on man, thanks. I did just that, and sure enough, no more distorting.
 
You are assuming something I didn't say.

What I was talking about was the specific topic of taking the track-count into account in determining optimum levels.

I'm still no closer to what you actually meant by "throwing away resolution" during tracking. Can you please explain further?

The rest of your post appears to be littered with things that don't make a lot of sense, including:

"Inter-sample distortion"? Do you mean quantization noise?

No, I do not. I mean inter-sample distortion, yes, a true phenomenon. I'm not going to take the time to explain it so please read this paper:

www.tcelectronic.com/media/lund_2004_distortion_tmt20.pdf

I don't know what "efficiently" means exactly. In any event, both your software and your hardware are going to process each sample at its full bit-depth, even if you don't use all the bits. It's not easier or more efficient for a CPU or software to do 0x00F1 + 0x00F5 (say) than to do 0x0F13 + 0x0F54.

I'm not clear on what you mean by "hardware", but your DAW will most likely process each sample at more than its intrinsic bit depth because most DAWs use at least 32-bit floating point precision. Many plugins use 48-bit and now some use 64-bit. In any case, all this does is add resolution on the bottom for processing (floating point processing exhibits a shifting noise floor which is signal dependent) and does not change the fact that any audio recorded with a modern AD converter will always be 24-bit.
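On the 32-bit float point: a single-precision float carries a 24-bit significand, so every 24-bit integer sample fits in a float in [-1, 1] without loss. A quick check (the struct round-trip just forces genuine 32-bit storage):

```python
import struct

def to_float32(x):
    """Round-trip a Python float through true 32-bit (IEEE 754) storage."""
    return struct.unpack('f', struct.pack('f', x))[0]

FULL_SCALE = 2 ** 23  # 24-bit signed PCM spans -2**23 .. 2**23 - 1

# Spot-check extremes and arbitrary values: each 24-bit sample survives.
for sample in (-2 ** 23, -1, 0, 1, 12345, 2 ** 23 - 1):
    normalized = to_float32(sample / FULL_SCALE)
    assert round(normalized * FULL_SCALE) == sample
print("24-bit samples round-trip exactly through 32-bit float")
```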

And yes, mathematically it all makes sense on paper. It's binary arithmetic after all. Yet we still have miscalculations due to developer error, particularly in 3rd party plugins and even more particularly when the signal approaches 0dBFS. As explained to me by a plugin developer that I know and trust:

"Plug-ins internally represent the (audio-) sample values with floating-point values between -1 and +1. Those 2 values, when put on the output of a plug-in, correspond to 0db (+1 is the maximum positive phase, -1 is the maximum negative phase. Think of it as the maximum values of a sine). For ease of explanation I use +1 below, but for -1 it's the same.

Depending on what the plug-in does there's a lot of mathematics inside, additions, multiplications and whatnot. This processing results often in (internal) temporary values bigger than +1. That would 'normally' be not a problem as long as after all those internal processing steps everything's back to -1 ... +1, but...

...a lot, and with that I mean a LOT, DSP algorithms are designed to work ONLY in this range. Some work for all values, some more work for values slightly above +1, some work bad outside that range, some just crash or start to oscillate.
"

Besides all that, I've personally heard a HUGE improvement in the performance of plugins when operating under -10dBFS. By performance I mean the cumulative aggregate of perceived distortion at the end of a mix, and particularly during mastering. It all piles up, and if you have enough of it your mix will suffer and fall apart.

You're ignoring a number of things, particularly that mathematical operations reduce precision; whenever you do any processing of a set of values, you want to carry as much precision as far as you can, and don't truncate precision until you have to.

Precision isn't something you truncate; the term precision refers to the overall operating bit depth of an algorithm. You truncate bits below the LSB (least significant bit) when the first quantization tier is not reached by a given sample. An example would be converting a 24-bit recording of a cymbal hit to 16-bit: at the very end of the sample the audio distorts out (truncation distortion) as the LSB of 16-bit has been reached. The result is a square wave, albeit at a very low level. This is truncation.
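The cymbal-tail scenario can be sketched in code: truncate a fading sine from 24-bit to 16-bit (drop the low 8 bits, no dither) and the quiet tail collapses onto a handful of hard quantization steps. The 48 kHz rate, 440 Hz tone, and decay constant are arbitrary illustration values:

```python
import math

def truncate_to_16bit(sample_24):
    """Drop the low 8 bits of a 24-bit sample (truncation, no dither)."""
    return sample_24 >> 8

# A 440 Hz sine at 48 kHz, fading from full scale to below the 16-bit LSB.
tail = []
for i in range(48000):
    amp = 2 ** 23 * math.exp(-i / 3000)   # exponential decay
    s24 = int(amp * math.sin(2 * math.pi * 440 * i / 48000))
    tail.append(truncate_to_16bit(s24))

# Around 0.6 s in, the fade spans only a few 16-bit steps: the smooth
# sine has become a tiny, hard-edged (square-ish) staircase.
print(sorted(set(tail[29000:31000])))
```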

Obvious (and perhaps too simple) example: Say you want to find the sum of two integers, and (for some reason) you need to round the result to the nearest ten. Now say the integers are 4 and 3. If you round them to the nearest ten before the operation, you'll get 0+0=0. If you maintain the precision until the end, you'll get 4+3=7, which rounds to 10 and is, obviously, considerably more accurate. The same happens in your computer, only you're not rounding by tens but by bits, and you don't do one calculation but thousands.

Your analysis seems to be based on the assumption that rounding at the end produces more problems than rounding at the beginning. The opposite is the case.

I'm sorry but I disagree, and I think that you are taking mathematics on paper as the authority over your ears. This is AUDIO after all. The bare fact is that there is around 120dB of dynamic range in 24-bit audio, and slamming your levels at any point WILL introduce distortion somewhere, be it at the input of your AD converter, within plugins, or at your DA converter. Furthermore, the issue of analog components in the chain hasn't even been touched on here either, and it's fair to say that manufacturers often skimp on these components in order to save costs; ironically, the cost for us is distortion. While you may be able to show me on paper that the mathematical theory is correct, the issue still remains that many analog amplifiers exhibit distortion WAY before they reach 0dBFS. This is another reason not to go near full scale, ESPECIALLY with prosumer gear.

Nope. I don't think you understand the math. Actually, it's exactly the opposite: if you were to sum two identical sine waves (same frequency, same level), the result could peak anywhere between double the level and zero, depending on phase. If the "spectral response" is varied, you're going to wind up peaking at double the level of each.

I don't think you got my point in that in the real world, things do not behave like they do in a mathematical equation. A kick drum peaking at -12dB combined with a cymbal peaking at the same level will not combine to double the intensity because they have different spectral responses. This was my point.

With a 24-bit AD converter, there's no overwhelming reason to get pushy about getting super close to 0 dBFS. If you track at (say) -12 dBFS, you're throwing away 2 bits, and you still have 22 bits of precision, which - in the real world - is plenty. You're not going to hear the difference between a 22-bit recording an a 24-bit recording. The downside of pushing for 0 dBFS is that you're either going to go too far, or you're going to spend a lot of time fiddling with levels and retracking stuff that peaked a little more than you expected.

That advice has nothing to do with how many tracks you're planning to include in your finished product.

I'm confused now. Are you saying that the OP SHOULD track at -12dBFS? Please can you explain this then?:

I wouldn't throw away resolution at the tracking stage, particularly since the data path in your software has more resolution anyway.

And just for the record, there's no digital converter out there that can even do 24 bits. Sure, the file ends up as 24 bits, but the best you can hope for is about 21 bits due to limitations in the design of ICs and other analog components. The cost is noise.

Cheers :)
 