Got some great takes, but the levels are too low. Best way to fix?

What DAW are you using? Pretty much all of them have a "normalize" function, which boosts the signal so that the largest peak sits at 0 dBFS, or at whatever level you choose. I use Reaper, where you can select multiple tracks and raise them all by the same amount, or boost each track to its own maximum level. It won't affect the S/N ratio; it merely raises the level by a constant amount across everything. As long as you have a good signal-to-noise ratio, you're good.
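
For anyone curious what that button actually does, here's a minimal sketch in Python/NumPy (my own illustration, not Reaper's actual code), assuming the audio is a float buffer scaled to ±1.0:

```python
import numpy as np

def peak_normalize(samples: np.ndarray, target_dbfs: float = 0.0) -> np.ndarray:
    """Scale a float audio buffer so its largest peak lands at target_dbfs."""
    peak = np.max(np.abs(samples))
    if peak == 0.0:
        return samples  # silent track, nothing to scale
    gain = 10.0 ** (target_dbfs / 20.0) / peak
    # One constant multiplier for every sample: the signal and the noise
    # floor rise together, so the S/N ratio is unchanged.
    return samples * gain
```

Every sample gets the same multiplier, which is why it behaves exactly like pushing a fader up by a fixed amount.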

Most of the time, I find that the noise floor is due to ambient noise in the room, not noise from the mic or the interface, unless you have a very noisy mic or were playing/singing extremely quietly.
For some reason, Mr. Insecurity chose to barf on his keyboard for half an hour rather than address your post. I was open to getting educated. All I got was sore ribs. :D
 
If I may, dude, let me do my best to expand on and explain my conclusions.

"What's that all about" are my 30 years of audio schooling here in the film, motion picture, and cue industry here in Hollywood. What's that all about" are my good fortunes to have worked (in the early days) with the incomparable Michael Omartian and his equally talented son Chris. Who by the way came to the same conclusions as I. These conclusions not only covered normalizing audio but also concluded the "maximizing" process equally degraded the original audio. "What's that all about" are my 23 Emmy wins and my 4 Pro-Max wins. What's that all about" are my fun and fond days at the Radford Lot in Studio City creating cues (and long-form cues) for the amazingly creative, brilliantly written, daytime drama, "Passions". "What's that all about" are my oh-so-challenging days working on and for the Ellen DeGeneres show. "What's that all about" was my battle for survival with Sony Pictures who have the single most picky and demanding Project Coordinators on the planet. "What's that all about" is, above all, I learned beyond every conceivable measuring stick, that mixing audio is singular and distillable to using one's ears.

In the end, I hope I've managed to move the understanding yardstick perhaps an inch further than "I have no idea what he's talking about." At this point, I'd consider that a mild win.

This forum, like most, is a melting pot of ideas and notions. That's really how it should be. What's perhaps odd or beyond the lines just may well produce mightily in the future. It has for me. I've submitted what I believe in, and that belief is based on 10 hours a day, 6 or 7 days a week, for 30 years. This post was met with an obvious "are you dumb" reception. In the end, I never post with any grandiosity; that's not my M.O. It would appear, however, that this group has surpassed my learnings and dismissed me as nonsensical. I'll gladly slink away and yield. Perhaps to a forum more on my level. Is there a forum called "Audio for Dummies"? I'll leave this place to the more enlightened and let the big guns have at it. :)
Gee, I simply asked in what way normalizing degrades the sound any more than raising the fader would, which you didn't answer. A simple explanation would have been much better received.
 
LOL^^^^Look at this^^^^^^Hahaha.......:D :D :D

Nobody asked for your resume. Share your insecurity with someone who cares.

As far as Normalizing is concerned, you still didn't answer the question...But thanx for the laugh. :)

LOL! Wow. :D
That's just it. I did share the answer: use your ears to make audio conclusions, and decidedly not an internet warrior's opinions. If you just "know better" than my shared beliefs, by all means use that as a base. Far, far be it from me to ridicule. Nobody asked for my resume, this is true. They did (implicitly) ask me for my opinion, which was wholeheartedly dismissed. As I said, I don't share my resume with anyone. Never needed it until here. The sharing was merely pushback, and you've already dismissed that as insecure and comical. So be it. Anyway, it must be me, since my responses and resources have never been met by folks with such contrary, concrete notions. Anyway, you're welcome for the laughs and wows. Let's just halt the vitriol, bury the bad vibes, and move on. I've had my fill. Nothing here by way of anything remotely positive.

Moving on👍
 
I use normalize in Samplitude sometimes, but if I have multiple tracks (to mix) and just one or two are too low, I put up Sam's mixer, shove the fader up a few dB to taste, then "save as" the result. "Save as" means the original recording is not changed, so I can always go back, but then Sam Pro X also has (infinite, I think?) undo.

However, if you have a track down at, say, -23 dBFS, it can be better to pull the rest down to match it. Don't forget, tracks sum at roughly +3 dB for each doubling, so you can soon be crowding 0 dBFS!
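
If you want to sanity-check that 3 dB-per-doubling figure, here's a quick toy test in Python/NumPy, using random noise to stand in for two uncorrelated tracks at equal level (my illustration, not anything Samplitude does):

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal(48000)  # "track" 1, unit RMS
b = rng.standard_normal(48000)  # "track" 2, unit RMS, uncorrelated with a

def rms_db(x: np.ndarray) -> float:
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

print(rms_db(a))      # ~0 dB reference
print(rms_db(a + b))  # ~+3 dB: each doubling of uncorrelated tracks adds ~3 dB
```

Strongly correlated material (the same part tightly double-tracked, say) can sum by up to 6 dB, so the real-world figure sits somewhere between the two.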

Jusfort, the normalize function in Samplitude allows one to increase level in precise steps. Not sure if all DAWs do that?

Dave.
 
I read this back, and some of the things Joseph says need him to explain how he drew his conclusions, because I've searched the usual audio technical databases and I cannot find the things he alludes to.

I think he is saying that using maths to increase the value of every sample in the stream adds compression? The snag in my head is that this is the method manufacturers and software designers use to control VCAs, grouping, and lots of other processing we use all the time, and compression as an artefact isn't mentioned anywhere that I see. His clear and solid understanding is that it does? If this is correct, where is the science to explain it? Every digital compressor uses the same maths. Do we now think the straight line they show is not straight but a curve?

Joseph and his colleagues think this is fact, based on their eminent client list and gong wins, but why is nobody else saying this happens? As they are doing audio for images, there are some BIG organisations who are usually very vocal on technical subjects. He needs to explain what he believes happens, and how, so we can all learn. I'm not saying he is wrong, just that he's telling us stuff we've never heard about.

The late Tony Waldron from Cadac would beat me over the head with science that often bucked trends, and it usually boiled down to physics we cannot hear or measure, or things we can hear and measure that are caused by things we cannot determine. Normalisation introducing non-linearity is a new one for me. I have always understood it to be the opposite: just an amplitude adjustment. If it introduces different amplitude adjustments at different frequencies, that's not what we believed. Bring on the data from trusted sources, so we can learn, please. Sadly, I have never heard of the eminent people you mention.
 
Rob, I cannot comment on digital systems, no edukashun. However, an analogue compressor only introduces distortion under 'dynamic' conditions. If we take a FET comp' for example, and we change the gate voltage from a static xV to yV, the attenuation will change by some amount, but that process will not introduce distortion*. When, however, the compressor is 'working', the gain changes in some proportion to the signal voltage. That is the very definition of distortion, but the effect is only noticeable at very low frequencies and certain attack times.
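
Dave's distinction (static gain change = clean, signal-tracking gain change = distortion) is easy to show numerically. Here's a toy sketch in Python/NumPy, where the 'compressor' is faked by letting the gain follow the instantaneous signal level (my illustration, not a model of any real unit):

```python
import numpy as np

fs, f = 48000, 100
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * f * t)            # pure 100 Hz tone

static = 0.5 * x                         # fixed gain: still a pure tone
dynamic = x * (1.0 - 0.3 * np.abs(x))    # gain tracking the signal itself

def harmonic_db(y: np.ndarray, k: int) -> float:
    spec = np.abs(np.fft.rfft(y)) / len(y)
    return 20 * np.log10(spec[k * f] + 1e-12)

print(harmonic_db(static, 3))   # at the numerical floor: no new harmonics
print(harmonic_db(dynamic, 3))  # well above the floor: gain modulation = distortion
```

The fixed-gain case is what normalization does; the signal-tracking case is what a working compressor does.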

By analogy (!) a normalizer just increases the 'gain' of the track by a fixed amount. What was at -23 dBFS becomes -0.3 dBFS or whatever. AFAICS it is just another digital 'fader'? I have an old version of Adobe Audition, and that has a 'peak find' function in its normalizer.

*Of course, said FET will introduce distortion (mainly 2nd harmonic) of itself; not THE most linear of things! A quadrant multiplier would do better... that's what's in THAT chips, I think?

BTW Rob, in Samplitude I can increase the displayed waveform height independently of the signal level. I thought most DAWs could do that?

Dave.
 
The only two ways I'd ever raise track levels are: 1) Clip gain. It maintains the sonics of the track without resorting to compression algorithms; far better (sonically) than normalizing, at least for preserving the integrity of the original track. 2) Some kind of decent input/output gain-structure plug-in. In this case, and for me, Sonimus A-Console or Sonimus N-Console is highly recommended, not only for the tasks being discussed here but for the gaggle of other components that solve problems and/or add improved sonics. Far better (sonically) than normalizing.
My understanding has always been that normalizing doesn't change the dynamics, just the level of a track. As someone who has lost a step with age, I challenge my understanding on the regular. In this case, I looked for what I might be missing in your statement.

From iZotope; I'm guessing they know a thing or two about audio processing.

What is Audio Normalization?

Confirms my understanding on the topic.

"Audio normalization simply adjusts the gain of an audio file so that some measurable feature—usually peak level or integrated loudness—matches a predetermined target level. The same amount of gain is applied to the whole file so that the dynamic range remains unchanged."

This left me trying to figure out where you think integrity is being lost. As Rob said, processing digital audio is just math. The issue with audio math used to be what to do with the least significant digits from that math (quantization error). This is where processes such as dithering and oversampling come in.
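
For completeness, dithering itself is simple: add a tiny amount of noise before rounding to the coarser bit depth, so the rounding error stops correlating with the signal. A minimal TPDF-dither sketch in Python/NumPy (an illustration of the general idea, not any particular DAW's implementation):

```python
import numpy as np

def requantize_16bit(x: np.ndarray, dither: bool = True) -> np.ndarray:
    """Round a float signal (-1..1) to 16-bit steps, optionally with TPDF dither."""
    q = 1.0 / 32768.0  # one 16-bit quantization step
    if dither:
        rng = np.random.default_rng()
        # Triangular (TPDF) dither, +/- 1 LSB: turns signal-correlated
        # quantization distortion into benign, constant-level noise.
        x = x + (rng.random(x.shape) - rng.random(x.shape)) * q
    return np.round(x / q) * q
```

A single gain change at float precision, as in normalization, leaves the quantization question to the final export stage, which is where a step like this would run.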

So are you talking about aliasing? Many plugins offer oversampling to avoid aliasing, which can introduce distortion or other artifacts into the recording. I've not known this to be an issue with normalization, but that's probably because it is not a regular part of my workflow.

I took a look at the manuals and specifications for the plugins you mention. These are typical sound-shaping plugins. They may sound fantastic, but I'm not sure how they maintain the integrity of the source.

I encourage you to clarify what I am missing here. No critique, just curiosity, as I might just learn something. Besides, I'm practically the king of not being able to clearly explain what I mean these days.
 
Hi All,

I am kinda new to recording and I am working on an album of solo material. I recorded an acoustic guitar song with vocals (separate tracks), and the takes were excellent. However, I realized afterwards that the levels I recorded at are low: I am ranging between -17 and -23 dB on the acoustic, for example. They sound great through my studio monitors when I turn them up, but I feel like the levels are too low for me to get a good mix level with the master fader. What is a good way to fix this without re-recording? Could I increase the gain with some of the plug-ins I am using, like compression or EQ? Any advice very much appreciated.
A simple way would be to add a bus for the track, route the channel to that bus, use the channel fader to set the gain/signal level feeding the new bus, and set up the plugins for that channel on the bus strip.

In reality it doesn't really matter what level the tracks are; it's how you work with them. I have done -20 dBFS sessions intentionally, just to see if it's any different from -10 dBFS and -4 dBFS. It is: the hotter the signal, the less adjustability there was on some plugins. Digital recording in a DAW is much more forgiving, because with analog hardware you had minimum signal levels to get past the physical signal-to-noise loss in the mixer.

Normalizing isn't that great an idea, because you gain up everything, including the noise floor. Keep this in mind.
 
So are you talking about aliasing? Many plugins offer oversampling to avoid aliasing, which can introduce distortion or other artifacts into the recording. I've not known this to be an issue with normalization, but that's probably because it is not a regular part of my workflow.
The only time I observed any digital signal anomaly in audio was in a -40 dBFS tracking experiment at 44.1 and 48 kHz: when boosting the track, I got clock bleed in the audio at -60 dBFS once the -40 dBFS signal was cranked up to -10 dBFS. That's the only time I've seen it, but it might be worse in some systems, depending on the converter.
 