How loud? What the articles say vs what is being done.

gbowling

New member
All over the internet there are articles about how loud to make your songs. Most of them say around -14 LUFS integrated, or somewhere close.

But when you look at chart-topping records, it's a totally different story. Here's an article analyzing the songs from the 2021 Grammy Awards.


You'll see that these songs range from a whopping -5.6 LUFS integrated to -9.9 LUFS integrated. Obviously all of these were mastered by top mastering engineers who have been doing this for years.

So what gives? Top mastering engineers and companies write articles about mastering to somewhere around -14 LUFS, but then master much, much hotter than that.

For what it's worth, I took one of my recordings and worked on it to see if I could master it to around -6 LUFS. I managed it, but to get there while keeping a reasonable mix and a tolerable level of distortion, I had to put a limiter on each individual track and smash it as hard as possible, and reduce track clutter, reverb, and other elements to keep all sorts of unwanted sounds from rising to the top. In the end I achieved a somewhat OK mix at around -5.5 LUFS integrated.

The other good news is that I learned a few things along the way, which was the intended purpose of the exercise for me. But it does raise the question: why do we keep saying "master to -14" when the top dogs in the industry are mastering as loud as they possibly can?

gabo
 

Mickster

Well-known member
I agree with you. So many songs are at -10.0 LUFS or louder, it seems. I sometimes use Ozone to do some "mastering". It has a fairly transparent limiter and a few different mastering options in its sort of "automatic" mode. The results do seem to vary from about -14.0 to as much as -9 or -10 LUFS. I can't imagine trying to get my master to -6.0 LUFS, however! The one lesson I've learned (here) is to keep your tracks under -16 dB if you're intending to really crank up the final master. That rule seems to allow for less overall "change" to the character of the sound.

Just more rambling worth 2 cents or less.

Mick
 

TalismanRich

Well-known member
The values in the article have nothing to do with the "-14 LUFS" target for streaming services. The analysis used uncompressed lossless files, like you would find on a CD or one of the higher quality services.

The -14 LUFS target is for services like YouTube, Apple, Spotify, etc. If you send them a file that is -5 LUFS, they will simply turn it down until it meets their criteria.
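That "turn it down" step is just a gain offset: target loudness minus measured loudness. A minimal sketch, assuming the track's integrated loudness has already been measured with an ITU-R BS.1770-style meter (the -14 LUFS target and the example values are illustrative; each service has its own policy):

```python
# Sketch of streaming-service loudness normalization (assumed behavior).
# The integrated loudness is assumed to be pre-measured; numbers are examples.

def normalization_gain_db(integrated_lufs: float, target_lufs: float = -14.0) -> float:
    """Gain (in dB) a service would apply so the track plays at the target."""
    return target_lufs - integrated_lufs

# A master smashed to -5 LUFS simply gets turned down 9 dB on playback:
print(normalization_gain_db(-5.0))   # -9.0
# A quieter -16 LUFS master may get turned *up* (policies vary by service):
print(normalization_gain_db(-16.0))  # 2.0
```

The point being: any loudness past the target is thrown away at playback, while the distortion baked in to get there stays.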
 

gbowling

New member
One thing that happens when you compress things this much is that low-level sounds and wash rise up to the same level as the primary sounds of a track.

For "in the box" songs built from loops, virtual instruments, and so on (which is what most of the Grammy songs are these days), you can get away with more. Those tracks don't have much in the way of unwanted sounds.

Recording with microphones, in a room, with real instruments presents a bigger challenge. Those tracks have ambient sounds and room artifacts that can get out of control if you slam them too hard, especially a live drum recording. To get a reasonable sound at those levels on my drum recordings, I actually had to mute the room mics entirely; they just created too much clutter when smashed this hard. In fact, the overhead mics started to sound like room mics. The close mics (snare, toms, hi-hats, etc.) all had to be gated and tweaked to the max to eliminate as much of the bleed as possible, because they too cluttered the mix when smashed.

So what did I learn after treating the tracks to get the best -6 LUFS sound I could? Well, the first thing I did when I was finished was remove all the track limiting and compression, and all the bus limiting and compression, but leave the tracks as I had tweaked them. Some of the tracks were a bit out of balance, so I adjusted the faders to get things leveled out again.

From there, I just sat back and listened to the new mix. There was an openness and air about it. There were a few things I didn't like: a few elements, like the snare, needed a bit more reverb, since I had taken some out when the compression was exaggerating it. And there were a few things I added back, like a bit of the room mics on the drum kit.

But to be honest, after I made those few tweaks, the overall mix sounded better than what I had before. All that compression revealed things that, while not completely offensive on their own, built up across multiple tracks and colored the final mix. Cleaning the tracks of all that produced a more open, better mix.

It also gives me confidence that if my new mix gets compressed "out in the wild" by some service or format, it will survive that better than it would have before.

For me it was a useful exercise.

gabo
 

Massive Master

www.massivemastering.com
Just goes to show what I've been yelling about for decades. The "loudness war" is, and has always been, a pissing contest between bands and labels. And (I just posted about this in another thread) I can hardly believe that the streaming services are the leading edge of the sword that might actually put an end to it, if word ever filters down to the labels and artists that they're destroying their music.

[EDIT] I'll leave my thoughts on that article alone. But let's just say that there is some questionable information in there right from the start.
 

gbowling

New member
Yeah, I agree about the questionable info, but the overall gist, that these songs are totally smashed to bits to be as loud as possible, is correct.

It is sort of a pissing contest between the bands and labels, but both want as much play and as much revenue as they can get off a recording, so they're both after the same thing. Unfortunately, it's a fact that if two songs are played back to back, the listener will perceive the louder one as better, at least to some degree.

And in today's world of playlists, not albums, your song is almost always going to get played next to songs from other artists. So the loud one wins.

The only real limit is a digital word of all 1s. But we're inching closer to a time when all devices, not just streaming platforms but the actual playback hardware (phones, receivers, etc.), normalize the songs in your playlist to equal volume. That is a bit tricky to do cheaply in a device, but it's getting better.

That is going to continue to take pressure off making them loud for marketing's sake.

gabo
 

bouldersoundguy

gbowling said:
> But we're inching closer to a time when all devices, not just streaming platforms, but the actual device (phones, receivers, etc.) all normalize the songs in your playlists to all be equal volume. That is a bit tricky to do cheaply in a device, but it's getting better.
It could be done with something like CDDB (the CD database) that recognizes ripped songs and assigns a dB offset value, much like the dialnorm system from Dolby. The device wouldn't need to do any processing other than applying the gain offset and perhaps a basic limiter.
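The device-side scheme described here, looking up a stored offset, applying the gain, and catching any resulting overs with a limiter, could be sketched like this. This is a toy illustration: the offset value is hypothetical, the hard clip stands in for what would really be a proper lookahead limiter, and samples are assumed to be floats in the range -1.0 to 1.0:

```python
# Toy sketch of device-side loudness normalization: apply a pre-assigned
# dB offset (as in Dolby's dialnorm concept), then a crude limiter.

def db_to_linear(db: float) -> float:
    """Convert a dB gain value to a linear multiplier."""
    return 10.0 ** (db / 20.0)

def apply_offset_and_limit(samples, offset_db, ceiling=1.0):
    """Apply the stored gain offset, then hard-limit to the ceiling.
    A real device would use a lookahead limiter, not a hard clip."""
    gain = db_to_linear(offset_db)
    return [max(-ceiling, min(ceiling, s * gain)) for s in samples]

# Turning a track down 6 dB (roughly halving sample values):
quieter = apply_offset_and_limit([0.5, -0.9, 0.2], offset_db=-6.0)
# Turning a track up can push peaks into the limiter:
raised = apply_offset_and_limit([1.0, 0.3], offset_db=6.0)
```

The appeal of the lookup approach is exactly what's said above: the heavy lifting (measuring loudness) happens once in a database, and the playback device only does a multiply.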
 

rob aylestone

Well-known member
The trouble is that for some genres volume consistency is very important and the music needs this scrunching and squashing, while other types don't need it or benefit from it. The notion that you spend all your time making minute tweaks until it's perfect, then just slap on a preset, is warped, and letting a streaming service do it with no user control is even more stupid. One voice and one guitar can never be treated the same way as a multitrack with dozens of separate sources, but this is exactly what we seem to accept.
 

bouldersoundguy

rob aylestone said:
> The trouble is that for some genres volume consistency is very important and the music needs this scrunching and squashing but other types do not need it or benefit from it. The notion that you spend all your time making minute tweaks and nice it's perfect just slap on a preset is warped and letting a streaming service do it with no user control is even more stupid. One voice and one guitar can never be treated the same way as a multitrack with dozens of separate sources but this is exactly what we seem to accept.
Generally speaking, the dynamics of the production aren't changed, especially for stuff that's very compressed. It just gets turned down so it's at about the same perceived loudness as other songs in the stream; that way the listener doesn't need to keep adjusting the volume knob. The only songs that get dynamically altered are the ones with unusually high peak-to-average ratios (crest factor), which have a limiter applied so their level can be raised without being way quieter than everything else. Radio stations have been doing that for many decades, only much more aggressively.
 