Spotify lowered their playback loudness

Usually with loudness-controlled systems like this, mixes with lower LUFS and peaks near 0 dBFS get their peaks limited down so the loudness can be increased. High-LUFS mixes just get attenuated to match the standard.
 
Sonic consequences for loud mixes are entirely due to them being made loud in the first place. It's no different from a listener adjusting his volume to compensate. Files mastered to the LUFS standard of the streaming service will have the most dynamics without risk of getting limited down.
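To put rough numbers on the turn-down/turn-up idea (the -14 LUFS target and the figures below are just illustrative placeholders, not any service's published spec), the arithmetic looks something like this:

    # Hypothetical numbers, for illustration only.
    target_lufs = -14.0                   # assumed playback loudness target

    loud_lufs, loud_peak = -8.0, -0.3     # a "loud" master (integrated LUFS, peak dBTP)
    gain = target_lufs - loud_lufs        # -6.0 dB: simply turned down
    print(loud_peak + gain)               # -6.3 dBTP, no limiting needed

    quiet_lufs, quiet_peak = -20.0, -0.5  # a dynamic, quieter master
    gain = target_lufs - quiet_lufs       # +6.0 dB: would be turned up
    print(quiet_peak + gain)              # +5.5 dBTP: over full scale, so it gets
                                          # limited, clipped, or just not raised
                                          # all the way

Whether a given service actually applies that last step (limiting the peaks versus simply not turning the quiet track all the way up) is exactly what gets argued about further down the thread.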
 
Usually with loudness-controlled systems like this, mixes with lower LUFS and peaks near 0 dBFS get their peaks limited down so the loudness can be increased. High-LUFS mixes just get attenuated to match the standard.
Which is, I think, what I said, and the opposite of what miro is saying.
 
Which is, I think, what I said, and the opposite of what miro is saying.

It's not what I'm saying or how I'm interpreting it...it's what the people who are already working with the standard are saying...and I've read it in other places not just this one...and that quote is from their website...those are not my words. :)

Are you interpreting that to mean something different...?


Here's a basic read about the LUFS standard and its use.

Mixing and Mastering Using LUFS | Mastering The Mix

"But, as I will explain in this post, the future of music consumption will favour dynamic music over loud compressed music."
 
Here's another site:

5 Common Myths About Loudness Metering Debunked : Ask.Audio

And this is what they say in the very last paragraph on the page:

"What this means is that pushing for high RMS values and squashing out dynamic range will now actually work against your music when your “sausage” is played against music mixed to utilize the dynamic range afforded by the -23 LUFS mix headroom.

“Loud” over compressed and brickwall limited music - read: music with no dynamics - really cannot compete sonically with more dynamic material in the new standards."



That pretty much repeats what was said on the other website I linked...and I think it's pretty much how I've been interpreting it. Loud/compressed will sound inferior to more dynamic music that is already closer to or below the LUFS standard.
 
"Over compressed music" already sounds like ass, it won't be changed it'll just be quieter.

OTOH, "Under compressed music", IF it's going to get turned up to meet the same average, will have to get squashed down in order to not peak over 0.
 
"Over compressed music" already sounds like ass, it won't be changed it'll just be quieter.

OTOH, "Under compressed music", IF it's going to get turned up to meet the same average, will have to get squashed down in order to not peak over 0.

That sounds like the preset setup with FM radio...here, at least.
Anything with dynamic range won't be represented favourably. I know that's a totally different thing, but I guess their in-house limiting/compression is tailored for squashed masters.
 
"Over compressed music" already sounds like ass, it won't be changed it'll just be quieter.

OTOH, "Under compressed music", IF it's going to get turned up to meet the same average, will have to get squashed down in order to not peak over 0.

I don't get why you think "under compressed" music will have to get squashed (aka compressed)...?
The LUFS standard algorithm is not applying any processing at all...just a level change...from what I've read.

But you are right, the over compressed music will be much quieter, and due to its flat-line nature of little to no dynamic range...it won't just be quieter, it will sound "smallish" when compared to the dynamic stuff that gets its overall level raised so they both have the same relative loudness.

I think you can even test that easily in your own studio setup...just find some really crushed music that's out there...and then take a more old-school dynamic mix...and set them both to what sounds like about the same overall perceived loudness to your ears...and then A/B them...I'm pretty sure the dynamic mix will sound louder/punchier.

TBH...I'm happy to see this change coming...because when I take my mixes and apply some basic level boost, just to get them a bit louder...they're still not close to some mixes I've heard from people...here and elsewhere.
I can easily match their loudness...but at that point, I hear my mix level balance change, the tonal balance change, and the dynamics shrink...which I never really cared for, but at the same time felt was necessary to make it sound competitive.
Now that will never be needed...and while there will be holdouts who don't use these streaming services...I think over time the broader music listener base will reject those, because they will have been "retrained" to appreciate the more dynamic mixes at a more reasonable processed loudness (less compressed).
 
Anything with a crest factor over 14 dB will have to be limited to keep it from clipping, if it is brought up to that standard.
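The 14 dB figure follows from simple arithmetic, assuming a -14 LUFS playback target and a 0 dBFS ceiling (both just the commonly quoted numbers, not any official spec). If crest factor is taken roughly as true peak minus integrated loudness, then after a gain-only normalization the peak lands at about target + crest factor:

    -14 LUFS target + 14 dB crest factor =  0 dBFS   (right at the ceiling)
    -14 LUFS target + 16 dB crest factor = +2 dBFS   (2 dB of limiting or clipping)

So anything more dynamic than about 14 dB of crest factor can't be raised all the way to that target without some kind of peak processing.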
 
I hear what you're saying...but "they" are saying their algorithm does no processing of the audio, just basic level up/down.
So then where will that limiting come from...?

...or could "they" be lying to us about the algorithm's functions? :D

I'm going to do some more checking...like see if I can find any Bob Katz comments on the subject.
He's been a big proponent of this standardization, and the guy knows his stuff.

[EDIT]

Found this SoS article that actually mentions Katz...and it also does a good job of explaining most of the issues and the planned solutions...with examples.

The End Of The Loudness War? | Sound On Sound
 
If they don't limit it, the fixed-point nature of the file format will.

Is there any confirmation anywhere that these services actually do turn up content that falls below the "standard" at all? I've never really tested, but I thought YouTube was plenty happy to let your video be way too quiet, but will turn it down if it gets too loud. If that's the case in general, then nothing ever actually gets squashed by the service itself. It might be already squashed, which will get it turned down, but I think that's important information to have.

(Not really for me. Most of what I end up mixing and mastering is noise or punk or grind or some extreme version of metal where everything really is just supposed to be as loud as possible at all times, and it's usually tough to get a crest factor as big as 14. For my own stuff, I've been shooting for between 12 and 13 for over 20 years, but some of these kids today...)
 
Since I do mostly metal of some sort, my raw mixes will sit around that level. This will make mastering more about the sound of it than any level change on my stuff.
 
I'd hope the streaming services would only reduce levels, never increase them. Except maybe to peak normalize.

It ought to be like a Hippocratic oath the streaming services take: first, do no distortion...
 
I'm not 100% positive they would increase a low-level mix up to their new standard level...though I just seem to recall reading stuff a while back that implied that...but I could be wrong about that.

The real thrust of this is more about heavily compressed/loud mixes vs. mixes that were done already at, or close to, the new level standard, mixes that have lots of dynamic level content...and that's the big thing.
The SoS article I linked to does a good job of demonstrating what happens.

If they do normalize the very low-level mixes up to the standard level...I don't think it would hurt them at all. It would only allow them to sit comfortably in playlists against other mixes...and that's actually a good thing, no different than someone just constantly turning their car radio knob up/down to keep the perceived loudness the same.

I plan to use the Waves Loudness Meter plug-in on current/future mixes to see where they fall, though knowing from the past, I think most of mine already fall in line, considering I use an analog/tape front end, and then mix back out of the DAW through an analog/tape back end...so I'm looking at your old-school VU meters mostly.
Only when I pull the stereo mix back into the DAW do I apply some level boost...and that's where I want to see what the WLM shows.
 
If you want an exaggerated illustration of what I'm trying to say, it's a pretty simple experiment. Take two tracks - a heavily distorted guitar track and a raw (uncompressed) drum track. Normalize them both to -14 LUFS integrated. Observe where the loudest peaks are hitting. The guitar will likely be safely below 0 dBFS. The drum track will peak way above 0. What are you going to do about those peaks? Either you squash them down or they get clipped off.
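If anyone wants to run that exact experiment offline instead of by ear, here's a rough Python sketch using the pyloudnorm and soundfile libraries (the file names and the -14 LUFS target are placeholders, and this is only an illustration of the idea, not how any service actually implements it):

    # Normalize two files to the same integrated loudness (gain only, no
    # limiting) and see where the peaks end up.
    # Requires: pip install numpy soundfile pyloudnorm
    import numpy as np
    import soundfile as sf
    import pyloudnorm as pyln

    TARGET_LUFS = -14.0  # hypothetical target

    for path in ["distorted_guitar.wav", "raw_drums.wav"]:  # placeholder files
        data, rate = sf.read(path)
        meter = pyln.Meter(rate)                    # ITU-R BS.1770 loudness meter
        loudness = meter.integrated_loudness(data)  # integrated LUFS
        gain_db = TARGET_LUFS - loudness            # gain needed to hit the target
        peak_db = 20 * np.log10(np.max(np.abs(data))) + gain_db
        print(f"{path}: {loudness:.1f} LUFS, gain {gain_db:+.1f} dB, "
              f"peak after gain {peak_db:+.1f} dBFS")
        # Any peak above 0 dBFS here is what would have to be limited or
        # clipped if the track really were turned up that far.

The sustained distorted guitar should come out well below 0 dBFS after the gain; the uncompressed drums will almost certainly show peaks above it, which is the point of the experiment.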
 
I read the SoS article that Miro linked. What I got was that the process works on perceived loudness, so there's no point in squeezing and boosting to make it seem louder. You just get turned down. It sounded to me like dynamics were preserved. If so, that's terrific, and the sooner the better.

I wonder how this will affect legacy recordings? Will everything mastered in the last fifteen years sound like ass?
 
K. I've made my point. Not my problem if others can't get it.

What actually freaks me out about this is that some things really are supposed to be louder than others. It really doesn't make a whole lot of sense for an acoustic folk singer to be as loud as - louder on peaks! - a black metal mess. But OK, fine, if it's really just individual tracks on a shuffled playlist, go ahead and loudness-match them and let the listener turn it up or down as they see fit.

But what about listening to a full album in order like a normal person? When I'm mastering a group of pieces as an album, I will deliberately set the relative loudness between them. One song really should be louder than the next. I play these things against each other to help the album flow and move and make the statement it's supposed to make. So are they just going to override my deliberate aesthetic decisions? Am I supposed to accept as fact that Dr. Katz knows better than me how my record is supposed to flow? It bugs me. A lot. I haven't ever fucked with Spotify or iTunes, but one of these days a client might want to, and I'd like to believe they're not just going to fuck up the work I've done.
 
The whole point is to have the songs be at the same perceived level. In that case files with lower LUFS levels get raised. If their peaks are high enough they have to get lowered somehow, and just clipping them off would be stupid. They get limited down.

It's no big deal; radio stations have been doing the same for decades, but squashing things down to a much lower crest factor. What's going on in streaming is far more benign. You would have to have very dynamic masters for them to be substantially degraded, and if you know the target LUFS of the service, you can master accordingly to get optimum dynamics.
 