Master doesn't sound good on streaming platforms (hi hats are destroyed)

Thread starter: milosrogan (New member)
Hello,

I just joined this forum and thought I'd ask you something about my song.

I just finished recording, mixing, and mastering my first song. For a moment I thought it sounded decent, until it went live on streaming platforms. The hi-hats are completely ruined on Tidal; on some other platforms a little less so, but still far from the original. Here's the original .wav audio if you want to take a listen: https://drive.google.com/file/d/1fk0rXqyCNrSWt-X-ik1vlEO7LjjpgWdd/view?usp=sharing

And then:
- iTunes: Somewhat closest to original audio
- Spotify: Noticeably distorted and compressed hi hats
- Tidal: Completely distorted song

(I'm sorry if links to streaming platforms are forbidden on the forum? Please notify me if that's the case...)

Do you have any idea why this is? Should I cut everything from 17 kHz and up, since my overheads have some frequencies going on up there? Could this be because of poor limiting and level peaks?

I would really appreciate your help!

Thanks in advance,
Miloš
 
Tidal is lower volume; not sure if they just turned it down or put a compressor on it.
Spotify and your original sound the same to me - listening on the computer speakers I use all the time for Spotify.
Since you specify the hi-hats, I'm not sure what you're hearing. I had to really strain to distinguish the hi-hat from the brushed snare (at least that's what it sounds like - maybe it's a brush on the hi-hats?). Whatever it is, it doesn't have any real sound except noise to me.
 
And then:
- iTunes: Somewhat closest to original audio
- Spotify: Noticeably distorted and compressed hi hats
- Tidal: Completely distorted song
I would really appreciate your help!
Well, Spotify and Tidal are displaying intersample clipping - which means you mixed too loud for the platform. iTunes tends to be easier on your track.

Tidal Masters songs are streamed at 96 kHz/24-bit, BTW - which is different from Tidal HiFi, which streams at 1411 kbps (16-bit/44.1 kHz). It could be you're losing something in the translation - you might lower your master for Tidal by 1 dB.

As far as Spotify is concerned, louder tracks are more susceptible to extra distortion in the transcoding process - so again, lowering your mastering volume by 1 dB or so might help.
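For anyone curious what "intersample clipping" means in practice, here's a rough numpy sketch (my own illustration, not from any poster's toolchain): it upsamples a signal via FFT zero-padding to estimate the true peak a DAC or lossy decoder reconstructs between samples. A full-scale quarter-rate sine whose samples all land at about ±0.707 reads as roughly -3 dBFS on a sample-peak meter, yet its true peak is 0 dBFS.

```python
import numpy as np

def true_peak_db(x, oversample=4):
    """Estimate the true (intersample) peak in dBFS by upsampling
    via FFT zero-padding - similar in spirit to a true-peak meter."""
    n = len(x)
    X = np.fft.rfft(x)
    padded = np.zeros(n * oversample // 2 + 1, dtype=complex)
    padded[: len(X)] = X
    # irfft normalises by the (longer) output length, so restore amplitude
    y = np.fft.irfft(padded, n * oversample) * oversample
    return 20 * np.log10(np.max(np.abs(y)))

# A full-scale quarter-rate sine whose samples all land at +/-0.707:
n = np.arange(4096)
x = np.sin(2 * np.pi * 0.25 * n + np.pi / 4)

sample_peak_db = 20 * np.log10(np.max(np.abs(x)))  # about -3.0 dBFS
tp_db = true_peak_db(x)                            # about 0.0 dBFS
```

So a master whose sample peaks sit at 0 dBFS can easily reconstruct above full scale after transcoding, which is the distortion mechanism being described here.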
 
Thanks for answering, guys!

Tidal is lower volume, not sure if they just turned it down or put a compressor on it.
Spotify and your original sound the same to me - listening on the computer speakers I use all the time for Spotify.
Since you said you can't hear the difference between the original and Spotify, can I ask what audio quality setting you have on your Spotify account? Because when I listen from a free account (that would be 128 kbps or so), the difference is very, very noticeable. If you're listening at the 320 kbps setting and it sounds the same as the original, then I have other problems in the song besides maybe limiting...
Well, Spotify and Tidal are displaying intersample clipping - which means you mixed too loud for the platform. iTunes tends to be easier on your track.

Tidal Masters songs are streamed at 96 kHz/24-bit, BTW - which is different from Tidal HiFi, which streams at 1411 kbps (16-bit/44.1 kHz). It could be you're losing something in the translation - you might lower your master for Tidal by 1 dB.

As far as Spotify is concerned, louder tracks are more susceptible to extra distortion in the transcoding process - so again, lowering your mastering volume by 1 dB or so might help.
Honestly, I hope this is the issue - the loudness :) Because when I was testing my song before publishing, I exported at 128 kbps just to see if it would break or something. It sounded just as you'd expect 128 kbps to sound - a little lower in quality, but nothing like this version on the streaming platforms...

So, just to make sure: other than this, you can't hear any other issues in the original audio that could cause this problem?
 
Thanks for answering, guys!


Since you said you can't hear the difference between the original and Spotify, can I ask what audio quality setting you have on your Spotify account? Because when I listen from a free account (that would be 128 kbps or so), the difference is very, very noticeable. If you're listening at the 320 kbps setting and it sounds the same as the original, then I have other problems in the song besides maybe limiting...

Honestly, I hope this is the issue - the loudness :) Because when I was testing my song before publishing, I exported at 128 kbps just to see if it would break or something. It sounded just as you'd expect 128 kbps to sound - a little lower in quality, but nothing like this version on the streaming platforms...

So, just to make sure: other than this, you can't hear any other issues in the original audio that could cause this problem?


Well, the cymbals are blurry - that may contribute to the sound.
 
But for example, when I test the song with the Loudness Penalty tool, it shows this:

[Attachment: Capture_loudness.webp - Loudness Penalty screenshot]


If this is accurate, Spotify and Tidal should turn the volume down by about 0.3 dB, which doesn't seem like much for this amount of compression to happen? Or is it the level peaks in the song that are causing this problem? I don't think I fully understand how it works, so that's why I'm asking.
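A side note on how the Loudness Penalty numbers work: loudness-normalizing platforms apply a playback gain equal to their target loudness minus the track's measured loudness. Crucially, that turn-down happens at playback, after the lossy transcode, so a small penalty like -0.3 dB says nothing about distortion introduced during encoding. A tiny hedged sketch (the -14 LUFS default is the commonly cited reference figure; exact targets vary by platform and listener settings):

```python
def normalization_gain_db(track_lufs, platform_target_lufs=-14.0):
    """Playback gain a loudness-normalizing platform applies.
    Negative means the track is turned down. The -14 LUFS default is
    a commonly cited figure, not an official constant for every platform."""
    return platform_target_lufs - track_lufs

# A track measuring -13.7 LUFS integrated is turned down by about 0.3 dB:
gain = normalization_gain_db(-13.7)  # about -0.3
```

So a -0.3 dB penalty just means the track measured about 0.3 LU louder than the reference target; it doesn't cause, or prevent, transcoding artifacts.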
 
Thanks for answering, guys!


Since you said you can't hear the difference between the original and Spotify, can I ask what audio quality setting you have on your Spotify account? Because when I listen from a free account (that would be 128 kbps or so), the difference is very, very noticeable. If you're listening at the 320 kbps setting and it sounds the same as the original, then I have other problems in the song besides maybe limiting...

Honestly, I hope this is the issue - the loudness :) Because when I was testing my song before publishing, I exported at 128 kbps just to see if it would break or something. It sounded just as you'd expect 128 kbps to sound - a little lower in quality, but nothing like this version on the streaming platforms...

So, just to make sure: other than this, you can't hear any other issues in the original audio that could cause this problem?

Free Spotify account, which I assume is the 128k rate.
 
It's chilled-out brush jazz - I have no idea why you even tried to get it that hot. Cymbals and HF sizzle are notoriously hard to compress seamlessly when the levels are a bit high.

For what it's worth - I had to go to the studio and listen in a controlled environment to hear the differences. On the laptop speakers and decent in-ears, it's too close to really make a call. I'd also say I wouldn't tag it with the "bad" label. It sounded a bit bright on the TV sound system, but not distorted.
 
Alright, we agree I should turn down the volume on the master, and I surely will. But again, you said:
For what it's worth - I had to go to the studio and listen in a controlled environment to hear the differences. On the laptop speakers and decent in-ears, it's too close to really make a call. I'd also say I wouldn't tag it with the "bad" label. It sounded a bit bright on the TV sound system, but not distorted.
To hear the differences between the original and the streaming platforms? And you could barely hear it? Okay, I recorded the playback sound from my computer and played the first few seconds of the song - first the original wave file, then Tidal. The audio file is attached below - can you take a listen and tell me what you hear? Because that's what I hear from my computer...
 

Attachments

First was loud and I had to pull my IEMs out. I turned the level down and tried again.
I normalised the two and compared them side by side. I see the hats at a much reduced level, and a few other compression artefacts, but apart from the hat being low in level, one does NOT jump out at me as being worse in quality - just different. On balance, I think I prefer the squashed version. The other is, for my taste, a little too wide in dynamics, and because of the unusual arrangement and syncopated rhythms you get the piano, bass, drums and brass as four totally separate sounds; in the squashed version they blend a bit more. I really don't hear the distortion you do, I'm afraid - just mild changes due to the different compression used. I think, for me, the first version is simply too loud to suit the music. The muted, laid-back trumpet just doesn't work really loud.
 

Attachments

  • Screenshot 2021-08-22 at 20.24.15.webp
Thank you for the analysis, Rob!
but apart from the hat being low in level, one does NOT jump out at me as being worse in quality - just different.
Different in the sense that it's missing crispness and high frequencies - do you mean that? Maybe I'm judging it too harshly, and I by no means have great ears yet since I'm just starting out, but the difference I hear is just unacceptable.

What I will do now is reduce the volume of the master by maybe 1 dB or 1.5 dB and just resend it to the stores. Do we agree this could help reduce compression artifacts in the transcoding process?
 
Something else you might be encountering if the file is being re-encoded as 128k MP3: there will be little if any content above 15 kHz. That's not an issue for my old ears anymore, since I can't hear much above about 10-12 kHz. However, that is where a lot of the shimmer of cymbals is found. If you still have good hearing, that could be contributing to the difference.

Like Rob, I heard a bit of difference in instrument balance between the two, but I can't comment on the high-frequency content.
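To hear for yourself what losing the top octave does, here's a crude numpy experiment (my own illustration, not anyone's mastering chain): a brickwall FFT low-pass at 16.5 kHz, roughly the cut being discussed in this thread. A real mastering EQ would use a gentle shelf rather than a brick wall, and an MP3 encoder's low-pass is more sophisticated still.

```python
import numpy as np

def brickwall_lowpass(x, sr, cutoff_hz):
    """Zero all FFT bins above cutoff_hz. A crude stand-in for the
    gentle shelf a mastering EQ would use, but enough to audition
    what removing the top octave sounds like."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / sr)
    X[freqs > cutoff_hz] = 0.0
    return np.fft.irfft(X, len(x))

sr = 44100
t = np.arange(sr) / sr  # one second of audio
# a 1 kHz tone plus some 18 kHz "sizzle"
x = np.sin(2 * np.pi * 1000 * t) + 0.3 * np.sin(2 * np.pi * 18000 * t)
y = brickwall_lowpass(x, sr, 16500)
```

Running a real mix through this and A/B-ing against the original is a quick way to judge whether anything musically important lives above the cutoff.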
 
1 dB or 1.5 dB? Keep in mind that in blind listening, few people can detect any change of less than 2 dB. I know others agree, but this is simply the wrong kind of music for chasing dBs. If this were my music, this track would be normalised to -3 to -4 dBFS - no more. In my head, over -3 dB is reserved for a track that hovers low and then has something to make the listener jump - maybe even a final huge percussion hit. It's progressive, jazzy, laid back and cool-sounding - it's totally NOT a chase-the-digits track - and if it were played on radio, for example, it would be quieter (and should be quieter) than the Metallica track and the ancient 70s Deep Purple tracks before and after.

I'm really struggling to quantify "missing crispness and high frequencies". My hearing runs out at 13.5 kHz. The track is already overburdened with HF - looking at it spectrally, there is loads up top, and those hats and metalwork may be just the kind of HF I don't like much and tend to remove. I've noticed I do it an awful lot live. I hear very hard drums, and there's so much energy up there that it sounds and feels "hard" and tiring. I look at the displays and there's so much above 14 kHz - loads right up to the limits the display shows - and in real life that naturally dies off with distance over quite a short range, but I have a PA with piles of amps firing HF at audiences. I'll happily and gently roll it off starting at 13 kHz, so that by 20 kHz it's at least 10 dB down. It still sounds crisp and has sizzle, but you can listen for a couple of hours without your ears killing you.

Your music is quite unique - does it really need hats that jump out at you? You've picked a rather prominent-sounding electric piano on purpose, for people to listen to, and it's very exposed, so louder, sizzly hats would surely draw people away? I always try to avoid making people hear the padding in tracks. The bass also has masses of HF - so I guess you really like top end, but when every sound source is full-range it's tiring, so what about folk like me who could be your listeners? Maybe I like the deadened hats through tonal preference? I don't know, but I prefer the duller ones, and with me behind the faders, I know I'd not be seeking to make them louder and more prominent.

Over the past few months I have been experimenting with my music - which goes to stores on 48K 24 bit. I've listened on Spotify, iTunes, Youtube and mp3 in 320 and 256K versions. I'm perfectly happy with all of them. 128K mp3 is out - just not nice. I tried making a playlist of the varieties and they're all slightly different but I'm unable to say better or worse. My gut feeling is Spotify, followed by Youtube are the most pleasant to listen to - perhaps more so than the track they compressed them from. I also wonder if now I'm mixing and even choosing sounds based on what the streaming services will do to them.

What I do know is that every single device I have introduces its own fingerprint, sound-wise. These topics always start on the MacBook. If I can't hear it, I get the IEMs. If I still can't hear it, I'll try the video studio's audio system if I'm there, and failing that, my audio studio when I'm next there (a mile apart!). If I can only hear somebody's mega-problem in the studio, I'm often left wondering whether they're even thinking about their listeners' experience. I'm old enough to remember people starting to use NS10s in the studio even when they had great full-range monitors. From time to time I have to use products that are not right - the client runs out of time or a deadline shifts, and I can live with using v4 of the mix when v10 would have been better. In the rush jobs, my effort goes into the obvious things, not the glazing on top. Maybe, just maybe - you are hearing things because you are too involved with the music - something listeners might not be?
 
1 dB or 1.5 dB? Keep in mind that in blind listening, few people can detect any change of less than 2 dB. I know others agree, but this is simply the wrong kind of music for chasing dBs. If this were my music, this track would be normalised to -3 to -4 dBFS - no more.
By this, you mean the top peak should be around -3 to -4 dB? And if the LUFS for most tracks should be around -14, should mine be around -16 maybe? And what about streaming services turning my volume up if it's too quiet for them? Will that make it sound "normal" (uncompressed), but again at an appropriate loudness?

I understand and appreciate your opinions and suggestions about the instruments' roles in the song. This is simply my first piece, and I can see that I tried to push everything to the limits - I just want to make the best of what I've got. Regarding hearing, I'm 23 and still hear plenty of the higher range (I don't know for how long that will last). That might explain why we don't hear the difference the same way. Anyway, I will cut everything from about 16.5-17 kHz and up, since it doesn't change the sound much, but it could make a mess when encoded and compressed down to streaming-quality audio.
Maybe, just maybe - you are hearing things because you are too involved with the music - something listeners might not be?
I probably am too involved, and if listeners aren't, that's okay. I get your point, but while I'm learning I don't want to ignore things on the assumption that nobody will notice. Maybe they won't, but I don't want to let go of my care for the details, whether it's worth it or not. That will change with experience, I know :)
 
Please forgive me, but you're moving too fast. Knowledge goes in, and you create expectations that your ears cannot yet assimilate.

"You mean the top peak should be around -3 to -4 dB? And if the LUFS for most tracks should be around -14, should mine be around -16 maybe?"
I think the top peak should be appropriate to the content. I've been recording other people's music and my own all my life, and bar metal and that kind of stuff, I'll do anything. In my radio days we had one type of metering - PPMs - and the BBC had a sort of rule that you peaked at a "best" setting depending on the music. So BBC Radio 3 plays music where, in the concert hall, the orchestra would be X number of people, and the score might say fff - as in damn loud - and everybody understood that. Almost fingers-in-your-ears loud in the space. On radio, and later TV, the electronic links from mic through to the radio or TV at home could manage maybe fff down to p; pp and ppp were lost in the chain noise. This meant that even back then, compression was a "thing" - often manual compression, as in an engineer with a hand on a knob, gently going up and down. The BBC were doing this when I first started, and the record level was chosen for best signal-to-noise, with everything aligned. Commercial radio then started doing classical music too, but turned the volume up with clever gizmos to make it seem louder. Purists objected; ordinary people didn't.

Yes - for my music, -3 dB as an absolute maximum may be appropriate for some stuff, but other music might well be even lower in level if that works. I have no idea whatsoever what the LUFS specs are. They're even in a little window on the screen if I wanted to know. I don't. My volume in the studio is pretty much fixed, so when I mix, it sounds right, loud vs quiet. Once it's done, I usually run it through a normaliser set to -3.2 dB - a figure I've settled on over the years. My 20-year-old stuff is about the same level as the new stuff. Very often, watching the waveform during normalising, the change is so small I can't see it and I have to double-check it worked! If it sounds good with that much headroom, the streaming services seem not to mangle it at all. The odd track I've done that needed to be hotter does get mangled.

I think you just need to experiment more, listen, and trust your ears, and not get into chasing the numbers. If you were a metal merchant I'd not have said anything, but your music does not need this faffing around.
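The normalise-to-a-peak-target habit described above is just peak normalisation, which is easy to sketch in numpy (my own illustration; the -3.2 dBFS figure is one poster's personal preference, not a standard):

```python
import numpy as np

def normalize_peak(x, target_dbfs=-3.2):
    """Scale a track so its highest sample peak lands at target_dbfs.
    (-3.2 dBFS is just the figure mentioned in this thread, not a spec.)"""
    peak = np.max(np.abs(x))
    if peak == 0.0:
        return x.copy()
    return x * (10 ** (target_dbfs / 20) / peak)

# Demo on stand-in "audio" (random noise in place of a real mix):
rng = np.random.default_rng(0)
track = rng.standard_normal(44100) * 0.5
out = normalize_peak(track)
peak_db = 20 * np.log10(np.max(np.abs(out)))  # -3.2
```

Note this adjusts only the sample peak; it says nothing about perceived loudness (LUFS), which depends on the whole waveform, not the single loudest sample.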
 
Thank you very much, all of you! I'll make the tweak and see how it works.
 
Alright, we agree I should turn down the volume on the master, and I surely will. But again, you said:

To hear the differences between the original and the streaming platforms? And you could barely hear it? Okay, I recorded the playback sound from my computer and played the first few seconds of the song - first the original wave file, then Tidal. The audio file is attached below - can you take a listen and tell me what you hear? Because that's what I hear from my computer...
I know this is old, but whattttttt... how can others not hear the difference? I'm listening to this on a bass-boosted Sony, and in the second one the volume isn't too low, but the crispness and level of the hi-hats are so reduced and distorted. I'm experiencing something similar in my recording, maybe not as severe. Did you find a fix?
 
Alright, we agree I should turn down the volume on the master, and I surely will. But again, you said:

To hear the differences between the original and the streaming platforms? And you could barely hear it? Okay, I recorded the playback sound from my computer and played the first few seconds of the song - first the original wave file, then Tidal. The audio file is attached below - can you take a listen and tell me what you hear? Because that's what I hear from my computer...
I'm late to the thread, but the "distortion" you're hearing is washiness from the filtering used in lossy encoding on Spotify, YouTube, etc. Tidal is lossless and shouldn't have this problem. As you noticed, the Apple Music one is decent, so that's a big clue.

This has nothing to do with the loudness of your masters, so I'm going to disagree with the crowd here and recommend not turning down your masters. They are fine. The sample you gave is pretty dynamic, so it would have no volume issues on any platform. You were right to notice that the minimal amount Spotify and others would turn it down rules out loudness as the problem. Trust your gut :)

I definitely heard the washiness of the hi-hats... and they sounded crisp in the original. When full-quality audio is converted to a lossy format for streaming, high frequencies get attenuated and filtered, which can create a washy sound in the highs from roughly 3 kHz to 20 kHz. Since hi-hat fundamental frequencies lie around 3 kHz, there's your answer as to what you're hearing!

As you noticed, the quality of the encoding filters, along with the bitrate, can make a huge difference (which answers why Apple sounded better: possibly a higher bitrate, and I'd guess higher-quality filters). My motto for streaming music is "set everything to maximum quality".

Overall, the quality should be good enough even on some of the lower-bandwidth settings that you won't hear any washiness unless you listen critically through a mastering setup. Anything below 256 kbps, though, is not going to sound anywhere close to amazing no matter what - but if the filters are excellent, you can get some surprising results at lower bitrates.
 
One caveat... the conversion process can turn UP the volume (due to how the decoded waveform is reconstructed between samples). Apple's round-trip AAC tool can help with deciphering what happens when your song is converted for streaming. A free VST like Lese Codec can do this as well. I really like ADPTR Streamliner.

The other thing I'd say is: don't rely too heavily on meter readings - they're a guide, but they don't define quality. I can't count the number of masters I've listened to that have intersample peaks of 1-2 dB above zero when converted to MP3 or a streaming format. Do I hear it? Yes. Does it make such a difference that I want to spend all my time on that minutia? NO. If it sounds good, it sounds good. Most intersample peaks happen on snare, kick, and vocal esses. The only ones to really be concerned about are the vocal esses (it means they're too loud).

A guideline I'd probably follow for mastering your style of music is to keep it at around -12 LUFS as the long-term average. That means it could spike up to -8 LUFS in the loudest parts, and I wouldn't care as long as it sounded great.

Really, the thing you want to do when thinking about streaming is make your music loud enough that if, say, a restaurant is playing your music with normalization turned off, there won't be a noticeable difference when it comes time for your song in a playlist. With normalization on, as long as it sounds good it's going to be unchanged and will sit well in the playlist, unless there are major tonal imbalances or you've mastered it so loud that the full spectrum of the mix is clipping above 0.
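For readers wanting to sanity-check the -12 LUFS guideline on their own tracks: real LUFS measurement (ITU-R BS.1770) uses K-weighting and gating, so use a proper meter or a library such as pyloudnorm for numbers you can actually compare. As a very rough un-weighted proxy, though, windowed RMS in dBFS gets you in the neighbourhood - this sketch is my own simplification, not a BS.1770 implementation:

```python
import numpy as np

def rough_loudness_db(x, sr, win_s=0.4):
    """Very rough loudness proxy: average RMS over 400 ms windows, in dBFS.
    Real LUFS adds K-weighting (frequency weighting) and gating, which this
    deliberately omits - treat the result as a ballpark only."""
    win = int(sr * win_s)
    nblocks = len(x) // win
    blocks = x[: nblocks * win].reshape(nblocks, win)
    rms = np.sqrt(np.mean(blocks ** 2, axis=1))
    return 20 * np.log10(np.mean(rms) + 1e-12)

sr = 44100
sine = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
level = rough_loudness_db(sine, sr)  # about -3.0 for a full-scale sine
```

The 400 ms window matches the block size BS.1770 uses for momentary loudness, which is why it's the default here.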
 