WAV files sound good in some programs, crappy in others

Mickmeister

New member
I recorded some vocal tracks in Sound Forge, added some compression and Sony noise reduction, then saved them as .wav files. The results sounded pretty good in Sound Forge, at least for a newb effort. I then brought them into After Effects to mix them (I know AE isn't supposed to be great for mixing, but it's just a matter of marrying a guitar track with a couple of vocal tracks - and - more importantly - I don't have any other audio software!).

The tracks didn't sound too great in AE - there was static-y clipping in some parts. I thought maybe that was just due to the fact that AE isn't great with audio, so I tried loading them into some other programs, and they sounded just as bad. Bring them into Sound Forge or Windows Media Player and they sound fine; bring them into any other program (I tried AE, Winamp, Soundbooth, Audacity) and they have that crappy clipping sound.

Any ideas what's going on here? And at the risk of going off-topic, any good freeware apps other than Audacity that I might be able to use for this simple mix job without running into this problem?
 
I would download Reaper and do everything in there.

Anyway, are you normalizing the wav file in Sound Forge (making it as loud as possible)? That could be your problem.

It could also be a formatting problem. Something like you are trying to import a 24 bit wav file into a 16 bit program, or a sample rate mismatch.

Those are the only things I can think of, because wav files are pretty universal. About the only thing that can really go wrong is a mismatched sample rate, bit depth, etc.
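If you want to rule the format question out quickly, a WAV header can be read with nothing but Python's standard-library `wave` module. A minimal sketch (`describe_wav` is just an illustrative name, and the in-memory file exists only to keep the example self-contained; normally you'd pass a filename):

```python
import io
import struct
import wave

def describe_wav(path_or_file):
    """Report the header fields that commonly cause playback mismatches."""
    with wave.open(path_or_file, "rb") as w:
        return {
            "channels": w.getnchannels(),
            "bit_depth": w.getsampwidth() * 8,
            "sample_rate": w.getframerate(),
            "frames": w.getnframes(),
        }

# Build a tiny 16-bit / 44.1 kHz mono file in memory so the sketch is
# self-contained.
buf = io.BytesIO()
with wave.open(buf, "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)          # 2 bytes per sample = 16-bit
    w.setframerate(44100)
    w.writeframes(struct.pack("<100h", *([0] * 100)))

buf.seek(0)
print(describe_wav(buf))
```

One side note: as far as I know, `wave` only understands integer PCM, so a 32-bit float WAV will raise `wave.Error` - which is itself a useful tell about the file's format.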
 
Anyway, are you normalizing the wav file in Sound Forge (making it as loud as possible)? That could be your problem.

Yes...the tracks have been normalized in Sound Forge. I didn't make a backup of them before normalizing (I know, don't judge!). But it still seems odd that they sound good in some situations and not in others.

The other weird thing I just noticed is that I don't hear the distortion when I'm using headphones; only through the speakers. And even then, only in some programs, not others. If I play the files in Sound Forge or Windows Media Player, I don't hear the distortion whether I'm using speakers or headphones; if I play them in any other program I've tried, they're distorted through the speakers but not through the headphones.

It could also be a formatting problem. Something like you are trying to import a 24 bit wav file into a 16 bit program, or a sample rate mismatch.

The files are 16 bit. Not sure about the programs...but I would think a program like Winamp should be able to play a 16 bit wav file with no problem, no? As for the sample rate, I did notice that when I have the file in Audacity, it's telling me that the imported wav file is 44100 Hz, while the Audacity project rate is 48000. I tried changing the project rate to 44100 so that the wav and project jibe with each other, but that makes no difference in the sound quality - it's distorted when played through the speakers, and okay through the headphones.

If it consistently sounded bad thru the speakers I would just conclude that they're crap, but the files sound fine thru the speakers when played in Sound Forge or Windows Media Player.

So it's a very weird problem...nothing seems to add up.
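For what it's worth, a pure sample-rate mismatch usually doesn't sound like crackling - it shifts pitch and duration, because the same samples get pushed through the wrong clock. A quick numeric sketch (numpy only; the tone and rates are illustrative):

```python
import numpy as np

file_rate = 44100                       # rate stamped in the WAV header
t = np.arange(file_rate) / file_rate    # one second of time
tone = np.sin(2 * np.pi * 440 * t)      # a 440 Hz reference tone

# If a player (or driver) runs these samples at 48 kHz instead, the same
# 44100 samples take less time and every frequency shifts up.
playback_rate = 48000
played_duration = len(tone) / playback_rate        # shorter than 1 s
perceived_pitch = 440 * playback_rate / file_rate  # sharper than 440 Hz

print(f"{played_duration:.3f} s, {perceived_pitch:.1f} Hz")
```

So a 44.1k file forced through a 48k clock would play roughly 9% fast and sharp - quite different from intermittent static, which points more toward a driver or buffer issue.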
 
A bit left field, and shoot me down in flames if it makes no sense, but is all the software set to play back through the same audio drivers? I had a problem recently where some of my plug-ins were working in one program but not in another; the software was set to the wrong driver setting, i.e. MME instead of WDM or ASIO (trying to remember exactly what I changed it to - not at that computer at the moment).

Check out the different software to see if they are set up the same.

Alan.
 

Thanks...I'm going to be away from my computer for a few days but I'll mess with it some more on Monday.
 
Turn down the volume of the files in the other programs and see if you still get distortion.

Either way, stop normalizing the files. There is absolutely no point to it, and it generally does more harm than good.
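For clarity on what normalization actually does: it's a single constant gain applied to the whole file - it changes level, not dynamics. A minimal peak-normalization sketch in numpy (the function name and target are illustrative, assuming float samples in the -1..1 range):

```python
import numpy as np

def normalize_peak(samples, target_dbfs=-0.3):
    """Apply one constant gain so the loudest sample lands at target_dbfs."""
    peak = np.max(np.abs(samples))
    if peak == 0:
        return samples                   # silence: nothing to scale
    target = 10 ** (target_dbfs / 20)    # dBFS -> linear amplitude
    return samples * (target / peak)

quiet = np.array([0.10, -0.25, 0.20])    # a quiet take
loud = normalize_peak(quiet)

# The peak is now ~0.966 (-0.3 dBFS); the ratios between samples -
# i.e. the dynamics - are exactly what they were before.
print(np.max(np.abs(loud)))
```

Because it's just gain, normalization can't by itself create distortion; trouble only starts when the raised level collides with something downstream.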
 
r8brain is great for converting files from 16-bit to 24-bit & 44.1k to 48k & vice versa - free too.
Yep, normalizing has limited use.
I don't know what your problem is, but most DAWs do everything in the one package to limit the sort of problems you're encountering.
Reaper is great, cheap & morally sound.
Audacity has its advocates - I use it when I'm teaching 11-year-olds to record (it's visual, fast, easy & the sound quality is fine), but that's about the extent of its value for me.
 
Just a thought, but can After Effects handle 32-bit floating point files? I know Sound Forge can, and it may be that, even though you recorded in 16-bit, you processed them in the floating point format. This could mean that, in Sound Forge, you could process files to a level that simply couldn't be handled in integer format.

Oh, and there's nothing wrong with normalising if you use it properly. It's just another way to adjust levels.
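That theory is worth sketching in numbers: 32-bit float can carry peaks beyond full scale unharmed, but a conversion to 16-bit integer hard-clips them - which would sound exactly like static-y distortion. A toy illustration (numpy; the sample values are made up):

```python
import numpy as np

# Float samples from a "hot" processing chain: two peaks exceed full scale.
# A 32-bit float file stores these intact, and a float-aware player can
# simply turn them down; a 16-bit integer file cannot represent them at all.
float_audio = np.array([0.5, 1.4, -1.7], dtype=np.float32)

# Converting to int16 hard-clips everything past full scale. (Skipping the
# clip would be worse: the values would wrap around and sound even uglier.)
int16_audio = (np.clip(float_audio, -1.0, 1.0) * 32767).astype(np.int16)

print(int16_audio)   # the 1.4 and -1.7 peaks are flattened at full scale
```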
 
Either way, stop normalizing the files. There is absolutely no point to it, and it generally does more harm than good.

At this point, I think that's the best advice. I had been singing with the mic offset from my mouth a couple of inches, to minimize breath pops and stuff, but that was giving me very quiet volume levels - that's why I normalized and compressed. I just did a test where I sang directly into the mic, which gave me a much louder level. The file sounded quite good (except for some breath pops) with no processing whatsoever - in any program that I played it in. So I think all the processing was screwing things up. Thanks to all who offered their advice; there's a lot to digest in this thread so I'm going to save it for when I have more time to work on this.
 
Either way, stop normalizing the files. There is absolutely no point to it, and it generally does more harm than good.

Could you elaborate on that? If you can alter the gain on any track, what is wrong with normalizing if it is just setting the gain for the highest peak? I haven't seen any harm so far in normalizing, unless you had a particularly low level to start with, in which case the noise you may not have heard will become prominent. The point of normalizing is to make the track as loud as possible without clipping. I am open to learning something.
 
I'm with you, GuitarLegend (at least up until the point about making a track as loud as possible). Normalising is just a tool like any other, and there's nothing inherently bad about it. Indeed, compared to pushing up a fader, normalising lets you say "put the loudest peak at -0.3 dBFS" and know it'll be there, rather than using a fader and risking not noticing that one peak 4 minutes in that might clip. I genuinely don't understand why this forum is so anti normalising.

However, it's important to understand how it works and whether it's an appropriate tool. I suspect some of the "anti normalising brigade" are, in fact, against the practice of normalising your original tracks to nearly zero before mixing. Using normalisation this way is silly since, once you start adding tracks together, you just have to pull the level down anyway. Or maybe the dislike of normalising comes from people using it when compression or hard limiting (to tame the peaks and control dynamic range) would be more appropriate than raising the whole track a fixed amount.

Either way, I object to the "never normalise" advice since, used properly, it can be a helpful tool--and, all too often, overly simplistic advice finds its way into the collective consciousness of HR.
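The point about normalising tracks to nearly zero before mixing being redundant is easy to show numerically (a toy numpy example with made-up sample values):

```python
import numpy as np

# Two tracks, each peak-normalized to nearly 0 dBFS before the mix.
track_a = np.array([0.90, -0.99, 0.70])
track_b = np.array([0.80, -0.90, 0.99])

mix = track_a + track_b
print(np.max(np.abs(mix)))   # ~1.89: well over full scale, so the faders
                             # come down and the normalization was wasted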
 

Sorry Bobbsy, I meant raising the level as high as possible without clipping. I know loudness is a different can of beans.
 
I must add that the only reason I would call 'normalize' useless is that it is useless to me in the realm of mixing music. It is, however, a great tool when copying cassette tapes to CD, or any situation where mixing or mastering is not in the scope of the project. Just getting the maximum possible level for an existing audio file is not taboo. The problem lies in a noob thinking this is the tool that makes a mix as loud as everything else they hear.

I do not make many mix tapes for ex girlfriends, so I find it pretty useless for myself. :)
 
Exactly, Jimmy. It's all about knowing when to use what.

If I have a mix where the dynamics are exactly where I want them, I'll just normalise it to nearly zero rather than messing up those carefully planned dynamics with compression or hard limiting.

However, if I have a mix that sounds good but is level-constrained by one or two big peaks, I'll go the other route. It's "horses for courses".

The big trouble isn't normalising--it's too many people who think learning the technicalities is beneath them as musicians. There's no such thing as a "make it all sound good" plug-in. Sometimes you have to learn what you're doing.
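The two routes above - straight normalisation versus taming a stray peak first - can be contrasted in a few lines. A crude zero-attack hard limiter as a sketch (real limiters use look-ahead and gain smoothing; the ceiling and sample values here are illustrative):

```python
import numpy as np

def hard_limit(samples, ceiling=0.9):
    """Clamp only samples beyond the ceiling; the rest pass untouched."""
    return np.clip(samples, -ceiling, ceiling)

# A mix whose overall level is held back by one stray 0.98 peak.
mix = np.array([0.30, 0.35, 0.98, 0.30, -0.32])

limited = hard_limit(mix)

# Normalizing the raw mix could only add ~2% gain before clipping;
# after taming the one peak, ~11% is available.
headroom_before = 1.0 / np.max(np.abs(mix))
headroom_after = 1.0 / np.max(np.abs(limited))
print(headroom_before, headroom_after)
```

Whether flattening that peak is audible is the "horses for courses" judgment call: on a percussive transient it may pass unnoticed, on a sustained note it won't.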
 
I find it hard to believe that audio sounds better in one program than in another. Digital audio cannot be done "better" or "worse" inside an audio engine. It's either done "right" or "wrong", and if it's wrong, it will be glaringly obvious.

My bet is that your ears were tricked during the recording/processing phase into thinking it sounded ok, when in reality it really didn't. It happens all the time, even to experienced pros. That is why audio is SO subjective. The ear is easily tricked based on your state of mind.

Cheers :)
 
Very possibly, Mo Facta.

On the other hand, it could be to do with some sort of sample rate/bit depth incompatibility as several have suggested. Apart from psychoacoustics, that's the other thing that makes sense to me.
 