What Does a Mastering Engineer Require?

MiXit-G

OK, let's say I've got a 24-bit stereo mix of my songs.

What will help the mastering process?

What will hinder the mastering Process?

Just not sure how to approach a mix if I'm sending it to a mastering house. Do I reduce the amount of compression I'd normally add?
Can I leave some of the EQ in?
 
Keep the compression to a minimum on the output bus.

MiXit-G, if you know beforehand that you're going to get your songs mastered by a pro, here are some of the things I'd suggest.

Do what you would normally do during tracking, and in the mixing stage add your compression to individual tracks as needed and get that mix as good as you can. When you're bouncing everything down to the final song, things will depend on what medium you're mixing down to.

If you're mixing down to a file and then burning your files to CD, I would use little if any compression on that last bounce. I'd suggest keeping your file as dynamic as possible, leaving at least a 6-14 dB average-to-peak ratio. By doing this you'll be giving the mastering engineer plenty to work with.

If you're mixing down to another medium, just remember to keep the compression on the output to very little or none, giving the mastering engineer plenty of headroom to get your files sounding great.

Keep in mind that the final volume of each file/song shouldn't be anywhere near the volume of commercial CDs. If it is, the RMS level of your songs may be too high, which reduces the options the mastering engineer has when mastering your music.
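If you want a rough way to check those numbers before sending files out, a few lines of Python will report the peak, RMS, and average-to-peak figures being described here. This is only an illustrative sketch: it assumes the numpy and soundfile packages are installed, and the file name and the -14 dBFS warning threshold are placeholder examples, not anyone's official spec.

[code]
import numpy as np
import soundfile as sf

# Placeholder file name -- point this at your own final bounce.
data, rate = sf.read("final_mix.wav")        # float samples in -1.0 .. 1.0
mono = data.mean(axis=1) if data.ndim > 1 else data

peak_db = 20 * np.log10(np.max(np.abs(mono)) + 1e-12)
rms_db = 20 * np.log10(np.sqrt(np.mean(mono ** 2)) + 1e-12)

print(f"peak {peak_db:.1f} dBFS, RMS {rms_db:.1f} dBFS, "
      f"average-to-peak {peak_db - rms_db:.1f} dB")
if rms_db > -14.0:                           # example threshold only
    print("Bounce is already hot; the ME has less room to work with.")
[/code]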

Hope this helps.
sonicpaint
 
1. Don't fade in or out
2. Add NO compression or EQ to the mixes
3. Save multiple versions of each mix to choose from; at least include a vocal-up mix in addition to the regular mix
4. Bring the ME data files, not audio-CD versions of your mixes
5. Label everything meticulously
6. Make a list of the desired edits and sequence of tracks
7. Have a good long talk with the guy and let him know what your favorite albums are and what you like or dislike about music, and do not go away unsatisfied
 
jake-owa said:
1. Don't fade in or out

Considering this was the first thing you mentioned, you must feel it is important.

Why not? I like to do my own fadeouts. What disadvantage does this present to the mastering engineer?
 
Very good suggestions from jake-owa.

Some of the reasons that you want to avoid fades are:

1. You can't add later what is removed earlier.
2. When the volume level is raised above the original, so is the noise floor. Saving the fades for later allows the ME to reduce noise that would otherwise be brought up if the fades were done pre-mastering.
3. If you want to crossfade songs, there's more control if it's done later.
4. An ME probably has better tools than you do to control the shape of the fade.
5. There are some issues with dithering that are probably too detailed to get into here.

Basically, just make your fade a bit longer than you want it in the final product, so the ME gets the idea if there's a fade you're totally happy with. That way the above issues aren't a problem.
 
OK - Most of that makes sense, mostly the noise floor thing.

As far as controlling fades or crossfades, most of us who edit on DAWs have very precise control over these things... graphic control with various tools/curves etc. for fading and crossfading.

I am very picky about fades... don't know why, I just have always paid very close attention to them. They are an important part of the song if it has one, so I would rather not leave that up to anyone else.

Any dithering would come from the ME's gear going back onto the 16-bit final, right? That dithering happens after all editorial work, so why would it be a problem? If the ME is using a combination of analog and digital gear, I guess it would depend on at which point the final A/D conversion is made, post- or pre-fading... am I making sense?

I guess I am saying it doesn't make sense to me.
 
lotuscent said:
OK - Most of that makes sense, mostly the noise floor thing.


Any dithering would come from the ME's gear going back onto the 16-bit final, right? That dithering happens after all editorial work, so why would it be a problem? If the ME is using a combination of analog and digital gear, I guess it would depend on at which point the final A/D conversion is made, post- or pre-fading... am I making sense?

I guess I am saying it doesn't make sense to me.

The dithering issue mainly has to do with material that is submitted as 16 bit, either on an audio or data CD. Once something has been committed to 16 bit, data has been truncated and lost, particularly low-level information where the last bit is toggling back and forth, such as on a fade. Leaving the tail helps prevent this loss by allowing the ME to produce the fade after the 16-bit file has been converted to 24 bit, re-processed, and then brought back down to 16 bit with proper dithering applied.

Something similar happens at 24 bit, but it's more subtle. You've lowered the tail down to the last few bits. During 24-bit processing a certain amount of quantization error will occur on those bits, error that could have been avoided by leaving more of the original signal and saving the volume reduction for later.

This is one of the reasons why plugins like Waves allow you to dither each plugin individually. The internal processing is done at a higher word length and has to be brought down to 24 bit in order to chain to the next processor (for a 24-bit bus). Each time this is done it introduces a cumulative quantization error that makes the signal deteriorate.
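For what it's worth, here's a minimal numpy sketch of the fade-tail point: once a fade drops to a couple of quantization steps, straight requantizing turns much of it into signal-correlated distortion and digital silence, while TPDF dither leaves the quiet tone buried in benign noise instead. The tone, fade length, and dither amount are made-up illustration values, not anyone's actual mastering chain.

[code]
import numpy as np

rate = 44100
lsb = 1.0 / (2 ** 15)                       # one 16-bit quantization step
t = np.arange(rate) / rate
fade = np.linspace(2 * lsb, 0.0, rate)      # fade tail dropping from ~2 LSB to silence
tone = fade * np.sin(2 * np.pi * 440 * t)

plain = np.round(tone / lsb) * lsb                   # straight requantize to 16 bit
tpdf = np.random.rand(rate) - np.random.rand(rate)   # triangular dither, +/- 1 LSB
dithered = np.round(tone / lsb + tpdf) * lsb

tail = slice(rate // 2, None)               # the quietest half of the fade

# Without dither the error tracks the signal (distortion) and many samples
# collapse to exact zero; with dither the error behaves like uncorrelated noise.
print("error/signal correlation, plain   :", np.corrcoef(plain[tail] - tone[tail], tone[tail])[0, 1])
print("error/signal correlation, dithered:", np.corrcoef(dithered[tail] - tone[tail], tone[tail])[0, 1])
print("samples at exact digital zero, plain   :", np.mean(plain[tail] == 0.0))
print("samples at exact digital zero, dithered:", np.mean(dithered[tail] == 0.0))
[/code]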

Hope this makes a bit more sense?
 
Depending on how many tracks you have, DRT seems to be really cheap ($100 for one track, and $300 for 30 minutes of track time).

I've heard their final master product on a mix done with a VS1680 and I can tell you that they are really good.

You can check them out here:
http://www.drtmastering.com/idxfaq.htm
 
Re: Keep the compression to a minimum on the output bus.


Keep in mind that the final volume of each file/song shouldn't be anywhere near the volume of commercial CDs. If it is, the RMS level of your songs may be too high, which reduces the options the mastering engineer has when mastering your music.
sonicpaint


An ME told me recently that many of the mixdowns he receives are loud. I asked how loud, and he said between -14 dB RMS and -13 dB RMS. After mastering, the tracks ended up between -12 dB RMS and -10 dB RMS. In other words, the final studio mixes were already quite loud, and he just had to make them a bit louder.

I know a lot of this has to do with the song style etc., but is submitting material nowhere near the volume of a commercial CD still the norm? I have seen some excellent loud mixes come straight from the tracking studio.
 
I don't think that's loud at all.

Quote: An ME told me recently that many of the mixdowns he receives are loud. I asked how loud, and he said between -14 dB RMS and -13 dB RMS

I don't think that -14 dB or -13 dB is loud at all. I know there are mastering engineers who do try to keep their work at a respectable level without giving up too much quality. These days, with regular listeners judging the quality of a file by the loudness of the material, I don't think he's doing much material at those average levels unless he's staying away from the mainstream.

I do know (and I'm sure you'll hear the same if you ask around) that most modern music today is being mastered at, at most, a 6 dB average-to-peak ratio if the client allows the engineer even that much. I would even say that a lot of, if not most, music today is pushed right up to a 3 dB average-to-peak ratio.

I don't agree with killing the music by crushing the average-to-peak ratio (the dynamics). But with music being so loud these days, that's what the client wants, and in most cases mastering engineers will do what the client asks, so the loudness wars continue, unfortunately.

later
sonicpaint
 
Re: I don't think that's loud at all.

sonicpaint said:
Quote: An ME told me recently that many of the mixdowns he receives are loud. I asked how loud, and he said between -14 dB RMS and -13 dB RMS

I don't think that -14 dB or -13 dB is loud at all. I know there are mastering engineers who do try to keep their work at a respectable level without giving up too much quality. These days, with regular listeners judging the quality of a file by the loudness of the material, ...
sonicpaint

-14 dBFS to -10 dBFS (RMS) is what you should be shooting for in a finished product, IMHO, and that is loud for a mix. Once you've reached that point in a mix, there's not much an ME can do to help shape the overall dynamics other than leave it alone or push it into the -6 dB or hotter range.

If it's hardcore rock, a lot of consumers have unfortunately come to expect this kind of sound, and it's all too typical. Squeezing things into this minimal dB range makes the sound smeared and makes it lose punch and clarity in the stereo field. It also reaches a point of diminishing returns, since broadcast compressors at radio stations and listeners themselves will lower the overall volume, so all you're left with is a non-dynamic piece of mush.
 
What do you mean by "average to peak" ratio? How is it calculated? RMS vs. what? When I look at the overall loudness of a song I just look at the RMS.

Aren't most mastered pop songs around -12 dB RMS to -10 dB RMS? Maybe rock is a tiny bit louder?

So if final mixes submitted to an ME average -14 to -13, I would call that "loud," but of course not loud enough to compete with everything else.
 
greggybud said:
What do you mean by "average to peak" ratio? How is it calculated? RMS vs. what? When I look at the overall loudness of a song I just look at the RMS.

Aren't most mastered pop songs around -12 dB RMS to -10 dB RMS? Maybe rock is a tiny bit louder?

So if final mixes submitted to an ME average -14 to -13, I would call that "loud," but of course not loud enough to compete with everything else.

Everything of course varies with the type of material, rock being louder than jazz, etc.

I don't know what you are using for metering, so I can't give you specifics about settings. Are you using an analog or digital meter? What level is it calibrated at?

Most analog meters that I've seen are calibrated so that 0 on the VU meter corresponds to -18 dBFS; as a general rule of thumb, that's a good average level to shoot for.

Again, the main issue is not to use any compression/limiting on the overall mix at all. If you reach -14 RMS without compressing/limiting the mix, and you are happy with the sound, go with it.
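On the meter calibration point, the arithmetic is just addition once you know the alignment. A tiny sketch, assuming the 0 VU = -18 dBFS calibration mentioned above (other rooms align to -20 or -14 dBFS, so treat the default as an example, not a standard):

[code]
def vu_to_dbfs(vu_reading: float, calibration_dbfs: float = -18.0) -> float:
    """Approximate average digital level for a given VU meter reading."""
    return vu_reading + calibration_dbfs

print(vu_to_dbfs(0.0))   # 0 VU  -> roughly -18 dBFS average
print(vu_to_dbfs(4.0))   # +4 VU -> roughly -14 dBFS average
[/code]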
 
I use WaveLab. Under the analysis functions tab it gives many different reports, including RMS for both the left and right channels of any given wave.

I can take most any pop song, analyze the wave, and the RMS will be between -12 and -10. At -14 RMS I think it would be noticeably quieter, and I doubt that would please the record label.

My point was, and this is what I was told by an ME, that pop songs are being mixed down at a hotter level than in the past, i.e. -14 to -12 RMS, and then delivered to the ME and mastered to between -12 and -10 RMS. The only way I can figure final mixdowns can be delivered that hot is with additional compression and limiting, especially on tracks with lots of transients such as drums, percussion, etc.

I am confused by what is meant by "average to peak ratio".
 
This may be a bit off the subject, but it is relevant, so I might as well ask since you guys are here...

I have so far been making my mixes as loud as possible before putting any compression across the stereo bus (I use a compressor when I just want mixes to listen to in my truck or something).
I mix "in the box." I realise that any digital gain applied to a signal will produce artifacts. So I try and get the signal hot as possible when tracking, and then I sort of assumed that "0" on my faders was unity... and that I should try to keep them close to there. But now that I am thinking about it, I may be thinking on the wrong track...

A few channels at 0 on the fader will soon clip the stereo bus. To keep this from happening, the faders come down, obviously.

I guess I was thinking that a mix that's as loud as possible represents the least amount of reduction (and the most accurate signal), and therefore produces fewer artifacts both now and down the road when gain, compression, or limiting may be applied by an ME.

Does this make sense and is this wrong? Starting to seem iffy...
 
Average to peak ratio?

greggybud said:
What do you mean by "average to peak" ratio? How is it calculated? RMS vs. what? When I look at the overall loudness of a song I just look at the RMS.

Aren't most mastered pop songs around -12 dB RMS to -10 dB RMS? Maybe rock is a tiny bit louder?

So if final mixes submitted to an ME average -14 to -13, I would call that "loud," but of course not loud enough to compete with everything else.

First I'd like to say that, again, I assumed we were talking about digital meters here and not VUs, so that would explain the -14 dB and so on. Sorry people, I tend to assume that everyone is using digital these days, though I know I shouldn't.

OK, the average-to-peak ratio is the difference between the average level and the peak level. The second question has already been answered: RMS vs. peak is the answer to the "vs." question. How do you calculate it? With those two answers you should be able to figure it out.

good luck
sonicpaint
 
The short answer?

lotuscent, the short answer, I guess, would be to avoid any type of clipping no matter how small, either pre or post master fader. It's OK to try to keep your signal hot, but if you're avoiding clipping, the loudness of your material will be forced up or down by the dynamics of the song and how drastic they are, or in short by the average-to-peak ratio.

Hope this helps
sonicpaint
 
lotuscent -

sonicpaint is correct: the peak-to-average ratio is the difference between the peak and RMS levels. Since most people hit 0 dBFS as the peak, the RMS level often ends up being the same number as the peak-to-average ratio.

Another point about bus processing (both EQ and compression/limiting): when you use processing on the main stereo bus that is going to be applied again in mastering, you are putting the audio through the same calculations more than once. This introduces quantization error distortion (see my previous post).

This is one of the reasons why normalization should not be done in the pre-mastering stage (if it should be done at all). You are going through a series of calculations to bring the level up to 0 dBFS, when the overall volume is going to go through another series of volume calculations as it's compressed, EQed, etc., and is going to be brought up again. This makes digital audio start to sound grainy and thin. Likewise, bringing the level up via a compressor/limiter to a point that isn't going to be used in the final product is a waste.
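To make the double-calculation point concrete, here's a minimal numpy sketch, not an exact model of any real chain: every pass that re-quantizes the audio to a fixed word length adds a little error, so normalizing first and reshaping the level again later stacks up more error than doing the gain change once. The gains, word length, and stand-in signal are illustrative assumptions.

[code]
import numpy as np

rng = np.random.default_rng(1)
audio = rng.normal(0.0, 0.05, 44100)        # quiet stand-in for a mix
lsb = 1.0 / (2 ** 23)                       # one 24-bit quantization step

def quantize(x):
    """Round to the nearest 24-bit step, as a bus or file format would."""
    return np.round(x / lsb) * lsb

# One pass: raise the level once and quantize once.
one_pass = quantize(audio * 4.0)

# Two passes: "normalize" first, store/re-quantize, then raise the level again.
normalized = quantize(audio * 2.0)
two_pass = quantize(normalized * 2.0)

reference = audio * 4.0                      # ideal, unquantized result
def rms_error(x):
    return np.sqrt(np.mean((x - reference) ** 2))

print("error after one quantize pass :", rms_error(one_pass))
print("error after two quantize passes:", rms_error(two_pass))
[/code]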

Use bus processing only to compare your mixes to a reference (like your fav CD) and then turn all of it off for the final mix to be sent to the ME.
 
masteringhouse said:
Use bus processing only to compare your mixes to a reference (like your fav CD) and then turn all of it off for the final mix to be sent to the ME.

yeah, that's what I do.

My question, which I guess I did not articulate well enough, is this: what is the optimum level for introducing the least amount of digital gain/reduction artifacts? At what level is a track at unity with the actual recorded signal? Or does that even matter? Obviously, to keep the tracks as pure as possible, one would theoretically stay as close to this as possible, hence the highest pre-clip level possible (for the full mix)... right?

Does this make sense?
 