
Seafroggys
Well-known member
Hey guys, so we're just about done mixing an album, and since there's no money involved, I'm going to do the "self-mastering" myself. Very basic stuff: minor two-buss processing, set things up in CD Architect, and done.
My planned workflow is to set up all the stereo mixes in the same timeline, get them to the same loudness, and run my faux mastering chain (probably BaxterEQ, TesslaPro, and FerricTDS) on the master so the sound is unified. I might do some individual track processing as well, but probably not, since I got the mixes sounding pretty damn good on their own.
Now, with all the tracks at the same volume, when I normalize them before dithering to 16-bit, they're basically at the mercy of whichever song has the loudest peak. I'm not fighting any loudness wars here, but at the same time I don't want to lose out on extra volume for everything just because one snare hit on track #5 peaks 3 dB louder.
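Just to illustrate what I mean, here's a quick Python sketch (not my actual workflow; it assumes numpy and soundfile are installed, and the filenames are just placeholders):

```python
# Rough illustration of the problem: one shared album gain is limited by
# the hottest peak on any track. Assumes numpy + soundfile; filenames are
# placeholders, and the -0.1 dBFS ceiling is just the usual example number.
import numpy as np
import soundfile as sf

tracks = ["track01.wav", "track02.wav", "track05.wav"]  # hypothetical files

peaks = []
for path in tracks:
    audio, sr = sf.read(path)            # float samples in [-1.0, 1.0]
    peaks.append(np.max(np.abs(audio)))  # absolute peak of the whole file

# One gain applied to the entire album: dictated by the loudest single peak.
album_gain_db = 20 * np.log10(0.989 / max(peaks))  # 0.989 is roughly -0.1 dBFS

# Per-track gains for comparison: shows how much headroom every other song
# gives up because of that one snare hit.
for path, peak in zip(tracks, peaks):
    track_gain_db = 20 * np.log10(0.989 / peak)
    print(f"{path}: could take {track_gain_db:+.1f} dB on its own, "
          f"album-wide only {album_gain_db:+.1f} dB")
```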
In the '80s and early '90s, on most CDs, wasn't there usually just one peak on one track that hit full scale anyway? Not every track necessarily hit -0.1. Is that an acceptable practice, or was it just a product of the technology of the time, since look-ahead limiters didn't exist? Would it be smarter for me to "master" the stereo tracks individually and get them to the same RMS using meters? I kinda want to do the first approach for a more organic sound. For those sticky snare hits, would you go back into the mix and volume-automate that hit? Usually when I've done automation on individual percussion hits it sounds unnatural. Or would a high-ratio compressor on just that one snare hit be more organic?
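And if I did go the "match by RMS with meters" route, I'm picturing something roughly like this (again just a sketch, numpy/soundfile assumed, and the -14 dBFS target is a made-up example, not a recommendation):

```python
# Sketch of matching tracks by overall RMS instead of peak. The target level
# and filenames are hypothetical; a real job would use proper metering.
import numpy as np
import soundfile as sf

def rms_dbfs(audio):
    """Overall RMS of the file in dBFS (summed to mono if stereo)."""
    if audio.ndim > 1:
        audio = audio.mean(axis=1)
    return 20 * np.log10(np.sqrt(np.mean(audio ** 2)))

target_rms_db = -14.0                        # example target, not a recommendation
for path in ["track01.wav", "track05.wav"]:  # placeholder names
    audio, sr = sf.read(path)
    gain_db = target_rms_db - rms_dbfs(audio)
    # Peaks still matter: if this gain pushes a peak past 0 dBFS, that's where
    # a limiter (or fixing the snare hit back in the mix) would come in.
    peak_after = np.max(np.abs(audio)) * 10 ** (gain_db / 20)
    print(f"{path}: {gain_db:+.1f} dB to hit target, peak after gain = {peak_after:.2f}")
```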
This is more a discussion of what others do, rather than me looking for "the answer." But I'm curious what others think.