willow said:
recording - how do you avoid phasing (mic placements) and when do you mess with the phase inversion option? are there certain settings that are better before recording?
This is a very complicated question which can only be completely answered with an answer the size of a paperback book. But here are some short answers to get you started:
One should typically only be *planning* to phase-invert a mic signal when that mic is tracking something where the physical waveform itself is inverted relative to the signal on another mic. One typical example is when one is miking both the top and bottom of a drum. When the stick hits the drum, the top skin is pushed down, away from the top mic, but the bottom skin (or the air inside the shell, if there is no bottom skin) is pushed down toward the bottom mic. This is a physical-world version of a phase inversion. Therefore one would invert the signal on the bottom mic, essentially inverting the inversion and syncing the waveform back up with that of the top mic.
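To see the arithmetic behind this, here's a minimal Python sketch (the sample values are hypothetical, not real drum audio) showing that summing opposite-polarity signals cancels them, and that inverting one of them before summing restores full reinforcement:

```python
# Hypothetical top-mic samples for a drum hit (stick pushes the skin away from the top mic).
top = [0.0, -0.8, -0.5, 0.3, 0.1]

# The bottom mic "sees" the same motion with opposite polarity.
bottom = [-s for s in top]

# Summing as-is: the two signals cancel each other completely.
raw_mix = [t + b for t, b in zip(top, bottom)]
print(raw_mix)            # [0.0, 0.0, 0.0, 0.0, 0.0] -- silence

# Flip the bottom mic (multiply by -1) before summing:
fixed_mix = [t + (-b) for t, b in zip(top, bottom)]
print(fixed_mix)          # twice the top signal -- full reinforcement
```

That multiply-by-minus-one is all the "phase invert" (really polarity invert) button on a console or DAW channel does.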
The commonplace general rule for minimizing other phase issues is what's usually referred to as the 3:1 rule. This says that when you have two mics miking two sources (e.g. two drums, two horns, one guitar and one piano, etc.), the distance between the two mics should be at least three times the distance from each mic to the instrument it is miking.
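The point of the 3:1 spacing is that sound level falls off roughly with distance, so the bleed arriving at the neighboring mic ends up far enough below the direct signal that any comb filtering is minor. A quick sketch of that arithmetic, with hypothetical distances chosen just to satisfy the rule:

```python
import math

# Hypothetical distances in meters (illustrative, not a real setup).
d_direct = 0.2     # mic to its own instrument
d_bleed = 0.6      # that instrument to the neighboring mic (3:1 satisfied: 0.6 is 3 x 0.2)

# Sound pressure falls off roughly as 1/distance, so the bleed into the
# neighboring mic is quieter than the direct signal by:
bleed_db = 20 * math.log10(d_direct / d_bleed)
print(f"bleed relative to direct signal: {bleed_db:.1f} dB")  # about -9.5 dB
```

At roughly 9.5 dB or more down, the leaked copy of the signal is too quiet for its phase relationship to cause audible cancellation in the mix.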
willow said:
playback- how do you know when to set a certain track with inversion? for example.. i have a song that was recorded with 8 mics. an engineer heard this song a while back and he said that there were phasing problems in the drum tracks. how did he go about distinguishing them and how would you correct it? is it a matter of finding a track that sounds weak and just inverting that one track?
It's hard to put into words what phase problems actually sound like; he simply had the experienced ear. As far as correcting it goes, see the 3:1 rule above for starters.
Second, make sure you are using directional (cardioid) mics and set them up in such a way as to minimize the amount of bleed you get from other drums.
Third, cut down on the number of mics. I'm serious. There is no need or reason for a "newbie engineer" such as yourself to be jumping into engineering right off the bat with both feet into the deep end by multi-miking the drum kit. Honestly, I don't know where you guys all seem to get the idea that it's necessary to use so many darn mics on a single drum kit. It only takes two ears for a drum kit to sound good live, so why should it take 8 mics to get it to sound good on a recording? Answer: it shouldn't, and it doesn't. After you've been at it for a while and have some miking technique experience going for you, you are welcome to experiment with multi-miking technique. But in the meantime, start with a couple of overheads or a front stereo pair and add a kick mic as the third track. Some of the best commercial albums and CDs have been recorded this way. Maybe then you can add a single snare mic if you want, and then move on from there, but much of the time even that is not necessary.
willow said:
meters- i remember recording on reel-to-reel and i was pretty good at getting good input levels just by hitting in the red only on quick peaks
recording- in sonar, i usually let the meters go to the red occasionally on the fast peaks. (not all the way... -6 area) should i be recording a little hotter?
playback- when i play all the tracks.. the master is too loud when it is set on 0 db? would it be best to set all the tracks at levels so that the master is showing optimum levels at 0 db?
The term "dB" or "decibel" alone is not enough these days; one has to define on what scale it is being measured. It's like saying "32 degrees": depending on whether you're talking Kelvin, Fahrenheit, or Celsius, 32 degrees can mean deathly and unearthly cold, or the temperature of ice, or warm enough to run naked on a sunny beach. It's the same with dB.
In the analog realm the meters are typically referring to dBVU, in which case 0dBVU is calibrated simply to match the rated input or output voltage specification for that device (or, for tape recording, 0dBVU is usually calibrated to match the rated magnetic saturation level for the reference tape recommended for that machine, as measured in nanoWebers/meter). The reason it's usually safe to push an analog signal a bit past 0dBVU is that there is still some "headroom" left in the physical tolerances of the circuitry (or tape head or tape) beyond the rated/calibrated specification. Tape oversaturation or the overdriving of tube voltages causes the "analog distortion" that some find "pleasing".
In the digital realm, however, it's a different story. There things are usually measured in dBFS, where "FS" means "Full Scale". On this scale there literally is no such thing as +1dBFS; 0dBFS is as high as things will go. Period. This is because 0dBFS represents a digital sample of all "1"s, and there is no way to go higher than that digitally. Push a digital signal harder than that and you simply run out of "1"s. There is no more "headroom" like there is in the analog domain; anything that tries to go higher just gets clipped off at 0dBFS. This is clipping distortion. Can one sometimes momentarily clip? Yes; small amounts of digital clipping distortion can be gotten away with without becoming audible to 99.999% of the population. But too much of it becomes extremely audible, and not in a good way by any measure. And though small amounts can be gotten away with, that is sloppy technique; there is really not much of a valid reason to do so. One can achieve similar volumes and results without that clipping by learning how to gain stage, mix and master one's stuff the right way.
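Here's a small Python sketch of that hard ceiling, using the common convention of full scale = 1.0 in a floating-point representation (the 3 dB "too hot" figure is just an illustrative assumption):

```python
import math

FULL_SCALE = 1.0   # 0 dBFS in a float representation

def to_dbfs(sample):
    """Peak level of a single sample relative to full scale."""
    return 20 * math.log10(abs(sample) / FULL_SCALE)

# A peak pushed 3 dB "too hot" -- it tries to reach +3 dBFS.
hot_peak = FULL_SCALE * 10 ** (3 / 20)      # about 1.41

# The converter can't represent anything above full scale,
# so the top of the waveform is simply sheared off flat.
clipped = max(-FULL_SCALE, min(FULL_SCALE, hot_peak))
print(clipped)                # 1.0
print(to_dbfs(clipped))       # 0.0 -- 0 dBFS is the absolute ceiling
```

Unlike tape saturation, which rounds peaks off gradually, this flat shearing is what makes digital clipping sound harsh when there's too much of it.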
G.