
antichef
Trying to get my head around the whole mic pre thing - I've done some reading, and now I have some conceptual newb questions.
Following links from posts here, I read this:
http://www.studioreviews.com/pre.htm
The dollar bill analogy grabbed my attention - the passage says to picture the "sound stage" as a dollar bill, and the individual tracks as coins (or smaller circles) placed on the dollar bill. Poor quality pres yield larger circles (maybe the size of quarters) that have fuzzy edges, and when you go to mix a bunch of them, you find that they won't all fit on the dollar bill, and they blend together in crappy ways. High quality pres yield small circles (like the head of a pin?) with sharp edges so that many more can fit in the mix, and they stay distinct from each other.
Nice - my first question is, how far can you take this analogy? The passage says it's deficient because it's two-dimensional, but why? I realize that it's inexact to describe one set of sensory information in terms of another sense (like describing how the color blue smells), but would it be useful to think of the left/right dimension as the stereo field and the up/down dimension as frequency/pitch? Any other comments on the visualization - ways it could be extended, or places where it breaks down? How does it work, and how does it not work?
Now, for the second question - simple, but I think a good answer will help me conceptually quite a bit: why can't we just send the signal from the microphone straight to the A/D converter and perform the pre-amp function in software? I can be as analog-biased as anyone, but I've also done enough software development to believe that there *is* a threshold of digital resolution beyond which my senses can't distinguish the difference (whether or not current technology supports crossing that threshold is another question). Once the necessary resolution of data can be supported, it might still be difficult to manipulate it programmatically, but that seems like less of an issue when the goal is only transparent amplification. I'm aware there are existing hardware modeling pres that some like and others dislike, but I'm asking a more conceptual question -- given essentially unlimited computing power and development skill, is there still a reason why you couldn't just take the mic signal into the A/D and just make it louder on the other side?
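To make the second question concrete, here's a rough Python sketch of the trade-off as I understand it - quantize the weak mic-level signal first and apply the gain digitally, versus applying (idealized) analog gain first and then quantizing. All the numbers here (a 16-bit converter, a 1 kHz tone about 60 dB below full scale, a clean x1000 gain) are made up purely for illustration, and the "analog" path is modeled as noiseless, which no real pre is:

```python
import numpy as np

BITS = 16        # assumed ADC resolution, for illustration
FS = 48_000      # assumed sample rate in Hz

def quantize(x, bits):
    """Round a signal in [-1, 1] to the nearest ADC step."""
    steps = 2 ** (bits - 1)
    return np.round(x * steps) / steps

def snr_db(clean, degraded):
    """Signal-to-noise ratio of the degraded copy, in dB."""
    noise = degraded - clean
    return 10 * np.log10(np.sum(clean ** 2) / np.sum(noise ** 2))

t = np.arange(FS) / FS
# A 1 kHz tone at rough mic level: about 60 dB below full scale
weak = 0.001 * np.sin(2 * np.pi * 1000 * t)
reference = weak * 1000  # the full-scale signal we want to end up with

# Path A: straight into the converter, then "make it louder" in software.
# The gain scales the quantization error right along with the signal.
path_a = quantize(weak, BITS) * 1000
snr_a = snr_db(reference, path_a)

# Path B: idealized analog gain first, then the converter,
# so the signal uses the converter's full resolution.
path_b = quantize(weak * 1000, BITS)
snr_b = snr_db(reference, path_b)

print(f"digital gain after ADC:  {snr_a:.1f} dB SNR")
print(f"analog gain before ADC:  {snr_b:.1f} dB SNR")
```

If I've set this up right, the digital-gain path loses roughly the 60 dB of headroom the weak signal never used, which I take to be the textbook argument for analog gain before conversion - though it also suggests the gap shrinks as converter resolution grows, which is really what my question is getting at.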