newbie pre questions

Thread starter: antichef
Trying to get my head around the whole mic pre thing - I've done some reading, and now I have some conceptual newb questions.

Following links from posts here, I read this:

http://www.studioreviews.com/pre.htm

The dollar bill analogy grabbed my attention - the passage says to picture the "sound stage" as a dollar bill, and the individual tracks as coins (or smaller circles) placed on the dollar bill. Poor quality pres yield larger circles (maybe the size of quarters) that have fuzzy edges, and when you go to mix a bunch of them, you find that they won't all fit on the dollar bill, and they blend together in crappy ways. High quality pres yield small circles (like the head of a pin?) with sharp edges so that many more can fit in the mix, and they stay distinct from each other.

Nice - my first question is, how far can you take this analogy? The passage says it's deficient because it's two dimensional, but why? I realize that it's inaccurate to describe one set of sensory information in terms of another sense (like describing how the color blue smells), but would it be useful to think of the left/right dimension as the stereo field and the up down dimension as frequency/pitch? Any other comments on the visualization - ways it could be extended or places where it breaks down? How does it work and how does it not work?

Now, for the second question - simple, but I think a good answer will help me conceptually quite a bit: why can't we just send the signal from the microphone straight to the A/D converter, and perform the pre-amp function with software? I can be as analog-biased as anyone, but I've also done enough software development to believe that there *is* a threshold of digital resolution beyond which my senses can't distinguish the difference (whether or not current technology supports crossing that threshold is another question). Once the necessary resolution of data can be supported, it still may be difficult to manipulate it programmatically, but it seems like this would be less of an issue for trying to achieve a transparent amplification. I'm aware there are existing hardware modeling pres that some like and others dislike, but I'm asking a more conceptual question -- given essentially unlimited computing power and development skills, is there still a reason why you couldn't just take the mic signal into the A/D, and just make it louder on the other side?
 
Second question first.

It is true that digital technology has produced many, many wonderful things, including digital recording. But it's just that - digital recording. A voice, for example, starts in the analog domain and needs to go through both a microphone (a 'transducer') and a preamplifier (a 'booster') to lift it up to a usable level. It is traditional for the digital conversion to take place somewhere after the signal from the microphone has become available to the digital system. There are currently microphones with A/D converters built right in, but they are expennnnnnsive. Likewise digital preamps. If you think you have the knowledge, dig right in and maybe you'll wind up a few years down the road with a digital microphone and preamp company.

As to the first question, the dollar bill analogy is a bit loopy but the premise is sound. You can also 'edge your circles' using careful EQ'ing and even more careful compression, but yeah it's fair to make the 'fuzzy'/'not fuzzy' comparison. On the other hand, some music just sounds better in fuzzy circles, and it's good to have a sense of where and when this might come in handy.


 
antichef said:
......but would it be useful to think of the left/right dimension as the stereo field and the up down dimension as frequency/pitch? Any other comments on the visualization - ways it could be extended or places where it breaks down? How does it work and how does it not work?
This is a fairly logical way of looking at things, hence it's a technique that is commonly used, albeit with the addition of a third 'front to back' dimension, which relates to volume or, more specifically, the perceived 'distance' of a sound source. Obviously you can use volume to adjust perceived distance, but reverb and delays, although they don't adjust volume, can also make sounds appear more distant, and of course compression can make things sound more 'up front'.

Of course this is something that works for some people and is pretty nonsensical to others but from what you've said it might well be of interest to you.

This book explores these concepts in some detail:

Clicky
 
antichef said:
...why can't we just send the signal from the microphone straight to the A/D converter, and perform the pre-amp function with software? ... given essentially unlimited computing power and development skills, is there still a reason why you couldn't just take the mic signal into the A/D, and just make it louder on the other side?

You need to amplify the mic signal, hence the preamp.

Regarding your analogy, while I would not suggest that it is wrong, I WOULD suggest that your source material has much more impact than the mic preamps. To that end, the mics themselves also have a greater impact than the mic pre.

I think you are getting into some pretty high end nitpicking when talking about stacking tracks and all.

Personally I feel you can make good records and stack multiple tracks with the decent preamps in an RNP.

But, what do I know?
 
Nice! I'll be reading that book soon. The message I'm getting on the first question is that it's OK to use the dollar bill analogy, and to extend it with a front to back dimension that matches the simulated distance of a sound (which can be established with reverb and other effects).

jdier said:
Personally I feel you can make good records and stack multiple tracks with the decent preamps in an RNP.
I can run with that -- would a cooking analogy work? (not that I can cook) -- if you start with really high quality ingredients, the tools and spices you use are less important (and can even mess things up), but if the inputs are poor quality to start out with, you may be able to make them edible with skillful preparation, but the end result won't be great.

ssscientist said:
... to lift it up to a usable level. It is traditional for the digital conversion to take place somewhere after the signal from the microphone has become available to the digital system.
jdier said:
You need to amplify the mic signal, hence the preamp.
That's the heart of my confusion -- theoretically why not just deliver the weak mic signal raw (through A/D, of course) to the computer and let software take it from there? Why is the mic signal not "usable" in this way? I'm aware that all the available hardware and software products don't support this - just looking for the specific conceptual reason. Does the preamp add information to the signal that wasn't there before -- certainly it does in the case of the "coloring" preamps, but is the same also true about the transparent ones? And if so, why couldn't software add the same information?

ssscientist said:
There are currently microphones with A/D converters built right in, but they are expennnnnnsive. Likewise digital preamps.
I've seen some cheaper mics with USB cables coming out of them that presumably have the A/D converter built in (I've also read about a Neumann one like that which cost more than my car) - I'm thinking those mics have a preamp built in as well. I have been discouraged from buying the cheaper USB mics (mainly because the mic becomes the only interface device the computer talks to, due to clock issues). The existence of digital preamps makes me think that perhaps the computer could take that mic signal and do something with it, but that the industry has a lot of work to do before that becomes feasible. I'm guessing the difficulty is in writing the software for it. If that's right, then maybe it's just a matter of time until we build up the support libraries to do a good job of it.
 
antichef said:
That's the heart of my confusion -- theoretically why not just deliver the weak mic signal raw (through A/D, of course) to the computer and let software take it from there? Why is the mic signal not "usable" in this way? I'm aware that all the available hardware and software products don't support this - just looking for the specific conceptual reason. Does the preamp add information to the signal that wasn't there before -- certainly it does in the case of the "coloring" preamps, but is the same also true about the transparent ones? And if so, why couldn't software add the same information?

Because what an A/D converter does is sample the analog signal at a certain frequency (the sample rate) and quantize each sample's amplitude to one of a fixed number of discrete values set by the bit depth (for example, a 16-bit converter has 65,536 levels), then code it to whatever format it uses (usually PCM in a WAV file). That means it will take *everything* from the analog signal exactly as it is: it will sample noise, unwanted frequencies, amplitude peaks, etc., which will diminish the quality of your digital sound file at the end.

Think about it this way: if your raw mic signal is weak in power and amplitude, then the samples will be drawn from a very thin span of the converter's range, resulting in a very inaccurate conversion from the source to the digital data when each sample is scaled to those discrete levels (i.e. bad quality, because the quantization error is large relative to the signal).
This is why you need to amplify, compress, filter, whatever (add info, as you put it) your signal BEFORE converting to digital data (make it usable), to get the most accurate reflection of your desired sound in your mix.

So pres don't just 'add' info, they also remove loads of unwanted info from your source.
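The "thin span of values" point above can be put in rough numbers. Here's a toy sketch (an idealized rounding model, not any real converter's behavior) comparing a weak mic-level tone quantized directly against the same tone boosted before quantization:

```python
import numpy as np

def quantize(signal, bits=16):
    """Round a signal in [-1, 1] to the nearest step of an idealized bits-bit converter."""
    levels = 2 ** (bits - 1)               # e.g. 32768 steps per polarity at 16-bit
    return np.round(signal * levels) / levels

t = np.linspace(0, 1, 48000, endpoint=False)
sine = np.sin(2 * np.pi * 440 * t)          # full-scale 440 Hz test tone

weak = sine * 0.001                         # raw mic-level signal, around -60 dBFS
boosted = sine * 0.5                        # same tone amplified before conversion

def snr_db(clean, quantized):
    """Signal-to-(quantization-)noise ratio in dB."""
    noise = quantized - clean
    return 10 * np.log10(np.sum(clean ** 2) / np.sum(noise ** 2))

print(snr_db(weak, quantize(weak)))         # poor: only a few steps span the signal
print(snr_db(boosted, quantize(boosted)))   # far better after analog gain
```

The weak signal only exercises a handful of quantization steps, so its SNR comes out tens of dB worse than the boosted version, which is the conceptual reason gain wants to happen before the converter.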
 
Good thread.

The next natural transition from all this abstract discussion is to get your feet wet playing with some actual equipment. The metaphors are accurate, but only in that they get you into the general area where experience kicks in.

For instance, layering tracks: mostly irrelevant if you are planning on recording a singer with guitar only, or possibly even if you are only planning on 4-8 tracks. It becomes much more apparent if you are mixing many tracks together. You can imagine a little noise (from a preamp of moderate cost/quality) in the background of a track, multiplied 24 times; it will get pretty loud. In that case, it might make a significant difference to use a $1000+ preamp.
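The "multiplied 24 times" intuition has simple arithmetic behind it: uncorrelated noise adds in power, so 24 tracks of identical self-noise come out about 10·log10(24) ≈ 13.8 dB louder than one track's noise. A quick sketch with made-up noise levels, just to show the arithmetic:

```python
import numpy as np

rng = np.random.default_rng(0)
n_tracks, n_samples = 24, 100_000

# Each track carries the same amount of uncorrelated preamp self-noise.
tracks = rng.normal(0.0, 0.001, size=(n_tracks, n_samples))
mix = tracks.sum(axis=0)                    # all 24 tracks summed at unity gain

def rms_db(x):
    """RMS level in dB (relative, since the units are arbitrary here)."""
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

rise = rms_db(mix) - rms_db(tracks[0])
print(round(rise, 1))                       # close to 10*log10(24), about 13.8 dB
```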

On the other hand, if you are a home recorder, recording in an untreated room with a computer running, for instance, the chances that your < $500 mic will pick up some noise from that fan (and the fridge in the other room, the dog scratching himself down the hall, etc.) are huge. That noise is bound to be louder on one track than the self-noise of several tracks supplied by a given preamp (provided you are not using some really crappy preamp).

The key to home recording, where budgets are concerned, is BALANCE. Decent mics that suit a decent room (for instance, some high quality, very sensitive condensers will sound like crap when you can hear that fan and fridge and dog through the whole recording, while a dynamic might be less sensitive but give a better recording because it won't include as much ambient noise), through a decent preamp and into decent AD/DA converters.

The big factor here is the budget and the room, IMHO (counting instruments and voices in the 'room' category). If you have a decent room for tracking, you will have a great advantage, as you won't have to fight your equipment to compensate. If you are on a budget where your mic, pre, converters, etc. don't all cost more than (arbitrary number here) $1000-$3000 each per channel, you will probably have to make compromises, and the discussion of how 24 tracks layer together becomes less vital; you are more concerned with how one track records.

Daav
 
antichef said:
That's the heart of my confusion -- theoretically why not just deliver the weak mic signal raw (through A/D, of course) to the computer and let software take it from there?
That's an easy one.

No one's managed to make a budget digital preamp that sounds good, but the market is full of analog ones that deliver spectacular results for what would've seemed like pocket change 15 or 20 years ago.

You said it yourself - the clock issues mean one USB mic ends up being the only audio device the computer will talk to.


 
The sound of a mic can be drastically affected by the preamp's characteristics... transformer input, impedance level, etc. It's good to keep the pre separate from the A/D converter so you can select just the right match between mic and pre. As for software emulation of the hardware, maybe preamp modeling will come along around the same time as realistic microphone modeling.
 
OK, thanks very much to all the responders. Getting a grip on exactly how the signal chain fits together is going to be critical for me, I think.

Here's what I gathered from this thread -- at least two things are missing that would be needed to effectively put a standard mic signal directly into an A/D converter, skipping the analog preamp:

1) There would have to be a new A/D converter design that anticipated receiving mic levels instead of line levels.

2) A very large amount of software that currently hasn't been written would need to be written. This software would take the digital output of the A/D converter directly and would need to both clean up and amplify that information (both subtracting from it and adding to it in meaningful ways). To be a good substitute for existing analog mic pres, the software would have to be very sophisticated, and the cost of developing it (and maybe running it) is still high enough that it makes more sense to stick with the existing approach.

Those two conditions could create a sort of commercial stalemate, or catch-22: the hypothetical company with the expertise to create the new A/D converter may not have the expertise to also develop/market/support the necessary computer software, while the other hypothetical company that could handle the software can't make the new A/D converter, and neither would make the first move, because it would be doomed to fail if the other didn't follow suit. And even if they did act together, they'd be running headlong into competition with a bunch of well established and loved preamp vendors who, initially anyway, would have a higher quality solution. Also, I'm not hopeful that we'll see a dedicated hardware unit standing between the A/D converter and the computer, because I think the process would demand the horsepower, updatability, and open development environment of the computer itself.
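One footnote on the software side of that summary: the "make it louder" step itself is trivially easy in software (a plain multiplication), but it amplifies whatever error the converter already baked in by exactly the same factor, which is why the new converter design in point 1 matters. A toy sketch with made-up numbers and an idealized rounding model of a converter:

```python
import numpy as np

def quantize(signal, bits=16):
    """Idealized bits-bit converter: clip to full scale and round to the nearest step."""
    levels = 2 ** (bits - 1)
    return np.round(np.clip(signal, -1, 1) * levels) / levels

t = np.linspace(0, 0.01, 480, endpoint=False)
mic_level = 0.001 * np.sin(2 * np.pi * 440 * t)  # weak signal fed straight to the A/D

captured = quantize(mic_level)      # what the converter hands to the computer
digital_gain = 500.0
louder = captured * digital_gain    # the software "preamp": just multiplication

# The signal is now at a healthy level, but the error introduced at conversion
# time was multiplied right along with it -- nothing was recovered.
err_before = np.max(np.abs(captured - mic_level))
err_after = np.max(np.abs(louder - digital_gain * mic_level))
print(err_after / err_before)       # equals the gain factor, 500
```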
 