Theoretically, DSP should go far beyond matching an equalization curve, although I don't know anything about the inner workings of the Antares unit. If it were as easy as setting EQ, then the curves would be published all over the web, and no one would buy the Mic Modeler.
Despite my objections, DSP has astonishing potential, because it can emulate analog changes in ways that analog itself cannot, including subtle changes in phasing and impedance (I'm not knowledgeable about electronics, so someone who is might like to explain this better than I can). A limited but useful example is Sony's Super Bit Mapping, which greatly improved the sound of commercial digital recordings without increasing the bit depth or the sampling rate.
Have you ever seen one of those mechanical shape-copiers at the hardware store? By lining up dozens of small, stiff rods in a secure holder, you can press it against an object (say a turned piece of wood), and then set your lathe to exactly match the curves. It's a cool, cheap, accurate tool.
With certain CAD units, the same can be done in three dimensions by marking enough spots on the surface for the computer to "model" the entire object (even if that "object" is a person in motion). This is the "CG" used extensively in the movies.
The potential of DSP is that it doesn't have to UNDERSTAND the characteristics that it is emulating in order to apply them to another source.
Imagine, for instance, that someone's grandfather heard Caruso sing in person many times in the early part of the 20th century. For the sake of argument, imagine as well that this man's hearing is still excellent (a bit of a stretch, I realize).
Then one day grandfather hears a new tenor whose vocal character is as close to Caruso's as any he's heard in the last 80 years. He recognizes it at once. The singer may be young, and his voice will change as he matures, but for now, he sounds "like" Caruso.
With DSP and enough computing power, *theoretically* an engineer could take the old wax cylinder recordings of Caruso and apply the characteristics of someone with a similar voice, reproducing the original performances about as faithfully as the new tenor's voice resembles Caruso's. That's too big an "if" for many people, but it underscores the potential. There will always be those who prefer the direct-to-wax recordings for their uncluttered purity.
(This example was done -- not well -- in the mid-80s, when the desktop computers we use today would fill an entire room.)
It's still a matter of "source" versus "signal," of course. Perhaps someday DSP will enable my *recorded* singing voice to sound *like* Caruso instead of gravel under foot. But do we all want to go around talking like James Earl Jones? What about the subtle differences by which we establish our very sense of "self"?
These are philosophical questions, of course, not practical considerations. If you're recording a great singer who always hits a certain note flat, there's nothing wrong with using a pitch corrector to fix that note. If the Antares subjectively improves the sound of your recordings, that is enough justification for its existence right there, whether it is accurately emulating a different microphone or not.
In the 60s, some engineers would splice together 30 takes of a difficult piano concerto to get a technically perfect take. Is that valid? Glenn Gould thought so; many others thought it was a crock.
There is also a steeply rising perceptual curve. When Edison first introduced his wax-cylinder phonograph, otherwise sane and normal people declared publicly that the sound was indistinguishable from the actual event. How could this be? Well, they had never heard a recording before.
When I first heard the Yamaha DX-7 in the mid-eighties through a guitar amp, I believed that the "orchestral chimes" sounded exactly like the real thing. That seems naive and humorous now.
The compact disc (actually 16-bit / 44.1 kHz sampling) sounds inherently terrible, yet how many of us who had $100 turntables thought that was the case when CDs were first introduced? Until we *hear* music recorded at 24-bit / 96 kHz or better, we have no idea how much better digital recording can be than the "Red Book" standard.
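For anyone curious about the numbers behind that claim, here is a back-of-the-envelope sketch using the standard textbook approximations (nothing specific to any one converter): ideal PCM dynamic range is roughly 6.02 dB per bit plus 1.76 dB, and usable bandwidth tops out at half the sample rate (the Nyquist limit).

```python
# Rough figures for why bit depth and sample rate matter.
# Ideal PCM dynamic range: ~6.02 * bits + 1.76 dB (textbook approximation).
# Usable audio bandwidth: half the sample rate (Nyquist limit).

def dynamic_range_db(bits):
    """Theoretical dynamic range of an ideal PCM converter, in dB."""
    return 6.02 * bits + 1.76

def nyquist_khz(sample_rate_hz):
    """Maximum representable frequency in kHz for a given sample rate."""
    return sample_rate_hz / 2 / 1000

for bits, rate in [(16, 44_100), (24, 96_000)]:
    print(f"{bits}-bit / {rate / 1000:g} kHz: "
          f"~{dynamic_range_db(bits):.0f} dB dynamic range, "
          f"{nyquist_khz(rate):g} kHz bandwidth")
```

By these figures, Red Book gives roughly 98 dB of theoretical dynamic range out to 22.05 kHz, while 24/96 gives roughly 146 dB out to 48 kHz -- real-world converters fall well short of the theoretical numbers, but the gap between the formats is the point.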
Which brings me back to whether the Antares, or any DSP unit that regular people can afford at this time, is "good enough" to stand in for a better source instrument. Some will argue we should archive the cleanest recordings we can, because each year will see improvements in after-the-fact processing that can always be applied later.
Finally, and I hope some of you can answer this, why do some musicians and engineers today want to emulate the "sound of the Beatles"? I understand that they were talented composers, famous celebrities and multi-millionaires, but when I stick one of their recordings on a good stereo system today, the SOUND (not the music) is dreadful. Does anyone really think that 60s mics and consoles sound better than what we can do today? I'm really asking -- there may be legitimate considerations of which I am completely unaware. And I truly like the sound of the U67, for instance -- what would be so difficult about duplicating it today?
Rest in peace, George Harrison. I'm sorry to lose another icon of my youth.
Kind regards to all,
Mark H.