Why analogue and not digital?

  • Thread starter: cjacek
Exceptional post

This is the proof of the pudding, so to speak.

It matters not if we know how or even what to measure. It matters not how precisely or how objectively we measure if we are not measuring the right thing.

The fact remains that there is a difference between today's state-of-the-art digital and analog recordings. It is heard by most everybody. (Didn't APL say tape sounds better just a few posts ago?)

Objective data would be the ratio of listeners who prefer digital recordings to those who prefer analog recordings.



OK, you hit on a pet peeve of mine. Tape does not compress, never did and never will. Compression implies a time factor, which tape does not have. Tape DISTORTS when driven too hard. But people often mistake it for compression, because the bias oscillator interferes with high-frequency content, hushing HF components as you approach the saturation point. Therefore the added harmonics are suppressed.

At any rate, the idea that digital is not offensive because it fails to add problems is completely backwards thinking, particularly since albums recorded on analogue tape and dumped to digital still sound digital to me. I remember the singer of my band commenting after I described my experimental listening tests with analogue vs digital recordings of the same piece. He said "of course they'd like analogue better, because its flaws cover up the flaws of the recording". This is like saying my car, which is splattered with mud, looks better because the mud covers the flaws in the paint job. Completely backwards and false. Sorry.





So anything with a 20Hz-20KHz frequency response would sound perfect? Dude, there's more to sound quality than frequency response. There's more to it than distortion and noise. If frequency content were all that mattered, we would be living in a very different world from the one we know.

Case in point: for years people have been noting that vacuum tube amplifier circuits sound better than solid state. There have been all sorts of studies on why, and none of them were conclusive. Frequency response, distortion and noise were all very similar. A lot of people said that "tubes distort asymmetrically and this is more pleasing than the symmetrical distortion of solid state". Well, that may be somewhat true, but it's not a logical conclusion. Most pro gear will have distortion below 0.1% at nominal levels. That is below what would even be noticeable, and the only way to really get noticeable distortion (more like 1%) is to slam the equipment with high levels, something that's not recommended or practiced with any regularity except by people who have no clue what they're doing.

About 5 years ago, new test equipment appeared that can measure dynamic distortion. It showed that while op-amps exhibit very little harmonic distortion, they can have massive dynamic distortion caused by the negative feedback loops required to keep harmonic distortion low. Our ears are more sensitive to dynamic distortion, but it's been impossible to test for it until recently. What I'm saying is, our ears are telling us something is wrong with digital recording; we just haven't been able to test why. Twenty years from now, that may change, and a lot of people will feel very foolish.
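For reference, here is roughly how a steady-state THD figure like that 0.1% is obtained: drive the device with a sine, take an FFT of the output, and compare the harmonic bins to the fundamental. This is only a sketch with an invented nonlinearity standing in for the device under test; it says nothing about dynamic distortion, which is exactly what a single THD number can miss.

import numpy as np

fs = 48000          # sample rate, Hz
f0 = 1000           # test tone, Hz
n = fs              # one second, so FFT bin spacing is 1 Hz
t = np.arange(n) / fs

# Hypothetical device under test: a gentle symmetric nonlinearity.
clean = np.sin(2 * np.pi * f0 * t)
output = clean - 0.001 * clean**3          # adds a small 3rd harmonic

spectrum = np.abs(np.fft.rfft(output * np.hanning(n)))
fundamental = spectrum[f0]                 # bin index equals frequency in Hz here
harmonics = [spectrum[k * f0] for k in range(2, 6)]
thd = np.sqrt(sum(h**2 for h in harmonics)) / fundamental
print(f"THD ~ {100 * thd:.3f} %")          # comes out well under 0.1% for this toy curve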

I'll say this, as a mastering engineer: people are more offended by added stuff than by missing stuff. A 3dB cut at 100Hz is not very noticeable, but a 3dB boost is very noticeable. A downward expander is less noticeable than an upward expander. The live feed from my board sounds better than what I get back from my 1/4" deck or my computer running 88.2KHz. Now, if not adding distortion made sound worse, then the feed off my 1/4" deck would sound better than the live feed from the board. This just isn't the case. This suggests to me that digital recording adds something to the signal that's offensive to human ears that science has not yet been able to pinpoint. We're getting close to a solution, but nothing so great yet.

I'll add one more thing: a 128 kbps MP3 has frequency response, distortion and noise levels very similar to CD, yet nobody in the know will argue that they sound the same.
 
Observer effect is what was really meant....

http://en.wikipedia.org/wiki/Observer_effect

This whole thread is full of the Observer effect. Let's just paste in one of the defs that appears to apply:

---------
Observer bias

The related social-science term observer bias is error introduced into measurement when observers overemphasize behavior they expect to find and fail to notice behavior they do not expect. This is why medical trials are normally double-blind rather than single-blind. Observer bias can also be introduced because researchers see a behavior and interpret it according to what it means to them, whereas it may mean something else to the person showing the behavior. See subject-expectancy effect and observer-expectancy effect.
-----------

Case in point: some distortions were calculated to be 110 dB down and were thus dismissed as non-observable. Others responded that there was a math error. The math-error claim was dismissed in turn.

Let's not even think about what stability is needed to measure 1 uV with any degree of accuracy. (1 uV is about 110 dB below 0.316 V.)
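For anyone checking the arithmetic behind that figure, it is just a voltage ratio expressed in decibels:

import math

# 20*log10 of the voltage ratio quoted above (1 uV relative to 0.316 V):
print(20 * math.log10(0.316 / 1e-6))   # ~110.0 dB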
 
Just to be clear, I personally think there is very little wrong with digital sound today. But just to throw a monkey wrench into the works, I wish they could include algorithms that would remove some of the odd harmonics introduced by modern circuitry and add even-ordered harmonics. I am old enough to remember using a Silvertone Twin Twelve as a young teenager and hearing for the first time the first Silvertone SOLID STATE 6-10. I am confident that the solid state versus tube debate was NEVER settled at all, and I have long suspected that perhaps a part of what people think they don't like about the sound of digital is not caused so much by the process as by the, for the most part, cheap circuitry and components used. Hence the resurgence of boutique tube-based expensive gear to help remove the edge. I do know that here, where my (partly digital) system goes into an analog console with outstanding EQ, it sounds fine.

I agree 100%. Case in point, a lot of FET mics use a PNP output pair. In my testing, the PNP pair comes with a small amount of third order baggage, and when it overloads, both second and third move up in step (and eventually higher order products, but I am talking below that limit).

So I build a FET output stage in addition to the FET buffer. It has nominally higher distortion (just a couple of dB), but it's much more second, and second increases before third. I think it sounds better that way.

Based on my understanding of expensive tube mics, I would expect similar behavior, but of course I haven't tested that myself.

These are all class A circuits too . . .
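A quick numerical illustration of the second-versus-third behaviour described above: a symmetric transfer curve produces only odd harmonics, and introducing any asymmetry brings in even-order products. The curves below are invented stand-ins, not measurements of any actual FET or tube stage.

import numpy as np

fs, f0, n = 48000, 1000, 48000
t = np.arange(n) / fs
x = 0.5 * np.sin(2 * np.pi * f0 * t)

def harmonic_levels(y, k_max=5):
    """Harmonic magnitudes in dB relative to the fundamental."""
    spec = np.abs(np.fft.rfft(y * np.hanning(len(y))))
    return [20 * np.log10(spec[k * f0] / spec[f0]) for k in range(2, k_max + 1)]

symmetric = np.tanh(x)                          # odd-symmetric curve: odd harmonics only
asymmetric = np.tanh(x + 0.1) - np.tanh(0.1)    # a small offset brings in even harmonics

print("symmetric: ", harmonic_levels(symmetric))   # 2nd/4th sit at the numerical floor
print("asymmetric:", harmonic_levels(asymmetric))  # a clear 2nd harmonic appears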
 
Sinc func

A quick read of the sinc function shows that:

Realistic filters can only approximate this ideal, since an ideal sinc filter (aka rectangular filter) has an infinite delay, but it is commonly found in conceptual demonstrations or proofs, such as the sampling theorem and the Whittaker–Shannon interpolation formula.

(and)

As the sinc-in-time filter has infinite impulse response and must be approximated for real-world applications, it's often a "windowed sinc filter." Sinc-in-frequency filters, among many other applications, are almost universally used for decimating Sigma-Delta ADCs, as they are easy to implement and nearly optimum for this use.
-----------------
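To make the quoted passage concrete, here is what a "windowed sinc filter" amounts to in practice: the ideal, infinitely long sinc impulse response is truncated to a finite length and tapered with a window. The cutoff and tap count below are arbitrary illustration values.

import numpy as np

def windowed_sinc_lowpass(cutoff_hz, fs, num_taps=101):
    """FIR lowpass: the ideal sinc, truncated to num_taps and Blackman-windowed."""
    fc = cutoff_hz / fs                       # normalized cutoff (cycles per sample)
    m = np.arange(num_taps) - (num_taps - 1) / 2
    h = 2 * fc * np.sinc(2 * fc * m)          # ideal (infinite) impulse response, truncated
    h *= np.blackman(num_taps)                # window tames the truncation ripple
    return h / h.sum()                        # normalize to unity gain at DC

taps = windowed_sinc_lowpass(20000, 44100)
# Apply with np.convolve(signal, taps, mode="same").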

As we move out of the conceptual-proof arena and into the real world, we fall prey to the pitfall of moving between the time domain and the frequency domain: the math (in general) requires infinite duration for a perfectly band-limited signal, and infinite bandwidth for a time-limited one.

The next question that begs an answer is: which approximation is more annoying, limiting the frequency response (analog) or limiting the time (digital)?
 
One more question

Often heard is the claim that the things analog adds (distortion, etc.) are the root of the digital vs. analog debate; that is to say, digital is an exact capture and recreation of the input signal.

If that were so, then in my mind it should be easy to make a "tape" plugin that sounds like, well, tape, and thus there would be no debate.

So what does analog do (or not do) to make it sound analog and what does digital do (or not do) that makes it sound digital?
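For what it's worth, the usual first attempt at such a "tape" plugin is a static soft saturation followed by some high-frequency loss, and whether that captures what people actually hear is precisely the open question being asked. A deliberately crude sketch (drive and filter values are arbitrary, and real tape is level- and frequency-dependent in ways this ignores):

import numpy as np
from scipy.signal import butter, lfilter

def naive_tape(x, fs, drive=2.0, hf_cutoff=12000.0):
    """Crude 'tape-ish' process: static soft saturation, then gentle HF rolloff.
    An illustration of the idea only, not a model of any real machine."""
    saturated = np.tanh(drive * np.asarray(x)) / np.tanh(drive)   # soft clip, unity at full scale
    b, a = butter(1, hf_cutoff / (fs / 2))                        # first-order lowpass
    return lfilter(b, a, saturated)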
 
This whole thread is full of the Observer effect.

True. A lot of people are saying "I like this or that better."

Then there were a few posts trying to understand what caused the preference. Some of those posts contained errors. Attempts to correct the errors were dismissed as pooh-poohing the preference, which was not the intent.

The preferences are real.

Here's another subjective story. I got a turntable and a really nice LP of Kind of Blue. It sounded better than the CD. Analogue wins. Then I got a better CD player. I was very surprised when I put the CD in and it closed most of the gap between the old CD player and the LP. Analog still wins, but not by as much. Thus, D/A conversion is not trivial.
 
Case in point: some distortions were calculated to be 110 dB down and were thus dismissed as non-observable. Others responded that there was a math error. The math-error claim was dismissed in turn.

Let's not even think about what stability is needed to measure 1 uV with any degree of accuracy. (1 uV is about 110 dB below 0.316 V.)

Some never posted their refutation; others just said "your math is wrong". I can post my calculations, sample files, and charts in detail if you like . . . every specific claim I have made is repeatable and verifiable by anyone here.

We don't need stability in this case, because these are measurements that are occurring on a digitally-generated, noise-free signal using math. So we only need a computer that is capable of calculating the required precision. Yes, if we attempt to output a -110dBFS signal to the real world, we have to deal with D/A (and amplifier) noise. But that helps my argument against audibility.
 
I'd like that. I can't quite figure out what his POV is.



That's not how D/As work.



You don't get the original signal back from analog, either.

- That you cannot see Dr Zee's POV is your limitation not his.....

- That is exactly what D/As do. They take a digital value and produce a corresponding signal level. Happens that they are doing this many times a second. Happens that some converters are stable at 0 Hz conversion rates and others are not. Happens that some converters ring like crazy and others not. Happens that ... you get the picture.

-LOL I like your admission that Digital is flawed. (You don't get the original signal back from analog, EITHER)

The assertion has been made on the digital side of the fence that digital is an exact capture/playback system. It shows up in the form of "analog adds distortion, etc."

Too bad that is not the question.....
 
Some never posted their refutation; others just said "your math is wrong". I can post my calculations, sample files, and charts in detail if you like . . . every specific claim I have made is repeatable and verifiable by anyone here.

We don't need stability in this case, because these are measurements that are occurring on a digitally-generated, noise-free signal using math. So we only need a computer that is capable of calculating the required precision. Yes, if we attempt to output a -110dBFS signal to the real world, we have to deal with D/A (and amplifier) noise. But that helps my argument against audibility.

Ah, I get it. It works in your ideal system, therefore it's significant in the real world.

I thought we were talking about real world.
 
So what does analog do (or not do) to make it sound analog and what does digital do (or not do) that makes it sound digital?

I'll go first! (surprise)

Part I, I don't know.

Part II, going back to pianodano's post, in addition to A/D/A, we know that the signal must also pass (at least) two more analog buffer stages. Most converters, even midrange ones, are using fairly generic opamps for that task. There are some shockingly good opamps out there these days, but my converter (RME) isn't using them, not even the pretty good ones. In fact, I remember opening it up when I got it and thinking I wanted to drop in something better, but I can't easily take it out of production, so . . . years pass by, etc.

Now, would anybody pass their signal through two more opamps, just for fun? Stated another way, in an A/B/X test, could people discern the difference between one audio path, and another with two more opamp stages, of middling opamps and generic small parts? This is with no digital conversion, mind you, just two different analog paths.

My guess would be yes, that difference would be audible. So that's the first concern.
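For anyone wanting to actually run that kind of A/B/X comparison, the bookkeeping is simple: the listener hears A, B, and a randomly chosen unknown X, and you count correct identifications over enough trials to rule out guessing. A bare sketch, with the actual playback left as an assumed play() function:

import random

def run_abx(play, n_trials=16):
    """play(label) is an assumed function that plays clip 'A', 'B', or the hidden X.
    Returns how many times the listener identified X correctly."""
    correct = 0
    for _ in range(n_trials):
        x = random.choice(["A", "B"])      # hidden assignment for this trial
        play("A"); play("B"); play(x)      # in practice the listener can re-listen freely
        guess = input("X sounded like (A/B)? ").strip().upper()
        correct += guess == x
    return correct

# With 16 trials, roughly 12 or more correct is unlikely (under 5%) to be pure guessing.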

Second, I know that a perfect A/D converter will cause a mild attenuation at very high frequencies, near its upper limit. Is that audible? I think so, from the results of a test I did some time ago with my new and old (both not-so-perfect) converters. I could measure a difference in high frequency response, but there were also some general audio quality issues (again, analog path). But people could hear a difference, no doubt. So that's a maybe, but it goes away with a high enough sample rate.

Moving beyond flaws that must exist in theory, there is the issue of implementation. I've mentioned implementation of the analog path, but the converter itself is subject to quality issues, such as jitter, and the DSP devoted to its decimation and reconstruction routines. I am not able to test these things directly, but I trust those who describe them.


I think there is plenty there to chew on without having to add a bunch of other stuff that isn't well supported by theory or experimentation.
 
Ah, I get it. It works in your ideal system, therefore it's significant in the real world.

I thought we were talking about real world.

Well, in the OP wado starts with a hypothetical noise-free 10kHz sine wave and discusses the square-wave shape of the waveform. By definition, that is not real-world. But it's easy to measure.

Next, he moves on to a noise-free sine wave at -54dBFS, in 16-bit, and discusses the resulting quantization distortion (QD). That is also not real world, because any converter should limit its dynamic range to within its bit depth. That is, if you make a 16 bit converter, you better make sure you have noise at -90dBFS.

With a 24 bit converter, you don't have to add noise, because the self-noise of the electronics will exceed the necessary level of dithering noise. Wado noted that most current 24 bit converters actually only achieve 20 bit dynamic range; but by capturing the bottom 4 bits of noise, that's a guarantee that no QD can ever occur, meaning that the converter will achieve its stated dynamic range (and indeed will have signal audible below that range, for the same reason that's true in the analog world).

Now, if you truncate that 24 bit signal to 16 bit without dithering (adding noise), you can generate QD on a test signal. And I posted files clearly showing that. That's a digital distortion, but it's not real world. It's also a sloppy engineering practice.
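A minimal sketch of that truncation experiment: quantize a low-level sine to 16 bits with and without TPDF dither. An FFT of the undithered version shows distortion products tied to the tone; the dithered version trades them for a plain noise floor. The signal level mirrors the -54 dBFS example, everything else is arbitrary.

import numpy as np

fs = 44100
t = np.arange(fs) / fs
x = 10 ** (-54 / 20) * np.sin(2 * np.pi * 1000 * t)    # the -54 dBFS test tone

lsb = 1 / 2 ** 15                                      # 16-bit step size (full scale = +/-1.0)
truncated = np.floor(x / lsb) * lsb                    # straight truncation: correlated QD
tpdf = (np.random.rand(len(x)) - np.random.rand(len(x))) * lsb
dithered = np.floor((x + tpdf) / lsb) * lsb            # TPDF dither applied before truncation

# An FFT of 'truncated' shows distortion spurs related to the tone;
# 'dithered' trades them for a featureless noise floor with no spurs.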

Later tonight, I will upload and post real world recordings I did this AM, trying to generate QD doing exactly that, sloppily truncating a 24 bit real world signal to 16 bit with no added dither. I'll let you evaluate the results.

So I believe I am firmly rooted in the real world.
 
So anything with a 20Hz-20KHz frequency response would sound perfect? Dude, there's more to sound quality than frequency response.

Really? What a revelation to me....30 years in audio and all I thought mattered was frequency response. No wonder I keep getting fired ..
Noise, distortion? I thought they were just rumours... (joke)


Read my post. I was responding to the original poster who seemed to think that sampling AS SAMPLING "leaves bits out" AT ALL AUDIO FREQUENCIES. "leaves the meat out", he said.

I said that if that were true, sampling at half of 44.1 kHz (22.05) or a quarter (11.025) should "leave even more meat out", making it sound even worse than the poster already says it is at 44.1. But in fact what it does is limit your upper frequencies.
Nobody I know, including you, says lowering the sample rate in this way affects a 100 Hz or even a 500 Hz sine wave.
I gave him a simple test so he could clearly hear that the highs are shelved at basically 10k and 5k respectively, and that the tones well below that aren't even touched.
How you could take my post as meaning that frequency response is all that matters is beyond me. I was saying that upper frequency response, IN THIS TEST, is all that CHANGES, not all that matters.
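That test is easy to reproduce in a few lines: resample a low tone plus some high-frequency content to half and quarter rates and compare. Only content above the new Nyquist limit disappears; the low tone is untouched. A sketch assuming scipy is available:

import numpy as np
from scipy.signal import resample_poly

fs = 44100
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 500 * t) + 0.5 * np.sin(2 * np.pi * 15000 * t)

half = resample_poly(x, 1, 2)     # 22.05 kHz: the 15 kHz component is gone (above ~11 kHz)
quarter = resample_poly(x, 1, 4)  # 11.025 kHz: anything above ~5.5 kHz is gone
# The 500 Hz tone comes through both conversions essentially unchanged;
# lowering the sample rate only removes content above the new Nyquist limit.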

How can we have an intelligent discussion when you don't even read the post properly?


Cheers Tim
 
OK, you hit on a pet peeve of mine. Tape does not compress, never did and never will. Compression implies a time factor, which tape does not have. Tape DISTORTS when driven too hard. But people often mistake it for compression, because the bias oscillator interferes with high-frequency content, hushing HF components as you approach the saturation point. Therefore the added harmonics are suppressed.

If harmonics are being added at lower volumes but not at higher volumes, then perhaps it's more of an upward expansion rather than a downward compression - is that it? If I take a digital recording and send it out to tape and back, then normalize it so the peaks match the digital recording, the version that went to tape and back will have a higher average volume. Whether it's compression or expansion, the net result is similar.

Not sure what you mean about a time factor.

At any rate, the idea that digital is not offensive because it fails to add problems is completely backwards thinking, particularly since albums recorded on analogue tape and dumped to digital still sound digital to me. I remember the singer of my band commenting after I described my experimental listening tests with analogue vs digital recordings of the same piece. He said "of course they'd like analogue better, because its flaws cover up the flaws of the recording". This is like saying my car, which is splattered with mud, looks better because the mud covers the flaws in the paint job. Completely backwards and false. Sorry.

Perhaps that ADD or AAD recording still sounds digital to you, but if it were DDD it would sound different to a lot more people. Thus my opinion that what tape adds is more material to the sound than whatever digital may (or may not, depending on who you ask) do to it. Not sure how that is backwards thinking.
 
leddy said: If I take a digital recording and send it out to tape and back . . .
I wonder if you are allowing for the crazy-looking but great-sounding response that can result on playback from tape. Or if you somehow are, how can that be? http://www.endino.com/graphs/

To get the magic happening, ALWAYS record to tape first and THEN transfer to digital. Not the other way around.

Danny
 
To get the magic happening, ALWAYS record to tape first and THEN transfer to digital. Not the other way around.

Danny

I'm with you, but I mostly record my own jazz group on location. It's hard to drag a reel-to-reel machine to gigs and stop to change tape 3 times every set...

I have little choice but to record digital at the moment. It still makes a huge difference to my ears to mix down to 1/4" 2-track.

I am planning a studio recording where we go direct to 2-track tape.
 
I wonder if you are allowing for the crazy-looking but great-sounding response that can result on playback from tape. Or if you somehow are, how can that be? http://www.endino.com/graphs/

To get the magic happening, ALWAYS record to tape first and THEN transfer to digital. Not the other way around.

Danny

Frampton did his instrumental album that way. It sounds good.
 
Frequency response is NOT the only thing that changes with sample rate changes. Otherwise, we'd still be using 44.1 kHz for everything. But 96 kHz is now the standard because it sounds cleaner. Most of the distortion is pushed beyond the human hearing range. This is not just an issue of converter quality but of the inherent quality of the sample rate itself. Many classical recordings are done either DSD or 192 kHz BECAUSE they can hear the difference.
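One measurable way the rate itself matters, independent of converter quality, is where distortion products generated in the digital domain end up: harmonics that land above Nyquist fold back down, and 96 kHz leaves far more headroom above the audio band before that happens. A small illustration with an arbitrarily chosen 15 kHz tone:

def folded(freq_hz, fs):
    """Where a frequency lands after sampling at fs (folding about fs/2)."""
    f = freq_hz % fs
    return min(f, fs - f)

f0 = 15000                           # a loud high-frequency tone, picked for illustration
harmonics = [k * f0 for k in range(2, 6)]
print([folded(h, 44100) for h in harmonics])  # [14100, 900, 15900, 13200] -> folded into the audio band
print([folded(h, 96000) for h in harmonics])  # [30000, 45000, 36000, 21000] -> all above 20 kHz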



If harmonics are being added at lower volumes but not at higher volumes, then perhaps it's more of an upward expansion rather than a downward compression

OK, you misread me. There are no harmonics being added at lower volumes because there's no saturation. Harmonics are added at HIGH volume because the peaks are getting distorted. It's just hard to hear those added harmonics because the bias causes the high frequencies to sag as the tape is hit harder. Regardless, it's not compression or expansion, it's distortion. Nothing more, nothing less. If you hit a line amp too hard and roll off the high end, is that compression? No. For it to be compression, the level of the waveform would have to be attenuated regardless of where it is in the cycle, because a compressor averages volume changes over time precisely to PREVENT distortion. On the other hand, when you hit tape too hard, it instantly crushes the highest peaks and leaves the lower-level material 90 degrees later in the cycle alone. That's the difference.
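The distinction being drawn here can be put in code: a clipper reshapes the waveform sample by sample (adding harmonics instantly), while a compressor follows a smoothed level estimate and changes gain over milliseconds, leaving the shape within a cycle alone. A bare-bones sketch with made-up thresholds and time constants:

import numpy as np

def hard_saturate(x, threshold=0.8):
    """Acts on each sample instantly: peaks are reshaped and harmonics are added."""
    return np.clip(x, -threshold, threshold)

def simple_compressor(x, fs, threshold=0.8, ratio=4.0, release_ms=100.0):
    """Acts on a smoothed level estimate: gain moves over milliseconds,
    so the waveform shape within a single cycle is left largely alone."""
    alpha = np.exp(-1.0 / (fs * release_ms / 1000.0))   # release smoothing coefficient
    gain = np.ones(len(x))
    env = 0.0
    for i, s in enumerate(x):
        env = max(abs(s), alpha * env)                  # peak envelope follower
        if env > threshold:
            gain[i] = (threshold + (env - threshold) / ratio) / env
    return x * gain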


As for tape plugins for DAWs, they do exist and they do not sound like tape. They sound like distorted digital. There's a reason ProTools sounds like ProTools. There's a reason RADAR sounds significantly better than ProTools, just as there's a reason ADATs and DASHs sound the way they do. Metallica's "Black Album" sounds like it was recorded on a DASH even though it started on 2" and got dumped to DASH for overdubs. The reason: the individual tracks were distorted by a digital medium. That distortion was carried to the final master even though they used an analogue board and so forth. "Brothers in Arms", though also a great recording, sounds brittle and edgy because it was recorded on a DASH. On the other hand, "The Joshua Tree" sounds open and warm because it stayed analogue all the way to the mastered stage. A lot of people won't hear the difference, but I do. That tells me that more than the frequency response changes with sample rate. RADARs running at 96 kHz sound pretty good, though not as good as a 2" 16-track. Almost anybody will choose a 1/2" 30 ips master over a 24-bit 96 kHz one, even though the frequency response is very similar.

The thing that we (pro-analogue guys) keep trying to say is that things are not always as they seem. Even though modern DACs have gotten very good at trying to fill in massive amounts of missing information, you're still listening to a computer generated guess, not the original. The output may be similar, but it's not the same. Now, the output of a 1/2" tape is not going to be quite the same as the original either, but its distortions are not as harmful. It must follow natural laws of physics, and we interpret its distortions as part of the original sound because it by nature follows the physical world. Digital, on the other hand, must use rules written by man to artificially construct a new waveform from measurements. Those rules try to emulate the natural physics, but there's not enough information to do it properly. I mean both sample-wise and because of our limited knowledge of physics.

Think about this: you can take a plaster mold of your face and make a bust of it. The mold may change shape a little as it dries, it may have some dirt on it, but the final product has a direct physical link to the original.

Now, somebody else wants to make a bust, but they measure the face centimeter by centimeter with a ruler and write down those measurements. A different person later recalls those measurements and makes a bust going by them. Of course he only used centimeter precision, and the detail of a human face goes much finer than that. So he rounds off the edges of each 1 cm square section, leaving the center alone. The final product may look very much like the original, but people would be able to tell that there's something not quite right with it.

I should also add that in all the interviews I've read, all the engineers and producers I've met, all of them who have worked with analogue like analogue better for its sound quality. They just use digital because it's easy to edit. The only people I really see arguing that digital sounds better are people who haven't logged any major time on some good analogue equipment. I've worked with many people who thought digital was IT until they did a project on some high-end analogue gear, some of them even on mid-line analogue gear, and it changed their whole perspective.
 
Even though modern DACs have gotten very good at trying to fill in massive amounts of missing information, you're still listening to a computer generated guess, not the original.

Exactly. Some of us would like to measure, explain, and find countermeasures for those differences.

w1942 said:
Now, the output of a 1/2" tape is not going to be quite the same as the original either, but its distortions are not as harmful.

And some of us would like to characterize those, develop the transfer functions, and write plugins.

Your face analogy is pretty pathetic. It would be like making a recording and sampling once every five seconds or so.

Now here's a double-blind subjective experiment that somebody with, say, a MIDI-controlled pipe organ could do. You could have three things to listen to.

1. Live source through the signal chain, ie, organ-mic-mixer-amp-monitors. Might be tough to keep the flanking paths down.

2. A digital reproduction.

3. A nice analog reproduction.
 
Exactly. Some of us would like to measure, explain, and find countermeasures for those differences.

And some of us would like to characterize those, develop the transfer functions, and write plugins.

Fine, but why persist in burdening everyone else with it?



Your face analogy is pretty pathetic. It would be like making a recording and sampling once every five seconds or so.

No, it was exaggerated, probably in an attempt to make the point unmistakably clear... obviously it wasn't clear enough for you.

:cool:
 