Why analogue and not digital?

  • Thread starter: cjacek
OK, that's a lot of stuff to read. But first things first.

Ethan, I really wish a consumer 24 bit format had caught on. It might have alleviated the volume wars, for one.

That's complete and utter bull. I thought the same myself at first when I was recording 16-bit. All the sources I saw at the time said to record as hot as possible to minimise quantization distortion. Yet I quickly learned that ANY format pushed to its limit sounds like crap. I found myself getting much better sound recording at -12dBfs RMS at the highest. Also, there's no doubt in my mind that a mastered song left at -12dBfs to -18dBfs RMS sounds way better than the same song mastered to -9dBfs RMS. But now we're pushing songs up to -5dBfs RMS and it's nothing but mush. Less distortion is better, and limiting adds distortion. The loudness war started as a way to grab attention but has since become all about numbers. "My song is louder than your song" kind of crap.
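The RMS figures above are easy to sanity-check in a few lines. A toy NumPy sketch (the signal, levels, and the `rms_dbfs` helper are made up for illustration): a full-scale sine sits at about -3dBFS RMS, and driving it into a hard limiter raises the RMS while the peak stays pinned at 0dBFS, which is the whole loudness-war trade.

```python
import numpy as np

def rms_dbfs(x):
    """RMS level of a signal relative to digital full scale (1.0), in dB."""
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

t = np.arange(44100) / 44100.0
sine = np.sin(2 * np.pi * 440 * t)      # 0 dBFS peak test tone
loud = np.clip(2 * sine, -1.0, 1.0)     # driven 6 dB into a hard limiter

print(round(rms_dbfs(sine), 2))         # about -3.01 dBFS
print(round(rms_dbfs(loud), 2))         # hotter RMS, same 0 dBFS peak
```

The clipped version measures "louder" by RMS even though its peak level is identical; all it gained was flattened crests.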



In the real analog world, you can't stack up an arbitrary number of offsetting gain changes without suffering a severe penalty in terms of noise or headroom. Hence, no one works that way, whether analog or digital.

Really? Have you dissected a modern mixing console? Any given channel may contain as many as 6-10 opamps just to control gain. Each opamp typically has around 20 transistors in it, equalling about 200 gain-controlling semiconductors per channel. But then there are resistors etc. tying everything together, so that's more gain change. Some lower end stand-alone "tube" preamps often have 4-6 opamps in and of themselves doing the work. Of course this sort of equipment doesn't generally sound all that great. The legendary consoles of old have minimal gain stages and sound better for it. Of course if you had 200+ gain changes in a single digital channel, the sound WILL degrade. If you're using 48-bit resolution, it will be minimal. But even in 32-bit float, that's best avoided. But this just proves my standing theory that the less you need to manipulate the sound, whether analogue or digital, the better.



You seem to be implying all minimum-phase digital EQs will generate the former and not the latter desired result, but that is something I find very difficult to measure

Not sure I follow you. I said that linear-phase EQ cancels the ripple & phase shift effect in trade for pre-echoes.



Well, I don't mind the UAD stuff

UAD does pretty well, but trust me, they're not quite the real thing either.



(2KHz square generates aliasing)
Is this a problem with the digital theory or a particular converter's implementation?

Both. I'll send you an example in a day or 2.



As I said earlier, designers do use such complex test signals. I would enjoy seeing your test methodology, since it could be useful not only for evaluating digital theory but different brands of converters.

Good to hear the serious guys are getting better. The low end stuff is pretty abysmal. My test methodology mostly consists of contemplating an idea, recording it, and listening to it. See what my ears tell me. I remember Rupert Neve commenting that a piece of digital equipment he was being shown boasted a 120dB dynamic range. He said, "No, it's no more than 100dB." "What test equipment did you use?" "My ears. You turn down the signal and it falls apart; you don't need test equipment to tell that." (I paraphrased.)



However, 44.1kHz is an adequate if not ideal data rate. Lavry argues for a minimum 60kHz rate, but no more than 96kHz. I have tested his theories and arrived at his result.

44.1KHz is not ideal. If it were, then you wouldn't need to have microprocessors generate samples in between them and try to guess what the original wave was like. Using my ears, I did some tests running a simple reverb algorithm at 44.1KHz, 88.2KHz and 176.4KHz. The outcome was cleaner at higher sample rates. I prefer 88.2KHz myself because it's easy to handle and sounds cleaner.
Almost all the higher end plugins double the sample rate to do their internal processing behind the scenes. DACs typically use at least 8x oversampling (usually via delta-sigma modulation) before converting back to analogue. If 44.1KHz was ideal, then none of that other stuff would be true.
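The reason plugins oversample around nonlinear processing can be shown in miniature. A NumPy sketch (not any plugin's actual algorithm; the tone, clip gain, and filter are all made-up illustration values): hard-clipping a 6kHz tone at 44.1kHz creates harmonics above Nyquist that fold back as inharmonic aliases, while doing the same nonlinearity at 4x the rate, band-limiting, and decimating suppresses them.

```python
import numpy as np

fs, n = 44100, 44100
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 6000 * t)

def clip(v):                          # a crude nonlinearity, like a digital limiter
    return np.clip(3 * v, -1.0, 1.0)

def lowpass(v, cutoff, rate, taps=1025):
    m = np.arange(taps) - taps // 2
    h = np.sinc(2 * cutoff / rate * m) * np.blackman(taps)
    return np.convolve(v, h / h.sum(), mode="same")

# naive: clip at 44.1 kHz; harmonics above Nyquist fold back into the audio band
naive = clip(x)

# oversampled: zero-stuff to 4x, interpolate, clip, band-limit, decimate
up = np.zeros(4 * n)
up[::4] = x * 4
up = lowpass(up, 22050, 4 * fs)
oversampled = lowpass(clip(up), 22050, 4 * fs)[::4]

spec_naive = np.abs(np.fft.rfft(naive)) / n
spec_os = np.abs(np.fft.rfft(oversampled)) / n
# the 30 kHz harmonic of a clipped 6 kHz tone folds to 44100 - 30000 = 14100 Hz
print(spec_naive[14100], spec_os[14100])
```

The alias at 14100 Hz, which is not harmonically related to 6 kHz, is strong in the naive version and far smaller in the oversampled one; that inharmonic junk is one candidate for the "gritty" quality described elsewhere in the thread.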




You can say that -54dBFS is required for ambience, and that's true. But the distortion is something like -50dB below the signal, which puts it in the -104dBFS range, and thus extremely difficult to hear. If you turned up the -54dBFS peak 16-bit signal to -6dBFS, you'd have the same result as your 8-bit example

Your math is flawed. But at any rate, I wasn't talking about recording a nominal level at -54dBfs. I was talking about recording at the standard -18dBfs and finding the detail which once lived at -54dBfs going missing. Now, even the best modern 24-bit ADCs only yield about 20-bit performance in real-world tests. You're recording at -18dBfs to preserve your headroom, so now your nominal resolution is effectively 17-bit. Critical listening tests have shown that most humans need about 120dB of dynamic range before reaching the point of diminishing returns, so you're still falling short. Bob Ludwig, a pro-analogue guy, will profess this. Bob Katz, a pro-digital guy, also admits it.



While we're on the topic of dither (whether as a process or Ethan's real-world view), it prevents this type of distortion from occurring when bit depth is reduced, so that quantization distortion does not occur at levels audible above noise.

This holds true until you run any kind of process. But even dither has its limitations. You can use dither to turn a 96dB S/N ratio into an 80dB S/N ratio and, in exchange, turn that 80dB of usable dynamic range into 96dB. But it seems to get less effective at higher frequencies, again demanding higher sample rates to minimise distortion. I'm exaggerating of course but you catch my drift.



it's incredibly difficult if not impossible to generate audible quantization distortion on an actual recording.

I beg to differ. It may be subtle, but it's there. It shows itself as loss of warmth and ambience.



But I accept all manufacturers' specs unless I know them to be unreliable. To date, that has only happened with one (rather new) manufacturer, and they don't make a recorder.

Really? My Otari claims to have an S/N ratio of like 75dB, but that's only if you record at 1,200nW/m on 250nW/m tape. My portable flash recorder says you should record as loud as possible without the clip indicator lighting. I just tested a "High Definition" camera that recorded full 720p and yet only resolved 200 lines. People who publish specs cheat. They do it all the time to make them appear better than the competition.



Like earlier when you talked about hitting tape really hot to increase its dynamic range

I never said that. I record on GP9 at 400nW/m which is below the typical level. By recording at a lower level, I preserve the headroom of the tape which increases the dynamic range. Recording hotter would crush the peaks reducing the crest factor of the music. Of course, recording lower would cause more detail to be lost in the noise floor. Though I use a modded version of IEC with a little emphasis at 10KHz to hide noise and increase headroom, I'm very picky about operating levels.



Now to answer your wonder about the "Brothers in Arms" album, there was a lot that was different about the new mix. But the cold, hard sound of old digital is almost gone because of working at significantly higher resolutions than were available originally. I remember the first time I heard a digital recording that I thought sounded great. I later learned that it was a RADAR using 96KHz converters. It was still mixed on an analogue console. I often point out which recordings were done on ProTools to my wife when we're listening to the radio. She asks how I know and I can only answer "because it's gritty sounding". On paper, there's nothing to prove what I claim but I'm right about 80% of the time. Even though PT is very high resolution, it still sounds bad mixing digitally in my opinion.


Now, I've used FFT windows to analyze my tests, but you have to remember that it's very hard to measure the defects of something WITH the exact same thing. In other words, FFTs can be useful but cannot be fully trusted for showing digital abnormalities, because they themselves are digital and require massive processing, and thus add distortion to the resulting chart.


In parting, I'm reminded of a wise man who once said, "the more the words, the less the meaning, and how does that benefit anybody?"
 
That's complete and utter bull. I thought the same myself at first when I was recording 16-bit. All the sources I saw at the time said to record as hot as possible to minimise quantization distortion. Yet I quickly learned that ANY format pushed to its limit sounds like crap. I found myself getting much better sound recording at -12dBfs RMS at the highest. Also, there's no doubt in my mind that a mastered song left at -12dBfs to -18dBfs RMS sounds way better than the same song mastered to -9dBfs RMS. But now we're pushing songs up to -5dBfs RMS and it's nothing but mush. Less distortion is better, and limiting adds distortion. The loudness war started as a way to grab attention but has since become all about numbers. "My song is louder than your song" kind of crap.

I believe you have misunderstood me. I did not intend to suggest the volume wars would sound OK with a 24 bit consumer format, I was speculating that maybe they wouldn't have happened, because maybe people would have been happy leaving their mixes at the RMS levels they mixed to, which you suggest, and I agree.





Really? Have you dissected a modern mixing console?

Well, yes, a 16 channel A&H board; it had, as I recall, 4 dual opamps per channel. But that's not exactly what I mean. In 32 bit float, you can string an arbitrary number of significant gain changes, say 8 +6dB gain changes, followed by a -48dB attenuation and get a file that nulls with the original. That is not what the opamps in a console are doing. I guess you could do that if you want to, but it isn't going to do wonders for audio quality when you needed the +48dB on the mic preamp to begin with.
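The null claim is easy to try with NumPy's 32-bit floats. A sketch, using exact powers of two, which is what makes the round trip lossless (a power-of-two multiply only touches the float's exponent); arbitrary gain factors instead leave a rounding residue near the float32 epsilon:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-0.5, 0.5, 100_000).astype(np.float32)  # stand-in audio buffer

y = x.copy()
for _ in range(8):
    y = y * np.float32(2.0)        # eight ~+6 dB boosts (exactly x2 each)
y = y * np.float32(2.0 ** -8)      # one ~-48 dB cut back to unity

# power-of-two gains only increment/decrement the exponent: bit-exact null
print(np.array_equal(x, y))        # True
```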







Not sure I follow you. I said that linear-phase EQ cancels the ripple & phase shift effect in trade for pre-echoes.

Sorry, in that case I was only referring to testing minimum phase EQ and looking for comb filter products.





UAD does pretty well, but trust me, they're not quite the real thing either.

I never supposed they were, but then there aren't too many hardware Fairchilds out there and I have nowhere to put a plate reverb . . . those who know say the more recent emulations (Neve stuff, etc) are better, but they are also vastly hungrier than the Pultec & LA2A generation. So much so that the UA forum is pretty much constant griping about processing power. I get by OK since I am keenly aware of my alternatives.





44.1KHz is not ideal. If it were, then you wouldn't need to have microprocessors generate samples in between them and try to guess what the original wave was like. Using my ears, I did some tests running a simple reverb algorithm at 44.1KHz, 88.2KHz and 176.4KHz. The outcome was cleaner at higher sample rates. I prefer 88.2KHz myself because it's easy to handle and sounds cleaner.

The data storage rate and rates necessary for processing are not necessarily analogous. That is one of the great wonders to me of some of the UAD stuff; it is upsampling to 192kHz and spitting out the original sample rate in a manner that to me seems completely transparent (ignoring any intended processing). They are quite good.







Now, even the best modern 24-bit ADCs only yield about 20-bit performance in real-world tests. You're recording at -18dBfs to preserve your headroom, so now your nominal resolution is effectively 17-bit. Critical listening tests have shown that most humans need about 120dB of dynamic range before reaching the point of diminishing returns, so you're still falling short. Bob Ludwig, a pro-analogue guy, will profess this. Bob Katz, a pro-digital guy, also admits it.

I think the best converters are approaching 22 bit. Lavry is up to 127dB dynamic range. Actually there is lots of good reading related to this discussion in his manual:

http://www.lavryengineering.com/white_papers/AD122-96MKIII_manual.pdf

But I accept a 20 bit figure as "converters I can afford". Even at that level, since quantization distortion is reduced as the signal becomes more complex, and at the level it would exist in a noisy 24 bit converter, it is likely below the converter's noise. That might not be proud, but it works.



I beg to differ. It may be subtle, but it's there. It shows itself as loss of warmth and ambience
.
.
.
Now, I've used FFT windows to analyze my tests, but you have to remember that it's very hard to measure the defects of something WITH the exact same thing. In other words, FFTs can be useful but cannot be fully trusted for showing digital abnormalities, because they themselves are digital and require massive processing, and thus add distortion to the resulting chart.

Yes, that's right. It's not that FFT cannot be "trusted", it's just that the information provided is limited by the sample size and resolution. With the processing power available these days, resolution isn't a problem. But once you increase the sample size to include enough to get that detail, the phenomenon you are seeking out will be swamped by the rest of the music.

So if all we are left with is listening tests, the impasse remains. Preferences can be strange . . . when I got my new converters, I tested them against the old (the last generation of ADAT stuff). The superiority of the new box could be easily demonstrated with basic test signals. It wasn't even questionable. And the new spec was much better too. I mean this was black and white.

With program material, the old box sounded fuzzy to me, highs smeared everywhere.

So I posted my test, and everybody kept picking the old box :confused:

I don't doubt people who pick analog over digital, but I really can't understand people who like bad digital. Good digital to me sounds crisp and tight. Bad digital doesn't sound cold to me, it sounds like a mess. Like my turntable which I'm pretty sure needs a new stylus (on the way), it renders "s" as "z".

Anyway, if there is no happy medium of a sufficiently complex test that can still be readily analyzed, how to move forward? It's hard to design a listening test. Most threads I've seen of listening tests of any kind degenerate into arguments over methodology or quality of performance. I try to stay out of that. But you seem to have objective tests you use in which I am interested, so I will wait for your files.

Referring back to my attempt to disprove Ethan Winer by showing that truncation of a real-world source results in audible quantization distortion: I couldn't do it. I could do it easily with a test signal, but not a flute, not a piano, nothing. And I really wanted to!


..."High Definition" camera that recorded full 720p and yet only resolved 200 lines. People who publish specs cheat. They do it all the time to make them appear better than the competition.

Well, I make mine as honest as I can. But then I get questions about how big a product is. I list it in inches and millimeters; I really don't know what else to do . . .



In parting, I'm reminded of a wise man who once said, "the more the words, the less the meaning, and how does that benefit anybody?"

I am pleased you found sufficient meaning in my posts to reply.
 
Here are some sample files demonstrating quantization distortion ("QD").

Up front, please listen to these on headphones, as it's easier to hear low levels, but PLEASE, the first sample is a 0dBFS peak sine wave, so calibrate that wave to 94dBSPL before putting headphones on your head! I don't want to damage anybody's hearing.

After that, I put the same tone faded out, just for a quick glimpse of QD as a signal falls.

These are all generated tones, some with noise, no real-world signals yet. It's much easier to show QD with test signals, so let's listen to it:

-56dBFS A440 sine wave
-56dBFS A440 sine wave with typical converter noise
-56dBFS A440 sine wave with very quiet converter noise
-56dBFS A440 sine wave with added overtones
-56dBFS A440 sine wave with added overtones and typical converter noise
-56dBFS A440 sine wave with added overtones and quiet converter noise

These samples only add converter noise, which is not very realistic. I'll have a much harder time keeping noise to that low level when I do the real-world test, but we want best-case noise scenarios, because that yields worst-case QD.

There are four files:

24 bit - the QD can be measured on the noiseless samples, but it's -150dBFS peak. Once quiet converter noise is added, it totally swamps the QD. I can't hear the QD anyway, and only slightly the noise (it has to compete with my headphone amp's noise!)

16 bit truncated - the QD is easy to hear, no question. Sounds bad! This is why dither is required (or thought to be)! But a few notes: QD is reduced with the noise added, even though the noise is way below the required dither. It's somewhat hard to hear the difference, but it's easy to measure. QD is also reduced with the more complex tone. It's still there in every sample though.

24 bit with dither/16 bit dither - these are the same file; the 24 bit file after dither but before truncation, and the resulting 16 bit file. If you want to save yourself a download, just download the 24 bit file and do the truncation yourself (same with the 16 bit truncated file above). Anyway, the QD is gone. It's important to note that the added noise of dither doesn't just "cover up" the QD. It couldn't, because the QD peaks were higher than the added noise. Thus you can't dither after the fact, because then you're just mixing noise into distortion. But by adding the noise before truncation, the QD is prevented; it never occurs.


Conclusions: QD is not an audible phenomenon in 24 bit, it probably doesn't even exist because of converter noise (that should also be true of 20 bit, 16 bit converters, because they should have been designed that way). It can be a problem when truncating to 16 bit if dither is not first applied. More complex signals result in less QD than simple signals. Adding noise reduces QD, even if the noise isn't enough for a proper dither.

When I do real world tests tonight, it's going to be very hard to show any QD on truncation. I've tried before and failed, but I'll give it my best shot again.

24_bit_test.wav
16_bit_trunc_test.wav
24_bit_test_dither.wav
16_bit_test_dither.wav
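These aren't my files, but the core effect is reproducible in a few lines of NumPy. A sketch (the tone is deliberately placed below one 16-bit LSB to exaggerate things, and all constants are made up for illustration): rounding to 16 bits without dither erases a sub-LSB signal entirely, while TPDF dither added before quantization preserves it as a tone buried in noise, which is exactly the "dither prevents QD rather than covering it up" point.

```python
import numpy as np

fs = 44100
t = np.arange(fs) / fs
lsb = 2.0 ** -15                                  # one 16-bit LSB (full scale 1.0)
sine = 0.4 * lsb * np.sin(2 * np.pi * 440 * t)    # a tone below half an LSB

def quantize16(x):
    """Round to the nearest 16-bit step."""
    return np.round(x * 2 ** 15) / 2 ** 15

rng = np.random.default_rng(0)
tpdf = rng.uniform(-lsb / 2, lsb / 2, fs) + rng.uniform(-lsb / 2, lsb / 2, fs)

plain = quantize16(sine)             # no dither: every sample rounds to zero
dithered = quantize16(sine + tpdf)   # TPDF dither first, then quantize

# project onto the 440 Hz tone to estimate how much of it survived
ref = np.sin(2 * np.pi * 440 * t)
amp_dith = 2 * np.dot(dithered, ref) / fs

print(np.all(plain == 0))            # True: the tone is simply gone
print(amp_dith / (0.4 * lsb))        # close to 1: the tone survives under the noise
```

The dithered file is noisier, but correlating it against the original frequency recovers essentially the full amplitude of a signal that truncation alone deleted outright.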
 
I believe you have misunderstood me.......I was speculating that maybe they wouldn't have happened, because maybe people would have been happy leaving their mixes at the RMS levels they mixed to, which you suggest, and I agree.

No, I understand. I'm just stating that the loudness war has nothing to do with resolution. Even if we had 24-bit media or SACD on every store shelf in 1990, the levels would still be where they are today. The loudness war was never about taking advantage of every bit of information; it is about trying to be louder than everybody else to catch your attention. Childish, stupid war. It's why I don't buy albums anymore. 'Scuse me, I don't buy NEW or REMASTERED albums.



I recall 4 dual opamps per channel. But that's not exactly what I mean. In 32 bit float, you can string an arbitrary number of significant gain changes, say 8 +6dB gain changes, followed by a -48dB attenuation and get a file that nulls with the original.

4 dual opamps equal 8 individual opamps (except on the pan pot of course). But every time you have a resistor, every time you have a transistor, every time you have a length of circuit board trace, you are changing the signal gain, even a little bit. Now of course if you stepped up the level to +48dB, YOU WILL get distortion. But there's over 200 places on that board where gain changes are happening per channel in minute amounts. Now +48dB in a 32-bit float world won't cause internal clipping (though the playback would be horribly clipped) but if you pulled that clip back to unity and compared it to the original, there will still be a difference. It may be small but there will be a difference. It's in the nature of limited math.
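The flip side of the exact-null case is easy to check too. A sketch (the +48dB figure is taken from the discussion; the buffer is random stand-in audio): with a gain that is not a power of two, a 32-bit float boost-and-cut round trip does leave a residue, but it sits down around the float32 mantissa limit rather than anywhere audible.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-1.0, 1.0, 100_000).astype(np.float32)

g = np.float32(10.0 ** (48 / 20))   # +48 dB: not a power of two
y = (x * g) / g                     # boost, then cut back to unity, in float32

resid = float(np.max(np.abs(x - y)))
print(resid > 0)                    # True: not a perfect null...
print(20 * np.log10(resid))         # ...but the worst error is down near -140 dB
```

So both posters are right in a sense: the difference exists, and it is on the order of one part in eight million.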



Sorry, in that case I was only referring to testing minimum phase EQ and looking for comb filter products.

OK. It's gotten to the point where it's hard to SEE comb filtering on an FFT, for reasons I mentioned, but it's impossible to completely get rid of it. Sensitive ears can hear it.



that the UA forum is pretty much constant griping about processing power.

Those people are stupid.... They beg for better simulations and complain about the consequences of them. The thing is, most of those people could get N'tracks' lousy EQ plug and love it if it said "Pultec" on it. Don't even get me started on plate reverbs. I could tell none of my digital reverbs' "Plate" settings sounded anything like a real plate even before I had one. The best digital reverb I have is so processor hungry that it won't even run in real time. It may take 5 minutes for it to run on a 3 minute song with my P4 2.4GHz processor. Needless to say, it doesn't get used much, particularly since I don't mix much stuff in my computer. For that, I have several outboard processors to handle the bulk of it, but I digress. I have a couple of impulse reverbs in my computer and tried to profile my plate with them. What they spit back to me was nothing like the real thing. But that's what you get when you send what's effectively a multi-tap digital delay/EQ to do a giant sheet-of-metal's job.



It's not that FFT cannot be "trusted"

They're not very accurate no matter what the resolution is. You can generate a sine wave at 1KHz -18dBfs, look at it in an FFT window and see all sorts of stuff you're not hearing. Some methods work better than others, though I tend to stick to Blackman myself as it seems a bit less distracting. But if the FFT added no distortion of its own, then only 1 tiny little point on the graph should be raised, but instead you get a bullet-shaped hump with ripples spanning across the spectrum. Now how can this be used to find problems that span many frequencies at much lower levels if it can't represent a 1K tone?


I agree with you on the oddities of people's preferences. I remember I was one of the first people to have 24-bit recording capabilities in my area. I had converters that could run up to 96KHz stereo and was so proud of that. A few years later, I got a new set (which still aren't great BTW) and they instantly sounded SO MUCH BETTER even at 44.1KHz. At 88.2KHz it was a whole new world. They can be cranked up to 192KHz but I really don't hear much difference unless there's a lot of processing involved, but that's a processing issue anyway. Now on the other hand, I have an acquaintance that has a 2" 24-track Saturn but he still pulls out the old ADATs (I mean the originals) once in a while because he likes the dirty, jittery 16-bit medium for some stuff. But I'm glad to hear the top converters are getting closer to 22-bit.

I personally think PCM is a mistake. The technology is here to bypass almost all our problems of filtering & quantization through DSD. The conversion to DSD is so simple: just an analogue low-pass filter followed by a comparator that gets its feed off the clock. That's it. No manipulation after the fact. The conversion back to analogue is even simpler. The raw pulse code is run through an analogue low-pass filter and you're done. Though on paper, the specs are similar to 20-bit 192KHz, there is NO DOUBT it sounds much cleaner. I think this is the next step. It's very hard to process DSD in the digital domain, but it is possible. But the conversion process is so non-intrusive due to its minimal amount of manipulation to the stream, there's no reason not to mix on an analogue board.
 
I don't mean to ignore the bulk of your post, but it seems we are largely at a point of agreement.

They're not very accurate no matter what the resolution is. You can generate a sine wave at 1KHz -18dBfs, look at it in an FFT window and see all sorts of stuff you're not hearing. Some methods work better than others, though I tend to stick to Blackman myself as it seems a bit less distracting. But if the FFT added no distortion of its own, then only 1 tiny little point on the graph should be raised, but instead you get a bullet-shaped hump with ripples spanning across the spectrum. Now how can this be used to find problems that span many frequencies at much lower levels if it can't represent a 1K tone?


OK, OK, you say distortion, I say resolution; I think it's the same thing. Yes, one must always be aware that there could be distortions below the resolution of the graph. In your example, I can't see second order distortion at less than -110dBFS. To date, that hasn't been a need of my circuits :o but I also know I can't hear it against the fundamental tone. If it were many higher-order distortions I'd be worried, but the resolution there is a little better too.

On the other hand, if you see a clear distortion on an FFT, it's there. Then it's easy to proceed to discern if it's audible.


I personally think PCM is a mistake. The technology is here to bypass almost all our problems of filtering & quantization through DSD. The conversion to DSD is so simple: just an analogue low-pass filter followed by a comparator that gets its feed off the clock. That's it. No manipulation after the fact. The conversion back to analogue is even simpler. The raw pulse code is run through an analogue low-pass filter and you're done. Though on paper, the specs are similar to 20-bit 192KHz, there is NO DOUBT it sounds much cleaner. I think this is the next step. It's very hard to process DSD in the digital domain, but it is possible. But the conversion process is so non-intrusive due to its minimal amount of manipulation to the stream, there's no reason not to mix on an analogue board.

If you read Bruno Putzeys' board on PSW, he talks often about decimation. According to Putzeys, a sufficiently well-designed routine should be able to convert from DSD to PCM and back without loss. Further, he feels that commercial decimation routines may often fall short due to insufficient processing power devoted to the function. Of course he has a vested interest in his own routines, but his arguments make sense.

In that case, short of buying the really expensive boxes that feature such routines, it may be better to record DSD and let offline processing perform the conversion to PCM. The difficulties of DSD DSP apparently make it much less practical than this approach, which really could be commercially available now.

http://recforums.prosoundweb.com/index.php/t/22849/0/
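The modulate-then-decimate idea is testable in miniature. Below is a toy first-order sigma-delta modulator and decimator in NumPy (a sketch only: real DSD uses much higher-order modulators and far better decimation filters, and every constant here is made up for illustration). It one-bit-encodes a 1kHz tone at 64x oversampling, then a windowed-sinc lowpass and a 64:1 decimation recover a reasonable PCM version:

```python
import numpy as np

fs_pcm, osr = 44100, 64              # DSD64-style oversampling ratio
fs = fs_pcm * osr
t = np.arange(int(fs * 0.05)) / fs
x = 0.5 * np.sin(2 * np.pi * 1000 * t)

# first-order sigma-delta: integrator feeding a 1-bit comparator, fed back
bits = np.empty(len(x))
integ, fb = 0.0, 0.0
for i, s in enumerate(x):
    integ += s - fb
    fb = 1.0 if integ >= 0.0 else -1.0
    bits[i] = fb

# decimation: lowpass the bitstream at the PCM Nyquist, keep every 64th sample
taps = 8 * osr + 1
m = np.arange(taps) - taps // 2
h = np.sinc(m / osr) * np.hamming(taps)
pcm = np.convolve(bits, h / h.sum(), mode="same")[::osr]

ideal = 0.5 * np.sin(2 * np.pi * 1000 * np.arange(len(pcm)) / fs_pcm)
err = np.max(np.abs((pcm - ideal)[30:-30]))   # skip the filter's edge transients
print(err)  # small: the raw 1-bit stream decimates back to a usable PCM tone
```

Even this crude first-order loop lands within a few percent of the ideal samples; Putzeys' point is that with serious modulators and serious decimation filters, the conversion can be made transparent.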
 
A couple of comments...

Someone said something about artifacts when digitizing a 1kHz square wave. Yep, you'll get 'em in spades because the waveform has harmonics out to infinity Hz. So depending on how well your AD handles freqs in the antialiasing neighborhood, you'll see stuff at the higher freqs. You'd also see changes to a square wave, say from a signal generator with a 100 kHz bandwidth, run through an audio analog device designed for 20 kHz.

Someone else said that a 10 kHz digitized sinewave looks like a series of steps. This is not true. The digitized values are discrete in time, i.e., the sample's time width is zero. An amplitude at an instant. That's the mathematical meaning. You can't connect the dots that are the samples.

Once a signal is in the digital domain, you can do all kinds of manipulation on it and then do the opposite and get back exactly what you put in. There is nothing in the analog domain that does that.

Furthermore, a 44.1 kHz sampling rate can exactly reproduce any waveform with exact accuracy as long as that waveform has zero content above the Nyquist frequency. Period. Zero error, zero distortion.
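That claim is checkable numerically. A sketch of Whittaker-Shannon (sinc) reconstruction in NumPy, with a made-up two-tone band-limited signal: sample it at 44.1kHz, then reconstruct values *between* the samples; away from the edges of the finite window the error is tiny, and an infinite window would drive it to exactly zero, as the theorem says.

```python
import numpy as np

fs, n = 44100, 2000
k = np.arange(n)

# band-limited test signal: two tones, both safely below Nyquist (22.05 kHz)
def sig(t):
    return np.sin(2 * np.pi * 1000 * t) + 0.5 * np.sin(2 * np.pi * 9500 * t)

samples = sig(k / fs)

# Whittaker-Shannon: x(t) = sum_k x[k] * sinc(fs*t - k)
def reconstruct(t_out):
    return np.array([np.sum(samples * np.sinc(fs * tt - k)) for tt in t_out])

t_mid = (np.arange(800, 1200) + 0.5) / fs   # halfway between samples, mid-window
err = np.max(np.abs(reconstruct(t_mid) - sig(t_mid)))
print(err)  # small; only truncating the infinite sum keeps it from being zero
```

Note the points being reconstructed were never stored; they fall exactly halfway between samples, yet the formula recovers them.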

But the sounds we record are not digital, and the part of the process most fraught with peril is getting an accurate level at the specific instant we need it, and getting rid of information above the Nyquist frequency. So we lose stuff going into the digital domain. Next we have to convert the data back to the analog domain. That's much harder to do than it sounds.

You can easily make digital equalizers with zero phase shift by using the filtfilt command in Matlab. It works by running the signal through the filter forwards then backwards.
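MATLAB's filtfilt has a direct SciPy equivalent, so the forward-backward trick is easy to see (a sketch; `scipy.signal.filtfilt` runs the filter forward then backward, which cancels the phase response at the cost of being an offline, non-causal process; the pulse and cutoff are made-up illustration values):

```python
import numpy as np
from scipy.signal import butter, lfilter, filtfilt

n = np.arange(4096)
pulse = np.exp(-0.5 * ((n - 2048) / 60.0) ** 2)  # symmetric pulse centred at 2048

b, a = butter(4, 0.1)              # 4th-order lowpass, cutoff at 0.1x Nyquist
causal = lfilter(b, a, pulse)      # one pass: group delay shifts the peak later
zerophase = filtfilt(b, a, pulse)  # forward + backward: no net phase shift

print(int(np.argmax(causal)))      # later than 2048
print(int(np.argmax(zerophase)))   # still 2048
```

The zero-phase version leaves the pulse centred where it started, which is precisely why this only works offline: the backward pass needs the future of the signal.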

An FFT of a 1 kHz sine wave will have one value in the 1 kHz frequency bin and be zero everywhere else. However, in real life the thing doing the FFT and the source probably don't agree on what's 1 kHz, so there is smearing of energy into lots of frequency bins. That's because the FFT math assumes that the record you're analyzing repeats from time = minus infinity to infinity. So if the ends of the record aren't continuous, you get a discontinuity, and discontinuities have broad frequency content. So we use a window that artificially forces the ends of the record to zero. But doing that throws away information, so you can bias the window to keep what you're interested in. If you need good amplitude resolution you use a flattop window; for good frequency resolution, a Hanning.
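The smearing described above is easy to demonstrate. A NumPy sketch (1kHz is deliberately chosen to fall between bins of a 4096-point FFT at 44.1kHz, so leakage is near its worst; the 50-bin exclusion zone and `skirt_db` helper are made-up illustration choices): a rectangular window spreads the tone across the whole spectrum, while a Blackman window buys much lower skirts in exchange for a wider main lobe.

```python
import numpy as np

fs, n = 44100, 4096
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 1000 * t)  # 1 kHz falls *between* bins (fs/n ~ 10.77 Hz)

def skirt_db(spec):
    """Highest bin more than 50 bins from the peak, relative to the peak."""
    k = int(spec.argmax())
    far = np.delete(spec, np.arange(max(k - 50, 0), k + 51))
    return 20 * np.log10(far.max() / spec.max())

rect = skirt_db(np.abs(np.fft.rfft(x)))                    # no window
blackman = skirt_db(np.abs(np.fft.rfft(x * np.blackman(n))))

print(rect, blackman)  # the Blackman skirt sits far below the rectangular one
```

That raised rectangular skirt is the "bullet-shaped hump with ripples" described earlier in the thread: it is a property of the finite, misaligned record, not distortion added by the FFT itself.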

The digital realm is mathematically perfect. Getting stuff into it and back out is not.

And msh is one of the smartest and most gracious people I know.
 
My Bwain hurts on these threads

Analogue Digital vs Digital Analogue

If it works, stick with it. This seems to me like flogging a dead horse :p

With these threads :rolleyes:
 
Well, the challenge was there for a "momentous" 5000th post and I think it was achieved :D.

:cool:
 
Sh!tf#st, more like.

Naw, just kidding. It's been a very informative and interesting post, despite talking o'er my head quite a bit,... but I've followed most of it. Theory is fine, but no real world system is free of flaws, however coarse, fine or measurable they may be.

Hey, I like analog for what it is despite all of its inherent real world flaws, and I make really swell sounding LPs using tape.:eek:;)

(Just kidding about making actual LPs, but just euphemistically referring to my collection of recordings as "albums". My analog stuff's mixed down to CD and MP3 like e'one else's).

Not to mention I own and occasionally use some standalone porta-format DAWs, and they're alright. Nothing to go crazy about the difference or write home about, but I personally feel I've yielded sonically more pleasing recordings as an end result of using analog vs. digital using essentially the same techniques, tho' as always YMMV.

I'll admit that much of my preference for analog is its simplicity and work flow, and my general disdain of menus and gadgetiness in recording. Again, YMMV:eek:;)
 
Well, it's all fine and dandy for people (insert any name you like: Lavry, msh, etc., etc.) to experiment; they can run sine waves till they die, BUT in real-world applications it probably counts for very little.

Ignoring any early (60's) recordings I did to tape and later attempts with 4 track cassette units, I really started "serious" attempts with 16 trk 16bit/44.1/48 h/disc based recorders through my S/craft console and even now there is absolutely nothing to complain about the recordings from that system. I went analogue because of h/drive issues............I lost one complete song due to some form of "corruption" and in discussing transferring the remaining/surviving songs/tracks to a friend's 1" 16trk R2R, I became aware of a similar deck that had been decommissioned at a local radio station, so I purchased it. What was almost immediately apparent to friends that heard it, was that the R2R imparted a "mystical" something (both to the transferred material and new recordings) that just "wasn't there" in the digital realm............it was subtle, almost imperceptible, BUT there was something. My current goal once the studio rebuild is complete is to have my R2R and h/disc recorders synched, giving me not only an additional number of available tracks but also what I consider to be the best of both worlds........non-critical stuff will go to digital, the more "important" sources will go to R2R.

Now msh and others can read and quote as many "experts" as they like, but until they put up samples of genuine "real world" musical material, their "contributions" will retain the appearance of a lot of ego-stroked hot air propelled by questionable motives.

:cool:
 
Indeed a "momentous" 5000th post, ausrock, or perhaps I was just trying too damn hard to please!:eek::p:D

Still, despite it all, as Dave (A Reel Person) pointed out, I too found it [the thread] informative and fascinating, from BOTH the analog AND digital camp. Despite our differences, I thank everyone (including mshilarious) for a rather civil and, IMHO, the best thread on the subject. This should be a sticky so that we never have to do this all over again....:eek:;) Too much good info here to let it go over the side, and people can always add on...

Again, I wish to thank everyone (sorry if I don't include your name), who participated and especially our new member wado1942, who got us all in this mess! Naaa just KIDDING!!!!! :D:D:D

-------
 
Nah!

Nah, that was the best part, when the original author stepped in and posted.

It's truly great, informative, etc., and how often does that really happen?!?:eek:;)
 
but until they put up samples of genuine "real world" musical material, their "contributions" will retain the appearance of a lot of ego-stroked hot air propelled by questionable motives.

ausrock, posting samples of 'real world' musical material won't matter because, no matter how it was recorded (whether on a Studer or a briefcase-sized digital portastudio, DAW or whatever) and no matter the sample rate or tape speed used, it is still going to end up being sampled to be heard [via posted links to digital audio, for example], effectively rendering the comparison useless. Sure, one will be able to hear the telltale characteristics of tape and digital, but for the most part they're not going to sound dramatically different [using the same recording chain and technique].

The only method, IMHO, is to actually get people to listen straight from the source, which means no mp3's, wav's or anything of that sort. It means listening straight from the master tape or DAW, right there where it was recorded, under similar circumstances. The latter, obviously, isn't realistic, and I, personally, would not want to participate in another comparative study of the formats. Otherwise, they all end up getting reduced to the same bits and pieces, losing most of their original character and resolution.

For example, someone posts an mp3 or wav of two album cuts [from the same source], originally on LP and CD. Both end up being reduced to digital, sampled and, in the worst-case scenario, mp3. Sure, you hear characteristics of the LP and it sounds different than the CD, BUT the LP completely loses resolution and thus cannot be experienced with its original 'feel' or enjoyment. Both the LP and the CD now share the same sample rate and bit depth. The only way to compare, to fully realize each medium, is to listen straight from the turntable / digital source, using the same sound system.
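The formal half of that argument can be sketched in a few lines of Python. This is only an illustration under assumed conditions: the filenames are hypothetical, and a sine tone stands in for real program material. It shows that once an "LP transfer" and a "CD rip" are both bounced to 16-bit/44.1 kHz PCM for posting, their container parameters are identical; only the audio content itself can differ.

```python
# Minimal sketch: two posted comparison files share the same sample rate
# and bit depth, whatever medium they originally came from.
import math
import struct
import wave

def write_tone(path, freq):
    """Write one second of a sine tone as 16-bit / 44.1 kHz mono PCM."""
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)        # 2 bytes per sample = 16-bit
        w.setframerate(44100)    # CD-standard sample rate
        frames = b"".join(
            struct.pack("<h",
                        int(16000 * math.sin(2 * math.pi * freq * n / 44100)))
            for n in range(44100))
        w.writeframes(frames)

# Hypothetical stand-ins for an LP transfer and a CD rip of the same cut:
write_tone("lp_transfer.wav", 440.0)
write_tone("cd_rip.wav", 440.0)

with wave.open("lp_transfer.wav") as a, wave.open("cd_rip.wav") as b:
    # Channel count, sample width, and frame rate are now identical --
    # both media have been reduced to the same sample rate and bit depth.
    print(a.getparams()[:3] == b.getparams()[:3])  # prints: True
```

Of course, this only formalizes the container side of the point; the audible character the post describes lives in the samples themselves, which is exactly what gets flattened by the transfer.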

-----
 
A guy...

A guy who dezignz plugz would expectedly have certain opinionz about digitalz superioritiez.:eek:;)

I don't think the A/D/D/A debate will ever die.:eek::eek:;)
 
I agree, Wado's participation was probably the most balanced, unbiased input here and is greatly appreciated.

Considering the fact that most here aren't totally "down on" digital, maybe it's time that some of those who display a digital bias, instead of persisting in interjecting, took their "knowledge" out to educate the masses who seriously believe that digital is the "be all and end all" of recording... teach them that digital has its shortcomings and how best to deal with them, teach them that ProTools really isn't that "pro", and so on... teach them that everything has its limitations and that there is a place for everything.

:cool:
 
ausrock, posting samples of 'real world' musical material won't matter because, no matter how it was recorded, whether on a Studer or a briefcase-sized digital portastudio, DAW or whatever... it is still going to end up being sampled to be heard, effectively rendering the comparison useless. ... The latter, obviously, isn't realistic, and I, personally, would not want to participate in another comparative study of the formats.

-----

Dan,

I was well aware of that fact when I posted.....I just felt that it had to be said ;):D
 