wado1942
New member
OK, that's a lot of stuff to read. But first things first.
Ethan, I really wish a consumer 24-bit format had caught on. It might have alleviated the volume wars, for one.
That's complete and utter bull. I thought the same myself at first, back when I was recording 16-bit. All the sources I saw at the time said to record as hot as possible to minimise quantization distortion. Yet I quickly learned that ANY format pushed to its limit sounds like crap. I found myself getting much better sound recording at -12dBFS RMS at the hottest. Also, there's no doubt in my mind that a mastered song left at -12dBFS to -18dBFS RMS sounds way better than the same song mastered to -9dBFS RMS. But now we're pushing songs up to -5dBFS RMS and it's nothing but mush. Less distortion is better, and limiting adds distortion. The loudness war started as a way to grab attention but has since become all about numbers. "My song is louder than your song" kind of crap.
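If you want to check where your own mixes sit, here's a bare-bones RMS meter sketch in Python (my own illustration, with a made-up file name; a proper loudness measurement would use something like ITU-R BS.1770 instead of plain RMS):

```python
# Minimal RMS-in-dBFS meter: a sketch, not a loudness standard.
import numpy as np
import soundfile as sf  # any WAV reader works; this one is assumed installed

def rms_dbfs(samples: np.ndarray) -> float:
    """RMS level of a float signal (full scale = 1.0), in dBFS."""
    rms = np.sqrt(np.mean(np.square(samples)))
    return 20.0 * np.log10(max(rms, 1e-12))  # guard against log(0) on silence

audio, rate = sf.read("master.wav")  # hypothetical file name
if audio.ndim > 1:
    audio = audio.mean(axis=1)       # fold to mono for a single figure
print(f"{rms_dbfs(audio):.1f} dBFS RMS")
```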
In the real analog world, you can't stack up an arbitrary number of offsetting gain changes without suffering a severe penalty in terms of noise or headroom. Hence, no one works that way, whether analog or digital.
Really? Have you dissected a modern mixing console? Any given channel may contain as many as 6-10 opamps just to control gain. Each opamp typically has 20 transistors in it, which adds up to about 200 gain-controlling semiconductors per channel. And then there are the resistors and other parts tying everything together, which change gain further. Some lower-end stand-alone "tube" preamps often have 4-6 opamps in and of themselves doing the work. Of course, this sort of equipment doesn't generally sound all that great. The legendary consoles of old have minimal gain stages and sound better for it. And of course, if you had 200+ gain changes in a single digital channel, the sound WILL degrade. If you're using 48-bit resolution, the degradation will be minimal, but even in 32-bit float, that's best avoided. This just reinforces my standing theory that the less you need to manipulate the sound, whether analogue or digital, the better.
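To show what I mean about stacked gain changes, here's a toy sketch I put together (nobody's actual mix engine, just an illustration): 200 nominally offsetting gain stages, with and without requantization between them.

```python
# Sketch: error accumulated by 200 offsetting gain changes at different
# precisions. Illustrative only; real consoles and DAWs differ in detail.
import numpy as np

rng = np.random.default_rng(0)
ref = rng.uniform(-0.5, 0.5, 48000)           # float64 reference signal

def run_gains(x, dtype, quantize_bits=None):
    up = dtype(10 ** (3 / 20))                # +3 dB
    down = dtype(1.0) / up                    # -3 dB (nominally offsetting)
    y = x.astype(dtype)
    for _ in range(100):                      # 100 boost/cut pairs = 200 stages
        y = y * up
        y = y * down
        if quantize_bits:                     # emulate fixed-point requantizing
            scale = 2 ** (quantize_bits - 1)
            y = np.round(y * scale) / scale
    return np.asarray(y, dtype=np.float64)

for label, out in [
    ("16-bit requantized", run_gains(ref, np.float64, quantize_bits=16)),
    ("32-bit float",       run_gains(ref, np.float32)),
]:
    err_db = 20 * np.log10(np.max(np.abs(out - ref)) + 1e-300)
    print(f"{label}: worst-case error {err_db:.0f} dBFS")
```

The 16-bit path rounds after every stage and the error stacks up; the float path degrades far more slowly, which is why even in 32-bit float it's still "best avoided" rather than free.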
You seem to be implying all minimum-phase digital EQs will generate the former and not the latter desired result, but that is something I find very difficult to measure.
Not sure I follow you. I said that linear-phase EQ cancels the ripple and phase-shift effects in exchange for pre-echoes.
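That trade isn't mysterious, by the way. A linear-phase EQ is a symmetric FIR filter, and a symmetric FIR puts half its ringing BEFORE the main impulse. Here's a quick sketch with scipy (an illustration, not any particular plugin):

```python
# Sketch: a linear-phase (symmetric) FIR rings before its main impulse,
# which is exactly the pre-echo trade-off of linear-phase EQ.
import numpy as np
from scipy.signal import firwin

taps = firwin(511, cutoff=1000, fs=44100)    # linear-phase low-pass FIR
center = int(np.argmax(np.abs(taps)))        # main impulse sits mid-filter
pre = taps[:center]                          # everything BEFORE the impulse
print(f"main impulse at tap {center} of {len(taps)}")
print(f"energy before the impulse: {100 * np.sum(pre**2) / np.sum(taps**2):.0f}%")
```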
Well, I don't mind the UAD stuff
UAD does pretty well, but trust me, they're not quite the real thing either.
(2kHz square generates aliasing)
Is this a problem with the digital theory or a particular converter's implementation?
Both. I'll send you an example in a day or two.
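In the meantime, here's a rough sketch of the sort of test I mean (Python just for illustration; the real test goes through actual converters): generate a 2kHz square naively at 44.1kHz and look for energy that isn't an odd harmonic.

```python
# Sketch: why a naively generated 2 kHz square wave aliases at 44.1 kHz.
# Harmonics above Nyquist (22.05 kHz) fold back as inharmonic tones.
import numpy as np

fs, f0 = 44100, 2000
t = np.arange(fs) / fs
naive = np.sign(np.sin(2 * np.pi * f0 * t))   # infinite harmonics, no band-limiting

spectrum = np.abs(np.fft.rfft(naive * np.hanning(len(naive))))
freqs = np.fft.rfftfreq(len(naive), 1 / fs)
# True square-wave harmonics are odd multiples of 2 kHz; anything else is aliasing.
harmonics = {f0 * k for k in range(1, 12, 2)}
peaks = freqs[spectrum > spectrum.max() / 1000]          # bins above -60 dB
aliases = [f for f in peaks if min(abs(f - h) for h in harmonics) > 50]
print(f"{len(aliases)} bins of folded (alias) energy found")
```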
As I said earlier, designers do use such complex test signals. I would enjoy seeing your test methodology, since it could be useful not only for evaluating digital theory but different brands of converters.
Good to hear the serious guys are getting better; the low-end stuff is pretty abysmal. My test methodology mostly consists of contemplating an idea, recording it, listening to it, and seeing what my ears tell me. I remember Rupert Neve commenting on a piece of digital equipment he was being shown that boasted a 120dB dynamic range. He said, "No, it's no more than 100dB." "What test equipment did you use?" "My ears. You turn down the signal and it falls apart; you don't need test equipment to tell that." (I'm paraphrasing.)
However, 44.1kHz is an adequate if not ideal data rate. Lavry argues for a minimum 60kHz rate, but no more than 96kHz. I have tested his theories and arrived at his result.
44.1kHz is not ideal. If it were, you wouldn't need microprocessors generating samples in between the recorded ones, trying to guess what the original wave looked like. Using my ears, I did some tests running a simple reverb algorithm at 44.1kHz, 88.2kHz and 176.4kHz. The outcome was cleaner at the higher sample rates. I prefer 88.2kHz myself because it's easy to handle and sounds cleaner.
Almost all the higher-end plugins double the sample rate to do their internal processing behind the scenes, and DACs typically oversample by at least 8x before the delta-sigma conversion stage. If 44.1kHz were ideal, none of that would be necessary.
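Here's a toy version of the oversampling argument (my own sketch, not any plugin's actual engine): run a soft clipper on a 5kHz tone at 1x and at 2x the rate, and compare the alias energy left in the audio band.

```python
# Sketch: a nonlinear process (soft clip) run at 1x vs 2x sample rate.
# At 2x, harmonics above 22.05 kHz get filtered out on the way back down
# instead of folding into the audio band.
import numpy as np
from scipy.signal import resample_poly

fs, f = 44100, 5000
t = np.arange(fs) / fs
tone = 1.5 * np.sin(2 * np.pi * f * t)
clip = np.tanh                                   # memoryless soft clipper

def alias_energy(y):
    spec = np.abs(np.fft.rfft(y * np.hanning(len(y))))
    freqs = np.fft.rfftfreq(len(y), 1 / fs)
    mask = np.ones_like(spec, bool)
    for h in f * np.arange(1, 5):                # ignore true harmonics
        mask &= np.abs(freqs - h) > 100
    return 10 * np.log10(np.sum(spec[mask] ** 2) / np.sum(spec ** 2))

direct = clip(tone)                              # clipped at 44.1 kHz
up = resample_poly(tone, 2, 1)                   # process at 88.2 kHz...
oversampled = resample_poly(clip(up), 1, 2)      # ...then back down

print(f"1x alias energy: {alias_energy(direct):.1f} dB")
print(f"2x alias energy: {alias_energy(oversampled):.1f} dB")
```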
You can say that -54dBFS is required for ambience, and that's true. But the distortion is something like -50dB below the signal, which puts it in the -104dBFS range, and thus extremely difficult to hear. If you turned up the -54dBFS peak 16-bit signal to -6dBFS, you'd have the same result as your 8-bit example.
Your math is flawed. But at any rate, I wasn't talking about recording a nominal level at -54dBFS. I was talking about recording at the standard -18dBFS and finding that detail which once lived at -54dBFS goes missing. Now, even the best modern 24-bit ADCs only yield about 20-bit performance in real-world tests. You're recording at -18dBFS to preserve your headroom, so your nominal resolution is effectively 17-bit. Critical listening tests have shown that most humans need around 120dB of dynamic range to reach the point of diminishing returns, so you're still falling short. Bob Ludwig, a pro-analogue guy, will profess this. Bob Katz, a pro-digital guy, also admits it.
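To spell out the arithmetic (back-of-envelope, using the round figure of about 6dB of dynamic range per bit):

```python
# Back-of-envelope math behind the "17-bit" claim: each bit is worth
# roughly 6.02 dB of dynamic range, and headroom comes off the top.
adc_effective_bits = 20          # real-world performance of a 24-bit ADC
headroom_db = 18                 # recording at -18 dBFS nominal
db_per_bit = 6.02                # 20 * log10(2)

usable_db = adc_effective_bits * db_per_bit - headroom_db
print(f"usable dynamic range: {usable_db:.0f} dB "
      f"(~{usable_db / db_per_bit:.0f} bits)")   # ~102 dB, ~17 bits
```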
While we're on the topic of dither (whether as a process or Ethan's real-world view), it prevents this type of distortion from occurring when bit depth is reduced, so that quantization distortion does not occur at levels audible above noise.
This holds true until you run any kind of process. But even dither has its limitations. You can use dither to turn a 96dB S/N ratio into an 80dB one and, in exchange, turn that 80dB of usable dynamic range into 96dB. But it seems to get less effective at higher frequencies, again demanding higher sample rates to minimise distortion. I'm exaggerating, of course, but you catch my drift.
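Here's a little sketch of the trade-off (illustrative Python, quantizing a -90dBFS tone to 16 bits with and without TPDF dither):

```python
# Sketch of what dither buys you: without it, the quantization error of a
# quiet tone is correlated distortion (harmonics); with it, plain noise.
import numpy as np

fs = 44100
t = np.arange(fs) / fs
tone = 10 ** (-90 / 20) * np.sin(2 * np.pi * 1000 * t)   # -90 dBFS sine
lsb = 2 ** -15                                           # 16-bit step size

def quantize(x, dither=False):
    if dither:  # TPDF dither: sum of two uniform randoms, +/- 1 LSB wide
        x = x + (np.random.rand(len(x)) - np.random.rand(len(x))) * lsb
    return np.round(x / lsb) * lsb

for label, q in [("undithered", quantize(tone)),
                 ("TPDF dithered", quantize(tone, dither=True))]:
    spec = 20 * np.log10(np.abs(np.fft.rfft(q * np.hanning(fs))) + 1e-20)
    print(f"{label}: 3rd-harmonic bin level {spec[3000]:.0f} dB")
```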
it's incredibly difficult if not impossible to generate audible quantization distortion on an actual recording.
I beg to differ. It may be subtle, but it's there. It shows itself as a loss of warmth and ambience.
But I accept all manufacturers' specs unless I know them to be unreliable. To date, that has only happened with one (rather new) manufacturer, and they don't make a recorder.
Really? My Otari claims an S/N ratio of something like 75dB, but that's only if you record at 1,200nW/m on tape rated for 250nW/m. My portable flash recorder's manual says you should record as loud as possible without the clip indicator lighting. I just tested a "High Definition" camera that recorded full 720p and yet only resolved 200 lines. People who publish specs cheat. They do it all the time to make their gear appear better than the competition's.
Like earlier when you talked about hitting tape really hot to increase its dynamic range
I never said that. I record on GP9 at 400nW/m, which is below the typical level. By recording at a lower level, I preserve the headroom of the tape, which increases the dynamic range. Recording hotter would crush the peaks, reducing the crest factor of the music. Of course, recording lower still would lose more detail into the noise floor. Though I use a modded version of the IEC EQ curve with a little emphasis at 10kHz to hide noise and increase headroom, I'm very picky about operating levels.
Now, to answer your question about the "Brothers in Arms" album: there was a lot that was different about the new mix. But the cold, hard sound of old digital is almost gone, thanks to working at significantly higher resolutions than were available originally. I remember the first time I heard a digital recording that I thought sounded great. I later learned it was a RADAR using 96kHz converters, still mixed on an analogue console. I often point out to my wife which recordings were done in Pro Tools when we're listening to the radio. She asks how I know, and I can only answer, "because it's gritty sounding." On paper there's nothing to prove what I claim, but I'm right about 80% of the time. Even though PT is very high resolution, mixing digitally still sounds bad to me.
Now, I've used FFT windows to analyze my tests, but you have to remember that it's very hard to measure the defects of something WITH the exact same kind of thing. In other words, FFTs can be useful, but they can't be fully trusted for showing digital abnormalities, because they themselves are digital and require massive processing, which adds its own artifacts to the resulting chart.
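As one example of what I mean, the very same pure tone can show a wildly different "noise floor" depending on which analysis window you pick (a quick sketch, not a knock on any particular analyzer):

```python
# Sketch: the displayed "noise floor" of an FFT depends on the analysis
# window, so the chart reflects the measurement as much as the signal.
import numpy as np

n, fs = 65536, 44100
t = np.arange(n) / fs
tone = np.sin(2 * np.pi * 1000.5 * t)     # deliberately off-bin frequency

for name, win in [("rectangular", np.ones(n)), ("Hann", np.hanning(n))]:
    spec = np.abs(np.fft.rfft(tone * win))
    spec /= spec.max()
    far = np.fft.rfftfreq(n, 1 / fs) > 6000   # look well away from the tone
    print(f"{name}: apparent floor {20 * np.log10(spec[far].max()):.0f} dB")
```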
In parting, I'm reminded of a wise man who once said, "The more the words, the less the meaning, and how does that benefit anybody?"