Not another debate thread - Just a question

To dredge up this old thread again ... a funny update.

I subscribe to a channel called The Great Courses on Amazon Prime. It's filled with tons of awesome courses on all kinds of topics. A course might have 40 lectures in it, and each lecture is about 30 to 40 minutes long. I usually pick a topic and work my way through the lectures --- one a day --- during my lunch break.

I'm watching one on quantum mechanics right now. :)

I still have trouble with this "analog is a continuous wave" concept, especially after viewing this video:

D/A and A/D | Digital Show and Tell (Monty Montgomery @ xiph.org) - YouTube

Which seems to suggest that digital audio "sampling" does not work the way most people think it does. Granted, I know very little about how digital audio works (and I apparently know very little about how analog audio works too), but I found the video---and the demonstrations in it---very engaging and interesting.

Maybe some of you will not agree with it or find fault with it? It's beyond my pay grade, so I can't comment on it. I did find the demonstrations with the oscilloscope (GD -- took me about 15 tries to spell that word correctly) very compelling, though. The fact that the comments are disabled on this video leads me to believe that there was likely some serious ugly debate going on.

Anyway, I'm only on lecture 6 in the quantum mechanics course, but so far it's been made clear that---given our understanding at the current time---the universe is essentially "quantized" (hence the "quanta" in quantum mechanics), meaning that energy comes in discrete, indivisible "packets." And so this would seem to suggest that there is no such thing as a "pure continuous wave."

BUT, there's also this concept called wave-particle duality. And it says that everything (every particle)---whether you're talking about light or matter---acts as both a particle and a wave, depending on when/how you look at it. (I'm paraphrasing horribly here.) And so that muddies the waters a bit more. I'm more confused now than ever! :)

By the way, if you want to know where I got my idea of particles on tape resembling the "analogue" of a sound wave, you can blame it on Peter McIan and his book Using Your Portable Studio. You can see in this excerpt how his simplification of the process led to my misunderstanding.

Plus, I'd always heard that phonographs and/or wire recorders essentially worked in a similar fashion --- i.e., when "recording," the needle actually etched an analogue of the sound wave into the wax cylinder. On playback, this process was essentially reversed. (Again, an extreme simplification.) So I kind of thought the same thing was essentially happening with magnetic tape.
 

Attachments

  • Using Your Portable Studio - excerpt.pdf
famous beagle said:
I still have trouble with this "analog is a continuous wave" concept, especially after viewing this video:

D/A and A/D | Digital Show and Tell (Monty Montgomery @ xiph.org) - YouTube

Which seems to suggest that digital audio "sampling" does not work the way most people think it does. Granted, I know very little about how digital audio works (and I apparently know very little about how analog audio works too), but I found the video---and the demonstrations in it---very engaging and interesting.

Maybe some of you will not agree with it or find fault with it? It's beyond my pay grade, so I can't comment on it. I did find the demonstrations with the oscilloscope (GD -- took me about 15 tries to spell that word correctly) very compelling, though. The fact that the comments are disabled on this video leads me to believe that there was likely some serious ugly debate going on.

The problem with that video is that Monty is a developer (Ogg Vorbis) so people might assume he's speaking from a position of authority. He is, sort of. His demonstrations are effective, but his descriptions can be misleading, especially concerning dither. At around 11:30 in the video he begins to explain quantization and dither. His demonstration is effective and more or less correct, but then at around 16:45 he says no one ever ruined a great recording by not dithering the final master. It's a contradiction of sorts. He shows you exactly what dither does, and from there we can make an argument on why it's necessary, but he goes the other way.

That's a serious red flag for me. It shows that even developers don't always agree with Claude Shannon on how DSP works.

What Monty's video demonstrates well enough is that low level signals have quantization error. A sine wave quantized with only one or two bits will literally turn into a square wave and generate a massive spew of harmonic distortion correlated to the signal. Dither makes the LSB fire at random, rather than even trying to give you a massively inaccurate representation of the original signal. Three side effects of that: it adds noise not correlated to the signal, it eliminates harmonic distortion that is correlated to the signal, and inside the noise generated by dither we can hear the original signal without distortion. Look at the part of the video where he shows a sine wave "at 1/4 bit". There is no such thing as "1/4 bit" in fixed point. It's a zero or a one. Without dither, the signal is chopped out. Dither preserves low level information.
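
If anyone wants to poke at this themselves, here's a minimal numpy sketch (my own construction, not from Monty's video) of the "1/4 bit" point: a sine whose peak is only a quarter of one 16-bit LSB truncates to pure silence, but with plain TPDF dither the sine survives inside the noise.

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs
lsb = 1.0 / 32768                                  # one 16-bit LSB (full scale = +/-1.0)
x = 0.25 * lsb * np.sin(2 * np.pi * 1000 * t)      # a sine at "1/4 bit"

def quantize(sig, dither=False):
    d = 0.0
    if dither:
        # plain TPDF dither: difference of two uniform values, +/-1 LSB peak
        d = (np.random.rand(len(sig)) - np.random.rand(len(sig))) * lsb
    return np.round((sig + d) / lsb) * lsb

trunc = quantize(x)                # every sample rounds to zero: digital black
dith = quantize(x, dither=True)    # noisy, but the sine is still in there

print("undithered output is all zeros:", bool(np.all(trunc == 0)))
print("dithered output correlates with the original:",
      round(float(np.dot(dith, x) / (np.linalg.norm(dith) * np.linalg.norm(x))), 2))
```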

To avoid truncation distortion, dither should be applied every time the audio is quantized to fixed point. This means the output of ADCs (sometimes beyond our control), the output of certain plugins (beyond our control, unless we're writing plugin code), and the output of a typical DAW running in floating point to a fixed-point render or DAC (hardware inserts, monitoring, fixed-point renders).

If we allow signals to truncate without dithering, the quantization distortion becomes signal and can't be removed after the fact. Dither needs to be applied right before quantization. It's a choice between noise and distortion. The noise can be relatively harmless to outright beneficial (plain jane TPDF dither, no noise shaping), while the distortion is cumulative and can have an effect on subsequent processing.

YouTube, iTunes, Spotify, end users who want to make MP3s, and devices with digital volume controls are examples of subsequent processing.
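
As a rough illustration of the render/DAC point above, here's a short sketch of what "dither right before quantization" might look like when a floating-point mix bus is written out to 16-bit fixed point. The function and names are mine, not any particular DAW's implementation.

```python
import numpy as np

def render_to_int16(float_mix, dither=True):
    """float_mix: float samples on a nominal -1.0..+1.0 floating-point bus."""
    lsb = 1.0 / 32768
    y = np.asarray(float_mix, dtype=np.float64)
    if dither:
        # plain TPDF dither, +/-1 LSB peak, added as the very last step before rounding
        y = y + (np.random.rand(len(y)) - np.random.rand(len(y))) * lsb
    return np.clip(np.round(y * 32768), -32768, 32767).astype(np.int16)
```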
 
The problem with that video is that Monty is a developer (Ogg Vorbis) so people might assume he's speaking from a position of authority. He is, sort of. His demonstrations are effective, but his descriptions can be misleading, especially concerning dither. At around 11:30 in the video he begins to explain quantization and dither. His demonstration is effective and more or less correct, but then at around 16:45 he says no one ever ruined a great recording by not dithering the final master. It's a contradiction of sorts. He shows you exactly what dither does, and from there we can make an argument on why it's necessary, but he goes the other way.

That's a serious red flag for me. It shows that even developers don't always agree with Claude Shannon on how DSP works.

That didn't really strike me that way. To me, he seemed to be saying that dithering was a safety net that would be perhaps a bit more important at lower bit counts (he mentioned 14 bits as the originally intended bit depth for CDs), but at 16 bit, it's so negligible that it's not even really necessary. (Thus, no one ever ruined something by not dithering it.)

That's the takeaway I was getting, but I don't pretend to be a digital audio expert (not even fairly knowledgeable) in the least. The last two-thirds of your post, for example, goes pretty far over my head.
 
A lot of people attribute the "sound of digital" (e.g., lack of depth, poor stereo imaging, cold harsh sound) to lack of dithering. To be fair, it's a subject that usually generates a lot of confusion, on which developers don't always agree. Since it happens at very low signal levels, the argument that you can't hear it anyway seems intuitive on the surface. The thing is, fixed-point PCM audio has a flaw built into it that dither counteracts, and it's not about trying to hear something more than -80 dBFS down in the signal. Leaving harmonic crud near the zero crossing (very low level audio) can have farther-reaching effects on the quality of things you actually can hear.
 
A lot of people attribute the "sound of digital" (e.g., lack of depth, poor stereo imaging, cold harsh sound) to lack of dithering. To be fair, it's a subject that usually generates a lot of confusion, on which developers don't always agree. Since it happens at very low signal levels, the argument that you can't hear it anyway seems intuitive on the surface. The thing is, fixed-point PCM audio has a flaw built into it that dither counteracts, and it's not about trying to hear something more than -80 dBFS down in the signal. Leaving harmonic crud near the zero crossing (very low level audio) can have farther-reaching effects on the quality of things you actually can hear.

I'll take your word for it.

Do you know of any videos that show evidence of this? I'd love to see/hear it!
 
Monty's video demonstrates it well enough in some ways. He shows a sine wave on a spectrum analyzer. Without dither, you can see the harmonic spikes from truncation distortion. With dither, the noise floor comes up a little, but the spikes are gone, and the amplitude of the distortion is greater than the noise from dithering. He also shows digital black - absence - nothingness from trying to truncate a sine wave below 1 bit resolution. Add dither, there's the sine wave.
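
Here's a rough numpy version of that spectrum-analyzer picture (again my own sketch, not taken from the video): a low-level sine truncated to 16 bits leaves a harmonic spike, while the dithered version trades it for a slightly higher but signal-independent noise floor.

```python
import numpy as np

fs, n = 48000, 1 << 16
t = np.arange(n) / fs
lsb = 1.0 / 32768
x = 4 * lsb * np.sin(2 * np.pi * 1000 * t)         # roughly a -78 dBFS sine

def quantize(sig, dither):
    d = (np.random.rand(n) - np.random.rand(n)) * lsb if dither else 0.0
    return np.round((sig + d) / lsb) * lsb

win = np.hanning(n)
freqs = np.fft.rfftfreq(n, 1 / fs)
for label, dith in (("truncated", False), ("dithered ", True)):
    spec = 2 * np.abs(np.fft.rfft(quantize(x, dith) * win)) / win.sum()
    db = 20 * np.log10(spec + 1e-12)
    third = db[np.argmin(np.abs(freqs - 3000))]     # bin nearest the 3rd harmonic
    print(label, "level at 3 kHz:", round(float(third), 1), "dBFS")
```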

Here's another video:

 
Monty's video demonstrates it well enough in some ways. He shows a sine wave on a spectrum analyzer. Without dither, you can see the harmonic spikes from truncation distortion. With dither, the noise floor comes up a little, but the spikes are gone, and the amplitude of the distortion is greater than the noise from dithering. He also shows digital black - absence - nothingness from trying to truncate a sine wave below 1 bit resolution. Add dither, there's the sine wave.

Here's another video:



Thanks for that. What's funny is that, although he plays you the 8-bit file without dithering, he doesn't play you the 16-bit file without dithering. LOL. And that was the main thing I wanted to hear because, as he said, no one uses 8-bit.

I would love to hear a whole mix done with dither and without to see if I can hear the difference.

I'm sure that's out there somewhere, so I'll see if I can find it.

It doesn't really matter to me either way, because it's easy enough to dither. But to get to Monty's point, it seemed to me that he was saying that, at 16 bits, not dithering is not going to "ruin" a master because the difference would be so hard to hear.

Again, that's what I'm guessing. I'm going to go see if I can find any evidence of that myself.
 
Well, apparently this is going to be more difficult than I thought.

I thought for sure I'd be able to easily find a video that compared a full mix down to 16 bit with and without dither in a "Can you hear the difference?" type of video. But I haven't been able to find it yet.

I did find this video, where he at least played a sine wave at 24 bit and then truncated to 16 bit and 8 bit, but he's f*cking talking all over it so it's really hard to listen critically! (WTF???) From what I could tell, given the circumstances, I couldn't say I heard a noticeable difference between 24 bits and 16 bits. I could hear a slight difference at 8 bits. (This was just a lone sine wave, though, of course.)

Introduction To Dithering | iZotope Insider Tips - YouTube

Anyway ... the search continues I guess.
 
famous beagle said:
It doesn't really matter to me either way, because it's easy enough to dither. But to get to Monty's point, it seemed to me that he was saying that, at 16 bits, not dithering is not going to "ruin" a master because the difference would be so hard to hear.

It's easy enough to dither, and it would be easier if we didn't have to think about it. For a while now, Samplitude has had a thing called smart dither or something that you can enable so the workstation just dithers automatically when necessary. I don't think all DAWs handle it the same way.

As for the difference being hard to hear, it certainly can be for something like an ABX comparison. Sometimes you can hear things like reverb tails where dithering preserves the length of decay and sense of space better. Without listening for a specific artifact it can be difficult to hear, but once we find something like that it can get much easier. How a signal with truncation distortion will react to further processing is a concern as well. If you don't notice it for 2 or 3 generations, it's too late to do anything about it. We have the option of just making sure the signal doesn't have that crud in it.
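
To make the "generations" point concrete, here's a small sketch (my own construction, with made-up gain values) that runs a low-level sine through a few gain-change-then-requantize passes at 16 bits. Without dither the accumulated error stays spiky and signal-correlated; with dither it stays noise-like.

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs
lsb = 1.0 / 32768
x = 16 * lsb * np.sin(2 * np.pi * 440 * t)          # a quiet sine, around -66 dBFS

def quantize(sig, dither):
    d = (np.random.rand(len(sig)) - np.random.rand(len(sig))) * lsb if dither else 0.0
    return np.round((sig + d) / lsb) * lsb

gains = (0.9, 1.08, 0.95)                           # three hypothetical "generations"
for label, dith in (("truncated", False), ("dithered ", True)):
    y = x.copy()
    for g in gains:
        y = quantize(y * g, dith)                   # gain change, then back to 16 bit
    err = y / np.prod(gains) - x                    # undo net gain, look at the error
    spec = np.abs(np.fft.rfft(err * np.hanning(len(err))))
    spiky = 20 * np.log10(spec.max() / (np.median(spec) + 1e-18))
    print(label, "error spectrum peak over median:", round(float(spiky), 1), "dB")
```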
 
BUT, there's also this concept called wave-particle duality. And it says that everything (every particle)---whether you're talking about light or matter---acts as both a particle and a wave, depending on when/how you look at it. (I'm paraphrasing horribly here.) And so that muddies the waters a bit more. I'm more confused now than ever! :)
It's very important to remember that quantum mechanics says literally nothing about what actually is. It is merely a convenient model that helps to predict what might be. It is actually more likely that there is neither particle nor wave, but using those ideas can help us to predict - or more precisely figure the probability of - certain phenomena coming to actuality. That is, it's all just math and pictures to help us conceptualize and comprehend things that are essentially inconceivable and incomprehensible. I know that doesn't help your headache, but that's where we're at. :)
 
It's very important to remember that quantum mechanics says literally nothing about what actually is. It is merely a convenient model that helps to predict what might be. It is actually more likely that there is neither particle nor wave, but using those ideas can help us to predict - or more precisely figure the probability of - certain phenomena coming to actuality. That is, it's all just math and pictures to help us conceptualize and comprehend things that are essentially inconceivable and incomprehensible. I know that doesn't help your headache, but that's where we're at. :)

Yes ... it really truly is mind-boggling. The double-slit experiment and the Mach-Zehnder interferometer simply blow my mind.
 
You have a point: The fewer magnetic particles passing the record point per second, the lower the maximum audio frequency you can record. Consider the situation with those open-reel tape decks of the 50s, 60s, and 70s, and note that they commonly had three speeds, at least on consumer decks (professional decks often had 15 IPS - really "hauling the tape").

There is one reason for the availability of the higher speeds - you can record signals of higher frequency if you can move more oxide past your record point per second.

Regarding the relationship between the analog audio signal and the high-frequency bias signal, one service technician whom I knew in Oklahoma told me that in a way, the analog signal modulates the bias signal as the carrier. And as in AM radio, if you overmodulate the carrier, you get distortion - not as bad as that in digital equipment when the converter is "overmodulated" - but still a kind of distortion you cannot undo in later processing of the recording.

As you indicated, there are also varying qualities of recording tape. I've seen some cheap tape which has dropouts and just poor-quality recording as a result. If you want to make a high-quality recording, it is well to get hold of a reel whose quality is commensurate with the recording at hand.
 
You have a point: The fewer magnetic particles passing the record point per second, the lower the maximum audio frequency you can record. Consider the situation with those open-reel tape decks of the 50s, 60s, and 70s, and note that they commonly had three speeds, at least on consumer decks (professional decks often had 15 IPS - really "hauling the tape").

There is one reason for the availability of the higher speeds - you can record signals of higher frequency if you can move more oxide past your record point per second.

Regarding the relationship between the analog audio signal and the high-frequency bias signal, one service technician whom I knew in Oklahoma told me that in a way, the analog signal modulates the bias signal as the carrier. And as in AM radio, if you overmodulate the carrier, you get distortion - not as bad as that in digital equipment when the converter is "overmodulated" - but still a kind of distortion you cannot undo in later processing of the recording.

As you indicated, there are also varying qualities of recording tape. I've seen some cheap tape which has dropouts and just poor-quality recording as a result. If you want to make a high-quality recording, it is well to get hold of a reel whose quality is commensurate with the recording at hand.

Err? Not quite. There are vastly more magnetic domains passing a tape head even at 1 7/8 ips than are needed for HF response (I seem to recall a Nakamichi that got to 25 kHz?). The problem is output. The output of a system that reads magnetic flux is proportional to the rate of change of that flux, so the output of a tape head rises at 6 dB/octave. Obviously, the higher the tape speed, the greater the output at any given frequency. Thus we get the familiar playback time constants, which would be a perfect 6 dB/octave low-pass correction, but there are inevitable losses in the heads and tape formulations. We can compensate for those to some extent, but as tape speed falls we get progressively more HF 'squash'. So we have to test cassette response at -20, but we can do it at -10 (ref whatever flux standard you are using) for 15 ips. The wider tracks help as well, but they really mostly improve noise performance (signal doubles for twice the track width, 6 dB, but uncorrelated noise only goes up by 3 dB).

Then, there is really little problem WRITING to tape* but the replay head gap puts a limit on HF. So, make finer gaps? They did but then overall output falls. Cassette really is something of a marvel!
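
For anyone who wants numbers, here's a back-of-envelope sketch of the two relationships above, using the textbook forms as I remember them (so treat the exact figures, and the assumed 1 micron replay gap, as assumptions rather than gospel): replay output rising at 6 dB/octave because the head reads rate of change of flux, and gap loss limiting HF via the recorded wavelength (tape speed divided by frequency).

```python
import numpy as np

def head_output_rise_db(f, f_ref=1000.0):
    # e(t) = dPhi/dt, so output is proportional to frequency: +6 dB per octave
    return 20 * np.log10(f / f_ref)

def gap_loss_db(freq_hz, speed_ips, gap_um=1.0):     # 1 um replay gap is an assumed value
    # recorded wavelength = tape speed / frequency; classic gap loss is
    # 20*log10( sin(pi*gap/wavelength) / (pi*gap/wavelength) ), first null at wavelength = gap
    wavelength_um = speed_ips * 25400.0 / freq_hz     # inches/s -> micrometres, per cycle
    arg = np.pi * gap_um / wavelength_um
    return 20 * np.log10(abs(np.sinc(arg / np.pi)) + 1e-12)

print("head output, 10 kHz vs 1 kHz:", round(head_output_rise_db(10000), 1), "dB")
for speed in (1.875, 7.5, 15.0):                      # ips: cassette, consumer, pro reel
    print(speed, "ips, gap loss at 20 kHz:", round(gap_loss_db(20000, speed), 1), "dB")
```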

Yes, HF bias must be at least twice the HF limit. Sound a bit familiar?

If Mr Beats is looking in, please be gentle with me; I learned this stuff nearly half a century ago and have not checked any references!

*The arrival of the transistor posed a problem in getting enough voltage swing to ensure constant-current drive, but as circuits developed (amplifiers with current feedback), those problems were overcome. To an extent, anyway: as tape formulations improved and could be hit harder, amplifiers had to play 'catch-up', and some replay amps also began to get short on headroom.

Dave.
 
hautbois16, please see my PM. Couple of things come to mind?
You say you "like the KNOBS", fine, so I thought you might find a MIDI control surface handy? That converts many of the DAW's visual parameters into knob and slider movements. I hope the interface you chose has MIDI, but if not, USB will probably be fine.

You also mentioned making a "sacrificial track"? You don't have to bugger a perfectly good one! Just save it and send it to a USB stick or external drive. Then you always have the track saved virgo intacta.

Another issue was with fekking Windows changing sound settings? I curse this often with Skype, as I use a headset into a USB port, but at odd times the computer reverts to "High Definition Input" instead of "USB CODEC". If you are using a desktop you might consider an internal PCI sound card? The M-Audio 2496 can still be found and can be set as the default sound device; then you disable the internal MOBO sound and just run higher quality through the M-A. Bloody Windows 10 then has nowhere else to go!

Dave.
 
Great thread this! I would just like to add that there is a school of thought that time itself is quantised. It could be that you can't get any shorter a time interval than maybe the time it takes light to traverse a subatomic particle. That could mean that time jumps from one interval to the next like a series of movie frames to give the illusion of continuity. Sort of puts the cat among the pigeons in terms of sound waves...
 
Great thread this! I would just like to add that there is a school of thought that time itself is quantised. It could be that you can't get any shorter a time interval than maybe the time it takes light to traverse a subatomic particle. That could mean that time jumps from one interval to the next like a series of movie frames to give the illusion of continuity. Sort of puts the cat among the pigeons in terms of sound waves...

Given that Einstein proved that time and space are inextricably linked, this would make sense to me. It's actually almost hard to imagine that it couldn't be true, assuming that's the case.
 
Given that Einstein proved that time and space are inextricably linked, this would make sense to me. It's actually almost hard to imagine that it couldn't be true, assuming that's the case.

Got to be tied up with 'data'? Information theory (of which I know Jack S) surely means that once we "cannot know", THAT is the limit?

Or! The time taken for a photon to pass a given point?

Dave.
 