Probably a silly dynamics question

  • Thread starter: mixsit
So from the looks of the transfer graph on that GUI, what you created is much like the first limiter/compressors that came into existence in the '50s, which didn't really have a "knee"; instead the whole diagonal line that is linear gain in the transfer plot is just tilted or tipped on the center ("hinged"). The difference is that although your graph doesn't have a knee, it's not upward-expanding the low stuff (other than the effects of makeup gain, right??).

So while you say the waveform wasn't significantly changed, its overall dynamic range was made smaller (forced into a more compact overall footprint in amplitude??). Just a volume control, a gain change?

I don't know for sure since I've never gotten my mittens on one personally, but many of the first boxes didn't have a lot of controls for the envelope behaviours. The GUIs on, say, IK MM's emulation of a Fairchild 670, or the new PSP oldTimer, only have one control for both attack and release.


Here is a reference to the page over at Rane that I found. It presents an interesting take; a history lesson on the transfer function:



http://www.rane.com/note155.html



See the appendix:



What "compression" is and does has changed significantly over the years. Reducing the dynamic range of the entire signal was the originally use of compressors, but advances in audio technology makes today's compressor use more sparing.

Early compressors did not have a "threshold" knob; instead, the user set a center ("hinge") point equivalent to the midpoint of the expected dynamic range of the incoming signal.

With advances in all aspects of recording, reproduction and broadcasting of audio, the usage of compressors changed from reducing the entire program to just reducing selective portions of the program.



[Attached image: n155fig29.gif]





So it is useful to break what a comp does into two parts. To me, the simpler part is the gain changes brought about by the transfer function (which, depending on the comp, can have many wild and wonderful permutations), and then there's the more melon-twisting aspect of the detector circuit's envelope-following activities, with all of its controls, and how it goes about deciding what to tell the gain elements to do (and when and how fast to do it).

All of that is before the more esoteric sound-changing elements some comps have, like adding analog colour and inharmonic distortion.



If you got this far, thanks for putting up with my ramblings! :p I probably got something wrong. Comps are devilishly deep and complicated. Really challenging to understand (I doubt I ever will completely), and I guess that's part of their charm... you gotta like a challenge!! :D


Cheers
 
Hey this is fantastic. I was happy just to see the idea finally starting to get conveyed, let alone thinking it could actually happen!
You know, for these purposes I'm not sure you'd need to go all the way down into the noise floor. Starting somewhere down within the musical part of the signal might be more to the point. But it looks like you have already run into an interesting consequence. Even at a 2:1 ratio it's a lot of total reduction, and perhaps the want of a soft-knee curve ultimately? What do you think?
Well, I thought this was a very interesting question and resulting thought process and final experiment. It kind of forces all of us to go back to basic audio theory and think about just how simple the basics actually are, and yet how they can so complexly relate to real world signals.

It also caused me to realize a couple of things that I never really consciously thought about before, like just how much the overall amplitude of the signal is reduced when you throw the threshold so low (at first I thought something was "broken" when I saw the original compressed result :p, but then a quick thought of the math made me realize that it wasn't; that's just how it really works). Also, I never really thought about the properties of something as simple as output gain, and how, depending upon how that itself is designed, it can affect the outcome; it's possible that an analog compressor (or any other hardware box, FTM) that uses an amplifier circuit that (for example) outputs 2dB for every dB going into it is going to provide a different result than a digital one that simply adds x dB to the incoming signal.

As far as real world use of low-threshold compression, I'm not completely sure of the utility of it. On the one hand, as that third sample shows, it is an efficient way of increasing RMS without having to smash peaks against the ceiling. But on the other hand, it also shows the dangers of compression in amplifying and exposing faults that otherwise exist at volumes low enough to be effectively masked from all but the most discriminating user. For (just one) example, in that third sample, that trail off of that initial intro distorted guitar slide sounds pretty awful with that compressed fade becoming really granular sounding.
Glen, that clip is interesting. A couple of questions: does that plug allow you to go to zero on attack and release, and did you have to do it off-line?
Yes on the A/R question. Both of them (the Adobe one also) allow you to go down to 0ms on both properties. See, they're not trying to emulate a real-world compressor; they are just tools for applying math to the signal dynamics, and therefore put no such bottom-side limitation on the A/R settings.

It's kind of surprising to me how very few people know about these dynamics processing tools. I think the only reason I know about them is because I started using this stuff in Windows before VST or even DX plugs even existed. Then, all we had to work with were the native tools that came with the editor, so we had a tendency to explore those as much as possible. Then when the idea of 3rd-party plugs came into vogue, everybody got hot-to-trot to all the emulators that started becoming available, and the more esoteric native tools kind of get ignored now.

That said, though, graphical dynamics processors like these, while extremely flexible and powerful, are quite difficult to use and use well, especially for the newb or the recording musician who may not be all that deeply versed in audio signal theory. Also, they tend to be quite surgical, and not very "musical-sounding", exactly because they are not trying to emulate analog device characteristics. But when your question came up, mixsit, I dusted off some mental cobwebs and figured a GDP was exactly the tool for the job you described ;).

As far as doing it "off line", yes, I applied the compression to the wave as an off-line process. I don't believe that's necessary - the SF tool does have a "real time" checkbox on it, and both tools do have a preview button, so they can probably be used on the fly just as well.

As far as the question about the knee, I'll get to that in my next post which will reply to flatfinger's post.

G.
 
If you got this far, thanks for putting up with my ramblings! :p I probably got something wrong...
Not at all. All good add. This is exactly part of the thought process.

This is from my Valley 610 book (on why 'it can't', or shouldn't, be done):

"Dynamics processors can not be allowed to operate instantaneously ...(my snips).. following exactly the input waveform; the result is operation as a non-linear gain block which creates distortion. The objective ... is rather to control the envelope of the signal as in fig ..."

You can see, perhaps, the countering or opposing (if you will) possible goals in gain reduction.
It continues:

"The ideal processor should be able to distinguish the input waveform from its relatively slower envelope and follow the envelope without regard to the frequency content.. This, in practice, is nearly impossible. (my highlight) ..The process must react quickly enough to control ... sudden bursts ... thus, sometimes it must react to the waveform. "

They then go on to describe how (and how not) you should set attack and release responses for the best compromise, where both ends of the time scale share the common problem of compression, being: 'fast enough' = messes with the micro.
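Just to put the book's point in concrete terms, here's a rough Python sketch of the kind of detector it's describing - a generic one-pole peak follower with attack and release times, not the Valley 610's actual circuit (the function name, time constants and test signal are mine, purely for illustration):

```python
import numpy as np

def envelope_follow(x, sr=44100, attack_ms=5.0, release_ms=100.0):
    """One-pole peak detector: it rises with the 'attack' time constant and
    falls with the 'release' time constant. With both set to 0 the 'envelope'
    is just abs(x), so any gain derived from it re-shapes the waveform itself
    (i.e. distortion) - which is the compromise the manual describes."""
    att = 0.0 if attack_ms <= 0 else np.exp(-1.0 / (sr * attack_ms / 1000.0))
    rel = 0.0 if release_ms <= 0 else np.exp(-1.0 / (sr * release_ms / 1000.0))
    env = np.zeros(len(x))
    prev = 0.0
    for i, s in enumerate(np.abs(x)):
        coeff = att if s > prev else rel   # charging vs. discharging
        prev = coeff * prev + (1.0 - coeff) * s
        env[i] = prev
    return env

# A 100 Hz sine: with 0 ms times the "envelope" is just the rectified waveform;
# with 5 ms / 100 ms it hugs the slower amplitude contour instead.
t = np.arange(44100) / 44100.0
sine = 0.5 * np.sin(2 * np.pi * 100 * t)
print(envelope_follow(sine, attack_ms=0, release_ms=0)[:5])
print(envelope_follow(sine)[:5])
```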

So the question is, with these tools, and at the sample level, can we do an 'end run' around it?
 
As far as doing it "off line", yes, I applied the compression to the wave as an off-line process. I don't believe that's necessary - the SF tool does have a "real time" checkbox on it, and both tools do have a preview button, so they can probably be used on the fly just as well.


G.

While the dynamics processing might be a bit time-consuming for paper, pencil, and a TI calculator, it does not stress the CPU of current desktop computers (at least ones with an OS that doesn't suck up 99% of the CPU cycles). So, specifically regarding AA, you can stick 24 iterations of the dynamics processor on 24 separate tracks pretty much without a hiccup (I don't typically do that, but dynamics math does not stress the CPU like anything resembling usable reverb will).

With regard to musicality, the original CEP dynamics processor 'could' be as transparent as dynamics processing was likely to get. You could alter the source in many of the ways associated with hardware compressors (though you would not emulate those results . . . very few iconic hardware compressors/limiters gained their reputation via transparency; on more than one occasion I've heard the magic double-button press of the 1176 referred to as 'thickening' the source, akin to giving the music an erection . . . not transparent).

As with most functional digital tools, the musicality of use is dependent primarily on the practitioner . . . with valve gear and tape the perceived musical artifacts are a function of hardware design and cultural prejudice.

But I do agree that using the processors effectively depends, to an extent, on having at least a conversational understanding of basic audio physics. They have enough dependent variables that simple trial and error via impact in the mix can be frustrating. It is pretty easy to eat CPU cycles while gaining no perceptible impact, or, with what appears to be a minor change, introduce unwanted artifacts.

But in terms of managing a mix transparently, they tend to be worth the effort to learn.
 
So from the looks of the transfer graph on that GUI, what you created is much like the first limiter/compressors that came into existence in the '50s, which didn't really have a "knee"; instead the whole diagonal line that is linear gain in the transfer plot is just tilted or tipped on the center ("hinged"). The difference is that although your graph doesn't have a knee, it's not upward-expanding the low stuff (other than the effects of makeup gain, right??).
Almost...but not quite a cigar ;). Very good question though. :)

On the graph on the GDP, there is no knee, but neither is there a hinge point. On that graph, like on the Rane graph, a perfect, straight 45° angled line would indicate a 1:1 input-output ratio - i.e. no compression, no expansion. This is represented by the dotted line on the graph. What we did was apply straight 1.5:1 compression across the board, from -80dBFS to 0dBFS on the input (left to right across the bottom X axis). This basically means that for every 1.5 dB going into the compressor, 1 dB comes out - roughly 0.67 dB out for every 1.0 dB in (as read on the bottom-to-top Y axis on the left). This changes the slope of the line to a shallower slope, as represented by the actual solid line on the SF graph. There is no "knee" per se, because the compression is uniform at 1.5:1 everywhere (the line is straight, no bent knee in it).

Basically the corner point of any "knee" simply represents where the threshold is set. Since we pushed the threshold in this case all the way down to the bottom left of the graph, no knee actually appears. Technically speaking, I guess, there will be a knee at -80dBFS in this case, because anything that may exist in the signal below that threshold point will not be compressed by the plug. But because it's at the boundary of the graph and the plug, we don't see it and don't really control it.
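If it helps to see that straight-line math as numbers, here's a rough Python sketch - just the textbook hard-knee formula, not necessarily the exact arithmetic the SF plug runs internally (the function name and example levels are mine):

```python
import numpy as np

def static_curve_db(level_in_db, threshold_db=-80.0, ratio=1.5):
    """Textbook hard-knee downward compression: 1:1 below the threshold,
    'ratio':1 above it. Input and output are levels in dBFS."""
    level_in_db = np.asarray(level_in_db, dtype=float)
    over = np.maximum(level_in_db - threshold_db, 0.0)  # dB above the threshold
    return level_in_db - over * (1.0 - 1.0 / ratio)     # only the excess gets reduced

# 1.5:1 with the threshold parked at the bottom of the graph (-80 dBFS):
print(static_curve_db([-60.0, -30.0, -6.0]))            # ~0.67 dB out for every dB in
# 2:1 with the threshold raised to -36 dBFS (the "knee" example below):
print(static_curve_db([-60.0, -30.0, -6.0], threshold_db=-36.0, ratio=2.0))
```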

If you want to see a knee, set the threshold higher, within the boundaries of the plug's operation. For example, here's a setting of 2:1 compression with the threshold set at -36dBFS. You can see the knee "joint" at the center at the -36dB input level:

[Attached image: comp_test_gdp_knee.jpg]


Below the threshold, there is no compression applied, so the line follows the ideal 45°, 1 dB in/1 dB out dotted line perfectly. But at the threshold input level of -36dBFS, the 2:1 compression starts being applied, so the slope of the line changes, and we now see a "knee".

When Rane talks about the "hinge point", they're talking about real old-time devices. This is very interesting - and probably confusing to many - because this represents a different class of compression altogether than what most of us are used to. All the standard compressors that most of us are used to, whether digital or analog, use what's called "downward compression"; meaning that everything above the threshold level is compressed downwards. Simple and - to most of us - natural enough. But watch what happens when we turn that knee into a "hinge point" or a "pivot point" instead:

[Attached image: comp_test_gdp_pivot.jpg]


Now our old threshold point becomes a pivot point around which a straight line can rotate. But that now means that our compression line below that point is no longer riding the 1:1/no-compression train. Now there is compression applied to the signal below that pivot point, but it's in the form of "upwards compression"; i.e. all signals lower in amplitude than -36dBFS are now raised in amplitude towards -36dBFS. The line is straight, the relative ratio of compression is still the same, but it's now reversed; the overall signal is still being compressed, but like things in a gravity well around a planet or a black hole, they're all being pulled or compressed towards the center, or towards the pivot/hinge point setting.
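In code form that hinged line is about as simple as it gets. A quick sketch (my own naming, not the plug's actual math); note that everything gets pulled toward the pivot, from both sides:

```python
import numpy as np

def pivot_curve_db(level_in_db, pivot_db=-36.0, ratio=2.0):
    """'Hinged' compression: the whole transfer line is tilted around the
    pivot, so levels above it are pulled down and levels below it are pulled
    up (upwards compression). There is no uncompressed 1:1 segment at all."""
    level_in_db = np.asarray(level_in_db, dtype=float)
    return pivot_db + (level_in_db - pivot_db) / ratio

# Everything squeezes toward -36 dBFS: [-48, -36, -24]
print(pivot_curve_db([-60.0, -36.0, -12.0]))
```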

This is not to be confused with dynamic range expansion (though many do). Expansion is just the opposite: everything is expanded away from the pivot point or threshold point, and thus the overall dynamic range is expanded. This would be represented on the graph by a line or line segment that is *steeper* than the 45° 1:1 dotted line, and would numerically be represented with the larger number on the right of the colon (e.g. an expansion ratio of 1:1.5):

[Attached image: comp_test_gdp_expand.jpg]


Now, a "soft knee" simply means that the "joint" or bend in the line at the threshold is smoothed instead of a sharp angle. A perfect functional example would be the preset in the SF plug called the "Soft noise gate below -36dB":

[Attached image: comp_test_gdp_softgate.jpg]


What that setting is doing is basically applying no compression or expansion above -36dB (the straight 45° line above and to the right of -36dB input), but starting at -36dB it gently curves into a steep expansion (steeper than 45°). This has the effect of pushing/expanding downwards the low-volume noise below -36dBFS, essentially pushing it down to inaudible levels. But it does so with a soft knee so that there is no sharp cutoff at the threshold volume, but rather a gentler transition. These soft knees can sometimes be more natural or musical sounding than a sharp knee.
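For the curious, here's one common way a smoothed knee gets computed - a quadratic blend over a "knee width" centered on the threshold. This is a generic, textbook-style sketch for the ordinary downward-compression case (the soft-gate preset is the mirror-image idea, with the steep segment below the threshold instead of above it), not the SF plug's actual math:

```python
import numpy as np

def soft_knee_curve_db(x_db, threshold_db=-36.0, ratio=2.0, knee_db=12.0):
    """Downward compression whose corner is smoothed by a quadratic section
    spanning knee_db, centered on the threshold."""
    x = np.asarray(x_db, dtype=float)
    # Hard-knee curve first: 1:1 below the threshold, ratio:1 above it.
    y = np.where(x > threshold_db, threshold_db + (x - threshold_db) / ratio, x)
    # Then blend smoothly inside the knee region.
    in_knee = np.abs(x - threshold_db) <= knee_db / 2.0
    quad = x + (1.0 / ratio - 1.0) * (x - threshold_db + knee_db / 2.0) ** 2 / (2.0 * knee_db)
    return np.where(in_knee, quad, y)

# Gain reduction eases in gradually around -36 dBFS instead of kinking sharply:
print(soft_knee_curve_db([-60.0, -40.0, -36.0, -32.0, -12.0]))
```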

Finally, notice that the gentle curve is made using a number of line segments and "handles". Part of the real power (and complexity) of these GDP tools is that they work much like automation rubber bands on a timeline; you can stick as many or as few handles on the dynamics line as you want, and bend the line into just about any goofy spaghetti shape you want, and get all sorts of weird dynamics effects happening all over the dynamic range of your signal. 99 out of 100 of those shapes will probably be musically useless, but you never know what you may need or want to do to surgically massage your signal.
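And those handles map directly onto a plain piecewise-linear lookup. A rough sketch with made-up handle positions (roughly tracing the soft-gate idea), not any real preset's values:

```python
import numpy as np

# Handles like the rubber-band points on the GDP graph: (input dB, output dB) pairs.
handles_in = [-80.0, -60.0, -42.0, -36.0, 0.0]
handles_out = [-80.0, -78.0, -55.0, -36.0, 0.0]

def apply_handles(x):
    """Map each sample's level (in dB) through the hand-drawn curve and
    re-apply the original sign. No attack/release smoothing here, so this
    is the 'pure math applied to the samples' end of the spectrum."""
    x = np.asarray(x, dtype=float)
    mag = np.maximum(np.abs(x), 1e-10)           # avoid log10(0)
    level_db = 20.0 * np.log10(mag)
    out_db = np.interp(level_db, handles_in, handles_out)
    return np.sign(x) * 10.0 ** (out_db / 20.0)

quiet_hiss = 0.001 * np.random.randn(1000)       # roughly -60 dBFS noise
print(np.abs(apply_handles(quiet_hiss)).max())   # pushed well below its original level
```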

Ugh, I need a smoke break.... ;)

EDIT: @ Ortez: I agree completely :).

G.
 
So the question is, with these tools, and at the sample level, can we do an 'end run' around it?

Ah, no. You can't alter the source without altering the source. The variables being discussed as if they were independent (e.g. amplitude, frequency) aren't. Ultimately they are dependent variables of a pressure wave (entirely sidestepping the tree-falling-in-the-forest psychoacoustics).

One of the reasons a general rule of thumb for a polite mix is to effect dynamics prior to EQ is that, generally speaking, changing dynamics (amplitude, S/N ratio) will influence perceived EQ more than altering EQ will change perceived dynamics. And you can't change the dynamics without changing the dynamics, no matter where you set the parameters.

Tracking, editing, mixing, & (heaven forbid) mastering is always all about compromise. No free lunch. You alter the source, you alter the source.

Oh, in the digital domain all editing is effected at the 'sample' level . . . no matter the user interface, it's all executed at the sample level (you can program algorithms to skip samples in all sorts of wonderful ways, but the execution is linear, sample by sample . . . even if the program is supposed to skip ahead 700 samples and then back 500, execution still 'polls' the samples linearly).
 
As far as real world use of low-threshold compression, I'm not completely sure of the utility of it. On the one hand, as that third sample shows, it is an efficient way of increasing RMS without having to smash peaks against the ceiling. But on the other hand, it also shows the dangers of compression in amplifying and exposing faults that otherwise exist at volumes low enough to be effectively masked from all but the most discriminating user. For (just one) example, in that third sample, that trail off of that initial intro distorted guitar slide sounds pretty awful with that compressed fade becoming really granular sounding.

Some processors follow expansion with a bit of low-level gating to try and keep those low-level nasties at bay, don't they? Could that work?? This reminds me of the Elemental Audio comp that separated by level zones instead of the usual freq (maul the band). I should have bought that one way back when I first saw it. Hope they make it back from the RN debacle.

Another thought about the term "granular"... Is this an instance of a time when a greater sample rate would be a great benefit (other than sounding good to Fido)? I've read that dynamics, comps and limiters can put the increased resolution (sorry, not the best term) to good use. Or is it a quantization error issue because it's at the LSBs (least significant bits)? (Geez; I'm probably being pedantic.)


"The ideal processor should be able to distinguish the input waveform from its relatively slower envelope and follow the envelope without regard to the frequency content.. This, in practice, is nearly impossible. (my highlight) ..The process must react quickly enough to control ... sudden bursts ... thus, sometimes it must react to the waveform. "


And the length of the waveforms within the source material is a big factor. (And when it's a broadband source, using the sidechain filters can be effective, though I haven't found that to be as effective as I once assumed it would be; except for the HPF...)

Something I haven't gotten into much (because it's still a bit fuzzy) is the hold function that some of my soft comps' (not gates') detector sections have; I know it is there to help avoid those single-cycle-following reactions.


As with most functional digital tools, the musicality of use is dependent primarily on the practitioner . . . with valve gear and tape the perceived musical artifacts are a function of hardware design and cultural prejudice.
From what I understand, the mystic qualities given to analog gear were only because some of them were more FORGIVING when the operator screwed up!!

Since we pushed the threshold in this case all the way down to the bottom left of the graph, no knee actually appears.

Shit, how did I fall for the old "knee in the corner" trick. Oh well; better than being the recipient of the old "knee in the groin" trick :mad: !!!






Thanks guys!! I've always had a hard time believing that with all the interactive knobs on comps, you could just randomly twist 'em into submission. I keep at it and am making substantial progress. Stuff like the above helps very much!!!




Cheers
 
Some processors follow expansion with a bit of low-level gating to try and keep those low-level nasties at bay, don't they? Could that work?? This reminds me of the Elemental Audio comp that separated by level zones instead of the usual freq (maul the band). I should have bought that one way back when I first saw it. Hope they make it back from the RN debacle. Another thought about the term "granular": is this an instance of a time when a greater sample rate would be a great benefit (other than sounding good to Fido)? I've read that dynamics, comps and limiters can put the increased resolution (sorry, not the best term) to good use.
The plug you're talking about is (was) called "Neodynium". The name changed to "Dynamizer" under RND. I love that plug; it's one of the three EA plugs I take with me when I go to someone else's studio or system (the other two are "Eqium", the EQ plug, which I like even more than Neodynium and use virtually *everywhere*, and "Finalis", their limiter plug).

Neodynium is really just a high-tech/space-age version of a dynamics processor. You can think of the level zones like taking the graph in a standard GDP like we've been looking at and automatically adding four handles on the line, allowing you to divide the graph into four different resizable zones. While a vanilla GDP is actually more flexible in that you can put on as many handles (creating as many "zones") as you want, the chances of ever really needing more than four zones to work independently are pretty rare, IMHO. It also doesn't drop to zero on its A/R settings, but they do, if I recall, go down to microseconds, which is more than flexible enough for most anything (certainly more than your average compressor).

But with the additional ability to set up a "key filter", which adds the dimension of setting up a specific frequency curve to which you can apply the dynamics if you want something more surgical than broadband, the ability to target the left and right channels independently if you want, and some excellent, very clever metering, that app is like the stealth joint strike fighter of dynamics control. A steep learning curve for the newb, perhaps, but about all one could ask for in power, flexibility and surgical control in a fantastic interface.

As far as the "granularity" I was referring to, (a word that can mean different things to different folks, I suspect), it's an artifact of the fact that the compression combined with makeup gain is kind of "bloating" the reverb tail and amplitude fade. It's kind of foreshortening the fade slope. It still lasts as long, but it stays at an audible for longer before completely disappearing.

I'm not sure if a higher sample rate would help that or not. Off the top of my head, I can't think how, because it's not really a frequency issue - at least that I can think of off-hand. But then again, I'm kind of brain-burnt right now after diving in the deep end of this thread so long, so maybe I'm missing something there :o.

Perhaps 32-bit floating point processing may be a help there, but probably mostly if it were in effect in the file itself as well, and not just in the plug doing the processing. Again, I'm kind of mentally winging it here, so no chiseling of this in stone, OK? ;)

Noise gating would serve to cut off the fade faster. How sharp or smooth that cutoff would be would be a function of the knee in the gating. But either way it might hide the "granularity" by dropping it below the gate faster, but that would also at the same time shorten the length of the signal before it mutes. It would probably be one of those cost/benefit trade-offs where you'd want to pick the balance point between the two evils.

The best solution, IMHO, is to avoid it altogether by not throwing compression over a fade-out or tail; save the fades/tails for the last step when possible. Or, if you absolutely have to apply something over a pre-existing fade - as in mastering/remastering - then I'd probably apply the compression as needed, and then create a new fade or tail that fades out before the "granular sound" kicks in.

G.
 
A single sample cannot convey anything such as frequency, amplitude, let alone timbre.

Technically speaking, I think a single sample will convey amplitude. A single sample is a pure, instant recording of sound pressure level, right?
 
If you want to get technical, it would convey DC offset, but not amplitude. Amplitude measures the difference between the highest and lowest pressure levels. So, no, it wouldn't be amplitude.
 
I see, you're right about amplitude. But the DC offset being sampled is merely an electrical signal that functionally represents a relative sound pressure level, right? That is, when the sound being recorded is captured, it is the SPL that is transduced and then sent as an electrical signal of varying voltage to the hardware down the chain.

To be brief, the higher the SPL, the higher the voltage. The higher the voltage, the higher the DC offset. Am I correct? If so, then the DC offset being sampled is a direct correlate of the SPL being recorded.

Thus, while you are totally 100% correct about amplitude, the sample does indeed convey SPL.

Am I nuts? :D
 
Nope.

SPL is related to, or rather dependent on, amplitude and the density of the sound-carrying medium (gas/liquid, etc). If you have a steady-state DC offset there is no sound pressure. In the physical world, if you put a steady positive DC offset through an amp and connect a speaker to it, all you'll have is a speaker cone that's exerted forward, but it will not be vibrating/moving. Now, this might not be good for the health of the speaker, but since it wouldn't be moving/vibrating it wouldn't be producing any waves, which in turn won't produce any sound pressure, which will mean that you will not have a sound pressure level.

One way to look at DC is to look at it as the "rest" or "pivot" point around which your wave will oscillate. In normal operation, DC = 0. Now, imagine you have a wave that oscillates between 0.5 and -0.5 "points" (unlike dB, this is an arbitrary scale used in both the analog and the digital world; in the analog world, full-scale amplitude for a synthesized wave would be one that oscillates between +1V and -1V... on some synths... others use a 5V scale, but I digress...). The overall excursion of this wave is 1V (if we go with the analog domain). Now, let's add a DC offset of 0.5V. All this will do is move the pivot point of our oscillating wave, which means that the wave will be oscillating between 0 and +1V (instead of the original -0.5V and +0.5V). Now, note that the difference between the highest and lowest points in the wave is 1V in both cases, which means that the amplitude is the same, even if the point around which the wave oscillates has shifted.

I know I am probably not explaining this as well as I should, but I am typing this in between taking calls and doing some actual work :)

The image below should make it more clear. The wave to the left has positive DC offset. The wave to the right has no DC offset, in other words, DC = 0. Note that the amplitude in both cases, is the same:

[Attached image: dc_fixer.jpg]
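If a quick numeric check helps, here's the same picture as a little Python sketch (just a synthetic sine, nothing to do with any particular piece of gear):

```python
import numpy as np

# Same sine twice, one copy shifted up by a 0.5 "V" DC offset. The peak-to-peak
# amplitude is identical; only the value the wave oscillates around (its mean,
# i.e. the DC component) changes.
t = np.arange(1000) / 1000.0
wave = 0.5 * np.sin(2 * np.pi * 5 * t)   # swings between -0.5 and +0.5
shifted = wave + 0.5                     # swings between 0 and +1.0

for name, w in (("no offset", wave), ("+0.5 offset", shifted)):
    print(name, "peak-to-peak:", round(float(w.max() - w.min()), 3),
          "DC (mean):", round(float(w.mean()), 3))
```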
 
There is no such thing as "atom of water". Water is a chemical molecule consisting of two hydrogen atoms and one oxygen atom. Unless I am confused as to what you're alluding to. :confused:

Yeah, you're citing the normal chemical definition of 'atom'. But a guy called George Gurdjieff used the term 'atom' idiosyncratically to refer to the smallest amount of, say, water that still had all the characteristics of water. (It flowed, it froze, it formed droplets, etc. He mentioned that it was only J-u-s-t visible to some people, which gives you an idea of how much bigger his 'atom' of water was than a molecule of water.) If it had been me, I'd've used a different term completely rather than use a known term and give it a different meaning, but that was Gurdjieff.

Anyway, what I was wondering about was something similar - the smallest amount of sound that still contained the recognizable characteristics of the instrument that made the sound. But a lot of that turns on who's listening, right?

And one level further in - I was wondering exactly how much information a sample carries, and how much of the sound of the instrument a sample represents.
 
Yeah, you're citing the normal chemical definition of 'atom'. But a guy called George Gurdjieff used the term 'atom' idiosyncratically to refer to the smallest amount of, say, water that still had all the characteristics of water. (It flowed, it froze, it formed droplets, etc. He mentioned that it was only J-u-s-t visible to some people, which gives you an idea of how much bigger his 'atom' of water was than a molecule of water.) If it had been me, I'd've used a different term completely rather than use a known term and give it a different meaning, but that was Gurdjieff.
Ah. Gotcha. I knew I was confused there somewhere :)
 
At the 44.1 kHz sample rate, a single sample has a span of about 22.7 microseconds.

A millisecond is one thousandth of a second!

A microsecond is one millionth of a second.

A sample takes the time span of about 0.0227 thousandths of a second (0.0227 ms).

A sample takes the time span of about 22.7 millionths of a second (22.7 microseconds).



We have solved the riddle of all those trees that fell in the forest without anyone hearing them............ they fell FAST







1/44100 X 1,000,000 = 22.6757369611451247165532879818594



Round it up to 22.7; betcha still can't hear it!!!
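Or let the computer do the arithmetic (same numbers, nothing fancy):

```python
sample_rate = 44100              # samples per second
period = 1.0 / sample_rate       # one sample's span, in seconds
print(period * 1000)             # ~0.0227 milliseconds
print(period * 1000000)          # ~22.68 microseconds
```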
 
..Anyway, what I was wondering about was something similar - the smallest amount of sound that still contained the recognizable characteristics of the instrument that made the sound. But a lot of that turns on who's listening, right?

And one level further in - I was wondering exactly how much information a sample carries, and how much of the sound of the instrument a sample represents.

I'll take a guess (that's all it is, seat-of-the-pants style here :D): None.
Second guess: without overtones to define and add context, all you have is a voltage. Not even pitch.
 
Nope.

SPL is related to, or rather dependent on, amplitude and the density of the sound-carrying medium (gas/liquid, etc). If you have a steady-state DC offset there is no sound pressure. In the physical world, if you put a steady positive DC offset through an amp and connect a speaker to it, all you'll have is a speaker cone that's exerted forward but it will not be vibrating/moving...since it wouldn't be moving/vibrating it wouldn't be producing any waves, which in turn won't produce any sound pressure, which will mean that you will not have a sound pressure level.

One way to look at DC is to look at it as the "rest" or "pivot" point at which your wave will oscillate. In normal operation, DC = 0.


I understand that if you have a speaker cone that is exerted forward it's not going to translate to a sound pressure level in a room. But that's because the speaker is in a box that is likely not airtight. The net air pressure in the room is the same whether the speaker is exerted or not. If you have an airtight, air-filled enclosure with a speaker cone on one of its walls facing inward, the exertion of that speaker will indeed increase the air pressure in the enclosure. Sound is only an oscillating variation in the pressure of a medium. In that scenario, the voltage supplied to the speaker to exert it will be proportional to the air pressure in the enclosure.

If you sample a wave that has a DC offset of zero (as in your second image), your samples are not all going to read zero. As you said, it's only the pivot point that changes with DC offset. Instead, your samples will oscillate between peak and trough, and the zero DC offset means that the extrema of the wave are equidistant from a zero volt reading. If the DC offset is +.5, then the extrema will be equidistant from +.5 volts.

But then what are your bits actually measuring? They are measuring volts DC, not DC offset. DC offset is just a measure of deviation from having a "pivot point" at zero volts.

All I'm saying is that when a sample is taken, it's measuring pressure at a point in time in a particular medium. While I agree with your earlier post that this won't reflect timbre, amplitude, etc., it does reflect some measure of information about the sound source.
 
We have solved the riddle of all those trees that fell in the forest without anyone hearing them............ they fell FAST

Nice. :D

I've always wondered about that. Fast trees are quiet trees.

Whispering pines.
 
noisewreck, I think I've figured out where I'm not understanding your point. You are right that the sample only measures the difference in voltage from zero. In that sense, it is a measure of the DC 'offset'. But when you defined DC offset, you were referring to it as a "pivot point" for a wave. To my knowledge, a single sample in a waveform cannot tell you what that waveform's "pivot point" is. So we're talking about two different understandings of "DC offset".

And I retract my point about the sample measuring "pressure level". Without a baseline there is no relative pressure level to measure.

So a sample only measures voltage, right (i.e. the distance from zero voltage)?

I'm not trying to be a pain in the ass here. ;)
 
A single sample is like a single letter of the alphabet; while it does have intrinsic value, it doesn't really mean much unless or until it's taken within the context of its neighbor samples to spell out a meaning.

G.
 