Phase adjustment plugin????

You know, I just thought of a great analogy for phase shift without time delay.

Think about the profile of a corkscrew spinning on a hard, flat surface. The corkscrew's height doesn't change (time), and the thickness (amplitude) and spacing (frequency) don't change throughout the length; just the phase does.
 
ebeam said:
What is the difference between a perfect sine wave and a perfect cosine wave (with the same amplitude and frequency) starting and ending at the same time? Phase. No time difference, just 90 degrees out of phase.
The definitions are completely interchangeable for sine waves. Time, phase, same thing. Advance the sine wave a quarter of a cycle and you produce the cosine wave, though the attack has been time shifted. Mathematically, if someone said "shift the phase of this sine wave 10 degrees" you would simply time delay or advance the wave 1/36 of a cycle and use your original time points for attack and release to truncate the wave. That is, by definition, what you are doing when you manipulate the phase of a sine wave... time shifting.

Take a look at this neat applet: http://www.udel.edu/idsardi/sinewave/sinewave.html Leave only the blue box checked, and play with the third numerical value - the phase. When you change it by 10, 20, 30 degrees... doesn't that look like simply a time delay of the waveform to you? It is.

I understand what you are getting at, but mathematically it's the same result. I suppose you could go through the trouble of examining a waveform, calculating its frequency, calculating the offset at each sampling point (in the digital signal) to phase shift it, and be done with it. Or, you could slide the waveform in time by the phase shift required, and retain the original time-dependent attack and release times to truncate the shifted wave. Identical end result, but probably much less computationally intensive to manipulate the waveform in the time domain instead of offsetting every single sample point.
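Here's a quick numpy sketch of that equivalence, if anyone wants to see it in code (the sample rate, test frequency, and variable names are just illustrative choices, not anything from a real plug-in):

```python
import numpy as np

fs = 44100                     # sample rate in Hz, arbitrary choice
f = 1000.0                     # test frequency in Hz
phase = np.deg2rad(10)         # the "10 degrees" from the example above

t = np.arange(0, 0.01, 1 / fs)

# Option 1: add the phase offset into the argument at every sample point
shifted_by_phase = np.sin(2 * np.pi * f * t + phase)

# Option 2: slide the same waveform in time by phase / (2*pi*f) seconds
dt = phase / (2 * np.pi * f)   # 10 degrees at 1 kHz is about 27.8 microseconds
shifted_by_time = np.sin(2 * np.pi * f * (t + dt))

# For a steady sine the two are numerically identical
assert np.allclose(shifted_by_phase, shifted_by_time)
```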

You can calculate the phase difference between these two and redraw one to match the other without changing the attack and release at all.
Look at that wave applet again, and look at how the attack looks when you have a phase of 0, 90, 180 degrees. There is a very subtle sonic difference in the attack being in a trough, midpoint, or crest of the wave. You yourself said that humans can hear the difference in a polarity change (which I highly doubt in the case of a sustained pitch, but perhaps we can detect differences in attack), which means the attack is still in the midpoint, just that it ascends in level in one case, and descends in the other. Personally, I think such sonic differences will be small in most cases. Perhaps for low frequencies it is more noticeable. What I would be worried about is some cumulative effect of altering the attack characteristic of every single frequency component in the waveform. That might have a noticeable effect... or it might not. Never done a listening test on that one. :)

And a Fourier transformation is a method for solving/manipulating a differential equation...
Well, it's actually an integral transform that maps a signal from the time domain to the frequency domain. Been a while since I did that though, and don't plan on doing it again if I can avoid it. :D
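For what it's worth, the discrete version is easy to poke at in numpy: the FFT hands you a magnitude and a phase for every frequency bin, which is exactly the "phase per frequency" picture this thread keeps circling. A tiny sketch (the numbers are arbitrary):

```python
import numpy as np

fs = 8000                                          # sample rate in Hz, arbitrary
t = np.arange(0, 1.0, 1 / fs)
x = np.sin(2 * np.pi * 100 * t + np.deg2rad(30))   # 100 Hz tone with a 30 degree offset

X = np.fft.rfft(x)
freqs = np.fft.rfftfreq(len(x), 1 / fs)

k = np.argmin(np.abs(freqs - 100))     # the bin sitting on 100 Hz
print(np.abs(X[k]))                    # magnitude of the 100 Hz component
print(np.rad2deg(np.angle(X[k])))      # phase of the 100 Hz component
```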
 
ebeam said:
You know, I just thought of a great analogy for phase shift without time delay.

Think about the profile of a corkscrew spinning on a hard, flat surface. The corkscrew's height doesn't change (time), and the thickness (amplitude) and spacing (frequency) don't change throughout the length; just the phase does.
Yep, pretty good analogy. But, look at the very tip of the corkscrew (the attack). Doesn't it look different depending on what position you revolve the corkscrew into (phase)? ;)

You can keep the attack at the same place, but it will look different. When I say time shifting the wave, that is really independent of what you do at the beginning or end. You can truncate a wave at any point you choose.
 
I guess I didn't understand what you were saying before. A frequency-dependent time delay followed by adjustment of the attack and release would be mathematically the same as a phase adjustment. Makes sense I guess. I wonder what that would sound like. The shape of the waveform would definitely be changed, and by changing the relative phase of different frequencies, you would be altering their interactions, and I assume that would change the sound. But, if you could alter the sound so that important frequencies aren't cancelled when combining two signals, a plug-in for this might be useful, or at least interesting.
 
ebeam said:
I wonder what that would sound like. The shape of the waveform would definitely be changed, and by changing the relative phase of different frequencies, you would be altering their interactions, and I assume that would change the sound.

Yeah, I'm kinda curious myself now that we've talked about it. I would love to hear a smooth sweep through phase differences to see what the effect is.

However, AFAIK all analog high and low pass filters alter the phase (dependent on frequency) of the sound passing through them as well. If I remember correctly, low pass inductors lag the sound based on frequency, and high pass capacitors lead the sound.
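If anyone wants to check that, here's a little scipy sketch of the phase response of an idealized first-order low-pass/high-pass pair (the 1 kHz corner and test frequencies are just example values):

```python
import numpy as np
from scipy import signal

wc = 2 * np.pi * 1000                                  # 1 kHz corner, in rad/s

lowpass  = signal.TransferFunction([wc], [1, wc])      # H(s) = wc / (s + wc)
highpass = signal.TransferFunction([1, 0], [1, wc])    # H(s) = s / (s + wc)

w = 2 * np.pi * np.array([100.0, 1000.0, 10000.0])     # test frequencies in rad/s
_, _, phase_lp = signal.bode(lowpass, w)
_, _, phase_hp = signal.bode(highpass, w)

print(phase_lp)   # negative (lag), heading toward -90 degrees above the corner
print(phase_hp)   # positive (lead), heading toward +90 degrees below the corner
```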
 
Well, EQ has an effect on phase. I wonder if that Waves Linear Phase EQ does something similar to what we are talking about to minimize this effect. It has some serious latency from what I understand.
 
Let me understand this -

Looking at it from a simple viewpoint -

If you are just simply switching the polarity, you end up with an overall wave change of X1 = -X, i.e. every point of the wave is inverted.
In other words, this has nothing to do with delay (which is cheating by shifting the whole signal by the wavelength of one low frequency and hoping that cures everything for the other frequencies).

However, this is still only an overall view of the wave when the phase is switched like this, and therefore it doesn't deal exactly with the higher frequencies. (Does this matter? Can anyone hear this?!)

In theory one would have to have a multi-band phase switcher to do the job properly, which theoretically would need infinitesimally narrow frequency bands to cover the whole frequency spectrum.
Presumably this would not be possible/practical to come close to, and if a phase-band approach were attempted, the resulting bands would tend to have some sort of effect on the phase themselves. (Though Waves claim to have minimised this with their LMB compression plug-in)...
ZZZzzzzzzzzzzzz...

Fuel to the fire...
where did I put those matches?...
 
I'd love to see such a plug-in. From what I understand from following this thread, the theory is to try and break a complex waveform into frequency bands and process from there. My question is... do you really have to worry that much about the higher frequencies? I mean, a 1kHz sine wave has a cycle (period) of 1ms. That means that the most you'd have to nudge it in any one direction would be half of that... can DAWs even manage that? More importantly... would we even be able to notice it?

You have to shift lower frequencies much further, of course... and this is probably where we'd hear the major difference.

IMO the closest you're going to get in the digital realm is frequency ranges... the digital domain is, of course, made of discrete samples... so applying "continuous" calculus to it is absurd, you'd agree... we couldn't possibly create a plugin that aligned 102Hz, 102.1Hz, 102.2Hz... etc (like this over the whole sound spectrum) perfectly anyway.

Make it 3/4 bands -- adjustable -- with the option of shutting off bands for performance reasons. You could just dial in the "problematic" areas with your ears--for example--center the band around 315 Hz and adjust out "mud"... this would be highly dependent on the mix. Turn on more bands for more processing... simple as that.
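To be clear about what I'm picturing, something along these lines (a crude band-split sketch; the filter type, band edges, and delay amount are all made up, and the band filters add some phase shift of their own, which is the catch mentioned earlier in the thread):

```python
import numpy as np
from scipy import signal

def band_delay(x, fs, band, delay_samples):
    """Isolate one band with a band-pass filter and shift just that band."""
    sos = signal.butter(4, band, btype='bandpass', fs=fs, output='sos')
    banded = signal.sosfilt(sos, x)
    return np.roll(banded, delay_samples)   # crude circular shift, for illustration only

fs = 44100
x = np.random.randn(fs)                     # one second of noise as a stand-in signal

# Rough 3-band version: nudge only the "mud" region around 315 Hz by 20 samples
low  = band_delay(x, fs, (20, 250), 0)
mid  = band_delay(x, fs, (250, 500), 20)
high = band_delay(x, fs, (500, 20000), 0)
y = low + mid + high
```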

So, let's get on Waves/McDSP/Antares/PSPAudioware etc. about this. This could kick ass :)


Chad
 
participant said:
... so applying "continuous" calculus to it is absurd, you'd agree... we couldn't possibly create a plugin that aligned 102Hz, 102.1Hz, 102.2Hz... etc (like this over the whole sound spectrum) perfectly anyway.

Make it 3/4 bands -- adjustable -- with the option of shutting off bands for performance reasons. You could just dial in the "problematic" areas with your ears--for example--center the band around 315 Hz and adjust out "mud"... this would be highly dependent on the mix. Turn on more bands for more processing... simple as that.

What about dividing the bands up into octaves or fractions of octaves?
 
Even after you break a signal down into its constituent frequency components, you can't "align" those components to one another. You can shift them in time, and thus in phase, but there's really no meaning to the term "align" in this context.

If one frequency was 1.13584532189 times another frequency, how would you "align" them? It might take hours before you get any type of repetition in their interference pattern.

No, I think you can simply change the sound. "Align" is a pseudo-term... it's simply what sounds good, and what doesn't.
 
The way I see it, the main purpose of such a device would be to compensate for phase differences in microphones that are placed at different distances from a source. Given any distance you could calculate the phase shift for any frequency. So this plug would have a knob for distance, essentially, and shift all frequencies accordingly. So, in practice, most high frequencies are transient (right?), so by the time a transient passes the second mic, the first one no longer has that signal anyhow. I imagine you would mainly hear the effect on low frequencies, or sustained higher frequencies. I don't think the main output is the audible effect on one signal, but how such a device changes the interaction between two signals.
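Roughly what the numbers look like (the speed of sound, spacing, and frequencies below are just example values): a fixed extra distance means one fixed time delay, but a different phase shift at every frequency.

```python
import numpy as np

c = 343.0                                    # speed of sound in air, m/s (approximate)
d = 0.30                                     # extra mic distance in metres, made up

freqs = np.array([100.0, 1000.0, 10000.0])   # Hz
delay = d / c                                # one time delay for everything (~0.87 ms)
phase_deg = 360.0 * freqs * delay            # but a different phase at each frequency

print(phase_deg)   # roughly 31, 315, and 3150 degrees respectively
```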


Participant-
You should, in theory, be able to shift a waveform by 1 sample - which at a 44.1kHz sampling rate would be increments of approx 1/44th the wavelength for a 1 kHz signal, roughly 8 degrees of phase shift.
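Spelling that arithmetic out (nothing here beyond the numbers in the sentence above):

```python
fs = 44100.0                           # sample rate in Hz
f = 1000.0                             # signal frequency in Hz

one_sample = 1.0 / fs                  # about 22.7 microseconds
fraction_of_cycle = f / fs             # about 1/44th of a 1 kHz cycle
degrees = 360.0 * fraction_of_cycle    # about 8.2 degrees of phase per sample
print(one_sample, fraction_of_cycle, degrees)
```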

Calculus is done all the time with DSP. That's what an EQ does. Once you have enough data, those discrete points start to look so much like a smooth line that you wouldn't be able to tell the difference. Obviously, the bit and sampling rate will have a major effect on how precise the calculations are and how high of a frequency can be manipulated.
 
Bigus Dickus said:
If one frequency was 1.13584532189 times another frequency, how would you "align" them?

My point exactly. That's why you should process frequency bands.

You should, in theory, be able to shift a waveform by 1 sample - which at a 44.1kHz sampling rate would be increments of approx 1/44th the wavelength for a 1 kHz signal, roughly 8 degrees of phase shift.

My point is would it be audible in the higher frequencies? How could it be? The wave forms just get tighter and tighter.

Calculus is done all the time with DSP. That's what an EQ does. Once you have enough data, those discrete points start to look so much like a smooth line that you wouldn't be able to tell the difference. Obviously, the bit and sampling rate will have a major effect on how precise the calculations are and how high of a frequency can be manipulated.

I just don't see the point of doing this phase adjustment on any kind of extremely accurate scale, since you're applying continuous (smooth) calculus to samples. It's like measuring something with a micrometer, and then blasting it in two with a shotgun :)
 
uh...

participant said:
My point is would it be audible in the higher frequencies? How could it be? The wave forms just get tighter and tighter.

Ummm... forget I said that :D Here I am, thinking about one (1) cycle of a high-frequency sound wave... and I say something dumb like "how could you hear phase adjustment on high frequencies?" Duh... Ya hear phasing problems on cymbals... don'tcha, dumbass? :)

Of course... the differences in phase between two soundwaves with similar HIGH FREQUENCY content most certainly could be heard... we're talking about many more than a single cycle :o

Duh... somebody kick me :)
 
and then.......and there........ of course............you have all the reasons why there is such a thing as "time align" in Pro Tools??
 
Yeah, but the whole point is that "time align" time shifts all frequencies in the signal by the same reference time offset.

A true "phase align" would time shift frequency components as a function of their frequency, so different frequencies would have a different time offset.

Make sense?
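Here's a rough numpy sketch of the distinction, just to make it concrete (this is only an FFT-based illustration, not how any shipping plug-in necessarily works; the function names and the 500 Hz split are invented for the example):

```python
import numpy as np

def time_align(x, delay_s, fs):
    """Plain delay: every frequency gets the same time offset."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1 / fs)
    return np.fft.irfft(X * np.exp(-2j * np.pi * f * delay_s), n=len(x))

def phase_align(x, phase_for_freq, fs):
    """Hypothetical 'phase align': each bin gets its own phase offset,
    so the implied time offset differs from frequency to frequency."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1 / fs)
    return np.fft.irfft(X * np.exp(-1j * phase_for_freq(f)), n=len(x))

# Example: rotate everything below 500 Hz by 90 degrees, leave the rest alone
fs = 44100
x = np.random.randn(fs)                     # one second of noise as a stand-in signal
y = phase_align(x, lambda f: np.where(f < 500, np.pi / 2, 0.0), fs)
```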
 
Closest you can get to that might be using the VocAlign plug-in, with peak align.
However, perfect alignment in the way you suggest might suck the life out of some sound.
 