S/PDIF, bad quality

Vadim

New member
When I play or record over an S/PDIF optical cable connected from a KORG Triton Studio keyboard to an E-mu 1212m audio card on a PC, the quality is terrible: it's a little distorted, as if it were running at something like a 20 kHz sampling rate, even though the settings are at 44.1 kHz. But when I switch the audio to 48 kHz, it sounds better.
Why doesn't it sound right at 44.1 kHz?
 
Vadim said:
When I play or record over an S/PDIF optical cable connected from a KORG Triton Studio keyboard to an E-mu 1212m audio card on a PC, the quality is terrible: it's a little distorted, as if it were running at something like a 20 kHz sampling rate, even though the settings are at 44.1 kHz. But when I switch the audio to 48 kHz, it sounds better.
Why doesn't it sound right at 44.1 kHz?
Are BOTH devices set to 44.1? They need to agree. With the limited info, it sounds to me like they're out of sync.
 
fraserhutch said:
Are BOTH devices set to 44.1? They need to agree. With the limited info, it sounds to me like they're out of sync.

I don't see how. S/PDIF carries sync information with the signal. The drivers would have to be really broken for you to have a problem like that (i.e. they would have to be oblivious to the rate at which the audio interface was actually running).
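
To make the "carries sync with the signal" point concrete: S/PDIF uses biphase-mark coding, a self-clocking line code in which every bit cell starts with a level transition, so the receiver can recover the sender's clock straight from the data stream. A minimal sketch of the encoder, just for illustration:

```python
# A minimal sketch of biphase-mark coding (BMC), the self-clocking line code
# S/PDIF uses. Every bit cell begins with a level transition, and a 1 bit adds
# a second transition mid-cell, so the receiver recovers the sender's clock
# directly from the data stream; no separate sync connection is needed.

def bmc_encode(bits, level=0):
    """Encode a bit sequence as two half-cell levels per bit."""
    out = []
    for bit in bits:
        level ^= 1        # transition at the start of every cell (the clock)
        out.append(level)
        if bit:
            level ^= 1    # extra mid-cell transition encodes a 1
        out.append(level)
    return out

print(bmc_encode([1, 0, 1, 1, 0]))
# [1, 0, 1, 1, 0, 1, 0, 1, 0, 0]
```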
 
Even if the sample rates were not synced properly (which is possible), the result would most likely be that the audio would play back at the wrong pitch, not that it would be distorted or otherwise messed up.

When you say it sounds better at 48k, do you mean it sounds great or that it just doesn't sound as bad?

One possibility I would look into is that the optical cable you are using has gone bad. Optical cables can break down, in my experience mostly because the ends get damaged in some way and the light can't pass through perfectly. The end of an optical cable is basically like a little lens. If the lens gets damaged, cracked, or otherwise compromised, it can have a very serious effect on the audio passing through.
 
SonicAlbert said:
When you say it sounds better at 48k, do you mean it sounds great or that it just doesn't sound as bad?
It sounds good, the same as through an analog cable, though I didn't listen to it through headphones to really make sure it's OK.
 
SonicAlbert said:
Even if the sample rates were not synced properly (which is possible), the result would most likely be that the audio would play back at the wrong pitch, not that it would be distorted or otherwise messed up.

That shouldn't be possible unless either A. the drivers or hardware suck, or B. you have misconfigured the interface to take input from S/PDIF while using the internal clock. Make sure the interface is set to "External Clock from S/PDIF" or some such.

The receiving device should ALWAYS be synchronized to the clock from the sending device unless either A. the sending device is synchronized to the clock from the receiving device (with a word clock cable, for example), or B. both devices are synchronized to some third source.

As for the "just the wrong pitch" thing, no, it wouldn't. If the sample rates of the two devices differ, you would not just get the wrong pitch (though you would get that). What you say would only be true if the buffer size for the audio interface were infinitely large and if the receiving device always waited until it had received enough data to play to the end before it started playing. However, in the real world, the buffer in the receiving device is of a finite size, and the receiving device always starts playing immediately. For this reason, in addition to a pitch problem, you would also get all sorts of distortion.

Device A sends out 48 kHz worth of data and Device B expects 44.1 kHz; that means that every second, Device B gets an overrun. The data from the source either wraps around, overwriting unplayed data, or gets dropped on the floor to prevent that from occurring. Either way, the net effect is that 3900 samples per second get thrown away, either as one large chunk or as many smaller chunks, depending on the interface's buffer size, packet size, read head offset, etc.

Device A sends out 44.1 kHz worth of data and Device B expects 48 kHz, so Device B ends up with an underrun: it needs more data to play, but no data has arrived. It therefore plays 3900 samples of silence per second, either as a single chunk or as many smaller chunks, depending on the interface's buffer size, packet size, read head offset, etc.
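
Here is a toy simulation of both scenarios. The 4096-sample FIFO and the 10 ms chunking are assumptions for illustration, not any real driver's logic, but they show the mismatch converging on roughly 3900 lost or silent samples per second:

```python
# Toy model of a rate mismatch across a finite receive FIFO (assumed 4096
# samples, 10 ms chunks; not any real driver's logic). Surplus samples get
# dropped on the floor (overrun); shortfalls play out as silence (underrun).

def simulate(src_rate, dst_rate, seconds=3, fifo=4096):
    backlog = dropped = silent = 0
    for _ in range(seconds * 100):        # walk through 10 ms steps
        backlog += src_rate // 100        # the sender delivers its chunk
        if backlog > fifo:                # FIFO full: overrun, surplus is lost
            dropped += backlog - fifo
            backlog = fifo
        want = dst_rate // 100            # samples the receiver must play
        got = min(backlog, want)
        silent += want - got              # underrun: shortfall plays as silence
        backlog -= got
    return dropped, silent

print(simulate(48000, 44100))  # fast sender: ~3900 samples/s dropped once the FIFO fills
print(simulate(44100, 48000))  # slow sender: 3900 samples/s of silence inserted
```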
 
If you run a 48k session at a 44.1k clock there are no data overruns. The 48,000 samples per second of material pass through at a rate of 44,100 samples per second. Nothing gets truncated or stuffed through too fast; the data just goes through at a slower rate, which lowers the pitch. If you run a 44.1k session at a 48k clock, the 44,100 samples get run through at 48,000 samples per second, also changing the pitch, but in the other direction.
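
If it helps to put numbers on that, a quick back-of-the-envelope check (in Python, purely for illustration) of how far off the pitch would be:

```python
# Pitch shift from playing 48 kHz material at a 44.1 kHz clock: everything
# slows down by the ratio of the two rates, about 1.47 semitones flat
# (and the same amount sharp in the other direction).

import math

ratio = 44100 / 48000              # playback clock / recording clock
semitones = 12 * math.log2(ratio)  # shift in equal-tempered semitones
print(f"speed x{ratio:.3f}, {semitones:+.2f} semitones")
# speed x0.919, -1.47 semitones
```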

But that assumes both devices are locked to the same sample rate, just the wrong one.

Indeed, it would be almost impossible to have that happen unless there were settings grossly wrong in the receiving device.

So it is possible that the receiving device in this case is set to internal sync *and* the wrong sample rate. I'm not sure whether the result would be distortion or what it would sound like, though. Perhaps it would just shut down?

Some equipment only supports one sample rate, and a keyboard might indeed have a single fixed rate of 48k. So the key here would be to check that the interface is locked to the S/PDIF input and that the sample rate is set to 48k.
 