I need a buffer workshop.

ScienceOne
I need buffers explained to me. What is the difference between the buffer settings in my program and the buffer settings in my sound card? Are they both just latency settings for each device? And what does DMA stand for, and what does it mean?
 
Buffers are basically just temporary storage areas for data before it's sent on to another device. Your software can use a buffer to store a portion of audio, process it there with effects, and then send the audio out to your sound card. The same idea applies outside audio: a word processor can use a buffer to hold everything you type, and only when you save the file is it physically written to disk. That's much more efficient, since the computer isn't constantly accessing the disk just to make a change.
Hardware buffers are similar: your settings determine how much audio is held in the buffer before being sent out to your speakers. A general rule of thumb: when recording, you want your buffers low enough that there isn't any audible delay while you're monitoring, but high enough that the computer isn't overloaded with information and giving you errors back in the form of audible glitches. When mixing, you want your buffers high enough that the computer has time to process the audio, but low enough that there isn't too much delay between pressing play and hearing the first note.
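To put rough numbers on that trade-off: the monitoring latency a buffer adds is just its size in samples divided by the sample rate. A quick sketch (assuming a 44.1 kHz session; the function name is just for illustration):

```python
def buffer_latency_ms(frames: int, sample_rate: int = 44100) -> float:
    """Time, in milliseconds, to fill (or drain) one buffer of `frames` samples."""
    return frames / sample_rate * 1000

# Typical settings: small buffers while tracking, large while mixing.
for frames in (64, 256, 1024, 4096):
    print(f"{frames:5d} frames -> {buffer_latency_ms(frames):7.2f} ms")
```

A 64-sample buffer adds only a millisecond or two (fine for monitoring), while a 4096-sample buffer adds close to a tenth of a second, which you'd notice when you hit play but which gives the CPU far more headroom.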

DMA stands for Direct Memory Access. It's a way of passing data directly from memory to another device without going through the CPU first.
 
Benny nailed it, and I'll clarify just one point: DMA and buffers work together, both with sound cards and hard drives. As data arrives from either the sound card or hard drive it is placed into the buffer automatically via DMA, bypassing the CPU which may be off doing other things like handling an EQ plug-in or reformatting a Word document. Then, when the program has a moment and "comes up for air," it can read that data from the buffer. The key is that the initial transfer of data from sound card to buffer happens in the background, since it's critical that no audio be lost as it arrives.

--Ethan
 
Ethan Winer said:
Benny nailed it, and I'll clarify just one point: DMA and buffers work together, both with sound cards and hard drives. As data arrives from either the sound card or hard drive it is placed into the buffer automatically via DMA, bypassing the CPU which may be off doing other things like handling an EQ plug-in or reformatting a Word document. Then, when the program has a moment and "comes up for air," it can read that data from the buffer. The key is that the initial transfer of data from sound card to buffer happens in the background, since it's critical that no audio be lost as it arrives.

That's very close, but subtly off.

As you said, when data is being transferred via DMA, the hardware moves the data from a device into RAM for you. However, when it finishes, the DMA engine signals the CPU that the data is ready by raising an interrupt, which immediately interrupts whatever is happening on the system.

The sample buffer being scribbled into by the DMA engine is a ring buffer of a certain size. Basically, the DMA hardware starts writing at the beginning, and when it gets to the end, it wraps back to the beginning of the buffer. The CPU then copies data from that buffer to the application (by various mechanisms, depending on the OS). Essentially, the CPU and the DMA engine are constantly chasing each other. :)

If the CPU doesn't get the audio data from that buffer to the app in time, the DMA engine will wrap around and begin overwriting data that hasn't been read yet by the CPU. Thus, it is still possible to lose audio data.
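That chase between the DMA engine (the writer) and the CPU (the reader) can be sketched with a toy ring buffer. This is a simplified model, not a real driver; all the names are made up for illustration:

```python
class RingBuffer:
    """Toy model of a DMA sample ring: a hardware writer chasing a software reader."""

    def __init__(self, size: int):
        self.buf = [None] * size
        self.size = size
        self.write_pos = 0   # where the "DMA engine" writes next
        self.read_pos = 0    # where the "CPU" reads next
        self.count = 0       # unread samples currently in the ring
        self.overruns = 0    # samples lost because the reader fell behind

    def dma_write(self, sample):
        """Hardware side: always succeeds, overwriting old data if the ring is full."""
        if self.count == self.size:
            # Reader was too slow: the oldest unread sample is lost.
            self.read_pos = (self.read_pos + 1) % self.size
            self.count -= 1
            self.overruns += 1
        self.buf[self.write_pos] = sample
        self.write_pos = (self.write_pos + 1) % self.size
        self.count += 1

    def cpu_read(self):
        """Software side: return the oldest unread sample, or None if caught up."""
        if self.count == 0:
            return None
        sample = self.buf[self.read_pos]
        self.read_pos = (self.read_pos + 1) % self.size
        self.count -= 1
        return sample

ring = RingBuffer(4)
for s in range(6):       # the writer produces 6 samples into a 4-slot ring
    ring.dma_write(s)    # before the reader gets a chance to run...
print(ring.overruns)     # ...so the 2 oldest samples were overwritten: prints 2
print(ring.cpu_read())   # oldest surviving sample is now 2
```

The point of the sketch is the failure mode in the paragraph above: the hardware writer never blocks, so if the CPU doesn't drain the ring fast enough, unread audio is silently overwritten, which is exactly the dropout/glitch you hear when buffers are set too small.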

There may be multiple layers of software buffers between the sample buffer (the dumping ground for the DMA engine's output) and the app, all of which are of a fixed size, hence the reason OS audio architectures are such a pain in the backside to get right. :D

The advantage of DMA is that the actual effort of reading data from the device is avoided, saving CPU cycles for other things.
 
So is it like this: the program and the speaker output (and the hard drive, when recording) are the end of the line for the data. The interface and the hard drive (on playback) are the two separate sources of data. Between a source and a final destination there is a buffer (and a processor). The buffer is very fast to access but low in capacity; it fills up, then empties out when the processor has a second to service it. That way the processor doesn't have to be constantly preoccupied with a trickle of data from the source; it can grab the information in larger chunks.

Is this right at all?
 