What is SPLS Block Size in REAPER?

LazerBeakShiek

Tell Tony, Eddie and the Cruisers IS here
What is SPLS block size again? In the REAPER options there is a block/sample size. I kept it low at 144 (the lowest) to decrease latency. This might not be a perfect fix: my recordings sound thin. Could my sample size be too small, and that's doing it? When I increase it from 144 to 4096 (a setting I saw in Todd's screenshot), the drums hit better. Have you tried recording a source with both a low and a high sample setting? What were your results? Will this make better-quality recordings, like MP3 bit rate does?


If I use a 128 sample size, I don't need to nudge recordings as far to align the project. Great for recording MIDI input. But is it making cheap-quality recordings from a microphone?
 

SPLS is basically buffer size. Higher figure = higher latency, higher stability. Lower figure = lower latency, lower stability.

The quality of the recorded signal is determined by the sample rate and bit depth, i.e. in your screenshot, by 48kHz and 24-bit. SPLS does not affect recording quality and is not responsible for any thinness.

The quality of output (WAV or MP3) is determined by the settings in the Render dialog box.
 
That means samples. Building up a bigger store of samples (buffer) before sending them to be recorded or played means there's less of a chance of running out (which would cause clicks and pops or similar effects), but it also means a longer wait before they're heard (latency). Other than the possible clicks and pops or latency, it won't affect sound quality.
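To make that tradeoff concrete, here's a toy Python model of the buffer deadline. It has nothing to do with REAPER's actual internals, and the hiccup length and probability are invented, but it shows why a bigger buffer absorbs processing stalls that a small one can't:

```python
import random

# Toy model (not real audio code): each buffer must be processed before
# the hardware needs the next one, i.e. within buffer_samples / rate seconds.
# Occasional OS hiccups add a fixed delay; a bigger buffer absorbs them.

def count_dropouts(buffer_samples: int, rate: int = 44100,
                   callbacks: int = 10_000, seed: int = 1) -> int:
    """Count buffers that miss their deadline over a simulated session."""
    random.seed(seed)
    deadline_ms = buffer_samples / rate * 1000   # time available per buffer
    dropouts = 0
    for _ in range(callbacks):
        # Base processing cost plus a rare (invented) 8 ms system hiccup.
        work_ms = 0.5 + (8.0 if random.random() < 0.01 else 0.0)
        if work_ms > deadline_ms:
            dropouts += 1                        # this buffer would click/pop
    return dropouts

# 128 samples at 44.1 kHz gives a ~2.9 ms deadline, so every hiccup clicks;
# 1024 samples gives a ~23 ms deadline, so the same hiccups pass unheard.
print(count_dropouts(128), count_dropouts(1024))
```

The exact numbers are arbitrary; the point is that the dropout count falls to zero once the deadline exceeds the worst-case stall, which is exactly the stability you buy with latency.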

If you have to nudge recordings you probably need to manually adjust the record latency (which is different from the monitoring latency). Normally, the software detects it accurately and sets it automatically so there's never a need to nudge a recording, but sometimes it's not detected accurately.
 
Yeah, that block thing will not cause "thin" recordings. I'm not sure how the drums could be "hitting" better with increased buffer settings either.

I do have some issues on the laptop of having to nudge recorded tracks occasionally because of a latency issue, but you should NEVER have to do that. Automatic latency compensation should take care of it. I'm still trying to figure out why it happens only on the laptop. I suspect at this point it has to do with the interaction of Samplitude ProX2 and the Focusrite 2nd-generation 18i20. It did it on the old laptop too, so I don't think it's that piece of hardware. I run ProX2 on the desktop in the main studio and it never happens there.

I have another 18i20 in a second bedroom (it's a little redundant since I never record in there) which I will bring out to the remote recording space to swap with the suspect 18i20. If it does it with the other 18i20, well then, I think it will be time to get rid of the Focusrite, get a different interface, and hope for the best.

It's really aggravating to be in the heat of things and have to worry about nudging a track back along the timeline to align it. Luckily my bandmate isn't a client, though it does slow things down. I've adapted to the point where I can just grab the track and move it to where I think it ought to go by sight, and I'm usually right. But really now, it's 2021.
 
DAWs that I've used have a manual record offset setting somewhere in the options. You may be able to rescan the interface to get the right automatic setting.
 
Yes, I have played with that.

That SPLS setting goes from 144 to 32,768... just sayin'.

Further, a slow computer like my NEW laptop at 1.6 GHz cannot handle the higher settings. My desktop is old, but at 3.5 GHz it can do it. So I figured it was quality.

Why does it go up to 32,768 then? Just because?
 
Just because the CPU is listed as 1.6 GHz doesn't mean it's always running there. It will ramp up as demand increases.

It's a simple calculation to find the minimum latency for each block size: buffer size divided by sample rate. A 32K buffer at a 44.1K sample rate gives you about 3/4 of a second of latency (32768 / 44100 ≈ 743 ms)! This is one case where a higher sample rate is a plus, if the laptop can process the incoming data efficiently.

At 44.1kHz sample rate:
64 samples = 1.45ms delay
128 samples = 2.9ms delay
256 samples = 5.8ms delay
512 samples = 11.6ms delay

At 96kHz sample rate:
64 samples = 0.67ms delay
128 samples = 1.3ms delay
256 samples = 2.7ms delay
512 samples = 5.3ms delay
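Those figures are just the buffer length divided by the sample rate. A few lines of Python (my own sketch, not anything from REAPER) reproduce the table:

```python
# Minimum buffer latency: samples / sample rate, converted to ms.

def latency_ms(samples: int, rate_hz: int) -> float:
    return samples / rate_hz * 1000

for rate in (44100, 96000):
    print(f"At {rate} Hz:")
    for samples in (64, 128, 256, 512):
        print(f"  {samples:4d} samples = {latency_ms(samples, rate):.2f} ms")
```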
 
Your example shows less latency in ms at 96kHz than at 44.1k. Is that correct?

Why is this then?
Because if you have more samples per second (96k compared to 48k), each sample takes less time, so the same buffer size, e.g. 64 samples, will take up less time at the higher sample rate.
 
Record latency is not the same as input monitoring latency. Input monitoring latency is the delay you hear on a live track when monitoring it through your interface. Record latency (or record offset) is the misalignment of the latest recorded track to the previously recorded tracks.
 
Should I be recording at 96kHz then? Because that recording latency delay sucks.

This is one case where the higher sample rate is a plus if the laptop can process the data coming in efficiently.
So is a higher SPLS a plus in any way? It has to mean something. If you're not running out of samples at 512, why go up to 32K?
 
Should I be recording at 96kHz then? Because that recording latency delay sucks.


So a high sample size is a plus?
High sample rate yields lower latency because more samples per second means they're passed through the system faster.

But there's a better solution to new tracks being offset. There should be a manual setting that you can use if the automatic detection isn't right. The normal procedure is to feed the output of the interface to an input and play a short, sharp sound from the DAW (like a snare hit) and record it back to another track. Then you can zoom in and measure the precise offset and enter that value into the manual record latency correction. It may take a positive or negative value to make it right. It's a simple, though slightly tedious, procedure. If it works you won't have to nudge tracks to match.
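Here's the idea of that measurement in simulated form (Python; the function names and the single-impulse "click" are my own invention, and a real test records actual audio through the interface): the record offset is just the distance, in samples, between where the click was played and where it lands in the recording.

```python
SAMPLE_RATE = 44100

def make_click(length: int, position: int) -> list[float]:
    """A silent buffer with a single sharp impulse at `position`."""
    buf = [0.0] * length
    buf[position] = 1.0
    return buf

def measure_offset_ms(played: list[float], recorded: list[float]) -> float:
    """Offset between the peaks of the played and re-recorded click."""
    offset = recorded.index(max(recorded)) - played.index(max(played))
    return offset / SAMPLE_RATE * 1000

played = make_click(44100, 1000)
# Pretend the loopback landed the click 512 samples late.
recorded = make_click(44100, 1000 + 512)
print(round(measure_offset_ms(played, recorded), 2))  # prints 11.61
```

That measured value, positive or negative, is what you'd enter into the manual record-latency correction field.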
 
I use default values and always have. Latency and sample size is the go-to fiddle area, but too many people are tweaking it for the wrong reasons. If your computer glitches because the buffers empty, the obvious point is that your computer isn't powerful enough, so you start tweaking buffer sizes and then latency creeps in.

My young friend also runs Cubase. He's a bugger for tweaking because his system glitches, which is odd because it's a fairly new computer and not obviously underpowered. He sent me a track he was working on. I knew there would be issues with missing plugins and missing VST instruments, but we thought it worth at least trying. I found his glitch problem on first load.

I tend to use one track, end to end, with perhaps a maximum of twenty or so tracks. He records with dozens of separate tracks; the song he sent me had over a hundred, each one with racks of iZotope plugins as insert effects and processing, plus effects sends, maybe ten plugins per track. Totally crazy, and so easy to see how his computer was struggling.

I think he must use templates, so a new song loads up his starting point with ten instances of drums, each with the usual gates and dual snare mics. He might have six guitar channels, each one with racks of guitar amps and effects, etc., even if he hasn't yet recorded a bit of guitar. If you have six different guitars, fine, but some tracks are empty with their processing still active. Turning off unused processing removes the glitches, but people don't seem to do it, yet they do expect their system to cope! We expect 4K video editing to be tough for computers, but somehow we assume audio is easy to process.
 
Remember that total RTL is both recording and playback plus interim processing. I tried setting up a loop and recording a metronome tick, then measured the total round-trip delay and tried using the manual settings in Reaper. It turned out that the auto system was just fine. There were no plugins or other things to interfere or add load to the system.

As Rob said, putting lots of plugins into the system (some are really bad) can cause a lot of delay, because the system has to capture the buffer, then spend processing time modifying it. Using my normal buffer size (128 at 88K), I've done recording without issues. However, when I added Ozone Elements, things started getting glitchy. I bumped the buffer size up to 512 and it went away. Since my recordings are mostly guitar and vocal stuff, VSTs aren't involved, especially when recording, but I'm betting it would be an issue if virtual instruments were being used. That's when you probably need Marteen's Core i9 build.

There were also some differences between the ways the computer can be set up with PCIe, NVMe, and SATA devices. I don't remember which was the better performer, but there were definite differences in terms of system delays. When one device delays the normal polling process, it causes delays down the line for all other processes (hard drives, video, networking, keyboard scans, mouse scans, audio interface, etc.).

It's the same as playing computer games: a better system will run at higher frame rates and have less lag. If you can't keep up, lower the frame rate or change the video resolution to cut the lag.
 
I use a 512 buffer at a 44.1k sample rate, and my latency is 11 ms, as Rich posted above. I've never had a track line-up problem in Reaper; there is a setting for automatic alignment.
Ozone "packaged" plug-ins like Elements and Nectar have multiple modules, so they can definitely bog things down.
 
Reaper says my settings are 44.1kHz, 24-bit, 352 SPLS, ~9.6/13ms. Like others have said, I have never had a problem with track alignment, and Reaper's latency detection seems to work just fine.
 
Input monitoring latency is an entirely separate issue from record offset (or record latency, as it might be called). Input monitoring latency can only be minimized. Record offset can be completely compensated for down to the sample.
 
I mean, does this sound right? Or am I confused? Does the rest of the sound come in post, with FX? Or with compression somehow?

Taking it back to basics: one guitar and my voice with no effects. It is thin, to me at least. And the vocal is garbage. I nudged it 20+ ms and it is screwed up now. I didn't save it before the nudge, but sure did after.


It's almost automatic. My head is 3' from the 4x12 cabinet, then I go listen to the recording on the PC monitors. It sounds small. Is it ear fatigue? Or do they somehow recreate that size with FX?
 

A bit of level adjustment, EQ, compression, and reverb will make a world of difference on that. The voice is slightly thin and/or the guitar is too heavy in the low-mid range, but there's nothing inherently wrong with the audio quality. You can always do more up front to optimize it, but that's a feedback process between tracking and mixing: if the voice comes out a little thin during tracking, you modify your tracking process. Or you can do it in post. Your choice.
 
Not too concerned about the voice; it's just filler in this example. Admittedly, there is much I need to learn about recording vocals.

Guitar, however, I can play. There is a boominess to the bass in the guitar track that does not help the sound in any way. I pull the mic away and the boominess goes, but now it's quiet. So then I increase the preamp gain and get room hiss. It's a dynamic mic, so I won't place it more than a foot or two away. I tried working it out with an HPF but got fatigued by then and heard less and less difference.

I want to believe it is done with mic selection and a natural in-the-room sound, not EQ and compression. But I feel I am wrong, and my true nightmare is beginning.
 