What is the difference?

LazerBeakShiek

Mastering question: what is the difference between placing a VST master limiter on the main channel while rendering, versus rendering the tracks out to WAV first, then dragging the render back in and applying a master limiter on the mains? What is going to change?
 
Another difference is that if you do mastering as a separate process, it is easier to get cohesive and consistent mastering across a set of tracks (e.g. if you are putting together tracks for an album).
 
Both of the above are the main answers.

Processing-wise, there is no difference. However, there is some value to listening with a new set of ears in the context of the album/project that the song will be part of.
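To illustrate the "processing-wise, no difference" point, here's a minimal sketch. Assumptions: a hard clip stands in for a real limiter plugin, and plain float lists stand in for 32-bit float audio. Limiting on the master bus during the render and limiting the rendered float file are the same function applied to the same samples, so the results are identical.

```python
def mix(tracks):
    """Sum tracks sample by sample, as a DAW's master bus does."""
    return [sum(samples) for samples in zip(*tracks)]

def limit(audio, ceiling=1.0):
    """Toy 'limiter': hard-clip every sample to +/- ceiling."""
    return [max(-ceiling, min(ceiling, s)) for s in audio]

tracks = [[0.8, -0.9, 0.5], [0.6, -0.7, 0.2]]

# Path A: limiter on the master bus while rendering.
in_chain = limit(mix(tracks))

# Path B: render the naked mix to a float WAV, then limit the render.
rendered = mix(tracks)           # stands in for the float file on disk
after_render = limit(rendered)

print(in_chain == after_render)  # → True: the two paths match exactly
```

This only holds because a float render stores the bus signal losslessly; rendering to a fixed-point format with dithering would break the exact equality.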

Mastering is also setting the time between songs and making sure all the songs sound like they belong together and transition well from song to song.

Rendering a naked mix for archival purposes also gives you the ability to master it again in the future, when you might be putting together a collection of songs that needs to be cohesive. Or just master it again because you have better tools and you've gotten better at it over time.

Limiting, compression and overall EQ are hard to get rid of. Mastering a previously mastered track always ends in tears.
 
Latency question: what is the difference between using a buffer and using Reaper's latency offset? Everything being recorded is floating point, so I was thinking I did not need to set the offset manually. My UA interface has a buffer. If I turn the buffer off and use Reaper's manual offset (Preferences > Recording), it is more accurate.

Do you use the manual offset or a buffer-type thing, and with what result?

Example..
Screenshot 2022-01-07 073341.jpg

Screenshot 2022-01-07 073520.jpg
 
Screenshot 2022-01-07 113952.jpg
The buffer thing I was referring to is from UA. If there is a heavy load, the buffer will change or not be enough. So for me, during the most complex parts only, I would sometimes suffer latency lag.
 
Not the same thing. A buffer is there to match the speed of your computer, including the processing and the drives, so they can keep up: stuff is read in, even if it's in little chunks, and then the computer reads it out continuously and glitch-free.

DAWs allow you to shift events forwards or backwards in time to fix delays, which is not really latency. So if you have a whole chain of processors that introduces a permanent 200 ms delay to do all that processing, you just shift the goalposts: the DAW starts them all early, so the delay in them brings them back into time.

Latency is of course a delay, but the important thing is to keep latency that changes separate from latency that does not. Buffers take time to fill, so making them smaller, if your computer is powerful enough, helps. You have to experiment. I've got an old monitor driven by two HDMI converters and the picture lags behind; one of my computers has the audio set to just delay the audio to match. I have not adjusted buffer sizes on my audio machine at all. It works at around 9 ms on the screen display, and I can't detect this, so as for the buffer size I have? No idea!
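For a rough sense of the numbers involved, one buffer's worth of audio converts to time like this (a sketch, assuming a 48 kHz sample rate; note that a round trip through an input buffer and an output buffer roughly doubles the figure):

```python
def buffer_latency_ms(frames, sample_rate=48000):
    """One buffer's worth of audio, expressed in milliseconds."""
    return frames / sample_rate * 1000

# Common buffer sizes and their one-way latency at 48 kHz.
for frames in (64, 128, 256, 512):
    print(f"{frames:4d} frames -> {buffer_latency_ms(frames):.1f} ms")
```

This is only the buffer's contribution; converters, USB transport and plugin delay add on top, which is why measuring is better than calculating.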
 
Hmmm, ok. For the people who enter the offset manually in their DAW, are my numbers in the correct places? I don't have something backwards.

If the song is not complex, I get like 2-3 ms of latency with no offset or correction. It gets hard to tell. 15-20 ms is easy, but 2? And then I would only perceive 2-3 ms as latency during a complex part... I don't know.
 
I'm pretty sure we've been through this. There's no one right setting for everyone because systems vary. If you're going to set it manually you need to do a loopback test. Connect an output of the interface to an input. Play a signal (preferably something distinctive, like a snare track) from the output and record it to another track. Zoom way in and look for an offset in the waveforms. Then adjust the manual offset accordingly. Repeat until it matches as perfectly as possible. I'd adjust the input offset.
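The loopback comparison can also be done numerically instead of by eye. A sketch, assuming the sent and recorded takes are available as plain lists of float samples (a real test would read them from the recorded WAV files): slide one take against the other and keep the lag with the highest correlation.

```python
def record_offset(sent, received):
    """Find the lag (in samples) at which `received` best matches `sent`,
    by brute-force cross-correlation."""
    best_lag, best_score = 0, float("-inf")
    for lag in range(len(received) - len(sent) + 1):
        score = sum(s * received[lag + i] for i, s in enumerate(sent))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

sent = [0.0, 1.0, -0.5, 0.25, 0.0]         # a distinctive click
received = [0.0] * 7 + sent + [0.0] * 4    # same click, 7 samples late
lag = record_offset(sent, received)
print(lag, "samples late =", lag / 48000 * 1000, "ms at 48 kHz")  # lag → 7
```

The measured lag in samples is what you'd dial into the manual input offset, converted to ms at your project sample rate.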
 
I'm pretty sure we've been through this. There's no one right setting for everyone because systems vary.
Right.
If you're going to set it manually you need to do a loopback test.
When I measure it zoomed in, I get the same as what is in the upper corner of Reaper. More or less.

You know, is it possible that the computer just screws up at times when recording? Misplaces a ms or something.
 
You say "more accurate", but when you set a global offset, I'm pretty sure the visible waveforms stay exactly the same; the difference is in the output timing. Do you actually have a latency problem that all this faffing about with adjustments is trying to fix? I'm wondering if it's a real problem, or whether you've just noticed the screen display shows numbers you are assuming are a problem.
 
I think manually adjusting the offset helps.

How about this: could this artifact in the GUI have anything to do with it? Does your cursor line make extra lines behind it when the MIDI information gets dense?

example.
20220107_145402.jpg
I imagine that is telling me there is an issue somewhere.
 
When I measure it out zoomed in I get the same as what is in the upper corner of reaper. More or less.

You know, is it possible that the computer just screws up at times when recording? Misplace a ms or something.
Input monitoring latency is a different thing from record offset. They have some common causes, but you can't automatically assume that one value equals the other.

If your input buffer is too low and dropping samples, it's possible that the recorded audio will get out of sync. That's a different problem, with a different solution, from basic record offset.
 
Here's the most precise way to set the input latency in Reaper. It will tell you exactly how many samples behind the system is running. You just need to route an output to an input.

 