Recording at 48000Hz??

Thread starter: Croww (New member)
Hello all, I just had a quick question about the sampling rate when recording. I'm being a bit of a rebel and working with Linux (Ubuntu Studio). After many, many hours of tweaking, and tons of trial and error, I can only get a reliable recording when I use 48000Hz to record. Will that make any difference when I later convert down to 44100Hz for the mix? Just wondering.

Thanks in advance,
 
No, since most everybody tracks at above 44.1 anyway.
 
Hello all, I just had a quick question about the sampling rate when recording. I'm being a bit of a rebel and working with Linux (Ubuntu Studio). After many, many hours of tweaking, and tons of trial and error, I can only get a reliable recording when I use 48000Hz to record. Will that make any difference when I later convert down to 44100Hz for the mix? Just wondering.

Thanks in advance,

Sounds like you are using a soundblaster.
 
No, since most everybody tracks at above 44.1 anyway.
That is soooooo not true that I don't even want to start...

I shouldn't say that -- Most "less seasoned" recordists I know record at high sample rates (mostly because of marketing hype). HOWEVER, 70-80% (depending on which polling data you read) of full-time audio professionals track at the target frequency.

Even the designers of some of the greatest digital conversion on the planet will plainly tell you that if you can hear the difference between 44.1/48 and 88.2/96kHz in their converters, they're broken. Heck, I track classical sessions at 44.1kHz. Always in 24-bit of course - That's a different story.

That all said - Yes, resampling to lower rates from higher rates (especially "goofy" conversions like 48 to 44.1kHz) can damage the audio. Some SRCs are very decent, some are very bad. It depends on which SRC you're using and on what material.
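For what it's worth, here's a minimal sketch of an offline 48-to-44.1 conversion using Python and scipy's polyphase resampler (my own illustration of the ratio involved; it says nothing about how good or bad any particular DAW's SRC is):

```python
# Sketch only: offline 48 kHz -> 44.1 kHz sample rate conversion with scipy's
# polyphase resampler. 44100/48000 reduces to 147/160 -- the "goofy"
# non-integer relationship mentioned above.
import numpy as np
from scipy.signal import resample_poly

fs_in, fs_out = 48000, 44100            # source and target rates
t = np.arange(fs_in) / fs_in            # one second of audio at 48 kHz
x = 0.5 * np.sin(2 * np.pi * 1000 * t)  # 1 kHz test tone

# resample_poly(x, up, down) applies an anti-alias filter internally;
# up/down = 147/160 converts 48 kHz material to exactly 44.1 kHz.
y = resample_poly(x, 147, 160)
print(len(x), len(y))                   # 48000 -> 44100 samples
```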
 
Even the designers of some of the greatest digital conversion on the planet will plainly tell you that if you can hear the difference between 44.1/48 and 88.2/96kHz in their converters, they're broken. Heck, I track classical sessions at 44.1kHz. Always in 24-bit of course - That's a different story.

Of course, by extension, that means every audio interface I've ever used has broken converters....

Then again, the supposedly ultrasonic bird deterrents at the train station in San Francisco drive me nuts... and don't seem to be keeping the birds out particularly well.... My first thought was, "maybe you should try actually putting in something useful... you know, like walls...." But I digress.

Maybe it's just me.
 
Hmmm, when I visited a high-end studio a couple of years back, the engineer was recording at high res.

I record at 88.2 anyway, just because it divides evenly into 44.1 (quick illustration at the end of this post).

Plus, what if you wanted to release your material on other media in the future? Then you're stuck with the 25-year-old CD standard.

Then again, the death of high fidelity is nigh, so it's not like it actually matters.
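To put a number on that "divides evenly" point, a quick check (Python, purely illustrative):

```python
# Sketch only: the resampling ratios involved in getting back to 44.1 kHz.
from fractions import Fraction

print(Fraction(88200, 44100))   # 2       -> simple divide-by-two decimation
print(Fraction(48000, 44100))   # 160/147 -> awkward polyphase ratio
```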
 
The only reason to work at 48k is when you have hardware that ONLY works at 48k:

Video cameras (video standard is 16bit/48k)

ADATs

SBLive/Soundblaster cards

Otherwise, lock it down at 24bit/44.1k or (if you're working with faint or highly dynamic jazz/classical folk) 24bit/88.2k and forgetaboutit....
 
Higher sample rates add more detail to your audio, but it's especially noticeable in the high end. I can hear a difference between all the sample rates up to 96k, and so should anybody with relatively healthy hearing. After that, the change is too subtle for our ears to notice.

To say there is no reason other than your hardware to record above 44.1 is totally bogus.
 
Higher sample rates add more detail to your audio, but it's especially noticeable in the high end. I can hear a difference between all the sample rates up to 96k, and so should anybody with relatively healthy hearing. After that, the change is too subtle for our ears to notice.

To say there is no reason other than your hardware to record above 44.1 is totally bogus.

It's also noticeable when you're using any plug-ins that use an FFT or whatever to do work in the frequency domain. The extra detail tends to reduce pitch correction artifacts, for example.
 
It's also noticeable when you're using any plug-ins that use an FFT or whatever to do work in the frequency domain. The extra detail tends to reduce pitch correction artifacts, for example.

A lot of plugs can process at higher rates for that reason, but that doesn't mean you need to record at that sample rate.
 
To say there is no reason other than your hardware to record above 44.1 is totally bogus.

There's nothing bogus about it

*in theory* anyhow

IF we accept that a signal bandlimited to 20kHz is all we hear, then *theoretically* 44.1kHz SHOULD be enough to capture it. But as a practical matter, the roughly 2kHz between 20kHz and the 22.05kHz Nyquist point is NOT much room for the lowpass filter. That's where things tend to get messed up.

My stuff is cheap, I always record at 48khz. That still may not be enough, but it works for me in a way that 44.1khz doesn't on my gear.
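To put a rough number on how tight that lowpass has to be, here's a back-of-the-envelope sketch using scipy's Kaiser filter estimate (my own illustration; real converter anti-alias filters are more sophisticated than a plain FIR):

```python
# Sketch only: estimated FIR length for a ~96 dB anti-alias lowpass that stays
# flat to 20 kHz and is fully attenuated by Nyquist, at different sample rates.
from scipy.signal import kaiserord

def taps_needed(fs, passband_hz=20000.0, ripple_db=96.0):
    nyquist = fs / 2.0
    # kaiserord() takes the transition width as a fraction of Nyquist.
    width = (nyquist - passband_hz) / nyquist
    numtaps, _beta = kaiserord(ripple_db, width)
    return numtaps

for fs in (44100, 48000, 96000):
    print(fs, taps_needed(fs))
# 44.1 kHz needs the longest (steepest) filter because it only leaves about
# 2 kHz of transition band; higher rates leave far more room.
```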
 
Yes, we only hear to 20k (more like 16k for most of us, even with healthy ears). But when you move up to 96k, you give your converters that many more samples to capture shorter-wavelength (higher-frequency) information. So you'll get a much more accurate high end. 44.1 and 48 sound fine (great even, if you can mix!), but there is a lot more "approximation" when it comes to high frequencies.

To say that recording above 44.1 is pointless is just wrong. A few years ago this may have been an issue, but cpu speeds are fast enough, and hard drive space is cheap enough that people should be recording and mixing at LEAST at 48k.
 
By what mechanism do more dots mean less approximation IF only two dots can describe the sound 100% accurately?

I don't know this inside and out myself, but I see the great digital minds say that with only those two points they can tell you everything that is relevant within the passband.
 
Well, first off, by definition, digital isn't 100% accurate.

Your converters need to see at least 2 points per cycle to convert audio without aliasing, but that is different from the sort of sonic accuracy that we are talking about here.

At 48k, if you were to record a 24k sine wave, you would get two points for every cycle of the wave. It would capture 2 amplitude values per cycle. Simple math will tell you that if you record that same wave at 96k, you'd get 4 samples per cycle. This is much more accurate, and would be an audible difference if we could hear that high. (If we had a dog's ears, this would be completely obvious.)

Now, granted, we can't hear 24k. But if you carry the same idea down the spectrum, it would make sense that you get more detail with higher sample rates, though it's only really noticeable in the higher end of frequencies.

Read up on Nyquist theory for more info about this.
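Just to spell out the samples-per-cycle arithmetic being described (a trivial Python sketch; whether those extra points per cycle actually buy anything below Nyquist is exactly what's being debated in this thread):

```python
# Sketch only: how many sample points land on each cycle of a 24 kHz sine.
def samples_per_cycle(fs, freq):
    return fs / freq

for fs in (44100, 48000, 96000):
    print(fs, samples_per_cycle(fs, 24000))
# 44100 -> 1.8375 (under 2: a 24 kHz tone would alias at this rate)
# 48000 -> 2.0
# 96000 -> 4.0
```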
 
A lot of plugs can process at higher rates for that reason, but that doesn't mean you need to record at that sample rate.

If you think that is true, then you misunderstood what I said. The existence of a greater number of samples means that things like pitch detection are automatically more precise than if the data is at a lower sample rate. If you don't track at the higher rate, though, you don't get that benefit; I'm not aware of any audio apps that can do upsampling on the fly to run an effects chain at a higher sampling rate than the rest of the project.
 
If you think that is true, then you misunderstood what I said. The existence of a greater number of samples means that things like pitch detection are automatically more precise than if the data is at a lower sample rate. If you don't track at the higher rate, though, you don't get that benefit; I'm not aware of any audio apps that can do upsampling on the fly to run an effects chain at a higher sampling rate than the rest of the project.

Yup. Once you lose that info by not sampling it, it's not coming back!
 
By what mechanism do more dots mean less approximation IF only two dots can describe the sound 100% accurately?

I don't know this inside and out myself, but I see the great digital minds say that with only those two points they can tell you everything that is relevant within the passband.

You have a choice at a lower sample rate as the signal approaches the Nyquist limit: lower phase accuracy or lower amplitude. Which one you get depends on how the converters do time averaging.

They typically sample at a much higher rate, then average several samples together to get a value for a given chunk of time rather than using a single sample. If they use a single-sample method, you get potentially reduced amplitude. If they use the maximum positive/minimum negative value during the period, you get the amplitude, but the phase of the signal is shifted. If they use a moving average, you end up somewhere in between.

The reason for this is that if you have a sine wave at 22.05 kHz and you sample it at 44.1 kHz, you only get two samples per cycle. If the signal level is 1V and you happen to sample it at the midpoint of the upward sweep, you get about .71V at each sample. If you take another 22.05 kHz signal, this time at .71V, and sample it at 44.1 kHz but catch it at its peaks, you also get .71V for each sample. The two signals show up on your computer as identical even though one was supposed to be almost half again louder.

That's the problem as you approach the Nyquist point. The accuracy of amplitude diminishes greatly, you get pumping for signals that approach the Nyquist frequency, etc. Now some folks will immediately jump on me for not mentioning that some of this is diminished by reconstruction filters, but realistically, no filter can create data that isn't there. If those two signals look the same when the computer captures them, they're going to play back the same, too.

The big difference with doing the downsampling in software is that computers can compensate for what would otherwise be an irrecoverable encoding error if done in hardware. I'm not saying low end downsampling code does, just that it is possible in software, while in converter hardware, it isn't really practical to do it in real time with the same accuracy.
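Here's a quick numeric check of that 22.05 kHz example in Python/numpy (my own sketch, with phases picked to make the collision obvious):

```python
# Sketch only: two different 22.05 kHz sines that produce identical samples
# when sampled at 44.1 kHz -- the amplitude/phase ambiguity right at Nyquist.
import numpy as np

fs = 44100
f = 22050                        # exactly fs / 2
t = np.arange(8) / fs            # first eight sample instants

a = 1.0 * np.sin(2 * np.pi * f * t + np.pi / 4)               # 1 V sine, caught mid-sweep
b = (1 / np.sqrt(2)) * np.sin(2 * np.pi * f * t + np.pi / 2)  # ~0.71 V sine, caught at its peaks

print(np.round(a, 3))      # alternating +/-0.707
print(np.allclose(a, b))   # True: once sampled, the two signals are indistinguishable
```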
 
Yup. Once you lose that info by not sampling it, it's not coming back!

Well, the information I was talking about in that comment isn't really lost. It's way down in the single digit kHz range and below, so reconstructing a reasonable approximation of double-rate data is trivial. Audio apps just don't do it as far as I've seen....
 
I'm not aware of any audio apps that can do upsampling on the fly to run an effects chain at a higher sampling rate than the rest of the project.

ReaComp, a lot of Voxengo stuff, and Refined Audiometrics do. There's even a switch, either defined by the VST spec or made de facto by enough plugin makers, to automatically switch to a higher rate when doing an offline render.
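For anyone curious what that internal oversampling boils down to, here's a generic sketch in Python/scipy (not how ReaComp, Voxengo, or anyone else actually implements it): upsample, run the processing at the higher rate, then filter back down to the project rate.

```python
# Sketch only: generic 2x internal oversampling around a nonlinear process.
# Real plugins differ in filter design, latency compensation, and factor.
import numpy as np
from scipy.signal import resample_poly

def saturate(x):
    """Stand-in nonlinearity (soft clipping) that generates new harmonics."""
    return np.tanh(2.0 * x)

def process_oversampled(x, factor=2):
    up = resample_poly(x, factor, 1)    # upsample: moves the fold-over point up
    y = saturate(up)                    # run the nonlinearity at the higher rate
    return resample_poly(y, 1, factor)  # lowpass and decimate back to project rate

fs = 44100
t = np.arange(fs) / fs
tone = 0.8 * np.sin(2 * np.pi * 5000 * t)
out = process_oversampled(tone)
print(len(tone), len(out))              # same length in and out
```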
 
The reason for this is that if you have a sine wave at 22.05 kHz and you sample it at 44.1 kHz, you only get two samples per cycle. If the signal level is 1V and you happen to sample it at the midpoint of the upward sweep, you get about .71V at each sample. If you take another 22.05 kHz signal, this time at .71V, and sample it at 44.1 kHz but catch it at its peaks, you also get .71V for each sample. The two signals show up on your computer as identical even though one was supposed to be almost half again louder.

That's the problem as you approach the Nyquist point. The accuracy of amplitude diminishes greatly, you get pumping for signals that approach the Nyquist frequency, etc. Now some folks will immediately jump on me for not mentioning that some of this is diminished by reconstruction filters, but realistically, no filter can create data that isn't there. If those two signals look the same when the computer captures them, they're going to play back the same, too.

This is why there's wiggle room past 20kHz and why so many converters sound better at higher sample rates. In theory, a 44.1kHz sampling rate can 100% accurately describe a level at a point in time INSIDE the band limit, which in our case I hope we agree is 20kHz.

I wouldn't claim a theoretical 44.1kHz converter could perfectly reproduce a 22kHz signal, nor is it meant to.
 