what sample rate do you record at?

  • Thread starter: garbagelarge

when the intended medium for a recording is a cd, what sample rate do you record at?

  • 44.1 b/c of CPU and hard drive space considerations, or b/c my interface only supports 44.1

    Votes: 15 8.9%
  • 44.1 b/c I prefer to mix at the same sample rate that the public will hear it at

    Votes: 35 20.8%
  • 44.1 b/c conversion process negates any benefits of recording/mixing at higher sample rates

    Votes: 34 20.2%
  • I record at 48kHz

    Votes: 39 23.2%
  • I record at higher than 48kHz

    Votes: 45 26.8%

  • Total voters
    168
There is extended headroom with 24 bit. 44.1k/24 bit has the same headroom as 48k/24 bit.
 
Farview said:
There is extended headroom with 24 bit. 44.1k/24 bit has the same headroom as 48k/24 bit.
agreed!! So on that basis, if nothing else, I will tend to worry more about the 'bit-rate' rather than 44.1 or 48 !!
 
Bit DEPTH. Not bit RATE.

Big difference - Huge.
 
Sidenote comment -

This is the first time I've ever seen results like this - Don't know if it's a "home-rec" thing or what.

Every single poll I've seen of full-time industry professionals in recent years puts around 70-80% of them squarely at the target rate (44.1kHz).

In this poll, almost half (more than half? Wasn't paying *that* much attention) record higher. Never saw that coming...

And the *third* of them recording at 48kHz?!? What's that all about? :eek:
 
Massive Master said:
Bit DEPTH. Not bit RATE.

Big difference - Huge.

yep, you're right....my syntax error!
Either way.....I'm starting to record everything in 24-bit and using Cool Edit's 32-bit mode....feels and sounds good.
 
Massive Master said:
Sidenote comment -

This is the first time I've ever seen results like this - Don't know if it's a "home-rec" thing or what.

Every single poll I've seen of full-time industry professionals in recent years puts around 70-80% of them squarely at the target rate (44.1kHz).

In this poll, almost half (more than half? Wasn't paying *that* much attention) record higher. Never saw that coming...

And the *third* of them recording at 48kHz?!? What's that all about? :eek:

Yeah, that's weird. I think all the 44.1'ers didn't vote because they just did recently in another poll. That was my story, but now I've put my vote here too. This poll is obviously being run by Karl Rove and the vast right-wing conspiracy, who always think more is better. :eek:
 
I always track in 24bit 96khz
Mix in 24bit 96khz
Master in 24bit 96khz
Enjoy a much better sounding CD in 16bit 44.1khz and realize it was worth all the trouble and extra money to build a machine actually capable of doing 24/96 :D
 
Massive Master said:
Sidenote comment -


Every single poll I've seen of full-time industry professionals in recent years puts around 70-80% of them squarely at the target rate (44.1kHz).

:eek:


Hmmm.... must've missed that poll. Too busy reading interviews in EQ and Recording, with the big guys talking about how much better 24/96 sounds ;)
 
P.S. >>>>>>>>> http://www.saecollege.de/reference_material/pages/Recorders.htm#sampl

Look for these lines >>> "At a sampling rate of 96kHz you get 9.6 samples of a 10kHz wave and believe me, you can hear it.

In an article by Rupert Neve that I read recently, he said that we should aim for 24bit resolution and 192kHz sampling rate if we want to equal the quality of high quality analogue recording." End quote :D
 
Hollowdan said:
I always track in 24bit 96khz
Mix in 24bit 96khz
Master in 24bit 96khz
Enjoy a much better sounding CD in 16bit 44.1khz and realize it was worth all the trouble and extra money to build a machine actually capable of doing 24/96 :D

Now.......That's what I'm talkin all about!!
Well put!
I'm with you and totally agree!!!
Regards,
Superspit.
 
So the verdict is some go for 44.1/48, some go for 96. Therefore i am going to do half my stuff in 48 and half in 96.

Seriously though, would it be better to look at it subjectively? Depending on the power of your machine, of course: if you have a tune that doesn't use too many tracks or VSTs then perhaps 96 is in order. If you are working on something fairly intensive then perhaps 48 is in order. Of course, that doesn't work too well in practice. I start tunes with a small number of tracks that soon turns into a large number, by which point it's too late. Been using 96 but thinking of going down to 48 just for the extra performance boost. But when I look at the graphs etc, it makes me want to stay at 96. :confused:

So, I'm going to build another music PC. I will use them both to record at the same time. One at 96, one at 48. That way I don't have the dilemma. :p
 
Look at the poll again and add up the 44.1 people. 44.1k seems to win.
 
Probably because someone is spreading the BS that higher sample rates are superior. Full-time guys have been doing it enough and have committed enough mistakes to know otherwise. There are quite a few people that could be or ARE professionals here, but there are also a lot of people that use cracked software and ask questions like "what mic for best quality for rap"? So is it really surprising? It falls to the learn-ed ones to educate the newer folks and offset the loads of BS that come through.

I do a LOT of DVD-Audio Classical/World Music/Persian/Acoustic productions and sometimes the people that pay me want high res for archiving and possible conversion to DSD. The market I cater to is picky classical listeners and snooty audiophiles, so things are a little different. My standard operating procedure is 24/44.1.
Massive Master said:
Sidenote comment -

This is the first time I've ever seen results like this - Don't know if it's a "home-rec" thing or what.

Every single poll I've seen of full-time industry professionals in recent years puts around 70-80% of them squarely at the target rate (44.1kHz).

In this poll, almost half (more than half? Wasn't paying *that* much attention) record higher. Never saw that coming...

And the *third* of them recording at 48kHz?!? What's that all about? :eek:
 
BigRay said:
Probably because someone is spreading the BS that higher sample rates are superior.

Maybe because some people can hear a difference. Try it yourself. I have, and I did. There is a significant improvement in HF response between 44.1kHz and 96kHz, at least with a Presonus FIREPOD. Different hardware will exhibit different behavior.

The Nyquist limit isn't the whole story. The Nyquist limit just says that you can reproduce the frequency, not that you can reproduce it accurately.

The difference has nothing to do with hearing details in the high frequency content as many people contend, however. Any details in that content would, by definition, be at a higher frequency still, and thus well outside the human hearing range.

The difference has to do with volume. The number of times you sample determines how accurate the waveform reconstruction can be. This results in a difference in the volume of signals as they approach the Nyquist limit. This is particularly significant for complex waveforms, as samples taken at two points in a complex waveform may not be anywhere near the peaks of the waveform. Oversampling can reduce this effect by using a moving average of multiple samples. This will yield a higher value than you would get with a single sample, but you still get rolloff as you approach the Nyquist point.

Compounding this problem is the antialiasing filter that is applied during the sampling process. Whether applied in firmware as part of downsampling from an oversampled ADC or applied in hardware as an analog filter, the antialiasing filter rolls off all signals as they approach the Nyquist limit. The effect of this antialiasing filter begins at a much lower frequency than the Nyquist limit, resulting in a loss of volume at high frequencies.

Because the Nyquist limit moves to a higher frequency as the sample rate increases, both of these two high frequency losses also move to higher frequencies. Thus, 96 kHz sampling provides audibly more accurate high frequency reproduction than 44.1 or 48 kHz sampling. When you listen to them in an A/B test, you will find that the 44.1/48 kHz sound dull by comparison due to this high frequency rolloff.

Eventually, however, the audio must be downsampled to 44.1/48 kHz. In theory, a software-based brickwall filter can be more precise because it is not constrained by having to run in real time on limited hardware (as opposed to doing this in the interface), and thus will not result in as much high frequency loss, though there will always be some. Whether this makes a real difference in the finished product or not depends largely on your audio software and the algorithm it uses to downsample.

Note: you must have very good ears to hear the difference. Usually, the reduction in amplitude starts becoming significant at about 16 kHz, and possibly higher. The average person can't hear any difference at all because they can't even hear that high. :)
 
dgatwood said:
Different hardware will exhibit different behavior.
Correct, the difference has more to do with the implementation than the sample rate.

dgatwood said:
The Nyquist limit isn't the whole story. The Nyquist limit just says that you can reproduce the frequency, not that you can reproduce it accurately.
The Nyquist theorem does say that you can reproduce frequencies up to the limit accurately.

dgatwood said:
The difference has nothing to do with hearing details in the high frequency content as many people contend, however. Any details in that content would, by definition, be at a higher frequency still, and thus well outside the human hearing range. The difference has to do with volume.
Those two sentences are in conflict with each other. Any small difference in volume that it would miss would be at a frequency higher than Nyquist, and therefore filtered out before the converters saw it.


dgatwood said:
The number of times you sample determines how accurate the waveform reconstruction can be. This results in a difference in the volume of signals as they approach the Nyquist limit. This is particularly significant for complex waveforms, as samples taken at two points in a complex waveform may not be anywhere near the peaks of the waveform. Oversampling can reduce this effect by using a moving average of multiple samples. This will yield a higher value than you would get with a single sample, but you still get rolloff as you approach the Nyquist point.
That would make perfect sense if it were even remotely how the reconstruction algorithms work. It isn't.

dgatwood said:
Compounding this problem is the antialiasing filter that is applied during the sampling process. Whether applied in firmware as part of downsampling from an oversampled ADC or applied in hardware as an analog filter, the antialiasing filter rolls off all signals as they approach the Nyquist limit. The effect of this antialiasing filter begins at a much lower frequency than the Nyquist limit, resulting in a loss of volume at high frequencies.
At 44.1k, the signal is 1dB down at 19.5k. At 48k, there is no attenuation below 20k (with my MOTU 24I/O, not exactly top-notch stuff). How much 19.5k do you really have in your mixes? It could easily be made up for with a high shelf EQ.

dgatwood said:
Because the Nyquist limit moves to a higher frequency as the sample rate increases, both of these two high frequency losses also move to higher frequencies. Thus, 96 kHz sampling provides audibly more accurate high frequency reproduction than 44.1 or 48 kHz sampling. When you listen to them in an A/B test, you will find that the 44.1/48 kHz sound dull by comparison due to this high frequency rolloff.
However, because of oversampling, each of the 96,000 samples is based on less data.
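The amplitude question being argued here can be sanity-checked numerically. Below is a minimal numpy sketch (purely illustrative, not anyone's actual measurement): sample a 19.5kHz tone at 44.1kHz, which gives only about 2.26 samples per cycle, then run band-limited (sinc) reconstruction on a dense grid. The recovered peak comes out essentially at full amplitude, suggesting that any level loss up there comes from the anti-alias filter implementation, not from the low sample count itself.

```python
import numpy as np

fs = 44100.0
f = 19500.0                     # ~2.26 samples per cycle, near Nyquist
n = np.arange(256)
x = np.sin(2 * np.pi * f * n / fs)

# Band-limited (Whittaker-Shannon) reconstruction on a dense grid,
# evaluated well inside the block so edge-truncation error stays small
t = np.linspace(100 / fs, 150 / fs, 4000)
dense = np.array([np.sum(x * np.sinc(ti * fs - n)) for ti in t])

peak = np.max(np.abs(dense))
print(peak)                     # close to 1.0: no inherent "volume loss"
```

Note this says nothing about the analog filter in any real converter; it only shows the sampled data itself still carries the full amplitude.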
 
dgatwood said:
The number of times you sample determines how accurate the waveform reconstruction can be. This results in a difference in the volume of signals as they approach the Nyquist limit. This is particularly significant for complex waveforms, as samples taken at two points in a complex waveform may not be anywhere near the peaks of the waveform.
Practical physical filter restrictions aside - Nyquist sampling will indeed reproduce the intended Nyquist frequency accurately. This is equally true for a complex waveform as it is for a simple sine wave because, at the Nyquist limit, any differences are filtered out anyway.

Consider a 20kHz sine wave. Any complexity added to that sine will be via the addition of higher frequency detail; i.e. frequencies higher than 20kHz. As those frequencies are over the Nyquist limit, they will be filtered, and the original sine will remain in a more or less intact state.

And as far as the number of samples per wave, consider this. The Nyquist sampling rate for reproducing any given frequency is only twice that frequency, plus some practical overhead for filter physics and such. Hence the 44.1kHz sampling rate for 20kHz reproduction. That's only just a hair over two samples taken per wave cycle.

By conventional thinking, there's no way that one could ever hope to accurately reproduce a full wave cycle from only two samples; the chances of those samples timing out so that they just so happen to read the maximum amplitudes of even a simple sine wave are microscopic. On the other hand there is a 50-50 chance that they will sample the wave at less than half of its amplitude. It would, looking at it that way, be impossible to reconstruct even a simple sine from the Nyquist sample rate. In fact, even a 10kHz sine would only be getting four samples per wave cycle, still nowhere near enough "resolution" to reproduce it with any accuracy whatsoever.

But yet it works, and works accurately. That's because the waveform is reconstructed not by connecting the points of the individual samples, but rather by applying a complex series of filtering and trigonometric functions to the sample values themselves, mathematically "growing" the wave from the ground up (so to speak.)

It's these functions that are the basis behind the Nyquist theorem, and that allow such a slow sampling rate to be able to accurately reproduce the waveforms, not the luck-of-the-draw positions of the samples within the wave cycle.

G.
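The "growing the wave from the samples" idea above can be sketched in a few lines of numpy. This is a toy Whittaker-Shannon interpolation, not how any production DAC or sample-rate converter is actually coded: a 10kHz sine at 44.1kHz gives only ~4.4 samples per cycle, yet evaluating the sinc sum between the sample points recovers the original waveform very closely.

```python
import numpy as np

fs = 44100.0
f = 10000.0                       # only ~4.4 samples per cycle
n = np.arange(256)                # sample indices
x = np.sin(2 * np.pi * f * n / fs)

# Whittaker-Shannon reconstruction: x(t) = sum_k x[k] * sinc(t*fs - k).
# With a finite block the sum is truncated, so evaluate well inside
# the block where the edge error is negligible.
ts = (np.arange(100, 150) + 0.37) / fs    # points *between* samples
recon = np.array([np.sum(x * np.sinc(t * fs - n)) for t in ts])
ideal = np.sin(2 * np.pi * f * ts)

err = np.max(np.abs(recon - ideal))
print(err)    # small: the wave is recovered between the sample points
```

The connect-the-dots picture would predict huge errors at ~4.4 samples per cycle; the sinc sum shows why that intuition fails.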
 
SouthSIDE Glen said:
But yet it works, and works accurately. That's because the waveform is reconstructed not by connecting the points of the individual samples, but rather by applying a complex series of filtering and trigonometric functions to the sample values themselves, mathematically "growing" the wave from the ground up (so to speak.)

In theory, yes, you can reconstruct the original waveform that way. In practice, however, DACs don't work that way. At best, a DAC attempts to reduce stairstepping by running at a faster rate and stepping a fraction of the distance several times, but you can't even guarantee that. A more typical DAC will jump at the sample rate from one value to the next, then will smooth off the resulting 44.1 kHz (or whatever) stairstep with a low pass filter. There's no trigonometric anything going on in a DAC.

http://en.wikipedia.org/wiki/Digital-to-analog_converter
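The "stairstep plus low pass filter" DAC described above can be caricatured in numpy. This is a crude cartoon, with a moving average standing in for the analog smoothing filter (real converters use far better filters and heavy oversampling): the raw zero-order-hold output deviates from the sine, and even this toy lowpass pulls it back toward the ideal.

```python
import numpy as np

fs = 44100
oversample = 8                    # fine grid for drawing the "stairs"
f = 1000.0                        # low test tone, easy for the crude filter
n = np.arange(64)
x = np.sin(2 * np.pi * f * n / fs)

# Zero-order hold: repeat each sample value, giving the "stairstep"
# a basic DAC outputs before its analog filter
stair = np.repeat(x, oversample)

# Crude stand-in for the analog lowpass: a moving average the width
# of one hold period, which smears the steps back toward the sine
kernel = np.ones(oversample) / oversample
smooth = np.convolve(stair, kernel, mode="same")

t = np.arange(len(stair)) / (fs * oversample)
ideal = np.sin(2 * np.pi * f * t)
stair_err = np.max(np.abs(stair - ideal))
smooth_err = np.max(np.abs(smooth[oversample:-oversample] -
                           ideal[oversample:-oversample]))
print(stair_err, smooth_err)      # smoothing reduces the stairstep error
```

Whether this or the sinc-sum picture better describes a given converter depends entirely on the hardware; the math of the previous post describes the ideal, this sketch the cheap approximation.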
 