24 bit in and around, 16 bit out?

  • Thread starter: jedblue
Yes, everyone should record everything at 16 bit through a Sound Blaster card on a Commodore 64, using packing blankets for bass traps and egg crates for diffusers in a domed mud hut.

It's all about the lowest common denominator.

Just kidding with you Ethan ; ) Cheers.

Now that's ridiculous, you're just being silly. Everyone knows C64s are 8-bit.
 
I wonder just what the "usefulness" of, say, 24-bit AD/DA, processing and mixing over 16 bit that they refer to actually is.

more features for them to sell you.

Record at the highest bit depth you have available and don't clip the signal. It ain't rocket surgery.
 

In a DAW, it's probably just a matter of file size, so I guess who cares, and I would agree.

With my standalone it's 8 versus 16 (sort of) tracks, which can matter, plus a beat gizmo that only comes loaded with 16-bit loops, which is not really a factor. Adding, say, three background vocals and a few extra percussion bits to a piece may make a bigger difference to the overall musical impact of a mix than 24 bit versus 16 bit does.

And yeah, no matter what, practice safe levels! :)

Cheers,

Otto
 

Then record at 16 bit. The difference is almost inaudible even with professionally recorded material on top-notch systems. It ain't gonna matter with what we're recording and who'll be listening to our songs.
 
Seriously, has anyone here ever been listening to something and gone, "man, I can totally tell this was recorded at 24 bit, 44.1 kHz before it got bounced down to 16 bit, 44.1 kHz and then converted to MP3". :rolleyes:
 
I tend to have mic or preamp noise at a higher level than the 16-bit noise floor already.

Exactly, and room noise on everything other than close-mic'd guitar amps leaves you more like 70 dB of signal-to-noise at best. Understand that the finest analog tape recorder in the world is nowhere near as quiet (or as low in distortion) as 16-bit digital.

I'm going to do some recordings both ways and see if I really care about whatever difference there may be, if any!

That is the only way to know for sure. I wish more people would actually try it. :eek:

About the only possible benefit I see on this machine is if there is signal detail encoded below the recorder noise floor in 24-bit mode that would not be there in 16-bit mode.

In this case "detail" is expressed as some amount of distortion. Distortion in a properly operating 16-bit system is orders of magnitude lower than the distortion in any loudspeaker. I mean, how low do we need? 0.01 percent? 0.001 percent? Perspective, people! :D

--Ethan
 
16 bit vs 24 bit

To be completely honest, I question that study. There is a distinctly audible difference between 24-bit and 16-bit audio. However, let's say, for argument's sake, that I hear it because I have trained ears, or superpowers, or whatever. The fact still remains that, as a 2-track master, that might be fine. However, when you're working with a session of 16 or more tracks, the combined resolution across the board will make a larger difference. Now combine that with higher-resolution plugins (such as the IR1 reverb) and you'll really appreciate the added bit depth. Even if your final output is dithered to 16 bits.

-mixdownguru
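mixdownguru's point about keeping intermediate processing at a higher word length can be illustrated numerically. This is only a sketch, not anything from the thread: it assumes NumPy, invents a chain of 20 small gain stages, and compares requantizing to 16 bits after every stage against processing in floating point and quantizing once at the end.

```python
# Sketch: requantizing to 16 bits after every processing stage accumulates
# error, while processing in high precision and quantizing once does not.
import numpy as np

rng = np.random.default_rng(2)
n = 50_000
q = 1 / 32768                                  # one 16-bit LSB (full scale = ±1)
x = 0.5 * rng.standard_normal(n).clip(-1, 1)   # stand-in "track" (made up)

gains = np.full(20, 1.01)                      # 20 mild, hypothetical gain stages

hi = x.copy()
lo = np.round(x / q) * q                       # start from a 16-bit capture
for g in gains:
    hi = hi * g                                # keep full float precision
    lo = np.round(lo * g / q) * q              # requantize after each stage

final_hi = np.round(hi / q) * q                # quantize once, at the end
err_once = np.sqrt(np.mean((final_hi - hi) ** 2))
err_each = np.sqrt(np.mean((lo - hi) ** 2))
print(f"quantize once at the end: {err_once / q:.2f} LSB RMS")
print(f"quantize every stage:     {err_each / q:.2f} LSB RMS")
```

The per-stage version typically lands several times higher (roughly sqrt of the number of quantizations, since the errors add incoherently), which is the argument for a long intermediate word length even when the deliverable is 16-bit.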

In this study I found, http://drewdaniels.com/audible.pdf, the authors conclude that there is no audible difference between a high-bit SACD and a 16-bit CD during playback, as revealed in their long-term double-blind listening test.

However, they do make this interesting comment early on in the paper:

"The usefulness of the increased dynamic range afforded by longer word lengths for mixdown has never been in question."

If I accept their conclusions regarding 'no audible difference' in stereo playback, and if it's going to be a 16-bit 44.1 kHz CD in final form, I wonder just what the "usefulness" of, say, 24-bit AD/DA, processing and mixing over 16 bit that they refer to actually is.

My digital multitracker is fixed at 44.1 kHz, 24 bit in and around, and 16-bit CD / .wav out, so I can't make any direct comparison. Any clarifications or experiences from those who've worked with both 16-bit and 24-bit AD/DA, processing and mixing would be appreciated.

G
 
I question that study. There is a distinctly audible difference between 24 bit and 16 bit audio.

That study is valid IMO, and proves conclusively there is no audible difference between "CD quality" and higher sample rates and bit depths. The main difference between that study and most people's "anecdotal evidence" is that study was done correctly - blind, repeated many times, and testing many people to get a statistically valid result.

when you're working with a 16 or more track session, the combined resolution across the board will make a larger difference.

It may seem it would work that way, but it really doesn't. 16 bits is sufficient to have distortion far lower than anyone could ever hear. Further, distortion does not add coherently. The distortion artifacts for a DI bass track are unrelated to the artifacts generated on a mic'd grand piano. So those bass and piano components of a mix are unchanged whether considered alone or as part of a mix. Residual noise does not add coherently either, not that the -96 noise floor of 16 bits was ever a problem. Compared to even the finest Studer analog tape recorder, "lowly" 16 bits wins hands down in every way one could possibly assess fidelity.

--Ethan
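Ethan's "-96" figure is easy to sanity-check. A minimal sketch, assuming NumPy (not from the thread): quantize a full-scale sine to 16 bits and measure the signal-to-error ratio; theory for a full-scale sine predicts 6.02 × 16 + 1.76 ≈ 98 dB.

```python
# Quantize a full-scale sine to 16 bits and measure the signal-to-error
# ratio. Theory for a full-scale sine: 6.02*16 + 1.76 ≈ 98 dB (the flat
# "96 dB" figure is 6.02 dB per bit times 16, without the sine term).
import numpy as np

fs = 44100
t = np.arange(1 << 18) / fs
x = np.sin(2 * np.pi * 997.0 * t)        # 997 Hz: not harmonically locked to fs

err = x - np.round(x * 32767) / 32767    # 16-bit quantization error
snr_db = 20 * np.log10(np.sqrt(np.mean(x ** 2)) / np.sqrt(np.mean(err ** 2)))
print(f"measured 16-bit SNR: {snr_db:.1f} dB")
```

As Ethan notes, this is far below the acoustic noise floor of almost any real tracking room.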
 

I assume that the distortion artifacts from 16-bit quantization (or 24-bit, for that matter) accumulate incoherently, the way tape hiss does, so that the noise floor rises about 3 dB for every doubling of the number of tracks.

Well, in my case it's probably just a matter of whether I need more than 8 recorder tracks (use 16 bit) or not (use 24 bit). As I recall from when I tracked on an old 16-bit Tascam DA-38, I could get pretty good sounding tracks and mixes, but I did notice a little build-up of quantization noise if I had to compress stuff. I only really noticed it on headphones and it really wasn't a big deal, and my level practices are slightly better these days anyway. I could see that with lots and lots of tracks and heavy compression you could accumulate a slightly annoying amount of noise that isn't much fun to hear.

Cheers,

Otto
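The 3 dB-per-doubling rule of thumb above follows from uncorrelated noise adding by power, and it is easy to verify. A minimal sketch assuming NumPy; the unit-variance noise tracks are made up for illustration.

```python
# Uncorrelated noise adds by power: doubling the number of noise tracks
# raises the mixed noise floor by 10*log10(2) ≈ 3 dB.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

def mix_rms(tracks):
    # RMS of a mix of `tracks` independent unit-variance noise sources.
    return np.sqrt(np.mean(rng.standard_normal((tracks, n)).sum(axis=0) ** 2))

levels = {t: 20 * np.log10(mix_rms(t)) for t in (1, 2, 4, 8, 16)}
for t, db in levels.items():
    print(f"{t:2d} tracks: {db:+5.1f} dB relative to one track")
```

Going from 1 to 16 tracks of equal-level noise raises the floor by about 12 dB, i.e. 3 dB per doubling; correlated signals, by contrast, sum by amplitude (6 dB per doubling).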
 
Exactly, and room noise on everything other than close-mic'd guitar amps leaves you more like 70 dB of signal-to-noise at best. Understand that the finest analog tape recorder in the world is nowhere near as quiet (or as low in distortion) as 16-bit digital.

True, dynamic range is digital's forte and a weak spot for tape, but sometimes it takes tape to get the right sound. I never argue about comparative accuracy (what do I know?), but I do find that some music sounds more pleasant to work with on tape, especially certain machines, such as 3M, Ampex and my limited exposure to MCI. Never heard a Scully, but probably them, too.

Cheers,

Otto
 
I assume that the distortion artifacts from 16-bit quantization (or 24-bit, for that matter) accumulate incoherently, the way tape hiss does, so that the noise floor rises about 3 dB for every doubling of the number of tracks.

It's not even that bad. Tape hiss is there all the time at the same level no matter how loud or soft the music is. But distortion is related to the level of the music. It is so soft, even with 16 bits, it's just not an issue. The finest loudspeakers in the room have orders of magnitude more distortion than 16 bit digital.

I could see that if you had lots and lots of tracks and compressed a lot you could accumulate a slightly annoying amount of noise that isn't very fun to hear.

Hard to imagine that has anything to do with the digital medium. For most sources recorded with a microphone, the ambient room noise is probably 20 to 40 dB louder than the background noise of the medium. That's more likely what you hear brought out when you compress tracks.

I do find that some music sounds more pleasant to work with on tape

Yes, a little bit of grunge can add some "glue" to a sparse mix. I've had great results with the free tape-sim plug-in that comes with SONAR. But my personal preference for most types of music is keeping the sound as clean as possible.

--Ethan
 
It's not even that bad. Tape hiss is there all the time at the same level no matter how loud or soft the music is. But distortion is related to the level of the music.
...
The finest loudspeakers in the room have orders of magnitude more distortion than 16 bit digital.
...
It is so soft, even with 16 bits, it's just not an issue.
I understand your position here, but why on earth would you inject speaker distortion?
Speakers don't add noise, and they get cleaner at low levels, whereas quantization gets relatively worse at low levels.

I can, however, cite an example where 16 bit didn't cut it, relatively speaking.
I noticed, back when I used to mix analog on a Mackie 2408 with mid-grade stuff inserted (166s, Valley 610, Symetrix, Yamaha 2020 (ick!)), that if I didn't keep levels on my PCM80 and 90 damned tight, I can guaran-damntee you those 4 return tracks were the noisiest thing on the board. You would not want that calliope of chirps, buzzes and grunge anywhere near your fades and quiet moments. :D
 
why on earth would you inject speaker distortion?

For perspective. What's the point in obsessing over 0.001 percent distortion versus 0.003 percent in a converter, when you'll never hear either due to speaker distortion?

I can, however, cite an example where 16 bit didn't cut it, relatively speaking.

Well, anywhere 16 bits is insufficient, analog tape would be far worse.

those 4 return tracks were the noisiest thing on the board.

I wasn't there so I have no idea what the problem was. It sounds like gain staging more than lousy gear. But "why on earth would you inject" that into this discussion? :D

<just teasing you.>

--Ethan
 
..I wasn't there so I have no idea what the problem was. It sounds like gain staging more than lousy gear. But "why on earth would you inject" that into this discussion? :D

<just teasing you.>

--Ethan
For perspective.
:) But sure, it is a gain staging problem: me allowing my input to these 16-bit processors to flash the -18 to -12 indicators (as I recall), and proceeding with "what's to worry?" and "don't let them clip."
Perhaps, though, it is a poor example, or at least atypical of 16 bit, but I was surprised how well I got 'bit on a -96 noise floor. :D
 
I was surprised how well I got 'bit on a -96 noise floor. :D

Yeah, it probably wasn't you. Just as current 24-bit devices can only dream of actually getting anywhere near the theoretical -144 dB noise floor, early 16-bit stuff was closer to 14 bits, as I recall.

--Ethan
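The figures traded back and forth here all come from the same "6 dB per bit" rule. A one-liner sketch (Python standard library only) makes the 14-bit point concrete: an early converter with 14 effective bits gives up roughly 12 dB of the theoretical 16-bit floor.

```python
# Theoretical dynamic range of an ideal N-bit converter: 20*log10(2) per
# bit, about 6.02 dB. Real converters fall short of this ideal.
import math

def dynamic_range_db(bits):
    return 20 * math.log10(2) * bits

for bits in (14, 16, 24):
    print(f"{bits} bits: {dynamic_range_db(bits):.1f} dB")
# 14 bits ≈ 84.3 dB, 16 bits ≈ 96.3 dB, 24 bits ≈ 144.5 dB
```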
 
This discussion is only of academic interest to me.

I run a home studio which, like many others here, is a mixture of all sorts of equipment: some good, some garbage, some old, some new. When I used my old ISIS soundcard, I recorded everything in 16 bit, because that's all it could do. The Firepod allows me to use 24 bit, which is what I do now.

Is there a difference between my old 16-bit recordings and the newer 24-bit ones? I think there is, but not because of the bit depth. I like to think that over this 12-year period I have improved my recording techniques generally, and this has been the greater contributor to improvements in sound quality.

I suspect that the inherent noise present in all elements of the recording chain overwhelms any difference that might be observable between 16 and 24 bits.
 
16 vs 24

I am completely dumbfounded that anyone would deny that adding another 50% of bit depth resolution, coming into a summed-down 16-bit output, provides an audible benefit. With all due respect, I am telling you what I hear first hand. I have had sessions that I worked on in 24 bit and output to 16, and directly compared them to the same session where the source was converted to 16 bit before mix output, and I am testifying first hand that there is an audible difference. I would like to know what exactly you are basing your opinion on.

Furthermore, in both of your examples you are referring to mic'd inputs, which will then be subject to the actual A/D conversion process. However, many VSTi synths now generate 32-bit audio directly. All engineers know the phrase "Garbage In, Garbage Out". Not that I'm comparing anything to garbage, but I'm using it here to demonstrate that your example is subject to its source. As for my "anecdotal evidence", wouldn't all evidence presented here be anecdotal?



 

Anecdotal evidence: Yes . . . a lot of the opinions, preferences and so on here derive from anecdotal evidence, and anecdotal evidence, as we all know, doesn't constitute proof. What Ethan is referring to is not, however, anecdotal evidence. He is referring to rigorous, valid and repeatable evidence-based testing, with appropriate controls.

For that reason, simple A-B tests are unreliable; for one thing, the willingness of people to hear a difference comes into play.

But I am not discounting that there is a difference. I'm just discounting the impact that it has given the many other sources of 'impurities' in the process.
 
I am completely dumbfounded that anyone would deny that adding another 50% of bit depth resolution coming into a summed down 16bit out provides no audible benefit.
What "resolution"? The size of the dynamic range palette is increased, but the "resolution" does not change. It's still 6 dB per bit, whether it's at -90 or -140 dB.
With all due respect, I am telling you what I hear first hand. I have had sessions that I have worked on in 24 bit and output to 16 and directly compared them to the same session where the source was converted to 16 before mix output and I am testifying first hand that there is an audible difference.
I'm not questioning what you hear, but I would like clarification on *why* you're hearing it. You have two different truncations going on at two different points in the process. Are they both post-converter simple truncations or are you outputting different word lengths from the converter? Or a combination of the two? Simply put, are there other variables creeping into the equation above and beyond simple word length that may be creating the difference you hear?

Just as with sample rate, where, depending upon the converter design and components used, some converters may sound better at one rate whereas others sound better at another, not because of the sample rate itself but because of the converter design. Also, different plugs can sound different at different word lengths or sample rates. Could there be a similar thing happening here somewhere along the signal path, or have all these possible variables been accounted for?

G.
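One of the variables Glen asks about, worth isolating, is *how* the word length gets reduced. A hedged sketch assuming NumPy, with a made-up -60 dBFS test tone: straight requantization to 16 bits versus adding TPDF dither before rounding.

```python
# Plain truncation to 16 bits leaves error correlated with the signal
# (distortion); TPDF dither before rounding trades that for a slightly
# higher but benign, noise-like floor.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
q = 1 / 32768                                   # one 16-bit LSB (full scale = ±1)
x = 0.001 * np.sin(2 * np.pi * 997 / 44100 * np.arange(n))   # ~-60 dBFS sine

trunc = np.round(x / q) * q                     # straight requantization
tpdf = rng.uniform(-q / 2, q / 2, n) + rng.uniform(-q / 2, q / 2, n)
dith = np.round((x + tpdf) / q) * q             # TPDF-dithered requantization

rms = lambda e: np.sqrt(np.mean(e ** 2))
rms_u = rms(trunc - x) / q                      # ≈ 0.29 LSB, signal-correlated
rms_d = rms(dith - x) / q                       # ≈ 0.50 LSB, noise-like
print(f"undithered error RMS: {rms_u:.3f} LSB")
print(f"dithered error RMS:   {rms_d:.3f} LSB")
```

The dithered error is marginally larger in RMS terms but is decorrelated from the program material, which is why two paths that both end at 16 bits can still sound different depending on where and how the reduction happens.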
 