Dithering question

I have a bit of a dilemma with my tracks. They are all recorded in 24 bit, but my KORG D1600 only allows me to mix down 8 tracks at 24 bit. It also allows me to mix down 16 tracks at 16 bit. The song I’m working on has many tracks and really could use the 16. I can import them (as 24 bit) to a 16 bit song with 16 tracks, but then mixing down would crunch it into two tracks that are 16 bit. The question I have is, should I dither all the tracks I have down to 16 before I copy them to a 16-bit song to mix them? Or, is this a waste of time and not going to make a difference? I’ve read that not dithering in a transition between 24 and 16 adds noise to the recording. Does this also apply to mixing many 24 bit tracks down to two 16 bit tracks?

The other option seems to be mixing down the drums to two tracks, guitars down to four tracks, lead and harmony vocals down to one track, keeping the bass on one track, and then mixing all of that down to 24 bit. I swear, though, that some of the nuances of the instruments are lost with this method. (The drums especially sound kind of squished in.) Any thoughts? Thanks.
 
re:

It is confusing :confused: - my apologies. I think the most important sentences are these two:

I’ve read that not dithering in a transition between 24 and 16 adds noise to the recording. Does this also apply to mixing many 24 bit tracks down to two 16 bit tracks?

Dithering is often done to whole mixes after they've been mixed to 24 or some higher bit depth. The mixes are dithered down to 16 before they are put on a CD. I've seen in several sources that not dithering adds noise to the tracks when they are put to CD. In this case, I am mixing 24 bit tracks down to two 16 bit tracks. In essence, I am bringing it down to 16 before dithering. Would this form of mixing add noise?
 
I’ve read that not dithering in a transition between 24 and 16 adds noise to the recording. Does this also apply to mixing many 24 bit tracks down to two 16 bit tracks?
Dithering *IS* noise. Here's the fast (hopefully) explanation - slightly over-simplified for brevity:

Simply cutting off the last 8 bits of a sample is called "truncation": you remove the last 8 bits so that your 24-bit sample is now only 16 bits long, but those first 16 bits remain exactly the same.

"Dithering" in this context is the act of adding quasi-random or patterned noise to the lat bits of your samples after truncating to 16 bits; i.e. whether the last bits remain their original 1 or a 0 value is determined by the dithering algorithm, not the actual original signal value.

The argument behind dithering is that dithering the 16th bit after amputating the last 8 bits sounds more "natural" or "pleasing" than simply amputating the last 8 bits alone. It tends to make the cut-off less artificially abrupt and more naturally "fuzzy".
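If it helps to see those two options in plain numbers, here's a bare-bones sketch (NumPy; the sample values and function names are made up for illustration, not taken from anyone's actual tool):

```python
# Illustration only: what "truncation" vs "dithering" means numerically
# when a 24-bit integer sample is reduced to 16 bits.
import numpy as np

rng = np.random.default_rng(0)

def truncate_24_to_16(samples_24bit):
    """Drop the bottom 8 bits: keep only the top 16 bits of each 24-bit sample."""
    return samples_24bit >> 8              # the first 16 bits are left exactly as they were

def dither_24_to_16(samples_24bit):
    """Add roughly +/-1 LSB (at the 16-bit level) of TPDF noise, then drop the bottom
    8 bits, so the final bit is partly decided by the noise rather than the signal."""
    tpdf = rng.integers(0, 256, samples_24bit.shape) + \
           rng.integers(0, 256, samples_24bit.shape) - 256
    return (samples_24bit + tpdf) >> 8

# A quiet 24-bit signal whose peak is only a few 16-bit LSBs tall
n = np.arange(48000)
quiet = (1000 * np.sin(2 * np.pi * 440 * n / 48000)).astype(np.int64)
print(truncate_24_to_16(quiet)[:10])
print(dither_24_to_16(quiet)[:10])
```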

Ethan's point - and despite our previous dust-ups over this subject, I mostly agree with his general point - is that whether or not one dithers that single 16th bit is irrelevant, because a one-bit change is such a minuscule change to the samples that it's virtually, if not downright completely, inaudible.

I'm not quite willing to use the absolute word "completely"; I believe there are mitigating variables that in some cases can allow some finely-tuned ears to hear some differences with the right music using the right dithering algorithms, but I agree completely with the idea that it's 99 times out of 100 not worth thinking twice about, and that for the average home recordist with the average gear and skills, it's so far down the list of issues affecting the sound of their recordings as to be a virtual non-issue.

G.
 
Any time you reduce word length (whether on one track, a 'mix' of tracks, etc.) you 'truncate' data - you use fewer numbers to represent the source - and that truncation inevitably introduces 'quantization' error. Dithering introduces noise in an attempt to ameliorate awareness of those errors: to randomize the error and, with noise shaping, to move it to less-noticed frequency bands.
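And a very crude sketch of the noise-shaping idea - a first-order error-feedback quantizer; the function name and the 8-bit reduction are just for illustration, not any particular product's algorithm:

```python
# Illustration only: first-order error-feedback ("noise-shaped") requantizer.
# Each sample's rounding error is fed back and subtracted from the next input,
# which pushes the error spectrum toward high frequencies where it's less audible.
import numpy as np

def noise_shaped_requantize(x_24bit, drop_bits=8):
    step = 1 << drop_bits                  # one output LSB, in input units
    rng = np.random.default_rng(1)
    out = np.empty(len(x_24bit), dtype=np.int64)
    err = 0
    for i, x in enumerate(np.asarray(x_24bit, dtype=np.int64)):
        d = int(rng.integers(0, step)) + int(rng.integers(0, step)) - step  # TPDF dither
        v = int(x) - err                   # subtract the previous quantization error
        q = ((v + d) // step) * step       # dithered rounding onto the coarser grid
        err = q - v                        # error to feed back into the next sample
        out[i] = q >> drop_bits            # express the result at the shorter word length
    return out
```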

Roughly speaking, you try to avoid dithering the dither, i.e. dither once (if at all) in the record, edit, mix, master process.

It has been a number of years since I read it, but I remember iZotope's mastering guide discussing dither relatively lucidly.

But as mentioned by another poster, dither is not something you need to obsess over. In projects that initiate and remain in house I work almost exclusively with floating-point files, but I also intersect with all sorts of other work: analog recordings, 16 bit, 24 bit, etc. I convert word lengths all the time (32 bit here, down to 24 for a client to add tracks on his integer-only system, importing additional tracks recorded at 16 bit remotely by a client, etc.) without dithering. It has been years since I even attempted a serious review of the impact of dither, multiple dithers, etc. But after about fifteen years of digital recording, what I can say is that no one has, yet, ever said, "What this tune needs is more dither!"

General rule of thumb: dither once, in the final conversion down to the glass master (or whatever). It is unlikely that even you will notice an accidental double dither until you pile up a couple of hundred dithers (without noise shaping you might notice before you hit a hundred).
 
I can tell you that dither is mostly irrelevant for typical pop music.

I don't know what kind of playback system you have Ethan, but it could be time for that hearing test.

Dither doesn't matter?
Cable quality doesn't matter?
Converter quality is all the same?

Believe me, I am not going to get into an argument about it with you, but you're telling kids just starting out (future engineers) that dither doesn't matter in pop music, and that's not true.

Why not tell them to put on a pair of quality headphones through a quality amp and turn a dither plug-in on and off on the appropriate material and see if they can HEAR the difference?

20-year-olds tend to have very good HEARING. ;)
 
Why not tell them to put on a pair of quality headphones through a quality amp and turn a dither plug-in on and off on the appropriate material and see if they can HEAR the difference?

20-year-olds tend to have very good HEARING. ;)

Not the same thing. Sure, dither algorithms need to introduce noise, and depending on the algo, that noise could be barely audible or quite audible.

But Ethan's point is that dither is unnecessary to prevent quantization distortion on any real-world acoustic signal, and he's correct.

I used to argue with him, until I tried as hard as I could to induce measurable quantization distortion on a real-world acoustic source recorded at 24 bit and truncated to 16 bit. Sure, everybody can measure QD on a digitally-generated signal of sufficient simplicity, that's child's play. But try it on a real source, it's extremely difficult.

We know that more complex signals suffer from less QD than simpler signals--you can demonstrate that for yourself with various synth waves as long as you like. So if there is QD to be measured, we need to start with the easiest, simplest signal, because that will be the worst case.

Also, to be measurable, the signal has to be simple enough to see the distortion. Not too hard, I used an organ pipe with relatively few overtones. A flute should work OK, something like that, few overtones. You get the idea.

Then you need to get the recording as noise-free as possible. I used a KSM44 at about 12 inches from the source.

Next, I decided let's make this really interesting--while still at 24 bit, I applied a looong fade to black over the recorded -6dBFS peak signal. As that level drops, QD should get worse on truncation, correct? And since I was reducing my noise floor at the same time, there should have been less signal in that new LSB to self-dither.

The result? I couldn't measure any QD whatsoever, all the way to 16 bit black. If you like, I'll dig up the test file, it's still on my server somewhere . . .
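Roughly, the kind of thing I did looks like this - a simplified sketch of the method, not my actual files or tools ("organ_24bit.wav" is just a placeholder name):

```python
# Sketch: take a quiet 24-bit acoustic recording, apply a long fade, truncate to 16 bits,
# and look at the spectrum of the truncation error for signal-correlated spikes.
import numpy as np
import soundfile as sf

x, fs = sf.read("organ_24bit.wav", dtype="float64")    # placeholder name for a 24-bit capture
if x.ndim > 1:
    x = x[:, 0]                                        # one channel is enough for this

x_faded = x * np.linspace(1.0, 0.0, len(x))            # the long fade to black

x16 = np.floor(x_faded * 32768.0) / 32768.0            # truncate to the 16-bit grid, no dither
err = x16 - x_faded                                    # the truncation error itself

# If quantization distortion is present, this error spectrum shows peaks at the source's
# harmonics; if the recording "self-dithers", it just looks like a flat noise floor.
spectrum = np.abs(np.fft.rfft(err * np.hanning(len(err))))
freqs = np.fft.rfftfreq(len(err), 1.0 / fs)
peak = freqs[np.argmax(spectrum[1:]) + 1]
print(f"strongest error component is near {peak:.1f} Hz")
```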

Once you get to the complexity of a finished pop tune, it's a near certainty that no QD will occur on truncation. Will the sound of the signal change? Sure, but no more than any dither you might select. On truncation, "self-dither" should in theory add the same amount of (uncorrelated) noise as a dither algo, and obviously on headphones that can become audible.

So dither or not depending on whether you like the sound of your dither algo. Heck, maybe don't noise-shape it, plenty of people seem to prefer a bit of added noise (and IM distortion, not too different than QD actually), given the summing craze.
 
Thanks for the posts. It was informative to read (Glen) what is going on when one does or does not dither. The experiment of mshilarious was also interesting.

Based on what everyone is writing, I’m leaning towards not doing it. It seems that there would be just about no difference. By not dithering at that early stage, I’d be mixing 24 bit tracks down to two 16 bit tracks. Also, I would not have to dither the mix at all, because it would already be at 16.

This is pretty unorthodox by today’s standards though. How many people here mix numerous 24 bit tracks down to two 16 bit ones? Everyone I know mixes down to 24 bit (if they can).
 
I'd be willing to bet that in 99% of home recordings - 24 bit vs. 16 bit is nearly irrelevant - and probably the least of most people's worries.........


Need proof? Go listen to JJR by Lyle Lovett.......

.
 
How many people here mix numerous 24 bit tracks down to two 16 bit ones? Everyone I know mixes down to 24 bit (if they can).
Like oretez said, the general rule of thumb is to not reduce bit length until you have pretty much everything else done; do all your work in at least as many bits as you recorded in (more, if you want to work in floating point), and then shorten to 16 when you're ready to print.

As far as "mixing down to 16 bit", that phrase really describes two separate processes. Mixing down - or summing to stereo - is one step and changing bit length is a second one. Maybe some DAWs allow you to do both in one fell swoop, but when you break down what it's doing, it's really got to do them sequentially, either the tracks are reduced in bit length and then summed, or they are summed and then the result is bit length adjusted (I assume the latter.)

All else being equal, it remains a good idea to save the mixdown at 24 bit. You never know when you might want or need to re-master it for special playlist use or a compilation disc, or simply a change in personal taste; and it's best to have a full 24 bits to work with if you can.

G.
 
Thanks for the posts. It was informative to read (Glen) what is going on when one does or does not dither. The experiment of mshilarious was also interesting.

Based on what everyone is writing, I’m leaning towards not doing it. It seems that there would be just about no difference. By not dithering at that early stage, I’d be mixing 24 bit tracks down to two 16 bit tracks. Also, I would not have to dither the mix at all, because it would already be at 16.

Sure you would (if you need to), that's still a truncation. As Glen said, save your final mix at 24 bit. There is not really much question that 24 bit is technically better than 16 bit, even though you might need to listen very critically on headphones to hear the difference.

The question I am addressing is whether any of the various flavors of dither are technically better than truncation on actual material, not whether 24 bit is better than 16 bit.
 
Sure you would (if you need to), that's still a truncation. As Glen said, save your final mix at 24 bit. There is not really much question that 24 bit is technically better than 16 bit, even though you might need to listen very critically on headphones to hear the difference.

The problem with this method though is that I can't save the mix at 24 bit. It's automatically converted to 16 when mixed down. The many tracks stay at 24, but the resulting two tracks from the bounce go to 16.

SouthSIDE Glen said:
As far as "mixing down to 16 bit", that phrase really describes two separate processes. Mixing down - or summing to stereo - is one step and changing bit length is a second one. Maybe some DAWs allow you to do both in one fell swoop, but when you break down what it's doing, it's really got to do them sequentially, either the tracks are reduced in bit length and then summed, or they are summed and then the result is bit length adjusted (I assume the latter.)

I don't know what the system is doing, but it may well be the latter as you say. I don't have control over this, and can't make the resulting mix stay at 24 unless I import the tracks to an 8 track song.

Knowing that I can't keep a mix at 24 when using 16 tracks, does this change your opinion? (I direct this question to everyone.)


The question I am addressing is whether any of the various flavors of dither are technically better than truncation on actual material, not whether 24 bit is better than 16 bit.

I knew this; I just made a general statement that kind of covered the tone of all the responses. The argument that dithering may not be technically better than not dithering was, at the time, another indication that it may not be a necessary task.
 
Knowing that I can't keep a mix at 24 when using 16 tracks, does this change your opinion? (I direct this question to everyone.)
Does the Korg allow you to save the individual tracks as 24-bit and then either burn them direct to CD or export them to computer? If so, you could always record your tracks using the Korg and then move the 24-bit WAV files to your PC for mixing there.

Otherwise I'd say, you gotta do what you gotta do. As has been expressed here both explicitly and implicitly, esoterica like word length, truncation and dithering are fairly low on the list of things that'll make or break the quality of your sound.

Better to have music at 16-bit than nothing at all at 24-bit :).

G.
 
Does the Korg allow you to save the individual tracks as 24-bit and then either burn them direct to CD or export them to computer? If so, you could always record your tracks using the Korg and then move the 24-bit WAV files to your PC for mixing there.

Yes it does, but I actually wanted to utilize the on board effects and compression of the Korg. For some reason, it sounds better than what I have on my PC, even though I use the PC for other tasks pertaining to recording.

Just to clarify, because I think we're in pretty confusing territory, the tracks that fill the 16 slots stay in 24 bit. It's just the two resulting bounced tracks (when I mix those 16 slots down) that go to 16. I think Glen understands me, but because this is such a difficult problem to explain, I just wanted to make sure that what I said makes sense.

I think I have my answer - I'm probably just going to bounce to the two 16 bit tracks. When I crunch everything together just to stay in 24, it seems that many a nuance is lost.

So, for the future :), would you say it is better to just work in 16 bit in my set up? Or, is there a (very slight) benefit to using 24 and mixing to 16?
 
The last thing I want to do is argue about dither, but I think it's a good practice to get into, and I would highly recommend using dither when you "have" to reduce the word length. I think the issue is even more relevant with eight miles high's Korg, because you are dealing with the cumulative effect of many tracks not being dithered.


The quote I found below explains a bit about dither and is easy to understand, and then people can draw their own conclusions:

Because 24 bit gives you 144 dB of dynamic range, noise is no longer an issue even when you have really loud playback levels. For example, if you are relatively close to a jet engine {one taking off} you won't be able to hear the footsteps of any person nearby; that's because our hearing threshold starts at 0 dB and goes up to the threshold of pain at around 130 dB SPL. So, a more noticeable problem would be when you have really loud levels, where you get a relatively higher background noise level because you're unavoidably amplifying that circuitry noise, which some people call thermal noise and some just call "analog warmth".

Obviously, these are just my opinions, but if you read on, you may find some validity to my ideas. To me they are valid, but it's not like I'm an equipment designer or an expert in electronics. I'm just presenting some interesting observations and nothing more. It's up to you to draw your own conclusions.

Here are some dithering tests; the files should be small enough not to be a problem to download, and the differences should be clear or noticeable enough to be heard even from a laptop computer.

These samples generate a tone signal in double-precision floating point, with roughly 300 dB of dynamic range. In addition, the mathematics were done at that precision until the signal was converted to samples.

One test is a 3 kHz tone lasting 4 seconds which fades linearly from full scale down to zero. The "no_dither" file is truncated to 7 bits, and the "dithered" file is, of course, dithered and then truncated to 7 bits.

http://musicmasteringonline.com/test/no_dither.wav

http://musicmasteringonline.com/test/dithered.wav

The second test is essentially the same, except that it sweeps linearly from 1 kHz to 5 kHz; as before, there are both dithered and non-dithered versions at a 7-bit depth.

Note that as the signal level slopes down, you can clearly hear a change in the oscillation in the non-dithered versions {sounding very similar to aliasing}, and also notice that near the end of the tones, as the signal gets smaller and smaller, these sine waves sound closer to square waves than sine waves. In the dithered versions, the signal is buried in dither noise but the sine wave still sounds like a sine wave - and that is from a 7-bit file, which has only about a 42 dB signal-to-noise ratio.

http://musicmasteringonline.com/test/sweep_no-dither.wav

http://musicmasteringonline.com/test/sweep_dithered.wav

So, the bottom line is that dither doesn't get rid of artifacts, but spreads them out more uniformly: instead of being highly correlated with the signal, they are spread out as background noise. You can clearly hear this in the sample tests, where the dithered versions are noisier but preserve the qualities of the test tone. Noise-shaped dither goes a step further and rearranges the noise spectrum so that it is higher in the frequency bands we are least able to perceive and lower where we are more sensitive.
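If anyone would rather generate something similar than download the files, here is an approximate reconstruction of the first test. I'm assuming 44.1 kHz and plain TPDF dither, and the filenames are my own; the posted files may have been made differently:

```python
# Approximate reconstruction of the fading 3 kHz test tone, truncated to 7 bits
# with and without TPDF dither (assumptions: 44.1 kHz sample rate, plain TPDF).
import numpy as np
import soundfile as sf

fs = 44100
t = np.arange(4 * fs) / fs
tone = np.sin(2 * np.pi * 3000 * t) * np.linspace(1.0, 0.0, len(t))   # 3 kHz, fading to zero

step = 2.0 / (2 ** 7)                                   # quantization step of a 7-bit signal
# 7 bits is only about 7 * 6.02 = roughly 42 dB of signal-to-noise range,
# so both the distortion and the dither noise are easy to hear.

truncated = np.floor(tone / step) * step                # no dither: plain truncation
tpdf = (np.random.rand(len(t)) + np.random.rand(len(t)) - 1.0) * step  # ~ +/-1 LSB TPDF noise
dithered = np.floor((tone + tpdf) / step) * step        # dither first, then truncate

sf.write("no_dither_7bit.wav", truncated, fs, subtype="PCM_16")        # my own filenames
sf.write("dithered_7bit.wav", dithered, fs, subtype="PCM_16")
```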
 
Man, this dithering thing has turned into a classic textbook case of each side picking a doped experiment that supports their own viewpoint but does little to nothing to advance the true reality or science of the subject.

It's a very tough, abstract subject, especially for artistic types who may tend not to be so mathematically inclined. OTOH, the math geeks out there tend to fail in their understanding of the translation of the math to the reality of music.

It's the same story with other similar subjects like jitter, sample rate, and why the press doesn't have the common adult decency to leave Paris Jackson alone already.

In such situations, I say just let your ears decide. If you feel that dithering makes your mix sound better, then do it. If you feel it's a waste of time, then don't waste your time. It's one of those procedures that does more for the doctor than it does for the patient.

The only thing I can guarantee is that - unlike a thousand other far more important production decisions - whether you choose to dither or not to dither, it will have approximately ZERO effect on how your recording is received by the public.

G.
 
So, the bottom line is that dither doesn't get rid of artifacts, but spreads them out more uniformly: instead of being highly correlated with the signal, they are spread out as background noise. You can clearly hear this in the sample tests, where the dithered versions are noisier but preserve the qualities of the test tone. Noise-shaped dither goes a step further and rearranges the noise spectrum so that it is higher in the frequency bands we are least able to perceive and lower where we are more sensitive.

Yes, those are test tones; as I said, child's play. Repeat with an acoustic source at 16 bit, please. Of course, as bit depths shrink to silly levels, QD is a problem.

As I said, I can repost links to my test files, they will show this quite clearly. I am willing to admit that someone else might be capable of producing an acoustic recording that shows measurable QD on truncation to 16 bit, but I can't do it. And if it's that hard, a finished pop tune is nearly certain to have no QD on truncation.
 
In such situations, I say just let your ears decide. If you feel that dithering makes your mix sound better, then do it. If you feel it's a waste of time, then don't waste your time. It's one of those procedures that does more for the doctor than it does for the patient.

Of course, let your ears decide, but if one cannot scientifically establish that dither is necessary on a real-world track, then one must accept that choosing between the added noise of dither and the added noise of truncation is simply a selection between different effects. It's like choosing which bus compressor you prefer.

Now, if your recording is mostly relatively simple synthesized sounds, you should probably strongly consider dither.

mshilarious' theorem: any real-world acoustic source is sufficiently complex (and/or noisy) to self-dither to 16 bit resolution.

The only thing I can guarantee is that - unlike a thousand other far more important production decisions - whether you choose to dither or not to dither, it will have approximately ZERO effect on how your recording is received by the public.

Yes.
 
The only thing I can guarantee is that - unlike a thousand other far more important production decisions - whether you choose to dither or not to dither, it will have approximately ZERO effect on how your recording is received by the public.

G.

Thanks for the dose of reality. Seems that other areas should be arresting my attention instead.

It's a very tough, abstract subject, especially for artistic types who may tend not to be so mathematically inclined. OTOH, the math geeks out there tend to fail in their understanding of the translation of the math to the reality of music.

Yeah, but the problem is that I am an artistic type and a math geek :mad::eek:!!! Understand the torment I'm going through? :):)
 