Ethan Winer
Acoustics Expert
Folks,
A week ago I invited folks here to compare a series of audio files, to see how audible the differences are between dithering from 24 bits down to 16 and simply truncating. I also included files that were truncated to (approximately) 13, 11, and 9 bits, which correspond to recording at peak levels of -20, -30, and -40 dB. There are five files altogether, as described on my web page at www.ethanwiner.com/bitstest.html. Here are the results:
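The post doesn't say which tools were used to prepare the files, but the two treatments being compared can be sketched in a few lines of numpy. This is a minimal illustration, not Ethan's actual processing chain: plain truncation versus TPDF (triangular probability density function) dither, which is the common flavor of dither when reducing word length. The 6.02 dB-per-bit rule is also what links the -20/-30/-40 dB peak levels to roughly 13, 11, and 9 bits.

```python
import numpy as np

def truncate(samples, bits):
    # Plain truncation: drop the low-order bits, keeping 'bits' of resolution.
    q = 2 ** (bits - 1)
    return np.floor(samples * q) / q

def tpdf_dither(samples, bits, seed=0):
    # TPDF dither: add triangular noise of roughly +/-1 LSB, then round.
    # This decorrelates the quantization error from the signal, trading
    # grainy, signal-correlated distortion for benign low-level hiss.
    q = 2 ** (bits - 1)
    rng = np.random.default_rng(seed)
    noise = (rng.random(len(samples)) - rng.random(len(samples))) / q
    return np.round((samples + noise) * q) / q

# Each bit is worth about 6.02 dB of dynamic range, so peaking at -20 dB
# on a 16-bit recording leaves roughly 16 - 20/6.02, i.e. about 13 bits
# of effective resolution (likewise -30 dB -> ~11 bits, -40 dB -> ~9).
```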
I received 23 replies altogether, and only one person identified all five files correctly. Not to minimize that achievement, but since the two worst files were fairly obvious, one person in 23 also getting the order of the remaining three files correct falls well within the bounds of chance. Then again, his comments ("#1 was very clean throughout ... the guitar string buzz on #4 was more grainy") suggest that perhaps he really did hear the changes in quality!
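As a quick sanity check of that "normal probability" claim, assume each listener who correctly spots the two obvious files orders the remaining three by independent random guessing:

```python
from math import factorial

# With the two worst files identified, three files remain; random
# guessing gets their order right with probability 1/3! = 1/6.
p_single = 1 / factorial(3)

# Chance that at least one of 23 independent guessers hits that order:
p_at_least_one = 1 - (1 - p_single) ** 23

print(f"{p_single:.3f}")        # 0.167
print(f"{p_at_least_one:.3f}")  # 0.985
```

Under those assumptions, exactly one correct ordering out of 23 replies is almost what pure chance would predict.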
The short answer is that most folks were not able to distinguish any of the files. This comment is typical: "To tell the truth I couldn't hear any difference." In fact, one person thought the dithered file (the best one) was the worst, and a few picked the worst one (9 bits) as the best! So it seems that grit - whether caused by analog tape or digital artifacts - can make a recording sound "better" even if it is in fact worse in terms of pure accuracy. Note that the totals below do not add up to 23 because most people got some right but missed others.
Number of people who admitted they heard no difference in any of the files: 4
Number of people who thought the best file sounded worst: 1
Number of people who thought the worst file sounded best: 4
Number of people who correctly identified the two worst (11- and 9-bit) files: 9
Everyone who sent me their guesses has been sent the list of files in order of degradation. I am not posting the order here, so that others who want to try the test can still do so in the future. After listening to the files, email me which you think is which and I'll send the results by return email.
Failings of this test: Several people pointed out that the real advantage of 24-bit recording is when an entire project is recorded and mixed at 24 bits, because the increased resolution helps minimize errors that accumulate from gain changes and plug-in processing. At some point I hope to test that too. And though nobody mentioned this, I'll add that another possible advantage of 24-bit recording is the smoother decay of instrument and reverb tails. Neither of these was tested here, so all this test can claim to show is how important 24 bits and dithering are when used with full-level audio. However, this experiment also shows that recording at an average level of -20 dB to avoid distorting - another claimed advantage of using 24 bits - is probably not as harmful to audio fidelity as many would think.
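The accumulation effect mentioned above is easy to demonstrate with a toy experiment (again, this is not part of the listening test, just a numerical sketch): apply several gain changes to random "audio", requantizing after each change as a workstation would when storing at a fixed word length, then measure how far the result has drifted from the original.

```python
import numpy as np

def requantize(x, bits):
    # Round back to a fixed grid, as storage at 'bits' resolution would.
    q = 2 ** (bits - 1)
    return np.round(x * q) / q

rng = np.random.default_rng(1)
signal = rng.uniform(-0.5, 0.5, 10_000)
gains = (0.7, 1.3, 0.9, 1.1, 0.8, 1.25)   # six arbitrary processing passes

rms_err_db = {}
for bits in (16, 24):
    x = signal.copy()
    for g in gains:
        x = requantize(x * g, bits)       # quantize after every gain change
    x /= np.prod(gains)                   # undo the net gain to compare
    err = np.sqrt(np.mean((x - signal) ** 2))
    rms_err_db[bits] = 20 * np.log10(err)
# The accumulated error at 16 bits lands tens of dB above the 24-bit case.
```

The absolute error is tiny either way, but the gap between the two word lengths grows with every processing pass, which is the usual argument for staying at 24 bits until the final dither down to 16.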
Thanks to all who participated!
--Ethan