Exporting files at 24 bit

paulybear
New member
If I'm exporting files from Cubase at 24 bit and they were recorded at 16 bit, am I doing damage to the files?

The reason is that I have some files at 24 bit and some at 16 bit, and I'd obviously prefer to master at 24 bit.

As you can see I'm somewhat of a beginner, so take it easy please.
 
It certainly won't hurt to go to 24-bit...

If you're mixing from a collection of 16-bit files, think of it as summing to 24-bit. Cubase is working in 32-bit float internally anyway (if I'm not mistaken).
 
if an entire project is all 16-bit audio... I guess there'd really be no point in going to 24-bit (waste of hard drive space)?

But then again, maybe you're taking it somewhere else to do more work on it, and then it would be beneficial to make it 24-bit... yes?

Then again, if you're just mastering yourself on a 16-bit system, will your effects be processed at 16-bit or 24-bit anyway?

i'm confusing myself... haha
 
It shouldn't hurt to change them to 24 bit. You won't get the full advantage of 24 bit, since the original A/D conversion wasn't done at 24 bit, but your plug-ins may sound a little better when processing at 24 bit. I'd bet the difference is pretty negligible, though. I have this personal belief that the biggest advantage of 24 bit happens at the initial A/D stage, not during the actual processing.

As far as mastering goes: if there's a chance 24 bit will work better with your plugins than 16 bit, and you have to choose between converting some 16-bit files to 24 bit or some 24-bit files to 16 bit, I'd make them all 24 bit. There are some slightly negative effects of changing a 24-bit file to 16, but none that I can think of from changing a 16-bit file to 24 :) Hope that all makes sense :D
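To make that concrete, here's a minimal Python sketch (my own illustration with bare integer samples, not anything Cubase actually does internally): widening 16-bit samples to 24 bit just shifts in zero bits and is perfectly reversible, while truncating 24 bit down to 16 permanently drops the low byte.

```python
# Illustration only: integer PCM samples, no audio library involved.

def to_24bit(sample_16):
    """Widen a 16-bit sample to 24 bit by shifting in eight zero LSBs."""
    return sample_16 << 8

def to_16bit(sample_24):
    """Narrow a 24-bit sample to 16 bit by truncating the low 8 bits."""
    return sample_24 >> 8

s16 = 12345
assert to_16bit(to_24bit(s16)) == s16      # 16 -> 24 -> 16: nothing lost

s24 = 0x123456                             # a sample that uses all 24 bits
assert to_24bit(to_16bit(s24)) != s24      # 24 -> 16 -> 24: low byte is gone
```

(Real converters dither rather than blindly truncate, but the asymmetry is the same: up-conversion is free, down-conversion costs you the low-order bits.)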
 
that's exactly the info I needed, thanks a million. I had another question: for mastering, is WaveLab with Waves and Ozone plugins better than using T-RackS?
 
I mix at 24 bit, then bounce to 16 bit to save space on CD or a USB stick. I then convert back to 24 bit for mastering. I know I'll lose some quality converting to 16 and back to 24, but the loss is very small if I'm mastering at 24 bit.
 
Converting back to 24-bit won't magically make it sound better; that's not how bit depth works. You can't put the quality back into a signal that's already been truncated to 16 bit. That said, there are tricks to getting some of your quality back later in the process.

For example, sonic maximizers can help compensate for generational loss from any number of things: bad A/D conversion, sample-rate conversion, a recording that wasn't done so well, etc. So you could always pass your final mix through a maximizer and spice it up a little.

If you do use one, be aware that it can make recordings sound too "crispy" or thin if not set properly.
 
Depending on what mastering software you intend to use, I would leave the mixes at the original bit depth and import them into a 24-bit (or 32-bit float) format during mastering.

Processing at 24 bit has advantages with regard to quantization distortion and some aspects of dithering. From the info I've received and debates I've had with Mr. Blackwood and Katz, it's more important to process at 24 bit than to record at anything more than 20 bit.
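For rough numbers behind the quantization point (this is the standard textbook approximation, not something from this thread): an ideal N-bit quantizer gives a dynamic range of about 6.02·N + 1.76 dB.

```python
# Ideal quantizer dynamic range: the standard 6.02*N + 1.76 dB approximation.

def dynamic_range_db(bits):
    return 6.02 * bits + 1.76

for bits in (16, 20, 24):
    print(f"{bits}-bit: ~{dynamic_range_db(bits):.1f} dB")
    # prints: 16-bit: ~98.1 dB, 20-bit: ~122.2 dB, 24-bit: ~146.2 dB
```

So 24-bit processing leaves roughly 48 dB more floor below a 16-bit source than the source itself needs, which is why intermediate rounding errors stay inaudible.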
 
Yeah, mastering at 24 bit will give better quality than mastering at 16 bit, even if the file was bounced to 16 bit and converted back to 24 bit along the way. Obviously you really want to keep it at 24 bit until burning.
 
Depending on the nature of the post-processing, 24-bit can sound significantly better, even with 16-bit source material. Digital audio has limited precision, determined by the number of bits: the more bits, the greater the precision.

When you sum multiple sources, you effectively lose precision from each input unless you increase the number of bits used to represent the output. So by mixing at 24-bit precision, you lose less quality at that stage.
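A toy example of that headroom problem (my own made-up numbers, not how any particular DAW sums its buses): add a few hot 16-bit samples and the total no longer fits in 16 bits, so a 16-bit bus has to clip or attenuate, while a wider accumulator keeps the whole sum.

```python
INT16_MAX = 32767
INT16_MIN = -32768

# Four 16-bit tracks, one sample instant each (illustrative values):
tracks = [30000, 25000, -10000, 20000]

wide_sum = sum(tracks)                               # fits fine in 24/32 bit
clipped = max(INT16_MIN, min(INT16_MAX, wide_sum))   # what a 16-bit bus keeps

print(wide_sum, clipped)   # 65000 32767: everything above full scale is lost
```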

When you then do the final normalize and process it down to 16-bit/44.1 kHz (or whatever your final output format is), you're basically taking a 16-bit-tall slice out of the, say, 22.5 bits of dynamic range that you actually used, and representing that slice as 16 bits in the output. The result is a full 16 bits of precision at the output stage. That's a lot better than you would get if you mixed down to 16 bits of precision initially and then stretched the 14.5 bits of actively used precision into 16.

Of course, if your output is perfectly normalized at 16 bits to begin with and you aren't doing any post-processing, then the issue is moot: the results should be exactly the same, assuming whatever you use for post-processing handles the down-conversion in the same way. In that case, it's just a question of which tool does the rounding/truncating/dithering.
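As a sketch of why that last step can differ between tools (this is a generic TPDF dither, not any specific plugin's algorithm): plain truncation produces an error that is correlated with the signal, while adding a little triangular noise before the shift turns that error into benign, signal-independent hiss.

```python
import random

def truncate_to_16(sample_24):
    """Plain truncation: just drop the low 8 bits."""
    return sample_24 >> 8

def dither_to_16(sample_24, rng=random.Random(0)):
    """TPDF dither: add triangular noise spanning about +/-1 16-bit LSB,
    then truncate. The result lands within one step of plain truncation,
    but the rounding error is decorrelated from the signal."""
    noise = rng.randint(0, 255) + rng.randint(0, 255) - 255
    return (sample_24 + noise) >> 8

s = 0x00ABCD                     # an arbitrary 24-bit sample
print(truncate_to_16(s), dither_to_16(s))
```

Different tools make different choices here (truncate, round, or dither, and with which noise shape), which is why two "16-bit exports" of the same mix can measure slightly differently.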
 