How bad is truncation really?

teainthesahara

New member
If I'm moving audio from a 24-bit device to a 16-bit device via optical cable with no dithering employed, what do you actually hear as a result? I've heard the term "grainy" used to describe this. Is it an obvious "grainy" sound, or is it something subtler (a 'not so obvious' example: simply tracking at 16-bit, i.e. the effect of less headroom, a worse noise floor, less natural reverb tails, etc.)?

T
 
If the converters are good you won't hear a grainy sound.
I'm a sceptic on this issue. Keep in mind that plenty of huge hits were made years ago using 16-bit mastering, for example.

There are a billion arguments and positions on this. Just visit the rec.audio.pro newsgroup on Google Groups and you'll see them.
 
Where you might hear it is in reverb tails and fades... you won't hear any difference with high-level signals, only when the levels start fading down toward −infinity.
 
Thanks bluebear and manning1,

So, my current understanding is that losing those 8 bits via truncation (between two hardware devices) should only affect the signal when low levels predominate (e.g. as a struck chord fades out on a guitar). Sources recorded hot, whose transients make up a lot of the sound (e.g. drums), will be less affected, or not affected at all, by the truncation.
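That reasoning can be sketched numerically. A rough Python illustration (hypothetical levels, not a measurement of any real converter): truncating a 24-bit sample to 16 bits discards the low 8 bits, an error of at most 256 24-bit steps (roughly −96 dBFS), which is negligible next to a loud signal but large next to a fading tail.

```python
import math

def truncate_24_to_16(sample_24):
    """Drop the low 8 bits of a 24-bit sample (no rounding, no dither)."""
    return sample_24 >> 8  # now a 16-bit value

# Compare the discarded error against signals of different levels.
for level_db in (-6, -60, -90):
    amp = int((2**23 - 1) * 10 ** (level_db / 20))  # peak 24-bit amplitude
    err = amp - (truncate_24_to_16(amp) << 8)       # what truncation threw away
    rel_db = 20 * math.log10(err / amp) if err else float("-inf")
    print(f"{level_db:4d} dBFS signal: truncation error {rel_db:6.1f} dB below it")
```

The error stays fixed in absolute size, so the quieter the material, the closer it sits to the signal itself.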

That's about right, right?

T
 
Yes. Maybe others can confirm this, but I've heard the very high-end converters, like Mytek and others, don't have these problems.
But the question is: do you want to spend thousands on superb high-end converters? All of life's a compromise!
These days everything often gets converted to MP3 anyway, which kind of makes the whole effort questionable, IMHO.
One compromise with lower-end sound cards is to carefully edit the effect tails if you can. Some people are masters at this, I've been told. Peace.
 
manning1 said:
Yes. Maybe others can confirm this, but I've heard the very high-end converters, like Mytek and others, don't have these problems.
But the question is: do you want to spend thousands on superb high-end converters? All of life's a compromise!
These days everything often gets converted to MP3 anyway, which kind of makes the whole effort questionable, IMHO.
One compromise with lower-end sound cards is to carefully edit the effect tails if you can. Some people are masters at this, I've been told. Peace.

Thanks for the advice, manning.
Just so you know, I'm not really looking at this from a 24-bit vs. 16-bit perspective, nor from a converter-quality one. What I'm after is the sound of truncation itself when stepping down from 24 to 16 bits between hardware devices (i.e. no dithering employed). How does that grainy sound really sound? My impression so far is that it won't be obvious: no more obvious than simply using the stock converters found on prosumer digital mixers (Boss/Korg/Fostex) and recording at 16-bit.
 
To avoid being misunderstood here: We are not trying to persuade you that external dither is altogether pointless. Theoretically, the ADI-8 PRO could benefit from sophisticated dither or noise-shaping when transferring data to 16-bit media. In reality, however, DC-free converters and the limitations posed by real recording environments negate any advantages that dither might bring.
- RME article


Mixsit, thanks for posting that. Is there any reason why the results they got using the ADI-8 would not likewise be found in other A/D units (e.g. budget stuff: Alesis AI3, ART Dio)?
 
teainthesahara said:
- RME article
Mixsit, thanks for posting that. Is there any reason why the results they got using the ADI-8 would not likewise be found in other A/D units (e.g. budget stuff: Alesis AI3, ART Dio)?
Thanks for the thought. ;) I read what's out and about, but I have to take it with only partial understanding.
They also point to their bottom noise level as providing 'dither noise'.
To answer your question: I haven't got a clue! :p
 