The fact that I still track to tape first before dumping to DAW probably goes a long way toward keeping me "safe"...but I just want to be clear about a few things WRT your comments, as I may start doing more and more direct-to-DAW stuff.
In the cases you mentioned above...I'm assuming that their front end was
already crapping out, and THAT'S where the bulk of their pinched/unfocused/distorted sound was coming from.
IOW, to go back to the question I asked you a few posts back...
IF the signal coming out of the front end is clean/good...
but HOT...does the converter in any way cause sonic degradation
just 'cause the front end is feeding it a hot signal (but below clipping)?
IMO...most of the people with these bad/hot audio situations are the ones using the all-in-one boxes...so it's hard for them to
separate out what their built-in pre/DI/comp/EQ is doing vs. the actual A/D in their all-in-one box.
I would say that if you can monitor your front-end signal
before it gets to the A/D, and it sounds right, then the actual dBFS level is not all that critical as long as you are not clipping.
That said...since a lot of people may ONLY be monitoring their post-A/D signal...then yeah, to "play it safe", it may be best to just
visually stay in that safe zone.
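For anyone who wants to sanity-check that "safe zone" visually, here's a minimal sketch in Python. It assumes float samples normalized to ±1.0 (so 0 dBFS = full scale), and the -18 to -6 dBFS peak window is just an illustrative choice on my part, not any kind of standard:

```python
import math

def peak_dbfs(samples):
    """Peak level of a float sample buffer in dBFS (0 dBFS = full scale, 1.0)."""
    peak = max(abs(s) for s in samples)
    if peak == 0:
        return float("-inf")  # digital silence
    return 20 * math.log10(peak)

def in_safe_zone(samples, lo=-18.0, hi=-6.0):
    """True if the buffer's peak lands inside an illustrative 'safe zone' window."""
    level = peak_dbfs(samples)
    return lo <= level <= hi

# A peak sample of 0.5 sits right around -6 dBFS:
print(round(peak_dbfs([0.0, 0.5, -0.25]), 1))
```

The point being: the meter math is trivial, so it's the listening upstream of the A/D that's doing the real work.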
Since I don't use "all-in-one" boxes...even if I decide to track to DAW instead of going to tape first...my SOP would be to just
let the analog front end tell me where to set the levels and not really worry all that much about the DAW's dBFS level other than to make sure it's not clipping.
I'm sure if people follow that SOP, their levels would rarely be too hot anyway since the analog front end would/should "take care of it", and as long as they're
listening to what it's doing...I don't see how they could ever get into trouble.
Am I way off base here with that thinking?