I thought it was best to record around -18dBFS

  • Thread starter: djclueveli
The guy who started this thread was talking about the digital peak levels and nothing else. The discussion that he quoted also only talked about that.

True, there is the issue of the analog gear that runs ahead of the A/D converter, and IMHO it's just as important an issue as the converter peak levels. But it's not the topic that was raised!

Shouldn't we start a separate thread discussing the matching of analog pres etc. to converters? Are you guys talking me down because, unlike you, I stuck to the topic?

Tim
Talking down? Hell no. The only thing I've done is try to contribute to the demise of the notion that 'hot is better'.
:)
 
If you figure you're peaking somewhere in the -3dBFS to -9dBFS range (with your digital inputs set at unity), then you're probably driving the analog input of your converter within a few dB or so of what it's designed for.



G.

Exactly. I think a lot of people are reading WAY too much into this. I'm a simple man: I just set my analog mix console track fader to a flat 0 and slowly adjust the gain trim pots until that track's level meter is barely peaking and never goes over the red zone (0).

And when you look at your computer's DAW (I run my console into my computer's sound card), it should read that the signal is coming in somewhere in the range of -8 to -5 dB on your DAW track. BUT THAT'S full scale, not analog scale. That's what everyone seems to be questioning in here, the whole analog/digital conversion thing...

...Who cares! Just set it up so your analog input (mic pre/whatever) isn't clipping over zero, and whatever it comes up as in your digital DAW... it... is... what... it... is!!!

Way too many people are reading too far into this!

I'm sure I'm gonna get negative feedback for this post, and "red chicklets"...

...but it's just how I feel.
 
Tim Gillett said:
The guy who started this thread was talking about the digital peak levels and nothing else. The discussion that he quoted also only talked about that.

True, there is the issue of the analog gear that runs ahead of the A/D converter, and IMHO it's just as important an issue as the converter peak levels. But it's not the topic that was raised!

Shouldn't we start a separate thread discussing the matching of analog pres etc. to converters? Are you guys talking me down because, unlike you, I stuck to the topic?
*sigh* I really don't understand why everyone finds gain structure to be such a hard subject.

Tim, no one has gone off-topic here; we're all talking about digital recording levels. The point that I was trying to make is that the key to understanding good digital recording levels is understanding the A/D conversion itself. Put another way, digital levels revolve entirely around the actual conversion factor itself; digital levels cannot be treated separately from the analog levels going into the converter. One determines the other, and vice versa.

The digital dBFS scale is not a standalone recording scale independent of analog levels. The calibration of the converter is what decides how many dBFS ("y") wind up coming out of a given analog signal ("x" dBu) for any given recording signal chain.

The fact is, there's a reason why the engineers who design and build this gear we use, misuse, and abuse calibrate the converters the way they do - namely so that line level converts to some 18dB (give or take) below 0dBFS. They want the engineer to be able to pump "normal" levels (i.e. averaging somewhere near line level, or 0VU) into the converter - levels that the converter is designed to operate at, just like every other piece of gear in the chain - and have the digital signal come out the ass end at a level that uses plenty of digital bits to work well and sound well, while still leaving enough headroom for the dynamic range.
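
To put rough numbers on that calibration relationship, here's a quick Python sketch - the +4dBu = -18dBFS figures are just an assumed example calibration, not the spec of any particular converter:

Code:
# Sketch of the calibration relationship: the converter decides how an
# analog dBu level maps onto the dBFS scale. Assumed calibration only.
CAL_DBU = 4.0      # analog reference level (0VU)
CAL_DBFS = -18.0   # where that reference lands digitally (assumed)

def dbu_to_dbfs(level_dbu):
    """Map an analog level in dBu to dBFS for the assumed calibration."""
    return CAL_DBFS + (level_dbu - CAL_DBU)

print(dbu_to_dbfs(4.0))    # -18.0 -> line level leaves ~18dB of headroom
print(dbu_to_dbfs(22.0))   #   0.0 -> this assumed converter clips at +22dBu
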
bkkorneker said:
Just set it up so your analog input (mic pre/whatever) isn't clipping over zero, and whatever it comes up as in your digital DAW... it... is... what... it... is!!!
It's really pretty much that simple. Pump the right levels into the analog side of the converter, let the converter do its thing with little or no gain change after conversion, and pretty much the right levels will come out the digital side (with minor changes needed only to accommodate those signals with a crest factor that exceeds the headroom of the converter).
mixsit said:
I was looking for a straight ahead ref. to 'no loss of resolution' at lower digi levels
All one has to do is some simple math. There are 6 decibels per digital bit of resolution. If one is recording at 24 bits, that's 6 x 24, or 144dB of dynamic range to work with. Even if we take off the lowest bit as noise (as many recommend), that leaves 138dB of dynamic range to play with.

Now, 138dB is more than enough. Even if the first bit represented absolute silence in an anechoic chamber, 0dBFS would be 138dB higher, which is beyond the pain threshold of the average human ear and can easily cause deafness.

Add to that the fact that it's extremely rare - especially on the home recording level - to have an upstream recording chain with a composite dynamic range of more than about 110dB, even with good gain staging on the part of the engineer. That means that in the 24-bit digital domain, the usable dynamic range is some 28dB wider than the best average real-life signal going into it. You could lop off 4 bits (24dB) and still have enough "resolution" (a lousy word, but we'll run with it for this purpose) to cover it.
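
If anyone wants to sanity-check that arithmetic, here's a quick sketch using the same rough 6dB-per-bit figure:

Code:
# Rough dynamic range arithmetic: ~6dB per bit, minus the lowest bit as noise.
def usable_dynamic_range_db(bits, noise_bits=1):
    return 6.0 * (bits - noise_bits)

print(usable_dynamic_range_db(24))        # 138dB usable at 24 bits
print(usable_dynamic_range_db(24) - 110)  # ~28dB of spare room over a ~110dB analog chain
print(usable_dynamic_range_db(24 - 4))    # still 114dB after lopping off 4 bits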

G.
 
Tim Gillett said:
I think the old advice is as good as it gets. Track as high as you can but don't clip.
I understand it's been addressed already, but that advice is certainly old - it was just never "good".

Tracking as hot as you can without clipping (depending on the front end chain) causes noise, adds distortion, reduces clarity and focus, throws the S/N into chaos, *spectrally* changes dynamics and *dynamically* changes the spectrum. "All things bad" can come from tracking too hot. The system wasn't designed to use up all the headroom - It was designed to keep *more* headroom intact. Not enough IMO (I tend to track around -24dBRMS or lower). On top of that, it slaps you in the face once again by making you turn the levels down considerably for mixing.

Doesn't make an awful lot of sense, does it...
 
Massive Master said:
I understand it's been addressed already, but that advice is certainly old - it was just never "good".

Tracking as hot as you can without clipping (depending on the front end chain) causes noise, adds distortion, reduces clarity and focus, throws the S/N into chaos, *spectrally* changes dynamics and *dynamically* changes the spectrum. "All things bad" can come from tracking too hot. The system wasn't designed to use up all the headroom - It was designed to keep *more* headroom intact. Not enough IMO (I tend to track around -24dBRMS or lower). On top of that, it slaps you in the face once again by making you turn the levels down considerably for mixing.

Doesn't make an awful lot of sense, does it...

Massive, I'm sorry, but "not clipping" is "not clipping". Whether we peak at -30dBFS, or at 0dBFS and no higher, it amounts to the same thing in sonic terms. No clipping.

Any decent setup will not introduce any more distortion or noise if peaked at just below clipping than if it had peaked at -30dBFS.
Exceptions to this may well be tube amps or analog tape, which can gradually introduce more distortion with increasing gain, at least beyond a certain point. Interestingly, they are the "old" technology, and yet you call my argument "old".

Headroom is headroom. Or should we have competitions like "I can tell that vocal was tracked with only 5dB of headroom. It would have sounded better with 20dB of headroom"...

We call it "headroom" because it is there to be used if needed. There is no prize for having not used it up once the tracking is done.

Tim
 
Then you can decrease the gain to make sure everything fits under 0dBFS.

G.
Thanks Glen, I couldn't have said it better myself. Can you please pass this on to Massive Master? It might sound better coming from you...

Tim
 
Tim Gillett said:
Thanks Glen, I couldn't have said it better myself. Can you please pass this on to Massive Master? It might sound better coming from you...
John knows what he's talking about; I have nothing to school him on here.

The quote you took out of my post is being used so incredibly out of context that it makes me feel like you simply don't understand - or worse, choose to ignore - what we have tried to explain. That quote was referring to turning the gain down when the crest factor is wider than the headroom designed into the converter. In fact, this is something that John even implied was a problem when he said he thought that the headroom built into standard converter calibration wasn't enough. He knows the score.

What I said was that occasionally one may actually have to turn the input gain down because the dynamics of the audio signal might actually be wider than the available headroom in the converter at 0VU. Or, if you wish: if you run 0VU RMS into the converter, the peaks will clip, so one needs to turn down the input gain to fit the signal in. That is NOT AT ALL the same thing as saying that one should record as hot as possible without clipping. It doesn't even come close.
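
To put numbers on that crest-factor case, here's a little sketch assuming (purely for illustration) a converter with 18dB of headroom above 0VU:

Code:
# How much to pull the input gain down when the signal's crest factor
# (peak minus RMS) is wider than the converter's headroom above 0VU.
HEADROOM_DB = 18.0   # assumed converter headroom above 0VU

def input_trim_db(crest_factor_db, safety_db=1.0):
    """dB of input gain reduction needed so peaks clear 0dBFS."""
    overshoot = crest_factor_db - HEADROOM_DB
    return max(0.0, overshoot + safety_db)

print(input_trim_db(12.0))   # 0.0 -> peaks fit; run the converter at 0VU as designed
print(input_trim_db(22.0))   # 5.0 -> spiky source: back the input down ~5dB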

G.
 
You make arguments against your own points, Tim. The original topic was loosely based around digital peak levels. You say no clipping is no clipping and no distortion is no distortion... Then you say to track as close to 0 as possible in the digital realm. Well, if you put those two statements together, then you are contradicting yourself. Tracking with peaks that close to 0 IS adding distortion; it's just that it is coming from the analog front end and not from the converter itself. I guess we would be lucky at that point, though, because according to you the converter would be running at its sweet spot and would faithfully reproduce the screwed-up signal that was just sent to it.

I also disagree that any of this is off topic. The actual first thing in this topic refers to tracking at -18. My statements served to clarify to the original poster that -18, as used in previous conversations on this board, was not intended to be the peak value but the average value. In addition, those values actually coincide much more closely with the quotes from the article in question than your "track as hot as possible" approach does. The time of the ADAT was then and not now (thank god). Digital equipment has changed and evolved, and as a result so has the approach. The real lesson here should be to properly gain stage and not worry about the converter. Keep your front end running as designed and the back end will run as designed too. This is partly why converters do not need input trims: they are already designed to run properly with the rest of the system. THIS is why I felt your advice was both incorrect and wrong. It isn't personal, but advice like this is given far too often - usually by people who have no idea why they say it, but just regurgitate things they hear that often come from bad sources.
 
Tim Gillett said:
So which part of "don't clip" do you not understand?
It's not the "don't clip" part we have a problem with:
Tim Gillett said:
Track as high as you can but don't clip... The truth is, a signal that peaks at 0dBFS and goes no higher is the best level.
It's the idea of basically peak normalizing a raw digital track in real time - which is what you originally espoused - that we have issues with.

I don't understand why you're getting so defensive about this, Tim. Honestly. It's not a personal attack on you in any way, shape or form. Take a deep breath and relax, bud :).

G.
 
I think it's safe to say that your time is better spent worrying about the analog side of things first.

Don't clip anything at the input or the output, and do what you can to avoid having to use extreme settings (understanding that exceptions need to be made in extreme circumstances - i.e., tracking mouse farts with old ribbon mics, etc.). This will keep distortion and noise to a minimum and keep the front end of things within its optimal settings.

From there, assuming you've got the ability to switch between +4 and -10 on your digital input, you have some room to finesse things.

But I wouldn't go stressing out too heavily over how many bits you're using or not using. Just give yourself some room, don't clip anything and you should be good to go. Don't ignore or underestimate or under-utilize the luxuries that advances in digital recording technology have afforded us over the last 10 years.



 
If it's true that the devices work best at line level, then recording "as close to clipping" is obviously not the best way to do it.

Either way, I don't hear a difference, in the tracks or in the final mix, except for maybe a little extra noise if recording too hot.
 
I think one thing that needs to be looked at here is the fact that it is not at ALL uncommon for the converter's electronics themselves to distort when coming near where they would write 0dBFS.

One way to see this, though it won't always tell you the whole story, is to send a signal generator into your ADC while watching its level in a DAW. Unless it's TOTAL crap, it'll stay pretty linear, with a +1dB adjustment at the generator meaning a 1dB rise on your DAW's meter... As you get to around -6dBFS, though, you might see it start taking more input to get the same gain change (set the generator to increment in tenths of a dB or finer to see this best).
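
Here's a minimal offline sketch of that test, with a tanh soft-clip standing in for a converter whose input stage compresses near full scale - the drive value and the nonlinearity itself are just assumptions for illustration, not a model of any real interface:

Code:
import numpy as np

fs = 48000
t = np.arange(fs) / fs
drive = 1.2   # arbitrary soft-clip strength for the stand-in input stage

def daw_meter_dbfs(gen_level_dbfs):
    """Peak a DAW meter would show for a 1kHz sine at the given generator level."""
    amp = 10 ** (gen_level_dbfs / 20.0)
    sine = amp * np.sin(2 * np.pi * 1000 * t)
    converted = np.tanh(drive * sine) / drive   # ~unity gain for small signals
    return 20 * np.log10(np.max(np.abs(converted)))

# Step the generator up in 1dB increments; near the top, each 1dB step
# buys less and less on the meter - exactly the nonlinearity described above.
for level in range(-12, 1):
    print(f"gen {level:+3d} dBFS -> DAW {daw_meter_dbfs(level):+6.2f} dBFS")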
 
I think the old advice is as good as it gets. Track as high as you can but don't clip.
Once we start giving out numbers like -18dBFS or -10dBFS, some will think that it's some sort of magic figure that results in a perfect track. Give people the reason behind the practice, not just the practice itself.

The truth is, a signal that peaks at 0dBFS and goes no higher is the best level, theoretically anyway, for tracking at minimum converter noise.
The problem is, in a live recording session, arranging that situation is almost impossible. Hence the safety margin.

But the size of that margin people will always disagree on. It's only ever a guesstimate as to how much higher the signal might go, and that varies greatly depending on so many factors.

Maybe we need a crystal ball plug-in which searches for the peak level before the talent has even stepped up to the mic. Now that would be some tricky software.

Tim
Here is my unedited original post. Show me where I said to even attempt to peak normalize in real time (as opposed to getting good S/N on the track), however impossible that is, and however pointless it would be, considering the DAW can do it perfectly with hindsight - if it is needed at all.

Now, read what I said about the need for a safety margin (i.e. headroom in live tracking) and the very good reason for it (we can't predict the future). Read what I said about the size of that margin being something different people might disagree on in terms of exact numbers, but that the underlying practice of allowing headroom is solid and uncontested. The exact numbers are secondary. Understanding the principle is the key.

Other issues raised by others, such as inaccurate metering or poorly matched gear which clips before 0dBFS, are side issues irrelevant to the main enquiry made by the guy who started the thread. If your gear clips or distorts at anything less than 0dBFS, why not get it fixed? OTOH, if you can live with that, cool. That's your choice.

I choose to have a system that clips at 0dBFS and not less. I guess I'm just weird.

Tim
 
I think one thing that needs to be looked at here is the fact that it is not at ALL uncommon for the converter's electronics themselves to distort when coming near where they would write 0dBFS.

One way to see this, though it won't always tell you the whole story, is to send a signal generator into your ADC while watching its level in a DAW. Unless it's TOTAL crap, it'll stay pretty linear, with a +1dB adjustment at the generator meaning a 1dB rise on your DAW's meter... As you get to around -6dBFS, though, you might see it start taking more input to get the same gain change (set the generator to increment in tenths of a dB or finer to see this best).
That's an interesting twist, pipe, and it raises a couple of follow-up questions, if I may press you for more info...

1.) Define "not at all uncommon". Do you have any good numbers or at least guestimates on just how common this may be?

2.) How bad does that non-linearity get? Are we talking a variance of one dB over the last 6dB? More? Less?

3.) Based upon your description of the problem, it sounds like there is accurate conversion at +4dBu, but as the converter approaches its maximum operating level, the conversion becomes non-linear. How exactly does that affect the old standby method of determining the calibration of a converter based upon its maximum voltage spec?

For example, if a box is rated at (for example) a maximum level of +24dBu, the general rule of thumb is that the device is calibrated so that +4dBu (0VU) converts to -20dBFS. Does this remain true with the non-linearity you describe? And if so, does that mean that the box will never truly reach 0dBFS? Or does it mean that +24dBu does equate to 0dBFS, but the calibration at 0VU will be something higher than the math implies?
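
For reference, the rule-of-thumb arithmetic behind that question is just this (the +24dBu figure is only the example spec, not a claim about any specific box):

Code:
# Rule of thumb: a converter's maximum input level spec tells you where
# line level (+4dBu / 0VU) lands on its dBFS scale, assuming max = 0dBFS.
def zero_vu_in_dbfs(max_level_dbu, line_level_dbu=4.0):
    return -(max_level_dbu - line_level_dbu)

print(zero_vu_in_dbfs(24.0))   # -20.0dBFS for a +24dBu box
print(zero_vu_in_dbfs(22.0))   # -18.0dBFS for a +22dBu box
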
Tim Gillett said:
Show me where I said to even attempt to peak normalize in real time.
The notion "track as hot as you can without clipping" describes a process that has the same same effect in tracking that a peak normalization would have after the fact. They both bump the overall volume of the track in a linear fashon to a point where the highest peak just falls under 0DBFS. You may not have intended to describe a real-time peak normalization, but both of those ducks walk, quack and smell exactly the same.

Look, Tim, I'll grant you that in the second half of your OP you took back half of what you said in the first half. But when one starts out with what sounds for all the world like the thesis of the post, saying verbatim, "I think the old advice is as good as it gets. Track as high as you can but don't clip", you can understand how we might miss how you then try to explain, through a few general caveats, that in fact the old advice actually isn't as good as it gets.

And even taking your entire post, caveats and all, it still doesn't equate to recording what comes out of the converter with no digital gain boost whatsoever, which is what the rest of us are describing as the cleanest route to take. You are still implying that, even with a safety factor, boosting digital gain to get the most bit usage without clipping is the way to go. We have been trying to explain why that is not the case.

Maybe once your rep strength moves off of 666, you'll see the differences in the devil in the details ;) :D.

G.
 
It all depends on the singer. Riding the vocal is fine to make sure you 1) don't clip loud passages of the song, and 2) don't record too low and get a taste of the noise floor. But if a singer knows/learns to back off the microphone during loud passages, this eliminates the need to ride the fader (they ride the level on their end).

What you want is a singer who backs off during loud parts (or yells) so you capture the ENERGY (apparent volume, not actual volume) of the performance.

So coach your singers to use proper mic technique!
 
I give up, Tim, you win. Track as hot as you can without clipping and we will just pretend that your analog front end isn't suffering and sputtering to keep up. Like Glen said, the biggest problem that I see is your initial bold statement that plainly conveys bad practice.

Think of it like a car maybe... Just because you get more power at a high RPM certainly does not mean that everything should idle there.

Or maybe think of it like a camera. Say we have an image we need to capture on camera. Tracking hot is like using a 10 megapixel camera to take a picture of something that is blurry, out of focus, off center, etc. The 10 megapixel camera will take a beautiful picture of something that looks like shit. The other option available to this photographer, though, may be an image that is well focused, with good color depth and spacing, etc. Now if this photographer took that picture with, say, a 9.5 megapixel camera, which picture would come out better? Which would be more useful? Technically speaking, the 10 megapixel image is of higher resolution. The downside, though, is that the highest-res image here is of a garbled subject. The 9.5 megapixel image, while technically lower in resolution, was able to capture a beautiful subject and maintain 95% of the image quality that the 10 megapixel camera did.

To make this totally clear: the nasty image is what happens when you drive your front end too hard just because you felt the need to get every last teeny fraction of a bit out of your converter. The beautiful image is what happens when you take care at your front end and utilize proper gain staging. Even though you technically had to capture it at a "lower quality", the end result actually ended up being of far higher quality to the end user, because the slightly reduced quality of the conversion process actually captured a much higher quality signal from the get-go. So to summarize: tracking to just below 0 on your converter is akin to excellently converting a lacking, lower quality signal. Leaving yourself some headroom by properly gain staging your analog front end nets a still-excellent conversion of a superior source. I know which one I will choose almost every time...

As for this statement, "So which part of 'don't clip' do you not understand?" - the part I do not understand is where you say not to clip the converter, but to track as hot as possible, so in essence go ahead and clip, or at the very least distort, your source signal. Your own statement - since you are adhering strictly and solely to the original topic at hand (which actually is not necessarily the only original topic), which referred to peak levels - actually contradicts itself. That's the part I do not understand.
 
"not clipping" is "not clipping". Whether we peak at -30dbFS, or at 0dbFS and no higher, it amounts to the same thing in sonic terms. No clipping.

Any decent setup will not introduce any more distortion or noise if peaked at just below clipping than if it had peaked at -30dbFS.

Tim, the PSW links that mixsit posted show that this isn't the case, thanks to intersample peaks.

Nika Aldrich's article "The Consequences of Traditional Digital Peak Meters" explains the concept well. It's available here: http://www.cadenzarecording.com/papers.html

As Paul Frindle demonstrates on the PSW thread, the signal peak can exceed the sample peak by as much as 6dB. Which means it's absolutely possible to clip even if your peak meter says you're not clipping.
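
A quick way to see the intersample-peak effect for yourself - a generic oversampling sketch, not Frindle's actual test signal:

Code:
import numpy as np
from scipy.signal import resample

fs, n = 48000, 4800
t = np.arange(n) / fs

# A full-scale sine at fs/4 with a 45-degree phase offset: every sample
# lands on +/-0.707, so a sample-peak meter reads about -3dBFS...
x = np.sin(2 * np.pi * (fs / 4) * t + np.pi / 4)
sample_peak = 20 * np.log10(np.max(np.abs(x)))

# ...but the reconstructed waveform between the samples still reaches 1.0.
true_peak = 20 * np.log10(np.max(np.abs(resample(x, 8 * n))))

print(f"sample peak: {sample_peak:+.2f} dBFS")   # ~ -3.0 dBFS
print(f"true peak:   {true_peak:+.2f} dBFS")     # ~  0.0 dBFS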
 
Tim Gillett said:
Any decent setup will not introduce any more distortion or noise if peaked at just below clipping than if it had peaked at -30dBFS.
That depends on the source. A drum hit peaking at -30dB or -1dB probably won't have any more distortion, but a sine wave will.

Most preamps can handle a transient at +20dBu, but not many of them can do +18dBu RMS without distorting. That's why it's best to stay in the preamp's sweet spot: line level.
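
To put rough numbers on that, here's a sketch of the analog RMS level a preamp has to deliver when the digital peak sits just under 0dBFS, for two very different crest factors - the 0dBFS = +22dBu calibration is just an assumption:

Code:
# Analog RMS level implied by a digital peak level and a crest factor,
# assuming a converter calibrated so that 0dBFS corresponds to +22dBu.
MAX_LEVEL_DBU = 22.0

def rms_dbu_at_peak(peak_dbfs, crest_factor_db):
    peak_dbu = MAX_LEVEL_DBU + peak_dbfs
    return peak_dbu - crest_factor_db

print(rms_dbu_at_peak(-1.0, crest_factor_db=20.0))  # ~ +1dBu RMS: drum hit, easy
print(rms_dbu_at_peak(-1.0, crest_factor_db=3.0))   # ~+18dBu RMS: near-sine, the preamp suffers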

The difference between a good setup and a bad one isn't whether the preamps distort, it's how they distort.

That's one of the reasons why people push the level on Neve preamps: they distort in a really musical way. That goes for most of the sought-after preamps.

The argument about "as hot as you can without clipping" is true from the digital conversion perspective, but without an analog chain feeding the converter, there is nothing to convert. If you screw the pooch on the analog side, quantization noise is the least of your problems.
 
1.) Define "not at all uncommon". Do you have any good numbers or at least guestimates on just how common this may be?G.

A few of the popular Chinese 8-analog-I/O-plus-2-S/PDIF type interfaces have done this in my testing. I've tried a few different methodologies and found the same thing. One more expensive standalone ADAT and TDIF converter got quite a bit of distortion in the last 3dB, though it stayed pretty linear.

2.) How bad does that non-linearity get? Are we talking a variance of one dB over the last 6dB? More? Less?

It's pretty crazy just how much distortion it takes to make something nonlinear in level response, even by 0.1dB.

If you mess around with distortion algos in a DAW, for instance, it takes some pretty serious signal mangling before it really affects the level much.

So I'm not talking about it taking 5dB of input to push through the last 3dB to full scale, but I have seen 4dB, and even 3.2dB was some pretty nasty distortion.

Now, I'm talking sine waves, which I know aren't always relevant, but they can also be the easiest to deal with, so you would think they would be safe all the way up - but that's not what I'm seeing.

That said, the Tascam DA-38 ran it right to the rails with no noticeable distortion, and that was "old" converter tech...

One thing we need to keep in mind, and why I'm a little worried about peaking anywhere NEAR zero, is the absolute level involved.

If we are saying 0VU is -18dBFS, think of how much gain over zero that is! That's asking great classic devices to REALLY operate in a range they weren't exactly optimized for.

3.) Based upon your description of the problem, it sounds like there is accurate conversion at +4dBu, but as the converter approaches its maximum operating level, the conversion becomes non-linear. How exactly does that affect the old standby method of determining the calibration of a converter based upon its maximum voltage spec?

Its maximum input voltage, or its maximum output voltage? It's weird, but I've seen cases of reconstructed waveforms where the DAC could put out a level higher than what its dBFS value should be... I think Nika wrote a paper about this.

For example, if a box is rated at (for example) a maximum level of +24dBu, the general rule of thumb is that the device is calibrated so that +4dBu (0VU) converts to -20dBFS. Does this remain true with the non-linearity you describe? And if so, does that mean that the box will never truly reach 0dBFS? Or does it mean that +24dBu does equate to 0dBFS, but the calibration at 0VU will be something higher than the math implies?

What I'm seeing is that if I have everything at 0VU, then the converter is also sitting at what it considers zero - some of them at -18, some at -12, some at -15. They all seem to be pretty good about that... Even the cheapies are nailing that to within 0.01dB or so, nothing horribly out of whack. But from there, the math doesn't always work: if -18dBFS is 0VU, I'm seeing it take +19 or +20 above that to get 0dBFS to read on the DAW's meters. Frightfully, sometimes even if the converter's lights are showing an over, it still might be 0.8dBFS down in the DAW.
 