I thought it was best to record around -18dBFS

If the user keeps an eye on both, but especially doesn't overcook the fast peak-reading meter level, why isn't that good enough?
Tim
You are right, it is important to emphasize the differences between analog VU metering and digital dBFS metering, not only the difference in ballistics (quasi-averaging vs. peak reading), but also the decibel scales used.

Your quote above is extremely close to the absolute truth, if I understand your meaning correctly. However, I think its emphasis on the peak meter still kind of misses the real fundamental key behind this whole thing, and it can still lead to "improper" gain operation.

SHIFTING THE PERSPECTIVE
The key, IMHO, is in understanding the jump from the VU meter scale to the dBFS meter scale, and the fact that there is a calibration, a specific conversion factor, between the two. Once one understands that 0VU = x dBFS, and that "x" is determined by the individual A/D converter, three important things follow easily:

- that since 0VU = line level, there's a level on the dBFS peak meter that is also equal to line level.

- that if they are pushing good gain into the analog side of the converter, they'll usually get good gain coming out the digital side as well.

- that modern-day converters are purposely designed to set line level on the digital side to strike a proper balance between headroom and S/N ratio, and that this line level is several dB lower on the dBFS scale than most initially imagine.

Put this all together, and it really sums up this way: In most cases the ideal digital recording level is set simply by pushing the ideal line level into the analog front of the ADC, letting the converter do its thing, and recording the natural digital levels it puts out without any further gain modification. The only exception would be if the audio had a huge crest factor exceeding the digital headroom in the converter calibration, in which case we find digital clipping. In that case we simply turn down the gain into the analog front of the ADC the dB or three we may need to make room in the converter.
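To make the conversion-factor idea concrete, here's a minimal sketch. The function name and the -18dBFS figure are illustrative assumptions, not any standard API; check your own converter's spec for its actual calibration:

```python
# Sketch of the 0VU = x dBFS conversion factor (hypothetical -18dBFS
# calibration; real converters vary, so check the manual).
def vu_to_dbfs(level_vu, calibration_dbfs=-18.0):
    """Digital level an analog VU reading lands at, given where the
    converter maps 0VU on the dBFS scale."""
    return calibration_dbfs + level_vu

# Line level (0VU) into the analog front end comes out at -18dBFS:
print(vu_to_dbfs(0))    # -18.0
# Even a +12VU transient still clears full scale by 6dB:
print(vu_to_dbfs(12))   # -6.0
```

In other words: set good analog gain, and the digital level takes care of itself.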

By working from the basic concept of the conversion factor itself, a whole ton of simple slap-one's-head information pops out that makes the whole process quite simple, understandable and de-mystified, while ensuring the best possible tracking levels and gain structure at those final stages in the chain.

LOSING THAT PERSPECTIVE
What you said in the above quote does fit that procedure under one particular reading, but there are other readings of the same statement that are not correct.

That quote still implicitly treats the VU meter (and therefore the analog line level) and the dBFS meter (and therefore the digital levels) as two rather discrete entities, and the connection to optimum line levels on the dBFS side of the equation is not made. It therefore reduces to "watch to make sure you don't clip," with little-to-no reference as to where the rest of the signal should be sitting.

With no line level guidance there, it's a small step to the erroneous concept of "as long as you're not clipping, you're OK, so track as hot as you can without clipping." This idea is really a holdover of an old analog truism related to S/N ratio, but for reasons already discussed in this thread, it doesn't actually apply the same way at all to digital. When we use it, we're left with a gain structure strategy that is not at all the same as what's described from the conversion factor angle.

LOOPHOLES IN GAIN STRATEGY
Additionally, when the line level link is not carried through between the metering systems and they are instead considered separately, the correct idea of adjusting one's final digital level by adjusting the analog signal going into the converter is no longer implied or intuitive. Many folks might be (and, based upon the history of questions on this board, many real folks really are) tempted to reduce any clipping they find by instead pulling back on the digital input levels in their computer interface driver or in their recording software. This may darken the clipping lights on their dBFS meters, but it does not eliminate the actual source of the clipping - namely in the converter itself. The levels will be pulled back in the computer after the clipping has already happened, which is of course too late; the damage has already been done.

And a similar misunderstanding can happen in the other direction: If they see the levels are too low, even based upon a line level RMS reference, they might be - and often are - tempted to rectify that by boosting the digital input gain on the computer. And all that does is boost the overall digital volume, raising the composite noise floor and reducing available headroom for the mixing and mastering processes yet to come. And there is no increase in resolution when boosting it digitally that way, even though we are using more bits; all the digital boost is doing is adding more zeros to the end of the value, not increasing the precision of that value at all. So there are several reasons why that's not a good idea.

AVOIDING THOSE PROBLEMS
All those potential traps are elegantly avoided, and the levels automatically kept on a reasonably optimum track, when one approaches it from the perspective of averaging the analog input signal into the converter at somewhere around its expected line level, and then letting the converter do its thang naturally.

THE WHOLE -18 THING
And that is where the idea of digitally tracking with an average RMS level somewhere in the mid-to-late negative teens dBFS comes from. Most folks on these boards pick -18dBFS not so much as a magic number, but as a common example that is usually a "close enough" oversimplification for short, easy answers given to those with hobbyist-level recording interests.

PRO vs. CONSUMER LEVELS
Tim Gillett said:
Still, I think that to generalize about this and make a general prohibition on, say, the top 6dB of digital recording room simply because it may be a problem in a professional environment due to amplifier equipment limitations is unwise. Many users on this forum don't use pro levels at all.
Don't think of it so much as a "prohibition" on the top dBs as an explanation of a technique which renders impotent most needs and desires for intentionally using them.

And it doesn't matter if one is using a converter input designed for +4dBu line level ("pro" or "commercial" levels) or -10dBV ("consumer" levels). Either way the analog side of the converter considers that the designed 0VU line level for the device and the "sweet spot" for signal operation, and either way the converter will still be calibrated to convert that particular line level to its designated digital dBFS level. It's calibrated around the 0VU level for that device input. For example, there are some Soundblaster-class cards out there that convert their expected -10dBV input (0VU, as far as THAT card is concerned) to -18dBFS the same way that a given prosumer or pro interface will convert +4dBu (0VU for THAT unit) to -18dBFS.
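For the curious, those two reference levels correspond to quite different analog voltages; here's a quick sketch (0dBu is referenced to 0.775V RMS and 0dBV to 1V RMS; the -18dBFS figure is just the assumed calibration from above):

```python
def dbu_to_volts(dbu):
    """Convert dBu (re 0.775V RMS) to volts RMS."""
    return 0.775 * 10 ** (dbu / 20)

def dbv_to_volts(dbv):
    """Convert dBV (re 1V RMS) to volts RMS."""
    return 10 ** (dbv / 20)

print(round(dbu_to_volts(+4.0), 3))   # 1.228 V RMS ("pro" 0VU)
print(round(dbv_to_volts(-10.0), 3))  # 0.316 V RMS ("consumer" 0VU)
# Very different voltages, yet each unit's converter maps its own
# 0VU reference to the same digital level (e.g. -18dBFS).
```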

G.
 
Well, I've read nothing in this discussion to change the way I do things. I shall continue to track with the highest peaks at about -6dbfs which usually puts my RMS at about -18 or so. I'm happy with that.
 
The most significant bit has all the other bits to back it up. This is the audio range from -6 to 0 dBFS.

The least significant bit is by itself. The range of choices for dynamics is pretty simple here: either you have audio at ONE LEVEL ONLY, or no audio is present whatsoever. The 16 or 24 bit word length builds on it from there. Each time the word length adds a bit, the available number of splits to register a distinct dynamic level is doubled. 2 bits give you 4 available choices and 12 dB of range to be considered. The audio will sound like crap because there aren't enough variations to render it at an accurate level.

At 16 bits, the overall number of splits possible is simply 65,536. Half of that range lives in the area from -6 to 0 dBFS. As you keep moving down in 6 dB increments, the range keeps getting cut in half.
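That halving can be counted directly. A sketch, following the post's 65,536 figure and treating each successive 6dB band as one bit's worth of values (a simplification that ignores sign conventions):

```python
# Number of sample values whose level falls within each successive
# 6dB band below full scale: every band down halves the count.
def values_in_band(bits, band):
    """band 0 = 0 to -6dBFS, band 1 = -6 to -12dBFS, and so on."""
    return 2 ** bits // 2 ** (band + 1)

for band in range(4):
    print(band, values_in_band(16, band))
# 0 32768
# 1 16384
# 2 8192
# 3 4096
```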


The main excuse that makes any kind of sense for printing hot levels in 16 bit systems is this "dynamic resolution" thing, or whatever.

I'm sorry sir, I'm just not buying this dynamic resolution thing. Say we are dealing with audio at a set bit depth -- 16 bit. The audio is dynamic but hits up to the full scale (0dbFS) and we want to turn it down 6db. So now we gotta fit 65,536 steps of volume changes into the 32,768 possible steps for the range between -12 and -6? Is this really what you are saying? Because it strikes me as rubbish. Hopefully I am misunderstanding what you are trying to say. My understanding is that there are as many amplitude steps (in a given bit depth, say 16) between 0 and -6 as there are between -90 and -96. Only problem is the LSB sounds nasty/unnatural since it is simply on or off. For the digital device to be able to interpret any amplitude range between -96 and "digital black," more bits are needed because the steps will have to be much smaller. And changing this bit depth doesn't screw up your amplitude by packing your 0 to -12 range now into the 0 to -6 range or something, it just extends that LSB deeper towards "digital black" by giving more possible amplitude values below what you already had.
Convince me otherwise. :)
 
Tim, you mentioned this.... "Quite possibly the pre's they are using dont struggle to drive sinewaves right up to 0dbFS (peak) cleanly"

This may explain why you have been looking at it the way you do. If this were true, then your advice would actually be great advice. However, I would like to see one example that meets this criterion at even +10, which is short of 0dbFS by at least 6dB on most any converter out there.
 
Well, I've read nothing in this discussion to change the way I do things. I shall continue to track with the highest peaks at about -6dbfs which usually puts my RMS at about -18 or so. I'm happy with that.
That statement would only be true for signals with a crest factor of 12db. A synth pad for example might only have a crest factor of 3db, which would put the RMS around -9dbfs if the peak were -6dbfs.

That seems to be the point that gets missed. Different sounds have different peak-to-average ratios. So setting your recording levels based solely on peak levels ignores the proper gain staging of the analog side of the chain.
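The peak/RMS relationship is simple arithmetic. A sketch using the crest factors mentioned above (the example material types are illustrative):

```python
def rms_from_peak(peak_dbfs, crest_factor_db):
    """Approximate RMS level given a peak level and the signal's
    crest factor (peak-to-RMS ratio in dB)."""
    return peak_dbfs - crest_factor_db

print(rms_from_peak(-6, 12))  # -18: e.g. typical full-mix material
print(rms_from_peak(-6, 3))   # -9: a dense synth pad
print(rms_from_peak(-6, 18))  # -24: spiky percussive material
```

Same -6dBFS peak, three very different average levels hitting the analog chain.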
 
Funny, I always thought adding a certain number of dBs of lead to a standard VU meter's sensitivity wasn't quite the same thing as gain staging, but there I go. I learn something every day...

Tim
You are still missing the point. The signal needs to average around 0dbVU for proper gain staging. The headroom built into the conversion process is there so you can capture signals with a crest factor of 18db without clipping the converter.

Again, the converter doesn't care what the signal level is (as long as it doesn't clip), the analog chain does. That is why it is important to use proper gain staging, so you don't blow it on the analog side on the way to the converters.
 
For those who have an interface or SIAB where gain staging is largely if not entirely internal and fixed, it shouldn't be an issue, as the manufacturer should have sorted that out. But I agree they may not have, either.

Anyway, hope that clarifies things re the amp distortion issue.

Tim
If $2000-per-channel preamps can't do it, what are the chances that an SIAB with 8 mic preamps (and a 24bit mix buss????!!!!) costing $1000 is going to have these super awesome preamps that will outshine the best of the best?

This is the main reason why those things tend to sound bad.
 
I'm sorry sir, I'm just not buying this dynamic resolution thing. Say we are dealing with audio at a set bit depth -- 16 bit. The audio is dynamic but hits up to the full scale (0dbFS) and we want to turn it down 6db. So now we gotta fit 65,536 steps of volume changes into the 32,768 possible steps for the range between -12 and -6? Is this really what you are saying?


I think what the advice is about is leaving some headroom! You don't often track a signal so flat that you know there will be no excessive transients. It's about leaving headroom. You're not still tracking at 16 bit, are you? You can afford to leave some bits unused in 24 bit (Blasphemy!!! Leave no bit behind :p:confused::p)

Honestly folks, how screwed up is this situation? Why the hell should the person tracking and mixing get freaked out about getting as close to the maximum allowed by the media?? Leave that to a card-carrying member of the mastering mafia!
Just track it clean, avoid clips, and if you want a track to be louder in the mix and you run out of fader, then use the sub-busses! :rolleyes: Put a compressor on every track and on the 2-buss if you must :)! Loud can be achieved without clipping!




:D
:D:D
:D:D:D
 
Different sounds have different peak-to-average ratios. So setting your recording levels based solely on peak levels ignores the proper gain staging of the analog side of the chain.



Well said !

Once again I think people are reading about how the M.E. sets his limiter out to -0.1dBFS and confusing the maximization of the final product with tracking and mixing. You don't have to dance with clips and overs when you're tracking and mixing!!!


That said..... go ahead and use up your precious time by trying to .....USE EVERY DAM BIT!!!!:D

:D
:D:D
:D:D:D
 
Why the hell should the person tracking and mixing get freaked out about getting as close to the maximum allowed by the media??
Some old habits and biases are tough to break. That one goes back to analog tape, where the rule of thumb was to ensure the maximum signal-to-noise ratio got onto the tape.

Now, people still hold on to that idea, even though the usable dynamic range in 24-bit digital recording is more than enough to handle the lesser S/N ratio of the analog chain preceding it, and therefore there's no compelling reason to push the recorded signal level as high as there was with analog tape.

It's not about making the top bits forbidden so much as it is making them mostly unnecessary (in the tracking stage). In most tracking cases there's just no need or advantage to using them.

G.
 
I think what the advice is about is leaving some headroom! You don't often track a signal so flat that you know there will be no excessive transients. It's about leaving headroom. You're not still tracking at 16 bit, are you? You can afford to leave some bits unused in 24 bit (Blasphemy!!! Leave no bit behind :p:confused::p)

Honestly folks, how screwed up is this situation? Why the hell should the person tracking and mixing get freaked out about getting as close to the maximum allowed by the media?? Leave that to a card-carrying member of the mastering mafia!
Just track it clean, avoid clips, and if you want a track to be louder in the mix and you run out of fader, then use the sub-busses! :rolleyes: Put a compressor on every track and on the 2-buss if you must :)! Loud can be achieved without clipping!




:D
:D:D
:D:D:D

Thank you, that was quite useless. You are missing the side-concept that I was trying to seek clarification on. I am fully on board with the 0VU tracking concept. I roll my eyes at YOU, sir. :rolleyes:
 
Some old habits and biases are tough to break. That one goes back to analog tape, where the rule of thumb was to ensure the maximum signal-to-noise ratio got onto the tape.
True - BUT...

*Tape* had about the same operating level as the gear pounding on it. A signal that's hitting tape at 0dBVU (a "hot" tape level) is the same as a signal hitting digital at -18dBRMS (or wherever the converters are calibrated).

The signals some people want to record at would cook right through analog tape (figuratively speaking).

That, PLUS the fact that there's no inherent noise to be concerned with, makes it a slam dunk.
 
What's gonna happen when we move onto 32bit stuff?
I doubt we will. 32 bit float is only used for calculations; 32 bit non-float would have more dynamic range than is possible in the earth's atmosphere. There would be no advantage to having another 48dB worth of dynamic range below the noise floor of any piece of gear you could hook to it, including the converter's self-noise.
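The arithmetic behind that claim, as a quick sketch using the standard ~6.02dB-per-bit rule for linear PCM:

```python
def dynamic_range_db(bits):
    """Theoretical dynamic range of linear PCM: about 6.02dB per bit."""
    return 6.02 * bits

for bits in (16, 24, 32):
    print(bits, "bits ->", round(dynamic_range_db(bits), 1), "dB")
# 16 bits -> 96.3 dB
# 24 bits -> 144.5 dB
# 32 bits -> 192.6 dB (fixed point; far beyond any analog chain)
```

24-bit's ~144dB already dwarfs the noise floor of any analog front end, so the extra 48dB of a 32-bit fixed-point format buys nothing audible.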
 
I'm sorry sir, I'm just not buying this dynamic resolution thing. Say we are dealing with audio at a set bit depth -- 16 bit. The audio is dynamic but hits up to the full scale (0dbFS) and we want to turn it down 6db. So now we gotta fit 65,536 steps of volume changes into the 32,768 possible steps for the range between -12 and -6? Is this really what you are saying? Because it strikes me as rubbish. Hopefully I am misunderstanding what you are trying to say. My understanding is that there are as many amplitude steps (in a given bit depth, say 16) between 0 and -6 as there are between -90 and -96. Only problem is the LSB sounds nasty/unnatural since it is simply on or off. For the digital device to be able to interpret any amplitude range between -96 and "digital black," more bits are needed because the steps will have to be much smaller. And changing this bit depth doesn't screw up your amplitude by packing your 0 to -12 range now into the 0 to -6 range or something, it just extends that LSB deeper towards "digital black" by giving more possible amplitude values below what you already had.
Convince me otherwise. :)
Reg,

Let me try to give it a shot based upon my newfound (hopeful) understanding of the topic.

Yes, you are right, that added bits *do* extend the dynamic range downwards. That remains the same as we have understood all along.

However, extra bits can also increase the precision of the loudness measurement. Notice I said precision, and not resolution. While the two are on the surface the same (one increases precision by increasing resolution), I find the term "precision" to be more conducive to understanding what's actually going on. Someone please correct me if my interpretation is way off. Let's start with these two statements:

1.) Any given bit in the word is only worth a 6dB "step in range" when it is the most significant bit in the word. By this I don't mean the first bit in the word, but rather the first "1" in the word (any preceding 0s have no value other than as placeholders identifying the position, and therefore the value range, of the first "1".)

2.) All bits following that most-significant "1" have an exponentially decreasing value, giving them an exponentially increasing precision, much like the digits after the decimal point in a real-number decimal value. The values of the following digits are added to the base range value of the MSB, refining its value. The more digits following the MSB, the finer the precision of the refined value. This precision is what's been previously referred to as "resolution".

Now, let's look at all that a bit further:

For an 8-bit word, here is a table showing the base value for each bit if it should happen to be the MSB of the word (the same principle extends on to a 24-bit word; I'm just saving myself some typing and some math ;) ):

1 ----- -6dBFS
2 ----- -12dBFS
3 ----- -18dBFS
4 ----- -24dBFS
5 ----- -30dBFS
6 ----- -36dBFS
7 ----- -42dBFS
8 ----- -48dBFS

So an 8-bit word actually has a full potential dynamic range of 48dB, which is 6dB per bit (and a 16bit word has a range of 96dB - 48x2 - and a 24bit word has a potential range of 144dB - 48x3.). That holds true as we know it.

Therefore an 8-bit word with a value of 00010000 - where the 4th bit is lit up - represents a volume of -24dBFS. A value of 00011111, however, has a value of -24dBFS +3dB + 1.5dB + 0.75dB + 0.375dB. This adds up to a total value of -18.375dBFS. As the next highest value is 00100000, which equals -18dBFS, that means that at that point in the scale there's an implied resolution of 0.375dB.

Take that same example up one bit, and see the difference. 00111111 equals -12.1875dBFS whereas 01000000 equals -12dBFS. That's an implied resolution of 0.1875dB, or twice the implied resolution. In fact, since the precision doubles with every bit below the MSB, the apparent "resolution" does as well.

As far as your example of dropping the volume, it's not a matter of having to fit ten pounds of "steps" into a five-pound bag as you described. It's rather simply that the new, lower value is resolved with one step less precision than the previous, louder value was.
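As a cross-check, the exact step sizes between adjacent sample values can also be computed in linear terms. A sketch (the exact numbers come out somewhat smaller than the additive-dB approximations in the worked example, but the doubling-per-bit pattern is identical):

```python
import math

def step_db(n):
    """dB gap between adjacent integer sample values n and n+1."""
    return 20 * math.log10((n + 1) / n)

# Near the 8-bit values from the worked example:
print(round(step_db(31), 3))   # 0.276 -- just below 00100000
print(round(step_db(63), 3))   # 0.137 -- just below 01000000
print(round(step_db(127), 3))  # 0.068 -- just below full scale
```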

Does that help out at all?
massive master said:
True - BUT...

*Tape* had about the same operating level as the gear pounding on it. A signal that's hitting tape at 0dBVU (a "hot" tape level) is the same as a signal hitting digital at -18dBRMS (or wherever the converters are calibrated).

The signals some people want to record at would cook right through analog tape (figuratively speaking).

That, PLUS the fact that there's no inherent noise to be concerned with, it's a slam dunk.
All true, and all due to the excess dynamic range available with the 24bit digital format. Which is why I included that 2nd paragraph behind that quote ;) :).

G.
 
Grrr, let us delete our own posts!!! :mad:
Better to bite yer tongue! Mamma said, if you can't say something nice!!! :p:p:p:D:D

:D
:D:D
:D:D:D
 
You are still missing the point. The signal needs to average around 0dbVU for proper gain staging. The headroom built into the conversion process is there so you can capture signals with a crest factor of 18db without clipping the converter.

Again, the converter doesn't care what the signal level is (as long as it doesn't clip), the analog chain does. That is why it is important to use proper gain staging, so you don't blow it on the analog side on the way to the converters.

So the issue of the analog driver amp distorting before the converter reaches 0dbFS is NOT a gain staging problem?

Back to Bob Katz: what is his recommendation for the problem of driver amps distorting on some signals before 0dbFS is reached? Don't use the last few dBs, or the last bit? As a temporary solution, yes.
His recommended solution? Install a driver amp with a higher undistorted output. More analog headroom.

I agree with him. I think it's called gain staging.

Tim
 