Metering in the digital domain

For proof, see the article I wrote for GC Pro's Audio Solutions magazine:

[url]Audio Solutions magazine[/url]

... I also prove the same points with null tests in my hour-long [url]Audio Myths[/url] video ...

See my article and video linked above for more info on how to test this.

... maybe 1.5 million views on YouTube ... [url]A Cello Rondo[/url]

Newer render with much better quality:

[url]A Cello Rondo - HD Version[/url]


[url]Ethan's Audio and Music Bio Page[/url]

You would do well to read the article I linked to previously, and try to understand it. For extra credit, read and understand my [url]Perception[/url] article, and watch my hour-long [url]AES Audio Myths[/url] video, which proves these points.

... buy my book [url]The Audio Expert[/url] and read it all the way through.

... My book, which explores all of this stuff, was accepted by Focal Press ...

--Ethan


seems borderline spamming to me. :cool:

just sayin...
 
Hmmm, I think there are two divergent points of view here, both of which may have some elements of truth.

First off, on his main argument, I'm with Ethan on this. Everything else being equal, there will be no difference in the sound of a recording with peaks up near 0dBFS and one 12 or 18dB lower. However, in the real world there are other things that come into play (or at least MAY come into play).

The most compelling reason for lower levels is the avoidance of digital clipping. I hope we all agree that digital clipping is nasty, ugly stuff with no redeeming social qualities! I like to set my levels so the peaks are far enough below 0dBFS that, even when performance adrenalin raises the live levels, my recordings are safe. How much below 0dB? It depends on the performer and the musical style--but in all cases, I'd rather have slightly lower levels than a great take ruined by clipping.
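For anyone who wants the arithmetic behind "how much below 0dB": headroom in dB maps to a linear amplitude ratio of 10^(dB/20). A minimal Python sketch (the helper names here are just illustrative):

```python
def db_to_linear(db):
    """Convert a dB figure to a linear amplitude ratio."""
    return 10 ** (db / 20)

def headroom_ratio(peak_dbfs):
    """How much louder a peak can get, as a linear factor, before 0dBFS."""
    return db_to_linear(-peak_dbfs)

print(headroom_ratio(-12))  # ~3.98x -- peaks at -12dBFS can grow 4x before clipping
print(headroom_ratio(-18))  # ~7.94x -- peaks at -18dBFS can grow 8x before clipping
```

So 12dB of safety margin means performance adrenalin can push levels to four times the soundcheck amplitude before anything clips.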

Second, there's practicality. If you're going to end up with 10 or 20 or 30 tracks to be mixed together and all of them are up near 0dBFS, then, in order to mix, you have to pull them all down to avoid clipping on the mix anyway. (I'll ignore for the moment mixing in 32 bit floating point where it matters a lot less since you can just normalise downwards). If you have to pull down all your levels anyway, why not start a bit lower?
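To make that floating-point aside concrete, here's a minimal sketch (assuming NumPy; the track data is synthetic) of summing hot tracks on a float bus and then normalising downwards -- it's only a scale factor, so nothing is lost:

```python
import numpy as np

rng = np.random.default_rng(0)
# 20 synthetic tracks, each peaking near full scale
tracks = [rng.uniform(-0.9, 0.9, 48000).astype(np.float32) for _ in range(20)]

mix = np.sum(tracks, axis=0)        # float bus: the sum can far exceed 1.0 unharmed
print("bus peak:", np.max(np.abs(mix)))

gain = 0.9 / np.max(np.abs(mix))    # pull the whole mix back below full scale
safe_mix = mix * gain               # pure scaling: the mix itself is untouched
print("output peak:", np.max(np.abs(safe_mix)))
```

A fixed-point bus would have clipped at the summing stage instead, which is exactly why starting a bit lower is the safer habit.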

Third, if you ever use outboard effects with analogue inputs, then you need to feed out of your system at sensible analogue levels. Ingrained from my broadcast days, I'll align things with a constant tone at -18dBFS on my DAW and set levels so this is equal to 0dBu in the analogue world (and I do the same in reverse when setting up levels for recording from my mixer).
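For reference, the alignment maths is simple: 0dBu is defined as 0.775 volts RMS, so lining up -18dBFS with 0dBu puts digital full scale at +18dBu. A quick sketch:

```python
def dbu_to_volts(dbu):
    """0 dBu is defined as 0.775 V RMS."""
    return 0.775 * 10 ** (dbu / 20)

print(dbu_to_volts(0))   # 0.775 V RMS -- the -18dBFS alignment tone
print(dbu_to_volts(18))  # ~6.16 V RMS -- what 0dBFS corresponds to
```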

Now, the dodgy bit (pun intended):

I've never had a plug in that objected to levels approaching 0dBFS--but, if some claim to have ones that work that way I can't argue with them if I haven't tried the specific plug in. MAYBE this could be an issue with certain things that model analogue noise (or that are just badly written).

So, do I believe that, in the digital domain, -18 sounds better than 0dBFS? Nope. Call me a sceptic. However, do I see practical (and measurable) reasons to keep levels comfortably below clipping? Absolutely.

(And I don't see Ethan's contribution as in any way off topic.)
 
The most compelling reason for lower levels is the avoidance of digital clipping.

Of course, setting appropriate levels is a standard part of recording.

I hope we all agree that digital clipping is nasty, ugly stuff with no redeeming social qualities!

Actually, that's another myth. :D I did a test a few months ago where I intentionally clipped a recording of an acoustic guitar in the digital domain by raising the level to be 2 dB above 0dBFS. Then I lowered the volume to match the original for a fair side by side comparison. Yes, it sounds slightly different from the original - not unlike slightly overdriving analog tape - but it's not horrible as some would have you believe. I'll be glad to share the file if anyone cares. Or, better, just do the same experiment yourself in any two-track audio editing program.
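For anyone who'd rather not set the test up by hand, here's a sketch of the same experiment in Python (assuming NumPy and the soundfile package; "guitar.wav" is a stand-in for your own recording):

```python
import numpy as np
import soundfile as sf

x, fs = sf.read("guitar.wav")            # any clean two-track recording

boost = 10 ** (2 / 20)                   # +2 dB, pushing peaks past full scale
clipped = np.clip(x * boost, -1.0, 1.0)  # hard-clip, as a fixed-point path would
matched = clipped / boost                # level-match for a fair comparison

sf.write("guitar_clipped.wav", matched, fs)
sf.write("guitar_null.wav", x - matched, fs)  # the difference: only the clip artifacts
```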

(I'll ignore for the moment mixing in 32 bit floating point where it matters a lot less since you can just normalise downwards). If you have to pull down all your levels anyway, why not start a bit lower?

Well, don't ignore that altogether because it's hugely important and relevant to this discussion. But I agree there's no need to record as close to zero as possible. I've been arguing that for years. Even with "only" 16 bits, the noise floor is 20-30 dB quieter than analog tape. So recording with peaks around -10 or even lower is fine. I object only when people claim that recording at -20 dB sounds "better" than recording closer to zero, because it doesn't. Unless gain-staging is wrong somewhere else in the chain. Recording at conservative levels is mainly for safety and convenience.
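The numbers behind that claim, as a back-of-envelope calculation (the tape figure is a commonly quoted ballpark, not a measurement):

```python
def ideal_converter_snr_db(bits):
    """Theoretical SNR of an ideal n-bit converter: 6.02*n + 1.76 dB."""
    return 6.02 * bits + 1.76

tape_snr_db = 70  # rough figure assumed here for good analog tape

print(ideal_converter_snr_db(16))                # ~98 dB
print(ideal_converter_snr_db(16) - tape_snr_db)  # ~28 dB quieter than tape
print(ideal_converter_snr_db(16) - 10)           # still ~88 dB with peaks at -10
```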

Third, if you ever use outboard effects with analogue inputs, then you need to feed out of your system at sensible analogue levels.

Absolutely. I work entirely ITB, so this doesn't affect me. But outboard gear doesn't enjoy the luxury of 32-bit floating point math, so levels in and out definitely matter.

I've never had a plug in that objected to levels approaching 0dBFS--but, if some claim to have ones that work that way I can't argue with them if I haven't tried the specific plug in. MAYBE this could be an issue with certain things that model analogue noise (or that are just badly written).

Agreed, and I mentioned that earlier in this thread.

(And I don't see Ethan's contribution as in any way off topic.)

Thanks. And thanks for adding useful content to this discussion.

--Ethan
 
Hmmm, I think there are two divergent points of view here, both of which may have some elements of truth.

First off, on his main argument, I'm with Ethan on this. Everything else being equal, there will be no difference in the sound of a recording with peaks up near 0dBFS and one 12 or 18dB lower. However, in the real world there are other things that come into play (or at least MAY come into play).

The most compelling reason for lower levels is the avoidance of digital clipping.

I disagree. The most compelling reason for lower levels is to keep the analogue components in the signal path from producing distortion on input and, in the digital domain, to prevent inter-sample distortion at the DA output. Clip point is not necessarily the point at which distortion will be produced. It may manifest many dB BELOW clip point, depending on the quality of the components.

This article by Thomas Lund sums it up nicely:

www.tcelectronic.com/media/lund_2004_distortion_tmt20.pdf

I hope we all agree that digital clipping is nasty, ugly stuff with no redeeming social qualities! I like to set my levels so the peaks are far enough below 0dBFS that, even when performance adrenalin raises the live levels, my recordings are safe. How much below 0dB? It depends on the performer and the musical style--but in all cases, I'd rather have slightly lower levels than a great take ruined by clipping.

Headroom is not a new idea and translates perfectly to the digital domain, as long as you know how to translate your dB scales and know how all of your digital and analogue gear relate to one another. The 0dBfs/clipping problem becomes moot as soon as you think in these terms. This, after all, was why the K system was developed. It just makes sense.
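For readers who haven't met the K System: each K scale simply pins its 0 VU reference a fixed distance below digital full scale, which is exactly the headroom translation being described. A rough sketch of the mapping (from memory, so verify against Bob Katz's own documentation):

```python
# 0 VU reference points, in dBFS, for the three K-System scales
K_SYSTEM = {"K-20": -20, "K-14": -14, "K-12": -12}

for scale, ref_dbfs in K_SYSTEM.items():
    print(f"{scale}: 0 VU at {ref_dbfs} dBFS, leaving {-ref_dbfs} dB of headroom")
```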

Second, there's practicality. If you're going to end up with 10 or 20 or 30 tracks to be mixed together and all of them are up near 0dBFS, then, in order to mix, you have to pull them all down to avoid clipping on the mix anyway. (I'll ignore for the moment mixing in 32 bit floating point where it matters a lot less since you can just normalise downwards). If you have to pull down all your levels anyway, why not start a bit lower?

I agree but what about the distortion that might have been introduced by using preamps with inferior components that introduce distortion at 6 or 9dB below clip point? That distortion will still be there no matter how much you turn it down and at 20 or 30 tracks, the cumulative effect will kill your recording. You'll no doubt get harmonic buildup somewhere in the spectrum no matter what your digital levels are.

Third, if you ever use outboard effects with analogue inputs, then you need to feed out of your system at sensible analogue levels. Ingrained from my broadcast days, I'll align things with a constant tone at -18dBFS on my DAW and set levels so this is equal to 0dBu in the analogue world (and I do the same in reverse when setting up levels for recording from my mixer).

A sensible view. Can't argue with that.

Now, the dodgy bit (pun intended):

I've never had a plug in that objected to levels approaching 0dBFS--but, if some claim to have ones that work that way I can't argue with them if I haven't tried the specific plug in. MAYBE this could be an issue with certain things that model analogue noise (or that are just badly written).

Exactly, so, if you don't have the time to null test every plugin (like most working engineers) the best thing to do is just keep your levels conservative and get on with the job which, of course, mostly entails LISTENING. It's amazing how the more listening experience you have, the less you need to know the results of a null test. Yes, they're interesting but for fuck sakes, who's got the time and the energy to test ALL of their plugins this way? How exactly will knowing the results make my mix any better? Did engineers of old refuse to work with an 1176 until they knew the results of a null test? Fuck that.

So, do I believe that, in the digital domain, -18 sounds better than 0dBFS? Nope. Call me a sceptic. However, do I see practical (and measurable) reasons to keep levels comfortably below clipping? Absolutely.

Sure, if you record something conservatively (making best use of your analogue gear) it'll be more robust under processing in that range, especially in floating point or double precision. HOWEVER. If you have pushed your recording levels into producing distortion on input at the analogue stage, and then proceed to slam your levels ITB and within plugins that MAY or MAY NOT handle overs very well, you are putting yourself in a situation where the risk of compromised sound quality could become a reality.

Rather negate that shit altogether by practicing conservative levels and just LISTENING. You'll save yourself hours of speculation, null tests, and fidelity paranoia and you'll be able to just get on with your work.

Over and out.

Cheers :)
 
Just to add, this is an interesting point:

Thomas Lund said:
The analog level of a sine wave at fs/6 (8 kHz when sampling at 48 kHz) can be up to 1.25 dB above the peak level in the digital domain, while at fs/4 the discrepancy can be up to 3 dB.

Mull on that for a while.
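To make those numbers concrete, here's a sketch of the worst case (assuming NumPy and SciPy): a "legal" sine at fs/4 whose samples all sit exactly at full scale, yet whose reconstructed waveform peaks 3dB higher:

```python
import numpy as np
from scipy.signal import resample  # FFT-based reconstruction/oversampling

N = 4096
n = np.arange(N)
x = np.sin(2 * np.pi * n / 4 + np.pi / 4)  # fs/4 sine sampled at its 45-degree points
x /= np.max(np.abs(x))                     # sample peaks now read exactly 0dBFS

y = resample(x, 8 * N)                     # 8x oversampling approximates the DA output
print(20 * np.log10(np.max(np.abs(y))))    # ~ +3.0 -- the intersample overs Lund describes
```

The meters say 0dBFS; the reconstruction filter in the DA has to swing 3dB hotter.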

Cheers :)
 
Actually, that's another myth. :D I did a test a few months ago where I intentionally clipped a recording of an acoustic guitar in the digital domain by raising the level to be 2 dB above 0dBFS. Then I lowered the volume to match the original for a fair side by side comparison. Yes, it sounds slightly different from the original - not unlike slightly overdriving analog tape - but it's not horrible as some would have you believe. I'll be glad to share the file if anyone cares. Or, better, just do the same experiment yourself in any two-track audio editing program.

Well, in this we have to disagree in terms of perception. I find even 2dB of digital clipping a fairly unpleasant sound but that's just my opinion. However, it's also worth saying that the effect gets worse rather rapidly as you go above your 2dB test point. In any case, I don't think anyone here is arguing that digital clipping is a "good thing" or that proper gain staging to avoid it shouldn't be one of the absolute basics of recording.

Well, don't ignore that altogether because it's hugely important and relevant to this discussion. But I agree there's no need to record as close to zero as possible. I've been arguing that for years. Even with "only" 16 bits, the noise floor is 20-30 dB quieter than analog tape. So recording with peaks around -10 or even lower is fine. I object only when people claim that recording at -20 dB sounds "better" than recording closer to zero, because it doesn't. Unless gain-staging is wrong somewhere else in the chain. Recording at conservative levels is mainly for safety and convenience.

I only disregarded 32 bit float operations for the sake of keeping the discussion in this thread simple. In my own "real world" I use a DAW that works natively in 32 bit float. In any case, that was just a detail--I fully agree that, as long as clipping is avoided, I can't hear a quality difference between -20 and -1. Actually, that's not quite true. If I have to apply significant gain to the -20 signal, I may well start to hear background noise (depending on what was used earlier in the chain).

Absolutely. I work entirely ITB, so this doesn't affect me. But outboard gear doesn't enjoy the luxury of 32-bit floating point math, so levels in and out definitely matter.

I'm in a slightly "hybrid" position on this. In my studio, I'm 100% ITB. However, much/most of what I do is sound for use in live theatre--so very often my mixes (even sometimes stems) are played back live which implies at least some analogue stages.



Agreed, and I mentioned that earlier in this thread.



I disagree. The most compelling reason for lower levels is to keep the analogue components in the signal path from producing distortion on input and, in the digital domain, to prevent inter-sample distortion at the DA output. Clip point is not necessarily the point at which distortion will be produced. It may manifest many dB BELOW clip point, depending on the quality of the components.

No disagreement that analogue levels are important. If, to satisfy a quest for your digital recordings to peak at some silly-high level, you have to push the analogue input stages too hard, then clearly that's a bad thing. However, that's not the sort of situation I was referring to. At least on my set up I could achieve overly high digital levels without pushing my analogue stages at all. No, I obviously don't do that--but, for the sake of my previous post, I was assuming sensible gain staging throughout the system. Indeed, gain staging is something that I get pretty OCD about--it comes from far too many years in the broadcast industry where we were constantly aligning levels from source to destination via every intervening stage.


I agree but what about the distortion that might have been introduced by using preamps with inferior components that introduce distortion at 6 or 9dB below clip point? That distortion will still be there no matter how much you turn it down and at 20 or 30 tracks, the cumulative effect will kill your recording. You'll no doubt get harmonic buildup somewhere in the spectrum no matter what your digital levels are.

At the risk of sounding cavalier, if somebody has a preamp so poorly made that this is an issue then they should save up for better. Reducing your digital levels to compensate for deficiencies in your analogue gear should be a stopgap only--it isn't a good way of working in the long term. It's a bit like driving your car at 30 mph on the freeway because you know your brakes aren't good enough to do the speed limit. In any case, I was talking more about general principles than trying to compensate for every bit of defective or badly built gear.


Exactly, so, if you don't have the time to null test every plugin (like most working engineers) the best thing to do is just keep your levels conservative and get on with the job which, of course, mostly entails LISTENING. It's amazing how the more listening experience you have, the less you need to know the results of a null test. Yes, they're interesting but for fuck sakes, who's got the time and the energy to test ALL of their plugins this way? How exactly will knowing the results make my mix any better? Did engineers of old refuse to work with an 1176 until they knew the results of a null test? Fuck that.

I basically agree--but my conservatism starts earlier. I work with a relatively small range of hardware (and tend to have investigated the quality before I put it to actual production use) and an even smaller selection of plug ins--but that's just me. However, if I came across a plug in that started to sound bad with levels that should be "legal" I'd probably just stop using it rather than compensate for its shortcomings by adjusting all my levels downwards.

In any case, we're getting to the stage of debating how many angels can fit on the head of a pin--or how many dB can fit in an audio file. In the "real world" I think most of us actually work in pretty similar manners and set levels in similar ways. The only thing I take real exception to is the assertion that ALL ELSE BEING EQUAL -20dBFS sounds better than -1dBFS. Yes, there are factors that can make this so and militate towards more conservative levels, especially when considering everything else in the chain. However, purely in the digital domain there should be no difference at all--and my experience of listening confirms this.
 
Preamps are not the be all, end all of the analogue stage. There are numerous components in even a simple mic preamp that can cause distortion at many dB below clip point, and I can GUARANTEE that much of the prosumer stuff out there uses these cheap, inferior components. Distortion is not always immediately evident and can slip past so easily when you're tracking, but when there are 20 to 30 tracks of the stuff, it builds up into something worse.

Converter chips? Most of the time they aren't the source of the bad sound. It's the analogue crap they put before and after them that causes the distortion.

ITB? Yes, a strict 0dBfs signal will sound the same as a lower signal on paper (heh), but when that signal hits the reconstruction stage there are many dangers if (1) there are extraneous intersample peaks, and (2) the analogue components in the DA are not up to scratch. Sadly, this is the case for much of the entry to mid level stuff on the market. Many of these manufacturers use common chips like the Asahi Kasei stuff but cut corners and costs by skimping on the analogue stuff. THIS is the MAIN SOURCE of distortion. And then we have even crappier components to deal with in the consumer market, in MP3 players and CD players. Read that Thomas Lund article. It's very informative on this topic.

Cheers :)
 
What were we talking about....????

I read both pages...I can't figure it out.

:D

I use analog a lot in my hybrid setup..preamps of course, but also track to tape, etc...
When I dump into the DAW, you know, I never even bother to look at the interface/DAW input levels anymore.
About the only thing I do is check to see if the A/D interface is set to +4 or -10, depending on which analog gear is feeding it signals.
All my level settings are done in the front end analog stage, and those I do at the source and at the analog gear it's feeding, partially by analog meter and partially by sound/taste.

When I then dump to DAW, sometimes the DAW levels end up in the -6 range, sometimes in the -18 range from track to track...I just let them fall where they may.
I don't obsess about setting *DAW interface levels* to a specific range. I will NOT turn down or turn up a preamp just to hit a desired digital level. I let the analog front end do the work of setting proper levels.

When I come back out of the DAW to mix OTB, I'll trim my levels at the console as needed, since I may have changed digital levels somewhat while doing edits/comps/etc. and while pre-mixing in the DAW. It's never extreme--more just about balancing levels out between tracks--so when those levels come out of the DAW, they hit my analog OTB gear at about the same analog level as what I had at the front end.
I don't much pay any attention to the DAW levels or obsess where they are hitting.

Of course, for ITB mixing, the DAW levels are important when you start stringing the digital plugs together per track.
 
I find even 2dB of digital clipping a fairly unpleasant sound but that's just my opinion. However, it's also worth saying that the effect gets worse rather rapidly as you go above your 2dB test point.

I agree with all of that. I guess my point is that clipping distortion is the same whether it's in an op-amp circuit or the input stage of a digital converter. Either way the result is a simple flattening of the waveform peaks. Versus the warnings we see all the time claiming that digital clipping is somehow horrible compared to analog clipping.

I fully agree that, as long as clipping is avoided, I can't hear a quality difference between -20 and -1. Actually, that's not quite true. If I have to apply significant gain to the -20 signal, I may well start to hear background noise (depending on what was used earlier in the chain).

The background noise in a recording is almost always dominated by room tone, though with dynamic and ribbon microphones the preamp hiss can dominate. Both of these noise sources are much louder than the noise floor of even 16-bit digital recording.

At least on my set up I could achieve overly high digital levels without pushing my analogue stages at all. No, I obviously don't do that--but, for the sake of my previous post, I was assuming sensible gain staging throughout the system.

Exactly.

At the risk of sounding cavalier, if somebody has a preamp so poorly made that this is an issue then they should save up for better.

This too. Further, the notion that inexpensive gear sounds bad because it's made with "cheap" components shows a serious lack of knowledge about how audio gear is manufactured. Good resistors cost less than 2 cents, even in the relatively small quantities bought by audio product manufacturers. Good capacitors cost more than good resistors, but they're still much less expensive than the salary of the engineers who design the gear. Likewise for all other electronic components. Parts that really do cost more for higher quality are generally things like switches and LCD display panels. (And audio transformers, which most modern gear doesn't use.)

The only thing I take real exception to is the assertion that ALL ELSE BEING EQUAL -20dBFS sounds better than -1dBFS. Yes, there are factors that can make this so and militate towards more conservative levels, especially when considering everything else in the chain. However, purely in the digital domain there should be no difference at all--and my experience of listening confirms this.

You totally get it, and that's all I'm saying too. It is a myth that audio sounds wider or "less congested" etc when recorded at conservative levels. If that really is true for someone's system, then something else is wrong. Why these things are argued about so vehemently escapes me, especially since the belief that lower levels sound better was just disproved with a controlled test!

--Ethan
 
IMO....one of the problems (if not the main problem) that most home rec guys have with digital levels has to do with erroneously confusing monitoring levels with track/bus levels....and also their concern that if there's too much daylight between their signal level and max - "0dBFS" level on the meters, that their mix will fall short of commercial loudness.
So the meters end up dictating where their levels should be.

Not to mention, on top of that, there's the classic "fader-creep" that often occurs during mixing in both analog and digital worlds, but I think that with ITB mixing, especially for the home rec crowd, piling on lots of plugs per track is an all too common occurrence, and that screws up levels easy enough so....you nudge up your other track to compensate - aka "fader creep".

I do it too, but know enough to watch out for it.
On my console I have one of those "big knob" thingies (TC Electronic Level Pilot), and on it I've marked my 75, 80 and 85 dB SPL monitoring points. I'll start the day at the low point, but often end up at the high point, and sometimes up to 90 dB.
Same thing with the console faders...I'll find myself nudging things up...but my solution for that is to use one track or pair of tracks as my reference, and I never nudge them. It's usually the drums, the OH stereo pair.
I set them at unity on the console faders and then adjust all my other tracks in balance with them. If I notice during the session that say...my guitar tracks are sounding much louder than my drums at some point...I look at the faders, and if they have crept up, I'll pull them back along with others, so that the levels are again in balance with my reference OH drum pair...but then I'll reach for that "big knob" and turn up my monitors a notch if I want a slightly punchier listening level.

It is a tug-o-war game, and I go through this every session, but I've set my references and my listening levels, so I always use that to reel things back in.
 
The only thing I take real exception to is the assertion that ALL ELSE BEING EQUAL -20dBFS sounds better than -1dBFS. Yes, there are factors that can make this so and militate towards more conservative levels, especially when considering everything else in the chain. However, purely in the digital domain there should be no difference at all--and my experience of listening confirms this.
Geez, I've missed a lot of this thread. And before anyone thinks I'm talking about anything else -- All things being equal IN DIGITAL should sound exactly the same. I'm not even thinking about digital (although technically, even an AD converter is an analog piece up until the chip).

It was asked somewhere if I've ever actually done the experiment. Yes - More times than I care to mention (and the dozens and dozens - Hundreds - of e-mails I've received from previously frustrated recordists who did nothing more than start tracking at the levels their gear was designed to run at is more than I need).

This is simple stuff -- Analog gear is spec'd at a particular voltage. If anyone thinks you can triple that voltage - and although the circuit doesn't actually fail (clip) it's actually going to sound the same -- Especially with dense material... That's like expecting a car to get the same MPG @ 2kRPM as it does at 6kRPM. Don't get me wrong here -- With some gear, it can actually sound pretty cool (I tend to drive API 312's a little harder than usual - Crane Song's Flamingo can do all sorts of interesting tricks when it's overdriven - But that's the point - It's being overdriven. Not to the point of failure, but certainly past the point of "normalcy").

I admit hearing little difference on Ethan's tracks -- I even admit hearing little difference with certain gear. But stack 30 tracks of raging metal guitars and thick synth pads on "typical" quality gear -- The difference isn't subtle - It's startling.
 
Analog gear is spec'd at a particular voltage. If anyone thinks you can triple that voltage - and although the circuit doesn't actually fail (clip) it's actually going to sound the same -- Especially with dense material... That's like expecting a car to get the same MPG @ 2kRPM as it does at 6kRPM.

I don't see how an engine's gas mileage versus RPM relates to sound cards and converters. Most consumer sound cards have no hardware input volume control, only a software control panel that should always be set to maximum. So the correct way to set record levels is by adjusting the output volume of your preamp or mixer feeding the line input. However, some outboard converters do have input volume controls. I recently bought a Focusrite Scarlett 8i6 which has inputs that accept both mic and line levels. When I did this test I used a separate preamp, which was then split to both of the Scarlett's line inputs at once. The softer input was set to 11 o'clock, and the louder one was raised to 3 o'clock.

My point is that, sure, it's not advisable to turn the input volume all the way down so you need to blast +30 dBm into the input to get a usable record level. But other than such obviously improper gain staging, the analog input stage of a sound card or converter can handle a very wide range of signal levels. In other words, the analog portion will not overload before the digital portion. Again, my point is less about clipping distortion, which is easily measured using sine waves at various static levels. Rather, I address the claim that when recording at low levels, music sounds "wider" and "less congested" and other such terms that can't be verified because the words are vague and imprecise.

I admit hearing little difference on Ethan's tracks -- I even admit hearing little difference with certain gear.

Yes, and the lack of difference is proven by the -50 dB residual after nulling the files. That's one huge feature of null tests. Even if one thinks they hear a difference between two Wave files, a small null residual proves there really is little or no audible difference. It only seems like there's a difference.
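For anyone who wants to run the same check, a sketch of the residual measurement (assuming NumPy and the soundfile package; the file names are placeholders, and the two files must already be time-aligned and gain-matched):

```python
import numpy as np
import soundfile as sf

a, fs = sf.read("version_a.wav")
b, _ = sf.read("version_b.wav")

n = min(len(a), len(b))
residual = a[:n] - b[:n]            # the null: whatever the two files don't share

rms = lambda v: np.sqrt(np.mean(np.square(v)))
print(20 * np.log10(rms(residual) / rms(a[:n])), "dB")  # around -50 dB = effectively identical
```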

But stack 30 tracks of raging metal guitars and thick synth pads on "typical" quality gear -- The difference isn't subtle - It's startling.

I'd like to see evidence of this in the form of result files similar to the two mixes I did for this test. Logically speaking, the "stacking effect" does not occur the way people believe, and I explain this in depth in my AES Audio Myths video in the section starting at 28:28. The real issue is the masking effect. From my article Perception - the Final Frontier which I linked to earlier:

Perception article said:
The most significant thing that happens when tracks are summed is psychoacoustic. An electric bass track, where every note can be distinguished clearly when solo'd, might turn into rumbling mush after adding a chunky sounding rhythm guitar or piano. I'm convinced this is the real reason people wrongly accuse "summing" or "stacking" for a lack of clarity in their mixes.

If you can post a pair of mixes comprising many tracks recorded at two different levels, I'd love to examine them in an audio editor program to assess their differences. I'm pretty sure there will be no difference, even with 30 tracks of guitars and pads etc. But I'm glad to change my mind in the face of solid evidence!

--Ethan
 
(I really don't know how many times I'm going to have to explain this...)

I'M NOT TALKING ABOUT CONVERTER INPUTS

Sorry, I don't mean to yell. This happens all the time.

I'm talking about preamps. Circuits designed to spec at "around 1 volt" that are being pushed to 300% over spec might sound a little different...

Although I certainly don't exclude converters -- Some of which sound just fine at obscene voltage (HEDD) where others fall apart completely (ask my old MOTU).

Run ONE mic to TWO preamps - One outputting around 1 volt (which will give you around -24 to -18dBFS signals depending on how your converter's inputs are calibrated to that voltage) and the other around 3 or 4 volts (which will likely be somewhere near clipping on the converters -- But feel free to calibrate the converter inputs to play nicely with that amount of voltage coming in) and see if they sound the same.
 
I'M NOT TALKING ABOUT CONVERTER INPUTS

Gotcha. I thought we were still talking about recording with peaks at -20 sounding better than close to zero. Hopefully my recent test put that myth to bed, yes?

I'm talking about preamps. Circuits designed to spec at "around 1 volt" that are being pushed to 300% over spec might sound a little different.

Other than inferior low-voltage toob designs, most electronic circuits are pretty clean right up to the point of hard clipping. With some older gear that has transformers, distortion can increase a bit before hard clipping, and of course that happens with analog tape. But the modern lower-cost solid state gear I'm aware of doesn't use transformers. Again, I'm glad to see some examples that show otherwise, and I'm certainly not trying to be combative. But I've seen many factually wrong claims in this thread, such as that "cheap components" sound bad, and that stacking 20-30 tracks increases distortion when in fact distortion is reduced.
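On the stacking point, here's a sketch of the arithmetic under one stated assumption: the layers sum coherently (doubled takes of the same part) while each track's distortion and noise are independent. The coherent part grows by 20*log10(N), the junk by only 10*log10(N), so the junk falls relative to the music:

```python
import numpy as np

rng = np.random.default_rng(1)
N, samples, fs = 30, 48000, 48000

signal = np.sin(2 * np.pi * 440 * np.arange(samples) / fs)  # the shared part
tracks = [signal + 0.01 * rng.standard_normal(samples) for _ in range(N)]
mix = np.sum(tracks, axis=0)

rms = lambda v: np.sqrt(np.mean(np.square(v)))
print(20 * np.log10(rms(signal) / 0.01))                       # one track: ~37 dB
print(20 * np.log10(rms(N * signal) / rms(mix - N * signal)))  # 30-track stack: ~52 dB
```

For fully independent parts the ratio stays roughly constant instead -- but either way it doesn't get worse, which is the point.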

Although I certainly don't exclude converters -- Some of which sound just fine at obscene voltage (HEDD) where others fall apart completely (ask my old MOTU).

Okay, though I don't think I ever argued that "obscene voltages" are not a problem.

Run ONE mic to TWO preamps - One outputting around 1 volt (which will give you around -24 to -18dBFS signals depending on how your converter's inputs are calibrated to that voltage) and the other around 3 or 4 volts (which will likely be somewhere near clipping on the converters -- But feel free to calibrate the converter inputs to play nicely with that amount of voltage coming in) and see if they sound the same.

That's exactly what I did in my re-amp test, with the signals 20 dB apart. I didn't measure the voltage coming out of my preamp, but it was probably at least 2 volts. I'll gladly do this test again using larger voltages if you'll accept my use of the preamps in a Mackie 1202. I'll send a sine wave out of my sound card into two inputs of the Mackie, and raise the gain of one preamp to record just below 0dBFS and the other 20 dB softer. Is this an acceptable test for you?

Or maybe you can do a test using preamps you've had this experience with, and post two Wave files? It was mentioned earlier that many recordists can't be bothered doing tests for themselves, so you and I are doing a public service by testing this stuff for them! I certainly want to see this put to bed for once and for all, and hopefully others also want to know.

--Ethan
 
At this point, if the argument is that there is no real sound quality change when being at -20 dBFS or close to 0 dBFS, wouldn't it just be safer to stay at the lower dB range just to keep from clipping and adding any unwanted distortion to begin with? Say I am recording and mixing a project with a large track count: when summing all the audio to the master, wouldn't it just be a safe option to have the room? Am I seeing this the right way?
 
You can also just drop the Master Volume down to an acceptable level...same thing, but avoiding near-"0" tracking levels certainly would be safer, though there's a lot of "wiggle" room between -20 and 0. :)
 
So that part is true that lowering the master is as effective as dropping the channel faders assuming none are clipping?
 
You can also just drop the Master Volume down to an acceptable level...same thing, but avoiding near-"0" tracking levels certainly would be safer, though there's a lot of "wiggle" room between -20 and 0.
So that part is true that lowering the master is as effective as dropping the channel faders assuming none are clipping?
With many floating-point systems, to some extent, yes - you can lower the master fader. It's not what most would consider "good technique" but it'll avoid clipping the buss.

As far as the INPUT chain is concerned, it's completely unrelated -- It'd be like cooking a steak till it's completely burnt and then pouring ice cubes over it to make it more rare. It doesn't work like that.
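Picking up the floating-point point above, a quick sketch (assuming NumPy) of why pulling the master down is numerically the same as pulling every channel fader down first:

```python
import numpy as np

rng = np.random.default_rng(2)
channels = [rng.uniform(-1, 1, 1000) for _ in range(24)]
master_gain = 0.1

master_down = np.sum(channels, axis=0) * master_gain               # scale after the sum
faders_down = np.sum([c * master_gain for c in channels], axis=0)  # scale before the sum

print(np.allclose(master_down, faders_down))  # True -- same mix either way (in float)
```

None of which helps a track that was already distorted on the way in -- that's the burnt steak.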

At this point, if the argument is that there is no real sound quality change when being at -20 dBFS or close to 0 dBFS, wouldn't it just be safer to stay at the lower dB range just to keep from clipping and adding any unwanted distortion to begin with? Say I am recording and mixing a project with a large track count: when summing all the audio to the master, wouldn't it just be a safe option to have the room? Am I seeing this the right way?

Some of the argument -- In any case, "lower" (a.k.a. "normal") is almost always a better thing. Headroom is a wonderful thing that should be cherished and celebrated -- Too many people are in too much of a hurry to use it all up...
 