Does having an LED levels meter on the interface matter to setting gain right?

Chelonian

Member
All interfaces have some kind of indicator to show you if there is signal or clipping. For many inexpensive ones, it's just a green and red light. But some have level meters, like the MOTU M series, that show where in the range you are (green...orange...red!).

I'd think setting your level as high as possible without any risk of clipping is the way to go, but my question is: Can you determine that in the DAW by looking at level meters there? Or really, is that too late in the chain to know if it clipped at the point of the A to D conversion?

I'm trying to ascertain whether the LED level meters are just for coolness's sake, or whether they genuinely make it easier to set the input gain on the interface properly.
 
I've only used one interface, so I'm assuming most, if not all, behave similarly, as do DAWs in this respect.

I would have to say in most cases those (extra) LEDs are eye candy. In my DAW, the level meter for recording is pre-fader, and the fader has no function at that point. So the interface sets that level, and it probably won't get anywhere near clipping when the DAW meter is indicating a proper level.
 
Chelonian said:
I'd think setting your level as high as possible without any risk of clipping is the way to go

Not really. Headroom matters. It's good practice to set your levels without any risk of clipping, but they do not have to be as high as possible. Analog audio gear is designed to operate at line level. Converter calibration isn't standardized, but it's generally accepted that line level in the analog domain lands around -18 dBfs in your DAW. That's useful as an average value, not a peak.
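
To put numbers on that: dBfs is just 20·log10 of the sample amplitude relative to full scale. Here's a minimal Python sketch of the conversion (an illustration only, not tied to any particular interface or DAW):

Code:
import math

def linear_to_dbfs(amplitude):
    """Convert a linear sample amplitude (0.0 to 1.0) to dBfs."""
    return 20.0 * math.log10(amplitude) if amplitude > 0 else float("-inf")

def dbfs_to_linear(dbfs):
    """Convert a dBfs value back to linear amplitude."""
    return 10.0 ** (dbfs / 20.0)

print(linear_to_dbfs(1.0))               # 0.0 -> full scale
print(round(dbfs_to_linear(-18.0), 4))   # 0.1259 -> -18 dBfs is ~1/8 of full scale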

There are two different types of signal to consider. Transient peaks are fast percussive sounds; plenty of sources have them, drums being the obvious example. Steady state signals are mostly sustained notes, like a synth or distorted electric guitar, with little or no spiky peak energy. Many sounds combine the two, with both transient peaks and sustain: clean or acoustic guitar, piano, vocals, etc.

Setting the average level of a steady state signal like distorted guitar is easy: if the meters stay parked right around -18 dBfs or so, you're fine. At the other end of the spectrum, if you're recording drums or something, the peaks can go higher while the average stays lower, because the sound is all attack and decay with relatively little sustain. Keeping your peaks below a certain ceiling can be helpful. Maybe that's -6 dBfs? -10 dBfs? It's not critical as long as it doesn't clip.
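
To make the peak-versus-average distinction concrete, here's a minimal sketch that meters both for a buffer of samples (the drum-like buffer is made up for illustration):

Code:
import math

def peak_and_rms_dbfs(samples):
    """Return (peak, rms) of a float sample buffer, both in dBfs."""
    peak = max(abs(s) for s in samples)
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    to_db = lambda v: 20.0 * math.log10(v) if v > 0 else float("-inf")
    return to_db(peak), to_db(rms)

# Spiky, drum-like buffer: one strong transient, quiet decay.
buf = [0.5, -0.4, 0.1, -0.05, 0.04, -0.03, 0.02, -0.01]
peak_db, rms_db = peak_and_rms_dbfs(buf)
print(f"peak {peak_db:.1f} dBfs, average {rms_db:.1f} dBfs")  # peak ~-6.0, average ~-12.7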

Digital meters work by showing you sample values. At higher frequencies that can be an issue, because the actual reconstructed waveform can have a higher "true peak" than any individual sample; these are sometimes called intersample peaks. As your levels get closer to 0 dBfs, there is potential for clipping that the meters won't show you.
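
Here's a small sketch of that effect (assuming numpy and scipy are installed): a full-scale tone at a quarter of the sample rate, phased so that no sample lands on the crest, meters about 3 dB low until you oversample it:

Code:
import numpy as np
from scipy.signal import resample

fs = 48000
n = np.arange(480)
# Full-scale sine at fs/4, phase-shifted so every sample misses the crest.
x = np.sin(2 * np.pi * (fs / 4) * n / fs + np.pi / 4)

db = lambda v: 20 * np.log10(v)
sample_peak = np.max(np.abs(x))                      # ~0.707
true_peak = np.max(np.abs(resample(x, len(x) * 8)))  # ~1.0 after 8x oversampling

print(f"sample peak {db(sample_peak):.2f} dBfs")  # ~ -3.01 dBfs
print(f"true peak   {db(true_peak):.2f} dBfs")    # ~  0.00 dBfs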

With analog tape you set the levels around line level and the medium has a sweet spot. Keeping things on the strong side improves the signal-to-noise ratio and can affect the character of the recorded signal. Digital recording has a much lower noise floor and there is no sweet spot. There's usually little to no consequence to recording digital levels that are too low, within reason, unlike analog. Recording too hot is much more of a problem. When you can hear things get grainy and mushy from recording hot, it doesn't really matter what the lights and meters are doing.
 
Level LEDs on interfaces are useful to indicate the presence of a signal, as sometimes you have set something up and have lost it!
Also, a steady 'green' LED indicates something is wrong and a steady RED LED indicates something is seriously wrong!

But yes, the lovely meters on the MOTUs are a bit of overkill and would be a lot more useful if they had dB calibrations (but I STILL want an M4!). The DAW is where you need to set levels and, as said, NOT "as high as possible"! The noise floor for 24 bit recording is -144dBfs and no analogue amplifier can get close to that, so even recordings at -30 or even -35dBfs can be digitally boosted to -20dBfs and no digital noise will be added.
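
As a quick sketch of that arithmetic (the sample values are hypothetical): bringing -35 dBfs up to -20 dBfs is a 15 dB boost, a multiplication by about 5.6, applied equally to signal and recorded noise alike:

Code:
target_db, recorded_db = -20.0, -35.0
gain_db = target_db - recorded_db        # 15 dB of make-up gain
gain = 10.0 ** (gain_db / 20.0)          # ~5.62x linear

quiet = [0.0178, -0.0150, 0.0100]        # hypothetical quiet samples
boosted = [s * gain for s in quiet]
print(f"{gain_db:.0f} dB boost = {gain:.2f}x linear")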

BTW, I don't agree with "setting analogue tape to a 'sweet spot'". The current fashion for "tape sound" was not in evidence for engineers of old. They just wanted to make THE best recordings they could, constantly balancing noise against distortion. In the 'classical' field, engineers went digital pretty much as soon as they could!

Dave.
 
ecc83 said:
BTW, I don't agree with "setting analogue tape to a 'sweet spot'". The current fashion for "tape sound" was not in evidence for engineers of old.

Or was it?

It's a terribly interesting question I would entertain, possibly. Surely if they worked with tape they created evidence. If the engineers were so old that they never got to work with digital, what option to tape sound would they have as a basis for comparison?

Wax cylinder?

In any case, were you to set a tape level some 10 or 20 dB lower than optimal, the signal-to-noise consequences would be demonstrably more prominent, in terms of system noise, than in a digital recording. Yet that tape-era habit is often used as an argument for setting "auto destruct" levels in digital that simply aren't necessary.
 

The reference was, as ever, the original sound. I know the BBC boffins were constantly nipping from control room to studio to compare the live sound with what was coming out of the monitors they were developing.

The problem of tape noise vs distortion was tackled by skillful balancing. Recording engineers back then could usually follow a score, so they could anticipate the fffs! Further dynamic control was needed to enable that fussy medium, the black disc, to be cut!

It tends to be forgotten in forums such as this that for decades, sound engineers in both radio and recording were concerned mainly with capturing THE most accurate representation of the music they could. Only with the coming of the 'pop/rock' revolution did folks start to get 'creative' with various forms of distortion. Even when Dolby A became widely available, many engineers chose to run tape a few dB below previous levels and get a cleaner, i.e. less THD, sound, keeping only part of the 10dB or so noise reduction Dolby NR could provide.

And, after all, what was "Tape Sound"? Is it an Ampex, a Scully, an EMI or a Studer? Which tape and who biased it up and how?
What it wasn't is a 1/4" 4 track tape on a Teac with DBX!

Dave.
 
ecc83 said:
The problem of tape noise vs distortion was tackled by skillful balancing.

I agree. My point being that digital is more forgiving if you come in on the low side, and aiming for maximum levels before clipping buys you nothing.

ecc83 said:
It tends to be forgotten in forums such as this that for decades, sound engineers in both radio and recording were concerned mainly with capturing THE most accurate representation of the music they could.

It's an intuitive ideal on the surface unless we realize that what we really want is a capture that is flattering to the source or euphonic, rather than most accurate. Using a measurement grade microphone for audio recording would surely be an aid in the quest for accuracy, yet not a popular choice in practice.

ecc83 said:
Only with the coming of the 'pop/rock' revolution did folks start to get 'creative' with various forms of distortion. Even when Dolby A became widely available, many engineers chose to run tape a few dB below previous levels and get a cleaner, i.e. less THD, sound, keeping only part of the 10dB or so noise reduction Dolby NR could provide.

A somewhat narrower window of opportunity for gain staging than digital affords then?

"Creative distortion" began as an accident, but I believe there are many folks who feel it was a happy one. I'm certainly appreciative of the creative efforts of Pete Townshend, The Beatles, Hendrix, etc...

More to my original point, digital clipping is not creative distortion. It's destructive.

ecc83 said:
And, after all, what was "Tape Sound"? Is it an Ampex, a Scully, an EMI or a Studer? Which tape and who biased it up and how?
What it wasn't is a 1/4" 4 track tape on a Teac with DBX!

All of the above, including the Teac. Dismantling the nuances of every recording format is well beyond the scope of my comments on gain staging.
 
Yes, the LEDs matter for setting gain, even the ones that are only green or red. The manual should tell you how to set your gain using the LEDs. The converters are calibrated to a reference analog level; there is a spec for it, and it might be mentioned in the manual. For consumer and prosumer interfaces, the calibration is hard-wired and not adjustable. Higher-end converters are adjustable.

ecc83 said:
Also, a steady 'green' LED indicates something is wrong and a steady RED LED indicates something is seriously wrong!

I don't agree with this at all. Most, if not all, interfaces that have only green and red LEDs will tell you to set the gain so green is always on when you're recording your source audio at the level you want. Some manuals will tell you to set the gain so the green is always on and the red LED will flicker only sometimes. Ideally, you never want the red LED on all the time, but I'm guessing manufacturers design in a little bit of room for those who feel they have to push it (thinking they'll get a 'warmer' sound - LOL).
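
As a toy model of that behaviour (the thresholds here are invented for illustration; real interfaces hard-wire their own), a two-LED meter is just two comparisons against the buffer's peak:

Code:
import math

GREEN_ON_DBFS = -40.0  # hypothetical: signal present
RED_ON_DBFS = -3.0     # hypothetical: close enough to clipping to warn

def led_state(samples):
    """Return (green_lit, red_lit) for one buffer of float samples."""
    peak = max(abs(s) for s in samples)
    peak_db = 20.0 * math.log10(peak) if peak > 0 else float("-inf")
    return peak_db >= GREEN_ON_DBFS, peak_db >= RED_ON_DBFS

print(led_state([0.05, -0.04, 0.03]))  # (True, False): solid green, no red
print(led_state([0.90, -0.95]))        # (True, True): red flickers on a hot peak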
 

Mine has a single green and red LED. I take it up to a solid green with a sometimes flickering red when I can. Sometimes I back it off just until the green is solid and no red at all.
 
My Presonus only has overload LEDs.

Having meters would be handy to indicate that a signal is going through. However, Reaper does that job anyway when I've armed a track for recording, and that's where most of my attention is, i.e. focussed on the screen in front of me.

I'm happy so long as the peak leds aren't flashing and I can see action happening in Reaper.

I would have higher priorities than meters for the front panel of an interface. For example, more combo inputs, rather than having them on the back (as I observe on most of the current crop of interfaces).

 
"It's an intuitive ideal on the surface unless we realize that what we really want is a capture that is flattering to the source or euphonic, rather than most accurate. Using a measurement grade microphone for audio recording would surely be an aid in the quest for accuracy, yet not a popular choice in practice."

Maybe it is a cultural thing in the UK (and much of Europe), but in some 60 years of interest in sound reproduction and recording, the quest has been for "The Closest Approach to the Original Sound". I mentioned the BBC. They were responsible in the early days for some of the best quality audio electronics, which permeated the studio world here. The BBC were in the forefront of monitor loudspeakers with the "LS" series. The LS3/5a is world renowned for its accuracy, albeit at modest levels. Those speakers set the benchmark, followed by better ones such as the Spendor BC1, and no loudspeaker calling itself "Hi Fidelity" would be much thought of in the pages of such magazines as Hi Fi News (before the tweaks got in!), Studio Sound, Wireless World and The Gramophone unless it came close to those mentioned.

Microphones? Much used was the (now) Coles 4038 ribbon, a microphone noted for a wide, flat response and very low distortion*. "Measurement" microphones really did not exist outside the NPL.

*If a mic has those qualities plus excellent damping, it can hardly help but give a very faithful reproduction.

Dave.
 
My Tascam only has overload LEDs, but in the Tascam control panel, you have 16 faders with digital meters. Thus you can bump up the signal going to the DAW, or drop it if you want.
 

I don't have the gear to test, Rich, but those faders cannot change the level going into the A/D converters, can they?

Dave.
 
I'm sure it's done after the A/D converters, since they are virtual faders in the control software. It's probably done in the DSP chip, where there is also EQ and compression. While the mic inputs have a volume knob, the line inputs on the back do not. It does change the signal sent via USB to the DAW. I used it to boost the line inputs from my stereo preamp when I transferred some records recently, as they were well below the level I wanted.

Tascam's comment about the faders is:

Faders, level meters

Use the channel fader of each channel to adjust the level of that channel. Use the master fader to adjust the master level.

The input signal level of each channel is displayed in the corresponding channel level meter. The mixed output signal level is displayed in the master level meter.

The channel level meters and the master level meter are displayed using green bars for the range of −12 dB or less, yellow bars for the range of −12 dB to −6 dB, and red bars for the range of −6 dB or higher.
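
Those bands reduce to a simple lookup. Here's a minimal sketch using the thresholds from the Tascam text above (the function itself is just an illustration, not Tascam's code):

Code:
def meter_color(level_dbfs):
    """Map a level in dBfs to the Tascam meter's color bands."""
    if level_dbfs >= -6.0:
        return "red"     # -6 dBfs or higher
    if level_dbfs >= -12.0:
        return "yellow"  # between -12 and -6 dBfs
    return "green"       # -12 dBfs or less

for level in (-20.0, -9.0, -3.0):
    print(level, meter_color(level))  # green, yellow, red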
 