Do digital signals need to be cleaned up?

I think what it means is this: if a track or instrument has too much bass rumble and you want to turn it up to hear more of its higher frequencies, those bass frequencies become a problem as you raise the level. A high-pass filter or a low-shelf cut lets you turn the track up without the low end building up along with it.
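For illustration only, a minimal sketch of that idea in Python (assuming the track is a mono NumPy array; the 80 Hz cutoff and the +6 dB of gain are example numbers, not recommendations):

```python
import numpy as np
from scipy.signal import butter, sosfilt

def highpass(track, sample_rate, cutoff_hz=80.0, order=2):
    """Butterworth high-pass to strip rumble below cutoff_hz."""
    sos = butter(order, cutoff_hz, btype="highpass", fs=sample_rate, output="sos")
    return sosfilt(sos, track)

# Toy track: 40 Hz rumble plus a 2 kHz tone we actually want to hear.
sr = 44100
t = np.arange(sr) / sr
track = 0.5 * np.sin(2 * np.pi * 40 * t) + 0.2 * np.sin(2 * np.pi * 2000 * t)

cleaned = highpass(track, sr)  # remove the rumble first...
louder = cleaned * 2.0         # ...so roughly +6 dB of gain no longer drags the low end up with it
```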

It works that way too; it goes both ways. You remove some of the less necessary vocal frequencies so that the other elements in that region don't have to compete as hard and are perceived more clearly at the desired signal level. Because you then don't have to raise those sound sources as much to make them clear or dominant enough in the low end, their mids and highs stay quieter too, which leaves the sound sources living in the mids and highs with fewer frequencies to fight against as well. And as to what you added: when you then want those low-frequency-dominant sources to be a little more dominant in the mids and highs, what you end up with in the lows is far less likely to turn into mud.
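A quick way to see the "fewer frequencies to fight with" point, as a rough Python sketch (the two toy signals and the 150 Hz cutoff are made up for the example; real material would be loaded as mono NumPy arrays):

```python
import numpy as np
from scipy.signal import butter, sosfilt

def low_band_energy(signal, sample_rate, hi_hz=200.0):
    """Total spectral energy below hi_hz."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return spectrum[freqs < hi_hz].sum()

sr = 44100
t = np.arange(sr) / sr
vocal = 0.3 * np.sin(2 * np.pi * 120 * t) + 0.4 * np.sin(2 * np.pi * 1000 * t)
bass = 0.6 * np.sin(2 * np.pi * 60 * t)

# High-pass the vocal at 150 Hz so it stops competing with the bass down low.
sos = butter(2, 150.0, btype="highpass", fs=sr, output="sos")
vocal_hp = sosfilt(sos, vocal)

print(f"low-band energy, untreated mix: {low_band_energy(vocal + bass, sr):.1f}")
print(f"low-band energy, vocal high-passed: {low_band_energy(vocal_hp + bass, sr):.1f}")
```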

Thank you, we are getting somewhere! :thumbs up:
 
Nothing. MusicWater is just another vague, ambiguous troll trying to run this site in circles.

+1.

MW. Your post is all about acoustic instruments or amplified instruments. The question is specifically about digitally produced sounds.
Why are we assuming there's background noise, and why is it different to normal noise?
And even if we accept that there is some background noise, why are we recommending workarounds instead of addressing it at the root?

If you want to contribute to a friendly community...
read the question.
 

He's gonna keep talking in circles while not saying anything relevant, then play the victim card when people come down on him for being deliberately vague and nonsensical. Same ol same ol antics from the same ol same ol troll that HR.com can't get rid of.
 
OP asked about the use of lo-pass/hi-pass filtering, and I provided a hi-pass filtering example to him - a valuable one - so that he can understand it more clearly.

OP asked about VST tracks. He didn't ask for generic apply everything to everything troll "advice".
 

Agreed.
I'm out.
 

The impact of hi-/lo-pass filtering is the same with digitally produced sounds. What you also do in that case, to a much greater degree, is improve the signal-to-background-noise ratio by muting tracks and automating their levels. When a lot of distorted frequencies (relative to the frequencies of the original sound source) combine at low signal levels, the perception of the whole gets smeared by the combined distortion and dynamic range of each part; you end up with a little of every quality and none of the qualities you actually want. The arrangement plays a critical supporting role too. Obviously you're still limited by the quality of the original input frequencies from each sound source and how they combine, but the decisions are primarily made at higher levels than the FX level, and at those higher levels the scope of impact of dealing with the issues is far greater.
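To make the muting / level-automation part concrete, a very rough Python sketch (assuming a mono float NumPy array; the -60 dB threshold and 50 ms window are arbitrary illustration values, and in practice a DAW's mute/volume automation or a gate would do this job):

```python
import numpy as np

def mute_quiet_sections(track, sample_rate, threshold_db=-60.0, window_s=0.05):
    """Crude mute automation: zero out any window whose RMS level falls below threshold_db."""
    out = track.copy()
    win = max(1, int(window_s * sample_rate))
    threshold = 10.0 ** (threshold_db / 20.0)
    for start in range(0, len(out), win):
        chunk = out[start:start + win]
        if np.sqrt(np.mean(chunk ** 2)) < threshold:
            out[start:start + win] = 0.0
    return out
```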
 
That would have been a great answer.

Not being funny here - this is a genuine question. Is English your first language?

No, it's not. I try my best to explain these things in as good English as possible, so do all the rest in here that don't have English as their native language.
 
No, it's not. I try my best to explain these things in as good English as possible.

Fair enough. I wasn't sure at first, but that's bound to make it difficult.

The correct thing to do would be to do what actually needs to be done based on how it sounds

I think this is the key thing to take away from this thread.
Assess every audio track objectively regardless of source.
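In that spirit, one tiny Python sketch of an "objective" check that treats every track the same (assuming each track is already loaded as a mono NumPy array scaled to ±1.0; the track names are hypothetical):

```python
import numpy as np

def describe_track(name, track):
    """Report peak and RMS in dBFS so any track can be judged by the same yardstick."""
    to_db = lambda x: 20.0 * np.log10(x) if x > 0 else float("-inf")
    peak = np.max(np.abs(track))
    rms = np.sqrt(np.mean(track ** 2))
    print(f"{name}: peak {to_db(peak):.1f} dBFS, RMS {to_db(rms):.1f} dBFS")

# e.g. describe_track("synth bass (VST)", bass_array); describe_track("vocal take", vocal_array)
```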
 
Assess every audio track objectively regardless of source.

Hmm. That, and not forgetting to take advantage of knowing the sources well by collecting technical information about them. Working with samples is a completely different thing from working with real sound sources. For instance, you want to limit the number of types of distortion involved, because the product of those various distortions is going to eat up the recording. It all goes back to the core components used by the companies behind the VST instruments: what converter clocking technology was used, what power quality, what input and output stage signal capacity, what amps and monitors in what kind of acoustic climate, what sampling technology (e.g., how many layers were sampled) and so on...
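On the "collect technical information about the sources" point, a small Python sketch of the kind of basic facts worth logging per sample file (the file name is hypothetical, and this only covers WAV header data, not the chain the sample was recorded through):

```python
import wave

def sample_file_info(path):
    """Gather basic technical facts about a WAV sample: channels, bit depth, sample rate, length."""
    with wave.open(path, "rb") as wav:
        return {
            "channels": wav.getnchannels(),
            "bit_depth": wav.getsampwidth() * 8,
            "sample_rate_hz": wav.getframerate(),
            "seconds": wav.getnframes() / wav.getframerate(),
        }

# e.g. sample_file_info("piano_C4_layer3.wav")
```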
 
The OP's question was answered in the first response, reinforced in the next handful.

This thread is now going nowhere, so I've closed it.
 