Convention for which instruments are put in the L/R stereo channels


nuoritoveri

New member
Hi! I registered to this forum because I thought you might know this.

I read on Wikipedia about the pseudo-stereo technique used, for example, in pop music recordings (in the article Stereo, section "Common usage").

So, as I understand it, in such a recording some instruments are present or strengthened in one channel and absent or attenuated in the other.
I noticed this while listening to a variety of rock-related genres.
But is there a convention for this: are some instruments usually put in the L or R channel? Are certain instruments/voices usually put together?

If such a convention exists, some sources of information would also be much appreciated.
Thanks in advance,
nuoritoveri :)
 
The only rule is that you should be guided by your ears, if you're talking rock/pop music. You will work out that spreading some things too wide sounds bad and keeping others too tight sounds bad. Listen. Experiment.

Other genres with live type recordings (classical for instance) may make more use of positioning things in a particular spot equivalent to their actual position in an ensemble.
 
There are no rules. There is SOME conventional wisdom: things like bass and lead vocals usually get panned centre, but that's not hard and fast. Rules are made to be broken.
 
Thanks for replies :)
To be more specific, I'm not asking how I should arrange instruments in my own recording; rather, I'm after cues for analysing existing ones.
From that perspective, Track Rat's response was quite useful :)

I mean something that would be useful in automatic analysis, for example:
- if I can hypothesise that some instrument is prominent in only one channel, then I can switch to analysing that channel alone
- if I presume the lead voice is prominent in both channels, I can to some extent distinguish it from chorus voices

The thing is, I'm now reading articles about automatic transcription of music, and all the authors collapse the audio recordings to mono as the first step. I was wondering whether there could be some benefit in analysing the stereo channels separately.
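A quick sketch of the first idea: a mid/side decomposition shows how centre-panned and hard-panned sources can be told apart without collapsing to mono. The signals and panning here are entirely made up for illustration:

```python
import numpy as np

# Synthetic stereo mix: a "lead" tone panned centre (identical in L and R)
# and a "guitar" tone hard-panned left. These names are illustrative only.
t = np.linspace(0, 1, 44100, endpoint=False)
lead = 0.5 * np.sin(2 * np.pi * 220 * t)    # present in both channels
guitar = 0.5 * np.sin(2 * np.pi * 330 * t)  # present only in the left channel

left = lead + guitar
right = lead.copy()

# Mid/side decomposition: centre-panned content sums into "mid",
# while only hard-panned content survives in "side".
mid = (left + right) / 2    # = lead + guitar / 2
side = (left - right) / 2   # = guitar / 2, the lead cancels out
```

Here the side signal contains only the hard-panned guitar, so analysing it in isolation separates that instrument from the centred lead; that is exactly the kind of cue a mono downmix throws away.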
 
To be more specific, I'm not asking how I should arrange instruments in my own recording; rather, I'm after cues for analysing existing ones.

There are a couple of approaches you can take.

1. Arrange the instruments in the stereo field as if you were placing them on a stage to perform.

2. Arrange the instruments in the stereo field to create an interesting sonic landscape.


I mean something that would be useful in automatic analysis, for example:
- if I can hypothesise that some instrument is prominent in only one channel, then I can switch to analysing that channel alone
- if I presume the lead voice is prominent in both channels, I can to some extent distinguish it from chorus voices

The thing is, I'm now reading articles about automatic transcription of music, and all the authors collapse the audio recordings to mono as the first step. I was wondering whether there could be some benefit in analysing the stereo channels separately.

Engineers often mix in mono first to make sure that all instruments are represented well, and that none is sonically obscuring another.

But listening to each channel separately is an exercise I wouldn't bother with. I would simply concentrate on the sonic integrity of the whole stereo spread. I don't believe there is a benefit in analysing stereo channels separately, because they never exist in real life separately (unless you've blown a speaker). As a listener, what you hear from one channel is always coloured by the influence of the other.
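One empirical way to decide whether separate channel analysis could pay off is to measure how similar the two channels actually are, for instance via the normalised cross-correlation at zero lag. The function name and the interpretation thresholds below are my own, not any standard:

```python
import numpy as np

def channel_correlation(left, right):
    """Normalised zero-lag cross-correlation between two channels.

    Values near 1.0 mean the mix is essentially mono, so per-channel
    analysis adds little; lower values mean the channels carry
    genuinely different content worth analysing separately.
    """
    left = left - left.mean()
    right = right - right.mean()
    denom = np.sqrt(np.sum(left ** 2) * np.sum(right ** 2))
    if denom == 0:
        return 1.0  # treat silence as "effectively mono"
    return float(np.sum(left * right) / denom)

# Demo on synthetic data: identical channels correlate perfectly,
# while independent noise channels correlate near zero.
rng = np.random.default_rng(0)
ch_a = rng.standard_normal(1000)
ch_b = rng.standard_normal(1000)
```

A transcription pipeline could run this check first and fall back to the usual mono downmix whenever the correlation is close to 1.0.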
 
My default is to pan the drums more or less according to audience perspective and fit everything else around that. But there are no rules.
 