Stereo vs mono question

adam125

New member
Hi guys,

I'm wondering a few things about stereo:

1: If I pan a guitar track to the right, is it normal when mixing to click the button above the plugin chain that switches the track to stereo, even though it's a single track panned to one side?

2: If I switch that button from mono to stereo while all the plugins on the track are still set to mono, what does that do to the sound?
(When I click the button from mono to stereo in Logic, all the plugins become stereo, but when I click it back to mono the plugins don't change back to mono. Is this normal? How could I fix it?)

3: Also, is it normal to switch a saturation effect on a kick to stereo, more as a stereo effect on the kick?

Still learning, and mainly using headphones at the moment, so the stereo effect is maybe not as obvious to my ears.

Cheers, Adam
 
Unless you're using some kind of effect that creates a stereo output from a mono source, like a reverb, stereo delay or stereo chorus, I don't think changing the track settings to stereo will make a difference.

Changing the plugins to stereo might increase the amount of CPU the effects use if it means they're now processing double the audio (even though it's just the same audio duplicated).
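If it helps to see what "the same audio duplicated" means in numbers, here's a rough numpy sketch - not how Logic or any plugin actually does it, just dual mono illustrated:

```python
import numpy as np

sr = 48000
t = np.arange(sr) / sr
mono = np.sin(2 * np.pi * 220 * t)       # a mono source, e.g. a DI'd guitar note

# "Switching the track to stereo" without any stereo processing is just this:
stereo = np.stack([mono, mono])          # left and right are identical copies

# A plugin running in stereo now does its work twice, once per channel...
left_out = np.tanh(2.0 * stereo[0])      # stand-in for some per-channel processing
right_out = np.tanh(2.0 * stereo[1])

# ...but both channels started identical, so they end identical: the track still
# sits dead centre. Only the CPU usage went up.
print(np.allclose(left_out, right_out))  # True
```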

I doubt a saturation effect produces a stereo output from a mono source.

Headphones should make the difference between mono and stereo even more noticeable than on speakers.
 
Yep - creating a stereo or mono track makes no difference; it is still mono until you start to add processing that creates either realistic or simulated stereo width.
 
1: No, definitely not. Keep it mono, then pan it wherever you want. To prove the point, solo your track and pan it hard left: notice your master has nothing coming out of the right channel and it's all on the left?

2: You're lucky. A lot of third-party plugins I've used in the past don't switch when I click the stereo button, meaning I need to set them all up again.

3: Saturation is used first and foremost to add harmonics, which fatten and liven up a signal, while at the same time having the effect of adding soft-knee compression. You can stereoize a mono track using saturation, but I would rather not. I like to apply saturation directly under the track if I am parallel processing, but lately I am applying more saturation directly to the track for way more control.
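To picture the harmonics-plus-soft-knee point, here's a toy Python sketch with tanh standing in for a saturator (a generic waveshaper, not any specific plugin):

```python
import numpy as np

sr = 48000
t = np.arange(sr) / sr
x = 0.9 * np.sin(2 * np.pi * 100 * t)     # clean 100 Hz sine

drive = 4.0
y = np.tanh(drive * x) / np.tanh(drive)   # simple soft-clip "saturator"

# 1) Added harmonics: energy shows up at 300, 500, 700 Hz (odd harmonics of 100 Hz).
spectrum = np.abs(np.fft.rfft(y)) / len(y)
freqs = np.fft.rfftfreq(len(y), 1 / sr)
for h in (100, 300, 500, 700):
    idx = np.argmin(np.abs(freqs - h))
    print(h, "Hz:", round(20 * np.log10(spectrum[idx] + 1e-12), 1), "dB")

# 2) The "soft-knee compression" side effect: peaks come down relative to the body
#    of the wave, so the crest factor (peak / RMS) shrinks.
def crest(sig):
    return np.max(np.abs(sig)) / np.sqrt(np.mean(sig ** 2))

print("crest before:", round(crest(x), 2), "after:", round(crest(y), 2))
```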
 
Ah, so you have to tweak everything in a plugin all over again, back to the same settings as before, when switching to stereo then? What happens to the sound when a track is mono but all the plugins are set to stereo, or vice versa? Or am I misinterpreting things haha? Cheers, Adam
 
Ah okay, do you know why it's a function then? I mean, when do I want a stereo track? Feels like I'm never switching the button to stereo unless I'm listening to an mp3 or have a bus? Cheers, Adam
 
It says 'channel.mode', and then you can switch it to either mono or stereo. It also has a 'character' button with a stereo effect, and a knob in the middle of the plugin where you can switch to mono/stereo. All three of these options make me a bit confused, since I'm thinking the kick should be in the middle. Would you know if these options are meant for a kick or more suited to other things? I'm using the Gsatplus saturator. Cheers, Adam
 
I don't know that plugin so I don't know what the options do, but I would stick with mono unless I had a particular reason to do otherwise.
 
You've got yourself a bit tied up. A mono source, say a mic or a guitar, can be on a mono track or a stereo track; it makes little difference. That mono track can have a stereo effect placed on it, at which point it becomes 'bigger' or 'wider' etc.

You need to keep in mind that other sources, like synths and samplers, can be sort of stereo. Not real stereo, as in two mics, but stereo in terms of sound spread. So a synthesised piano would have more low notes to one side and more high ones to the other, with middle C bang in the middle. That's a bit like a real piano when you play big chords or arpeggios up and down the keyboard. Now we have so many sounds spread all over the place that we need stereo tracks in the DAW to manage them.

None of these are real stereo of course - oddly, nowadays real stereo is far, far more gentle. Even two mics on a guitar is rarely stereo, but just twin tracks with different EQ and panning making up the sound. Most times in Cubase I create stereo tracks, because that's what most sound sources are: twin tracks with a common fader and processing. But when I record a single mic, I often forget and record it to a stereo channel. It just costs me a bit of wasted disk space, that's all.
 
Yeah, if I made something too wide and I decide I want it back in mono, some plugins don't switch back to mono properly, and I literally need to load a new mono instance just under it, quickly copy the settings across, then remove the old stereo plugin. But it doesn't happen very often at all, and most of the time I'm fine, because the plugin manufacturers are getting over that issue now; it doesn't happen with all plugins anymore like it used to.

I personally think it's important to keep e.g. a melody guitar mono (if you recorded it with one mic). All the way down at the end of the line it will probably be stereo and very wide, but the actual track you recorded your guitar on should be mono. You'll probably have sends going to lots of stereo reverbs and delays, which in turn get routed to a main stereo bus, but having your main track in stereo will completely screw with the stereo image if you pan it, because the pan turns into a stereo balancer. A balancer only balances the channel between the left and right speakers, so you lose control of width and of the direct placement of the dry source. When we're talking about a mixture of stereo balancers and stereo auxes, a stereo balancer just readjusts the energy between the speakers to make something lean off to the side or sit down the middle, without really caring about the stereo image you've crafted up to that point. If your guitar is mono and you pan it to place it firmly in a location, then all of your stereo auxes after that will place it correctly. Another use: if you're setting up an ambient room verb for your overall tracks, you must keep that stereo, otherwise everything will get shoved down the middle, even sources that are hard panned; keeping it stereo allows your verbs to sit directly under your main panned tracks.
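Rough Python sketch of the pan-vs-balance difference I mean, assuming a common constant-power pan law and a simple balance control (not Logic's exact curves):

```python
import numpy as np

def pan_mono(x, pos):
    """Constant-power pan of a mono signal. pos: -1 = hard left, 0 = centre, +1 = hard right."""
    theta = (pos + 1) * np.pi / 4
    return np.cos(theta) * x, np.sin(theta) * x

def balance_stereo(left, right, pos):
    """Balance an existing stereo signal: it only turns one side down, it never re-places anything."""
    return left * min(1.0, 1.0 - pos), right * min(1.0, 1.0 + pos)

def rms(sig):
    return round(float(np.sqrt(np.mean(sig ** 2))), 3)

x = 0.1 * np.random.randn(48000)          # some mono source

# Mono track panned 40% left: the dry source itself lands at one spot in the image.
L, R = pan_mono(x, -0.4)
print("pan     L/R:", rms(L), rms(R))

# The same move attempted with a balance control on a dual-mono (stereo) copy of the track:
# the right channel just gets quieter. On a real stereo signal, anything already hard panned
# inside it can only be attenuated on one side - it can't be moved across the image.
Lb, Rb = balance_stereo(x, x, -0.4)
print("balance L/R:", rms(Lb), rms(Rb))
```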

Edit: Apart from that, it's WAAAY faster to keep your main track mono. If you pan it off to the side, all stereo auxes after it will compensate, and you don't need to pan any of your auxes along with your mono track... because they are stereo, they will maintain the placement of your mono panned track. As a Logic user, I wanted to let you know why I do things this way; in other DAWs it may not even matter.

A stereo track is also very risky imo. If you add plugins like your saturator, it will spread things out to stereo by default with no control; you'll lose that direct mono placement and your vocal will get smeared right away before you've even started. But if your main track is mono, then adding your saturator as an insert will load a mono version of the same plugin, so you can use it to thicken/compress your track with mild distortion, much the same as if you had recorded that track through a pushed tube preamp: one mono, very focused track. You would lose this if you had it in stereo and started adding plugins without much care. It might still end up sounding awesome, but I think it's good to be aware of what is happening when you work with a mono track vs a stereo one, even if the stereo track is just a one-mic mono recording.
 
Not with you at all on the difference between pan and a stereo balance. The law is a bit different, but what you're describing isn't my experience at all. The stereo balance fades the left out or the right out, and set to the middle it's a bit lower than a mono track panned to centre, but the stereo separation doesn't change with balance; the two channels are still separate? Width has little to do with panning of a stereo or mono signal. If you have a realistic sound field in stereo and change the balance, one side drops but the positioning slowly reduces to a mono mix that's unbalanced. Maybe we're talking about the same thing differently? For me the width of a centrally panned sound source will be the width, speaker to speaker, and it seems to collect in the centre. Invert the polarity of one channel and it seems to cling to the sides and not the centre? I've never come across your plugin problem. Is that a specific plugin? Perhaps one I don't use?
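If you write out the standard mid/side sums it shows why the polarity flip behaves like that - a small sketch, nothing plugin-specific:

```python
import numpy as np

sr = 48000
t = np.arange(sr) / sr
centre = np.sin(2 * np.pi * 220 * t)       # something common to both channels (the centre image)
width = 0.2 * np.random.randn(sr)          # a bit of channel difference (the "sides")

L = centre + width
R = centre - width

def mid_side(L, R):
    return (L + R) / 2, (L - R) / 2         # mid = what the channels share, side = how they differ

def rms(sig):
    return round(float(np.sqrt(np.mean(sig ** 2))), 3)

M, S = mid_side(L, R)
print("normal :  mid", rms(M), " side", rms(S))     # mid carries the centre, side is small

# Flip the polarity of one channel: what the channels shared now cancels out of the mid
# and reappears in the side - the centre image "clings to the sides" instead.
M2, S2 = mid_side(L, -R)
print("flipped:  mid", rms(M2), " side", rms(S2))
```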
 
Hey Rob, by far the biggest reason I would use a mono track is so I don't accidentally turn my mono, centered recording into a confusing stereo image. If I want to place something into the mix, I want to be using a mono reverb on a mono recording to emulate a mic being pulled further away from the dry recording; to me this is a pretty big deal. Using a stereo reverb gives a completely different effect: the driest, most up-front centre with a bloom of reverb off to the sides, out of the way. Which is not what I want sometimes...

OK, you could say you shouldn't be using a reverb on an insert anyway, which is true... But what about saturation, where I actually do want to affect the whole recorded signal to fold back in those transients? Parallel processing achieves a completely different purpose.

I actually just tested this out: if I have a dry, mono, centre-panned track on a stereo channel and add some saturation, the meters fluctuate quite a lot between left and right, and with a little more saturation the image starts to lean off to the side, even with only a modest amount of drive. So then we have started out completely wrong, seeing as saturation is pretty much the very first thing I do, and by the time we send out to a bunch of auxes our image is already leaning, compounding the problem even further.
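(I don't know what any given saturator does internally, but here's a hypothetical sketch of one way a "stereo" instance could make a dual-mono track lean - simply by driving each side slightly differently. The drive numbers are made up for illustration, this isn't Gsatplus's actual behaviour.)

```python
import numpy as np

sr = 48000
t = np.arange(sr) / sr
mono = 0.8 * np.sin(2 * np.pi * 110 * t)    # centred dual-mono source

# Hypothetical "stereo" saturator that drives each side a touch differently.
L = np.tanh(4.2 * mono)
R = np.tanh(3.8 * mono)

def rms(sig):
    return float(np.sqrt(np.mean(sig ** 2)))

print("L RMS:", round(rms(L), 4), " R RMS:", round(rms(R), 4))
print("lean:", round(20 * np.log10(rms(L) / rms(R)), 2), "dB")   # no longer 0 dB, so the image leans
```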

It's my workflow. I could get a perfectly good mix with all stereo tracks, but I feel like I could so easily screw up, and mixing is hard enough as it is.

Edit: With the panning, it's not anywhere near as big of an issue, but I like a single channel panned directly where I want it to be, e.g. a lead guitar panned 40% left. The stereo auxes after that will lean appropriately automatically, but I can use the stereo balancers on those instead to shift the energy back to centre, so my dry signal is still 40% left but the delays/verbs etc. still have equal energy around the sides. All I'm saying is that using a stereo balancer afterwards moves the dry 40% panned guitar and all of the stereo stuff as well, shifting the whole image over to the side equally. The panning problem is not a deal breaker, but creating fake stereo right from the get-go is a bit risky imo.
 
I'm going to have to say I've not heard any of these things you are describing, but maybe we just do different kinds of music. Most of my recordings are things like choirs, small ensembles and even opera, plus lots of sampled pianos and other real instruments. The favourite plugin reverbs I use are often there to simulate real buildings, so I might record in a big church, then pop in an opera singer recorded in the studio, and I'll be matching the artificial reverb with the real reverb. If I have a mono source and add my favourite reverbs, one of which is supplied with Cubase Pro (the Reverence ones), they just sound right when gradually added, and they don't have the artefacts you have.

I don't understand the folding back in of transients, and I'm not certain I understand saturation used the way you mention it. Meters fluctuating from left to right happens when you record real reverb, as the phase differences between left and right arrive from different distances. Viewing the stereo end product on a stereo scope, there is movement visible left to right in the real recordings, and it looks and sounds similar with the synthesised reverb. In the case of a stereo reverb on a panned source, I never shift the reverb - that stays dead centre, as it would in real life. If you move a sound source from the centre to the left in a reverberant space, the reverb still has left and right components, and the good simulations process it just like real life, or at least the ones I use do?

The saturation and the weird leaning I'm having trouble with, sorry. Are we talking about processing-type saturation like we get in some tube processors (real or simulated)? I rarely add them to a reverberant source because I hate the sort of roughness they produce, so my reverbs stay uncompressed and unsquashed in every sense.
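Back-of-the-envelope version of that arrival-time point (the 30 cm is just an example figure, not from any particular session):

```python
# A source 30 cm closer to one mic of a pair arrives earlier on that channel.
SPEED_OF_SOUND = 343.0      # m/s at room temperature
SAMPLE_RATE = 48000

extra_path = 0.30           # metres of extra distance to the far mic (made-up example)
delay_s = extra_path / SPEED_OF_SOUND
print(f"{delay_s * 1000:.2f} ms = {delay_s * SAMPLE_RATE:.1f} samples at 48 kHz")
# ~0.87 ms / ~42 samples of inter-channel delay - plenty to make correlation and
# L/R metering move around as the source (or its reverb) shifts.
```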

I suspect we do things quite differently, so I can't relate to your workflow, but I'd like to try to appreciate how you're working. Me? I record the real and the sampled/synthesised tracks, and usually sort panning first and then add reverb to the tracks that need it. Some of the sampled/synthesised tracks already have reverb as part of their sound and I might use more of this, but usually the favourites I always use.
 
I guess I wouldn't attack your kind of mixes in the same way as mine. I pan mono reverbs all over the place quite often. I am working on a track with a couple of lead guitars interplaying with each other; they are panned about 50% left and right, so I have a couple of mono reverbs, sent from each of the leads, panned 100% left and right in a criss-cross fashion to give a lush soundscape while the lead guitars sound fairly dry - a bit like having some kind of wide stereo synth pad playing underneath. I just experiment with that mix occasionally and fall back onto it every now and then, because it's insanely tough compared to a lot of other mixes.

I always hard pan mono reverbs on each double-tracked rhythm guitar too; it makes things wider but also balances out the frequencies at the same time. i.e. if one guitar has a lot of bottom end and the other has a lot of high end, criss-crossing the verbs helps shift some of the low and high energy over to the other side, so one ear isn't receiving all of the high end all of the time. It's something I do a lot, but I understand why you wouldn't need to do any of that, and I can also understand if you are dealing with a lot of stereo tracks, or all stereo tracks.

I don't really compress my reverbs unless they're being sidechained from a different channel, which I do every time to some extent. I haven't really got as far as trying out saturation on reverbs and delays yet either; I'll try things like that once I already have a spectacular mix.

I was talking about applying saturation directly to the recorded vocal, or melody guitar, or whatever. If I am recording straight into my interface then I am always inserting saturation on every single track in my mix as one of the first things I do; this is why it's so important for me to keep it mono, otherwise I would just be fake-stereoizing every track, and they then start to lean or fluctuate (which I understand is normal behaviour). Mixers like MixbusTV just insert the SSL channel strip on every channel instead, and some people use that analogue-sounding DAW. I have started to re-amp my tracks through the ArtTubeMP, and I think it's worth the extra effort to use hardware instead of plugins to thicken. I know you are not a fan of saturation, but in my A/B mixes the difference is staggering: how pleasing the saturated tracks are to the ear, and how much punchier and livelier they are.

I feel like saturation sounds much better for rounding off the transients than compression does. Or at the very least, use saturation first to make the compressor react in a better way. It's the transients that can sound harsh, so when I was re-amping a few of my harsher tracks I noticed they came out the other side so much thicker and warmer; even the room resonances (a 600 Hz whistle) became barely noticeable. It's like fucking magic.
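Toy sketch of what I mean about transients - tanh standing in for saturation, plus a deliberately crude compressor with a 5 ms attack (both just illustrations, not any particular plugin):

```python
import numpy as np

sr = 48000
x = np.zeros(480)                                   # 10 ms buffer at 48 kHz
x[10:20] = 1.0                                      # a sharp, click-like transient
x += 0.1 * np.sin(2 * np.pi * 100 * np.arange(480) / sr)   # quiet tone underneath

# Saturation acts instantly, sample by sample: the spike is rounded the moment it appears,
# while the quiet material is barely touched (tanh is roughly linear near zero).
drive = 3.0
sat = np.tanh(drive * x) / drive

# Crude feed-forward compressor with a 5 ms attack: the gain can't come down fast enough,
# so most of the spike slips through before the gain reduction arrives. (One smoothing
# constant for both attack and release - deliberately simplistic, just for the comparison.)
def compress(sig, thresh=0.3, ratio=4.0, attack_ms=5.0, sr=48000):
    coef = np.exp(-1.0 / (attack_ms * 0.001 * sr))
    gain, out = 1.0, np.empty_like(sig)
    for i, s in enumerate(sig):
        level = abs(s)
        target = 1.0 if level <= thresh else (thresh + (level - thresh) / ratio) / level
        gain = coef * gain + (1 - coef) * target
        out[i] = s * gain
    return out

comp = compress(x)
print("peak dry:", round(float(x.max()), 2),
      " saturated:", round(float(sat.max()), 2),
      " compressed:", round(float(comp.max()), 2))
```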
 