Does a Multi-channel "output only" PC card exist?

  • Thread starter: sonusman
mrclay,

You know, a soundcard like that could have a potential market.

No secret that software mixing is dubious at best. I am in possession of many software-based recordings that suffer from many problems aside from A/D/A conversion and whatnot.

For an engineer who really only needs 2 ins, a card with enough outs to run each track to a mixer would be a nice option with many uses in less complicated recording setups. If you think about it, you could also use the two A/D converters for running the mixer's outputs back to the hard drive for recording the mix.

Uh, Slack, most every A/D/A recording machine and/or card has separate converters for A/D and D/A conversion. A card with only 12 converters total, like the one suggested, could be made for less money than a 16-converter card (8 in, 8 out). Only about a 25% savings, but a savings nonetheless.

DropD, as I stated earlier, I am not really convinced yet of software's ability to effectively mix a lot of tracks. It really comes down to processing power. Especially with real-time automation of volume, compressors, effects, aux sends, etc., you really start racking up the processing power needed. The chances of errors in the audio grow with every volume change, inserted compressor, added reverb, and so on.

With an external mixer, you are not limited by processing power at all. You are only limited by the sound quality of the devices you use, and how many of them you have. But the sky is the limit.

I also question the internal bit processing you get in software mixing on a $120 CPU. Digital audio processing requires big-time calculations to be done in real time. Good digital processors work internally at 50 bits at least, and this can extend up to 72 bits for the better processors. You are asking a lot of your computer's CPU to offer this quality of processing in real time, or even to not mess up the zillions of calculations when rendering all the processing to the file. Personally, from all the software-based stuff I have heard, I don't think many PCs are up to the task with more than a few tracks using EQ, volume fades, compression, gates, reverbs, etc. The number of calculations involved is astronomical.
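One way to picture why processors carry those extra internal bits: round back to a narrow word after every processing stage and the errors pile up, versus computing at full precision and rounding once at the end. A rough sketch (the gain chain and sample value are made up purely for illustration):

```python
# Sketch: why DSP boxes use wide internal words. Apply a chain of small
# gain changes to a 16-bit sample, rounding back to an integer after every
# stage (a "narrow" pipeline), vs. keeping full float precision throughout
# and rounding only once at the end.
gains = [0.93, 1.07, 0.85, 1.18, 0.97, 1.02, 0.91, 1.11]

sample = 12345                 # a 16-bit PCM value

# Narrow pipeline: truncate to an integer after each stage.
narrow = sample
for g in gains:
    narrow = int(narrow * g)   # rounding error injected at every stage

# Wide pipeline: stay in floating point, round once at the end.
wide = float(sample)
for g in gains:
    wide *= g
wide = int(wide)

print(narrow, wide)            # the two results drift apart
```

Eight stages is already enough to make the two answers differ; a mix with dozens of gain, EQ, and dynamics stages per track multiplies the opportunities for this kind of drift, which is why extra internal headroom matters.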

I still think the best bang for the buck is an external mixer. The sonic accuracy is retained much better, and you don't suffer the results of inadequate internal bit processing.

PCs and software are good for editing, and maybe for processing one or two tracks at a time (but for mixing you need to hear all the processing at once, something that is hard to do with even a stereo .wav file running a few different processors). But multitrack mixing is just too much of a demand to do effectively on a PC with software. I have heard nothing from anyone (using 8 or more tracks) that would suggest otherwise to my ears.

Having used WaveLab with the Steinberg Mastering Edition plugins to master files, I have noticed that as I start stacking up the real-time processors, the sound tends to get more distorted as I go along. It only takes something like a compressor and a mastering-quality EQ before I start hearing these effects. In effect, the CPU is creating errors that are audible at the D/A converters. Once I apply the processing, I don't hear the artifacts anymore, but the problem is that the sound is a little different at that point. I need the ability to hear it "as it is going to sound" while monitoring in real time. I have a Celeron 400, which is suitable for the task. But think about having even 8 tracks (4 times as many as my stereo .wav file), all with separate processing like a volume change, EQ, a compressor, and effects. Even with double the processing power available, you would still hear errors, as the processor and software are not able to process that much info in real time.

So it really comes down to this: how can I effectively mix using software and plugins if I cannot have confidence that what I am hearing with real-time processing is what it will sound like once I actually apply the processing? You can't, and I would suspect that many people who are new to recording experience similar problems but just think that they are doing something wrong (of course, that may be the case too... :D).

Anyway, I don't put much stock in software designers' claims that their stuff works very well in real time. To my ears, it doesn't at all!

Digital mixers suffer from similar problems. These $2-10k mixers just don't have the horsepower under the hood to process the audio accurately. That is why most of the stuff mixed on digital mixers sounds so stale and lifeless. The original audio is corrupted by poor processing.

These are not just theories that I am spouting off. They come from some serious evaluations of analog vs. computer digital processing, and some in-depth research into this area. I always send people over to www.digido.com to hear what an expert in digital processing has to say about it.

Everyone claims that their digital gear works so well. Yet I see posts all over wondering why people's mixes sound so bad compared to bigger-time recordings. I always say it is the equipment. I stand behind this. Low-quality digital gear is not going to produce even professional-sounding recordings, let alone stuff that would be considered "stellar".

So anyway.

mrclay, I think you should direct this question to the design staff at a manufacturer of audio cards. It is an excellent question. Your idea is interesting, and if someone were to provide such a card, hell, I might even consider buying one.

Ed
 
I'm pretty much into digital recording right now because it's so simple, not for the super hi-fi end of it (although I do the best I can to keep my 16/44.1 tracks clean). Although I've been happily mixing in Cooledit (tediously of course w/ volume curves and such), from working with a friend w/ a Layla, mixer and external effects I know it's just easier for me to get a more natural mix working with physical faders and EQs..

I find I actually like the sound of a rough (even cassette 4 track!) recording when it's mixed pretty well. I have a friend who records track by track almost entirely with one handheld mic for vocals and a PZM in one place on a wall (with the instruments right where they are, some amped, some not) to 8 track reel to reel. The tracks are completely lo-fi with tons of room echo and some are tape overdriven, but mixed (on the Layla and mixer setup) it sounds wonderful. I'll have to post an mp3 somewhere.. I guess it's like remixing a garage rock recording w/ modern equipment.. Anyone heard Guided by Voices? :)

Yeah I'll give up on that idea. Anyway, could you provide a link to info on the Delta 44?
 
Is there such a thing as an "output only" card for the PC? I'm doing simple multitrack recordings using CoolEdit pro, but wish to send the 10 or so individual tracks out to a mixer. But I have no problem tracking my recordings with only the stereo-in/mic-in my sound card already has. Card manufacturers don't care about me, do they? :) I'm eyeing the Delta 1010 or Layla for the future, but for now I just want better mixdown power..
 
It probably doesn't cost much more for inputs than it does for outputs, since both require A/D or D/A conversion. So even if you did find a card with 10 outputs and no inputs... it'd still cost you an arm & a leg, it'd be less versatile, and nobody in their right mind would buy it.

Forget about it and save up for the multi-in multi-out card of your dreams.

Slackmaster 2000
 
Plus, if you're using the line-in of your card, I bet it's a sub-$100 sound card (like an SB). Invest in a multi-in multi-out 24-bit sound card; you won't regret it. I just got a Delta 44 and it's like night and day. Where did the hiss go??? And the sweet resolution of 24 bit...

Also, why use 10 outs when you can mix in software?
 
<BLOCKQUOTE><font size="1" face="Verdana, Arial">quote:</font><HR>I am not really convinced yet of software's ability to effectively mix a lot of tracks. It really comes down to processing power. Especially with real time automation of volume, compressors, effects, aux sends, etc..<HR></BLOCKQUOTE>

Just to support software mixing a little: it can have the advantage of mixing (at least in Cooledit, w/ only volume and pan changes) NOT in real time, but in however-long-it-takes time. :) Even simple gain adjustment (multiplying by a constant), panning (multiplying by two constants, one for the L side and one for the R) and mixing (summing all the panned stereo pairs) across many tracks of 44.1 audio is a real workout if you think about the calculations going on, but it wouldn't matter what processor you had if the time-sync constraint wasn't there. Cooledit may take less or more time to create a mixdown .wav than the length of the song, but any errors are only due to glitches in the actual algorithms, not from having to estimate or drop calculations (I guess) in order to keep up with real-time output.
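The gain/pan/sum arithmetic described above fits in a few lines. A toy sketch (the linear pan law and sample values are made up for illustration; real mixers typically use a constant-power law):

```python
# Minimal sketch of an offline (non-realtime) mixdown: gain is one
# multiply per sample, panning is two multiplies (L and R), and mixing
# is the sum of all the panned stereo pairs.

def pan_gains(pan):
    """pan in [-1.0 (hard left) .. 1.0 (hard right)], simple linear law."""
    return (1.0 - pan) / 2.0, (1.0 + pan) / 2.0

def mixdown(tracks):
    """tracks: list of (samples, gain, pan); all tracks the same length."""
    length = len(tracks[0][0])
    left = [0.0] * length
    right = [0.0] * length
    for samples, gain, pan in tracks:
        gl, gr = pan_gains(pan)
        for i, s in enumerate(samples):
            left[i] += s * gain * gl    # one multiply for the gain,
            right[i] += s * gain * gr   # two more for the pan law
    return left, right

# Two tiny "tracks", hard-panned opposite ways at unity gain:
L, R = mixdown([([1.0, 0.5], 1.0, -1.0),    # hard left
                ([0.2, 0.4], 1.0,  1.0)])   # hard right
print(L, R)   # [1.0, 0.5] [0.2, 0.4]
```

Offline, this loop is free to take as long as it needs per sample, which is exactly the point being made: no deadline, so nothing gets dropped to keep up with playback.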

Don't pro tools and the other big software packages have options to mix like this?

<BLOCKQUOTE><font size="1" face="Verdana, Arial">quote:</font><HR>I think you should direct this question to a design team staff at a manufacturer of audio cards.<HR></BLOCKQUOTE>

Or how about I develop the card myself and make millions! Oh wait, I'm not even A+ certified :)



[This message has been edited by mrclay (edited 04-12-2000).]
 
Ed, what I meant was the same cost to manufacture (roughly). That is, an 8-in 8-out card would probably be similar in cost to a 16 out card.

I see where you think it would be advantageous to have a card that could be used to only expand the number of outputs you can manage though....but I can't imagine something like this being sold as a standalone unit. I would imagine it as part of a large "expansion" package for an existing card.

What someone needs to do is externalize the converters, resulting in a system that's expandable without adding additional cards (which can be difficult, as you know). If the card were designed to handle a nearly unlimited number of input and output channels, then you could stick the converters into breakout boxes that connect digitally to the card (externally). Need to monitor 8 more tracks? Just plug another breakout box into your chain of output boxes, etc.

This is definitely possible... I wonder if anyone has tried it? Does it already exist, perhaps?

BTW, CPU's don't make mistakes.

Slackmaster 2000
 
geez, I used to have this 233 Cyrix chip that made SOO many mistakes... :)

xoxo
 
That's because Cyrix doesn't make CPUs. They make little devices called "socket fillers" that serve to keep your CPU socket free from dust and debris until you get a real CPU. Sort of like those cardboard computer cutouts that you might see in furniture stores. Purely cosmetic, with a bit of dust-fighting action.

:)

Slackmaster 2000
 
I will illustrate my point a little more clearly here.

What good is any applied processing if you can't hear it in real time? None at all.

Also, standalone digital processors provide up to 72 bits of internal processing, in real time, on only 2 channels. But when I apply the processing in WaveLab (up to, say, 4 plugins), it only takes about 2 minutes to complete the processing on a 5-minute stereo .wav file at 24 bit, 48kHz sampling rate.

I am missing some math here or something. A computer costing as little as $600, with $500 worth of software, is going to outperform $16,000 worth of standalone digital processors? Those standalone units can only work in real time.

So, you may have a real point that the algorithms may be flawed, but for the CPU to accomplish in half the time what top-of-the-line standalone boxes do in real time suggests that these algorithms are not just flawed, they are totally bogus! They do not work as advertised at all! I can't imagine that a $1200 or so setup can do the work of $16,000 worth of units in half the time and still provide the quality.

Uh, Slack, CPUs don't create errors? You are of course joking, right? Take your fan off and see what happens. Have that puppy run at 100% capacity for 5 straight minutes and listen to the results. No errors? How about error-ridden... :D

So, maybe I am missing something here, but I know what I hear. And I know that I am on to something, even if I am on a slightly wrong path at this point. There is a very good reason why PCs and their software are not used for high-end production.

Oh, Pro Tools. It provides its own DSP in the box. The computer just plays host. The extent of processing it does would blow up most computers... :) Really though, it is its own digital processor that has little to do with the computer that hosts it.

Also, when you read that so-and-so recorded their CD with Pro Tools, you might want to take a closer look and read between the lines. They never said all the mixing and processing was done on Pro Tools, just the recording. Of course they also used it for cut-and-paste editing, but I can assure you that no competent engineer would use any real-time processing from it, or use its mixer for anything other than a way to get the recorded tracks back to a nice mixer. Yes, Pro Tools is nice recording software and a nice editor, but it is not a killer mixer and processor. It suffers from the same processing problems any computer-based system suffers from. The price tag of a Pro Tools system suggests that it is not a serious mixing/processing platform for up to 24 tracks like they claim it is. If it were, do you think that mastering studios would be throwing tens of thousands of dollars at standalone digital processors? Do you think a big studio would throw $500,000 at a nice digital console if they could just buy a Pro Tools system to do all of this in one convenient bundle? Hell no! They would sell off all the really expensive digital gear and just use Pro Tools for recording, mixing, and processing. But Pro Tools is not all that as mixing and processing software, so they keep buying the half-million-dollar digital mixers (for those who want digital mixing) and $4,000 standalone digital processors, because those are the only units capable of handling digital audio with any kind of accuracy. Even then, the engineers usually would rather just use an SSL console and a rack full of analog processors, because they sound better yet.

I am not saying that low-end digital cannot deliver somewhat decent results, but it is a long way from high-end digital, or even medium-quality analog. Good digital processing is every bit as expensive as good analog processing, and in some cases even more expensive.

But everyone getting into digital thinks they are getting "state of the art" with cheapy digital systems. They just think they need to know some magical, elusive recording/mixing techniques to make the same kind of killer-sounding recordings they hear from the big-boy studios. Well, no matter what anyone chooses to believe, it is not going to happen that way. You want big-time quality, you pay for it. There is no "cheap way" to get it.

So, I'll report a more detailed reason why software mixing is not all that great when I get to the bottom of the exact reasons why. I already know what my ears tell me; now I need to know what the data tells me. Of course, the real answers are going to be hard to come by, because software developers have a nice tendency to not reveal why their product sucks (Microsoft ring a bell here?). Also, I am sure the issues are quite complex and hard to understand without some advanced degree. But rest assured, when sonusman gets the answers, he will make some simple sense of them and share with all... :D

Anyway mrclay, start drawing up your plans. Slack has a good idea with the expandability thing too. So, incorporate that into your plan also and sell the idea to Lynx Studio (they seem to be heading in the right direction with their gear).

Ed
 
Oh boy, I have the feeling that this is gonna be a long one and it probably won't make sense.

Real time software effects have their own flavors and limitations, just like effect boxes. The order in which you apply the effects is also important, just like with real boxes. I would say that if you start to hear what you would consider "errors" after applying realtime effects that a) you need to play around with them more because you're doing something wrong or b) the effects you're choosing mold the sound in such a way as they do not sound good together.

You really have to watch your output levels in your DX effects or you'll get big-time distortion, and you'll have a hard time figuring out where it's coming from. I usually have problems applying reverbs, but a little tweaking and everything comes out fine.

CPU utilization has nothing to do with how your shit's going to sound unless the system is maxed out, in which case you'll hear skips and jumps (but only while playing; if you mix down, the resulting wave will sound normal). There is no audible or "real" difference between applying a regular effect and applying a "realtime" effect. The result will be the same. The benefit of "realtime" is that you can change the effect settings as you are listening, since it operates on the audio stream instead of just munching your wave itself.

Windows 32-bit processing... this is nothing to really think about. Current processors internally work with 32-bit words (though the data bus to the CPU is 64 bits wide). VERY basically, the more room the CPU has to work with, the better. Since your audio applications utilize the services provided by the operating system, it is of course beneficial that the operating system be able to take full advantage of the processor it is designed for. Hence 32-bit Windows, which is one of the most overused selling points in history. Win 3.1 was 16-bit; Win95 and above are 32-bit. There's no reason to think about it. Application developers do not do anything special to take advantage of "32-bit processing," as it is taken care of, naturally, by the compiler. Blah blah, this is nothing to concern yourself with.

Ok, on to errors and error correction.

There is no way for an A/D converter to check for errors... and really it wouldn't want to. It samples your audio X times per second, and each time it could make a mistake, but there's no way to RESAMPLE an analog source without building a time machine. Therefore you can't verify that it's doing its job correctly without listening! Perhaps this is the reason that all A/D converters sound different, and that Ed's A/D converters cost $1000 and up while my A/D converters cost $20. :)

Once you're in the digital realm, you can start doing some things to avoid errors. One of the simplest forms of error detection was "odd parity," used to verify that data passed between memory and CPU accurately. Odd parity means counting the number of 1's in a string of bits (typically eight at a time) and choosing a parity bit so that the total count, parity bit included, comes out odd. For instance, if the byte 01010010 is to be written to memory, the parity generator sets the parity bit to 0, because three bits are already set and three is odd. The "byte," physically in memory, becomes 010100100... the original 8 bits with a 9th "parity bit." When the bits are requested by the CPU, the parity checker (the same circuit as the generator) verifies that there is indeed an odd number of set bits overall. Good, probably no errors.
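The whole scheme fits in a few lines. A sketch, assuming the usual convention that the odd-parity bit makes the total count of 1's odd:

```python
# Sketch of parity generation/checking. With odd parity, the parity bit
# is chosen so that the total number of 1s (data bits + parity bit) is odd.

def odd_parity_bit(byte):
    ones = bin(byte).count("1")
    return 0 if ones % 2 == 1 else 1      # make the total count odd

def check_odd_parity(byte, parity):
    return (bin(byte).count("1") + parity) % 2 == 1

data = 0b01010010              # three 1s already -> parity bit is 0
p = odd_parity_bit(data)
assert check_odd_parity(data, p)

# Flip one bit: the error is caught.
assert not check_odd_parity(data ^ 0b00000100, p)

# Flip two bits: parity still looks fine -- double-bit errors slip through.
assert check_odd_parity(data ^ 0b00000110, p)
```

The last assertion is the weakness discussed next: any even number of flipped bits leaves the parity count looking correct.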

Big problem with odd (or even) parity, however. Can you guess? You're right! It only catches an odd number of flipped bits (a double-bit error sails right through), and it doesn't know which bit is in error! When a parity error does occur, the CPU is halted and you have to start over.

Still, using parity you can be reasonably sure that no erroneous data was written to the hard drive, nor was any incorrect data reported to the user. Plus, parity checking does not slow the system down at all.

The "new" (the math is actually VERY old) way to detect memory errors is ECC, or Error Correction Code. From my understanding of it, it's very similar to Hamming codes if not the same. ECC can detect 1, 2, 3, & 4 bit errors in a 64bit word, AND correct single bit errors which is pretty cool. It requires quite a few extra bits, however, and also reduces system performance by 2-3%.

There are other means of detecting errors, such as CRC (Cyclic Redundancy Check), which treats the data as a big polynomial and divides it by a generator polynomial, keeping the remainder as a checksum. It's used more for network data transfers, where the chance of error is high.
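A minimal CRC sketch, doing the polynomial division bit by bit (the generator here is the common CRC-8 polynomial x^8 + x^2 + x + 1; the choice is just for illustration, and real protocols add details like initial values and bit reflection):

```python
# Sketch of a CRC: the message is treated as a polynomial over GF(2) and
# divided by a generator polynomial; the remainder is the checksum.

def crc8(data, poly=0x07):
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):              # bitwise polynomial division
            if crc & 0x80:
                crc = ((crc << 1) ^ poly) & 0xFF
            else:
                crc = (crc << 1) & 0xFF
    return crc

msg = b"hello"
c = crc8(msg)
assert crc8(msg) == c                   # deterministic
assert crc8(b"hellp") != c              # a corrupted message is flagged
```

The receiver recomputes the remainder and compares; a mismatch means the data was mangled in transit, at which point the packet is simply resent, as gaffa notes below for networking.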

Note that any device that is prone to frequent errors will have some means of detecting and correcting them on the fly. Hard drives and other magnetic media produce errors all the time; if the hardware did not include error detection and correction, we wouldn't be communicating with these fancy computers right now :)

Welp, gotta run. I hope I thoroughly confused you all. Hey, at least I didn't get into any specifics!

Slackmaster 2000
 
<BLOCKQUOTE><font size="1" face="Verdana, Arial">quote:</font><HR>Can you run 8 tracks out of a soundcard to an outboard mixer/Fx and back in to stereo?<HR></BLOCKQUOTE>
Oh yeah, you can do that with any card w/ at least 8 outs and 2 ins. I solved my own mystery when I went home right after posting the question and discovered the Darla 24 card..with 2 ins and 8 outs. Shame on me for not doing my homework 1st. :)

Steve
 
I'm just going to spew out some random facts here because I'm kinda surprised by some of these comments.

1) A powerful enough computer can perfectly mimic ANY number of standalone fully digital devices. Note that many devices have digital connectors but still use some analog circuitry, in which case a computer cannot perfectly mimic it, nor can another device of the same design, as no two analog devices will sound exactly the same.

2) The conversion from analog to digital as well as the rate and bit depth at which this happens is the only place where your "errors" are going to occur. This is all "analog's" fault :)

3) Your CPU will never be the culprit of enough calculation errors such that you will hear them during audio processing. Either a) your application will crash or b) the operating system or some portion thereof will crash. I guess you can "hear" a crash...but not really. In other words, there are NO errors at the CPU level, EVER, unless the machine has become unstable...in which case you will not be able to "run that puppy at 100% capacity for 5 straight mins."

4) Hardware is faster than software. This is why one computer cannot yet take the place of 30 $1000 digital boxes, even with external acceleration.

5) There are two important fundamentals to consider during software algorithm design. 1) Time complexity (e.g. how many instructions am I going to burn per n data elements) and 2) Space complexity (e.g. how much memory is this thing going to snag given n data elements). Engineers constantly make tradeoffs in precision & functionality vs. time & space considerations. An absolutely "perfect" digital reverb algorithm, for instance, might be so complex that it cannot be implemented to run in real time without serious hardware support. In this case, the designer will choose to economize the algorithm (e.g. make it less perfect such that it still works, just not as well). This is not a bad or abnormal thing, it is business. Who wants an effect that can't be applied in real time? Not enough people.

6) "Precision" has nothing to do with "errors". When you're listening to 8 bit audio, for instance, you might at first jump to use the "error" word...but you're not hearing ERRORS, you're hearing a tradeoff in complexity vs. function.

7) Given time, future computers WILL be able to replace ALL of *today's* digital equipment. At the current rate things are moving, I wouldn't be surprised to see it within 5 years. Will this make things cheaper for the average joe? Yes. Will this make things cheaper for the professional joe? No. Business is business no matter what form your product takes.

On a side note, Ed, the leap between cheap analog and cheap digital recording is phenomenal to those of us who are just starting out.

Slackmaster 2000
 
Okay Slack, fair enough, and sensible too.... :D Basically, hardware limitations drive software quality. Even a computer dummy like me should have seen that coming.

So why does my audio sound f*cked up when I apply enough real-time effects, then? This happens even at less than 60% CPU usage.

Oh, also, errors MIGHT occur in D/A conversion too. It's called quantization error, and it results from amplitude changes in the audio. The idea of dithering is to add noise to cover the distortion that results from it, figuring that "shaped" noise will sound better than the yuck created by quantization errors at the D/A converters.
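A rough sketch of what dither buys you (the step size, signal, and TPDF noise here are made up for illustration; this isn't any particular converter's scheme): plain truncation of a quiet signal collapses it into a couple of flat levels, while dithered rounding preserves the waveform as a noisy average:

```python
# Sketch of quantization error and dither: truncating to coarse steps adds
# distortion correlated with the signal; adding a little random noise
# (TPDF dither) before rounding trades that distortion for benign hiss.
import math
import random

random.seed(1)

STEP = 256                                   # e.g. 16-bit down to 8-bit steps

def truncate(sample):
    return (sample // STEP) * STEP

def dithered(sample):
    noise = (random.random() + random.random() - 1.0) * STEP  # TPDF, +/-1 LSB
    return round((sample + noise) / STEP) * STEP

# A quiet sine wave, well below one coarse quantization step in level:
signal = [int(100 * math.sin(2 * math.pi * i / 50)) for i in range(200)]

plain = [truncate(s) for s in signal]        # collapses to a flat-ish line
dith = [dithered(s) for s in signal]         # the waveform survives, noisily

print(sorted(set(plain)))                    # only a couple of levels remain
```

Averaging `dith` over many cycles recovers the sine shape, which is the whole point: the error becomes uncorrelated noise instead of signal-dependent distortion.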

Anyway, not really the point though.

So slack, how does windows 32 bit processing play into all of this? Does it at all?

(me taking my ass whoopin' from Slack in stride... :D)

Ed
 
Love a good post that starts with a fairly basic question and degenerates into a technical gab-fest. :D

OK, having read the posts from Sonusman and Slack, I couldn't help but wonder where error correction figures into this entire thing. Every so often, data will get lost, and I was wondering just how this is handled internally in a computer. I had a look through some of my more technical PC architecture books, and couldn't find much about it.

In networking, the packet is just resent (depending on the protocol), but what happens if the data goes walkabout between the CPU and the destination sound card (you know, catches the wrong 'bus', so to speak).

If there is no error correction, then the D/A would simply go 'OK, now the output is zero', and wait for the next block of data to appear. If there is error correction, then I assume that the D/A waits until the exact packet is sent properly. What kind of buffering does the D/A do?

I started to write a bit about the 32-bit processing thing, but I got myself confused with 32-bit hardware architecture vs. 32-bit software. I never can remember the difference.

- gaffa
 
Hoontech produces a couple of soundcards that can be expanded with a zillion different breakout boxes of varying usefulness. Maybe you can find something there, mrclay. E.g., a DSP24 + DACIII will do the trick in balanced 24/96, but I'm sure they have cheaper 16/44.1 variants as well. The Echo Darla 24 has two ins and eight outs at 24/96, just to mention one other option. There are a bunch of options; just look around.

Good luck

/Ola
 
Ok, so maybe he shouldn't mix in software :)
But still, he shouldn't record with his SB card :o)

So: 2in, multi out card is what the world needs! (and a good mixing board, why not a Mackie, I know you like those Sonusman :D )
 
Wow! This string, or whatever it's called, should be required reading for people thinking about getting into PC-based recording. I already use my PC to record my stereo masters, but was thinking of getting an 8-in/8-out card and some multitrack software. Thing is, I never planned on using this stuff for real-time FX. I like the real knobs too much. I just wanted to use it for things like mid-track volume changes, fade-outs, cutting out drumstick clicks, and so on. The stuff that requires six hands and inhuman timing to do on the fly. My question is this: can you run 8 tracks out of a soundcard to an outboard mixer/FX and back in to stereo?
 
Ok Slack, you didn't really lose me there, and I even got a few clarifications. But you didn't answer the question I posed: "Why does the audio sound messed up when real-time effects are applied, but after processing, it sounds fine?"

Hopefully, you can take it as a given that I use WaveLab's very nice feature of monitoring each plugin's output to ensure that I am not digitally distorting the next plugin's input. I can hear the results of that happening if I let it go over even 0.1 dB. So I am not creating distortion for the next device, at least to the extent that the software can detect the amplitude of the audio at a plugin's output (which in the case of WaveLab, I suspect is done with a pretty decent amount of accuracy).

Next, I cannot buy the idea that it's just how the plugins work together. I would buy it if it were not for the fact that the processed .wav file sounds different at playback after processing than it does when I am just monitoring the audio in real time, without the plugins actually applied to the .wav file. So there is the mystery. Why do the distortion artifacts go away after the processing is actually applied and a new .wav file is written?

This is a problem that needs to be addressed, because I need to be able to make decisions about the processing in real time, not at playback. As it is, I must sometimes apply the processing, listen, make adjustments to the plugin's parameters, create yet another new file, listen, adjust, etc. This is time-consuming and creates the potential for hearing fatigue because of the amount of time spent just trying to make some stupid adjustments! arrrrgggggghhhhhhhh!

So, why does the audio sound different during real-time monitoring than after the processing is actually applied and the new .wav file is played? That is the real question here, and I suspect the answer will reveal the computer's real limitations for processing audio in real time.

I suppose if one does not mind having to listen twice to actually hear what it sounds like, then it is no problem. But then we get into the "ah, f*ck it" realm when trying to get a better sound.

Yet another reason that I do not care for computer based processing. Let me explain.

I sometimes work out of a studio that has a Yamaha O2R digital mixer. I will not go into what I actually think of the quality of the processing of this mixer because that is for a different discussion.

Anyway. The problem with the O2R while mixing is the speed at which I can actually make adjustments. On my Soundcraft Ghost, with outboard dynamics processors and effects processors, I have the whole deal laid out in front of me, only an arm's reach away. Every control on the console is found as quickly as my eyes can search it out and show it to my brain, so to speak. It is a given that my eyes can find any knob on the console much faster than I could find it on an O2R, because the O2R requires that I select the appropriate menu, then the channel, then look at the screen. If I am viewing the EQ on channel 1 on the O2R and then want to view channel 2's EQ, I must reach over and select channel two on the console. On my Ghost, I simply look at the next channel strip, which is about 2" away. Guess which search is faster?

Now, let's say that I need to compare several settings between two channels on the console, as with stereo drum overheads. In this case I would want the two channels to share very similar settings, but this needs to be verified visually. I have to bounce between the two channels manually to compare settings, while also selecting different menus for different functions, like EQ, effects sends, dynamics processing, etc.

With the tactile interface of my analog console and analog processors, comparative searches happen much faster, and so does access to parameter controls. This is relevant in a major way when mixing music.

Physically speaking, we have certain limits. Psychologically speaking, we have limits pertaining to patience... :) Really. The longer it takes you to do something, the less likely you are to want to go back and adjust minor things that you may find wrong. Also, attention span comes into play, because it is really easy to forget what the hell you were trying to do in the first place. Inspiration is often where the best solutions come from.

Anyway, the longer it takes to accomplish minor tasks, the easier it is to just say "f*ck it, it is okay I guess." Call this a laziness factor. I don't, because I don't believe that I am lazy at all in a mixing environment. But there does come a point where the ease of accomplishing something decides whether I will do it at all, or just leave a somewhat undesirable thing alone because I am getting tired and impatient spending all that time to fix it.

So, better tools that ease accomplishing tasks lead to better end results, because we can use our somewhat limited energy and attention span to fix most problems. The more extensive and easier to use these automated tools are, the more we can fix in a given amount of time.

But really, when it comes to working with music, there just comes a point where disinterest comes into play. Hear the mix 100 times and you start to get a little tired of it. Your mind tends to drift, or to not focus with the same intensity.

A good example is me writing this right now. I started out with plans for a book! I was full of energy and had some very distinct ideas of what I wanted to write. But I am limited to my 30-40 words per minute of typing, so I get tired of focusing after a while and start rationalizing not clarifying points, or not pursuing details that in the beginning I thought to be pertinent. If an idea is complex, it requires a lot of time and focus to present. With my limits in typing, I would spend all day composing one post that relays the extent of what I wanted to say.

But maybe I could use a better vocabulary. Maybe I could use someone to dictate to. Maybe I could learn to type faster. These would all be powerful tools to help me achieve a better, more detailed post here. But they all come at some kind of investment in tools that I do not currently possess.

Now, it would be easy for me to sit back and say, "damn, that post is good enough." I have the benefit of knowing what is going through my mind, and can, so to speak, read between the lines. But the person reading it may feel that my post lacks a lot of detail that I believe is there. You see, the reader is fresh and can read much faster than I can type. But I made decisions during this whole composition that compromised my original intent, based on lacking the necessary tools to present my whole intent in a reasonable amount of time. Basically, I ran out of gas and started to make compromises to get it done, hoping, or not even caring, whether others would read between the lines or understand what is not there. But when I start to get feedback concerning my presentation, I will more than likely wish that I had better tools available to fix the many problems in it that are evident to others, and to me when I step away for a while and then come back.

Think of music this way. You are the composer, performer, producer, tracking engineer, mixing engineer, and mastering engineer. Basically, you are doing it all. In some cases, you are trying to wear two hats at once. Thus, you cannot devote all of your limited energy to any one task exclusively. At some point, you lose the edge and start letting little things go unfixed. Maybe you are not even catching the little things because your attention is on another task. Let's say that each little oversight or "f*ck it, it is good enough" thing degrades the outcome by even a tenth of a percent of what is possible. Add all those little problems up over the hundreds of decisions you will need to make to write, perform, record, mix, and master the song, and you can h
 