Waveform start points for proper timing?

  • Thread starter kerriobrown
HUGE +1 here. Maybe this is just me being a traditionalist because I'm a guitarist more than an engineer, but if your finished tracks need some sort of quantization to sound in time, then they're not finished takes. Digital manipulation is no replacement for a good performance.
Well said. Like I said, there is no substitute for practice.
 
HUGE +1 here. Maybe this is just me being a traditionalist because I'm a guitarist more than an engineer, but if your finished tracks need some sort of quantization to sound in time, then they're not finished takes. Digital manipulation is no replacement for a good performance.
+1 here too. It certainly helps to know the engineering when you're also the musician but nothing, nothing takes the place of a good performance.
 
I mean, is it a performance issue? Or is it simply trying to align tracks in a project where you don't have a reference time code?

It's a performance issue. That said, I don't think it sounds bad w/o tightening it up. Maybe I can perform better. I've been thinking over how I record, and I think my concern is that if I do too many takes, I'll have too much stuff to sift through. Because I'm OCD I have trouble trashing anything that could sound better w/ a few edits, so I try to record less. I see that this is irrational! It would be better to record more and be more liberal about what I cut.

Still, I am not absolutely convinced that even professional sounds couldn't be better w/ some tightening up. Even for takes that sound on time, after editing they seem to sound even better. I don't find that it gets robotic b/c there are enough nuances left. I am not exactly consistent in placements, just narrowing the window. But even if I were exact, each note has a different attack, release, amplitude, etc. Actually, I shouldn't say this but I've been tightening amplitudes too. It just seems to sound better, even for simple palm mutes that sound tight to begin with.

Playing to the metronome only is pretty much as useless as lining up your guitar parts with the eye. In the end everything will sound like perfectly lined up crap.

Ha ha. You may be right here. I chose to use the click alone b/c I didn't want everything else to complicate where the beats occur with their distinct waveforms etc. But I get that these differences could be exactly what I want to define where the notes fall. Again, I am convinced by everyone here that my song may sound better if I scrap it all and just play with the rest of the music. Maybe I'm losing feel, nuance etc. It just doesn't sound like it to me. I can't tell you how much I appreciate everyone's input and I have taken it to heart. If not for this song, I will definitely try it your way for the next one. If my current takes sound bland, I will even start over on this one and try your method.

I'll often track with some sort of SOP...but if I ever find that something just doesn't lay out very well with the other tracks...maybe because I recorded it at the wrong time, and didn't have enough "information" to play against...I'll just re-record the track(s).
After a while, you will find an SOP that works for you.


SOP? Is that standard operating procedure? If so, I get what you're saying and agree. My next step is to take my last takes and see how well they fall with the rest. I usually record just a piece and roughly mix it with the rest to see how it sounds. I really feel like I can dedicate more attention to the overall sound offline, rather than with the guitar equipped.

Digital manipulation is no replacement for a good performance.

I think the argument is what we're considering replacement to mean. I feel like I am slightly enhancing decent takes. I say decent b/c they sound great to me, but then sound better after a little tightening. That is, when I record I can discern the quality of the rhythm only to a limit. These are takes that others would not consciously find faults with, but after a little digital touch up they overall sound better.

Well said. Like I said, there is no substitute for practice.

I actually stopped recording for 5 years to get my chops up, but with my first child on the way, I figured I better get these songs out now. As you practice, you accumulate ideas and eventually you just have to move forward. Like most everyone, I work an absurd # of hours and I have to make compromises. But I want to minimize how much this shows in the music. Of course, that's the whole point of this discussion for me and I'm convinced that there may be better methods.

Thanks again everyone! I'll be sticking around here and hopefully have some advice to provide myself.

Best,
Kerry
 
Digital manipulation is no replacement for a good performance.

I think the argument is what we're considering replacement to mean. I feel like I am slightly enhancing decent takes. I say decent b/c they sound great to me, but then sound better after a little tightening. That is, when I record I can discern the quality of the rhythm only to a limit. These are takes that others would not consciously find faults with, but after a little digital touch up they overall sound better.

Then if they sound "great" why do anything? You're on a slippery slope here, because there's no clear line between touching up a good performance, tightening a merely OK take, and fixing an outright sloppy job. Where's the line between what you're doing, and Paris Hilton releasing an album because while she couldn't sing, with autotune she didn't need to? And furthermore, if you need to correct your tracks in the studio to get them tight enough, how are you ever going to be that tight live?

Southside Glen and I have had a number of recent debates about how modern recording technology is causing the overall level of musicianship to suffer, because so many people are using quantization plugins and pitch correction to make recordings when they don't actually have the musical ability needed to play it on their own. I don't want to come down on you personally, but it's stories like this that make it very hard for me to argue my case that there's plenty of good musicianship out there, and the recent lack of, say, guitar solos in rock music has more to do with solos being out of fashion than no one being able to play.

I mean, I work an absurd amount of hours, too. I still don't record anything I can't actually play without the help of a computer.
 
I have no problem with using digital manipulation (or even good old-fashioned punching in) to fix an accident in an otherwise worthy performance for which there is otherwise no real reason to re-record. Whether the accident is accidentally bumping a mic, just plain missing one important note or chord, or a temporary goof behind the glass doesn't matter; if it's otherwise a "keeper", I see nothing wrong with fixing one or two small errors or variances.

Where I do start to have a moral or ethical (I'm not sure what to call it?) problem is when the editing is meant to *create* a performance that was never really there. If that's going to happen, then the engineer is actually creating more of the performance than the musician did, and the engineer should get the musician credit for that track, IMHO. Of course the loophole in that idea is when the "musician" and the "engineer" are one and the same self-recordist ;).

It just seems to me (totally IMHO YMMV MIRV and all that) that there is a line that should be drawn between the guy on the musical instrument who is capable of laying down a decent track, but just needs to make a small fix as described above, and someone who could do a hundred takes and not get it right without having to "tighten" the whole thing up in post, in which case I have to wonder why they are recording or being recorded to begin with.

As far as the metronome/click track thing, unless the click track is actually going to be a rhythm track in the song (don't laugh, it's been done on rare occasions), it can provide a false sense of security to have everyone play separately to the click rather than to each other. Because everyone is human and no one is perfect, each track will "dance around" the click in an acceptable, maybe even pleasant way. But they are all doing their own individual dance, and when you put them all together on the same dance floor, they just bump into each other and step on each other's toes. Tightening that mess up can be a real pain.

But when they play to each other, by starting with the rhythm tracks (drums or maybe drums and bass together), then playing the gits to that, and then the keyboards to that partial mix, and then vocals (or some strategy similar to that at least), they all tend to dance to the same groove. Yeah there will still be small timing issues here or there because we are not computers but flesh and blood, but - assuming the guys actually know how to play halfway decently - the timing differences will be more coherent, more "musical", and probably more pleasant, possibly to the point where digital "tightening up" may not even be necessary or wanted.

G.
 
Of course the loophole in that idea is when the "musician" and the "engineer" are one and the same self-recordist ;).



I use that loophole. :D

But seriously, editing is not always about fabricating what is not possible...it is valid in modern studio production to fix screw-ups and polish things up (and no, you can't polish turds). :)
Unless you are into "purist" live, 2-track recording, there is a lot about modern production that can be viewed as "unnatural" when you compare it to a live performance.

IMO, the OP's real issue isn't about cheating, it's more about making the performance sound too automated/robotic by making things too precise and all on the beat.
IOW...if you are going to edit...don't ruin the feel of the natural performance.

However, if/when you can't play anywhere near good enough to make it sound right...then editing it into fake perfection is the bad side of editing, and can be viewed as cheating.
 
However, if/when you can't play anywhere near good enough to make it sound right...then editing it into fake perfection is the bad side of editing, and can be viewed as cheating.
And that's when the S1000HD comes out... oh never mind... :D :D :p :eek:
 
Sorry, I had a family emergency so I couldn't check in for a while. I want to argue this cheating issue.

But let me first say once again that I completely agree that if the tightening makes it sound worse b/c it loses feel, then of course it is stupid to do. I also mentioned I was convinced that I should let what I am calling 'great' takes stand on their own, at least until everything is mixed down, where I can do an A/B comparison. I say 'great' b/c I believe that if you were to listen to them, not conscious of this discussion, you would not think they are off time. As you all say, they may sound best in this mode where they are in time w/ reasonable human deviation, so I will try them as is.

However, if I feel like in the end they sound better tightened up I am going to change them. Here is why I disagree w/ all of you:

1. I am not trying to make it robotic. This is what I mean above by keeping it as is if it sounds worse. Robotic = worse, but a tightened-up version may not end up robotic at all, since enough nuances are left given that I manually tighten each note.

2. I disagree with what you call cheating. First, I am competent enough to play my songs. But I really don't care whether I can or not for song creation/output. I am not playing them live and if I ever wanted to I would be capable. But song writing, producing, and performing are 3 different things and I am currently more interested in the first two.

I hear the argument about autotuning capabilities on a lot of forums, and I think it's like saying people wearing glasses are cheating at seeing stuff (or drawing stuff if you want to keep to an art form). If it sounds the same then I don't care. In fact, I welcome new technology b/c it provides other potential songwriters an outlet and the possibility of making some incredible music they otherwise could not share with the world.

I think others tend to think of autotune etc. as being more analogous to steroids than glasses, but this is illogical. Steroids ruin competitions that are based on natural ability. If someone sneaks a pitch correction tool in under their shirt to use in the United States Natural Singing Competition, then it is cheating. But for songwriting and producing, and even playing live (if it works), then it is not cheating but instead providing better music for your audience. If it's totally about the music, which it is for me, then anything that makes the music better is welcomed.

Now if autotuning makes a song sound worse or fake etc. then we have a different story. I don't defend making songs worse, but this is not what you are arguing when you call it 'cheating'.

This isn't a game for me so I'm not sure what cheating would even refer to. I have songs I think are worth sharing with others and love to make them come to life so others can hopefully appreciate them.

That said, people listen to music for different reasons, and if it is important to you to restrict certain technological methods of production then I absolutely respect that for your own musical purposes. But I wonder how many unbelievable songs have been created w/ a little tuning help in the past 5 years or so that wouldn't otherwise have been made. Given the taboo nature of these little helpers, I'm guessing we'll never know. We'll probably only know how many songs have been ruined abusing the methods, since transparency is crucial to successful use.
 
I hear the argument about autotuning capabilities on a lot of forums, and I think it's like saying people wearing glasses are cheating at seeing stuff (or drawing stuff if you want to keep to an art form).
In a sense the use of glasses can be seen(:p) as cheating. For example when they are used to create an unfair advantage in a war or a competition. If you need to substitute technology for talent then I can see (:p) that as cheating. I think if you are changing beats and tunings while mixing you are not an audio technician, you're a disk jockey. Not that there is anything wrong with that :D

A Czechoslovakian came into my clinic one day. I asked him if he could read the bottom line and he said "Read it? I know him!":p
 
But song writing, producing, and performing are 3 different things and I am currently more interested in the first two.
I don't see it as much of a question of cheating (though that's one aspect of it) as it is a question of musicality and servicing the music.

You're right, it's all about the music. How can anyone be doing a service to the music when they consider the actual execution of it to be of minimal importance?

You are a songwriter and producer, which is very cool. Have you ever considered that the best way to get your songs to shine would be to produce them by having another musician perform them who not only cares about the performance end of it, but can enhance it by actually delivering a meaningful performance in the studio?

Melodyne-ing your babysitter's voice can make a version of "Rock-a-bye Baby" that will put your baby daughter to sleep. Get Aretha Franklin to sing it and every kid, parent and tweenie in your neighborhood will be wanting it on their mePods.

G.
 
I'd rather just retrack and play it better than go back and slice up my waveforms and time-align them. It's faster and sounds better.

STRICTLY AN OPINION! :)
 
I hear the argument about autotuning capabilities on a lot of forums, and I think it's like saying people wearing glasses are cheating at seeing stuff (or drawing stuff if you want to keep to an art form). If it sounds the same then I don't care. In fact, I welcome new technology b/c it provides other potential songwriters an outlet and the possibility of making some incredible music they otherwise could not share with the world.

I disagree. First off, I don't know too many people who have died because the person operating a motor vehicle was listening to a digitally corrected recording when they failed to see a pedestrian; however, people would definitely be dying if people were operating motor vehicles without corrective lenses. Equating what's a potentially dangerous medical condition with correcting a digital recording doesn't add up in my book.

Second, really what we're talking about here is correcting a rhythmic performance to keep it more "within the meter." To make it, in other words, artificially perfect. I think a better analogy would be using a really complicated series of stencils to paint the Mona Lisa. When we're talking about art, the end result is of course important, but part of the beauty and mystery lies in the act of creation - the fact that one day Michelangelo looked at a chunk of marble and within it saw the figure of a man, or da Vinci, lost in thought in front of an oil canvas, began to "see" a woman with a mysterious smile.
 
I will often re-align (tighten up) guitar audio to sit on the beats using either Cubase or Melodyne. Waveforms, however, typically have an attack where the largest amplitude is not at the beginning of the note (e.g. a chord).
K

Sheesh people, don't crucify the guy because he wants to nudge some tracks!!! :mad:

I'd recommend doing it in Cubase and not Melodyne. Turn off the snap function, zoom in close and drag the audio part to bring the peak closer to the beginning of the measure. Yes, you should use the first peak of the waveform. I find it is sometimes easier to find the note on the 2nd or 3rd beat of the measure and use those to align. And listen to make sure it's what you want.

Hope this helps.
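To make the "first peak vs. start of the note" distinction concrete: because a note's attack ramps up before its amplitude peak, a simple threshold on the envelope usually lands closer to the audible start of the note than the peak itself does. A rough sketch of the idea in Python/NumPy (the function name and the 10% threshold are arbitrary choices of mine, not anything from Cubase or Melodyne):

```python
import numpy as np

def onset_index(samples, threshold_fraction=0.1):
    """First sample whose magnitude exceeds a fraction of the peak --
    a crude stand-in for where the note audibly starts, which is
    usually earlier than the amplitude peak itself."""
    env = np.abs(samples)
    peak = env.max()
    if peak == 0:
        return 0
    return int(np.nonzero(env >= threshold_fraction * peak)[0][0])

# A synthetic note: 20 quiet samples, then an attack that peaks later.
note = np.concatenate([np.full(20, 0.01),
                       np.array([0.3, 0.8, 1.0, 0.7, 0.4])])
print(onset_index(note))             # 20 -- where the attack begins
print(int(np.argmax(np.abs(note))))  # 22 -- where the peak lands
```

The two indices differ even on this toy signal, which is part of why a note dragged so its peak sits on the gridline can still feel slightly early or late.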
 
Sheesh people, don't crucify the guy because he wants to nudge some tracks!!! :mad:

I'd recommend doing it in Cubase and not Melodyne. Turn off the snap function, zoom in close and drag the audio part to bring the peak closer to the beginning of the measure. Yes, you should use the first peak of the waveform. I find it is sometimes easier to find the note on the 2nd or 3rd beat of the measure and use those to align. And listen to make sure it's what you want.

Hope this helps.

Chili, he's not talking about shifting tracks to take care of lag issues, or phase-aligning multiple-mic performances; rather, he's talking about fixing intra-period timing hesitations, which is a bit different.
 
I'd recommend doing it in Cubase and not Melodyne. Turn off the snap function, zoom in close and drag the audio part to bring the peak closer to the beginning of the measure. Yes, you should use the first peak of the waveform. I find it is sometimes easier to find the note on the 2nd or 3rd beat of the measure and use those to align. And listen to make sure it's what you want.

Thank you! I am talking about intra-timing shifts, but this answers my initial question for this method too!
 
I disagree. First off, I don't know too many people who have died because the person operating a motor vehicle was listening to a digitally corrected recording when they failed to see a pedestrian; however, people would definitely be dying if people were operating motor vehicles without corrective lenses. Equating what's a potentially dangerous medical condition with correcting a digital recording doesn't add up in my book.

No, the severity isn't what matters here. For instance, if you can convince me that you are right about autotuning etc. and I am wrong, and later you inform me that you used debating software, it wouldn't change the logic of your point, because I don't care how good of a natural debater you are; I just care about the point in this situation.

I think a better analogy would be using a really complicated series of stencils to paint the Mona Lisa. When we're talking about art, the end result is of course important, but part of the beauty and mystery lies in the act of creation - the fact that one day Michaelangelo looked at a chunk of marble and within it saw the figure of a man, or da Vinci, lost in thought in front of an oil canvas, began to "see" a woman with a mysterious smile.

This is a good analogy, as long as we assume 1. that Da Vinci is using the stencils, and is not making a copy of the Mona Lisa but the very original; and 2. that the result of using the stencils creates, in Da Vinci's opinion, something as good if not better than the Mona Lisa. I won't reiterate why, since I've said many times that my concern is the quality of the output.

If you care about how the point was achieved that's great. You listen to music for a certain reason. I have a friend who is huge into electro-style and doesn't like the sound of a guitar or any acoustic instruments b/c he actually doesn't like the idea of humans playing (He hasn't said this in so many words, but this is how I infer his tastes). Meshuggah is a band with one of the best metal drummers in the world and pretty much the captain of the band, and yet for their third (I think) album they used drum sampling software. I've always been confused as to why, but I believe it is because he wanted to use a new method to inspire new beats (assuming again).

I used to argue with my Dad b/c I didn't like Joe Cocker getting big off performing cover songs. His argument was that the way he performed them was crucial to what he provided. This is obvious considering his iconic moment is singing "With a Little Help From My Friends" at Woodstock. Kind of funny that I cared about the exact opposite thing. But I later realized that Cocker wasn't trying to fool anyone. He wasn't trying to prove he was an amazing songwriter and wasn't entering any competitions to do so. He had something to provide that my Dad and others enjoyed.

I wasn't wrong in my opinion either. If I care that the music performer is also the songwriter, then that says something about what I get through music. But I know now that the Joe Cockers out there are just as valid in their own art forms, and if that's what people like, I don't have a legit reason why they shouldn't feel so.

I'd rather just retrack and play it better than go back and slice up my waveforms and time-align them. It's faster and sounds better.

Yup, I've been convinced here that this is the likely possibility and I changed what I'm doing to make sure I don't skip attempting everything with the natural takes. This is my 2nd song that I've recorded, and I didn't really shift anything around on the first one b/c I didn't know that could even possibly work.

I don't see it as much of a question of cheating (though that's one aspect of it) as it is a question of musicality and servicing the music.

How can anyone be doing a service to the music when they consider the actual execution of it to be of minimal importance?

You are a songwriter and producer, which is very cool. Have you ever considered that the best way to get your songs to shine would be to produce them by having another musician perform them who not only cares about the performance end of it, but can enhance it by actually delivering a meaningful performance in the studio?

Melodyne-ing your babysitter's voice can make a version of "Rock-a-bye Baby" that will put your baby daughter to sleep. Get Aretha Franklin to sing it and every kid, parent and tweenie in your neighborhood will be wanting it on their mePods.


Again, if it sounds robotic or less good in any shape or form, I am not going to use it! My point is that I won't forbid it simply because some people seem to care how it was made. This may be purely theoretical. If autotune makes everything sound worse, then I will never use it (I am using autotune as a general term, since I don't even own actual Autotune)!

There's nothing wrong with caring about that, but I don't personally care. Actually, I do care but in the other direction. I would feel bad if I lowered my musical quality simply to satisfy my ego that I did it all in # takes w/o touching anything. In the future, surgeons will use more precise robot helpers. As long as they don't turn on humans during my surgery and actually do a better job, I welcome them. I have no doubt that there will be some skilled surgeons out there who scoff at the robot help simply b/c they are concerned about their own hand skills, their students' hand skills, and even how fast they can suture on their own. But as the patient I won't care how it's done as much as what is done. Sorry, weird and hopefully last analogy on my part.

And I have often considered getting others to perform my music, not for guitar, which I really am good enough on, but for singing. Problem is that working with others has not worked out. I don't rule it out in the future, but my reason for doing this isn't to satisfy that the music was played w/o offline intervention. It would only be to make the music better.

How do you guys feel about using reverb? Are you concerned that the people whose voices are enhanced by this artificial method are pretending to be in places/rooms they really aren't? What about compression, which increases the volume of notes played too softly and vice versa? As long as it is just a touch, right? Where's the line? I guarantee it is changing now and will continue to change. You have your own levels of acoustic enhancement you find acceptable, and others may have no line at all. I completely understand if autotune turns you off to music b/c you have your very own reasons for listening, but there is no absolute wrong or right here, and I think you'll find people letting their guard down to new types of FX just as it has always been.
 
By the way, it would be interesting to know if there were egg tempera purists who derided Da Vinci for using the then-new oil painting methods. One could argue that it takes more skill to paint beautiful pieces with tempera. And the tempera painters would be even more awesome if they used their hands and not brushes, but it's up to you if you care about that.

I'm not being sarcastic here, and I think you all have a very good point, which I would put like this: Some people like to know that the music they listen to is performed in the same manner that they perceive it to be. I'm just letting you know that some people either don't care, or don't even perceive music as in performance mode, at least for some songs or styles. I remember in the early 90's my friends and I would criticize music that had overdubbed guitars b/c we cared about that. I also remember how disappointed I was to hear that Boston was really just Tom Scholz. For some reason I don't care about that anymore, and I think it is because the culture changed. I am so glad it did, b/c I can enjoy a lot more music now.
 
Meshuggah is a band with one of the best metal drummers in the world and pretty much the captain of the band, and yet for their third (I think) album they used drum sampling software. I've always been confused as to why, but I believe it is because he wanted to use a new method to inspire new beats (assuming again).

I'm ignoring the rest of your post because obviously you aren't prepared to reconsider and I don't feel like wasting my time. I don't consider using a quantization plugin to fix rhythmic mistakes or an autotune plugin to fix pitch mistakes as using an "effect" in the same way as using reverb to make a part sound more distant in a mix, simply because reverb doesn't alter the fundamental performance, but your mileage (obviously) may vary.

This, however, I can address.

It wasn't their third album by a long shot - I think (but don't remember for sure) that it was Catch-33 where they used Drumkit from Hell samples. Two things are worth noting here - first, the drum parts weren't sequenced, but were actually recorded using drum triggers, and then "filled in" with samples.

Second, the samples on Drumkit from Hell were recorded, as the Toontrack website is proud to point out, by Thomas Haake of Meshuggah. ;)

So, yes, they used sampled drums. They used sampled drums on midi tracks recorded using triggers by their actual drummer, and samples OF their actual drummer. If I had to speculate, it was probably partly inspired by the fact that Meshuggah was recording their guitars direct at this time, and partly because, Meshuggah being Meshuggah, they thought the idea of their drummer performing his parts by using samples of himself was hilarious. ;)
 
I'm ignoring the rest of your post because obviously you aren't prepared to reconsider and I don't feel like wasting my time.

Great, well I guess I didn't convince you either. Thanks for the debate anyways.

Update: I was hoping to do an AB comparison for those who don't automatically reject the idea of moving audio around for better rhythm. Unfortunately, the software I use SUCKS at tempo alignment. I tried for both vox and guitar and it was just awful. For those who cautioned about ruining the natural feel, you were right on...I think. After choosing my best takes, I couldn't perceive any timing issues to begin with so I couldn't really manually fix up anything by ear. So I tried visually, which was what this thread was about to begin with, but it only made the rhythm worse.

So, if anyone was interested in my original question regarding where to visually align waveforms, here is my conclusion: It's relative to the given event and thus impossible to tell visually. As obvious as this sounds to others, it was worth testing for me since it was not a given that the amplitude peak wouldn't work.

So I'm not opposed to tightening things up theoretically, but I am practically. It seems Vocalign gets really good reviews for this sort of thing, but it is too expensive for me to try out just to satisfy my curiosity. I'm left wondering if the capability is out there and what flexibility these programs have to address the sterility of time correction.
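For the curious: my understanding (an assumption on my part, not how VocAlign actually works internally) is that alignment tools of this kind compare amplitude envelopes rather than single peaks. A toy version of the idea in Python/NumPy, cross-correlating two takes' envelopes to estimate the lag between them:

```python
import numpy as np

def estimate_lag(reference, take, frame=4):
    """Estimate, in samples, how far `take` trails `reference` by
    cross-correlating smoothed amplitude envelopes."""
    def envelope(x):
        kernel = np.ones(frame) / frame          # crude moving-average smoother
        return np.convolve(np.abs(x), kernel, mode="same")

    corr = np.correlate(envelope(reference), envelope(take), mode="full")
    # re-centre so 0 means "already aligned"
    return int(np.argmax(corr)) - (len(take) - 1)

# A reference burst, and the same burst arriving 5 samples late.
ref = np.zeros(50)
ref[10:20] = 1.0
late = np.roll(ref, 5)
print(estimate_lag(ref, late))  # -5: nudge the late take 5 samples earlier
```

A negative result means the take should be nudged earlier. On real material the envelopes are far messier than this clean burst, which is presumably where the commercial tools earn their money.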

Thanks again for all the advice,
K
 