Well...just to keep stirring the pot...
Sonar (versions 5 and 6, anyhow) will speed up chipmunk-wise and slow down (think Lurch) before it completely gives up.
Yeah...but it's not much different the other way. If your deck is able to chase/lock and you start messing with the DAW's speed, the deck will also attempt to keep up.
What's the difference?
I think a lot of the discussion about which way is "better" is based on assumptions that the deck is going to be spinning wildly and that the DAW will have to jump all around to keep up.
Maybe if you force the deck to operate wildly...but most do not. If they are under SMPTE control and have an internal timing mechanism, they are usually stable enough never to throw the DAW out of time.
Maybe I'm in the minority...but my deck has a built-in synchronizer...so it's not going to behave any differently whether it's acting as a slave or a master. The deck's synchronizer will keep it on time...therefore the DAW also stays on time, because the deck is on time.
Here are a few things I found on the Internet that are worth checking out and speak to these exact points.
I'll quote the relevant parts, but read the entire articles if you want them in context:
http://www.sweetwater.com/expert-center/techtips/d--03/28/2003
A high quality master clock can be introduced into most digital studios and DAW systems by simply connecting it to the clock inputs on the devices in question. For example, if your DAW's audio interface has a word clock input you could connect it to the output of a high quality clocking device and possibly achieve better sounding results through the reduction of jitter and other clock related anomalies. (This is a deep subject that goes well beyond this simple statement or this Tech Tip - we're just trying to lay some groundwork here.)
This injected clock will take over the sample rate of the digital recording, and thus control its timing. You are correct in observing that this creates a situation where a system potentially has two different sources for timing information: the newly injected word clock, and the timing information coming in via MTC from the MIDI interface/synchronizer. The rest really depends on how the software handles things, which means there are potentially as many answers to this question as there are programs available for digital audio recording. With this in mind there are a few things to think about:
In many cases applications have some method of switching hardware between internal clocking and external clocking. Whether you are using the clock coming in from the interface/synchronizer or the high quality word clock box you will need to have your system set up to listen to external clock. When this is the case the sample rate (read recording speed) of your system will usually be dictated by this clock, whether it comes in over the word clock jack or through some digital audio input.
Since this clock may not be resolved to the frame rate of the MTC, this brings up the question of what happens inside the software when they begin to drift from one another? Understand that the software will usually not know they are drifting. There is no address or location information in the word clock data so the only concrete information present to tell the software there is drift between the speed of the clock and the speed of the time code is if it compares the relative speed or phase (see WFTD Phase Lock) of the incoming signals. How your program may react to this if it even measures it is something that could be investigated. Under most normal circumstances this is not a major problem because any drift is likely to be minimal for relatively short durations of time, such as five minute songs, etc. Many software apps can reconcile the two in a way that is transparent to the user (they aren't really reconciled, but graphically it can be made to appear so, assuming the discrepancy is very small). Situations where it could be a problem include working with time code that has a lot of drift or changes of speed, such as an old (analog) tape machine. In those cases it is often better to just resolve the DAW to the inconsistencies of the tape machine while they are synchronized together. Situations where MIDI parts must play along with the digital audio files with very tight synchronization can reveal problems as well. In the case we're examining here, MIDI will be following MTC for speed, while the audio is following the external word clock.
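To put rough numbers on that "drift is likely to be minimal for a five minute song" claim, here's a back-of-the-envelope sketch. The ±50 ppm crystal tolerance is my own assumption (a common spec for consumer audio gear), not a figure from the article:

```python
# Rough estimate of drift between two unresolved clocks (e.g. the DAW's
# word clock vs. the deck's SMPTE/MTC stream).
# Assumption: clocks differ by a typical crystal tolerance of ~50 ppm.
def drift_samples(duration_s: float, sample_rate: int, ppm_offset: float) -> float:
    """Samples of drift accumulated when two clocks differ by ppm_offset."""
    return duration_s * sample_rate * ppm_offset / 1_000_000

# Five-minute song, 48 kHz, clocks 50 ppm apart:
samples = drift_samples(300, 48000, 50)   # 720.0 samples
ms = samples / 48000 * 1000               # 15 ms over the whole song
print(f"{samples:.0f} samples = {ms:.1f} ms of drift")
```

At roughly 15 ms over five minutes, you can see why many apps can paper over the discrepancy graphically, and also why a long reel on a drifty analog transport is a different story.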
http://www.gearslutz.com/board/so-much-gear-so-little-time/432886-smpte-reader-windows.html
You'll have to slave the 1010 to the VS2480 wordclock if you use the VS to resolve the SMPTE signal. Without a clock signal, Cubase will be freerunning. MTC and most other time code signals are used as jam sync, meaning it syncs to the time code only on start, and then continues following the clock source - typically the audio card. So the audio card must sync to the device that resolves the SMPTE signal.
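What "jam sync" means in practice can be sketched in a few lines. This is purely illustrative pseudologic under my own assumptions; the class and method names are mine, not any product's API:

```python
# Minimal jam-sync sketch: chase incoming timecode only to find the start
# position, then free-run against the audio clock's sample counter.
# All names and values here are illustrative assumptions, not a real API.
class JamSync:
    def __init__(self, sample_rate: int):
        self.sample_rate = sample_rate
        self.start_samples = None  # position locked from timecode at start

    def on_timecode(self, tc_seconds: float) -> None:
        # Only the first timecode reading is used; later frames are ignored.
        if self.start_samples is None:
            self.start_samples = int(tc_seconds * self.sample_rate)

    def position(self, samples_elapsed: int) -> float:
        # After lock, position comes from the audio clock, not from timecode.
        return (self.start_samples + samples_elapsed) / self.sample_rate

sync = JamSync(48000)
sync.on_timecode(10.0)       # lock to 00:00:10:00
sync.on_timecode(10.5)       # ignored: already locked
print(sync.position(48000))  # one second of audio clock later -> 11.0
```

This is why the audio card has to be slaved to whatever resolves the SMPTE: after the start, the timecode stream no longer steers playback at all.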
http://www.korromusic.com/docs/docs/docs2_assets/sync.pdf
Analog to Digital Sync
Synching an analog system to a digital system, either tape or a DAW, is conceptually the same as synching two analog machines. The two systems need to start at the same point, and they must agree on the playback speed.
As with analog machines, time code, such as SMPTE or MIDI Time Code (MTC), is used for communicating the start point. Continuous sync is achieved differently, though, because digital audio workstations have no tape motors to control; instead, word clock must be used to set the playback speed.
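The "word clock sets the playback speed" part boils down to simple arithmetic: a resolver derives a word clock proportional to the tape speed, so the DAW tracks the deck. The 0.1% speed error below is an illustrative assumption:

```python
# If the deck runs slightly fast, a resolver derives a proportionally
# faster word clock so the DAW's playback speed matches the tape.
# The 0.1% speed error is an illustrative assumption.
nominal_rate = 48000        # Hz, the session's nominal sample rate
tape_speed_ratio = 1.001    # deck running 0.1% fast
resolved_clock = nominal_rate * tape_speed_ratio
print(f"{resolved_clock:.1f} Hz")  # ~48048 Hz fed to the DAW interface
```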
And more from the Steinberg/Nuendo site:
http://www.steinbergusers.com/TechDocs/Nuendo4/NUENDO_Syncronization.pdf
**********
Again...I think there is much ado about nothing here *unless* you have one really whacked-out tape deck that is NOT capable of outputting solid SMPTE/MTC...and even that can probably be fixed with a box that corrects/regenerates the unstable code.
But other than that...if anyone can actually hear and/or visually see in the DAW specific audio anomalies caused by the deck being the master...please post them up, and those of us in opposition will stand down.
Like I said...I've done it both ways, so I'm not really in opposition...
…I just don't think it's a big deal in many cases, and in my own current situation it just seemed to work better with the deck as master. When I tried the deck as a slave...it was just too "jumpy".
Heck...any time I touched the DAW to move its playback cursor...the damn deck would keep jumping around trying to keep pace...which was a MAJOR PITA and a LOT of needless wear-n-tear on the deck's transport!
With the deck as master...you only move the tape when you need to.