WDM or ASIO??

Thread starter: abolit66
New member
Yesterday I killed a couple of hours experimenting with my Onyx, connected to my Athlon XP through the Onyx FireWire card, and Sonar 4, playing with ASIO and WDM drivers, trying various latency setups, etc.

I hooked my SM58 to an Onyx mic input, connected the outboard delay processor's input to Onyx AUX 1 and its output to Onyx stereo input 11/12, so I could record the vocal and the delay effect separately, on different tracks.

What I noticed was that when working with ASIO (starting at 2.5 ms latency), whenever I tried to adjust the Gain knob on the Onyx... or maybe just change anything in the mixer controls... the result was a sputtering sound, like somebody firing a machine gun. I closed Sonar... then started it again... didn't touch anything... and it looked like it was working smoothly again...
I reached for the Gain knob and... "Pf... Pf... Pf... Pf..." again!!!

Finally, I changed the driver to WDM... Everything started working fine: no "Pf", no sputtering, no "firing back" sounds...

Now I've got 48 kHz at 2.5 ms latency on my Onyx control panel... The driver option is WDM.

Does the ASIO driver need some special handling, some CPU watching in Task Manager, or any other kind of consideration?

Any comments on this, guys?
 
to use ASIO, you need ASIO-compliant devices.

some history on audio drivers... in the beginning of the PC madness there were the MS audio drivers, and techies wrote audio programs to use them via an MS API. the problem was the time it took the Windows OS to say things like "hello mister sound card - can you record from the line input please". thus the concept of latency was born, because this took time. the processors back then were also a lot slower.
so the enterprising folks at Steinberg found a way to bypass this using ASIO. i think the problem was in the kernel of the Windows OS, as it was originally developed for office apps rather than audio apps. with ASIO bypassing the kernel, the recording software could talk more directly to the sound card instead of waiting for a traffic cop to direct traffic.
MS, after a while, realised the problem, and the WDM spec resulted to address it - in effect a competitor to ASIO.
also, in the last few years, with faster buses, processors and general architectural improvements in the PC, latency is not the issue it once was. gradually i expect the need for ASIO will dim due to the adoption of WDM, which seems to work fine for lots of folks.
 
Manning,
Thanks for your input.
Is it possible that some soundcards designed for both ASIO and WDM could, in practice, work with, let's say, only ASIO and not work with WDM at all, or vice versa?
There must be so many factors involved.
As far as I know, Cubase uses its own ASIO drivers. What about Sonar 4?
I remember the time when, if I'm not mistaken, Sonar 2 would not support ASIO at all. It started using ASIO in Sonar 2.2, I guess.
What I noticed is that when Sonar utilizes ASIO, it does not like any interference or complications. From my experience, whenever I tried some tricks with tracks or some plug-in, ASIO would always hit back with surprises... mostly unpleasant ones :)
 
whether sonar or cubase - and most multitrack recording packages, in fact - the allowed drivers normally show up in preferences.
it sounds to me like your FireWire Onyx card (i don't know much about it, as it's new) is built to comply with the new MS WDM spec.
you should really ask Mackie about this. it would make sense.
maybe the Onyx FireWire card is not ASIO compliant,
hence the anomalies when trying to use ASIO with it.
remember, WDM is the new protocol, and maybe manufacturers are choosing it so a wider range of recording software can use it.
this is pure conjecture, however, but i suspect the Onyx doesn't work with ASIO.
 
nope - just checked. apparently the Onyx IS ASIO compliant!!!!
just google like i did. according to my searching, the Onyx needs an ASIO-compliant host. in essence this means that if your host isn't ASIO compatible, you must use WDM. peace.
 
as a matter of interest, what's your PC config?
just an idea... if it's slow, it might not be able to handle the data stream.
any conflicts in Device Manager?
do you have any other devices operating at the same time the Onyx is doing its thing?
 
Manning,

This is my setup:

ECS K7S5A Pro MOBO
AMD Athlon XP 2100+
512 MB Mushkin DDR 2100
IDE HD WD 60G (windows XP)
IDE HD WD 80G (Audio)
Mackie Onyx 1220 with I/O firewire card.

Windows XP (service pack 1a)

SONAR 4
SAMPLITUDE 7 PRO

The FireWire PC card is a Belkin (Texas Instruments chipset) inserted into slot #4, using IRQ 5 (no sharing).
 
Manning,
I have nothing in my PCI slots but the FireWire card, which does not share an IRQ with any other devices.
One more thing I forgot to mention when describing the hardware: my video card is an
NVIDIA GeForce4 MX440.
Could it cause any problems?

Also, in Device Manager my PCI FireWire card is shown under Network Adapters as "1394 Net Adapter"...
I'm not sure if that's right or not...

Thanks for your help!
 
if you're recording at 48k, try 44.1 at 16 bit and 24 bit.
tell me how it goes.
 
The Windows APIs for sound still have a fair amount of latency even with a WDM driver. The original was called MME, and was later joined, with DirectX, by a faster (but still too slow) mode called DirectSound. Microsoft now prefer to call the current equivalent of MME "Wave".

WDM is a driver architecture, not an API. Wave, DirectSound, ASIO, GSIF etc. are all APIs (the programmer's interface to the driver). A well-written WDM driver can support any and all of these APIs. An API can even be added to a WDM driver (e.g. ASIO4ALL).

The low-latency mode that Cakewalk exploit is WDM/KS, or Direct Kernel Streaming. This requires a WDM driver to work. Not all drivers are WDM; the E-mu 0404, 1212 etc. are not.

With WDM/KS there isn't an easy API for programmers to use; they have to provide the interface code themselves, but it is the most direct and fastest connection there is to the audio interface's kernel-mode driver. The others, such as ASIO and GSIF, are not far behind, as they are just an interface or adapter layer to the kernel driver.

Wave and DirectSound use Microsoft code and include stream mixers and sample-rate converters. The added stuff keeps their latency relatively high despite the fact that they too eventually get a direct path to the driver's kernel layer. Both add CPU load due to the extra work done in software, but their latency is now roughly equal (about 46 ms at 44.1 kHz). There are/were bugs/limitations/annoyances with these Microsoft sound APIs. Most were ironed out with XP.

Some WDM drivers exploit the fact that an API can be added by building their own Wave (MME) layer in the same manner as ASIO (RME?); this bypasses the Windows stuff and can give just as low a latency as anything else.
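To put some numbers on the latency figures above, here's a quick sketch (Python, purely illustrative) converting a driver buffer size into milliseconds; the 2048-sample buffer is an assumed value that happens to match the ~46 ms quoted for Wave/DirectSound:

```python
def buffer_latency_ms(buffer_samples, sample_rate_hz):
    """One-way latency contributed by a single audio buffer."""
    return 1000.0 * buffer_samples / sample_rate_hz

# The ~46 ms quoted for Wave/DirectSound corresponds to roughly
# a 2048-sample buffer at 44.1 kHz (assumed size, for illustration):
print(round(buffer_latency_ms(2048, 44100), 1))
```

The same arithmetic explains why a 2.5 ms setting at 48 kHz implies a buffer of only around 120 samples.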
 
FireWire is IEEE 1394, so your card is detected properly.
 
I kept playing with my system. Yesterday I opened Sonar, loaded one audio track, attached 3 plug-ins (reverb, compressor, EQ) and then cloned this track, starting at 5 tracks and increasing by one. What I finally got was 8 tracks with 3 plug-ins each, loaded at a CPU usage of about 72-75%! However, those heavily loaded tracks played back smoothly, without any noticeable clicks or distortion. The CPU usage meter in Sonar was still in the green sector.
From your experience, guys, what is the safe maximum level of CPU usage I can work with in Sonar without expecting accidental dropouts or some other system failure?
As I noticed, the CPU level jumps all the time while playing or recording, by approximately plus or minus 15%.
I'm sorry if my questions sound kind of stupid, but in past years I was just a bass guitar player. Now I have to play keyboard, and I'm even involved in writing some drum tracks on my Yamaha Motif 6. I'm trying to get everything recorded on my PC with the Mackie Onyx and that FireWire card. I'm glad I found this forum. It looks like many of you guys qualify as PRO level, and I appreciate any information coming from you.
God bless you all!

Peace.
 
A lot of that CPU will be the FX plug-ins. You need to start with no plug-ins at all to discover the baseline CPU usage. Just muting/bypassing them doesn't necessarily stop them being processed.
The lower the latency, the higher CPU usage gets, because each buffer refresh causes an interrupt to service the driver. Note that you don't really benefit from low latency unless you're using input monitoring while recording through effects, or playing a soft synth from a keyboard. For editing, mixing and simple recording, a super-low latency is no benefit at all.

As I said, it may be your plug-ins loading the CPU. But 70% isn't something I would want with such a modest track count. Test to see what the CPU is like with no plug-ins and optimise that with the latency slider. I find a buffer of 256 samples a good compromise between CPU and latency.
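The buffer/CPU trade-off above can be sketched numerically (an illustration with assumed values, not measured data): halving the buffer halves the latency but doubles the number of driver interrupts per second.

```python
SAMPLE_RATE = 44100  # Hz, assumed for illustration

# Smaller buffers -> lower latency, but more buffer refreshes
# (driver interrupts) per second, hence more CPU overhead.
for buffer_samples in (64, 128, 256, 512, 1024):
    latency_ms = 1000.0 * buffer_samples / SAMPLE_RATE
    interrupts_per_sec = SAMPLE_RATE / buffer_samples
    print(f"{buffer_samples:5d} samples -> {latency_ms:5.1f} ms latency, "
          f"{interrupts_per_sec:6.1f} buffer refreshes/s")
```

At the 256-sample compromise mentioned above, that works out to roughly 5.8 ms per buffer at 44.1 kHz.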
 
I see your point, Jim Y.
Latency would not really be a big deal if I did not work with MIDI.
The way I record my tracks: I record MIDI (piano, sax, strings, etc.) on my Yamaha Motif 6 first. Nothing but clean MIDI. Then I insert a new audio track and record the real bass guitar. When I record the bass I need to listen to my already recorded MIDI, and I would not like any latency at that point. Like I said, I'm mostly a one-plays-all performer, and I need to hear what's already written while recording the next audio track: bass, guitar, or vocal.
Maybe there are many ways to work around it...

I guess I'm gonna figure them out with your help.

Thanks!
 
I usually run my buffers at 128 when I need to do live processing while recording. I usually only use UAD-1 plug-ins, though. But running a UAD-1 plug-in at a 128 buffer has the same latency as a DirectX or VST plug-in at a 256 buffer.

If the program has delay compensation, though, it doesn't really matter while mixing. I usually throw the buffering up to 1024 or even 2048 if I need the extra CPU power.

Running at 64 buffers, or even 32, is very inconsistent and less efficient, even with the system running 0 plug-ins. And it's generally not needed to be that low. I've never met anyone who can tell the difference between 6 milliseconds and 3 milliseconds while they are recording.

Just remember that the delay is incurred on both input and output. If you're running 3 milliseconds, that's both in and out, so it is really 6 if you are running both. When you are just doing playback, it's only 3...
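That in-plus-out point can be put into a small sketch (assumed 128-sample buffers at 44.1 kHz, for illustration only):

```python
def one_way_latency_ms(buffer_samples, sample_rate_hz):
    """Latency of one leg (input OR output) of the audio path."""
    return 1000.0 * buffer_samples / sample_rate_hz

RATE = 44100
in_ms = one_way_latency_ms(128, RATE)   # input (recording) buffer
out_ms = one_way_latency_ms(128, RATE)  # output (playback) buffer

# Monitoring through the computer pays both legs;
# plain playback only pays the output leg.
print(f"round trip: {in_ms + out_ms:.1f} ms, playback only: {out_ms:.1f} ms")
```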

Danny
 
Oh, and BTW, I use ASIO just because that's what Steinberg uses, and that's generally all I use for multitracking.

Danny
 
One other point re driver performance. Cakewalk's WDM/KS is managed by the program, while ASIO is an external service. In Windows XP or 2000, you can get a marked difference in performance depending on the Windows performance setting:
System Properties / Advanced / Performance Settings / Advanced / Processor scheduling.
"Programs" for WDM/KS.
"Background services" for ASIO.

Rendering your MIDI tracks to audio shouldn't require low latency, not when the MIDI sounds come from external hardware. The software should know what the latency is and compensate for it in the time position of any new recording relative to the existing audio and MIDI tracks. However, there is always some latency that is not accounted for, from causes other than the interface driver's buffer. A MIDI synth itself will have a delay between receiving a note-on message and making the sound.

If you've found that subsequent takes are late by an amount of time that depends on the interface's buffer latency setting, then either the driver is reporting itself incorrectly or the program is misinterpreting it.
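That compensation can be sketched as a simple shift of the newly recorded audio; `compensate_take` below is a hypothetical toy function, not anything Sonar actually exposes:

```python
def compensate_take(recorded, latency_samples):
    """Drop the leading samples that are really the driver's reported
    round-trip delay, so the new take lines up with the existing tracks.
    (Toy model: a real DAW shifts the clip's start time instead.)"""
    return recorded[latency_samples:]

take = list(range(10))            # pretend recorded samples
aligned = compensate_take(take, 3)
print(aligned)                    # -> [3, 4, 5, 6, 7, 8, 9]
```

If the driver under-reports its latency, the leftover shift shows up as exactly the kind of late takes described above.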

One particular advantage of WDM/KS is the latency slider. Set it to minimum with the interface buffer already on minimum and run the profiler. Then you can freely move the slider up and down in increments of the smallest interface buffer setting. You may have to develop a strategy with this: raising the latency when you've mostly finished recording, in order to free up maximum CPU for plug-ins.

Some plug-ins can be very greedy. A good reverb can be particularly needy, and if at all possible it should be used on an aux send so that only one instance of it is needed.
 