I addressed that. If it's purely a matter of analog to digital to analog (i.e. direct monitoring), higher sample rates have the potential for lower latency: a buffer of a given number of samples spans half as much time at twice the rate. It's only when there's processing involved, especially if the processing requires a trip to the computer's CPU (rather than using processing onboard an interface, e.g. UAD), that higher sample rates have the potential for higher latency due to the greater amount of data. Interfaces that have onboard processing are generally optimized to handle audio at low latency, even when adding effects.
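To put numbers on the buffer point above, here's a minimal sketch. The 64-sample buffer size is just an illustrative (common low-latency) setting, not anything from the thread:

```python
# Latency contributed by one buffer of a fixed sample count:
# the same buffer covers half as much time at twice the sample rate.
def buffer_latency_ms(buffer_samples, sample_rate_hz):
    return 1000.0 * buffer_samples / sample_rate_hz

for rate in (44100, 88200, 176400):
    print(f"{rate} Hz: {buffer_latency_ms(64, rate):.3f} ms")
```

So a 64-sample buffer at 44.1 kHz is about 1.45 ms, and each doubling of the rate halves that figure, which is the "lower latency" potential being described.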
Hmm, well, the same number of samples at twice the sample rate does, yes, represent half as much real-time audio, but the reality is that how fast that data moves across the bus, into a buffer, and then to disk is fixed by the amount of data, not halved; i.e., it takes the *same* time that 2x as much real-time audio would require at 1/2 the sample rate. So, at 2x the sample rate, you are using 2x the bandwidth (maybe no extra time there), but moving the data to disk, even SSD, takes some multiple longer. Does it choke there? I don't know - lots of variables, but it could.
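The bandwidth side of that is easy to quantify. A sketch, assuming a hypothetical 16-track, 24-bit session (numbers chosen for illustration, not from the thread):

```python
# Raw (uncompressed) data rate for a multitrack session.
# Throughput scales linearly with sample rate: 2x the rate means
# 2x the data crossing the bus and hitting the disk per second.
def data_rate_mb_s(sample_rate_hz, bit_depth, channels):
    bytes_per_sample = bit_depth / 8
    return sample_rate_hz * bytes_per_sample * channels / 1e6

print(data_rate_mb_s(48000, 24, 16))   # 48 kHz baseline
print(data_rate_mb_s(192000, 24, 16))  # 192 kHz: 4x the throughput
```

Even at 4x the rate this is only a few MB/s, well within SSD bandwidth, so any choking is more likely in the driver/interrupt path than raw disk speed.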
Yes, if we assume direct monitoring, a whole lot goes out the window; but if there's any round trip, then the data on the bus, the driver work, and the app cycles all get multiplied by 2x - or 4x in the OP's case. The driver stack, in particular, gets very busy, and system interrupts increase at least proportionally.
If your point is that with hardware/firmware DSP in the interface and direct monitoring, you can dial up the sample rate, sure. But I'd start by going back to a (historically typical) lower sample rate and using the plugins appropriately, and then see what happens.