Glen either you or I am right on this. There's no wiggle room here.
I see lots of it. So far there is only one thing (other than this statement) you said that I disagree with.
What's happening here is a typical forest vs. trees perception issue. It is entirely possible for you to say correct things about the trees but overlook the larger nature of the forest. Neither the tree people nor the forest people are necessarily wrong within the scope of their points and perceptions, yet they don't agree upon the overall description of the whole thing.
Again, why is it possible on a pre to adjust the trim control and accommodate a huge range of inputs from mic levels to well beyond that? It doesn't compromise the performance.
The preamp is there on the mic side to accommodate a variety of microphones with a variety of differing output voltages and output impedances. Before that pre, the whole line-level continuity thing in the signal path does not yet exist; this is where it starts. On the line-in side of the circuit, the "pre" is a simple trim control, meant to a) accommodate natural fluctuations in average signal strengths from device to device (e.g. the previous in-line EQ setting wound up dropping the signal strength a bit, but it did not have an output gain control to bring things back up to snuff), and b) allow the user to take purposeful advantage, if they wish, of the differing personalities of the circuit "color" (e.g. I want to overdrive this pre because it sounds really k3wl to do so). It's called purposely gaming the gain structure, and it's a key element in quality audio engineering.
You know that already, of course; I'm just setting up how that's different from being forced to pad the signal down by 12dB just because you're throwing a device not designed to work at +4 into the signal path (the fact is, the Delta 66 is a consumer-level device at its core, the +4 switch notwithstanding). It's the odd man out.
Now, if you have no choice in the matter, then you have no choice in the matter. But there is no good reason why one's interface should be at -10 unless they are working strictly with consumer or gaming gear. So there is no good reason why one should have to adjust for it. The Delta 66 does not belong in a pro chain.
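For anyone following along, that "12dB" pad isn't an arbitrary figure; it falls out of the reference voltages behind the two standards (0.775V RMS for dBu, 1.0V RMS for dBV). A quick sketch of the arithmetic:

```python
import math

def dbu_to_volts(dbu):
    # dBu is referenced to 0.775 V RMS
    return 0.775 * 10 ** (dbu / 20)

def dbv_to_volts(dbv):
    # dBV is referenced to 1.0 V RMS
    return 10 ** (dbv / 20)

pro = dbu_to_volts(4)        # +4 dBu pro level, about 1.228 V RMS
consumer = dbv_to_volts(-10)  # -10 dBV consumer level, about 0.316 V RMS
gap_db = 20 * math.log10(pro / consumer)
print(f"+4 dBu = {pro:.3f} V, -10 dBV = {consumer:.3f} V, gap = {gap_db:.1f} dB")
```

It comes out to roughly 11.8dB, which is why everybody rounds it to "about 12dB."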
Does it hurt technically or electronically for it to be in there? No, not much, if at all. On that we do agree, and I have already stipulated to that. But try to explain gain strategy to Ethan - let alone a newb - when they have gear that's swinging voltages all over the place. As this thread shows, it makes it next to impossible to gain a holistic understanding. And *that* itself is, IMHO, damaging enough. I say that based upon my experience with these very folks learning all about that stuff, and that when they do finally get it, amazingly enough, their tracks and mixes improve significantly in quality.
But it goes beyond that, even for those like you and me who do get this stuff. Why should you or I have to even worry about what gear we have in line and adjust our gain structure so *arbitrarily* to account for it? KISS - Keep It Simple, Stupid. I don't want to have to worry about whether my interface thinks that 0VU is the same thing that the rest of my gear thinks it is. 0VU has *meaning*. And it only makes sense (to me anyway) that that meaning should hold the same all the way down the chain and not change with every piece of gear the signal passes through. This is not so much about +4dBu being the perfect signal voltage as it is about 0VU holding the same meaning.
0VU just happens to play out as a solid reference all the way through the digital mixdown. Not because the digital domain cares about the level. As long as we keep it between the ditches of noise and clipping, we're OK, sure. But when you have a conversion of something like 0VU=-18dBFS (just for example), that just so happens to be just about the sweet spot (give or take a fudge factor of a few dB to taste, of course) on the digital canvas where it's about as high as one can get above the digital floor to accommodate the bulk of the signal without digitally adding to the analog noise floor, while still leaving enough room for peaks to stretch out before clipping. And when you consider that the true RMS of most analog signals is actually probably some handful of dBs below 0VU, that means they will convert to an actual digital RMS somewhere a few dB below that conversion level. Still room at the bottom, even more room for peak crest factor. These converter calibrations were purposely picked *to make sense*.
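The mapping I'm describing is dead simple; here's a minimal sketch using that same example calibration (the -18 figure is just the illustrative number from above - actual converters vary, -20 to -14 is common):

```python
# Assumed calibration for illustration only: 0VU lines up with -18 dBFS.
CAL_0VU_DBFS = -18.0

def vu_to_dbfs(level_re_0vu_db):
    """Map an analog level, given in dB relative to 0VU, onto the digital scale."""
    return level_re_0vu_db + CAL_0VU_DBFS

# A signal whose true RMS averages 4 dB below 0VU converts to -22 dBFS...
track_rms_dbfs = vu_to_dbfs(-4)

# ...while 18 dB of room remains above 0VU before digital clipping at 0 dBFS.
peak_headroom_db = 0.0 - vu_to_dbfs(0)
```

Well clear of the noise floor at the bottom, plenty of crest-factor room at the top - that's the "between the ditches" picture in two lines of arithmetic.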
Then take those tracks to mixing; when the engineer knows all this stuff and knows enough to keep the general overall levels balanced more or less around this idea, two things happen: there's less work for him, because he has less to worry about in the way of signal management (keeping stuff between the ditches), and he can concentrate more on mixing the music unburdened (or at least less burdened) by such worries. It's a lot easier to keep the car on a rough and bumpy road when one picks the center lane to drive in.
And for most pop/rock/country/blues/etc. mixes (I'm not necessarily always including the extremes like metal/core or dance/trance), when they get to the mixdown/summing stage - digital or analog - the mixes that just naturally RMS out to somewhere around or just below the converter calibration level are usually the ones that wind up having the highest sonic quality *and* take to the mastering stage better. When one realizes that, one realizes that that level still carries a similar meaning, even after all the digital manipulation.
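Whether a mix "RMSes out" near that level is easy to check on the rendered samples. A minimal sketch (floating-point samples, full scale = 1.0):

```python
import math

def rms_dbfs(samples):
    """RMS level of a float signal (full scale = 1.0) in dBFS."""
    mean_square = sum(s * s for s in samples) / len(samples)
    return 10 * math.log10(mean_square)

# A full-scale sine RMSes out around -3 dBFS; scale it down
# and the RMS drops by exactly the same number of dB.
sine = [math.sin(2 * math.pi * k / 1000) for k in range(1000)]
quieter = [0.1 * s for s in sine]  # 20 dB down
```

Run that over a bounced mix and you can see at a glance whether the average is sitting in the neighborhood of the calibration level or riding way above it.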
So yeah, you guys are right when you make some localized technical points that padding a dozen dB here or gaming the digital input there does not in and of itself necessarily cause a mangling of the signal, but my counterpoints are three: a) there is no reason for it to be that way other than inefficient device design or operator error; b) understanding and working the system as an integrated drive train, with 0VU as the calibrating value from beginning to end (that does not mean that everything is recorded right at 0VU, only that it's a constant standard around which signal values should be judged), makes for a simpler, easier, and more elegant process; and c) it makes for a robust, holistic approach that by its very nature keeps the signal in a lane that winds up optimizing the eventual mix quality with a minimum of conscious concern - which is not only better for veterans, but helps the newbs grasp the whole thing and make better mixes by a noticeable amount.
Is it necessary to look at it that way? No. I'd bet there are a large number of veteran engineers who never gave it that much conscious thought, frankly. But I'd also bet that they never had to explain this stuff in this forum either, and it's a very effective way of explaining it to those that don't already have their pre-conceived ideas. It's like the four-dimensions approach to mixing. Many don't look at it that way, but that doesn't change the fact that they are indeed mixing in those four dimensions; the core truth of it remains even if they are not conscious of it.
The higher pro level voltages are conducive to the wider dynamic 'canvas' you speak of.
Unless you and I mean two different things by "canvas" (I'm referring to the 140-some-odd dB that a 32-bit recording affords us), they have nothing to do with each other. It's the converter that decides how any given input voltage maps onto the digital canvas; it's the converter that's the key to the whole thing.
G.