Well...you're looking at it from the perspective that it will simply be a stepping stone..."training wheels"...and then people will move away from it and do the mixing manually once they've learned.
I see totally the opposite.
The people who could potentially use this simply as a "reference check", just to see what the plugin comes up with, and then actually do the work themselves...are most likely people who already know enough about mixing to not need this plugin.
The rest will use this as their crutch...because it is going to be convenient and fast...two of the main "features" that most recording newbs look for. Also...you need to first understand what you are looking at when you see the actions of the plugin. It certainly can be overwhelming to a newb...which is another reason for them to just accept the preset results.
Sure...they can sell it as some sort of "educational" tool...but very few people want to really get into the technical shit...to learn...to understand...to develop their skills anymore.
It's just about instant gratification...AND...the belief that computers somehow always know better, so if a programmed preset tells you this is the best it came up with, those people will accept it...or at most, punch up another preset.
I see a crutch as a good thing in this case, and I'll explain why. This particular crutch is going to start to fail the user. The AI isn't smart enough to mix an entire song, and it's going to act very inconsistently across a range of diverse mixes. As the user starts to realize this, they'll be forced to move beyond what it's capable of. It's not going to enable a lazy, unmotivated person to make a stellar mix.
I don't blame people for not wanting to learn this stuff. Some people are musicians, not engineers. But they are forced to reconcile their careers and incomes with the reality that musicians live in the same digital world everyone else does. That's almost like telling an engineer not to use BFD, Slate, or Addictive, and that they should go learn how to play the drums themselves if they need to shoot a track once in a while. I really do think that expecting a spoken-word voiceover talent, or a musician who only occasionally needs to track something, to learn the ins and outs of engineering is impractical.
A crutch is designed to help someone get on their feet and start doing ~something~ rather than nothing at all. A kid with mediocre natural balance needs those training wheels at first. To me, there is a very obvious limit to what the AI of this program, Neutron, can do.
Even for a career engineer, why are "fast" and "convenient" a problem? Take the Waves Greg Wells piano plugin. You have a knob. You turn it left, or right. I can hear a ton of shit going on when you turn it. High and low shelves rise and fall. Multi-band compression curves start to bend. Harmonic saturation and transient envelopes start to permeate the source. So...like...if most of that stuff is junk I'd be doing anyway, what the hell is the point in setting up 10 plugins across 4 parallel busses? The obvious answer is control. But at times the Waves all-in-one plugins fit the source so dang well, there isn't anything I'd want to change, even if I could. At this point, the conversation is pretty unrelated to the iZotope AI discussion, but even with programmed presets, at minimum, the user has to have the intuition to know when the preset even sounds good in the first place.
If the whole premise of your response was that laziness = sucks, then I absolutely agree. Hands down. But I guess I see more potential in that AI than in Toontrack's EZmix 2.
PS...if someone made an AI that could automatically cut, edit, crossfade, color code, label, gain-stage, route, and trim up a Pro Tools session, I'd eat that software for lunch!