Using baseplug and neuroflow, I cobbled together a (GUI-less) VST that, with the help of a neural network, controls two aspects of the sound (left and right volume) with only one parameter exposed to the host. Originally I tried to independently control the resonance and cutoff of a lowpass filter, to show that it really was two parameters changing under the hood rather than one "panning" control, but I turned out to be even worse at DSP than I remembered lol.

I think with some slight improvements this could streamline my VST dev; right now I spend a shocking amount of time just trying to get my parameter controls to feel intuitive. I'll note the code is still a mess, though. I couldn't figure out how to load the trained network from a file properly, so I literally just re-train the network every time you load the plugin lol.
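To make the idea a bit more concrete, here's a rough, stripped-down sketch of what the network is doing (this is not the actual plugin code; the layer sizes, activation choice, and training points are all made up for illustration):

```rust
// Sketch only: a tiny feed-forward net maps the single host parameter
// to two internal values (left gain, right gain).
use neuroflow::FeedForward;
use neuroflow::data::DataSet;
use neuroflow::activators::Type::Sigmoid;

fn main() {
    // 1 input (the exposed parameter), one small hidden layer, 2 outputs.
    let mut nn = FeedForward::new(&[1, 8, 2]);

    // Hand-picked training pairs: param value -> (left, right) volume.
    // In the real plugin this training happens on every load right now.
    let mut data: DataSet = DataSet::new();
    data.push(&[0.0], &[1.0, 0.0]);
    data.push(&[0.5], &[0.7, 0.7]);
    data.push(&[1.0], &[0.0, 1.0]);

    nn.activation(Sigmoid)
        .learning_rate(0.05)
        .train(&data, 20_000);

    // In the audio callback, the current parameter value gets fed through
    // the net and the two outputs are applied as per-channel gains.
    let param = 0.25;
    let gains = nn.calc(&[param]);
    println!("left gain: {:.3}, right gain: {:.3}", gains[0], gains[1]);
}
```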
Here’s a GitHub link to the code and some binaries: GitHub - audiodog301/mlplugin