Regression for time series/controller data?

Oooh awesome!!!

I tried training it on a simplified version of my controller data (patch attached), taking just a 9d controller vector (2 x/y pairs, 1 analog trigger, and 4 binary buttons). I recorded 5000 data points more or less as they came out of the controller, and included my simple visualizer patch to watch what’s happening.

I got through the training stage of your patch (it took me 20min to train, with a final loss of 33), but in the end I wasn’t sure how to adapt the output. Looking back, I’m not sure I had the autoregressor set up correctly either, since I didn’t change anything in that subpatch. I probably should also have changed the @activations, because my data is between 0. and 1.

A couple of (noobish ML) questions.

The chunking you do in the subpatch — is that producing a chunk of 10 entries (out of the 5000)? So you’re training on 10 entries at a time, as a kind of time-series simplification? And would the shuffling that’s missing there be in order to break things up a bit?
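Just to check my own understanding, here’s how I’d sketch that chunking in Python. The array, the one-step input/target pairing, and the shuffle-by-window idea are all my guesses at what the subpatch is doing, not a transcription of it:

```python
import numpy as np

# Stand-in for my controller recording: 5000 frames of 9 values each.
rng = np.random.default_rng(0)
data = rng.random((5000, 9))

# One-step autoregression as I understand it: each frame is the input,
# the following frame is the target.
inputs, targets = data[:-1], data[1:]

# "Chunking": split the 4999 pairs into windows of 10 consecutive frames,
# then shuffle the *windows* (not the frames inside them), which would
# break up long-range order while keeping local continuity.
chunk = 10
n_chunks = len(inputs) // chunk                  # 499 full windows
idx = np.arange(n_chunks * chunk).reshape(n_chunks, chunk)
rng.shuffle(idx)                                 # reorder whole windows
order = idx.ravel()
inputs, targets = inputs[order], targets[order]
```

If that’s roughly right, shuffling whole windows rather than single frames would explain why the chunk size matters.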

Next, with the settings for fluid.mlpregressor~, the hidden layers seem really big to me. In the case of your data it goes 2->128->64->32->2, if I’m reading things right; in my case it would be 9 at the start and end. Is the high node count there just a technical thing, in terms of making an autoregressor work well?
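For a sense of scale, here’s a quick back-of-envelope on how many parameters those hidden-layer sizes imply for my 9-d case (the sizes are just the ones from the thread adapted to 9 in / 9 out; the count is plain weights-plus-biases, nothing FluCoMa-specific):

```python
# Layer sizes: 9 inputs, hidden layers 128/64/32, 9 outputs.
layers = [9, 128, 64, 32, 9]

# Each pair of adjacent layers contributes a weight matrix plus a bias vector.
params = sum(a * b + b for a, b in zip(layers, layers[1:]))
print(params)  # prints 11913
```

So it’s on the order of ~12k parameters for 5000 data points, which might be part of why training takes a while.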

And in a more vanilla technical sense, you’re limiting the data sampling to 50ms here. Is that just to reduce computation time, or is it about avoiding overfitting? Like, if I have really fast/wiggly gestures and want that level of detail/granularity, would that be lost? (In my test example I did just that and removed all the time clamps.)
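Here’s roughly what I assume the 50ms clamp amounts to — drop any controller reading that arrives less than 50ms after the last kept one (the timestamps and rate here are invented, just to show what a fast gesture loses):

```python
def clamp(samples, min_gap_ms=50):
    """samples: list of (time_ms, value); keep at most one reading per min_gap_ms."""
    kept, last = [], None
    for t, v in samples:
        if last is None or t - last >= min_gap_ms:
            kept.append((t, v))
            last = t
    return kept

# A fast wiggle sampled every 10ms for 200ms:
fast = [(i * 10, i) for i in range(20)]
print(len(clamp(fast)))  # prints 4 — only the readings at 0, 50, 100, 150 ms survive
```

Which is why I removed the time clamps in my test: 4 of 20 readings isn’t enough for wiggly gestures.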

And lastly, in terms of speed and timescales (this one is kind of aimed at @tremblap): I guess the smaller the training set, the more erratic or “inaccurate”(?) the autoregressor becomes, but also the faster it computes. I don’t know if it works this way at all, but can you sort of macro-chunk (probably not using that term correctly), where I take 30sec chunks of controller data and compute them in a just-in-time manner, as opposed to feeding it a single longer stream of data and having it take in the aggregate of what happened over that time?
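What I mean by macro-chunking, sketched very speculatively in Python — keep only the most recent 30 seconds of frames in a rolling buffer and retrain on just that, rather than on one long accumulated recording. The frame rate and `train_fn` are placeholders, not anything from the actual patch:

```python
from collections import deque

FRAMES_PER_SEC = 100                 # made-up controller rate
WINDOW_SECONDS = 30

# Old frames fall off the front automatically once the buffer is full.
buffer = deque(maxlen=FRAMES_PER_SEC * WINDOW_SECONDS)

def on_frame(frame):
    buffer.append(frame)

def train_just_in_time(train_fn):
    # train_fn stands in for whatever the existing training routine is;
    # it only ever sees the last 30 seconds of data.
    return train_fn(list(buffer))

for i in range(5000):                # simulate 50 seconds of input
    on_frame(i)
print(len(buffer))                   # prints 3000 — only the newest 30s remain
```

No idea if retraining that often is remotely feasible at these speeds, which is really the question.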

And in terms of optimization, I’m curious about the scale of it, since even if it’s 10x faster, that’s still 2min of pinwheeling.

Here’s my patch (which does nothing other than house the data and visualizer): (115.6 KB)
