Regression for time series/controller data?

That is the beauty of autoencoders: they can be used for error reduction. This post by @weefuzzy points at a few references, but my favourites are these, in that order of explanation:


Thanks a lot!! I’ll go through these now. This website together with are pure gold; most of my current knowledge comes from one of the two…


This crossed my mind the other day: if the “memory” is just 10 steps, and each step is 1 ms (at the moment), then a lot will be missed out. This also includes the spare button stuff, as you’re seeing there. I guess I didn’t press all the buttons while setting it up, though I could have sworn I had.

That being said, the output does look “controller-y”, so that’s something!

That’s an interesting one, because if you filter out the “idle” stuff, you’re modelling back-to-back gestures all the time. I guess that’s what this model wants, expects, and can produce, but presumably it would then spit that out as well. Having another, simpler network that decides “play” vs. “don’t play” could be an interesting solution to that. It could be problematic with sparse states, though, or with changes in trajectory that span any given chunk.
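The “play” / “don’t play” idea doesn’t necessarily need a second network; as a baseline it could be a simple threshold on recent motion energy that flags idle segments before training. Here is a minimal sketch, where the window size and threshold are purely illustrative, not anything from the actual setup:

```python
# Hypothetical "play / don't play" gate: flag a frame as active when
# the mean absolute change over the last few frames exceeds a threshold.
from collections import deque


def make_activity_gate(window=5, threshold=0.1):
    """Return a per-sample function; all parameter values are illustrative."""
    history = deque(maxlen=window)

    def is_active(value):
        history.append(value)
        if len(history) < 2:
            return False  # not enough context yet: treat as idle
        vals = list(history)
        diffs = [abs(b - a) for a, b in zip(vals, vals[1:])]
        return sum(diffs) / len(diffs) > threshold

    return is_active


gate = make_activity_gate(window=5, threshold=0.1)
stream = [0.0, 0.0, 0.0, 0.5, 0.9, 0.4, 0.0, 0.0, 0.0, 0.0]
active = [gate(v) for v in stream]
# the gesture burst in the middle is flagged, the flat idle ends are not
```

A proper classifier would obviously handle subtler cases, but a gate like this already lets you drop the idle chunks so the model never learns to emit them.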

Is this particular “problem” unique to this approach (faking memory via chunking), or is it a problematic feature of autoregression in general? Like, if there were an ESN or LSTM, would the same be the case? Or does the feedback (or however it works) build that variability into the network structure?

It would likely go quite over my head, but it does sound promising.

I read a tonne of this guy’s material at the start of my PhD to learn more. Super good stuff


Hello @balintlaczko

I’m looking at your gest.record~ and I wonder if you’ve considered averaging, or taking the peak/trough of, all the values you receive within each period of your ‘time resolution’? Do I understand the current code correctly: it overwrites the given index until that index is incremented by the clocker?


Hello @tremblap, ages later… :smiley: Yes, as you probably noticed, it is a super primitive tool: it just writes value X to index Y. The averaging is an interesting idea; for example, if I have a time resolution of 100 but my incoming data arrives every 1 ms, it should average all 100 values. Almost like real-time downsampling? I will put it in the next release. :stuck_out_tongue:
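The averaging-per-period idea can be sketched offline as block averaging: collect every sample that lands within one time-resolution period and write the mean instead of the last-arrived value. A minimal sketch (function name and values are mine, not from gest.record~):

```python
# Hedged sketch of per-period averaging, i.e. downsampling by block mean.
def downsample_mean(samples, period):
    """Average consecutive blocks of `period` samples, e.g. period=100
    for 1 ms input and a time resolution of 100. Illustrative only."""
    out = []
    for i in range(0, len(samples), period):
        block = samples[i:i + period]
        out.append(sum(block) / len(block))
    return out


downsample_mean([1, 2, 3, 4, 5, 6], 3)  # → [2.0, 5.0]
```

In a real-time version the same thing becomes an accumulator that sums incoming values, then divides by the count and resets each time the clocker fires.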


Check the real speed of your input to start with, with [change] and then timing. Then you could use a running average or a running median filter to remove outliers…
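The running-median suggestion can be sketched like this; a lone spike gets outvoted by its neighbours once the window fills, which a running average would only smear out. Window size here is illustrative:

```python
# Sketch of a running median filter for knocking out single-sample
# outliers in a controller stream.
from collections import deque
from statistics import median


def running_median(samples, window=5):
    buf = deque(maxlen=window)
    out = []
    for s in samples:
        buf.append(s)
        out.append(median(buf))
    return out


# A lone spike at 99 never reaches the output once the window fills:
running_median([1, 1, 2, 99, 2, 1, 1], window=5)
```

The trade-off versus a running average is that the median preserves sharp but genuine jumps (e.g. a button press) while still rejecting one-sample glitches.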

fun stuff!