Hi, any idea about this? There was one in MEAPsoft which wasn't 100% accurate, but acceptable and fun. A fantastic weekend to all of you.
There’s nothing yet which would directly help with beat discovery and synchrony. I’m planning to have a poke at this, and if it seems nice, maybe port it:
If you’re happy with python, it’d be interesting to hear if it works for your purposes.
That looks good; it seems to provide real-time and non-real-time modes (if that's what they mean by online/offline). I'm happy with Python, yes.
I’m interested in using this as well. Wondering if either of you got it running in “online” mode. I don’t see any simple Python functions for that in the GitHub repo, but perhaps it can process an audio stream rather than just a file?
EDIT: “online” mode seems just to change the algorithm, and I don’t see any way to feed an audio stream directly into this project. Maybe I can muck around in the Python and change that, though.
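For anyone wanting to muck around: the usual workaround is to buffer the live input yourself and hand fixed-size, hop-spaced frames to whatever per-frame function the code exposes. A minimal sketch of that pattern (NumPy only; the class name and sizes are mine, not from the project):

```python
import numpy as np

class StreamFramer:
    """Buffer incoming audio chunks of arbitrary size and emit
    fixed-size, hop-spaced frames, so a frame-wise ('online')
    algorithm can be fed from a live stream instead of a file."""

    def __init__(self, frame_size=2048, hop_size=512):
        self.frame_size = frame_size
        self.hop_size = hop_size
        self.buffer = np.zeros(0, dtype=np.float32)

    def push(self, chunk):
        """Append a chunk of samples; return every complete frame now available."""
        self.buffer = np.concatenate(
            [self.buffer, np.asarray(chunk, dtype=np.float32)])
        frames = []
        while len(self.buffer) >= self.frame_size:
            frames.append(self.buffer[:self.frame_size].copy())
            # Advance by the hop, keeping the overlap for the next frame.
            self.buffer = self.buffer[self.hop_size:]
        return frames
```

A soundcard callback (e.g. from `sounddevice`) could push each incoming block into this and pass the returned frames to the algorithm's per-frame step.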
Also wondering whether FluCoMa actually has the functions needed to implement this? Would love to try my hand at it (unless it’s impossible as it currently stands lol)
Hi @davispolito and welcome
Yeah, it’s pretty standard for research code, especially in Python / Matlab, not to really deal with actual audio processing, which is a shame for us of course. ‘Online’ then just means ‘amenable to processing streams of data’, which is a necessary part of doing real-time processing, but not the whole story.
No, FluCoMa doesn’t yet have the wherewithal to start implementing parts of this scheme. I plan to play with the Python a bit first and gauge whether it’s worth the time investment. In particular, I want to see if we can replace the big scary LSTM network with something simpler, and just use the particle filter from the paper (and still get useful / interesting behaviour).
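For a sense of what “just the particle filter” might look like: here’s a minimal bootstrap particle filter tracking a slowly drifting beat period from noisy inter-onset intervals. This is a generic sketch, not the paper’s algorithm; all names, parameters, and noise models are my own assumptions:

```python
import numpy as np

def particle_filter_tempo(iois, n_particles=500, process_noise=0.01,
                          obs_noise=0.05, seed=0):
    """Estimate a drifting beat period (seconds) from noisy
    inter-onset intervals with a bootstrap particle filter."""
    rng = np.random.default_rng(seed)
    # Initialise particles uniformly over a plausible period range.
    particles = rng.uniform(0.3, 1.0, n_particles)
    weights = np.full(n_particles, 1.0 / n_particles)
    estimates = []
    for ioi in iois:
        # Predict: let each particle's period drift a little.
        particles = particles + rng.normal(0.0, process_noise, n_particles)
        # Update: weight particles by how well they explain the observed IOI
        # under a Gaussian observation model.
        weights = weights * np.exp(-0.5 * ((ioi - particles) / obs_noise) ** 2)
        weights += 1e-300               # guard against all-zero weights
        weights /= weights.sum()
        estimates.append(float(np.sum(weights * particles)))
        # Resample when the effective sample size collapses.
        if 1.0 / np.sum(weights ** 2) < n_particles / 2:
            idx = rng.choice(n_particles, size=n_particles, p=weights)
            particles = particles[idx]
            weights = np.full(n_particles, 1.0 / n_particles)
    return estimates
```

In a real beat tracker the state would also carry phase, and the observations would come from an onset-strength function rather than clean IOIs, but the predict/weight/resample loop is the core of the idea.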