I’m posting to describe a project I’m working on in collaboration with @amgum. We want to “productionize” an ML workflow using the FluCoMa libraries, i.e. create a continuous training pipeline that runs in the cloud. The goal is to provide a recipe, infrastructure included, that others can share and reuse for research purposes. It’s early, but I think the workflow will involve analyzing a corpus of sound in Google Colab and then providing a proof of concept that “ships it” using a platform like Vertex AI (when new sounds are uploaded, the training workflow is triggered). The full pipeline is still a bit speculative, and we’re reducing scope to keep it realistic and feasible: a small-data version of something that could scale to big data. We would probably use @jamesbradbury’s Python bindings to bring FluCoMa into the more traditional Python-based data science ecosystem. We could use some help with the technical scoping.
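To make the upload-triggers-training idea a bit more concrete, here is a minimal sketch of the gating logic I imagine sitting between uploads and the training job. Everything here is an assumption: the event shape is loosely modelled on storage-upload notifications, it is not a real Vertex AI or GCS API, and `shouldRetrain` is a hypothetical helper.

```typescript
// Hypothetical shape of a storage-upload event (names are assumptions,
// loosely modelled on cloud storage notifications, not a real API).
interface UploadEvent {
  bucket: string;
  name: string;        // object path, e.g. "corpus/bird-song-042.wav"
  timeCreated: string; // ISO timestamp
}

// Decide whether a batch of uploads should kick off retraining.
// Only retrain when enough *audio* files have arrived, so stray
// metadata or sidecar uploads don't burn training compute.
function shouldRetrain(events: UploadEvent[], minNewSounds = 5): boolean {
  const audio = events.filter((e) => /\.(wav|aif+|flac|mp3)$/i.test(e.name));
  return audio.length >= minNewSounds;
}
```

The point of keeping this as a pure function is that the batching/debounce policy can be unit-tested without any cloud infrastructure at all.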
@amgum is a data scientist and I work in DevOps, and we’re collaborating within the structure of a professional peer mentorship. We want our work to serve as an example to others: in addition to sharing the notebooks and infrastructure code, part of this project is to reflect on and showcase what makes, for us, a successful collaboration between data science and DevOps. @weefuzzy shared this video with me, which might serve as orientation.
This is a learning experiment for both of us. I’ve never done full-blown MLOps, and @amgum has mainly worked with financial data and the like, not spectral time-series data. The specific inspiration was Alice Eldridge’s demonstration of her analysis of rainforest sounds.
I hope that’s enough context. I’m eager to share this with the FluCoMa community and hope we can get your support and encouragement.
Right now, specifically, we’re looking for public datasets (on Kaggle or elsewhere) of both sounds and derived spectral data from the natural environment, to get our bearings on the data itself. If anybody has pointers, would like to help, or wants to know more, please get in touch in the thread or by DM.
This is great news. @tedmoore has done some back-and-forth between Python and FluCoMa, and @rodrigo.constanzo and Jordi Shier have something very cool coming very soon: using PyTorch to optimise and train networks, from FluCoMa-made datasets towards FluCoMa MLPs.
Looking forward to hearing/seeing what is going to happen and thanks for sharing!
I remember, towards the start of FluCoMa, that either you (@tremblap) or perhaps Hans (@tutschku) was really keen on the idea of something like this that kept an up-to-date analysis of all your audio samples: each time you added to the collection, it would get re-analyzed automatically overnight and be available the next day.
I was reminded of the usefulness of that again with this post.
Would Node bindings be useful? Or would you rather keep it in Python? I ask because I am working on native bindings for both, and I am learning a lot about pybind / napi in the process, but it’s probably too much effort to maintain both. I wonder which would be most useful, because it would be cool to support this project.
Could be a fun “proof of concept” for making lower-level language bindings. A small daemon that runs and is pointed to various folders, constantly updating a database which you can export to a dataset at any time.
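To sketch what that daemon’s core bookkeeping might look like (all names here are hypothetical; the real thing would wrap this in filesystem watching and FluCoMa analysis calls): keep a manifest of path → mtime, and diff it on each pass to decide which files need (re-)analysis and which rows to purge from the database.

```typescript
// A manifest maps file path -> modification time (ms). Comparing the
// previous pass's manifest against the current one tells the daemon
// which files are stale (new or modified, so re-analyse them) and
// which were removed (so drop their rows from the dataset).
type Manifest = Record<string, number>;

function diffManifest(previous: Manifest, current: Manifest) {
  const stale: string[] = [];   // new or modified -> re-analyse
  const removed: string[] = []; // deleted -> purge from the dataset
  for (const [path, mtime] of Object.entries(current)) {
    if (previous[path] === undefined || previous[path] !== mtime) {
      stale.push(path);
    }
  }
  for (const path of Object.keys(previous)) {
    if (current[path] === undefined) removed.push(path);
  }
  return { stale, removed };
}
```

Keeping the diff pure like this means the expensive part (analysis) only ever runs on the `stale` list, which is what makes the “re-analyzed overnight” idea cheap for large sample libraries.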
At the moment, I would only use the Python bindings and I think there would be more users of that. What are you thinking with the Node bindings? Embedding FluCoMa stuff in websites?
Is what you’re working on a rework of the CLI tools, or also the FluCoMa core?
I have a personal interest in using Node, mostly because TypeScript is where I live now for scripting. I got frustrated with Python tooling, though it is getting better with uv. [bun](https://bun.com/) is very, very good, and I’ve deployed big apps on that platform (e.g. I worked on the new Bela IDE, which uses Bun and TS). I also think a TS backend lends itself to more ergonomic and safe code, because you can write your frontend and backend together and share types between them. I really like types now, and Python’s have always felt too weak to me, though Python does have the massive upper hand in ML workflows. YMMV; that’s just my experience. I also think the semantics around async are better than in Python, where async has never really made sense to me beyond exploiting multiprocessing. Plus, in a kind of silly way, I like the idea of running node.script in Max with FluCoMa.
I think once stable Node bindings exist, it’d make sense to look at porting to Python. Some interface questions emerge when bending FluCoMa to text-based languages, but I think they can be overcome quite easily with glue code. For example, which of these is more ergonomic? For me it’s the first, but it requires some papier-mâché at some layer.
const normalised = new FluidNormalise(sourceDs).fit()
let targetDs = new FluidDataSet()
const throwaway = new FluidNormalise(sourceDs, targetDs)
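To show roughly what that glue layer could look like, here is a self-contained sketch. Every name in it is a stand-in, not the real bindings: a minimal `CoreNormalise` mimics the container-passing `(source, target)` shape of the second style, and a thin `Normalise` wrapper turns it into the fluent first style by creating the target container itself.

```typescript
// Stand-in dataset container (hypothetical, not the real FluidDataSet).
class DataSet {
  constructor(public points: number[][] = []) {}
}

// Container-passing "core", mirroring the (source, target) interface.
class CoreNormalise {
  constructor(private source: DataSet, private target: DataSet) {}
  process(): void {
    // Min-max normalise each column to [0, 1].
    const cols = this.source.points[0]?.length ?? 0;
    const mins = Array(cols).fill(Infinity);
    const maxs = Array(cols).fill(-Infinity);
    for (const p of this.source.points) {
      p.forEach((v, i) => {
        mins[i] = Math.min(mins[i], v);
        maxs[i] = Math.max(maxs[i], v);
      });
    }
    this.target.points = this.source.points.map((p) =>
      p.map((v, i) =>
        maxs[i] === mins[i] ? 0 : (v - mins[i]) / (maxs[i] - mins[i])));
  }
}

// The glue: an ergonomic wrapper that owns its throwaway target container,
// so the caller gets `new Normalise(sourceDs).fit()` back.
class Normalise {
  constructor(private source: DataSet) {}
  fit(): DataSet {
    const target = new DataSet();
    new CoreNormalise(this.source, target).process();
    return target;
  }
}
```

The wrapper costs a few lines per object, but it keeps the container-passing core intact underneath, which is what makes maintaining it across bindings plausible.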
Syntax-wise, I think we should learn from Python and SuperCollider here, and create objects that are functions with states. So you create a FluidNormaliser instance in a variable and then call its methods.
This is what I imagine, and what the above example tries to capture. Objects can create their own data storage when necessary, rather than us passing containers as arguments, which is faffy. For a fuller example, this is what I imagine (and what is actually implemented so far):
import { FluidDataSet, FluidNormalise, FluidUMAP } from "@flucoma/node"

const jsonPath = "~/data.json" // assume this is made up elsewhere for now
const sourceDs = new FluidDataSet().fromJSON(jsonPath)
const normalise = new FluidNormalise()
const normDs = normalise.fit(sourceDs)
const umap = new FluidUMAP({ components: 3, iterations: 10000 })
const embedding = await umap.fitAsync(normDs) // can use a promise here to do other things while we wait
I’m curious to hear more of the use-case for porting to Python. From my perspective, Python already has (all?) the tools that exist in FluCoMa. In some ways, FluCoMa is a “port” of librosa and scikit-learn to Max/SC/PD.
I suppose if you eventually want to use the analyses back in Max/SC/PD, and you want to be sure the same C++ code is doing the analysis in all those places, that would be a good reason?
Well, yeah, exactly. On the rare occasions I do use Python now, I can reach for a bunch of other stuff. I think the selling point would be parity across all of the environments. There are a lot of options in librosa, for example, that are confusing unless you know exactly what they do and have read all of the documentation and possibly some of the code.
Indeed, FluCoMa did a fair share of interface research to make sure its interface is more musicianly; otherwise, the same argument could be made of librosa relative to lower-level descriptor implementations…