Greetings! I’d love to use some FluCoMa tools on a Bela board and have a few questions about doing so.
Has anyone successfully built flucoma-core for the Bela platform using the ARM flags provided in the core repo? I don’t have experience compiling C++ code, so if there’s a hosted binary somewhere, that would be a helpful start.
Can the flucoma-pd objects be compiled for ARM chips as well, or would I need to use the C++ directly?
My hope is to run the FluCoMa objects in Pd on the Bela.
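For anyone following along, a build of flucoma-core would typically follow the standard CMake flow sketched below. This is only a generic outline; any ARM-specific flags are not shown here, and the repo’s README should be treated as the authoritative source:

```shell
# Generic CMake build sketch for flucoma-core (hypothetical; check the
# repo's README for the actual, up-to-date steps and any ARM flags).
git clone https://github.com/flucoma/flucoma-core.git
cd flucoma-core
mkdir -p build && cd build
cmake -DCMAKE_BUILD_TYPE=Release ..
make -j4
```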
Hello @muddywires - your question is very timely! We hope to push a revised readme in the next day or so, which should help you compile, and some people have indeed compiled for Pd on ARM. We don’t plan to provide 32-bit binaries, but the compilation readme is simple enough for (even) me to follow, and there are some pointers on GitHub that should help. I remember @weefuzzy sorting a few people out, so that should be there.
Let us know if you get stuck, probably over there as it is quite technical. If you could tell us about anything that is missing from the readme, that would be ace, as we want people like you and me to be able to compile. I say this knowing my colleagues on the project will remember with a smile how reluctant I was to compile the whole thing the first few times, hence the clear, approachable readme files.
I did once, long ago, try building the Pd objects for Bela just to see whether the build would succeed (so no actual testing). My memory is that it was doable but that there were some important steps along the way. I’ll see if I can find any record of this (it was a couple of computers ago, so maybe not).
Thanks @weefuzzy and @tremblap. I will keep an eye out for the update on GitHub and do a little homework on C++ compiler basics in the meantime.
I’m not sure how quickly I’d max out the Bela CPU, but I’m interested in building a realtime NMF/HPSS module, which I think would be quite novel and fun to use in the studio.
It would be great if something like this were possible, as it would be fun to make a standalone, super-low-latency version of some of my analysis/query stuff.
The Pd compile is on the autumn radar… it is not trivial since it needs the full refactor of the FluCoMa@Pd wrapping code, but it is definitely on our radar. When is your class running?
Hello all,
I haven’t forgotten about this – I’ve dug my Bela out, but need to update it (for which I need to find an SD card adaptor, etc etc) and then I’ll remind myself what the score is.
From returning memories, though:

- There are some objects that won’t work yet because they can’t yet be built for ARM.
- You might find that some real-time objects aren’t, um, real-time enough for Bela: some might perform OK, some might just fart.
- IIRC, I was able to build directly on the Bela (it comes with CMake etc.), but it was pretty slow (we make the compiler work hard), and in the end I got something working by cross-compiling on my laptop. Not such an issue if you just want the odd object and won’t be recompiling a lot.
As for the newer objects like KD Tree etc, these are untested propositions on Bela, and as @tremblap says, not yet available to the PD wrapper. But soon.
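Since cross-compiling came up above, here is a minimal sketch of what a CMake toolchain file targeting the Bela (a BeagleBone-based, ARMv7 hard-float board) might look like. The compiler triple and settings are assumptions based on common ARM Linux cross-toolchains, not taken from the FluCoMa build system:

```cmake
# Hypothetical bela-toolchain.cmake -- adjust the triple and paths for
# whichever cross-toolchain you actually have installed.
set(CMAKE_SYSTEM_NAME Linux)
set(CMAKE_SYSTEM_PROCESSOR arm)

# Assumes an arm-linux-gnueabihf cross-compiler is on your PATH.
set(CMAKE_C_COMPILER arm-linux-gnueabihf-gcc)
set(CMAKE_CXX_COMPILER arm-linux-gnueabihf-g++)

# Look for headers/libraries in the target environment, not the host.
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
```

You would then point CMake at it with `cmake -DCMAKE_TOOLCHAIN_FILE=bela-toolchain.cmake ..` before building as usual.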
Yeah… I think that plenty won’t be suitable for it; even simple processes seem to crunch the CPU (aside from the non-real-time ones). I might have a play around once I recover from the semester, now that it’s ‘summer’.
Installing SC extensions on Bela is still a bit of a mystery to me. I copied the compiled output over to Bela, but Bela isn’t recognising the plugins, throwing e.g. `exception in GraphDef_Recv: UGen 'FluidAmpSlice' not installed`.
Should I in fact have tried to build FluCoMa on Bela itself instead (using distcc)?
Me too, to be honest: what guidance I cobbled together for the readme was scraped from the Bela forums, but I definitely remember getting Pd externals to work. I think you’ll either need to compile on the Bela itself or set up a separate toolchain to cross-compile from your machine. The latter is more fiddly, but builds will be quicker.
I’m back at work next week, so I’ll dig out my Bela and see what I can get working.