What do you mean by that? Installing the ARM embedded toolchain on your local machine and compiling there?
Yes, sorry.
Ok, I can confirm that I can get stuff to build and load by full cross compiling. I see from your thread above that there’s a new Bela image with an updated CMake, so I’ll give that a shot and introduce myself to the world of distcc later in the week. If trying to build is just pissing you off and wasting time, I can just sling you the binaries I have for now.
Some caveats: the FluidTransient(Slice) objects and their Buffer processing cousins won’t currently build for ARM. Overall, too, performance is a bit shitty: I’m getting lots of underruns from the audio decomposition objects like FluidSines and FluidHPSS (which are the simplest to test in a Bela IDE context). My best guess is that it’s because Bela’s vector size is 16 samples, and something in our code is dealing poorly with that: I’ll confirm by profiling on my machine with smaller vector sizes and see what I see.
The cross compiling dance isn’t too awful (on recent macOS):
- You need arm-linux-gnueabihf-binutils and llvm installed from Homebrew.
- To get a viable sysroot, I followed the lead of this post and rsync’d a bunch of folders over from the Bela:
  - /usr/lib/arm-linux-gnueabihf
  - /usr/lib/gcc/arm-linux-gnueabihf
  - /usr/include
  - /lib/arm-linux-gnueabihf
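In shell terms, that sysroot step is roughly this (assuming the stock root@bela.local login and a ~/bela-sysroot destination; adjust both to taste):

```shell
# Mirror the folders the cross compiler needs into a local sysroot.
# -R (--relative) recreates the full source path under the destination,
# so /usr/include on the Bela lands at ~/bela-sysroot/usr/include.
mkdir -p ~/bela-sysroot
for dir in /usr/lib/arm-linux-gnueabihf \
           /usr/lib/gcc/arm-linux-gnueabihf \
           /usr/include \
           /lib/arm-linux-gnueabihf; do
  rsync -avR "root@bela.local:$dir" ~/bela-sysroot/
done
```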
- Then I mashed the keyboard based on haphazard googling and came up with a CMake toolchain file (which could definitely be more portable, and probably more correct in some ways). Here’s arm-linux-gnueabihf.cmake:
set(CMAKE_SYSTEM_NAME Linux)
set(CMAKE_SYSTEM_PROCESSOR arm)
set(triple arm-linux-gnueabihf)
set(gcc_toolchain /usr/local/Cellar/arm-linux-gnueabihf-binutils)
set(bela-sysroot /Users/owen/bela-sysroot)
set(CMAKE_SYSROOT ${bela-sysroot})
set(CMAKE_C_COMPILER clang)
set(CMAKE_C_COMPILER_TARGET ${triple})
set(CMAKE_CXX_COMPILER clang++)
set(CMAKE_CXX_COMPILER_TARGET ${triple})
set(flags "-I/usr/local/opt/llvm/include -isystem=${bela-sysroot}/usr/include/c++/6.3.0 -isystem=${bela-sysroot}/usr/include/c++/6.3.0/arm-linux-gnueabihf -B${bela-sysroot}/usr/lib/gcc/arm-linux-gnueabihf/6.3.0 -march=armv7-a -mtune=cortex-a8 -mfloat-abi=hard -mfpu=neon")
set(CMAKE_C_FLAGS ${flags} CACHE STRING "" FORCE)
set(CMAKE_CXX_FLAGS ${flags} CACHE STRING "" FORCE)
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_PACKAGE ONLY)
Then you add --toolchain <path to arm-linux-gnueabihf.cmake> to your CMake invocation. If it compiles but won’t link, that’s generally a sign that it hasn’t got the message about the different compiler / sysroot. Because a lot of these are cache variables, scrubbing the build folder or CMakeCache.txt and reconfiguring from clean can make debugging much less surprising. I updated my $PATH to have the llvm bin folder at the start, which would be nice to avoid, but isn’t the end of the world.
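For completeness, the configure/build invocation then looks something like this (paths are placeholders; --toolchain needs CMake 3.21+, older versions spell it -DCMAKE_TOOLCHAIN_FILE=...):

```shell
# Configure against the cross toolchain file, then build out of tree.
cmake -S . -B build-bela --toolchain ~/arm-linux-gnueabihf.cmake -DCMAKE_BUILD_TYPE=Release
cmake --build build-bela
```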
After that, it seems happy enough (modulo the caveats above: the transient objects should be disabled by deleting or renaming their CMakeLists files).
Coming back to this now finally!
I wouldn’t at all mind having a copy of your binaries just to try things out while I try to get my toolchain sorted.
If I compile for arm64 on Mac, will that work for Bela or not? If yes, I can provide some binaries!
sorry @jamesbradbury, should’ve said here that I DM’d an old bela build to @jarm to try out
no problem! if I had a Bela I would try, but alas I don’t
hiya, would it be possible to get my hands on a build (or build guide) for SuperCollider on Bela pls?
re @weefuzzy’s post on Jan 8, you can embiggen the blocksize on Bela so those underruns might not be a problem (plenty of SC Ugens don’t play nicely with a blocksize of 16)
Hi @markhanslip, and welcome
I’ll dm you the same old build I have knocking about – I’m in the middle of breaking everything at the moment, so not immediately well placed to try another.
The general steps to cross compile can be found in the second half of this post above:
Meanwhile, I don’t know if @jarm got anywhere with using distcc on the Bela itself in the end.
This is cromulent advice, thank you.
thanks so much @weefuzzy! It might be a couple of weeks before I get to it but I will be sure to document my pain when I do
Hi @weefuzzy, may I also ask you to DM me the bela build? Despite successfully updating cmake on the board I’m still struggling with the compilation issues… Thank you very much!
As an update to those watching this. Because we’ve bumped to C++17, it looks like I need a newer bela image than I have in order to build the newest flucoma stuff. I’ll try that in due course.
Hi, so thanks to the compiled Bela externals from @weefuzzy, it seems to be almost working. But I have been struggling with two crucial problems for several weeks now:
- using a source audio buffer larger than, say, 10 MB. I have tried different memSize server options, but nothing helps to prevent failure (“exit code 0”).
- running the following example code (which works on macOS without any issues) successfully on Bela. There is just a single 16-bit mono wav file in the bespoke folder (same one that works on my Mac).
s = Server.default;
s.options.blockSize = 256;
s.options.numInputBusChannels = 2;
s.options.numOutputBusChannels = 2;
s.options.numAnalogInChannels = 2;
s.options.numAnalogOutChannels = 2;
s.options.numDigitalChannels = 2;
s.options.numAudioBusChannels = 1024;
s.options.memSize = 8192 * 8;
s.waitForBoot{
    // 1. Load a folder of sounds
    ~folder_path = "/usr/share/SuperCollider/Extensions/FluCoMaBela/AudioFiles/16bit-merged/";
    ~loader = FluidLoadFolder(~folder_path);
    ~loader.play(s, { "loaded % soundfiles".format(~loader.index.size).postln });
    s.sync;

    // 2. Slice
    ~indices = Buffer(s);
    FluidBufNoveltySlice.processBlocking(s, ~loader.buffer, indices: ~indices, threshold: 0.5, action: {
        "% slices found".format(~indices.numFrames).postln;
        "average duration in seconds: %".format(~loader.buffer.duration / ~indices.numFrames).postln;
    });
    s.sync;

    // 3. Analyze
    fork{
        var feature_buf = Buffer(s);
        var stats_buf = Buffer(s);
        var point_buf = Buffer(s);
        ~ds = FluidDataSet(s);
        ~indices.loadToFloatArray(action: {
            arg fa;
            fa.doAdjacentPairs{
                arg start, end, i;
                var num = end - start;
                FluidBufMFCC.processBlocking(s, ~loader.buffer, start, num, features: feature_buf, numCoeffs: 13, startCoeff: 1);
                FluidBufStats.processBlocking(s, feature_buf, stats: stats_buf);
                FluidBufFlatten.processBlocking(s, stats_buf, numFrames: 1, destination: point_buf);
                ~ds.addPoint("slice-%".format(i), point_buf);
                if(i % 100 == 1, { s.sync });
                "% / % done".format(i + 1, ~indices.numFrames - 1).postln;
            };
            ~ds.print;
        });
    };
};
When it gets past slicing, I get this on Bela and the show is over:
70 slices found
average duration in seconds: 0.85736086815679
1 / 69 done
ERROR: bufnum -1 is invalid for global buffers
ERROR: bufnum -1 is invalid for global buffers
2 / 69 done
...
68 / 69 done
69 / 69 done
ERROR: bufnum -1 is invalid for global buffers
ERROR: bufnum -1 is invalid for global buffers
ERROR: bufnum -1 is invalid for global buffers
ERROR: bufnum -1 is invalid for global buffers
ERROR: bufnum -1 is invalid for global buffers
Server 'localhost' exited with exit code 0.
Any idea what the settings/issues could be, please?
The Bela image is v0.3.8e / SC 3.12.1.
Have you tried removing bits of the second script to see what you can get away with? Presumably the slicing works, but it’s the analysis loop which is causing the bufnum error. Which means maybe somewhere in that chain of FluidBufMFCC, FluidBufStats, FluidBufFlatten and then finally adding the point to the DataSet, some reference gets garbled for a Buffer, or maybe the server has a wobbly about it. I would try removing lines from the ~ds.addPoint upwards to see where it goes wrong exactly.
Ooh, this all sounds gnarly. In both cases, as @jamesbradbury pointed to, it will help to try and narrow down what’s going wrong.
For (1), perhaps doing some experiments without any Flucoma stuff at all will help isolate whether this is an us-problem or something else. For instance, do the built-in Buffer operations seem to work with arbitrarily large files? Does readAlloc work? Do commands that use the command thread work (e.g. copyData, gen)?
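As a concrete starting point, something like this exercises a big allocation and the command thread with no FluCoMa objects at all (the file path is a placeholder):

```supercollider
// FluCoMa-free stress test: read a large file, then copy it via the
// command thread. If this already dies, the problem isn't FluCoMa.
(
s.waitForBoot{
    ~big = Buffer.read(s, "/path/to/large-file.wav");
    s.sync;
    "read % frames".format(~big.numFrames).postln;
    ~copy = Buffer.alloc(s, ~big.numFrames, ~big.numChannels);
    s.sync;
    ~big.copyData(~copy); // copyData runs on the command thread
};
)
```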
For (2), I’m not sure what could cause the language to start using negative bufnums, but perhaps it’s running out of buffers? There’s a numBuffers set in ServerOptions. In any case, pinning down which Buffer is getting a negative bufnum and why would seem to be key.
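Two quick probes for that, sketched from memory (the values are arbitrary):

```supercollider
// Before boot: raise the buffer count (the default is 1024).
s.options.numBuffers = 4096;

// And inside the analysis loop, post each scratch buffer's bufnum
// before it is used, to catch whichever one is going negative:
"feature % stats % point %".format(
    feature_buf.bufnum, stats_buf.bufnum, point_buf.bufnum
).postln;
```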
And I guess it makes sense on a platform like Bela to maybe restrict or only have very few buffers to begin with…
Thank you very much for your replies! After some more investigations it still seems to me that there might be some issue with the FluCoMa Ugens rather than with buffer limitations due to Bela specs…?
(1) It is indeed possible to increase the number of buffers in the server options and so I tried to set it to 4096 without issues. It loads the file(s) into the buffer, syncs the server and everything seems fine. But the issue then appears when FluidBufNoveltySlice starts its job - this is where it throws the error “Server ‘localhost’ exited with exit code 0.” with a longer buffer (loading 1 mono wav file of 11.5 MB, 44.1 kHz, 16-bit). With a shorter sample (or a bunch of shorter files) it behaves OK though. But that’s only about 2 minutes of audio which is certainly not a very large corpus. Could there be a solution to this? 512MB of RAM might not be that much to work with, but still I was hoping for more.
(2) It seems that the “ERROR: bufnum -1 is invalid for global buffers” appears for some reason always when there is an attempt to do s.sync, no matter if it is after 1 or more cycles of the FluidBufMFCC - FluidBufStats - FluidBufFlatten chain. However, when I tried to do s.sync after a lower number of cycles (e.g. 10), it seems to fill and print the FluidDataSet in the end, regardless of the errors. Not sure what else can be done, but I’m still a bit worried about the error…
It occurs to me that with only 512MB of memory, you’ll probably need to be more parsimonious with the (seldom used) maximum sizes for certain parameters.
This is probably particularly true of novelty slice, because the kernel will need memory proportional to the square of the maximum kernel size. Try setting maxFFTSize
, and maxKernelSize
to whatever the fftsize and kernel size you need are (you seem to be using the defaults, so that would be 1024 and 3 respectively).
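If your build exposes those arguments, the call might look like this (argument names as in the current SC reference; an older build may differ or lack them, in which case this will just error):

```supercollider
FluidBufNoveltySlice.processBlocking(s, ~loader.buffer,
    indices: ~indices,
    threshold: 0.5,
    maxFFTSize: 1024,   // match the fftSize actually in use
    maxKernelSize: 3,   // match the kernelSize actually in use
    action: { "% slices found".format(~indices.numFrames).postln }
);
```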
The most recent versions get rid of the need to do this, but I still need to see if I can get a working C++17 build for Bela.
For the buffer -1 thing, I’m still scratching my head a bit. I’m pretty sure that’s our error message, but I’m still not sure how that -1 is happening. How much scope do you have for spying on things in the Bela – are you able to run sclang in the shell and use stuff like s.dumpOSC so we can look at what’s actually being sent up to the server? (I’m assuming you can’t use the IDE…)
Thanks. So I have tried different windowSize and kernelSize settings (maxFFTSize is not available with this build, I guess?), but got the same results. Also tried some other slicers, but I’m still getting the same error with longer files.
But I am able to get the OSC dump and it looks like this (with s.sync after each analysis cycle):
SC_AudioDriver: sample rate = 44100.000000, driver's block size = 256
SuperCollider 3 server ready.
Requested notification messages from server 'localhost'
localhost: server process's maxLogins (1) matches with my options.
localhost: keeping clientID (0) as confirmed by server process.
Shared memory server interface initialized
[ "/b_alloc", 1, 2646673, 1, 0 ]
[ "#bundle", 1,
[ "/sync", 1007 ]
]
[ "/b_query", 1 ]
[ "/b_query", 1 ]
[ "#bundle", 1,
[ "/sync", 1008 ]
]
bufnum: 1
numFrames: 2646673
numChannels: 1
sampleRate: 44100.0
[ "/b_readChannel", 1, "/usr/share/SuperCollider/Extensions/FluCoMaBela/AudioFiles/16bit-merged/merged1min.wav", 0, -1, 0, 0, 0, DATA[20] ]
[ "#bundle", 1,
[ "/sync", 1009 ]
]
loaded 1 soundfiles
[ "/cmd", "FluidBufNoveltySlice/processNew", 0, 1, 0, -1, 0, -1, 0, 0, 3, 0.5, 1, 2, 1024, -1, -1, 1024, 3, 1, 1, DATA[40] ]
[ "#bundle", 1,
[ "/sync", 1010 ]
]
70 slices found
average duration in seconds: 0.85736086815679
[ "/cmd", "FluidBufNoveltySlice/free", 0 ]
[ "/cmd", "FluidDataSet/new", 1 ]
[ "/b_write", 0, "/tmp/-2093407472", "aiff", "float", -1, 0, 0, 0 ]
[ "#bundle", 1,
[ "/sync", 1011 ]
]
ERROR: bufnum -1 is invalid for global buffers
[ "/cmd", "FluidBufMFCC/processNew", 2, 1, 0, 3584, 0, -1, 2, 1, 13, 40, 1, 20, 20000, 13, 1024, -1, -1, 1024, 1, DATA[40] ]
[ "/cmd", "FluidBufStats/processNew", 3, 2, 0, -1, 0, -1, 3, 0, 0, 50, 100, -1, -1, 1, DATA[40] ]
[ "/cmd", "FluidBufFlatten/processNew", 4, 3, 0, 1, 0, -1, 4, 1, 1, DATA[40] ]
[ "/cmd", "FluidDataSet/addPoint", 1, "slice-0", 4 ]
[ "#bundle", 1,
[ "/sync", 1012 ]
]
[ "/cmd", "FluidBufMFCC/free", 2 ]
[ "/cmd", "FluidBufStats/free", 3 ]
[ "/cmd", "FluidBufFlatten/free", 4 ]
1 / 69 done
[ "/cmd", "FluidBufMFCC/processNew", 5, 1, 3584, 66048, 0, -1, 2, 1, 13, 40, 1, 20, 20000, 13, 1024, -1, -1, 1024, 1, DATA[40] ]
ERROR: bufnum -1 is invalid for global buffers
etc.
Looking at the OSC messages, it seems that the “bufnum -1” error might be coming from the unused weights argument in FluidBufStats, more specifically from this line in the class definition (in case the weights argument is nil):
weights = weights ? -1;
So I tried to put some real buffer filled with zeroes in the argument just to see what happens. And indeed, the error is gone. This is how the code looks now:
FluidBufMFCC.processBlocking(s,~loader.buffer,start,num,features:feature_buf,numCoeffs:13,startCoeff:1);
s.sync;
weights_buf = Buffer.alloc(s, numFrames: feature_buf.numFrames, numChannels: 1);
FluidBufStats.processBlocking(s,feature_buf,stats:stats_buf, weights: weights_buf);
(Not sure if this is the right way to proceed though and if a buffer full of zeroes will give good results.)
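A guess on that last worry, not verified against that build: if the weights buffer really is per-frame weights for the statistics, all zeros seems likely to make the weighted stats degenerate, whereas all ones should be the neutral choice (and ought to match your unweighted results on the Mac):

```supercollider
// Fill the weights buffer with ones so every frame counts equally.
weights_buf = Buffer.alloc(s, feature_buf.numFrames, 1);
s.sync;
weights_buf.fill(0, weights_buf.numFrames, 1.0);
```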