Hi all,
I decided to mock up a quick “inspired by” version of samplebrain, a concatenative synthesis application by Aphex Twin and Dave Griffiths that you may have seen around over the last week or so. While cool, it’s not very hackable, and I saw an opportunity to recreate the guts of it using FluCoMa and FrameLib. You can get both of these from the package manager, which makes playing with it simple.
I posted a tweet with two videos demonstrating usage: https://twitter.com/james_bradbury_/status/1575823008346038272
You can also get the patch here: https://jamesbradbury.net/projects/sb/
Now… I think it’s also important to be cognisant of the limitations and the choices I made that differentiate this from the original. That matters both out of respect for their work and as a pointer to where you might explore further. From the patch itself:
The brain is fixed at the moment. The patch analyses both the target and the brain using MFCCs with 20 coefficients and a reasonable FFT setting of window size 4096 and hop size 1024 (samples). You can ignore the first MFCC coefficient (which correlates roughly with the perception of loudness) if you want matching, in theory, to be based purely on the timbre of the sound.
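If it helps to see that analysis stage spelled out, here is a rough Python sketch using librosa (purely illustrative and my own assumption; the patch does this with FluCoMa objects, not librosa):

```python
import librosa

def analyse(path, drop_first=False):
    """Analyse a file into MFCC frames, roughly as the patch does."""
    y, sr = librosa.load(path, sr=None, mono=True)
    # 20 MFCCs, 4096-sample window, 1024-sample hop, as in the patch.
    mfccs = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20,
                                 n_fft=4096, hop_length=1024)
    # Optionally discard the 0th coefficient, which tracks overall
    # loudness, so matching leans on timbre alone.
    return (mfccs[1:] if drop_first else mfccs).T  # (frames, coeffs)
```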
Variety increases the likelihood of picking matches other than the nearest. It is a percentage of all the possible matches, so 100% means any match is on the cards. If, for example, you set it to 10% with 1000 frames in the brain, it can pick from the nearest 100 matches. I think this is a bit like the “number of synapses” and “novelty” parameters of the original rolled into one.
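Here is a brute-force numpy sketch of how I think about that matching logic (the patch itself does this with FluCoMa and FrameLib, so treat this as an illustration of the idea, not the implementation):

```python
import numpy as np

def pick_match(target_frame, brain_frames, variety=0.0, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    # Distance from the target frame to every frame in the brain.
    dists = np.linalg.norm(brain_frames - target_frame, axis=1)
    # variety is a percentage of all possible matches: 100 means any
    # frame is on the cards, 10 restricts the draw to the nearest 10%.
    k = max(1, round(len(brain_frames) * variety / 100))
    return rng.choice(np.argsort(dists)[:k])  # index of the chosen frame
```

At 0% this always returns the single nearest frame; raising variety widens the pool it draws from at random.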
Dry/Wet controls how much of the target versus the brain output you hear in the mix.
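In signal terms that is just a crossfade; a linear version looks something like this (the exact fade law in the patch may differ):

```python
def dry_wet(target, brain_out, wet=0.5):
    # 0.0 is all target (dry), 1.0 is all brain output (wet).
    return (1.0 - wet) * target + wet * brain_out
```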
Envelope adds some spice by conforming the output of the brain to the target using a basic envelope follower. I think this goes quite a long way toward making the two match, or at least feel coherent.
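Sketched in the same spirit, assuming “basic” means a rectify-and-smooth follower with instant attack (the patch’s actual follower may well differ):

```python
import numpy as np

def envelope(x, release=0.999):
    # Basic envelope follower: instant attack, one-pole release
    # on the rectified signal.
    env = np.empty_like(x)
    level = 0.0
    for i, s in enumerate(np.abs(x)):
        level = s if s > level else level * release
        env[i] = level
    return env

def conform(brain_out, target, release=0.999, eps=1e-6):
    # Flatten the brain's own dynamics, then impose the target's
    # amplitude contour on it (assumes both signals are the same length).
    return brain_out * envelope(target, release) / (envelope(brain_out, release) + eps)
```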