FluCoMa and Modular Synths

I know at least @tremblap (and perhaps @spluta and @leafcutterjohn?) is using some FluCoMa stuff with modular synths, so I figured it’d be good to have a thread specific to that topic, with code/examples/etc…

The first bit of code I want to share is an Analysis.amxd I made (which is now part of confetti, which I’ve linked to already on the forum):

(the scope~s aren’t part of the device; they’re just there to show what’s going on)

I mainly made them to work in Max-land, so along the bottom are a bunch of different outputs, all signal rate, with the light grey ones being s&h values (driven by the onset detection).
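For anyone not in Max-land, the s&h behaviour is just this: hold the current descriptor value each time the onset detector fires, and keep holding it until the next onset. A minimal Python sketch of the idea (the function name and stream representation are mine, not anything from the device):

```python
def sample_and_hold(values, triggers):
    """Hold the most recent value sampled at each trigger.

    values and triggers are equal-length streams; triggers is 1 at an
    onset and 0 otherwise (like the output of an onset detector).
    """
    held = 0.0
    out = []
    for v, t in zip(values, triggers):
        if t:
            held = v   # sample on onset
        out.append(held)  # hold between onsets
    return out
```

In the actual device this all happens at signal rate, of course; the sketch just shows the logic.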

I’ll probably tweak the underlying code a little bit, but the core idea has been suuuper useful for my experiments.

This is my testing patch:

It uses the super useful AudioMix package for all the boring plumbing/mixer-y shit, but with this I have three analysis buses, which I can send out to my ES-8.

I’m also curious about the automatic scaling thing that @tremblap built/showed me a while back, since that would come in handy for certain things I’m building at the moment. (do you have a tidy/shareable version of that @tremblap?)
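For reference, an "automatic scaling" of that sort can be sketched as running min/max normalization of an incoming control stream, so whatever range the descriptor actually covers gets stretched to 0..1. This is my guess at the behaviour, not @tremblap's actual patch:

```python
class AutoScaler:
    """Adaptively rescale an incoming control stream to 0..1 by
    tracking the running minimum and maximum seen so far."""

    def __init__(self):
        self.lo = float("inf")
        self.hi = float("-inf")

    def __call__(self, x):
        # widen the known range as new values arrive
        self.lo = min(self.lo, x)
        self.hi = max(self.hi, x)
        if self.hi == self.lo:
            return 0.0  # only one value seen so far
        return (x - self.lo) / (self.hi - self.lo)
```

A real version would probably also want a reset and/or some slow decay of the bounds, so an early outlier doesn't squash the output forever.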

I also want to revisit the CV->NMF stuff I was playing with around the first plenary:

I’ve not played with it in a while, but I remember it working really well for ‘offline’ stuff; I never managed to get a real-time version working due to buffer sizes and downsampling funny-business.

Either way, curious what else others have been up to with this kind of thing.


Interesting post! Let me try to make my answer not too long… first, a few other people who have openly been interested in this in the plenaries are @tedmoore and @rdevine1

Secondly, I have to say that I’ve done a few things along those lines since 2009, but they usually end up as bits used ad hoc in a piece or a project, then shelved until I have access to more potent technology or until my brain catches up with what is available. They all circle the same idea, which came around when I was exploring descriptors as a way to control descriptor browsing. When working with CataRT at the time, and modifying it for sandbox#3, the fascinating Diemo Schwarz told me of a project from a few years before, where they used a robot to iterate through complex physical-modelling patches, describe them, and then navigate them in descriptor spaces… that set me on fire, and I made a first few attempts with modular then, not super successful. That frustration led to the FluCoMa project :slight_smile: Since then, some public versions of such iterative software have emerged, like this

In parallel, I made one that worked better for me in 2014, which I have reused a bit since then… it was based on @a.harker 's descriptors and entrymatcher. I wanted to control the pitch of a very chaotic synth, to resynthesize an audio stream. In the end, I found it more effective to do a single control via CV, and sample 1000 points iteratively. Then I would take a stream of 3 descriptors (pitch, centroid and amplitude) from my target (control), match the pitch through entrymatcher controlling the chaotic synth, and use a simple mapping to an LPF and a VCA for the other 2. The result was potent enough for me to have fun, with a training time that was manageable… you can hear a local bird recording (a willow warbler) I’ve resynthesized here: https://soundcloud.com/pierre-alexandre-tremblay/oiseau-solo1 The patch is very ugly, creative-coding-in-composition-time messy, but I can share it if someone wants to have a peek…
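The sample-then-match idea boils down to a nearest-neighbour lookup over the sampled points: for each incoming target descriptor value, find the stored point whose descriptor is closest and output the CV that produced it. A toy Python stand-in for entrymatcher (the database values below are made up):

```python
def nearest_entry(database, target):
    """database: list of (cv, descriptor) pairs sampled from the synth.
    Return the cv whose stored descriptor is closest to the target."""
    return min(database, key=lambda e: abs(e[1] - target))[0]

# hypothetical scan results: 3 of the 1000 (cv, measured pitch) points
db = [(0.0, 100.0), (0.5, 440.0), (1.0, 220.0)]
```

Note how the non-monotonic pitch column (440 Hz at cv 0.5 but 220 Hz at cv 1.0) already hints at why straight interpolation between points is hard with a chaotic synth.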

Now for the immediate future, I am interested in more than one dimension at a time, with machine-learning, both from a live stream of descriptors (bass) and from segmented targets, but this is work in progress and far from finished. I had a first poke at this idea with the fantastic Wekinator last year, then with a basic neural network my son has coded a few weeks back (proud dad moment here) for which I’ve done a quick demo with a virtual chaotic synth… I’m happy to go in more details on any of my plans for both ideas (real-time control and segmented targets) if that could interest anyone.

Ah yes, forgot about @tedmoore and @rdevine1. I imagine @rdevine1 in particular will be doing loads with that when the next concert (eventually!) happens.

You’ve shown me the patch a couple of times but I didn’t remember if it was based on discrete points (it is I guess) or if you mapped a continuous space. I remember the ‘scanning’ part of the patch being quite clever and efficient too. So are you using distance calculations to do some kind of interpolation between the sampled values or is it all quantized? (from the sounds of the soundcloud demo there’s some steppiness)

I was wanting to build/test something simpler where I would map something like centroid to a filter (or filter-y thing) response, so I was going to do some kind of one-to-one mapping, just taking measurements along the range and plotting them on a function object to interpolate between the rest.

Obviously this wouldn’t work for one-to-many or many-to-many mappings, not to mention anything ML-y, but it seemed like a reasonable place to start.
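The "plot measurements on a function object" approach is just piecewise-linear interpolation between breakpoints. A Python sketch of that, standing in for Max's function object (the breakpoints in the test are hypothetical measurements):

```python
import bisect

def piecewise_linear(breakpoints, x):
    """breakpoints: sorted list of (x, y) measurement pairs, like points
    plotted on a Max function object. Linearly interpolate between the
    two surrounding points; clamp outside the measured range."""
    xs = [p[0] for p in breakpoints]
    if x <= xs[0]:
        return breakpoints[0][1]
    if x >= xs[-1]:
        return breakpoints[-1][1]
    i = bisect.bisect_right(xs, x)
    (x0, y0), (x1, y1) = breakpoints[i - 1], breakpoints[i]
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
```

For a monotonic-ish response (like the LPG measurements later in this thread) this works fine; it falls apart exactly where the response folds back on itself.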

I wanna see the patch!


The version you saw is based on discrete points, and I take the nearest. The version I tried with Wekinator tested various interpolations, but for non-linear synths that was not conclusive at the time. Then trying the MLP stuff with Édouard gave me something fun to embrace the non-linearity, but I have not tried the new plans, which involve two many-to-many mappings in an assisted-learning fashion…

The MLP stuff you can see here https://huddersfield.box.com/s/rir81s7tsssv8iqq5e03808jbzyrf0tc and you can implement a similar idea with Wekinator: you train OSC messages from a series of presets and off you go!

Be careful what you wish for…


Yeah I remember the non-linear stuff really complicating things.

Did you ever try just plotting the points on separate function objects (one per dimension) to see if that gave you anything useful?

Would still be good to see your code, for the sake of the thread/discussion, but also to see about the jumping around the register and sampling cleverly (though for linear measurements, it doesn’t matter too much either way).

This kind of model (training sweet spots and then navigating a 2d space) doesn’t interest me too much, but this could be interesting with some of the self-organizing map stuff and/or doing some training via “resynthesis” models (i.e. feeding it sounds, and using those as training points).

I did something similar comparing the few nearest neighbours, but the thing was that they were at opposite ends of the space…

You need to use your imagination a bit… if you can map to 2D you can map to anything. MLP is actually good as a classifier: you just pick whichever output score is highest for the data you put in… for instance, look at the other classifiers we have done so far in the example folder. With many inputs you can make bassdrumness, snareness and hihatness values, train with 10 examples of each from a bunch of inputs, and you will get values out… I’ll do more examples once we have something along those lines in our toolset, but the ideas are in the Kadenze class on Wekinator if you don’t want to wait :slight_smile:
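The "pick whichever score is highest" idea can be sketched as a tiny pure-Python MLP trained by backprop, then argmax over the class outputs. This is a toy illustration (made-up two-class data, my own function names), not the FluCoMa or Wekinator implementation:

```python
import math
import random

def train_mlp(data, n_hidden=6, epochs=3000, lr=0.5, seed=1):
    """Toy one-hidden-layer MLP, sigmoid activations, MSE loss.
    data: list of (feature_list, one_hot_class_list) pairs.
    Returns a classify(x) function that picks the highest-scoring class."""
    random.seed(seed)
    n_in, n_out = len(data[0][0]), len(data[0][1])
    W1 = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hidden)]
    b1 = [0.0] * n_hidden
    W2 = [[random.uniform(-1, 1) for _ in range(n_hidden)] for _ in range(n_out)]
    b2 = [0.0] * n_out
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))

    for _ in range(epochs):
        for x, t in data:
            # forward pass
            h = [sig(sum(w * v for w, v in zip(row, x)) + b)
                 for row, b in zip(W1, b1)]
            y = [sig(sum(w * v for w, v in zip(row, h)) + b)
                 for row, b in zip(W2, b2)]
            # backprop deltas (computed before updating W2)
            dy = [(yi - ti) * yi * (1 - yi) for yi, ti in zip(y, t)]
            dh = [hi * (1 - hi) * sum(dy[k] * W2[k][j] for k in range(n_out))
                  for j, hi in enumerate(h)]
            # gradient-descent updates
            for k in range(n_out):
                for j in range(n_hidden):
                    W2[k][j] -= lr * dy[k] * h[j]
                b2[k] -= lr * dy[k]
            for j in range(n_hidden):
                for i in range(n_in):
                    W1[j][i] -= lr * dh[j] * x[i]
                b1[j] -= lr * dh[j]

    def classify(x):
        h = [sig(sum(w * v for w, v in zip(row, x)) + b)
             for row, b in zip(W1, b1)]
        y = [sig(sum(w * v for w, v in zip(row, h)) + b)
             for row, b in zip(W2, b2)]
        return max(range(len(y)), key=y.__getitem__)  # highest score wins

    return classify
```

With more inputs and three outputs you get exactly the bassdrumness/snareness/hihatness setup described above: one output per class, train on labelled examples, argmax at playback time.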

I get the idea, it’s more about my interests and how I like to approach things creatively. And actually, to put it in the form of an anecdote: when I was younger, my mom asked me what I liked to have as a snack. I told her I liked apple sauce, as I was into it at the time. And she bought a ton of it. Needless to say I got tired of it and didn’t like apple sauce anymore. So that’s all to say that I don’t like making decisions about what I like, because then I don’t like it anymore.

So this paradigm of “this is a specific sound I like, give me more like it” doesn’t interest or inspire me. Which is also why, after watching all the Wekinator ML stuff, I didn’t install or test any of it, because it’s all (pretty much) based on: map this spot to this, and this spot to this, and now train…

Yeah I can see that being the case.

I just put together a simple thing (manually) where I took 20 measurements off an LPG (LxD) while pumping some Disting pink noise through it (can you believe I don’t have a normal noise source in my skiff?!), then ran the centroid analysis on the scratchy turntable stuff and created a simple resynthesis of it! Granted, the sounds here are fairly straightforward and linear-ish, but it works as an initial test.

Here’s the response (visual scale is 0.-1. / 0.-127.):
Screenshot 2020-04-26 at 6.25.35 pm

So linear for a big chunk until the end.

And here’s a comparison (the “real” snare audio is highpassed at 110Hz whereas the noise is unfiltered (other than the LPG)), which sounds pretty convincing! It “feels” right too, not too laggy or unrealistic, which is quite weird given it’s a synthetic sound. It sounds really badass if I hard-pan them too, since I get a faux-stereo effect going.

The crossfader is also being sent to a VCA with the same signal-rate control I use when chopping up the direct snare audio.

That is the main difference between our uses: yours is very linear. Imagine mine is something like 440Hz at 0.1 and 445Hz at 0.5, but in between you might have anything, like 220 and 880 around 0.2 and 0.3, so interpolation is much harder. It is not random, so there might be a way, like the spirals in this fun learning toolbox, but it is very hard to find the right combination of descriptors…

anyways, I’ll show you more when I get to something more convincing with my non-linear patches…


ok, I completely forgot this since I got excited in other similar threads… here we go, very dirty and undocumented, as I like my private patches. I had to spend a few minutes remembering how it works, which is not so bad for a patch from 17 Nov 2014 :slight_smile:

The audio return is on the [adc~ 11] on the left, and came from the synth via the Expert Sleepers over ADAT. The single control out is done in the middle of the patch, [dac~ 11], going back to the Expert Sleepers via ADAT. Just beside this you have a switch which is off at load time (make sure you double-click that loadmess if you copy/paste the code).

In mode 0 (analysis) you can first play the float object above the switch manually, between 0 and 1000, where 0 outputs DC -1 and 1000 outputs DC +1. If that modulates your patch, then you press the bang at the top and the clever 3-pass FFT re-setting happens, scanning 1000 points for pitch with @a.harker ’s descriptors and entrymatcher.

Once the sampling is done, flick to mode 1 (playback) and you can play the keyboard or the funny sequence on the bottom right.
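The mode-0 scan is just stepping an index 0..1000 through a DC range of -1..+1 and recording the measured pitch at each point, building the (cv, pitch) table that mode 1 then matches against. A Python sketch of that mapping, where `measure_pitch` is a stand-in for the descriptor analysis:

```python
def index_to_dc(i, n=1000):
    """Map scan index 0..n to a DC offset in -1..+1 (0 -> -1, n -> +1)."""
    return 2.0 * i / n - 1.0

def scan(measure_pitch, n=1000):
    """Step the control through n+1 DC values and record the pitch the
    synth produces at each one, building a (cv, pitch) table."""
    return [(index_to_dc(i, n), measure_pitch(index_to_dc(i, n)))
            for i in range(n + 1)]
```

The patch does the measuring part with three FFT passes per point so unstable pitch has a chance to settle; the sketch glosses over that entirely.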

You can see the interpolation questions I had at the bottom. They generated my desire to be able to do data processing in Max… which generated the FluCoMa grant application. Here we go :slight_smile:


----------begin_max5_patcher----------
5083.3oc6cs0baiaE9YmeEXzjG1rwwA2AYmdY5a8+vN63gRBxgajHUIoRr2N
M+1KH.krjsnELtPxjzY2D6PJwCwGN3bGG7edyUylWdurdF3u.9MvUW8edyUW
ouT6Etp6ee0rMY2uXcVs9iMairtN6N4rqM2qQdei95Iz8WpX2lxcMqkM5u.p
6p0MOrVp+j6+blOTyCakF5OaF326t0pxhl57+TeCD9F3iO47h8OXb2EyWpep
ky+iOHRlczCnHaigf+yp7r06uy1rlEeJu3taqjKZLDFQnzaXWC3IDEo.Db6+
.qHK32a+N+227l1+5ZOAHAd7AHJ0Q.BwZQl3BPb33CPDja.DNgeiH5rPzjwG
gvNtFCyRCHBUH+p5k4Y.zxrEe67PDrWHxQbvMXfy0C9Tn9Gh1+1YLXQ4lMxh
lmAB0eNeK.9u9SvR4hL0XKF.B54.BywkNTrFFHHT6OPLMzfgAlwPtN6A.x6k
OyyJtyegLo9gTTMSDM0Ktmd.pZ4ZPOxgwuRX55PHO1YoMIZnhoAIBOFP05xr
ksRpACsdqyr3ifcBmnJCezBi0nk.GCX5WA3a7FfVo.6Fu4lPbmPIAVKShSho
tcxDv7PAyI7IQj1BIrDbLsNbB3eA2M9mDpVtcbwGDdBXbH2M+KRPz3y.gY7w
GfXtoOSXPlv.P8Hmda1hOCxA4S.uTcyRRFxHl1HsllDEiqUWeaVkDTTlWmWz
p8eordQU91lxp8e30parnbWg9aPFFquQXGMTJUuzKkgMXmF5DgV21DXkGB5H
.oz5GeO6wS.saHlvQHBNDPDBOEfH2r..gRfwO.Zvo..wbDfTBchNKzGlF7PN
FDVEST74g9vzPRDwMHBlvCGD0icRuO.96mWD.mYcMfaPitdALM.fTS4c2sV9
x.hxfI0KWir5VYQ1bC1.cDrtncPLnEicl4G3nDqQ0PsprMDTSinMhnIdwmz4
zAmEifCoMSVVAf.TOvE80wnzFzwt+7pWjwNG3I7B75bFQHhC3sdMnV9u2IKV
H6K31uB76Xvy6HRhnb+PNlNzjozGQttuT6v91kYMYcHVGjc0rC9bwut6J6+T
+l9yc3SptymkObjXn1q7kr065PBZhYZ5p8SRm86RN+2kay2k4AcS8ftHjGDF
c9QLCczWt6WUOkN1truHWdqhgPMAeaVSSU97cMlJN33oN4l4RMuCp6qGPW1y
akvrELW17Uor.js7OjYKjYEMfUs5kp+GfU4EKAYflprh5UJoQq1UrnIur.7Q
v1x0OTTtQwyBZ9TVCHa61px6y2j0H+GfMYeVB9Z9R0W4qpGQ4WqAe.joeVva
ZebaOaHAXCTHAHBuLzrKl.JedzhvnAVKYS4pwOuIHGSbhRJk1DyDi7cB0K46
etdcKSzKm1sWxJpsUxZEqeVKS6Quk52MyKnIEXsuzGfL60n1uEXmydiGqghW
x5SR5yduBGy0tBczLWMF4c5rbYtE7IdWrmRS0Few7KMlSNS0eLiEmK0jBCia
2fNJVp+9.TQ.AwUNra1JQ4lz2lRim6te.flLouE6VXAXbS3.HowCl9HfAgSG
fh5EPYp1OJJJEXR9ceyaXR8PJTiDukKiYdgShNcXQpPbTzqr5a.7n.WjyAWX
GSLmvvOYpxDnWvUuA+9FXJQ7Hi+IVbiG+PZhbqnu3XSYwAMEzDqSoXnSP2Tn
5sUtc3F6kQ8GBZJONVb3u3no.B4lGcrDMSzdyH8CfTpnTumt6qxpxpMYGGlF
aCM0qOZnmSjuqIJGRMN7YJ6xHwjsokvxpZvpJ4+F7W+3eWY7EjAdK5Zf9VfI
.SXhqocfB0NyjXXFQo9ESz46ZZJKBbH1uD2iP7REoCTOhRSngvikcal6SDAF
x0Tb2jaSSzVhmXJm4HshZcY4m2sUsBZBTckXufIyxFtmFa98DakiJ6H5JIHw
reBhDaUqSjLHmkB9.UbSJTfDh1HsAg3GMTXhYBpik2C2j+ODDYLyRwG1JFO3
1flWrTd+jXgpi0vxdFuN4YQYOGz.lCVORaOiysmef93TShIF5LVzJmUYQS0C
fbvp1+afYrNS1jcLS76gqNq3wQcyrz9V0Y+45xcK0E5pb7STC0MNMAGdDmFJ
IJgpQyk0Y5tRNIDBHdmK9040GJlg8+t9mduSgX9vCtOmfXzyRK+qMSu6S894
s2v8z.O0RywiAM6r6oOiWPLZ7xxwu9MkkInQIHhmyJXe12rTS8oGEIfYKW7M
.BMUhMsiU.oIGGXSjoYQQb2TJiP9DO5vjOno79UC45Fdjyz13yHr3kwLS0qk
sYqxMo6lbUuFxK0jIXMvkB8WKo6JB6iyrtrpA7Aj5+uFrb2lsSA9TpOnsvj.
fTVb73hM70HvY4HcLtqc0uEEabHk98WTWeL4TmUimoxjo7z3YBUS.RBaHXBR
bsAGI5x.aH3A5EiPSCLxwtqAwXsTTwn6xZj8wJgekp57UnaJzqdAE0rAaoQh
WRLI3kRI9gQcxjhDFk2+10l3TMddcfPMtWMsErA7Pnnzfad+jnh+REdAQzT7
9lkTT1ps.HHEjbMvDn3w17vzT+.KiD8jnTNayaSxyHHP+bVHBg9gSlfPQhVh
KVA3dKsZepKLwOHPBrPPjesYJSbkoQA49y0f6pJ2scpX2.Bh8Cr51FyQI1T6
K7+9y2C40Fipqe1u3O6FwOEjolc4bTzOpd0y2raCnuhYF6.9EjZjGR8isyDs
OQTX6dO.MQ1T3PlW8AFrwpdBMd6W0oZD+fdZ0pfN9g766AiT7yxWjoIE5oQJ
8VD4sF9l19mohsuB+3J2WZzn3zLrTWLeibfZvrBWU.XB9EmeTJwfAFILsZ2g
BHbs6nXhTpAH5R7bfCT5VPa4cuac1Gj2ucclRryxX.Jvyjlc2bRLoKUWTlop
jQ7mI+dqoxNdp75U4qkeQVUmWVbz7zUyx1t8nKexNoWAg+Qo9AIt9vkxKLW5
vlqeVk7K46+9GZT.yxpTu6MpW7cUlg18GRwX6ioborpXW9i6g9C6A+1G39gq
IuTFKG4ztFkG7wAqZZ+t0kK97SJQixsxh7hi2kwmb6kxUY6V2b64m5N89qxV
H68Ke1osqlcWU9xxh1WhSP51KumbJ1aSTiXGOXzehhrsm4KazG2yMqUCxc0y
ypZmH5pOD79a1TVt9zac36sVtpo61ayKJdBJ1Tts+aVke2mdgu67R0M27ROa
8cpuU4Aj9t2pVp1baqsGm94xVutas6oO96yJzc0.s.81gK7vMM0HympWToLf
6jwq4Ne4L2YohGdg7q4Ka9jlPGyLX5WhcLQyNLKuL+NYcyoWqI6t5SuxyDXn
tzt4cqQusQprtTMJN8Cbxgbwos1hGkxcx0eQw9Oo08cCCTKkapA6JT7MJ7PY
dBXwCKVKqAssJx4JKBNTy08VXv8Kirmgc+xJ6UIxyJeym9rN+5udDcR1GmVs
lDjo2QRNrS9ezH1d5pH9C862ijfRk69qZ.ZnX7fyC6CB2vSjvn6gKNpsa7nB
5A.OWWdmxr2ZI.2hnMeJudLQSgenoQ6Ftqp0iDZdxlgqe2qtTQGd9MF2KfsW
dWWDjo.tWSAcMYsN9Yy9wH3y.mXH5SpK262VAxKZ9EEe8u71Un2A9Hn8Ww27
t20+TFxAvOjfNiEBP2T4VHJ4+y1+5mAP9MC.MApfab89+OC75mAnIgXF.e71
cdH0itpF7KTJBBeG3WAUxs.kHGVqzG81p3uAT.fo2i8LKVlJVFRCwR.SOOtS
0avML7RB90S.2.ZO9QXJnWI+e5Jy+vIBjW76cgv.m9hL7O5cbKyVOtEoIhNp
wmchntbW0h8HwdkV.3IPhxOul7hCAK32dzjBfsLBu52BjkuE6ecixaQqvKqd
KZm0i1aw9G9kwBTuuEN1LGeUd3e25x4Yq67e+.W+K4t+iwDH3IJYpTmmX+Zy
7jtrb9CccdhYdUOzQEilR04I1u1MOExhWcdNwNs4RcqLf3TSaZrqAWEmBn5D
mYQDLAB+n1m1eEftgRw7T16F+soqvQDjCONKLXXTpRi7Uf2li.+c.Gz7IYgN
Cu.45ZIPAD3WHguCcRxEP+11VPSLuhyFhocMKBobrssDQAh9poOwqe8qs+70
guhyzYQR8iKUXJEAHLN5WoSB8qDte5MvQbeTLo51lN1be2uKTiYy1b5tGAbb
+UhfbzQkAzOz6Q.DwKHhZ5TcQaOBnqUJ3zYWBv8Bs5bGHk9C9lDvukcTSRbh
4VDfLM2h.X+3tLGvx+jrAAb8nVrCrvB1O8aPfT+PPSUN9y1FDHwOPyrqJh2F
DXZrCARcEkR0c2NLLhlqNo2f.o9o5DSPS2VBhwVN0elLa4SpWfcJ8mBi4R8y
pjtLEFGq4lua0JY02.sUjYYa0jFNsEAo6xl3p+BFVKiuUvjuC6lJrWpYCyLM
NLwOA8REGOLYfccetjXlhs4.DXwZYVEXt2ZA26OTWzazO1vsLxwlZsoL0Ri3
dxbRaLQhatHgMpzR5Nf9lj1Rbzw503Y+fiG9nBhNXYgou20G.gY7I..4lzOd
JIfcFv9PH1T3bZGCcNIQvH25De+XnB8bLQNgPjtRTVDwiiMcAXH7NyjmHy2Y
4+mqEy6l7IF0zqzQcaBZVz1Ez51XMBNU7GjQ8wLCjwdVDT7CtCgTlWvThHd9
C1ZIVcS4l5u4MT00gkCV3VwHejzyMbUhzenNbV4ujWhHSAPzkUCOcSrOoP6q
hzxpZv171C+pMxk4YE5ikfh1ipft+c8pMc+5DHIjtYVk4HiZeEA.iyYVCBhm
.8jZN1mV86dypDQAgvP5Dv2Etic07NlnnhPTX5Dv4EtiUbiomWDWDJAkNAbd
gk5XMIACGB888AEI0MDjRfGEdvHc5isZUiF3pAuEAd6DfciJ7gciaNPOhS5Z
lRm2zTuN+ji5wM8xrW3zOI3cvHG62+6OHooz3c5lz.P8Eyc7qtfIChjHla1T
R5LRmxiWG8aobc1C.DKvovxUkdXGAJSMZ0cfQFqZaaDvnyUU2tIotqMZHv33
k0qsUxsxhki+lXgP8I4LhtTnFkMegt30xT5hdnV1kZ81l08D3rMj3nTJSJz6
LA.EmsYV6gaXPhiYPVBh8IFAcc9cNLV4cNbaGufrMAR8gqhXXmhptuD9zP4G
JwKjBghWM21xVkOsXqbSKH0rWcvc8f1nj4JE0JqV1U2TihiKjyUR29bZDRLE
pPRRbVFdHLw96mWPNgU8RjE2TLLh3zeiqkMShSXaQpOq+DlXrDofqzUeTid4
GHbrBVD2fYGNS75Aizubm1+WLP1S6xHc.2y5tHomzcQ5oyhnK4cPeyNVSpDK
HUJKDTRXAknogfRscAG7kFSsuNnPPoKOQwC0fBcQREjYJlUipfLUQsg8CGBJ
QrfRrfLSgsXlBGLJcwIpfvRfrXLIPghRWbd5TAjJS5LcPbz.P5jySZnejFZy
5.RHvWnEyjIzPPIKFRDQTlHsgxzXLOlXCoSCgh6jjAiR1nNHIDxnSFLYz5W2
KYhPBLTThLJxyrizQQdVhM5.C2L4E4YBg7rDaTOPBg1VwqQ.leTJwJKHPAvV
Ygc1JKBwnxFIVzfLnvCFOgMbe7PXStvFKDdBDGHwFB3nY7mcjNIZi5K5ZGhG
kgsM51GSBiBMZqWifGE1aMoIiGooiGoYiGo4WhzgvLOMktzZXNOZCxKpYfFM
Reo0RbbLjeXGfihFouHfCiFouzZIJOZj9RRPnjnQ5KI2DGG.2FCjEgv4WtMZ
DYgvTRNenLOlaUjjCQXC4zAaLYSPJBgc3brsCovy0aCownnnRAMdiZz3Mpgi
2nFNdiZaj3EfURrzQCcshzwAcYICkHQlM4lLkFE7kagoAAIAhZJcI6eRRixL
oMjVviF9hGh0nVYTRvnzEyOKKZSj7AIFnZRwFjZHgwFpffxnCFSBwlXxkDJJ
MH1HyrJUIgHIyLqLmIITTBYCkv9RIajZPCg1Ta3GBwrD0pxLKDBinICkDBpX
f7NixGMaJshzwwlRpMqBRBByIanJWPJcvxAJkLdLMjwiowpxTCiCA.aUzCBh
HM3fIRyFzKI.fGIcvxPOwJEBoghRWttYCg8UDgskuj2yT7gpPozbvWLkkQoJ
9H1nAfGORewJmRDizSSrQCKh.igfZMswWzUjzPvAa0vD0SE.L.LVHTbldsQZ
dPJ7PxPkmBhcp2ChzHzfYpFwF86AQqgMIXLJB4dEwcHrKBvCWUGhECkYZXq7
JKDEsIlMXTxFoUDdnnzk0GDhEbXaX74gvGVrUUGZHHjMaOGTPlmFuDUhGuDU
ZS0rmFDdSaXMCDcvCg+CHqJkcRnnDZHhiNxJ0WgvzpmXCZOFGDk5gFYkyr3g
ZPFGGdPVED2n.ubaWWPC9flIFLyuPVkV2PQK1.RK5.RKx.RKq7vhDDwnVkkP
TPRSHxlvMjFLJMHMTAjUQqgDBtBqxeGJHwUGYUF7PzfXbB0doSdSK6BSBJHZ
TsKNIjvPKqVEGD2QQjgJmfH6BQQPBpExpHGfvAY1xpXGXnE1aZYkNRbPL92J
m5OL58jVVsRNHL7nwqJCri1wweajUZQYvf.wLaqfHuUihrZAQPZXKGTj7xIZ
HHZ1PngJaj5C+bKRQcPLjCZU5vQAgIDZi5lfn..ZUARwCFotXwBEl4J6Zt.A
QUCjMXM8KDzp.vGFHzJMnvvvDZkFTXPTgBsQ5TZZvH0fDmXCoFlH3Bsp2TIB
FoP1zTi7VWLb3hohMlv87kUltHY11seQVU28o0zX1lr+nr5vYs3rM4El+ot0
xNqR9k78edcmDeVV0hOk2HWzrqxzlKuma5A6y1TpLWqXWdm0hJJ+eey+CjQY
cR.
-----------end_max5_patcher-----------

Nice!

Don’t know how much you tidied before posting, but it’s pretty readable for a PA patch. Obviously some confusing bits in the middle, but it looks like I may be able to get it to work (will report back).

None! This is as is, not a single cord was moved… even the creative notes to self left there at the bottom right!

Alright! I was able to get it kind of working with a Fourses so that’s something!

I initially thought it didn’t work as I was getting nothing, but after playing with the transpose on the keyboard/sequence I found some spots.

It found only a few areas of pitch along the sweep, plus it’s hard to get that module to spit out something that resembles semi-pitchy material.

I’ll test it on a more complex patch when I have something suitable up and running.

do I detect 2 almost real compliments in 2 posts in a row? Are you ok? :rofl:

yeah, if you were more patient, you could mod the patch to enter 10k points, of which 100 at full resolution; that should give you more precision… but I’m never that patient :wink: Replacing entrymatcher with knn in there would be good, so we could check clustering too (in relation to our other thread)

Hehe, I like to keep all my sass in one thread at a time… :stuck_out_tongue_winking_eye:

Yeah the pre-training time doesn’t bother me too much. That super crude stuff I did with a hardware patch bay a few years ago took around 20 minutes to analyze a single patch. Faster is obviously better though.

Then take more time by changing the multiplier of the FFT size in the middle of the patch, before you change the number of points. That will allow unstable pitch to settle.

My next steps for this are along the lines of bringing in statistics of time-domain windows, like in LPT and other patches like fluid corpus map, to allow even more chaotic material to be musaik’d (wow, that is an ugly past participle)


Not using FluCoMa stuff at the moment, but while working on the (epic) Kaizo Snare blog I’m recording and adding other bits of related media to the project.

This is expanding on some of the stuff I did in Kaizo Snare where I take a bunch of secondary controller streams from the fader (overall activity level, distance between direction changes, time between direction changes, “velocity” from that (distance/time), etc…) and use these to control the whole patch. So 8 streams from the ES-8 controlling everything from the single fader.
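Those secondary streams can be sketched in Python. This is my reconstruction of the idea (direction changes, then distance, time and velocity between them), not the actual patch; real code would run at signal rate with actual timestamps rather than sample indices:

```python
def fader_features(samples):
    """Derive secondary control streams from a single fader stream.

    Returns (changes, segments): indices where the fader reverses
    direction, and per-segment distance, time and velocity
    (distance/time) between consecutive direction changes."""
    changes = []   # indices where the fader reverses direction
    last_dir = 0
    for i in range(1, len(samples)):
        d = samples[i] - samples[i - 1]
        direction = (d > 0) - (d < 0)  # +1 up, -1 down, 0 still
        if direction and last_dir and direction != last_dir:
            changes.append(i - 1)      # the turnaround point
        if direction:
            last_dir = direction
    segments = []
    pts = [0] + changes + [len(samples) - 1]
    for a, b in zip(pts, pts[1:]):
        dist = abs(samples[b] - samples[a])
        dt = b - a
        segments.append({"distance": dist, "time": dt,
                         "velocity": dist / dt if dt else 0.0})
    return changes, segments
```

Each of those per-segment values would then be held (s&h style) and sent out a separate ES-8 channel as CV.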

It would be interesting to do gestural decomposition, either in an NMF-y way like that CV stuff I did at the start (which I never got working in real time) or something like that Heretic thing from the other thread.

Even some “simple” novelty stuff to send triggers when new material is detected.

The funny thing is I set off to try to get as much from a single fader as I could, and that shit fills up real quick!


What’s the synthesis? Concatenative or one big fat synth?

Don’t know how big it is, but it’s pretty fat:

So this is modular synth stuff, with the computer/patch-side of things just doing the sensor parsing and sending it out to the modular.

Is the fourses module the same as the gen~ patch?