Noobie questions

Hi kids,

This all looks fab but also rather daunting. I’m looking to do some realtime segmenting and resynthesising of instrumental (violin and flutes) input. Where do you recommend I start digging?

Cheers,
Dan

Hello!

This is a very good question indeed, and @weefuzzy and I always try to curate many ways in! To help you, it would be good to know which platform you are on: Max, Pd, or SuperCollider? In all cases there is an overview file which shows you the various ‘slicers’ we devised, with examples in each help file. In Max it is in the Extras menu, in Pd it is a single file at the top, and in SuperCollider you can search for Fluid Decomposition with cmd-shift-D.

There is also an overview of the whole idea on learn.flucoma.org, but again, if we know where you stall, or what is not clear, we can improve the entry points for all of these, so any feedback on where and when it all feels daunting is welcome!

p

@djr how nice to see you, albeit virtually. Thanks for the feedback – I shall think about good ways of reducing the daunt. Any suggestions welcome. The learn site is still very nascent, but the idea is that it will evolve into something more useful.

Meanwhile, things to consider with segmentation are how much stuff you want to capture in a slice, and how onset-ty the sounds are. I would assume, for example, that with violins and flutes fluid.ampslice~ might struggle unless the material is very staccato. You might have more luck with fluid.onsetslice~ or fluid.noveltyslice~.
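If it helps to see the underlying idea, here is a rough Python sketch of spectral-flux onset detection, the textbook technique behind onset slicers. To be clear, this is an illustration of the general principle, not FluCoMa’s actual implementation, and the frame settings and threshold are just made-up illustrative values:

```python
import numpy as np

def spectral_flux_onsets(signal, frame_size=1024, hop=512, threshold=0.2):
    """Frame indices where positive spectral flux has a peak above threshold."""
    n_frames = 1 + (len(signal) - frame_size) // hop
    window = np.hanning(frame_size)
    mags = np.array([
        np.abs(np.fft.rfft(window * signal[i * hop:i * hop + frame_size]))
        for i in range(n_frames)
    ])
    # Keep only per-bin increases: energy appearing, not disappearing
    flux = np.maximum(np.diff(mags, axis=0), 0.0).sum(axis=1)
    flux /= flux.max() + 1e-12  # normalise so the threshold is relative
    # Report local maxima of the flux curve that exceed the threshold
    return [i + 1 for i in range(1, len(flux) - 1)
            if flux[i] > threshold
            and flux[i] >= flux[i - 1] and flux[i] >= flux[i + 1]]

# Toy signal: half a second of silence, then half a second of a 440 Hz tone
sr = 44100
sig = np.concatenate([np.zeros(sr // 2),
                      np.sin(2 * np.pi * 440 * np.arange(sr // 2) / sr)])
onsets = spectral_flux_onsets(sig)
print(onsets)  # a single onset, at the frame where the tone enters
```

The “positive differences only” step is the part that matters for soft attacks: energy fading out contributes nothing, so a slow bow or breath attack registers only where energy genuinely appears.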

If you want to slice at longer time scales than single notes, then novelty slice with a slightly bigger kernel size than the default can work well. Inevitably there’s a certain amount of sucking and seeing involved.
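For the curious, the kernel-size intuition can be sketched with Foote’s classic checkerboard-kernel novelty measure. Again, this is the general idea rather than fluid.noveltyslice~’s actual code, and the feature vectors here are toy data:

```python
import numpy as np

def novelty_curve(features, kernel_size=8):
    """features: (n_frames, n_dims). Returns one novelty value per frame."""
    # Self-similarity matrix from cosine similarity between frames
    unit = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-12)
    ssm = unit @ unit.T
    # Checkerboard kernel: +1 on the block-diagonal quadrants, -1 elsewhere
    half = kernel_size // 2
    sign = np.ones(kernel_size)
    sign[half:] = -1
    kernel = np.outer(sign, sign)
    novelty = np.zeros(len(features))
    for i in range(half, len(features) - half):
        # Correlate the kernel with the patch around the diagonal at frame i
        patch = ssm[i - half:i + half, i - half:i + half]
        novelty[i] = np.sum(kernel * patch)
    return novelty

# Toy data: two homogeneous 20-frame sections with a boundary at frame 20
a = np.tile([1.0, 0.0], (20, 1))
b = np.tile([0.0, 1.0], (20, 1))
nov = novelty_curve(np.vstack([a, b]), kernel_size=8)
print(int(np.argmax(nov)))  # → 20, the section boundary
```

A bigger kernel looks at a longer stretch of context on each side of the diagonal, which is exactly why it favours boundaries between longer homogeneous regions over note-to-note changes.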

As for resynthesis, we don’t have anything Spear-like that will let you store and transform whole sets of partials, but things like fluid.sines~ and fluid.hpss~ can be used to make any subsequent analysis much easier (and, if you’re in Max-land, there’s stuff in the CNMAT and MuBu packages you are probably already familiar with for doing sinusoidal transformations).

Finally, using fluid.hpss~ can work well as a general preprocessor: even with something like a flute, the ‘percussive’ part might accentuate onsets enough that segmentation becomes much simpler (for example…)
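To show why the ‘percussive’ output sharpens onsets, here is a sketch of median-filtering harmonic/percussive separation in the manner of Fitzgerald (2010): sustained sounds draw horizontal lines in a spectrogram, transients draw vertical ones, so a median filter along each axis pulls them apart. This is the general technique, not fluid.hpss~’s exact implementation, and the toy spectrogram is invented for the demo:

```python
import numpy as np

def hpss_masks(mag, filt=17):
    """mag: (freqs, frames) magnitude spectrogram. Returns (harmonic, percussive) soft masks."""
    half = filt // 2
    pad_t = np.pad(mag, ((0, 0), (half, half)), mode="edge")
    pad_f = np.pad(mag, ((half, half), (0, 0)), mode="edge")
    # Harmonic sounds are flat in time: median filter along the time axis
    harm = np.stack([np.median(pad_t[:, t:t + filt], axis=1)
                     for t in range(mag.shape[1])], axis=1)
    # Percussive sounds are flat in frequency: median filter along frequency
    perc = np.stack([np.median(pad_f[f:f + filt, :], axis=0)
                     for f in range(mag.shape[0])], axis=0)
    total = harm + perc + 1e-12
    return harm / total, perc / total  # soft masks that sum to 1

# Toy spectrogram: a sustained tone (horizontal line) plus one click (vertical line)
mag = np.zeros((64, 64))
mag[10, :] = 1.0   # tone at one frequency bin
mag[:, 30] = 1.0   # broadband click at one frame
h_mask, p_mask = hpss_masks(mag)
print(h_mask[10, 5] > 0.9, p_mask[20, 30] > 0.9)  # → True True
```

Multiplying the original spectrogram by the percussive mask leaves mostly the clicks, which is the sense in which it can make onset segmentation of flute or violin much simpler.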

1 Like

Many thanks both for your prompt and extremely helpful replies.

@tremblap , I’m using Max but certainly not afraid of the command line (and I think there is potentially some opportunity to work with that?). I’ll gladly try to give some constructive feedback where I can.

@weefuzzy, lovely to virtually chat to you too! These are excellent suggestions. I love the sound of fluid.noveltyslice~ and the bigger kernel is a hot tip.

I’ll have a play around and share some outcomes over the next week or so. Thanks very much to you both and the team. This is going to be fun!

1 Like

Perhaps I can chime in here. Through the tools I have become much cosier with the command line and have implemented it in a number of ways and for a number of applications, one being what I’ve named Reacoma (FluCoMa segmenting/layer separation in REAPER). I’ve also done a fair bit of batch processing using Python as a wrapper which makes calls to the CLI. If you have any ideas I’d be happy to bounce them around!
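For a flavour of what that batch-processing wrapper looks like, here is a minimal sketch. The executable name and the flag names (`-source`, `-indices`) are placeholders I’ve made up for illustration, so check the actual CLI tool’s help text before using anything like this:

```python
import subprocess
from pathlib import Path

def batch_slice(in_dir, out_dir, tool="fluid-noveltyslice", dry_run=False):
    """Build (and optionally run) one CLI call per .wav file in in_dir."""
    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    commands = []
    for wav in sorted(Path(in_dir).glob("*.wav")):
        # Placeholder flags: consult the real tool for its argument names
        cmd = [tool, "-source", str(wav),
               "-indices", str(out_dir / f"{wav.stem}-slices.wav")]
        commands.append(cmd)
        if not dry_run:
            subprocess.run(cmd, check=True)
    return commands

# Dry run over a scratch folder, just to show the commands produced
import tempfile
tmp = Path(tempfile.mkdtemp())
(tmp / "take1.wav").touch()
cmds = batch_slice(tmp, tmp / "out", dry_run=True)
print(len(cmds), "command(s) built, e.g.", cmds[0][0])
```

The nice thing about this shape is that the same loop works for any of the slicers: swap the tool name and flags and the Python side doesn’t change.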

1 Like

Interesting, thanks @jamesbradbury.

Let me have a bit more of a dig first. I’m not yet precisely sure what flucoma can do for me artistically. I need to get the objects to make some sound rather than just clicks and beachballing…

2 Likes

That sounds like music to me :wink: Seriously, the help files include some basic musical examples, and there is also an examples folder with some more advanced ones. Segmentation-wise, I like the conditional granular selection example for fluid.bufpitch~ – you segment, then select potential ‘grains’ according to some conditions on a descriptor… in this case, pitch confidence and range.