Max - Creating a buffer for activation seeding [fluid.bufnmf~]

Hello and forgive the terribly n00b question! Currently working with the Max objects on 2021 M1 MBP.

I am trying to provide a seeded activations buffer for a very long sound file, and I’m having trouble making a file from scratch that the object will accept.
I begin by reading my source file into a [buffer~] and processing it with [fluid.bufnmf~] for 2 components. This gives me a 2-channel Bases buffer (plenty of tutorials exist for how to seed these) and a 2-channel Activations buffer, the length of which is a few ms longer than the original file.
I write the activations buffer to a file (86 Hz sample rate(?), 16-bit), bring it into Reaper, and create a 2-channel file with my seeded activations. I set the render length to exactly the same as the original Activations buffer and render it. But when I load the new file into the Activations buffer and try to process with @actmode 1, I get the error message about buffer length and channels.
Do I need to duplicate the Bases seeding method and create a buffer with the exact number of samples as the original Activations buffer? Do I need to make sure my Reaper session is set to an 86 Hz sample rate? Any tips would be suuuper helpful.
Thank you!

  • Nathan

Hi Nathan,

This is true. To be more specific, the number of samples in the Activations buffer needs to exactly match the number of samples the decomposition process will use in that buffer.

The object reference spells out the length and channel count a supplied Activations buffer needs to have, so check and see if yours matches!
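
For a rough sense of where that count comes from, here is a small Python sketch. The 44.1 kHz source rate and the 512-sample hop are assumptions (512 is what the default fftsettings of 1024 -1 -1 give you); the real numbers depend on your file and your FFT settings.

```python
# Rough sanity check for how long a seeded Activations buffer should be.
# Assumptions: 44.1 kHz source, FluCoMa's default hop size of 512.
src_sr = 44100
hop_size = 512
src_frames = 10 * 60 * src_sr        # hypothetical 10-minute source file

# BufNMF produces roughly one activation frame per hop, plus a little padding,
# which is why the Activations buffer reads as slightly longer than the source.
approx_act_frames = src_frames // hop_size + 1
print(approx_act_frames)

# The reliable target, though, is the Activations buffer fluid.bufnmf~ itself
# produced on the first pass: query that buffer~'s length and match it exactly.
```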

I’m not sure if Reaper would play nice with this? Have you tried? Inserting Reaper into this process feels like a wildcard to me, since the data in the activations buffer isn’t really audio, so trying to edit it as audio in an audio editor feels like it might lead to some confusions in the human and/or data. What is the manipulation you’re doing in Reaper?
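
If the manipulation is something simple, it can be less confusing to do it on the data directly rather than in an audio editor. A minimal Python sketch, assuming the Activations buffer was written out as a 2-channel WAV and that the edit is, say, muting one component over a time range (the file names and the region are just placeholders):

```python
# Editing the activations as data rather than as audio.
# Assumptions: a 2-channel activations WAV written from the buffer~, and a
# hypothetical edit that mutes the second component over a time range.
import soundfile as sf

acts, act_sr = sf.read("activations.wav")   # shape (frames, 2), act_sr ~ 86

start = int(10.0 * act_sr)                  # placeholder region: 10 s to 20 s
end = int(20.0 * act_sr)
acts[start:end, 1] = 0.0                    # zero the second component there

# Write back with the same length, channel count, and (tiny) sample rate,
# as 32-bit float so the NMF magnitudes aren't clipped by a 16-bit encode.
sf.write("activations_seeded.wav", acts, act_sr, subtype="FLOAT")
```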

Also, if you share the source materials, it might help us suggest a solution quickly!

Best,

T

Hi Ted,

Thank you so much for your help; your tutorials on FluCoMa (esp. the YouTube videos) are really amazing.

Forgive my ignorance: I would like to check the hop size but I’m not sure where that information is found in the Max FluCoMa objects… is that the buffer’s sample rate?
I was using Reaper before I noticed that the sample rate was 86 Hz. It makes sense that the buffer I was loading was the wrong size (far too big).

Here is a sample of the material I’m working with, which is a custom robotic instrument playing piano strings using robotic arms:

We are trying to isolate the harmonics of the piano from any ambient sounds like voices, passing trucks, etc. as much as possible, for further analysis in another Machine Learning algorithm. The idea is to get bases from this process and then run the live audio through [fluid.nmffilter~].
If there are any FluCoMa operations that might be more helpful or appropriate than [fluid.bufnmf~] and [fluid.nmffilter~], any advice would be much appreciated. Thanks!

No worries! It’s not the sample rate. The hop size is one of the FFT settings for a FluCoMa object. Many of the FluCoMa objects use an FFT analysis, so you can adjust the parameters for how that FFT is computed (because it can have an important effect on the results!). You can read more about these parameters in the BufNMF object reference (screen shot below), as well as on the Fourier Transform page and the BufSTFT page. The best way to know the hopSize an object is set to use is either to set it yourself (again, see the reference and/or helpfiles) or to attach an attrui to the object and look at the fftSettings attribute.
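
As a worked example of how that 86 Hz figure falls out of the hop size (assuming the default fftsettings of 1024 -1 -1 and a 44.1 kHz source, neither of which I can read from your patch):

```python
# How the ~86 Hz "sample rate" of the Activations buffer relates to the hop size.
# Assumptions: default fftsettings of 1024 -1 -1 and a 44.1 kHz source file.
window_size = 1024
hop_size = window_size // 2    # a hop of -1 means windowSize / 2, i.e. 512
source_sr = 44100

# One activation frame per hop, so the frame rate of the activations data is:
act_rate = source_sr / hop_size
print(act_rate)                # ~86.13, which is why the written file reads as 86 Hz
```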

Sounds cool!

I’m optimistic you can find some success with this. Let us know how it goes, and if there’s more to be desired, perhaps there are some more things to try.

Do you have a video of this? It looks amazing.

Sam

@tedmoore I really appreciate your help. I will play around with this today and see what I can come up with. Looking at your demo which isolates guitar pick sounds, I’m wondering if I should try to separate into more components and then perform some additional analysis to remove the more spectrally complex ones. I will report back, thank you!

@spluta Yes! Here’s a short video we shot last year that shows a little bit of what the machine can do.
Presently we’re in residence at the University of Texas at San Antonio’s School of Data Science, adding a generative AI brain to the machine. For now it’s all manual control, and it’s very fun to play.

This looks/sounds amazing!

And to echo what @tedmoore said, you should probably avoid Reaper altogether since it adds surface area for failure. Unless you need to do something really niche/specific, you shouldn’t really treat the “data” buffers as “audio”.

Welcome! There is no such thing as a terrible n00b question! If you stall and we can help, everyone wins.

It does indeed look and sound amazing!

Now this is why I love this forum. This is a great musical question, and you can use something else to do that: fluid.sines~ and fluid.hpss~ will give you real-time decomposition. With the former, you can specify how long a sine has to last to ‘register’. It is a real-time SPEAR equivalent, if you know that software. I would use that instead of NMF because the signal is slow and unpredictable (so many potential harmonics!) and the noise (aka the undesired signal) is going to be shorter and more chaotic… a perfect case for fluid.sines~.

If you are tempted, the helpfile of bufnmf has a ‘vocoder’ example in there. If it helps, let us know. If it doesn’t, let us know too, so I can make it better and clearer.