Mic input to navigate through fluid plotter-Max

Hello everyone! I’m very new to Max and FluCoMa, and I was wondering about the best way to have my mic input trigger the points on the fluid 2D corpus. [My 2D corpus contains my voice recordings.]
For now I have managed to analyse the mic input in a similar manner, and I have, in a way, made the mic input parameters move the highlighter within the 2D corpus (it’s not efficient and is stuck within one region).
1. Even though the highlighter is moving (however little), I don’t hear the sounds. How do I make it work?
2. How do I ensure the highlighter moves across the 2D corpus and isn’t stuck in one specific region?

I’ve attached an image with my patches for you to better understand.
I am quite the noob in this world of Max + FluCoMa, and I’m certain I’m doing a lot of things wrong here, so I would appreciate any and every help possible. Thank you!

Hello!

This patch looks OK to me, although it is partial. I presume you are querying the original high-dimensional MFCC dataset via audio?

A good way to test is to replace your mic input with various sounds from the original dataset. If you find the right spot, then it means you have that part sorted… and that your problem is not one of patching.

Once you confirm this, we can go into the next problem, which is more advanced: non-overlapping descriptor spaces. There are solutions to that ‘problem/state of affairs’, but you need to decide where and how you will compromise.
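One common compromise for non-overlapping descriptor spaces is to rescale the live input’s descriptors into the value range of the pre-analysed corpus, so queries can actually reach every region of the space (this is conceptually what normalization in FluCoMa is for). A minimal sketch of that idea, outside Max, with all names being illustrative rather than FluCoMa API:

```python
import numpy as np

def fit_range(corpus):
    """Per-dimension min/max of the corpus descriptors."""
    corpus = np.asarray(corpus, dtype=float)
    return corpus.min(axis=0), corpus.max(axis=0)

def map_to_corpus(live, live_lo, live_hi, corpus_lo, corpus_hi):
    """Rescale a live descriptor vector from its own observed range
    into the corpus range (one possible compromise)."""
    live = np.asarray(live, dtype=float)
    span = np.where(live_hi > live_lo, live_hi - live_lo, 1.0)
    t = (live - live_lo) / span          # position within the live range, 0..1
    return corpus_lo + t * (corpus_hi - corpus_lo)

# Tiny 2-D corpus: three points, per-dimension range [0, 4] and [10, 18].
corpus = [[0.0, 10.0], [2.0, 14.0], [4.0, 18.0]]
c_lo, c_hi = fit_range(corpus)

# A mic descriptor observed in the ranges [0, 100] and [0, 1]
# lands mid-corpus once rescaled:
mapped = map_to_corpus([50.0, 0.5],
                       live_lo=np.array([0.0, 0.0]),
                       live_hi=np.array([100.0, 1.0]),
                       corpus_lo=c_lo, corpus_hi=c_hi)
print(mapped)  # [ 2. 14.]
```

The compromise here is that you lose absolute descriptor values: a quiet, dull mic input is forced to cover the same territory as the corpus, which may or may not be what you want musically.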


Hey @tremblap, thanks for replying. I did replace the mic input with sounds from the dataset; it seems to be stuck in the same region (maybe slightly broader, but not significantly). And I still couldn’t hear the matched data points that were highlighted.

Here is the image of the mic input that is analysed. (Maybe I’m doing something wrong here)

And I also tried changing my patching: instead of doing the lookup from the MFCC query, I chose to get the peak amplitude from the mic input. That did solve the audio issue (now, wherever the highlighter is, I can hear it), but it is still stuck in a new specific region and is very flickery, which is to be expected considering the values I’m getting from the peak amp.
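The flicker comes from frame-to-frame jumps in the peak-amplitude stream. In Max you would typically steady it with something like a one-pole lowpass (which is roughly what [slide~] does) before it drives the lookup. A hedged sketch of that smoothing idea, outside the patch:

```python
def one_pole(values, coeff=0.9):
    """Exponential smoothing: out[n] = coeff*out[n-1] + (1-coeff)*in[n].
    Higher coeff = smoother but slower to respond."""
    out, y = [], float(values[0])
    for v in values:
        y = coeff * y + (1.0 - coeff) * v
        out.append(y)
    return out

# A maximally jittery control stream, alternating 0 and 1:
jittery = [0.0, 1.0, 0.0, 1.0, 0.0, 1.0]
smooth = one_pole(jittery, coeff=0.8)
# After smoothing, consecutive values differ far less than the raw
# stream's full-range jumps, so the highlighter stops flickering.
```

The trade-off is latency: the more you smooth, the more sluggishly the highlighter follows real changes in the mic level.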

here is the image -

Any thoughts?

Thank you again !

I don’t understand your MFCC processing in the patch… are you trying to pick the maximum MFCC value for each slice?

First, try to query with a simple frame, or a basic average, for a slice (between silences, if I understand your use of ampgate correctly).
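To make “query with a basic average” concrete: collapse a slice’s MFCC frames into one mean vector, then find the nearest corpus entry by Euclidean distance, which is conceptually what a fluid.kdtree~ lookup does. A minimal numpy sketch (brute-force distance rather than an actual k-d tree, and all names are illustrative):

```python
import numpy as np

def slice_mean(frames):
    """Collapse a (frames x coefficients) MFCC matrix to one mean vector."""
    return np.asarray(frames, dtype=float).mean(axis=0)

def nearest(corpus_means, query):
    """Index of the corpus entry closest to the query vector."""
    d = np.linalg.norm(np.asarray(corpus_means) - np.asarray(query), axis=1)
    return int(d.argmin())

# Two toy "slices", each a few frames of 2 coefficients:
slice0 = [[0.0, 0.0], [0.2, 0.0]]
slice1 = [[5.0, 5.0], [5.2, 5.0]]
corpus_means = [slice_mean(slice0), slice_mean(slice1)]

# Sanity check: querying with a slice's own frames should find that
# same slice. If this fails on your real data, the query stage is
# broken, regardless of the mic.
print(nearest(corpus_means, slice_mean(slice0)))  # 0
print(nearest(corpus_means, slice_mean(slice1)))  # 1
```

This self-query test is exactly the “replace the mic with the original sounds” check from earlier in the thread, reduced to its bare logic.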

If that, on the original material, does not bring back what you expect, you have to sort that out before using a live mic: it means that you are not finding the material that is already in the database…