What is the best way to associate a corpus fluid.plotter point with its original file?

Hello! I am working on a sound effect library exploration patch in Max that uses fluid.plotter to display slices of the sounds as points. The idea is to be able to select a point on the fluid.plotter and have the original file’s name, path, and start and stop points output as messages. Additionally, I need the slice points produced by fluid.bufnoveltyslice~ to be associated with the original files rather than with the concatenated buffer. With this information, I can drag and drop the file from Max into REAPER, where it is trimmed to the specified slice points. My patch is fairly similar to James Bradbury’s corpus exploration tutorial patch. I considered giving each imported audio file its own buffer, similar to how CataRT handles this, but could not figure out the best way to go about it.

I can see that fluid.concataudiofiles outputs a dictionary that contains all of the imported files’ paths and bounds. I would need to somehow associate that information with the slices made by fluid.bufnoveltyslice~. Any ideas on the best way to go about doing this?

Welcome @agrabowska

You’ve got a couple of options here.

  1. Encode all this information into the point IDs, so that when you’re making your DataSet, each ID contains both the buffer name and the start-stop points for the slice.
  2. Keep a parallel LabelSet to associate IDs with buffer names, and a parallel DataSet to associate IDs with start-stop points.

(or some combination of these, like keeping the buffer name in the ID but still using a DataSet for slice points)

1 has the advantage of being conceptually compact – there’s only ever one DataSet to worry about. 2 has the advantage of being more scalable (IMO) and probably leading to more readable code. For instance, encoding the start-stop points into the IDs will probably require a regex to get them out again, vs just talking to a DataSet.
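If it helps to see the shape of those two options in code, here’s a minimal SuperCollider sketch (the same structure maps onto fluid.dataset~, fluid.labelset~ and dict in Max). The file names, slice bounds and feature values are placeholders, not real analysis data:

```supercollider
(
// assumes the server is booted
fork {
    // hypothetical inputs: file names, slice bounds (in samples), one feature value per slice
    ~fileNames   = ["kick.wav", "snare.wav"];
    ~sliceStarts = [0, 44100];
    ~sliceStops  = [44100, 88200];

    // Option 1: one DataSet, everything encoded in the identifier
    ~dsEncoded = FluidDataSet(s);

    // Option 2: plain identifiers, with parallel structures for names and bounds
    ~dsFeatures = FluidDataSet(s);
    ~dsBounds   = FluidDataSet(s);
    ~lsNames    = FluidLabelSet(s);

    ~sliceStarts.do { |start, i|
        var stop    = ~sliceStops[i];
        var feature = Buffer.loadCollection(s, [1.0 * i]); // stand-in for a real feature vector
        var bounds  = Buffer.loadCollection(s, [start, stop]);
        s.sync;

        // Option 1: "kick.wav_0_44100" carries name and bounds, but needs parsing (e.g. a regex) later
        ~dsEncoded.addPoint("%_%_%".format(~fileNames[i], start, stop), feature);

        // Option 2: the identifier is just "slice-<i>"; name and bounds live alongside it
        ~dsFeatures.addPoint("slice-%".format(i), feature);
        ~dsBounds.addPoint("slice-%".format(i), bounds);
        ~lsNames.addLabel("slice-%".format(i), ~fileNames[i]);
    };

    ~lsNames.print;
    ~dsBounds.print;
};
)
```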


Echoing what Owen said, you can either encode it into the identifier or keep the data nearby, such as in a labelset~. You might also use a coll to store the information, as it has a similar interface (key/value storage).

The way that I stored, segmented, analysed and called back slices is just one way of doing it. You could also use a polybuffer~ and iterate over that to do the segmentation of each file and the subsequent analysis. Do let us know how you go, because it might take a few failed experiments for you to settle on a workflow and code structure that satisfies your needs.
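For anyone who wants that per-file idea spelled out in code, here is a rough SuperCollider sketch (not James’s Max patch) that reads each file into its own buffer and slices it separately, so the slice points are already expressed in frames of the original file. The paths and the threshold value are assumptions:

```supercollider
(
fork {
    ~paths = ["kick.wav", "snare.wav"]; // hypothetical file paths
    ~slicesByFile = Dictionary.new;

    ~paths.do { |path|
        var src = Buffer.read(s, path);
        var idx = Buffer(s); // destination for the slice points
        s.sync;

        // slice this file on its own, so the indices are frames within this file,
        // not within a concatenated buffer
        FluidBufNoveltySlice.processBlocking(s, src, indices: idx, threshold: 0.5);

        idx.loadToFloatArray(action: { |frames|
            ~slicesByFile[path] = frames;
            "%: slice points %".format(path, frames).postln;
        });
    };
};
)
```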


Hi @agrabowska,

I know you’re in Max, so this may not apply to you, but I thought I’d drop a note about SuperCollider here in case any SuperCollider users come looking for a similar workflow. In SuperCollider, everything that @weefuzzy said still applies.

And I have one additional option to add:

  3. I have used a parallel FluidDataSet to store slice start and stop points, and have also put in there an integer index indicating which file the slice is part of (of course, this means I keep the audio files in an array somewhere that can be indexed into).

This can be useful because, if one is using the .kr method of FluidKDTree to keep everything on the server, you of course can’t get the buffer name as a symbol on the server, so an index lets you switch to the correct buffer entirely on the server. (This index could even be the buffer’s own server index, but that would require the buffer to get the same index every time the code is run, which is not guaranteed depending on context.)
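Here is a minimal sketch of that extra option, with hypothetical paths and slice data: a parallel FluidDataSet whose points are [start, stop, fileIndex], and a language-side lookup that uses the index to pick the right buffer. The server-side .kr usage would follow the same shape, with the index driving buffer selection:

```supercollider
(
fork {
    // hypothetical file paths, each loaded into its own buffer
    ~buffers = ["kick.wav", "snare.wav"].collect { |p| Buffer.read(s, p) };

    // hypothetical slice list: [fileIndex, startFrame, stopFrame]
    ~slices = [[0, 0, 22050], [0, 22050, 44100], [1, 0, 30000]];

    ~sliceInfo = FluidDataSet(s);
    s.sync;

    ~slices.do { |sl, i|
        var point = Buffer.loadCollection(s, [sl[1], sl[2], sl[0]]); // [start, stop, fileIndex]
        s.sync;
        ~sliceInfo.addPoint("slice-%".format(i), point);
    };

    // language-side lookup: recover the bounds and use the index to pick the buffer
    ~target = Buffer.alloc(s, 3);
    s.sync;
    ~sliceInfo.getPoint("slice-2", ~target, action: {
        ~target.loadToFloatArray(action: { |vals|
            var buf = ~buffers[vals[2].asInteger];
            "file % : frames % to %".format(buf.path, vals[0], vals[1]).postln;
        });
    });

    // on the server, FluidKDTree.kr can write this same three-value point into a buffer,
    // and the third value (the file index) can drive buffer selection without ever
    // needing the file name as a symbol, as described above
};
)
```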