Segmentation by clustering

First, this is really cool!

I, too, have had a hard time finding meaningful segments “automatically” by running longer bits of audio through any of the slicers. Testing changes is slow too, since it’s hard to see/hear the results without building a load of plumbing around the process.

This got me thinking that it would be super awesome to have a tweakable interface (like Reaper or Simpler) where you can adjust parameters and see the slices update in real time. Obviously that’s not possible for every process, but you could pre-render a load of settings. Somewhat like @tremblap’s automatic threshold finder, you’d select a rough ballpark of how many slices you want, or a rough starting threshold, and it would iterate through settings (as in @tremblap’s patch), but what I’m suggesting would also cache all the results along the way, so you can scroll through the settings to find what you need. Or, if you have time to spare, have it generate a ton of outputs up front, so you can then tweak the thresh in “realtime” and see the resulting slices in the material.
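To make the idea concrete, here’s a minimal sketch (pure Python, with a toy amplitude envelope and a hypothetical threshold-crossing slicer standing in for a real analysis) of the “pre-render every setting, then scroll” approach: sweep a range of thresholds once, cache each result, and then browsing a threshold is just a lookup rather than a re-render.

```python
def slice_onsets(envelope, threshold):
    """Return indices where the envelope crosses above the threshold."""
    onsets = []
    above = False
    for i, value in enumerate(envelope):
        if value >= threshold and not above:
            onsets.append(i)
        above = value >= threshold
    return onsets

def cache_threshold_sweep(envelope, thresholds):
    """Pre-render slice points for every threshold in the sweep."""
    return {t: slice_onsets(envelope, t) for t in thresholds}

# Toy amplitude envelope standing in for an analysed audio file.
envelope = [0.0, 0.2, 0.9, 0.1, 0.0, 0.6, 0.7, 0.05, 0.0, 0.8]

# Sweep a ballpark of thresholds once, up front.
cache = cache_threshold_sweep(envelope, [0.1, 0.3, 0.5, 0.7])

# "Scrolling" the threshold is now instant -- no re-rendering.
for t, onsets in cache.items():
    print(t, onsets)
```

A real version would slice actual audio (and the cache could hold rendered waveform overlays rather than bare indices), but the structure is the same: pay the analysis cost once per setting, then make browsing free.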

A super complex (and useful) variation of this would be doing the same kind of thing, but with the intermediary processes (i.e. what you’re doing in Python), so you can tweak things and see/hear the results to assess how well they’re working, rather than rendering, checking, rendering, checking, and so on…