A patcher to better link descriptors and perception

Despite the great documentation provided around FluCoMa about sound descriptors, most of them still feel very abstract to me - and I think it’s much worse for my students. So I wanted a tool to help figure out how a specific descriptor relates to perception.

I have made a patcher based on the corpus explorer patch. Instead of organizing the sounds according to all 91 dimensions of the analysis at once, each dimension can be selected individually from a menu. All the segments are then sorted according to that single descriptor and can be played sequentially.
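The sorting step itself is simple enough to sketch outside Max. Here is a minimal, purely illustrative Python version (the function name and the numbers are made up, not taken from the patcher): each row is one segment's analysis vector, and we reorder the segments by a single chosen dimension.

```python
# Hypothetical illustration of the sorting step done in the patcher:
# each row is one segment's analysis (e.g. 91 dimensions of MFCC+stats),
# and we reorder the segments by a single selected dimension.

def sort_segments_by_dimension(analyses, dimension):
    """Return segment indices ordered (ascending) by the chosen descriptor value."""
    return sorted(range(len(analyses)), key=lambda i: analyses[i][dimension])

# Three fake segments with 4-dimensional analyses (made-up numbers):
analyses = [
    [0.2, 5.0, 1.0, 0.9],  # segment 0
    [0.7, 2.0, 3.0, 0.1],  # segment 1
    [0.4, 9.0, 2.0, 0.5],  # segment 2
]

order = sort_segments_by_dimension(analyses, dimension=1)
print(order)  # [1, 0, 2] — segments ordered by their value in dimension 1
```

Playing the segments back in `order` then lets you hear how that one descriptor evolves across the corpus.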

Moreover, as segmentation can be a bit difficult to understand and master, I left it out of the patcher and instead require the user to provide a folder of sounds, each sound being treated as one segment.

Also, as the patcher was initially intended for my students, the original version is in French, but I translated it into an English version - you get both in the .zip. Both versions are extensively commented.

Please note that the player uses a poly~ object because I needed a polyphonic player for another patch - don’t forget to keep it alongside the patcher. Also, a short 10 ms fade-in and fade-out is applied to each sound in the player.

Please let me know about any issues.

I might program some other versions in the near future to help test other sound descriptors.


Play sounds sorted according to a single descriptor - MFCC+stats - V1 - EN+FR.zip (26.0 KB)