Dear all
In the lab, we just finished and submitted a paper. It has not been reviewed yet, so please do not share any of this, but I am really excited to show it to you as team members!
On this forum, and in the lab, we have had many discussions since the plenary on how to make sense of the segments once they were produced. I posted a few descriptor+dada-based visualizations, mostly of the CataRT type, but those, plus my automatic orchestration via nearest match, left me a bit short-changed, and some of you had similar feelings on similar tests.
Anyway, we sat and thought hard. As we slowly establish the foundations for the 2nd toolbox, ideas of visualization (2D or 3D) and dimension reduction (since we can get many descriptors, on various time scales too) were floated, some experiments were done, and we ended up comparing the different algorithms available for both tasks together. I'm quite encouraged by the early results, so I wanted to share them with you PRIVATELY.
This video is made with the first prototype (no colour mapping; size is length, so mostly x/y) from a bunch of modular synth sounds, segmented by their amplitude. In the first part, the segments are described through MFCCs in time and data-reduced via Isomap. I find the clusters to be significant, and the shape to be inspiring. The second part, on the same sample set, is described via an autoencoder on the spectrogram and data-reduced through t-SNE. The form is different, and the localities are differently convincing.
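For those curious about what such a description+reduction pipeline can look like, here is a minimal, hypothetical sketch using librosa and scikit-learn as stand-ins; the segment data, frame settings and parameters are illustrative assumptions, not our research code.

```python
# Hypothetical sketch of the two pipelines: descriptor extraction per segment,
# then reduction to 2D with Isomap (part 1) or t-SNE (part 2).
import numpy as np
import librosa
from sklearn.manifold import Isomap, TSNE

rng = np.random.default_rng(0)
sr = 44100
# stand-in for amplitude-segmented modular synth sounds: 60 short noise bursts
segments = [rng.standard_normal(sr // 4) for _ in range(60)]

def mfcc_summary(y, sr=44100, n_mfcc=13):
    """Describe one segment by its MFCCs in time (here summarised by mean/std)."""
    m = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([m.mean(axis=1), m.std(axis=1)])

X = np.stack([mfcc_summary(y, sr) for y in segments])

# part 1: MFCC description reduced to 2D with Isomap
xy_isomap = Isomap(n_neighbors=10, n_components=2).fit_transform(X)

# part 2 would swap the description for an autoencoder bottleneck on the
# spectrogram; here the same features just illustrate the t-SNE step
xy_tsne = TSNE(n_components=2, perplexity=15).fit_transform(X)
```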
On this alpha test devised by @groma, I made a deliberately non-musical video: I click on neighbors so you can appreciate their proximity. The playback engine simply plays back when I click, nothing exciting yet, but as research code I find it inspiring. The idea for the paper was to compare the different algorithms, with indications of where they excel (and where they fail). We hope to port the most potent one(s) to help creative coders make their own description, reduction, mapping, and interface, relevant to their practice. A rough sketch of the click-to-neighbor idea follows.
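This is only an illustration of the interaction, not the actual playback engine; the 2D positions `xy` and the `play_segment` call are hypothetical names.

```python
# Rough illustration of click-to-neighbor: given a clicked point in the 2D
# plot, find the indices of the closest segments.
import numpy as np
from sklearn.neighbors import NearestNeighbors

xy = np.random.rand(100, 2)          # stand-in for the reduced 2D positions
nn = NearestNeighbors(n_neighbors=5).fit(xy)

def on_click(x, y):
    """Return the indices of the segments nearest to the clicked point."""
    _, idx = nn.kneighbors([[x, y]])
    # a real interface would then trigger playback of one of them, e.g.
    # play_segment(idx[0][0])
    return idx[0]

print(on_click(0.5, 0.5))
```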
I'm sharing this with you all now because I thought it would be good for the alpha users to feel that we're listening and are inspired by their proposals, questions, and queries. Feel free to send us feedback on anything, and send ideas around. As soon as the paper is reviewed, we should be able to share the code, should you want to try it and/or see the implementation details.
pa