Onset-based regression (JIT-MFCC example)

This is a vector of truth: I am building my understanding candidly, on this website and in the PA learning experiments folder. The other two actually know what they are doing. The dialogue between my ignorance (and that of those of you who share similar questions) and their knowledge is the point of this project: finding a place where we can discuss and share knowledge, exploration and vision without one person doing all the tasks (coding, exploring, poking, implementing, bending), so that we all win in this dialogue. Does that make sense?

So the extent of my knowledge is shared here 100%, and it isn't much, but it keeps our questions focused on what we want. The playing/poking part is quite important, because it lets you build a different bias into your knowledge and quest. Otherwise I could send you SciKit Learn tutorials (which I have not done myself, for the reason I just gave you) and you could come at it from a data-scientist perspective… An in-between could be Rebecca Fiebrink's class on Kadenze, which I've done a part of, but again she offers her own ways to poke at data.

Now, if you want to see your data, you have a problem that we all have: visualising 96D in 2D. See below for some examples of how imperfect that is. Another approach is the clustering you get in our objects for now: it will tell you how many items are in each cluster, and you can try to see which descriptor gives you two strong classes.
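To make the "96D in 2D" problem concrete, here is a minimal sketch in Python of the classic approach, a PCA projection onto the two directions of greatest variance. Everything here is made up for illustration (the random array standing in for 200 onsets × 96 descriptors, the `pca_2d` helper); it is not how our objects do it, just a way to poke at the idea:

```python
import numpy as np

def pca_2d(data):
    """Project high-dimensional rows onto their first two principal components."""
    centred = data - data.mean(axis=0)
    # SVD of the centred data: rows of vt are the principal axes,
    # ordered by how much variance they capture
    u, s, vt = np.linalg.svd(centred, full_matrices=False)
    return centred @ vt[:2].T

rng = np.random.default_rng(0)
# hypothetical stand-in for 200 onsets x 96 MFCC-derived descriptors
points = rng.normal(size=(200, 96))
flat = pca_2d(points)
print(flat.shape)  # 200 points, now in 2D
```

The imperfection is exactly what the projection throws away: 94 of the 96 dimensions are discarded, so two onsets that land close together in the 2D plot may still be far apart in the full descriptor space.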

This thread is about the app I was talking about. This thread and this one are about other visualisation possibilities.

Again, as we discussed, one approach is to use more segregated values (further apart in the space, very different) as your training data; another is to train on the ambiguous space. I've never tried either, but you can now.
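A toy sketch of what those two strategies could mean in practice, with entirely made-up data (two synthetic classes of 96-D descriptors; the "midpoint" heuristic is just one possible way to define ambiguity, not a recommendation):

```python
import numpy as np

rng = np.random.default_rng(1)
# hypothetical descriptor data: two classes of onsets, 96-D each
class_a = rng.normal(loc=-1.0, size=(50, 96))
class_b = rng.normal(loc=+1.0, size=(50, 96))

# distance of each class-a point to the midpoint between the two class means,
# used here as a rough stand-in for "how ambiguous is this example"
midpoint = (class_a.mean(axis=0) + class_b.mean(axis=0)) / 2
dist_a = np.linalg.norm(class_a - midpoint, axis=1)

# "segregated" training set: the 10 points furthest from the ambiguous region
segregated = class_a[np.argsort(dist_a)[-10:]]
# "ambiguous" training set: the 10 points closest to it
ambiguous = class_a[np.argsort(dist_a)[:10]]
print(segregated.shape, ambiguous.shape)
```

The first set should give the regressor an easy, well-separated picture of the space; the second forces it to learn exactly where the classes blur into each other. Which works better is an open question, which is the fun part.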
