Descriptor comparison (old-school vs new-school)

Morning!

This is a pretty cool patch! I’m sure @weefuzzy and @groma will appreciate its comparison of features.

You got what we’re in for! The whole point of more freedom is more responsibility, or the other way round. A solution like @a.harker 's (or IRCAM’s, or any other single-object descriptor tool) offers relative ease of use at the cost of customization. Rebuilding that would be a waste of coding time, since those objects already exist. Alex is working on correcting the few values that are wrong in his, and you might be happy with a curated, single-object approach. But knowing you, you won’t be, so you need more objects that split up what the black-box approach does for you under the hood.

Good to know you really didn’t understand that sentence, as illustrated by the paragraph that follows. What you consider ‘normalized’ data is ambiguous in your sentence, but if you mean from 0 to 1 between 0 Hz and the sample rate, you are right. That is even less useful than the raw bin number, which the helpfile shows how to convert to Hz. @weefuzzy might be happy to know that this is a source of confusion, as we were contemplating having a Hz output (but again, think about how irrelevant Hz is to musicians. MIDI cents would probably be more perceptually relevant, but then you need log calculations, not just a linear rescaling of the results, and then you start to see that the cutoff point of where to translate between units is a sliding-scale problem that is in the eye of the (more advanced) user…)
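To make the unit question concrete, here is a minimal Python sketch of the two conversions, assuming a hypothetical 1024-point FFT at 44.1 kHz (neither value comes from the patch above). Bin to Hz is a linear scaling; Hz to MIDI cents needs the log, which is exactly why it can’t be a simple remapping of the existing output:

```python
import math

def bin_to_hz(bin_index, sr=44100, n_fft=1024):
    # Each FFT bin spans sr / n_fft Hz; bin 0 is DC.
    return bin_index * sr / n_fft

def hz_to_midicents(hz):
    # MIDI note 69 (A4) = 440 Hz, 100 cents per semitone,
    # hence the log2 -- no offset/scale of Hz can do this.
    return 6900.0 + 1200.0 * math.log2(hz / 440.0)

# e.g. bin 10 of a 1024-point FFT at 44.1 kHz:
f = bin_to_hz(10)             # ~430.7 Hz
print(f, hz_to_midicents(f))  # ~6863 midicents, 37 cents under A4
```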

As discussed: if you like the curated single-object approach, use it. If you want finer control, you need all of these. What would be even worse is a single object with all of these powerful parameters: about 50 of them; imagine the box size and the helpfile! Actually, you don’t have to imagine: IRCAM’s has a lot more configuration than Alex’s, yet less statistical power than our approach, and I can show you the ‘help’ file… and then you are stuck with those settings in that context, which is just one of the very many sub-usages of the nine objects in question…

You are in Max-land for much longer (count the scheduler steps on the left; there are a lot more), so that does come at a cost. I think it will pay off when you start comparing descriptions and matching, but keep us posted on your experiments. One idea is to bufcompose before the stats. Another is to actually match the settings: I got down to 0.64 ms, which is still worse but 40% better, just by matching the pitch descriptor strand to a 256 window and 7 frames… Also, you are getting errors at the boundaries (check the max window), so something is wrong there too… that might take a bit of CPU :slight_smile:

So, as I said: more flexibility, more power, more responsibility… which means the next step is swapping code order and features, and weighing their CPU cost against their qualitative improvement (for real, in matching, not just in the numbers; it might not be worth it, or the improvement might be astronomical!).
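If it helps, here is a rough way to run that kind of cost-vs-settings comparison outside Max. This is just an illustrative Python sketch using numpy’s FFT as a stand-in workload for a spectral descriptor strand; the 256/7 setting is the one mentioned above, but nothing here times the actual FluCoMa objects:

```python
import time
import numpy as np

def time_descriptor(n_fft, n_frames, reps=200):
    # Stand-in workload: windowed FFT magnitudes per frame,
    # roughly what a spectral descriptor strand has to do.
    frames = np.random.randn(n_frames, n_fft)
    window = np.hanning(n_fft)
    t0 = time.perf_counter()
    for _ in range(reps):
        mags = np.abs(np.fft.rfft(frames * window, axis=1))
    return (time.perf_counter() - t0) / reps * 1000.0  # ms per pass

# Compare a generous analysis against the tighter setting above.
print("1024-point FFT x 100 frames:", time_descriptor(1024, 100), "ms")
print(" 256-point FFT x   7 frames:", time_descriptor(256, 7), "ms")
```

The same pattern applies in the patch: fix everything but one setting, measure, and only keep the expensive option if the matching audibly improves.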

Also, don’t forget that some of our code will be optimized further in the future, though it will surely never get anywhere near a single black-box object!