Here we go, slightly late but with some cool stuff!
TB2-Alpha06
Download it here:
https://huddersfield.box.com/s/raoviadwr6lx7q66c32dm52m783nexpy
Looking through the examples now.
Don’t know if it’s because I’m getting a couple of:
fluid.dataset~: Point not found
fluid.dataset~: Point not found
errors for example 10, but I don’t see (or hear) different results when I select the weighted/unweighted examples.
The fact that there’s no gaps, and as @tutschku pointed out, there’s rhythm/harmony going on, makes it really hard to assess “timbre” over that period of time.
It’s not. Something’s loadbanging that shouldn’t be, but to no effect.
For a lot of the points it turns out there isn’t any difference (perhaps the text at the top should be read as a hypothesis), and where there is, it’s almost only in the last couple of items. Perhaps that’s a function of the smallness and homogeneity of the dataset, but clearly weighting the MFCCs isn’t guaranteed to have an especially radical effect.
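For intuition, here’s a minimal Python sketch (synthetic values, not the actual patch data) of why weighted and unweighted MFCC means can end up nearly identical: when the per-frame weights are roughly uniform, the weighted mean barely moves.

```python
import numpy as np

rng = np.random.default_rng(0)
mfcc = rng.normal(size=(100, 13))  # stand-in: 100 frames x 13 MFCC coefficients

# near-uniform weights, as you might get from a fairly homogeneous segment
weights = np.clip(1.0 + rng.normal(scale=0.05, size=100), 0, None)

unweighted = mfcc.mean(axis=0)
weighted = np.average(mfcc, axis=0, weights=weights)
print(np.abs(weighted - unweighted).max())  # small when weights are near-uniform
```

The weighting only bites when the weights vary a lot across frames, which may be why the difference shows up for so few points here.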
There’s a [delay 0] above the counter in the playback subpatcher. Changing its value will allow you to intersperse with whatever gap you wish.
Trying a different sound (which might yield more radical results) should be as simple as putting something different in the message box at (1)
Gotcha, managed to get it working with some other files and stuff.
I could be wrong here, but doesn’t this approach (segmentation into arbitrarily sized segments and then running summary statistics on them) bias towards selecting shorter segments? The means (weighted or otherwise) of long segments will kind of turn into mush, whereas shorter segments may have more clearly defined values.
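That intuition is easy to check with a toy Python sketch (synthetic data, not the patch): the spread of segment means shrinks as segments get longer, so long segments all drift towards the global mean while short ones keep distinctive values.

```python
import numpy as np

rng = np.random.default_rng(1)
feature = rng.normal(size=4096)  # stand-in for a per-frame feature track

def segment_means(x, seg_len):
    """Mean of each consecutive non-overlapping segment of length seg_len."""
    n = len(x) // seg_len
    return x[: n * seg_len].reshape(n, seg_len).mean(axis=1)

short_spread = segment_means(feature, 8).std()    # many short segments
long_spread = segment_means(feature, 512).std()   # few long segments
print(short_spread, long_spread)  # long-segment means cluster near the global mean
```

If the matcher rewards distinctive feature values, that clustering would indeed tilt selection towards shorter segments.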
It’s not immediately obvious how to input a (pre-segmented) folder of sounds and have it play from that folder (without overhauling the playback section of the patch), but I imagine that would probably be a handy way to compare. I’ll try and build the weighting into my existing comparison patch and see how that fares.
Example 11 also spams the Max window with:
fluid.bufstats~: Invalid weights
fluid.bufstats~: Invalid weights
fluid.bufstats~: Invalid weights
fluid.bufstats~: Invalid weights
fluid.bufstats~: Negative weights clipped to 0
fluid.bufstats~: Invalid weights
fluid.bufstats~: Invalid weights
fluid.bufstats~: Invalid weights
fluid.bufstats~: Invalid weights
fluid.bufstats~: Invalid weights
fluid.bufstats~: Invalid weights
fluid.bufstats~: Negative weights clipped to 0
I think one for every segment in the analysis. Things seem to play back inside the subpatches, but I need to read/investigate to see whether things are doing what they’re meant to be doing.
When I ran it the first time I got that Negative weights clipped to 0 error when running the MFCC matching subpatch, but not the second time.
This is normal. These are the exacting stats we run on pitch, which yield invalid frames that get zeroed out. I’ll explain more next week, but the idea is that if you want to trust pitch, you need to be exacting, and that way some of the stats you get are invalid. In SC we can decide on verbosity. In Max we could too, but it seems that isn’t happening here (the famous flag seems not to be behind this warning; @weefuzzy, what do you think?)
At the same time, maybe the user needs to be told that their stats are super hard to match…
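To make the mechanism concrete, here’s a minimal Python sketch (a hypothetical function, not the FluCoMa implementation) of the idea described above: confidence values used as frame weights, negatives clipped to zero, and an all-zero weight vector treated as invalid, with the stats zeroed out.

```python
import numpy as np

def validate_weights(confidence, floor=0.0):
    """Hypothetical sketch: pitch-confidence values used as frame weights.
    Negative weights are clipped to the floor; if nothing survives, the
    weight vector is invalid and the caller would emit all-zero stats."""
    w = np.asarray(confidence, dtype=float)
    if (w < floor).any():
        print("warning: Negative weights clipped to 0")
        w = np.clip(w, floor, None)
    if not w.any():
        print("warning: Invalid weights")
        return None  # caller returns all-zero stats for this segment
    return w

# a segment where the tracker was never confident produces an invalid vector:
print(validate_weights([0.0, 0.0, 0.0]))
```

Under that model, a very exacting pitch analysis naturally produces many segments whose weights all collapse to zero, hence one warning per segment.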
the famous flag seems not to be behind this warning @weefuzzy what do you think?)
at the same time, maybe the user needs to be told that their stats are super hard to match…
Yeah, it turns out that code path doesn’t check the attribute. This can change.
As stated: I’m not sure how the creative coders would then know their query is very exacting.
I need to play with it more, but it seems to me that if you’re putting something in the Max window, “something’s wrong”.
Again, I need to play with the patch/objects/context, but my gut tells me that either no returns are found or nothing is played.
Seems like a pretty canonical instance of when you’d want a warning to me (something is recoverably wrong, insofar as bufstats~ will return all 0s in this case). However, in batch processes like this, the volume of messages is a bit much, so it should probably be hidden behind the @warnings attribute.
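The gating being suggested could look something like this toy Python sketch (the class and names are hypothetical, not the FluCoMa API; only the @warnings attribute name comes from the discussion above):

```python
class BufStatsSketch:
    """Toy model of gating per-segment warnings behind a verbosity flag."""

    def __init__(self, warnings=False):
        self.warnings = warnings  # off by default: batch runs stay quiet
        self.log = []

    def process(self, weights):
        if not any(w > 0 for w in weights):
            if self.warnings:  # only report when the user opted in
                self.log.append("Invalid weights")
            return [0.0] * len(weights)  # recoverable: all-zero stats
        total = sum(max(w, 0.0) for w in weights)
        return [max(w, 0.0) / total for w in weights]

quiet = BufStatsSketch(warnings=False)
quiet.process([0.0, 0.0, 0.0])
print(len(quiet.log))  # nothing logged when warnings are off
```

The recoverable condition still produces a defined result (all zeros) either way; the flag only controls whether the console hears about it.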
(IMO, only red in the console means something’s gone actually wrong)
Yeah, sorry. I meant specifically in context of “loads of message spam”. Not just a philosophical stance (though I do lean towards the “quiet” side of things).
Max on Windows64 people, I’ve added it to the same folder. Thanks @weefuzzy for the compile (and myself for the tests).