(aah, it’s nice to be able to properly quote formatted text again)
Gotcha, and that makes sense.
Out of tangential curiosity, is returning Nyquist/2 the norm when working with spectral descriptors and silence? It makes sense in that it's probably mathematically (?) correct, since all the frequencies are at equal loudness (none).
I should poke at this and see. It's just hard because I can't really see/access the actual zero-padding that fluid.spectralshape~ is doing, vs what I can feed it with a larger analysis window. So it could very well be the same results, just hard to see. Running the patch above on general demo sounds, the larger window gives me visible/readable Nyquist/2 values at the edges, since I can see them before throwing them away.
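For what it's worth, here's a rough numpy sketch of why an "equal loudness everywhere" spectrum lands at Nyquist/2 (this isn't FluCoMa code; `centroid` is just a hypothetical helper): the centroid is the magnitude-weighted mean frequency, so when every bin carries the same weight, it's just the mean of frequencies from 0 to Nyquist, i.e. Nyquist/2.

```python
import numpy as np

sr = 44100
n_fft = 1024
# bin centre frequencies from DC up to Nyquist
freqs = np.linspace(0, sr / 2, n_fft // 2 + 1)

def centroid(mags, eps=1e-12):
    # magnitude-weighted mean frequency; eps guards the 0/0 case
    # for true silence (actual implementations handle that case
    # differently, e.g. pinning to Nyquist/2 or reporting 0)
    return np.sum(freqs * mags) / (np.sum(mags) + eps)

flat = np.ones_like(freqs)  # every bin at equal magnitude
print(centroid(flat))       # ~ sr/4, i.e. Nyquist/2 = 11025 Hz
```

So the Nyquist/2 value is the centroid of a flat spectrum; for literal all-zeros input it's 0/0, and what you see reported depends on how the implementation resolves that.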
I’m still quite green at framelib, so it would take me longer to figure out how to start building something like that than to actually build it.
At the moment I’ve not yet done that. The tests I did a while back (above) were conceptually wrong, as I was mirroring the results I got from the analysis, rather than the audio I fed into the analysis. It still did something, since there was a stats step that comes next anyway, but I imagine the results would be very different from actually mirroring the audio.
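Just to be concrete about the distinction (a numpy sketch, not FluCoMa/framelib code): mirroring the audio means reflect-padding the signal before analysis, so the edge frames see plausible neighbouring samples instead of zeros, which is different from flipping the descriptor values after the fact.

```python
import numpy as np

x = np.arange(8.0)  # stand-in for a short audio buffer

# reflect-pad so edge analysis frames see mirrored audio instead of
# zeros; np.pad with mode="reflect" mirrors around the edge samples
padded = np.pad(x, 3, mode="reflect")
print(padded)  # [3. 2. 1. 0. 1. 2. 3. 4. 5. 6. 7. 6. 5. 4.]
```

Any windowed analysis run over `padded` then has real(ish) context at the edges, whereas mirroring the analysis output just reflects whatever edge artifacts were already baked in.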
I think what would take me the least amount of faffing would be to hand-craft a couple of audio examples and just analyze those static things, rather than working out the parallel/reverse JIT audio analysis, which has to concatenate the perfect bits of audio for this all to work.