SP-Tools - Machine Learning tools for low-latency real-time performance

@johannes depending on how fluent you are in Max, it could be good to try installing the tip of the FluCoMa nightlies instead of the package manager one, since we are working on fixes and all sorts of stuff between versions and do not have anyone on M1 on the team anymore… it would be good to see if the crash is Rod’s fault or ours :slight_smile:

If you’re not happy to install outside of the package manager, no worries. I think we are due a release soon-ish.

That’s good to know Rodrigo. Thanks for your feedback.
I first open the overview patch from the Extras menu and from there I open the subpatchers. It only crashes when I click within the patchers while they are running. I tried running the example patchers without any mouse activity and they worked without any crashes.

I get crashes too - on an M1 machine.
I experience Max crashes while using sp.concatcreate, approximately every fourth time (with the same file).
In the console there are about 60 “fluid.dataset~: Invalid buffer” errors before the process starts.
It crashes after some time processing.
Here is the crash report: https://goonlinetools.com/snapshot/code/#ubegfqnguiqzpagxrxqkw

It also sometimes crashes while using concatsynth~.

I wanted to explore the behavior a little bit more but saw that I am not alone and thought I might share it too…

Still exploring your abstractions - so good!

All of that is consistent with that threading issue/bug, as the concat-based stuff is what is exhibiting the error message at the moment.

Do either of you have any crash reports from the process?

I’ve just taken a quick look at this against a debug build to see if I could work out what was going on. The warning you were seeing (process() called on empty queue) isn’t something you ought ever to see. It was happening because the result of queueing a job wasn’t being checked in the C++, which then tries to process that (non-existent) job anyway. So what you should see instead is an error (already processing).

That’s happening because the objects are being banged from the high priority thread before they’ve finished. This is the gamble with using the buf* objects as though they were real-time. They still take time to complete. Really the only way to deal with this (if you don’t want the errors) is to guard against re-triggering before objects are done processing.
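
To make that concrete, here’s a rough C++ sketch of the two ideas (all names are made up for illustration; this isn’t the FluCoMa source): a bang only calls process() when the job was actually accepted, and a busy flag refuses re-triggers until the running job has finished.

```cpp
#include <atomic>
#include <iostream>

// Illustrative only: hypothetical names, not the FluCoMa implementation.
struct BufObject {
  std::atomic<bool> busy{false};

  // Stand-in for queueing a non-real-time analysis job.
  // Fails if a job is already in flight (compare-and-swap on the flag).
  bool enqueueJob() {
    bool expected = false;
    return busy.compare_exchange_strong(expected, true);
  }

  void onBang() {
    if (!enqueueJob()) {
      // What you *should* see instead of the empty-queue warning.
      std::cerr << "already processing\n";
      return;
    }
    process();          // in reality this runs asynchronously and takes time
    busy.store(false);  // re-arm on completion (here: immediately after)
  }

  void process() { /* the actual buf* analysis would happen here */ }
};
```

On the patcher side, the onebang-based guard discussed a few posts down plays the same role as that busy flag.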

I’m not certain yet where the crashiness comes in.

I now know where the crashiness comes in, I think.

As in, a vanilla onebang 1 loop kind of thing?
This wouldn’t/shouldn’t impact the more onset-based things, as those don’t fire very quickly/often, so it would just lead to “dropped frames” in the real-time stuff, which is definitely preferable to crashes.

Exciting!

Yeah, a onebang at the top should do it.

The crashes are related, but not necessarily going to be fixed without some C++ing. My thinking is that the queue isn’t guarded against simultaneous access from both Max threads, so it’s possible that objects could get banged in close succession from the main and high-priority threads, and a job’s state gets effectively orphaned but then something tries to read off it anyway (I need to go and sit in a cave and commune with the Threading Gods before it’s clear in my head).

Anyway, it would explain why your crashes seem to be correlated with clicky-mousey stuff in these particular patches: the objects end up getting hit from multiple threads.

I’m not yet clear on the least icky way to fix it (hence needing the cave). The possibility of having the scheduler in the audio thread complicates things, because it means I can’t just put a lock around the queue :cry:
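
For the curious, the naive “just put a lock around the queue” version would look something like the sketch below (made-up names, not the actual FluCoMa code). The blocking lock() is exactly the problem: if the scheduler ends up in the audio thread, waiting on a contended mutex there risks dropouts, so that path would need something non-blocking instead, e.g. a try_lock or a properly lock-free queue.

```cpp
#include <mutex>
#include <optional>
#include <queue>

// Hypothetical job queue shared between the main, scheduler and
// (possibly) audio threads. Purely illustrative.
struct Job { int id; };

class JobQueue {
  std::queue<Job> jobs;
  std::mutex m;

public:
  // Called from whichever thread banged the object.
  void push(Job j) {
    std::lock_guard<std::mutex> lock(m);  // blocking: fine off the audio thread
    jobs.push(j);
  }

  // Naive pop: also blocks. Not something you want to wait on at audio rate.
  std::optional<Job> pop() {
    std::lock_guard<std::mutex> lock(m);
    if (jobs.empty()) return std::nullopt;
    Job j = jobs.front();
    jobs.pop();
    return j;
  }

  // Audio-thread-friendlier variant: give up immediately if the lock is
  // contended and come back on the next tick instead of blocking.
  std::optional<Job> tryPop() {
    std::unique_lock<std::mutex> lock(m, std::try_to_lock);
    if (!lock.owns_lock() || jobs.empty()) return std::nullopt;
    Job j = jobs.front();
    jobs.pop();
    return j;
  }
};
```

Even the try_lock route isn’t a complete answer (a contended audio-thread path just keeps coming back empty-handed), hence the cave.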

I should add that the crashes can be avoided from Max (pending a C++ fix), but it’s slightly tedious albeit Good Practice: enforce thread discipline in your abstractions. I see most (all) give you an option to switch between @blocking 1 and 2. It’s probably an idea to accompany that switch with a choice of pathways at the top of an abstraction that coerce messages onto the appropriate thread.

Kinda like:

Hmm, how would this then go with the onebang stuff as well (like at a route bang level above the gate 2, for this specific example)?

If I understand this correctly, this would just force the high-priority bangs from above to kindly respect what stuff downstream wants?

For the most part the @blocking 1 happens for literally only the first hit, just to size/resize buffers and such (unless I’m running offline analysis), so I guess this would just sanitize that initial hit, with the onebang thing preventing re-bang-ing while another process is happening.

For this bit from above: when I’m getting the errors/crashing, I’m not manually sending bangs either. I’m doing unrelated UI stuff like tabbing to another app, or switching to another tab in the help file, etc… As in, not interacting with the relevant code at all.

onebang emits on the thread it was called from afaik.

Yes. defer (or qlim) will force messages on to the main thread if they’re not already there. [delay 0] does the same for bangs with respect to the high priority thread.

Something is, or you wouldn’t get the crashes you do. It doesn’t even need to be bangs (though this would explain it best).

I’ve implemented both of @weefuzzy’s suggestions here. So far no yellow warnings (or crashes), but I no longer have/use the old massive overview patch.

edit: I accidentally left it running for a couple hours while I went out and ran errands, and not a single yellow error the whole time! Also tested it with the big chonky overview patch and no issues there either.

@johannes and @MartinMartin , can you try replacing these files in your 0.6 install and see if you still get crashes on the overview patch? (The M4L devices won’t change until I manually change their guts) If this works I’ll tidy up the changes for the next release and propagate it out to the M4L devices as well.

Archive.zip (59.7 KB)

I replaced the files and was testing the sp.corpusplayer~ help patcher for about 10 min and had no crashes. So yeah, that seems to solve it – at least for that particular help patcher! Thanks Rodrigo.

Ok, I downloaded the Archive.zip and replaced the files - only 4 maxhelp files, no abstractions (?)

I still got these 4 yellow errors while using the concatsynth (also just letting it play in the background):

fluid.bufloudness~: Process() called on empty queue
fluid.bufmelbands~: Process() called on empty queue
fluid.bufmfcc~: Process() called on empty queue
fluid.bufloudness~: Process() called on empty queue

I didn’t get errors using concatcreate.

Also the M4L concat match was working without fuss.

Max crashed while using concatcreate and concatsynth simultaneously.
Here is the crash report:
https://goonlinetools.com/snapshot/code/#no8d95sywl8spf9z1lk7

I was working for about 2 hrs. So in my experience there are no crashes when the two processes aren’t used in parallel.

Potentially a bit of a silly question, but did you quit/reopen Max after replacing the files? (The helpfiles would be unchanged either way.) Otherwise it keeps on using the ones loaded into memory.

The M4L devices are, as of now, still unchanged too.

Yes, I replaced them first and opened Max afterwards.

But yeah, I’m still exploring your abstractions and, through them, the FluCoMa objects. If I can be more precise in the following days I will come back here…

Hmm.

From (my limited understanding of) the crash report, it does seem to point to the same threading funny business. I would have thought the changes made it safe across the board.

Here’s the v0.7 update:

v0.7 - SP-Tools v0.7 - Ramps, Data Processing, Novelty, and Timestretching

  • BREAKING CHANGES - all objects that had a separate control inlet now take those messages in the left-most inlet
  • added new “ramp” objects for structural and gestural changes (sp.ramp, sp.ramp~)
  • added new “data” objects for transforming, looping, and delaying descriptors (sp.databending, sp.datadelay, sp.datagranular, sp.datalooper~, sp.datatranspose)
  • added novelty-based segmentation for determining changes in material type (sp.novelty~)
  • added timestretching functionality to sp.corpusplayer~ and the Corpus Match M4L device

/////////////////////////////////////////////

Thanks to @weefuzzy there’s some nice bugfixes and workarounds to make things safer/nicer.

Rod you are on fire :fire: This looks like a fantastic update.

Really pleased with the data stuff. Not really seen descriptors tweaked in that way before.

So much cool stuff going on here, thanks for sharing.

To what extent do you think some of your tools/approaches work for a non-SP hardware setup?

For example, I’m currently messing around with a KeyTam but using piezos and other simpler/commodity sensors, and I would love to play around with your patches!