To resynchronize asynchronous processes (bangs and buffers)

So now that asynchronous stuff is on the horizon, there will need to be some conventions for dealing with this problem.

For example, things like this:

…won’t work in a “traditionally Max-y” manner, in that I can’t count on the left-most bang letting me know the process has finished.

So ok, I can use something like buddy instead, and when all three (or N) things have banged, I am good to go, right?

BUT once there’s a FIFO system, that’s no longer the case. If I query two of these things in a row, the fluid.bufloudness~ process might finish both requests before fluid.bufspectralshape~ has finished its first. So one would need to keep track of the outputs and match them back up with the requests that produced them.
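For illustration, here’s a minimal sketch of the kind of bookkeeping this implies (TypeScript rather than Max, with made-up names): collect the k-th result from each of N asynchronous sources and only emit them once the whole group for that request has arrived, assuming each source delivers its own results in request order.

```typescript
// Minimal sketch (not FluCoMa code): regroup the k-th result from each of N
// asynchronous sources, assuming every source delivers its own results in
// request order (i.e. each source is itself a FIFO).
type Result = number[];

class ResultZipper {
  private queues: Result[][];

  constructor(numSources: number, private onGroup: (group: Result[]) => void) {
    this.queues = Array.from({ length: numSources }, () => []);
  }

  // Call this whenever source `sourceIndex` finishes a job.
  push(sourceIndex: number, result: Result): void {
    this.queues[sourceIndex].push(result);
    // While every per-source queue has a pending result, the heads all belong
    // to the same original request: emit them together.
    while (this.queues.every(q => q.length > 0)) {
      this.onGroup(this.queues.map(q => q.shift() as Result));
    }
  }
}

// Usage: two stacked requests, where source 0 (say, loudness) finishes both of
// its jobs before source 1 (say, spectral shape) finishes its first.
const zipper = new ResultZipper(2, group => console.log("request complete:", group));
zipper.push(0, [-23.1]); // loudness, request 1
zipper.push(0, [-18.4]); // loudness, request 2
zipper.push(1, [1200]);  // spectral shape, request 1 -> emits group for request 1
zipper.push(1, [950]);   // spectral shape, request 2 -> emits group for request 2
```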

Not a massive problem by any stretch, but one that will become the “norm” when querying for multiple descriptor types, along with the corresponding plumbing around that (i.e. fluid.bufcompose~-ing a temp analysis buffer and then fluid.bufstats~-ing that).

When I talked to @weefuzzy about this yesterday, he suggested some kind of abstraction for this, but I can see it being problematic for an arbitrary number of stacked queries, as well as for the number of things being queried (which almost seems like you’d need a FIFO per stack).

Now, in my example above, I can just make the whole process serial and create a single processing chain. It would mean needing 3x fluid.bufcompose~, which isn’t the end of the world, but it’s not difficult to imagine a version of that patch with many more steps and/or things it is querying for, making a serial approach unfeasible. It also becomes tricky in a FIFO context if some things take longer than others.

/////////////////////////////////////////////////////////////////////////////////////////////////////////////

There are also parallel concerns for objects that output buffers in a FIFO context, in that there needs to be a system to manage those outputs as well.

Creating a dynamically growing buffer as the output of a FIFO stack is a problem, as it can grow without bound. At the same time, if you do want all those outputs, you can make them and stack your FIFO requests with different [numDestOutBuffPlaces](http://discourse.flucoma.org/t/naming-numing-conventions/202) arguments.
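To make that concrete, here’s a rough sketch (hypothetical names, not the proposed interface) of what bounded output places could look like: each queued request cycles through a fixed pool of destination buffers instead of growing a single one without bound.

```typescript
// Hypothetical sketch: cycle queued requests through a fixed pool of
// destination buffers, in the spirit of the numDestOutBuffPlaces idea.
class DestBufferPool {
  private counter = 0;

  // `places` = how many distinct output slots the FIFO is allowed to use.
  constructor(private baseName: string, private places: number) {}

  nextDest(): string {
    const slot = this.counter % this.places;
    this.counter += 1;
    return `${this.baseName}-${slot}`;
  }
}

// Usage: four stacked requests share two output places, so older results get
// overwritten rather than accumulating forever.
const pool = new DestBufferPool("analysis-out", 2);
for (let i = 0; i < 4; i++) console.log(pool.nextDest());
// analysis-out-0, analysis-out-1, analysis-out-0, analysis-out-1
```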

A different problem, but definitely interrelated with the (a)synchronicity problem, which is the more immediate one.

Thanks for the detailed thoughts @rodrigo.constanzo.

Indeed, asynchronous things bring some changes in how we have to approach stuff.

Yes:


----------begin_max5_patcher----------
1183.3oc6YE0jihBD9YyuBKeN2TBpwj6s82wVWkhnjLrqBVJNaxs09e+P.cz
DMgYhyMo1kGRrraft6OZZ3C+4BGucri3JO2+18qtNN+bgiiTTi.G86Nd4niI
YnJYy71Uy4Lp2RkpBDO4YB8v1RbBWMLAP+m7W5Bh1z7.F19u6+n6CIUNPrce
6u.gsCTE+TFVJuUBqlmg47SEX035sCQO30MJEk3JLki3DFsu0iTVOV9n8utd
QqyITwvJiEvqBUFSIsQ3uVrn4ukyEn.CfOAiLEUhlaTAtRa9GLXADF9jwnxp
4FU.wJq+flqDs1.PA9tAkOqvSWe.F5e6va8rWd.rQhsvv3Gx4biPk3Yu7fv5
Riu5Qr5fQXxpGs0AT7OD9UqOvwG45fNM8jafA0D2HqIB1nv.33g8aY+Suk5e
iG5AiF5AyWnW3hpNQSbqH4Fb7AXTyiX0j9DQevrWdn03gu6EB8iLboFkzvji
2dRF9EbYkvr8ZsiGpnnmXmdcoAa+FSNPwK6DQnJQAchJwuPZ6eXmTTo.e4h3
qtTgOGW0lxzLLrTbIslzkiKmkWzNfcYjApsmAxZlqTKGCW2CYDYDGxXIeGKm
a7aExJvTBsOVOPcJdOpNiucOixqH+qzAAho5wzuGkfmryTTtJ59RIAk0FedG
JIoLZiSL.qaD2ZNQvEoNsZ+fQ1BJpXjNKR6D3xDJqDAYc0NTYyTwNUFIrUIm
wxFppqeY38bs5BBkdFJxYESqrjb34qz2cLgx7qM1RMUaqoJsaEqh4aqPuLDs
4nrL8x5gC+QDkji3XNQME.86ThoHQf9bURIKKaP7pz7xHZREYwI3ePR4OKMT
+jAQyIEsIQdcyxojC3J9PYbzgpgRtnBgPT8N8pzsbbdQlHJF1fAbi5ujreAv
AxuVgvgECSwYnS80L1NAw.05N41x5Eg8JEN0lASDv2tr3kE4fCUb9dhNsaNn
KcL+3TIhlxxcC788uEZsQhVfUxpTqA2BsBdGnEgx+zAK0vZXpyZYRSfe6+SA
FvaCFis42HgpeOMIr7bLkewPQno3iyMtHcNyfkvHyPEv7ufxe5bj6E3dc+qL
BcpBWRPoQ+3HZEqtLoMTz0TbGhOhBwbBsa27u1k.0zNilOeq9.vPeH3h1kSR
KXhEsZv.HNyRyQqgxGhsp5daPuXkopSt4+oFNWB8cNF3CwwBdCNFXz4Zsv1C
Q50bNhzspyLsEw4kDAoOUpY+SE+l1aWbPycnL8N2cm76Zaz+5oAV7p+9+Iml
ta+yDRMv4mcefkTikTikTikTikTikTikTikTye3jZt+C59aLKfeyXac+DXsj
Zt52oxDNMfOzuNmkchkchkchkchkchkchkchkch8StXIAX+jKOrrSjg6vELJ
xJmCdZJKWNatpOpc8oxQYEYrghLzPf60PgFZH3cZn.CrCXNfNnIFJZNliLwP
qlCCYRHMDeOqVHLD7Trn72lMMEACfxGMuz0ggkA+XcUn4tZ+2tzWAe79Jvbe
EDIuIhwcU3c4p5EfW0SWOCYZ5JJW0NwyjctExe4JGU46yt8jFSb1slb1Mlb4
skL8Mkb9sjHr7uV7eOgcWw.
-----------end_max5_patcher-----------

There are at least two possibilities here, and which one is more desirable will vary on a case-by-case basis. In some cases you will want to group the results of processes launched simultaneously, and in others it will be ok to have some things updating more quickly than others.

If an internal queue happens, then enqueued jobs will take a snapshot of the parameters (including buffer references) at the time the job was requested, and will work with those. Anything else would be confusing, I think. This means that bundling stuff together needn’t be too hard (because you can reliably set an output buffer tied to that particular invocation; combined with using vertical pipelines for serial processes, I think this should be workable).
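As a rough illustration only (TypeScript with invented names, not the actual implementation), the snapshot-at-enqueue idea might look something like this:

```typescript
// Sketch of an internal job queue that snapshots its parameters (including
// the destination buffer name) at the moment the job is requested, so later
// parameter changes don't affect jobs already in the queue.
interface JobParams {
  sourceBuffer: string;
  destBuffer: string; // tied to this particular invocation
  startFrame: number;
  numFrames: number;
}

type QueuedJob = { params: JobParams; run: (params: JobParams) => void };

class ProcessorQueue {
  private jobs: QueuedJob[] = [];
  private busy = false;

  // Copy the parameters now; the job works with this snapshot, not with
  // whatever the object's attributes happen to be when it finally runs.
  enqueue(params: JobParams, run: (params: JobParams) => void): void {
    this.jobs.push({ params: { ...params }, run });
    if (!this.busy) this.next();
  }

  private next(): void {
    const job = this.jobs.shift();
    if (!job) { this.busy = false; return; }
    this.busy = true;
    // Simulate asynchronous completion of the analysis job.
    setTimeout(() => { job.run(job.params); this.next(); }, 0);
  }
}

// Usage: two stacked requests with different destination buffers, so each
// result can be reliably matched to the invocation that asked for it.
const q = new ProcessorQueue();
q.enqueue({ sourceBuffer: "src", destBuffer: "out1", startFrame: 0, numFrames: 512 },
          p => console.log("done ->", p.destBuffer));
q.enqueue({ sourceBuffer: "src", destBuffer: "out2", startFrame: 512, numFrames: 512 },
          p => console.log("done ->", p.destBuffer));
```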

Two things:

1 - You can run synchronously if you wish. If the process is fast, it might be perfectly acceptable to lock the low-priority thread for a minuscule amount of time, and that will likely also be the default (that is not a commitment, just the current state of things). A rough sketch of this choice follows the patch below.

2 - When it’s not synchronous, why not this?


----------begin_max5_patcher----------
1029.3oc6YsrjahCEcM9qfh0dbgDz1smcY47MjJkKAH6VIBIJgni6IU92G8B
LfA2zS5NcREsAJtW83dO2GRmhusJHJieFWGE92geLLH3aqBBLhzBBbeGDUhN
mSQ0lgEkyKKwLYzZqNI9rzH+eBQkgEbFtUSERl+.gc5f.mKs6.XObC7t0gfc
fMwqC2lpeBi2DG9I2jHElEim84+B.ZWIVSIuQRwRiED6jVKehhMit23Hr1gA
zx99pU5GqWnuk0Hkb1sbfzzMZ6e6Nikm19bJ6e+zlOvI0JR9TE1tzQYH1ont
E58v6fI1vyc2+7dG.9ql6wveUYXWkWVEhpehkGVSJugqmnyBUAV3c5W6rtNb
ZWO4M0y6aiXgyecNbPzQBE+HVTSTAxKiNHBUU0SbPuonQoOyMKzt0chHLqnj
NQB7ij14usSJRnPJoBlZDVi971znKKCu.KXMjtPlIdspcA6pZR1avSvNaUuA
pSuuWcuJ1dhxy+BtnWEtBJqvLBqRfqUcbPRm00ot.eD0PkGNxYxZx+ZLPfJn
Mk9inb7rSlgJsd2GDDDs0+hNIHpFZZiX.VqE2tcJmyjwnedwYLifgplXxpDH
EtLixZkS1TmgD5PQlMMA1pTx4zgp5lGEeT5TWQXrQnnjWMuRA4zC2XtYbkxx
as1FM0GZXVsGT0ixC0nGGh1RDk5JPGt7mQLRIRhkDaH.F2oDyPJG8g5bAmRG
3uVMONglBUVbN9qjB4ClMpexfZ3jp1jnntnbA4DtVNTlDcpdnjqJaUhZxbUo
Gj3xJpxKFNfAms1ujreqrAxuUKsgs0JvTzS80L0oU1iYgfs5WthvdM0F2XKs
+5MYysEzfaFz5pFcvV4l17tVGu93j.wJ3kgIwwwOGZs2fVty1uG7bnUx+Szh
nt7z6IXYMnEl5buIoIIt84bfAbdvH9k4p8wt16ZNdBDVA9buCcdUvEiIrLXI
8tkgJfelETw+n.2kyunD1bMtLfhV+zHZMuQj25Ftpjvg3ipQrjv5NM+ic8dB
AgeZQwyWpM.dA1vvwwEE16fAdWMrqAwRRQEW0IwEk.pKSo4k.MuTmg180ztS
7ah6jtP2AZG2jofq5eIxH88HJNXuyzAjTJHJRL1Ty92J9Ec1t5hlYHp6j6ta
9cqC5ubafUWr2elrSZIlsH5IPO8DO8DO8DO8DO8DO8DO8DO8DO8j2xqr5YA7
6Baqebpnd5I27uhsD1I.O6DO6DO6DO6DO6DO6DO6DO6DO6D+OOwSa5Oyedhw
cGVvXIqLF7bTVtcNvL.19K.1UjhV59.Wv9.dE1mjErOvWg8ArDGZnwLpnAlB
1rSUmreutZIAZdo+XfkYCvi3WoMpQ7pFwo5Z9TyykZLOJ0N+8U+WP2Hhf
-----------end_max5_patcher-----------
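On point 1, here’s a purely hypothetical sketch (invented names, not the actual FluCoMa interface) of the kind of choice being described: a blocking flag that either runs a job immediately on the calling thread or defers it to an internal queue.

```typescript
// Hypothetical sketch: a `blocking` flag chooses between running a job
// immediately on the calling (low-priority) thread and deferring it.
type Task = () => void;

class BufProcessor {
  private pending: Task[] = [];

  // Assumed default: blocking (synchronous) execution.
  constructor(public blocking = true) {}

  process(task: Task): void {
    if (this.blocking) {
      task(); // briefly locks the calling thread
    } else {
      this.pending.push(task); // deferred; completion signalled later
      setTimeout(() => this.pending.shift()?.(), 0);
    }
  }
}

// Usage: fast analyses can stay synchronous; slower ones can be deferred.
const proc = new BufProcessor(true);
proc.process(() => console.log("ran synchronously"));
proc.blocking = false;
proc.process(() => console.log("ran asynchronously"));
```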

I didn’t know if you were going to add it as a flag or not. If it’s a flag, I’d leave it synchronous for stuff like this, since it might be cheaper/faster too(?).

The fully serial thing would be good, but it means multiple objects for repeated processes. Not a problem really, but not quite as elegant.