Something that has come up in a couple of threads is the area between JIT processing and real-time processing.
At the moment the .buf-based objects are perfect for JIT processing, but because of issues relating to buffer~ writing/reading/manipulating/querying (having to do with the everything-is-a-buffer paradigm), JIT is really JIT+buffer-defer+MaxSlop (minimum 10-15ms, maximum 500ms).
This is a bummer.
Of course there are the real-time objects. Those output data as quickly as they can, so no (functional) latency involved! BUT (and a big but(t) it is), it is not possible to correlate the output of these (.descriptor~-based) objects with the sample-accurate onset detection algorithms.
There’s a quantum problem: you can know when something happened, or what was in the frame, but not both.
Something I suggested ages ago was having some kind of @onset flag for the real-time objects, where they don’t output a frame until they are banged (or receive a 0->1 transition, to keep it signal rate). That would solve this problem, enabling cake having and eating in this in-between JIT/real-time world that I (seem to exclusively) occupy.
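To make the idea concrete, here is a rough sketch of what I imagine @onset doing, in hand-wavy C++ (every name and the analyser/emit hooks are made up by me, this is not the actual internals of any of the objects): the analysis keeps running at signal rate as normal, but a frame only leaves the object on a 0->1 transition of a trigger input, stamped with the sample offset of that transition.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical sketch of an @onset-gated real-time descriptor.
// Frames are analysed continuously at the hop rate, but a frame is only
// emitted when the trigger input goes 0 -> 1, so the emitted frame can be
// tied to the exact sample at which the onset fired.
struct GatedDescriptor {
    std::vector<double> latestFrame;   // most recent completed analysis frame
    double previousTrigger = 0.0;      // last trigger sample, for edge detection

    // Called once per signal vector by the host (names/signature invented here).
    // `trigger` is the per-sample onset signal; `emit` is whatever mechanism
    // the object uses to push a frame out (outlet, callback, etc.).
    template <typename Analyser, typename Emit>
    void perform(const double* input, const double* trigger, std::size_t nSamples,
                 Analyser& analyser, Emit&& emit)
    {
        for (std::size_t i = 0; i < nSamples; ++i) {
            // keep the analysis running at signal rate as usual
            if (analyser.process(input[i]))       // true when a new frame is ready
                latestFrame = analyser.frame();   // cache it, but don't output yet

            // only output on a 0 -> 1 transition of the trigger signal
            if (previousTrigger <= 0.0 && trigger[i] > 0.0 && !latestFrame.empty())
                emit(latestFrame, i);             // i = sample offset of the onset

            previousTrigger = trigger[i];
        }
    }
};
```

The point being that the frame that comes out is whatever was current at the exact sample the onset fired, rather than whatever the scheduler got around to delivering.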
So other than prompting a discussion around this idea/problem, some specific-ish questions:
- As far as I understand (from the threads/discussions up to this point), the JIT approach will always be “slow” because of having to read/write/transport buffers~. Is that correct? (As in, although it may be possible to optimize the underlying algorithms, the functional time the process takes will always be orders of magnitude greater because of the framework/approach involved.)
- Will it be possible, at some point, to accurately correlate the output of the real-time objects…with prompts from the real-time (onset) objects? (Either via an @onset flag, or some other mechanism.)