Yes it is. Like in Jitter, you have to decide who is the boss. When we go public with the multithreaded implementation I'll put examples in there.
To a certain extent. FrameLib allows you to do frame-accurate processing at (multi) audio rate. What you are doing is 'slow' in computer terms, and easy to sync when approached with discipline. I'll do examples. The main hurdle here is that you are still thinking in parallel. Computers are mostly serial machines, so you could recode your example and take total control of the chain of events.
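A toy sketch of what that serial 'total control' can look like, in plain Python (nothing to do with FrameLib's actual API; analyse/decide/act are placeholder names I made up): on each frame, the description, the decision and the action happen in one fixed, explicit order, so there is nothing to sync.

```python
# Illustrative only: a serial, frame-by-frame chain of events.
import numpy as np

def analyse(frame):
    # placeholder descriptor: RMS of the frame
    return float(np.sqrt(np.mean(frame ** 2)))

def decide(rms, threshold=0.1):
    # placeholder decision based on the analysis
    return "trigger" if rms > threshold else "hold"

def act(decision, frame_index):
    if decision == "trigger":
        print(f"frame {frame_index}: trigger")

signal = np.random.randn(48000) * 0.1   # stand-in for an audio buffer
hop = 512

for i, start in enumerate(range(0, len(signal) - hop, hop)):
    frame = signal[start:start + hop]
    rms = analyse(frame)        # 1. describe
    decision = decide(rms)      # 2. decide
    act(decision, i)            # 3. act
    # Because the loop is serial, the order 1 -> 2 -> 3 is guaranteed per
    # frame: no parallel processes to discipline or synchronise.
```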
And interface. FFTYin is more expensive than the pitch in descriptors~, but better. For info, ircam.descriptor has it too, and is more expensive than Alex's, but for many other reasons as well. Implementation, interface and quality are all intertwined.
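To make the cost point concrete, here is a deliberately naive sketch of a YIN-style estimate in Python (the FFT variants speed up the underlying autocorrelation with an FFT). This is not the code of descriptors~ or ircam.descriptor, just an illustration of why this family of pitch trackers does more work per frame than a simple spectral peak pick:

```python
# Naive YIN-style pitch sketch (illustrative, not any object's real implementation).
import numpy as np

def yin_pitch(frame, sr, fmin=60.0, fmax=1000.0, threshold=0.1):
    max_lag = int(sr / fmin)
    min_lag = int(sr / fmax)
    # 1. difference function d(tau): O(N * max_lag) done naively; the FFT
    #    variants compute the underlying autocorrelation with an FFT instead.
    d = np.zeros(max_lag + 1)
    for tau in range(1, max_lag + 1):
        diff = frame[:-tau] - frame[tau:]
        d[tau] = np.dot(diff, diff)
    # 2. cumulative-mean-normalised difference d'(tau)
    dprime = np.ones(max_lag + 1)
    dprime[1:] = d[1:] * np.arange(1, max_lag + 1) / np.maximum(np.cumsum(d[1:]), 1e-12)
    # 3. first dip under the threshold within the allowed lag range
    for tau in range(min_lag, max_lag + 1):
        if dprime[tau] < threshold:
            return sr / tau
    return 0.0  # unvoiced / no confident estimate

sr = 44100
t = np.arange(2048) / sr
frame = np.sin(2 * np.pi * 220.0 * t)   # a 220 Hz test tone
print(yin_pitch(frame, sr))             # roughly 220 Hz (no interpolation here)
```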
For instance, forking in code has a cost: every option you give to a user implies a fork in the code somewhere, at best. Architecture is everything in code optimisation. Asking for different FFT sizes in Alex's object would raise the cost significantly, since one thing that makes it efficient is Alex's clever ordering of manipulations to compute as little as possible. This comes at the cost of flexibility. That is why he spent so many years on FrameLib.
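A small Python sketch of that architectural point (toy descriptors, not Alex's actual code): when everything is derived from one shared FFT, the expensive step is paid once; the moment each descriptor can ask for its own FFT size, that sharing is gone and the cost multiplies before any branching overhead is even counted.

```python
# Illustrative only: shared-work pipeline vs per-option flexibility.
import numpy as np

def spectrum(frame, n_fft):
    return np.abs(np.fft.rfft(frame, n=n_fft))

def centroid(mag, sr, n_fft):
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / sr)
    return float(np.sum(freqs * mag) / (np.sum(mag) + 1e-12))

def flatness(mag):
    mag = mag + 1e-12
    return float(np.exp(np.mean(np.log(mag))) / np.mean(mag))

sr, n_fft = 44100, 1024
frame = np.random.randn(n_fft)

# Fixed architecture: one FFT, every descriptor reads the same magnitudes.
mag = spectrum(frame, n_fft)                     # paid once
fixed = (centroid(mag, sr, n_fft), flatness(mag))

# "Flexible" architecture: each descriptor gets its own FFT-size option,
# so nothing can be shared and the FFT is paid per descriptor.
flexible = (
    centroid(spectrum(frame, 2048), sr, 2048),   # FFT #1
    flatness(spectrum(frame, 512)),              # FFT #2
)
# Two descriptors, two FFTs of different sizes instead of one shared FFT:
# the option itself is what raises the cost.
```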
Then use something else if the cost is not worth it; it's that simple. But first, try to compare for real, i.e. how much better you describe (and, more importantly, match) when you have what you want for real. Then, like with plugins, pick your best cost-to-feature ratio.
If you do that you need:
- description
- matching
- matching with what you gain from the 2nd interface, which should yield better results
Then you can compare implementation cost vs results on the original task and aims, not on using a Ferrari to plough a field… which is an analogy used in other threads. It is an important one: there is no free ride, no object/architecture that is better in the absolute, only some that are better for some tasks. Here we try to help you find the knowledge to actually compare that: what you think you need for a task you think you want, then, by having access to the tools and knowledge to explore all your dream implementations, finding which one is best for the task at hand.
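Here is what that comparison can look like in practice, sketched in Python with made-up placeholders; every name here (cheap_features, rich_features, the corpus, the timings) is hypothetical, and with random data the numbers mean nothing. The point is the shape of the test: describe with both interfaces, match with both, then weigh the gain in matching quality on your actual task against the extra cost.

```python
# Hypothetical comparison harness: describe -> match -> compare cost vs result.
import time
import numpy as np

rng = np.random.default_rng(0)

def cheap_features(items):
    # placeholder for the cheaper description (fewer/simpler descriptors)
    return rng.normal(size=(len(items), 4))

def rich_features(items):
    # placeholder for the costlier description the 2nd interface gives you
    return rng.normal(size=(len(items), 12))

def nearest_match(queries, corpus):
    # brute-force nearest neighbour on whatever features were computed
    dists = np.linalg.norm(queries[:, None, :] - corpus[None, :, :], axis=-1)
    return np.argmin(dists, axis=1)

corpus_items = list(range(200))   # stand-ins for corpus sounds
query_items = list(range(50))     # stand-ins for targets to match

results = {}
for name, describe in (("cheap", cheap_features), ("rich", rich_features)):
    t0 = time.perf_counter()
    corpus = describe(corpus_items)            # 1. description
    queries = describe(query_items)
    matches = nearest_match(queries, corpus)   # 2. matching
    cost = time.perf_counter() - t0
    results[name] = (matches, cost)
    # 3. judge the matches on the *original task* (by ear, or against a
    #    ground truth you trust), then weigh that against `cost`.

for name, (matches, cost) in results.items():
    print(f"{name}: {cost:.4f}s to describe and match {len(matches)} targets")
```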
In other words: in some of my composition patches, I will still use descriptors~ (when it is corrected) when it is the best for the job at hand. What I learnt through the fluid* version (and the python prototypes, and the audioguide software) is that what I thought was better is at times worse… and at times gives better results, at a higher cost.