Large arrays and NRT

Hello all,

One question regarding the use of FluCoMa in an NRT context: it seems likely that one will end up with rather large arrays within SynthDefs, e.g. for slice points or other collections, which could exceed a SynthDef's capacity. Would it generally be advisable to load all larger collections into buffers?
I'm asking because buffer allocation in NRT tends to be more prone to complications (and less flexible) than working with arrays, and maybe there's another approach I'm not aware of!
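
To illustrate what I mean (a minimal sketch with made-up slice points, on a regular local server `s`):

```
(
// hypothetical slice points, in samples
~slicePoints = [0, 4410, 12250, 22050, 44100];
// on a real-time server this is straightforward:
~sliceBuf = Buffer.loadCollection(s, ~slicePoints);
)
```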
Thanks,

Jan

Are you talking about running the NRT mode of scsynth? I haven’t done much with that lately.

However, I can say that I do often opt to load data into buffers for use on the server rather than declaring big arrays in SynthDefs. It works quite well for me using UGens like Index with PulseCount or Stepper. You might also check out FluidBufToKr and FluidKrToBuf for converting to and from kr streams.
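
Roughly like this, for example (a minimal sketch, assuming a running server `s` and made-up slice points; not FluCoMa-specific):

```
(
// made-up slice points, in samples
~slices = Buffer.loadCollection(s, [0, 4410, 12250, 22050, 44100]);

SynthDef(\sliceReader, { |out = 0, buf, rate = 1|
	// step through the buffer indices, one per trigger
	var step = Stepper.kr(Impulse.kr(rate), 0, 0, BufFrames.kr(buf) - 1, 1);
	// look up the slice point stored at that index
	var slice = Index.kr(buf, step);
	Out.kr(out, slice); // e.g. feed this to a playback start position
}).add;
)
```

PulseCount works the same way if you'd rather count external triggers than generate them with Impulse: `Index.kr(buf, PulseCount.kr(trig))`.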

Yes, exactly: I meant scsynth in NRT mode. The most straightforward way for me so far has actually been to write the buffers to disk as sound files and load those on the NRT server, since the alternative of loading collections and adding them to the score etc. is neither very practical nor reliable. The downside is losing immediate language-side operations on the arrays, which in this case have to happen before writing the file.
I mostly use DbufRd, but I'll check out the UGens you suggest as well, thanks!
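
For reference, the write-then-load workflow looks roughly like this for me (a minimal sketch; paths, values and timings are all made up):

```
(
var path = "/tmp/slices.wav"; // hypothetical path
var slices = [0, 4410, 12250, 22050, 44100]; // hypothetical data
// write the array to disk as a 1-channel float sound file
var sf = SoundFile.new
	.headerFormat_("WAV")
	.sampleFormat_("float")
	.numChannels_(1);
sf.openWrite(path);
sf.writeData(Signal.newFrom(slices));
sf.close;

// in the NRT score, the file is read straight into a buffer
~score = Score([
	[0.0, [\b_allocRead, 0, path]],
	// ... \s_new events for Synths reading buffer 0 with DbufRd go here ...
	[10.0, [\c_set, 0, 0]] // dummy event marking the total duration
]);
~score.recordNRT(
	outputFilePath: "/tmp/out.wav",
	sampleRate: 44100,
	headerFormat: "WAV",
	sampleFormat: "int24",
	options: ServerOptions.new.numOutputBusChannels_(2)
);
)
```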
