Multiple FluidMLPRegressors not asynchronous?


I have this code (below), which is a variation on some code @alicee posted in the Slack. The whole block you see is inside a .do loop, so it defines this function (run_fit) anew for each .do iteration (the function runs .fit on this autoencoder). The different datasets it's processing have anywhere from 125 to 27000 data points.

The .do loop fires off all these at essentially the same time, so the autoencoders all start training at the same time, just using the differently sized datasets.

I expect the autoencoder using the 125-point dataset to train much faster, posting to the post window frequently, while the one using the 27000-point dataset trains much more slowly and posts less often. What I actually see is that all the autoencoders post at the same time, as if each were waiting for the others to finish their .fit before any of them moves on to its next .fit.

Am I missing something about how these operate? Is my recursion wrong? Am I correct that FluidMLPRegressor.write takes an action as its second argument?

Thanks all!!


run_fit = {
	arg counter;
	// (reconstructed from a mangled paste) assuming `mlp` is the FluidMLPRegressor instance;
	// as an autoencoder, source and target are the same dataset
	mlp.fit(analysis_norm_ds/*fm_norm_ds*/, analysis_norm_ds/*fm_norm_ds*/, {
		arg error;
		"------- n steps:  %".format(nSteps).postln;
		"------- analysis: %".format(analysis_name).postln;
		"------- counter:  %".format(counter).postln;
		"------- n iters:  %".format(counter * maxIter).postln;
		"------- loss:     %".format(error).postln;

		// recurse: keep running .fit until the loss drops below the threshold
		if(error > 0.005, {
			run_fit.(counter + 1);
		});
	});
};
run_fit.(0);

@weefuzzy will confirm, but I think the dataset stuff is indeed not async yet in either SC or Max.


Yes. All these messages just execute directly in the scsynth command FIFO. So, whilst asynchronous from the point of view of the language, they are synchronous with respect to each other.
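To illustrate what that serialization means in practice, here is a minimal sketch (assuming `mlpA` and `mlpB` are FluidMLPRegressors and `dsSmall`/`dsLarge` are already-loaded, normalized FluidDataSets; the names are hypothetical): even though both .fit calls are issued at essentially the same time from the language, their jobs sit in the same scsynth command FIFO, so each .fit chunk runs to completion before the next one starts.

```supercollider
(
// both fits are queued "simultaneously" from sclang...
mlpA.fit(dsSmall, dsSmall, { |loss|
	"small dataset fit done, loss: %".format(loss).postln;
});
mlpB.fit(dsLarge, dsLarge, { |loss|
	// ...but this action cannot fire until the previously queued job
	// has finished, because the jobs serialize in the server's command FIFO
	"large dataset fit done, loss: %".format(loss).postln;
});
)
```

This matches the behaviour described above: the recursive run_fit callbacks all appear to post in lockstep, because each queued .fit must drain from the FIFO before the next begins.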
