Greetings,
I have this code (below), which is a variation on some code @alicee posted in the Slack. This whole block is inside a .do loop, so it's defining this function (run_fit) anew for each do iteration (the function runs .fit on this autoencoder). The different datasets it's processing have anywhere from 125 to 27,000 data points.
The .do loop fires off all of these at essentially the same time, so the autoencoders all start training at the same time, just using the differently sized datasets.
I expect the autoencoder using the dataset with 125 points to train much faster, posting to the post window frequently, while the one using the dataset with 27,000 points would train much more slowly and post less frequently. However, what I find is that all the autoencoders post at the same time, as if they're all waiting for each other to finish their .fit before any of them goes on to its next .fit.
Am I missing something about how these operate? Is my recursion wrong? Am I correct that FluidMLPRegressor.write takes an action as its second argument?
Thanks all!!
T
run_fit = {
	arg counter;
	// fit the autoencoder on the normalised analysis dataset (input == output)
	net.fit(analysis_norm_ds, analysis_norm_ds /*fm_norm_ds*/, {
		arg error;
		"".postln;
		"------- n steps: %".format(nSteps).postln;
		"------- analysis: %".format(analysis_name).postln;
		"------- counter: %".format(counter).postln;
		"------- n iters: %".format(counter * maxIter).postln;
		"------- loss: %".format(error).postln;
		"".postln;
		// write the current state of the network, then (in the write action)
		// keep fitting recursively until the loss drops below the threshold
		net.write("%/%_autoencoder_%_nSteps=%_shape=%_hiddenAct=%_outAct=%_nEpochs=%_loss=%.json".format(
			dir,
			timestamp,
			analysis_name,
			nSteps,
			shape,
			hidden_act,
			output_act,
			counter * maxIter,
			error.round(0.0001).asString.padRight(6, "0")
		), {
			if(error > 0.005, {
				run_fit.(counter + 1);
			});
		});
	});
};
run_fit.(1);
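In case the context helps, here's a stripped-down sketch of how this sits inside the enclosing .do loop. The dataset names, the ~analyses collection, and the network settings are placeholders (not my actual variables), and it assumes a recent FluCoMa version; it's just to show the structure of one net plus one recursive run_fit per iteration.

(
// Placeholder sketch of the enclosing .do loop -- names and settings are
// made up for illustration; each iteration gets its own net and run_fit.
~analyses = ["analysis_a", "analysis_b", "analysis_c"];
~analyses.do{
	arg analysis_name;
	var ds = FluidDataSet(s); // assume already filled and normalised elsewhere
	var net = FluidMLPRegressor(s, hiddenLayers: [8], maxIter: 1000);
	var run_fit;
	run_fit = {
		arg counter;
		net.fit(ds, ds, {
			arg error;
			"% | iters: % | loss: %".format(analysis_name, counter * 1000, error).postln;
			if(error > 0.005, { run_fit.(counter + 1) });
		});
	};
	run_fit.(1);
};
)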