Hi there,
While adapting a patch using fluid.mlpregressor~ to predict output with a different number of parameters, I get the error “Ouput tap should be > 0 or -1”. (Also, the object doesn’t output any predictions.)
What does this error mean, and does it help me to debug why the object is not predicting?
When training, I also get a much higher fit value than usual (~15 mostly). I’ve sent the clear message to fluid.mlpregressor~…
Perhaps the clear message also causes the object to be reinitialised with some default values other than the @arguments passed in the object box. After I sent “reset”, it did what I expected.
Hi @leoauri!
Welcome to the Discourse! I would need to hear a bit more about what you’re hoping the MLPRegressor will do for you in order to know how best to answer your question. Can you explain the goal a bit? tapOut changes which layer of the neural network (where in the architecture) the outputs come from. The number of parameters it outputs at a given tapOut is set by the size of that hidden layer, so the number of parameters isn’t directly changeable by tapOut, only which layer the output is taken from. Changing tapOut doesn’t reset the neural network. Check out this article, it may be helpful!
MLP Parameters
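If it helps to see the idea outside of Max, here’s a rough NumPy sketch of the same thing (just an analogy I’m making up here, not the object’s actual code): the tap only chooses which layer’s activations you read out, and the size of what comes out is whatever size that layer already has.

```python
import numpy as np

# Toy MLP analogy, not fluid.mlpregressor~ itself: something like
# @hiddenlayers 5 3 with an 8-dimensional input and 2-dimensional output.
rng = np.random.default_rng(0)
layer_sizes = [8, 5, 3, 2]                       # input, hidden, hidden, output
weights = [rng.standard_normal((a, b))
           for a, b in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x, tap=-1):
    """Return the activations at layer `tap` (-1 = the final output layer)."""
    activations = [x]                            # index 0 is the input itself
    for w in weights:
        x = np.tanh(x @ w)
        activations.append(x)
    return activations[tap]

x = rng.standard_normal(8)
print(forward(x, tap=-1).shape)  # (2,): the output layer
print(forward(x, tap=1).shape)   # (5,): first hidden layer -- the size comes
                                 # from the architecture, the tap only picks
                                 # which layer you read
```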
Cheers,
Ted
Hi Ted, thanks!
I got it to do what I wanted, so let’s just leave this here as reference for anyone getting this same error.
I was just taking a patch using MLPRegressor and adapting it to predict outputs of a different dimensionality. I didn’t touch tapOut while doing this.
I was trying to figure out what I needed to send the object in order to reset it into a state where I could train it on new datasets; I got this error and couldn’t make sense of it. I got it to work by sending first “clear” and then “reset” to the object.
Is there an API reference for the object? I couldn’t find one.
Cheers
Leo
From what I could tell, “reset” didn’t reset the parameters of the network, and I didn’t want a pretrained network. “clear” alone seemed to leave the object in a weird state (that error, no output, weird fit values).
Hi @leoauri
Just sending clear should be sufficient: this will throw away what the network has learned, including the inferred input and output sizes. reset is a message common to all the FluCoMa objects, which will reset the object’s attributes to whatever was set in the box with @<whatever>.
I suspect you could end up with this error message if you fit with one size and then try to predict or fit again with different-sized outputs, but I agree that this could be clearer. I’ll have a look to see how this can be improved.
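To sketch the distinction in Python (just an analogy of my own, nothing to do with how the object is actually implemented): clear forgets the learned weights and the inferred sizes, reset only puts the attributes back to the box @args, and fitting again to a different output size without clearing first is where things go wrong.

```python
# Loose Python analogy for clear vs reset (not the FluCoMa implementation).
class ToyRegressor:
    def __init__(self, hiddenlayers=(3, 3)):
        self._box_args = {"hiddenlayers": hiddenlayers}  # the @args in the object box
        self.reset()
        self.clear()

    def reset(self):
        """Put the attributes back to whatever was set at construction."""
        self.hiddenlayers = self._box_args["hiddenlayers"]

    def clear(self):
        """Throw away what was learned, including inferred input/output sizes."""
        self.weights = None
        self.in_size = None
        self.out_size = None

    def fit(self, in_size, out_size):
        if self.out_size is not None and out_size != self.out_size:
            # roughly the situation behind the confusing error message:
            # the trained network is already shaped for a different output size
            raise ValueError("network already trained with a different output size")
        self.in_size, self.out_size = in_size, out_size
        self.weights = "trained"


reg = ToyRegressor()
reg.fit(in_size=13, out_size=2)   # first training run
reg.clear()                       # forget weights and the inferred sizes...
reg.fit(in_size=13, out_size=5)   # ...so a new output size is fine afterwards
```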
There should be, yes. Are you using the pre-built package, or building from source?