MLPRegressor training versus validation loss

thank you very much @tremblap! great!
and yes, my bad, the loop keeps it on, so the question would be how to know when it stops (some indication from the MLPRegressor object that would let me set ~continuously_train to false would be great!)

one could also implement the early stopping test in the train loop (compare the last 10 test loss values; if they are all going up, set ~cont-train to false)
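The test described above (look at the last 10 validation losses, stop if they only go up) is language-agnostic, so here is a minimal sketch in Python rather than sclang — `should_stop` and the `window` size are illustrative names, not part of the FluCoMa API:

```python
def should_stop(loss_history, window=10):
    """Early-stopping test: True when the last `window` validation
    losses are strictly increasing (i.e. all going up), which is a
    common sign the net has started overfitting."""
    if len(loss_history) < window:
        return False  # not enough history yet to decide
    recent = loss_history[-window:]
    # every consecutive pair must be rising
    return all(b > a for a, b in zip(recent, recent[1:]))

# a loss curve that falls, then rises for 10 steps, triggers the stop
losses = [1.0, 0.8, 0.6] + [0.5 + 0.01 * i for i in range(10)]
```

In the train loop you would append each reported error to `losses` and flip the `~cont-train` flag (or its equivalent) as soon as `should_stop(losses)` returns true.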
i need to save a model today so i keep what i have for now! but for upcoming work…

and i will also test your high-batch approach very soon!! ; )

this looks like a very elegant solution, i am curious!!
merci to all

I think a combination of the 2 solutions you propose is a rich way forward here. for instance:

- run my long slow settings once, write down the reported error
- run it again (it’ll stop early if it needs to) and compare the error
- continue until the error is low, or the running time is short twice in a row?

but then, there are many many more ways of exploring that. for instance, you could generate many random MLPs and see which one gets to the best error sooner… or you could do a few my way, then test your way, keeping MLP states in between in a Dictionary so you can revert back when it starts to overfit.
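The keep-states-in-a-Dictionary idea can be sketched generically. This is Python, not the actual FluCoMa Dictionary; how you serialize a state is up to your toolkit (the `state` here is just any copyable object, e.g. a weights dict):

```python
import copy

class CheckpointStore:
    """Dictionary of model states keyed by training step, tracking the
    best validation loss seen, so you can revert to the pre-overfitting
    state once the loss starts climbing."""

    def __init__(self):
        self.states = {}
        self.best_step = None
        self.best_loss = float("inf")

    def save(self, step, state, val_loss):
        # deep-copy so later in-place training doesn't mutate the snapshot
        self.states[step] = copy.deepcopy(state)
        if val_loss < self.best_loss:
            self.best_loss, self.best_step = val_loss, step

    def revert(self):
        """Return the stored state with the lowest validation loss."""
        return self.states[self.best_step]
```

You would call `save` every few epochs with the current state and validation loss, and call `revert` as soon as your early-stopping test fires.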

this is fun, you can explore all of these until you get the music you want and/or the mega neural net structure and/or your understanding is maximal and/or you are bored and/or …

so much fun!


yes, that is what i want. i mean, it's fun sitting the whole day watching things get drawn, haha… i mean it, but i will build something automatic soon (a friend pointed me at https://docs.wandb.ai/). i want to input, let's say, a list of hyperparams and some model structures etc., then let it run for some days and have it give me the best thing out of 100s for my data…

this is really fun, indeed, i enjoy it a lot also - still to discover and understand lots of ml things at the same time…
