Riiiight. Ok, that wasn't clear to me; I'd just assumed some of that state would carry over.
So the intended UX here is to just use one big chonky maxiter and call it a day (unless one is deliberately trying to cross-validate or do something special)?
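For what it's worth, here's roughly what I mean by the one-shot approach, sketched with scipy.optimize.minimize as a stand-in (the real tool's fit call and option names may well differ):

```python
# Sketch of the "one big chonky maxiter" workflow, using
# scipy.optimize.minimize as a stand-in for whatever the actual fit
# call is; the objective below is a hypothetical placeholder.
import numpy as np
from scipy.optimize import minimize

def loss(params):
    # placeholder objective; substitute the real residual here
    return np.sum((params - np.array([1.0, 2.0, 3.0])) ** 2)

x0 = np.zeros(3)

# One big run instead of several short resumed ones.
result = minimize(loss, x0, method="Nelder-Mead",
                  options={"maxiter": 100_000})
print(result.x, result.nit)
```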
Yes please!
For the stuff in those other threads about computing class means, removing outliers, etc., the nightmare is coding the manual separating/dumping/organizing of the sub-datasets; that's far more work than the actual computation being done (means/outliers).
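To be concrete about the chore, here's a minimal sketch of how I'd hope this could look, assuming the data fits in a pandas DataFrame ("label" and "value" are hypothetical column names):

```python
# Minimal sketch of the class-means / outlier chore, with no manual
# splitting of sub-datasets.
import pandas as pd

df = pd.DataFrame({
    "label": ["a"] * 4 + ["b"] * 6,
    "value": [1.0, 1.2, 0.9, 1.1,
              5.0, 5.1, 4.9, 5.2, 5.0, 50.0],  # 50.0 is the outlier
})

# Per-class means in one line:
class_means = df.groupby("label")["value"].mean()

# Per-class outlier removal via z-score within each class (the
# 2-sigma cutoff is illustrative):
z = df.groupby("label")["value"].transform(lambda v: (v - v.mean()) / v.std())
cleaned = df[z.abs() <= 2.0]
```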
Interesting!
I've done this kind of stuff manually, which is a big pain in the ass. I do wonder how generalizable this would be with an arbitrary number of classes etc… but being able to get a % correct out of the validation data would be super handy.
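For the generalizability worry: a plain fraction-correct already works for any number of classes. A tiny sketch (array names are hypothetical):

```python
# Percent-correct over held-out validation data; the comparison is
# elementwise, so it generalizes to any class count "for free".
import numpy as np

y_true = np.array([0, 2, 1, 1, 3, 0, 2])  # validation labels
y_pred = np.array([0, 2, 1, 0, 3, 0, 1])  # model predictions

accuracy = np.mean(y_pred == y_true)      # fraction correct
print(f"{accuracy:.1%} correct")          # -> 71.4% correct
```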
(I do wish the reported error were easier to tie to some kind of normalized range so it intrinsically communicated more.)
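For example, dividing the RMSE by the target range (one common NRMSE convention) would pin it roughly to a 0-to-1 scale. Purely a sketch; the tool's reported error may be defined differently:

```python
# Normalized RMSE: ~0 is good; values near 1 mean errors comparable
# to the full spread of the targets.
import numpy as np

y_true = np.array([2.0, 4.0, 6.0, 8.0])
y_pred = np.array([2.1, 3.8, 6.3, 7.9])

rmse = np.sqrt(np.mean((y_pred - y_true) ** 2))
nrmse = rmse / (y_true.max() - y_true.min())
print(nrmse)  # ~0.03 here
```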