FR: Probability output of mlpclassifier

Bit of a late-in-the-game FR for adding probability output to fluid.mlpclassifier~.

Basically the predict_proba from scikit-learn (link).

I don’t know if this is, essentially, what @a.harker did for his custom build of fluid.mlpclassifier~ as discussed in this thread when he said:

But having a flag to (somehow) output this information would be useful for all sorts of things (cascaded classifiers, or double-checking onset detection etc…).

I imagine that the value is being computed internally, but it would be a matter of adding an attribute and/or way to output it.

There is already a thread where I show how to retrieve that by loading the model in the mlpregressor… check it out :wink:


In case you didn’t find it: Using MLPregressor to get the confidence vector of MLPclassifier


Very handy.

I (super) vaguely remember that.

Sadly the syntax has changed since then, and since I don’t know exactly what it should be doing, it’s not clear what goes where.

Quickly, because I’m on the road:

  1. train your classifier
  2. dump
  3. extract the “mlp” key
  4. load that in a regressor

done!
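The steps above amount to a bit of JSON surgery on the dumped model. Here is a rough Python sketch of the idea, with the caveat that the file layout and the helper name are my assumptions (in Max you would do the equivalent with dict objects); only the "mlp" key name comes from the thread:

```python
import json


def classifier_to_regressor(classifier_dump_path, regressor_dump_path):
    """Pull the network weights (the "mlp" key) out of a dumped
    fluid.mlpclassifier model so the result can be loaded into
    fluid.mlpregressor. The surrounding file layout is an assumption."""
    with open(classifier_dump_path) as f:
        model = json.load(f)
    # Keep just the network itself, dropping the classifier's label bookkeeping
    mlp = model["mlp"]
    with open(regressor_dump_path, "w") as f:
        json.dump(mlp, f)
    return mlp
```

With the regressor loaded this way, predicting on the same input buffers gives you the raw output-layer activations instead of a single class label.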


Also, would be handy to have an actual probability vector where the values sum up to 1.

For my intended purposes the workaround works, but an actual probability vector would still be handy as an FR. (I can add it as an issue on git if it’s easier to maintain a history there.)

It is a little more complicated than that. There is a thread with @danieleghisi and @weefuzzy about providing (eventually) a tool to get softmax on arrays, maybe buffer-style à la FluCoMa.
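For reference, softmax is the standard way of mapping a layer’s raw activations to a vector that sums to 1, and it is essentially what scikit-learn’s predict_proba returns for an MLP with a softmax output layer. A minimal, dependency-free sketch:

```python
import math


def softmax(xs):
    """Map raw activations to a probability-like vector that sums to 1.
    Subtracting the max first is the standard numerical-stability trick:
    it changes nothing mathematically but avoids overflow in exp()."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]


probs = softmax([2.0, 1.0, 0.1])
# probs sums to 1, and the largest activation gets the largest share
```

Note that unlike a plain sum-to-one rescaling, softmax exponentiates first, so it handles negative activations and exaggerates the gap between the winner and the rest.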


Just to chime in and say

  1. This stuff will eventually happen.
  2. This thread is in danger of conflating some different stuff. tl;dr be wary of too much belief that something that looks like a probability or a distribution can be reliably treated as such. There be dragons, as ever.
  3. If you’re on Max 9, then you can use arrays to scale the contents of a buffer so they sum to 1 relatively painlessly (untested).
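As an illustration of the arithmetic that normalization involves (in Python rather than Max, with the buffer modeled as a plain list — this is not the Max 9 array API):

```python
def normalize_to_one(values):
    """Scale a list of non-negative values so they sum to 1.
    A zero-sum input is returned unchanged to avoid dividing by zero."""
    total = sum(values)
    if total == 0:
        return list(values)
    return [v / total for v in values]
```

This only behaves like a probability vector if the inputs are non-negative to begin with, which is one reason softmax (which exponentiates first) is the usual choice for network outputs.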

Or this, with good old vexpr and lists:


----------begin_max5_patcher----------
778.3oc0WtsbaBCDF9Z6mBML8h1NTWsRHNzq56QSlLXaUWxfAOBQpaxj28Js
BG61XCJNjjIWfDnCr5a0p+Eta5jf40akMAjuQ9AYxj6lNYB1jsgIcOOIXc91
Ek4M3vBV2VpKZJKVJUAgt9qa0kRs9OajtWTPPn4hbYW2EKwIVO+5u.Y6lylb
8heUTs5Jkbg1MMVJcFMjvR31JNOwVABS0CupFodcQkY83lANd3f9qZMcaVK3
RE1MohawEFPev1p70RsTckrJedI1Ic+avgC9JX1Fue5TaQ3qhCJsGGDjEYAN
lxde6fjMM4qj6.UK2h7E.yNs+5n9pjd7UhXK5hXzGIbAVrS4HXGkM3LXqR9a
yR6QncaIooc8YDNH5AwzN1RsUQ7yAwwb6iNCxnzrDaPGMMii2jjPEb6MLdFX
uAho7zXbLzDlv1RDUjh0I7HbrBHJJy1RVlX+lrmAEQC5whh.zwwEuwQE2H2t
QQ9vOAxWMkLx2aVjWlqVWuTRfmH1rAwlKv.EHg8d8v.zm1HjNSXfDhdcOMbB
F0jRR4YfHcvsQVpisLKtmlPXzHbdqVWWcRVlmWs5nrzm5Li4xjgQjt6sk8ih
+YkfQUVKIEkxPAMNXElnY1x33HTGypYIRY1V.SAC04RYwOsSvCqzyXnfEOAd
6O.uRU2tgrOb0+Pb9vgEfKqcWzwa2gXTdVkWsrd8GoWbQHXxY8IxmwM8mn9L
yCpchVP7.5yvK81a6sE8syhG3C2WWToOtVlGHCYnZ8YEPyOfXbdAkEU+++xf
1119+5FZpaUK1AT2oOxdquT1nKpx0EFcu8iw9yKGLnZk8K7sZPG006qksevx
vlN8EwzdQsQGm.jKeEfTbbHgmmoY9X5nw.RpOVpa477rD3qkFElfgrDLBL4U
rH8kHBIwGKOBDx8vNhQXOym.d9XDE5gcdbztSpNeylajpltAilvja55Zb6LM
Dernx8XL9nRdSwtwKvVxUlzJZSNkVEtrB1F694u.6ePopZK5hMLvYLIl2qx7
EjMaxcbfoGmd+z+BiZJSkC
-----------end_max5_patcher-----------

Yes. Weirdly, it’s easier to do the arithmetic with list objects, but much easier to scan over the buffer with arrays. I guess C74’s thinking is that by giving us map, reduce, and filter we can roll whatever ops we need into relatively zippy abstractions.
