TB2-Alpha05 is here!

Loads of small things, all of which broke everything before getting to a better place :slight_smile:

https://huddersfield.box.com/s/zoy9ueew4k3pxlqg4ce4p8oxs19682db

Enjoy!


Change Log

New examples:

  • Max: a working autoencoder as dataredux (data reduction) and interpolation back up
  • Max: a tutorial on feature comparison for simple regression
  • SC: a working autoencoder as dataredux (the other part will come here soon)

Breaking changes:

MLPregressor
  • outputLayer is now tapOut, and its numbering has changed: layers count from 0 at the input layer, so 1 is the first hidden layer and -1 (the default) is the last layer, like the rest of our interface.
  • it has a new friend, tapIn, which lets you feed the data for predict and predictpoint somewhere in the middle of the network: 0 is the input and 1 is the first hidden layer (see the sketch below).
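
For SC users, here is a rough sketch of the autoencoder round trip using the new taps. The class names, setters, and my pairing of tapIn/tapOut indices are assumptions based on the description above, not a definitive recipe for this alpha:

```
(
// autoencoder-shaped network: 8 -> 2 -> 8 hidden layers
~mlp     = FluidMLPRegressor(s, hiddenLayers: [8, 2, 8]);
~src     = FluidDataSet(s);     // high-dimensional points (assumed already filled via addPoint)
~reduced = FluidDataSet(s);     // will receive the 2-D encodings

// train input -> input so the 2-neuron middle layer learns a reduced representation
~mlp.fit(~src, ~src, { |loss| loss.postln });

// encode: tap the output at the bottleneck layer
// (0 = input layer, 1 = first hidden layer, -1 = last layer, the default)
~mlp.tapOut = 2;
~mlp.predict(~src, ~reduced);

// decode: tapIn feeds predict/predictPoint part-way into the network,
// here a 2-D point straight in at the bottleneck, read out at the last layer
~mlp.tapIn  = 2;
~mlp.tapOut = -1;
~latent  = Buffer.loadCollection(s, [0.1, 0.9]);  // a point in the reduced space
~decoded = Buffer.new(s);
~mlp.predictPoint(~latent, ~decoded);
)
```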
Normalize and Standardize
  • now have an 'inverse' parameter for transform and transformPoint, allowing queries from the transformed space back to the original (see the sketch below).
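
A quick SC sketch of that round trip, assuming the classes behave as described above (the 'inverse' spelling is taken straight from this changelog and may differ in the actual interface):

```
(
~norm   = FluidNormalize(s);
~raw    = FluidDataSet(s);     // original-range data (assumed already filled)
~normed = FluidDataSet(s);

~norm.fitTransform(~raw, ~normed);   // learn the ranges and write the normalised copy

// go the other way: from a point in normalised space back to the original range
~norm.inverse = 1;                   // parameter name as given in this changelog
~point  = Buffer.loadCollection(s, [0.5, 0.5]);
~result = Buffer.new(s);
~norm.transformPoint(~point, ~result);
)
```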
PCA

Now returns/passes the variance, aka the fidelity of the new representation for a given number of dimensions.
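In SC this surfaces, as far as I can tell, as the value handed to the action callback, e.g.:

```
(
~pca     = FluidPCA(s, numDimensions: 2);
~highDim = FluidDataSet(s);    // assumed already filled
~lowDim  = FluidDataSet(s);

~pca.fitTransform(~highDim, ~lowDim, action: { |variance|
	"variance kept by the 2-D representation: %".format(variance).postln;
});
)
```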

SC-kr

Most TB1 objects now have a blocking mode in kr, which allows you to keep them on the main buffer thread of the server (faster for small jobs, as it avoids large memory copies).
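
Something along these lines, with the caveat that the blocking argument name and its placement in the kr form are my guess from this description:

```
(
~src  = Buffer.read(s, Platform.resourceDir +/+ "sounds/a11wlk01.wav");
~feat = Buffer.new(s);

{
	var trig = Impulse.kr(0);   // single trigger at start
	// blocking: 1 keeps the small job on the server's main buffer thread,
	// avoiding the memory copy to a worker thread
	FluidBufPitch.kr(~src, features: ~feat, trig: trig, blocking: 1);
}.play;
)
```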

Bugfixes:

  • all reported JSON load/save/state bugs and oddities
  • Max: clutter when resizing buffers
  • bufnmf parameter check order
  • general buf resize sanity check
  • Max: buf resize of a dataset is now done only in the low-priority thread (faking mode 2 otherwise)
  • SC: most dataset objects in kr are much more efficient
  • SC: kr bus assignment

Known Bugs

Should mostly be edge cases. Feel free to test against your reports and update the issues.


@spluta that should help you gain back 45% of CPU :slight_smile: