TB2-Alpha05 is here!

Loads of small things, all of which broke everything before getting to a better place :slight_smile:



Change Log

New examples:

  • Max: a working autoencoder as dataredux and interpolation is now up
  • Max: a tutorial on comparing features for simple regression
  • SC: a working autoencoder as dataredux (the other part will come here soon)

Breaking changes:

  • outputLayer is now tapOut, and its numbering has changed: it counts from 0 at the input layer, so 1 is the first hidden layer, and the default is -1 (the last layer, as everywhere in our interface).
  • It has a new friend, tapIn, which lets you feed the data for predict and predictpoint into the network somewhere in the middle. Here too, 0 is the input layer and 1 is the first hidden layer.
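To illustrate the indexing convention only (a plain-Python sketch of a toy feed-forward pass, not the actual object interface — the names `forward`, `tap_in` and `tap_out` are made up here):

```python
# Toy feed-forward net whose pass can start at tap_in and stop at tap_out,
# using the same convention: 0 = input layer, 1 = first hidden layer,
# -1 = last layer (the default).

def forward(layers, x, tap_in=0, tap_out=-1):
    """layers[i] maps layer i to layer i+1; layer indices run 0..len(layers)."""
    n = len(layers)
    start = tap_in % (n + 1)   # normalize negative indices
    stop = tap_out % (n + 1)
    for i in range(start, stop):
        x = layers[i](x)
    return x

# Two transitions: double, then add one. Layers: 0 (input), 1 (hidden), 2 (out).
layers = [lambda v: v * 2, lambda v: v + 1]
print(forward(layers, 3))             # full pass: (3*2)+1 = 7
print(forward(layers, 3, tap_out=1))  # stop at the first hidden layer: 6
print(forward(layers, 6, tap_in=1))   # feed data in at the hidden layer: 7
```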
Normalize and Standardize
  • now have an 'inverse' parameter for transform and transformPoint, allowing queries from the transformed space back to the original.
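The maths behind that inverse mapping, sketched in plain Python (illustrative only, not the object interface — `fit` and `transform_point` are hypothetical names):

```python
# Standardize maps x -> (x - mean) / std; the inverse maps z -> z*std + mean,
# taking a point from the transformed space back to the original.

def fit(data):
    n = len(data)
    mean = sum(data) / n
    var = sum((x - mean) ** 2 for x in data) / n
    return mean, var ** 0.5

def transform_point(x, mean, std, inverse=False):
    return x * std + mean if inverse else (x - mean) / std

mean, std = fit([2.0, 4.0, 6.0, 8.0])            # mean = 5.0, std = sqrt(5)
z = transform_point(8.0, mean, std)               # into the standardized space
x = transform_point(z, mean, std, inverse=True)   # back to the original: 8.0
```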

Dimensionality reduction now also returns/passes the variance, i.e. the fidelity of the new representation for a given number of dimensions.
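A quick sketch of why retained variance works as a fidelity measure (plain Python, not the toolkit's interface; `retained_variance_1d` is a made-up helper):

```python
# Fraction of total variance kept when 2-D data is reduced to its principal
# axis: close to 1.0 means one dimension already represents the data well.

def covariance_2d(points):
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points) / n
    syy = sum((p[1] - my) ** 2 for p in points) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    return sxx, syy, sxy

def retained_variance_1d(points):
    """Share of total variance on the first principal component."""
    sxx, syy, sxy = covariance_2d(points)
    trace, det = sxx + syy, sxx * syy - sxy ** 2
    disc = (trace ** 2 - 4 * det) ** 0.5
    largest = (trace + disc) / 2   # larger eigenvalue of the 2x2 covariance
    return largest / trace

# Points nearly on a line: one dimension captures almost all the variance.
pts = [(0.0, 0.1), (1.0, 0.9), (2.0, 2.1), (3.0, 2.9)]
print(retained_variance_1d(pts))   # close to 1.0
```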


Most TB1 objects now have a blocking mode in KR, which allows you to keep them on the main buffer thread of the server (faster for small jobs, since it avoids large memory copies).


Bug fixes:

  • all reported JSON load/save/state bugs and oddities
  • Max: clutter when resizing buffers
  • bufnmf parameter check order
  • general buffer-resize sanity checks
  • Max: buffer resizing of datasets is now done only in the low-priority thread (faking mode 2 otherwise)
  • SC: most dataset objects in KR are much more efficient
  • SC: kr bus assignment


What remains should mostly be edge cases. Feel free to test against your reports and update the issues.


@spluta that should help you gain 45% of cpu :slight_smile: