Fluid.mlpregressor to predict a sequence of four binary numbers

I know it’s more of a general ML problem but in the end I’m using FluCoMa for this.

Here’s my problem:

I’m aiming to use a Multilayer Perceptron to predict a sequence of four binary numbers based on three input sequences, each also consisting of four binary numbers.

For training, ChatGPT suggested converting the input sequences from binary to decimal and scaling them to the range [0, 1], and one-hot encoding the output sequence to represent one of the 16 possible 4-bit binary combinations.
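To make sure I describe it right, here is my reading of that suggestion as a small Python sketch (just to illustrate the encoding, nothing FluCoMa-specific; the helper names are mine):

```python
# My understanding of the suggested encoding (illustration only):
# each 4-bit input step becomes one number scaled to [0, 1],
# and the 4-bit output becomes a 16-dim one-hot vector.

def bits_to_scaled(bits):            # e.g. [0, 0, 1, 0] -> 2 / 15
    value = int("".join(str(b) for b in bits), 2)
    return value / 15.0

def bits_to_onehot16(bits):          # e.g. [1, 1, 1, 1] -> one-hot at index 15
    index = int("".join(str(b) for b in bits), 2)
    return [1.0 if i == index else 0.0 for i in range(16)]

x = [bits_to_scaled(step) for step in [[0, 0, 1, 0], [0, 1, 0, 0], [1, 1, 0, 0]]]
y = bits_to_onehot16([1, 1, 1, 1])   # 16 values, a single 1.0
```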

I’m not sure if ChatGPT is being misleading; it also recommended encoding the output with the same scaling I’m using for the input.

I’m aiming for an algorithm that doesn’t require a lot of training data (I need to enter the data by hand), and I’m not too fussy about accuracy (there can be some ‘mistakes’ as long as the general logic is preserved).

What’s the best input-output encoding you would recommend in this case?

Also what do you think would be the optimal configuration for the MLP?

Hope you can help,

Thank you.

I would probably one-hot encode the input as 4 dims × 3 time steps, so 12 dims (where t is the time step and b is the bit):

t0b0 t0b1 t0b2 t0b3 t1b0 … t2b3

and one-hot encode the output as 4 dims: b0 b1 b2 b3. Then read the output as a rough probability curve.
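Something like this, if I write it as a quick Python sketch just to show the dimension ordering (the function names are made up):

```python
# Flatten 3 time steps x 4 bits into one 12-dim input point,
# and keep the 4 output bits as a 4-dim target.
def flatten_input(steps):             # steps = [[b0..b3], [b0..b3], [b0..b3]]
    return [float(bit) for step in steps for bit in step]

def make_target(bits):                # bits = [b0, b1, b2, b3]
    return [float(bit) for bit in bits]

x = flatten_input([[0, 0, 1, 0], [0, 1, 0, 0], [1, 1, 0, 0]])
# -> [0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0]   i.e. t0b0 t0b1 ... t2b3
y = make_target([1, 1, 1, 1])
```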

(I’ve had some (fun) success with this flattening of time for other things, as the Learn platform might tell :slight_smile:)

@lewardo and @weefuzzy might have another solution - please stick to MLP for now :wink:

p

Thanks a lot for your prompt reply, Pierre. :slight_smile:

Sorry, my understanding of this is pretty limited; I assumed fluid.mlpregressor could only work with floats.

Let’s see if I got this right.

So let’s say I have 0 0 1 0 - 0 1 0 0 - 1 1 0 0, which outputs 1 1 1 1.

I would need to format my training data like this:

t0b0 t0b0 t0b1 t0b0 t1b0 t1b1 t1b0 t1b0 t2b1 t2b1 t2b0 t2b0 outputs b1 b1 b1 b1

Then the probability curve for each of the 4 outputs would be the probability of each position being active? Am I right? Should I apply some kind of threshold, or would you still use the probabilities?

Again, I’m looking for the logic to be preserved and to find interesting variations on the sequences, so it doesn’t need to be ultra tight as long as it keeps producing nice transitions. So maybe the probabilistic approach is interesting for my purpose.

Just for the sake of giving more context: these transitions are made in order to arrange four ‘instruments’ or voices in a generative composition.

(Yes, it’s indeed a time problem that I’m trying to solve :wink: I had a lot of luck with a much simpler problem, and it worked like a charm with only 10 data points of training.)

For your input, a buffer of 12 samples with the values as the input point:

0 0 1 0 0 1 0 0 1 1 0 0

and a buffer with 4 samples for the output point:

1 1 1 1
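If you prefer to prepare the training data in a script rather than entirely by hand in buffers, here is a rough Python sketch of that pairing (the cols/data JSON layout is from my memory of a dumped fluid.dataset~, so do double-check it against one you dump yourself):

```python
import json

# A few hand-made examples: 3 steps of 4 bits -> the next 4 bits (illustrative values).
examples = [
    ([[0, 0, 1, 0], [0, 1, 0, 0], [1, 1, 0, 0]], [1, 1, 1, 1]),
    ([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]], [0, 0, 0, 1]),
]

inputs, outputs = {}, {}
for i, (steps, target) in enumerate(examples):
    name = f"point-{i}"                                        # same id in both datasets
    inputs[name] = [float(b) for step in steps for b in step]  # 12 dims
    outputs[name] = [float(b) for b in target]                 # 4 dims

# cols/data layout assumed from memory of a dumped fluid.dataset~ -- verify before relying on it
with open("inputs.json", "w") as f:
    json.dump({"cols": 12, "data": inputs}, f, indent=2)
with open("outputs.json", "w") as f:
    json.dump({"cols": 4, "data": outputs}, f, indent=2)
```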

Many point-to-point additions (N-to-M mappings) later, you regress. If you get a good fit, you then enter a new input in the same 12-dim format and it will regress to a 4-point buffer that would likely give you

0.1 0.12 0.8 0.03 (or whatever)

and that means you are in luck: that class (0 0 1 0) is the one.
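And to the threshold question from before: you can either threshold each of the 4 outputs at 0.5, or treat them as probabilities and flip a biased coin per bit if you want variation. A tiny sketch (illustration only):

```python
import random

prediction = [0.1, 0.12, 0.8, 0.03]   # the 4-dim regression output

# Deterministic: threshold each output at 0.5 -> [0, 0, 1, 0]
state = [1 if p > 0.5 else 0 for p in prediction]

# Probabilistic: sample each bit, for 'interesting variations'
varied = [1 if random.random() < p else 0 for p in prediction]
```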

(If you are trying to classify instead, you could use the same input (the flattened time series as dimensions) and the 4 classes as names, and you’d get the same result with a different interface.)

Try that and let me know how it goes.

Thanks a lot,

The tXbX notation took me out of the buffer-FluCoMa domain for a moment, and I found myself wondering how I’m going to get this data into a dataset.

Will let you know.

I need to give it more training but it feels like it’s working!

I’m going to go for your version and another one with a 16-dim output (all classes); this way I can play with choosing the most probable…
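Something like this is what I have in mind for the 16-dim version (an untested sketch, names made up):

```python
import random

def bits_to_onehot16(bits):            # 4 bits -> 16-dim one-hot training target
    index = int("".join(str(b) for b in bits), 2)
    return [1.0 if i == index else 0.0 for i in range(16)]

def pick_state(prediction, sample=False):   # 16-dim regression output -> 4 bits
    weights = [max(p, 0.0) + 1e-9 for p in prediction]  # clip negatives before weighting
    if sample:                                           # weighted choice, for variation
        index = random.choices(range(16), weights=weights)[0]
    else:                                                # or just take the most probable class
        index = max(range(16), key=lambda i: weights[i])
    return [int(b) for b in format(index, "04b")]
```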

Thanks.
