A question about the "Classifying sounds using a Neural Network" example

Hello,

In the " Classifying sounds using a Neural Network" example, data are added manually by pushing on a button to send bangs to a counter.

Of course, it could be automated with a metro. However, wouldn't it be better to sync the triggering with the underlying FFT settings of fluid.mfcc~? Could such a sync be triggered by fluid.mfcc~ itself? Otherwise, which time interval would you use?
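For what it's worth, I suppose a metro interval matching the analysis rate would be the hop size expressed in milliseconds. A quick sketch of that arithmetic (the hop size and sample rate here are assumptions for illustration, not necessarily fluid.mfcc~'s actual settings):

```python
# Sketch: metro period that would match the FFT hop of the analysis.
# hop_size and sample_rate below are illustrative assumptions.
sample_rate = 44100                      # Hz
hop_size = 512                           # samples between analysis frames
metro_period_ms = 1000 * hop_size / sample_rate
print(f"metro interval = {metro_period_ms:.2f} ms")  # ~11.61 ms
```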

Hello @amundsen and welcome,

I’m assuming you’re using Max or Pd. The main obstacle to doing this is that the output of fluid.mfcc~ is a list, not a signal, so it is dispatched on the scheduler / main thread rather than the audio thread. As such, the list output of the object is already as synchronised with the analysis as it can be. We did consider adding a trigger signal early on, but discounted it on the grounds that it would only give the impression of accuracy, because there is almost always a thread change.

(In this specific example, another thing to consider is that a more robust implementation would probably aggregate a number of frames together, e.g. using stats, to give less noisy training data.)
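To make that aggregation idea concrete, here is a minimal sketch in Python with NumPy. The array shapes and the choice of mean and standard deviation are assumptions for illustration, not FluCoMa's exact output format:

```python
import numpy as np

def aggregate_frames(mfcc_frames: np.ndarray) -> np.ndarray:
    """Pool a window of MFCC frames (n_frames, n_coeffs) into one
    summary vector of per-coefficient mean and std (2 * n_coeffs,)."""
    return np.concatenate([mfcc_frames.mean(axis=0),
                           mfcc_frames.std(axis=0)])

# e.g. 20 frames of 13 coefficients -> one 26-dimensional training point
frames = np.random.randn(20, 13)  # stand-in for real analysis frames
point = aggregate_frames(frames)
print(point.shape)  # (26,)
```

One training point built this way is much less sensitive to a single unlucky frame than a raw per-frame label.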


That, or if you wanted even tighter timing at the expense of working in non-realtime, you could just process the whole buffer with fluid.bufmfcc~ to build up your training data.
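Outside the patcher, the equivalent offline workflow looks roughly like the sketch below, using librosa's MFCC purely as a stand-in for fluid.bufmfcc~ (the filename and parameter values are hypothetical, and the parameter names are librosa's, not FluCoMa's):

```python
import numpy as np
import librosa

# Offline analysis of a whole file, then per-file aggregation,
# standing in for a fluid.bufmfcc~ -> stats chain.
y, sr = librosa.load("example.wav", sr=None)             # hypothetical file
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13,
                            n_fft=1024, hop_length=512)  # (13, n_frames)
training_point = np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])
print(training_point.shape)  # (26,)
```

Since everything is analysed from the buffer in one pass, frame timing is exact by construction rather than subject to scheduler jitter.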

Sure. Thanks for the suggestion!