Minslicelength unit

Hi all,

I just noticed the unit specification for the @minslicelength attribute is different depending on the object (e.g. hop sizes in bufonsetslice~ vs. samples in bufampslice~). Is there a particular reason for this? I personally find it confusing, so I’m just curious.

Thanks,

F

I suppose the choice (it predates me) is because the granularity of the slices is determined by those settings, and thus minslicelength is always a multiple of that granularity. The non-uniformity is confusing at first, I agree, but once you know the basis of the algo it becomes slightly more meaningful.

I suppose an alternative would be to have all minslicelength values set in samples, with the object snapping internally to the most sensible value based on the algorithm.
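To make the alternative concrete, here is a minimal sketch of what that internal snapping could look like. This is purely illustrative: the function name `snap_to_hops` and its behaviour (rounding to the nearest whole number of hops, with a minimum of one hop) are assumptions about how such a scheme might work, not FluCoMa's actual implementation.

```python
# Hypothetical sketch: if @minslicelength were always given in samples,
# an object whose algorithm operates in hop-sized steps could snap the
# user's value to the nearest whole number of hops internally.

def snap_to_hops(min_slice_samples: int, hop_size: int) -> int:
    """Round a minimum slice length in samples to the nearest
    multiple of the analysis hop size (never less than one hop)."""
    hops = max(1, round(min_slice_samples / hop_size))
    return hops * hop_size

# e.g. with a hop size of 512 samples:
# snap_to_hops(1000, 512) -> 1024  (2 hops)
# snap_to_hops(100, 512)  -> 512   (clamped up to 1 hop)
```

The trade-off discussed above is visible here: the user thinks in one uniform unit (samples), but the object quietly adjusts the value, which is exactly the "hidden choice" the current per-object units avoid.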

Yeah, I can relate to both approaches — somewhat similar to the decision of assigning different attribute names to the output buffers depending on the object (e.g. @features, @stats, @indices, etc.). I’m still completely new to using FluCoMa, so there’s a lot that might seem confusing at first but hopefully will become intuitive over time.

Hehe, that’s another one where it’s up to the person. I might prefer @destination for everything :male_detective:

We keep the units in what makes sense yet does not lie. A good blend of responsibility and knowledge transfer :slight_smile:

That is again to try to get people to think of what they will get out - numbers are numbers, but if they represent stats, time series of features, or indices, they are referring to very different things… autocomplete should help and the name is clear on what comes out. A small learning curve that is likely to be made worse by a common name…

Exactly - the benefit is you are altering the algorithm and not a layer above the algorithm that makes a hidden choice for you.

That depends on what you want to do with it.

Sometimes STFT magnitudes are a feature and sometimes they are not for example.

They are still a ‘feature’ in our spiel - one value per hop as time series. Better feature than time-series…

Being a newbie to FluCoMa, I don’t really have a strong opinion on this — I can see a strong argument for both views, but for the sake of argument here’s my reasoning at the moment, favoring James’ point:

  • The choice of a specific object/algorithm should already be a strong sign that the user understands what they’re supposed to get out of it. It may not be clear what the parameters or limitations are, but the knowledge of what the input and output will be is likely to be there already.
  • If the user does not know what the output is supposed to be, then I doubt having knowledge of the unit will make a difference for them. This would be more likely the case for users who are not very familiar with ML/MIR techniques in general, which means @features, @indices, @stats, etc. won’t mean much. This is where introductory tutorials are extremely helpful.
  • What I think will be less obvious to a new user (or at least it was in my case) is the architecture of the package. An example of this type of concern is the prepending of buf in all buffer-based processing, which helps elucidate the organization of the tools.
  • In that regard, keeping @destination consistent, just like it’s being done with @source, would make the learning process of the package a bit faster, in my opinion. Then, when in doubt, you can always check the reference to make sure what the nature of the output is, but you know you’re supposed to give an input and output buffer in all cases.
  • I think the interest in helping the user be more conscious of what the output is would also be taken care of by the naming of the destination buffers. This would be more an issue of promoting ‘best practice’ conventions through tutorials and help files, rather than through the object’s attributes and general design.

But again, I haven’t played around with FluCoMa enough to hold to this view too strongly. Just food for thought, if anything.

F

I agree — the user already makes a conscious choice about the nature of an algorithm’s output by creating the algorithm’s object. As a result, remembering how to set the output is not functionally different between objects and can feel frustrating. I am used to it now, but I often have to go look these arguments up even though I am confident about what they produce. That said, I am definitely in too deep to comment on how a new user will find this. A counterpoint is that some of the first toolbox algorithms are clearer in this paradigm, e.g. fluid.hpss~ @harmonic hbuf.