So in looking at @jamesbradbury’s patch in this thread, as well as some of my own experiments, it appears that fluid.buftransientslice~ always returns 0. as a transient location. I guess this makes sense as it is the initial boundary of time (same goes for returning the end of the file as a transient location).
When dumping out the contents of the slice buffer, it is easy enough to ignore the first entry (and last two), but I can see a potential problem if you have a transient occur between time 0. and the end of the first debounce window.
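To illustrate what I mean, here's a rough Python sketch of stripping the boundary entries from a dumped slice list (the values and the exact layout are made up for illustration):

```python
# Hypothetical dump of the slice buffer, in samples: the t=0 boundary
# entry, two real transient positions, and the end-of-buffer boundary.
slices = [0.0, 2205.0, 13230.0, 44100.0]

# Drop the boundary entries at either end, keeping only the
# detected transients.
transients = slices[1:-1]
print(transients)  # [2205.0, 13230.0]
```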
Say I have @debounce 4410 and an audio file with 50ms of silence at the start, followed by the first transient (so the first transient happens 50ms in). Would this first transient be swallowed by the debounce lockout?
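In pseudo-Python, the behaviour I'm worried about would look something like this: a toy model of a debounce lockout, assuming the t=0 boundary entry also arms the timer (which is exactly the question), with times in samples at 44.1kHz:

```python
def debounce(candidates, lockout):
    """Keep a candidate only if it falls at least `lockout` samples
    after the previously kept one (toy model, not the real object)."""
    kept = []
    for t in candidates:
        if not kept or t - kept[-1] >= lockout:
            kept.append(t)
    return kept

# Boundary entry at 0, real transient 50ms in (2205 samples at 44.1kHz),
# @debounce 4410 samples (100ms): the real transient is swallowed.
print(debounce([0, 2205], 4410))  # [0]
```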
So with that being said, could fluid.buftransientslice~ either not return the start/end times as transient positions, or have the @debounce attribute ignore the initial time window?
The latter option would have the unintended side effect that a recording with two transients right at the very start would slip through the debounce.
Actually, is there a reason (that I’m missing) why the start/end times are reported? I guess, for the sake of the helpfile, you can have the “last” (second to last, actually) slice play by doing the peek~ and peek~ + 1 trick, but otherwise it seems that it is not a real value, since there isn’t a transient happening on the last sample of the file (same goes for the first).
This is easy to justify, as it was carefully thought through: you need to know the boundaries of your analysis. The best is to remove them yourself, which is much simpler than asking for them later (imagine when multithreading is implemented and results can come back to you much later; you will need to know what they were about, no?).
No, it wouldn’t - the first entry will always be the start, and the last the end, since I cannot find another way to make more sense of the boundary issues. Think of starting to play at a given time where there is DC: you are creating a transient there. Same thing with stopping.
Boundary issues are always hard, but for this one I’m pretty confident we found the most inclusive interface. It is easier to remove things you know are there and valid than to put them there later.
Isn’t that exactly the kind of thing that would be easier to keep track of elsewhere? Otherwise you are using a single dimension of data to store two discrete things: the location of transients AND the boundaries of analysis (which further confuses the “everything is a buffer” problem outlined here).
I don’t understand this part. So does the initial boundary (0.) trigger the debounce timer or not? What happens if, like in your example, there is DC offset at the start? Is that counted as a transient? If so, does it trigger the debounce timer?
To clarify my speculative problem:
I have a buffer with a transient that occurs at 1000 samples into the file, before that is absolute silence (no DC/hiss). I have the debounce set to @debounce 2000.
So does that transient get tracked, or is it locked out by the debounce timer?
I have a buffer with a transient that occurs at 1000 samples into the file, before that is relative silence (maybe DC and/or hiss). I have the debounce set to @debounce 2000.
Does the DC/hiss get tracked as a transient? Would it be at position 0 samples? If so, does that mean there are two entries that say 0 samples (one for the boundary and one for the first transient), and in this case, is the “actual” transient at 1000 samples locked out?
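To make the two possible readings concrete, here’s a toy model (not the actual object behaviour; the `boundary_arms_timer` flag is my invention, standing in for whichever way the implementation actually works):

```python
def debounce_model(candidates, lockout, boundary_arms_timer):
    """Toy debounce: optionally let the t=0 boundary entry through
    without arming the lockout timer."""
    kept, last = [], None
    for t in candidates:
        if t == 0 and not boundary_arms_timer:
            kept.append(t)  # boundary reported, timer left unarmed
            continue
        if last is None or t - last >= lockout:
            kept.append(t)
            last = t
    return kept

# Transient at 1000 samples, @debounce 2000:
print(debounce_model([0, 1000], 2000, boundary_arms_timer=True))   # [0]
print(debounce_model([0, 1000], 2000, boundary_arms_timer=False))  # [0, 1000]
```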
Not if the calculation takes a lot of time and you pile up many from different processes (think a few NMFs in parallel, or a few hardcore super-precise segmentations whilst a longer one draws vague boundaries).
It will, when it works, if the beginning is DC (i.e. a real transient), but not if it is not (i.e. just the start of the analysis window).
We will have to check boundary cases (like the one you describe) once we have the debounce ‘fixed’, but this is one of the reasons why it is hard to release quickly: we discover, just between us, a lot of such cases, so we move carefully and take (mostly) informed decisions.
The list of conditions the syntax has to satisfy for the whole thing to make sense is starting to get long - hence building the help files, which are diverse and case-study based, and reading your additions and requests with pleasure - they keep us on our toes.
Sure, I think? From a quick look at the code, I don’t think I stop this happening at the moment: we glue on the analysis extents, but don’t make that conditional on checking whether or not there’s already a reported spike at t=0. Unless I’ve misunderstood what you’re saying.
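If it helps, the conditional gluing could be as simple as something like this sketch (hypothetical function and names, not the actual codebase):

```python
def add_extents(spikes, start, end):
    """Glue the analysis extents onto a list of reported spike
    positions, but only when they are not already present."""
    out = list(spikes)
    if not out or out[0] != start:
        out.insert(0, start)
    if out[-1] != end:
        out.append(end)
    return out

# A spike already reported at t=0 is not duplicated:
print(add_extents([0, 2205], 0, 44100))  # [0, 2205, 44100]
print(add_extents([2205], 0, 44100))     # [0, 2205, 44100]
```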