The overall "register" of TB2

Not really a bug report, but just a post with some thoughts and feedback.

So far the tools, and more importantly the workflows, for the TB2 stuff feel pitched in a much more math-y/science-y way. Granted, we’re more in ML-land now and there’s not really a way to do that that avoids science-ing it on up, but even with the more complex aspects of TB1 (NMF, HPSS, stats/derivatives/etc…), things tended to be grounded in musicianly concerns. The algorithms, concepts, and interfaces were set up so that heavy concepts were presented in a way a creative coder could use them musically/creatively. And as a result, even some of the heavier stuff (NMF) prompted a lot of really creative examples and uses from people.

All of the TB2 stuff I’ve been going through so far is built from the ground up with a different paradigm. From the object-per-function-ness of it, to the use of scikit terminology everywhere, all of TB2 feels like machine learning tools for engineers who make a bit of music on the side. That’s a bit of an exaggeration, but I did want to illustrate that it feels very different to use.

I wouldn’t describe myself as disengaged, and even though I’m not the best coder (particularly as we end up in the mathier/CSier end of the pool), I am curious and like to poke at things. So it’s hard for me to chalk this up to lack of effort, engagement, or general laziness on my part.

Granted, the first time around (TB1), it took me a bit to get going and get my head around things, but by the time we were at TB1 alpha03, I had a sense of what was happening, what the tools did, how they worked, and what I could possibly do with them (speaking very roughly here). With TB2 alpha03 I’m only marginally less confused than I was when we got TB2 alpha01, or the TB2 alpha0.05 we got just before the concert.

Here’s a more concrete example.

In the current TB2 alpha03 package there is an Examples folder. This folder has a total of roughly 32 patches (depending on how you count; I tried to count only top-level patchers). Of the ones from the TB1 era (roughly 13 of them), ten produce sound or are sound-based examples. Of the TB2 ones (19 of them), only four produce sound, with two of them being ‘play the nearest segment’-type patches, and one being a reworking of the NMF-classifier patch from TB1. The two main examples we have from TB2 are the LPT patch (which is great) and the JIT-KNN patch (again, great, but just a KNN reworking of the NMF version of the patch). Both of these patches, largely due to the structure and workflow of TB2, are very difficult to pick apart and rework, with a choke point being the legibility of descriptor/statistic manipulation and dataset creation.
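For what it’s worth, here is roughly what I understand the ‘play the nearest segment’ idea to boil down to, sketched as Python pseudocode rather than a Max patch (all the names and numbers are made up purely to illustrate the concept, not the actual objects or workflow):

```python
import numpy as np

# Conceptual sketch of a "play the nearest segment" workflow:
# 1) analyse a corpus of segments into descriptor vectors (the "dataset")
# 2) take an incoming/live descriptor vector
# 3) find the nearest stored segment and trigger its playback

# Pretend corpus: 100 segments, each described by 2 features
# (e.g. loudness and spectral centroid, already scaled)
corpus_descriptors = np.random.rand(100, 2)
segment_ids = np.arange(100)

def nearest_segment(query):
    """Return the id of the corpus segment closest to the query descriptor."""
    distances = np.linalg.norm(corpus_descriptors - query, axis=1)
    return segment_ids[np.argmin(distances)]

# Incoming analysis frame (same 2 features, same scaling)
live_frame = np.array([0.3, 0.7])
print(nearest_segment(live_frame))  # -> index of the segment you'd trigger
```

That part I can follow conceptually; it’s the descriptor/statistic plumbing and dataset creation around it in the actual patches that I find hard to read.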

Granted, we’re still “in the middle” of TB2, but by this point in TB1 we already had all of the sound-producing examples that we presently have for TB1.

What I mean to point out is that the TB1 examples, materials, and patches/helpfiles seemed to be more oriented around musicianly concerns. I know I keep coming back to this example, but even now I have little to no idea what NMF is doing, yet I have a pretty good idea of how it sounds and how it can be used. And more importantly, I could set up a patch that does that fairly quickly. I can’t say the same for even the most tame aspects of TB2.

I don’t expect any of this to make any kind of impact, particularly since the focus now is (and should be) on the second batch of composers, but I wanted to articulate this shift in feeling moving into TB2, and what that may mean for future “generic creative coder” users, who perhaps won’t have the direct line to the team, the interest, or, importantly, the time to engage with the material in a way that will allow them to use and apply these concepts.

That seems to be the nub of the issue. Remember, we’re not in a service-provision relationship, especially at the alpha stage: the point is to discover musicianly perspectives on the tools together, which is one reason it’s important to have people working with them ‘properly’ from the get-go.

The key differences from TB1 are that we’re dealing with more generic algorithms that have fewer (successful) precedents for how to present them in a musical context. The idea would be that this quite low-level foundation can be built on with musically suggestive wrappers once we’ve co-discovered what they are through practice.

FWIW, I seem to remember almost exactly the same exchanges with respect to TB1 and its interpretability, so at least we’re all consistent…


I appreciate that, and hopefully my post did/does not come across as negative at all. I just wanted to voice some feedback on what it feels like as one of the users/“creatives” involved (on the dumber end of the spectrum!).

It could be because I saw the Kadenze course, so even though the specifics of how to implement and creatively use ML-y stuff from TB2 still escape me, the overall concepts aren’t nearly as alien as something like NMF, which is a literal black box of voodoo that spits things out.

There’s obviously no sense in recreating or doing anything along the lines of the Kadenze thing, but she (the course’s instructor) does a great job of communicating the concepts in musicianly/musical ways. The details are sparse and fuzzy, but the sensation and understanding (for my brain/learning type, at least) are greater. The workflow from those videos just doesn’t apply to the TB2 paradigm, though, as the interfaces are radically different. And the interface is the place where I, personally, find the most confusion in TB2.

So to avoid souring the punch any, I will conclude by saying that I, too, look forward to co-discovering what some of these creative implementations will be.