IAMC Highlights

I’ve been at the AI Music Conference the last couple of days. The presentations were all over the map, which was nice. (There seems to be a whole world of AI-generated folk music, which couldn’t be more different from what we have all been doing.) Papers are here:

https://aimc2021.iem.at/papers/

I found:

David Kant: Measuring Infinity: Autonomy in David Dunn’s Thresholds and Fragile States
and
Jérôme Nika and Jean Bresson: Composing Structured Music Generation Processes with Creative Agents

particularly excellent.

Also, on the music side, Hunter’s piece was, of course, excellent. But you’ve heard that! I also really enjoyed two works that were on the same concert as mine:

Nicola Leonard Hein: Tertiary Protentions
and
Douglas McCausland: Convergence

All concerts are here:

https://aimc2021.iem.at/musical-works/

Sam


I’ll have to check all of this out. There is definitely some work in here which will be inspiring, to say the least. I’m particularly interested in the paper session one stuff!

Been reading some papers this evening and thought I would note the ones which stood out to me as interesting.

This paper was interesting. The general vibe I get is that GANs are super cool and have been used liberally for generative image work that is actually aesthetically fascinating. The tools are there for those creatives (evidenced by things like RunwayML and several Colab notebooks that let you train on GPUs for free), while sound has lagged behind, with only a few esoteric implementations out there. SampleRNN and DDSP are by far the most accessible (I know @danieleghisi worked with it in a piece), and WaveNet is cool but I’ve never managed to get it to work.

To me it seemed like the authors have their fingers on the pulse of what people might actually want to do with this tech toward creative ends. The code they refer to is open and maintained, and the code they contribute on top of it (some additional helper scripts) is super useful. I think that in the best case scenario we might see some cool interactive stuff for generative audio synthesis. I, for one, would be fascinated to start training my own models on my esoteric samples and entirely forgo the notion of pitch being important in my deep learning model :slight_smile: Definitely going to try out some of their code.
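If anyone else fancies trying this on their own sample library, the first hurdle is usually just getting the audio into fixed-length mono chunks before any SampleRNN/DDSP-style training. Here is a rough sketch of what I mean; the folder names, sample rate and segment length are placeholders, and this is not the authors’ actual pipeline:

```python
# Rough sketch: chop a personal sample library into fixed-length mono
# segments at a low sample rate, the usual first step before feeding
# audio to a SampleRNN/DDSP-style model. Paths and parameters are
# placeholders, not anything taken from the paper.
from pathlib import Path

import librosa
import soundfile as sf

SR = 16000          # many neural audio models train at 16 kHz
SEG_SECONDS = 4.0   # fixed-length training examples
SEG_SAMPLES = int(SR * SEG_SECONDS)

src = Path("my_esoteric_samples")   # hypothetical input folder
dst = Path("training_segments")
dst.mkdir(exist_ok=True)

for wav in sorted(src.glob("*.wav")):
    audio, _ = librosa.load(wav, sr=SR, mono=True)
    # drop the ragged tail so every example is exactly SEG_SAMPLES long
    n_segments = len(audio) // SEG_SAMPLES
    for j in range(n_segments):
        segment = audio[j * SEG_SAMPLES:(j + 1) * SEG_SAMPLES]
        sf.write(dst / f"{wav.stem}_{j:03d}.wav", segment, SR)
```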

This was a really nice paper. The style is super clear, and it’s nice to hear someone who makes music with tools from our shared research community talk about how they fit into their compositional aims and aesthetic interests. In particular, section 3.0 was a nice way into thinking about the dialogues that emerge between what we want and how we can get it from technology and its interactions with material.

I liked this paper simply because I learned about a practice and a piece I hadn’t come across before :slight_smile: Also, the post-facto process of digging into the work and unpicking it with machine listening was a bit like how I think I think I compose :wink: @tremblap would appreciate the meta going on there…
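By “machine listening” I just mean the usual frame-wise descriptor digging, something like this generic librosa sketch on a hypothetical excerpt (not the analysis from the paper):

```python
# Generic illustration of "unpicking" a recording with machine listening:
# frame-wise descriptors that could then be clustered or segmented.
# The filename is hypothetical and this is not the paper's pipeline.
import librosa
import numpy as np

audio, sr = librosa.load("piece_excerpt.wav", sr=None, mono=True)

mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13)
centroid = librosa.feature.spectral_centroid(y=audio, sr=sr)
flatness = librosa.feature.spectral_flatness(y=audio)

# stack into one frame-by-descriptor matrix for later clustering/segmentation
features = np.vstack([mfcc, centroid, flatness]).T
print(features.shape)  # (n_frames, 15)
```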

Thanks for reminding me about these @spluta !


@spluta agreed, there was a wonderful range of topics - I’m glad that they took quite a broadband approach to paper selection.

Chris Kiefer’s presentation was excellent: there is some brilliant untapped potential in sticking ML stuff into tiny boxes that can be carted about without a laptop (*cough*guitar pedals*cough*).

Georgie Born was grand as usual (name dropped Owen and all :wink:): some really inspiring provocations for AI music practice moving forward. She mentioned that she’s starting(/started?) a project on the social/musical/political dimensions of AI, which should share some decent overlap with the less-techy aims of FluCoMa (and features my pal Artemis Gioti, who is a wonderful musician/researcher/person in general; her Inter_agency project also correlates nicely with FluCoMa). I think Born’s keynote will be made public soon; give me a shout if not. It’s password-protected atm, but I dunno how gatekeepy they are being about it.

@jamesbradbury thanks for that, really glad you enjoyed it!


Yeah, Jack. I thought our session was the best one, but I am biased. I think they also clumped things by topic, so naturally all of the papers in our session were things I was interested in.

As always, good papers, good talks, good results, good sounds. These are not all overlapping areas of goodness. Some ideas sounded so promising…then the music sounded like a chicken with emphysema. And not a musical chicken at that. But I appreciate how nascent this field is, and how people are just throwing things at the wall to see what sticks.


Yes, I agree about our one (similar biases, maybe…). Very relaxed: I really enjoyed the talks, it was lovely to meet the three of you, and Anna did a great job hosting.

I know what you mean. I think maybe the problem with the more explicitly interdisciplinary music stuff is that sometimes the tech is grand but the music is a bit rudimentary, or the theory is grand but the tech isn’t developed enough, or whatever. This is why collaboration is dead important, I reckon: get people with different approaches making crap together.

My new favourite musical invective!
