So, you need something like 3 hours’ worth of samples to analyze before you can make anything useful? Or am I wrong? Are there links to sample libraries people are accessing? All I have are some drum libraries and Emulator II sampler sounds I converted to WAV files. I don’t have a mic or a nice sound device to go around town and build up a big library. Not yet.
Hi and welcome @sleestack808
No, you certainly don’t need 3 hours’ worth of material to do anything useful with the package. It would help to know what you want to do!
I think most people are using their own audio, generally in relatively small quantities (less than 3 hours).
That said, there are a lot of datasets out there. You found a few, and on the more classical side there is the well-documented SOL (free or paid depending on size):
https://forum.ircam.fr/topics/detail/334-Sol/
but as @weefuzzy says, using one’s own dataset programmatically is exciting - it gives another view on one’s old decisions/tastes…
I need to get a microphone, but I should make my own sounds. My sound device is a Scarlett.
It’s meh. I’d love to take an instrument like a tambourine and have it be totally malleable, like a real tambourine. If I could do that with FluCoMa, I’d love it. But I’m open to all kinds of ideas. Is there a list of possible things you can do?
Dear @sleestack808
A Scarlett is a good interface to start with, if you have a good microphone. I don’t know if you can borrow one to start with, but on a percussive instrument, mic proximity can yield interesting and diverse results. I don’t know what type of music you enjoy either, so let me know a bit more about what inspires you, from the straightest to the craziest, and I’ll try to find examples.
As for what can be done with FluCoMa… a list is almost impossible to draw up, as it adds features to software environments that are already quite powerful.
Max/Pd/SC were missing integrated segmenters, splitters, descriptors, and dataset tools. We made them to complement what is already in those very rich environments.
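To make ‘segmenter’ a little more concrete, here is a minimal sketch in SuperCollider of slicing a file at onsets with the FluCoMa objects. It is hedged: the argument names are from memory and the threshold is arbitrary, so check the FluidBufOnsetSlice help file for the real signature.

```
(
// hedged sketch: find onset slice points in a sound file
~src = Buffer.read(s, Platform.resourceDir +/+ "sounds/a11wlk01.wav", action: { |buf|
	~indices = Buffer(s); // will hold one slice point (in sample frames) per frame
	FluidBufOnsetSlice.processBlocking(s, buf,
		indices: ~indices,
		threshold: 0.5, // raise for fewer, more confident slices
		action: {
			~indices.loadToFloatArray(action: { |pts|
				"found % slice points: %".format(pts.size, pts).postln;
			});
		}
	);
});
)
```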
Let me pick 2 very different examples that are taken from a few recent interviews on our podcast - I might be wrong too, as I am shooting from the hip!
- Marco Donnarumma makes agents he dances with - he started wayyyy before FluCoMa but has used some of our tools to make embedded versions (@marco.donnarumma please correct me if I’m wrong)
- Helen Bledsoe explores ways of augmenting her virtuoso flute performance practice - again, FluCoMa was part of a way in, but she is also working with other more advanced neural nets (@bledsoeflute please correct me if I’m wrong)
And the podcast list is also full of people doing corpus-driven musicking (or musicking-driven corpus poking?) not with FluCoMa, but definitely possible with the toolset. Actually, this was my motivation 8 years ago (yikes!) to get that toolset going and to get a grant application together: finding ways of doing all of that within Max/SC/Pd and not in bespoke, multi-application, not-musician-ready setups. For instance, I’d recommend Tomomi Adachi and Artemi-Maria Gioti for two very different approaches to a very similar question.
I hope this helps. It is a bit rambly, sorry for that!
What are those? I guess it’s in the docs.
I like Aphex Twin, Kraftwerk, Joy Division, Ramones, Bernard Parmegiani. Endless experimental stuff. But like I said, if I could make a percussive instrument feel real, like change in natural ways, I’d love it.
I own a bunch of lovely gear. I have an EMS Synthi getting repaired, which is why I’m into SuperCollider again. I had to fiddle with something, and I enjoy it despite it being so difficult.
Family issues made buying any more gear a problem. I’d have to settle for an SM57 for now. I hope to get more money soon to finish up my little studio.
Hello
We have similar tastes! As for software use, since FluCoMa enables you to code (almost) whatever you want, you need to know what that looks like… so I would suggest starting by playing with a finished, free product. I can think of two, but the first one requires Max/Live or Pd, I think:
- @rodrigo.constanzo SP-Tools – Machine Learning Tools for Drums and Percussion « Rodrigo Constanzo
- AudioStellar
This way you can dump in your sounds and see where these interfaces (and their assumptions) work for you, and where they irritate you, while you make music with them. Then come back to this thread and tell us more. That way, we can help you and point you in a clearer direction.
I hope this helps.
p
That really helps me understand this. AudioStellar is amazing. I love it. Pd is free, so maybe I can get SP-Tools going. So this is kind of about mapping similar sounds together into families, and allowing you to seamlessly transition between them? Is that right? Or is that just a little of it? So far I love it, thank you.
That is a good description of the original project, yes. At its core it is four steps (sketched in code after the list):
- how to split sound, in time and in archetypes
- how to then (numerically) describe the resulting segments
- then with said descriptions, how to organise/query/find similarities
- finally, how to make new sounds from a bunch of sounds considered similar
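Here is what the first three steps can look like chained together in SuperCollider, building on the slicing sketch above. Again hedged: class and argument names are from memory, the threshold and 13 MFCCs are arbitrary choices, and the FluCoMa help files are the reference.

```
(
// hedged sketch: split, describe, organise
fork {
	~src = Buffer.read(s, Platform.resourceDir +/+ "sounds/a11wlk01.wav");
	~indices = Buffer(s);
	~mfccs = Buffer(s); ~stats = Buffer(s); ~flat = Buffer(s);
	~ds = FluidDataSet(s);
	~tree = FluidKDTree(s);
	s.sync;
	// 1. split the sound in time
	FluidBufOnsetSlice.processBlocking(s, ~src, indices: ~indices, threshold: 0.5);
	~indices.loadToFloatArray(action: { |pts|
		fork {
			// 2. describe each segment numerically (here: the means of 13 MFCCs)
			pts.doAdjacentPairs { |from, to, i|
				FluidBufMFCC.processBlocking(s, ~src, from, to - from, features: ~mfccs);
				FluidBufStats.processBlocking(s, ~mfccs, stats: ~stats);
				// keep only frame 0 of the stats (the means), as one flat row
				FluidBufFlatten.processBlocking(s, ~stats, numFrames: 1, destination: ~flat);
				~ds.addPoint("slice-%".format(i), ~flat);
			};
			s.sync;
			// 3. organise: a nearest-neighbour structure over those descriptions
			~tree.fit(~ds, { "tree fitted; ready to query for similar slices".postln });
		}
	});
}
)
```

Step 4 is then whatever you make of the neighbours the tree returns: trigger them, splice them, resynthesise them.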
Because in closed software (like CataRT and AudioStellar) there are often many assumptions about what makes each of these steps ‘successful’, we wanted to create a slightly more open space (through code affordances in creative coding environments, but at least as importantly through this forum discussing them, and therefore sharing knowledge on learn.flucoma.org too) to engage with custom music coding approaches.
SP-Tools and Mosaique are both more open than AudioStellar and CataRT, but have been coded, in FluCoMa, at a higher level of assumptions than FluCoMa itself.
So for people entering the world of programmatic data mining of soundbanks for music making, I usually suggest:
- CataRT and/or AudioStellar, to start poking at fun stuff and to build a list of things that work and things to change
- then something more modular, like SP-Tools or Mosaique, to play with the moving parts of the process and clarify the problem
- (if machine listening queries are still a mystery, I suggest AudioGuide for training that part too - it is a fantastic way to see how querying is non-trivial)
- then, if all else fails (and usually a lot of cool music has been made already), embarking on a FluCoMa-driven coding approach.
Now, I also recommend the other, very opposite way: starting with bits of open code, poking at them, and seeing how to bend them. But I don’t have a clear path for that yet. I am planning a research project on that specific task for FluCoMa; more to come in the next years.
I only use SuperCollider, and now AudioStellar. I left SC and came back years later; it’s a whole new world now. I was focused on analog equipment mostly. My synth is in the shop, so I got back into SC and really love using it. So far I’ve seen a lot of dots, and I can drag my mouse over the dots and it plays sounds from a folder. Same in FluCoMa in SC. It slices audio, like ReCycle used to. It does the dots thing. You can control many parameters of a synth with a single slider, like a joystick on a Synthi, I guess. Stuff like RAVE seems to play sounds that almost pitch- and amplitude-follow the input audio. That’s all I can tell right now. Maybe a list of possible things you can do and hope to do would go a long way, with simple plain-English examples. Also, the wording is new to me, like descriptors… regressors, classifiers. Corpus must mean body. A body of sounds? Just a ‘for example’ of what all these terms mean would also be enlightening. Thank you for everything.
I have had a draft of an ‘is FluCoMa for me’ page for the last two years. I think your ideas are very good, including a glossary in plain English indeed. I hope to action this in the next few weeks (loads of moving parts here in my professional life).
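In the meantime, one concrete anchor for the vocabulary: the ‘many synth parameters from a single slider’ trick you saw is a regressor at work. Here is a hedged SuperCollider sketch, with made-up training values and argument names from memory (check the FluidMLPRegressor help file):

```
(
// hedged sketch: teach a regressor to map 1 slider value -> 4 synth parameters
fork {
	var in = FluidDataSet(s), out = FluidDataSet(s);
	var mlp = FluidMLPRegressor(s, hiddenLayers: [6]);
	var inBuf = Buffer.alloc(s, 1), outBuf = Buffer.alloc(s, 4);
	// three hand-made training examples: slider position -> parameter snapshot
	[
		[0.0, [0.1, 0.2, 0.9, 0.3]],
		[0.5, [0.7, 0.1, 0.2, 0.8]],
		[1.0, [0.3, 0.9, 0.5, 0.1]]
	].do { |pair, i|
		var a = Buffer.loadCollection(s, [pair[0]]);
		var b = Buffer.loadCollection(s, pair[1]);
		s.sync;
		in.addPoint("ex%".format(i), a);
		out.addPoint("ex%".format(i), b);
	};
	s.sync;
	mlp.fit(in, out, action: { |loss|
		"training loss: %".format(loss).postln;
		// later, on every slider move: write the slider value, read 4 params back
		inBuf.setn(0, [0.25]);
		mlp.predictPoint(inBuf, outBuf, {
			outBuf.getn(0, 4, { |params|
				"params: %".format(params).postln; // then e.g. ~synth.set(...)
			});
		});
	});
}
)
```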
I do have a question. The little apps with all the dots, like AudioStellar: how is that different from round-robin-style access to a folder of samples to play?
It is a good question.
A round robin is a curated set of samples that alternate to avoid repeating the same sample twice in a row, which sounds very artificial. If you go in circles playing in one spot of that 2D graph, it has a similar effect indeed.
But what FluCoMa is about is not having to curate them manually. Using the tools, you can find ways of making the list of those ‘neighbour sounds’ that are personal, original, surprising… by digging through your database of sounds programmatically. For instance, something like the sketch below.
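Once a FluidKDTree has been fitted, as in the earlier pipeline sketch, the neighbour list comes from a query rather than from hand curation. Hedged again: the argument order of kNearest is from memory, and the 13 dimensions assume the MFCC means from before.

```
// hedged sketch: ask the tree for the 5 slices most similar to a query point
// (~tree as fitted in the earlier sketch; a real query would hold the
// description of the slice you just played, not zeros)
~query = Buffer.loadCollection(s, 0.dup(13), action: { |buf|
	~tree.kNearest(buf, 5, { |ids|
		"the programme’s round robin: %".format(ids).postln;
	});
});
```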
Does that make sense?
Personal in what way? Or original in what way? What is the difference from switching to different sounds? Maybe an example would help. An idea of what you would do.
I don’t know if machine learning can potentially do this, but since sound is just air pressure waves being moved by a speaker, why can’t we manipulate air pressure waves on a speaker in software without using standard audio objects like filters, oscillators, etc.? I’m talking about direct manipulation of the instructions that move the speaker cone, so that it maybe accidentally approximates acoustic events: wood creaking, cello timbre, metal being bent. Directly manipulating the instructions that move the speaker cone with these familiar waveforms at the single-sample level, in a Photoshop-style app with paintbrushes for different textures of real sounds that have been mapped, with a UPIC-style palette.
I don’t mean MetaSynth; that app just sounds like sine waves. I don’t know, it’s a thought.
Is this a stupid idea? Is it even in the scope of ML synthesis?
Actually, it is not a stupid idea, but it is not likely to be easy to control with FluCoMa.
I recommend looking at the DDSP project as an example of something very advanced working directly on audio.
In FluCoMa, some people have used our MLP to do ‘interpolating’ oscillators. That is noisy and fun, but not remotely near a cello…
I admit it’s probably something in the future.
I’m really into FluCoMa actually. I was going to use the computer to trigger samples and play external gear. I think I might use it instead of the standard PlayBuf stuff in SC. I have to figure out how to play a corpus with patterns.
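For what it’s worth, here is a hedged first sketch of that, assuming ~src and ~indices from an onset-slice pass like the ones above (names and arguments from memory):

```
(
// hedged sketch: play random slices of ~src from a Pbind
// (assumes a mono source buffer)
SynthDef(\slicePlayer, { |out = 0, buf = 0, start = 0, dur = 0.2|
	var sig = PlayBuf.ar(1, buf, BufRateScale.kr(buf), startPos: start);
	var env = EnvGen.kr(Env.linen(0.005, (dur - 0.01).max(0.01), 0.005),
		doneAction: Done.freeSelf);
	Out.ar(out, sig * env ! 2);
}).add;

~indices.loadToFloatArray(action: { |pts|
	~slices = pts.asArray;
	Pbind(
		\instrument, \slicePlayer,
		\buf, ~src,
		\slice, Prand((0..~slices.size - 2), inf), // swap for a KDTree neighbour stream
		\start, Pfunc { |ev| ~slices[ev[\slice]] },
		\dur, Pfunc { |ev| (~slices[ev[\slice] + 1] - ~slices[ev[\slice]]) / ~src.sampleRate }
	).play;
});
)
```

Replacing Prand with a stream of FluidKDTree neighbours is where the ‘corpus’ part would come in.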