Chroma, finding things in stuff, FluCoMa and PyPI

finding things in stuff

At the last in-person FluCoMa plenary I presented some work combining FluCoMa, Python and composition. The title of that talk, “Finding things in stuff”, has permeated my interests since then, and I set out to develop a set of tools that would last me, drawing together all of the techniques and implementations I found useful into one streamlined workflow. Last night I uploaded the first stable version of that software, ‘ftis’, which I am resigned will likely just be interesting and useful to me :slight_smile: but which is nonetheless now public and relatively usable by anyone interested. Publishing it to the public repository of pip-installable modules is a way of closing off the project for a while, so I thought I would share it here.

example

In light of the recent geek-out session on Thursday afternoon, I thought I would share an example of what ftis can do and how it pulls together various technologies to provide a fast and convenient way of scripting the data-analysis portion of what many people seem to be doing right now (I’m thinking of you @spluta and your frustrations). In this case, I’m going to produce a FluCoMa-compatible dataset from the statistics of a chroma analysis.

# import analysers
from ftis.analyser.descriptor import Chroma
from ftis.analyser.audio import CollapseAudio
from ftis.analyser.stats import Stats

# import scaffolding
from ftis.corpus import Corpus
from ftis.process import FTISProcess
from ftis.common.io import write_json
from pathlib import Path
# import the necessary flucoma materials
from flucoma import dataset

src = Corpus("~/corpus-folder/corpus1")
out = "~/corpus-folder/chroma-dataset"

process = FTISProcess(source=src, folder=out)

stats = Stats(numderivs=2, spec=["stddev", "mean"]) # keep a named reference so we can access its output later
process.add(
    CollapseAudio(),
    Chroma(fmin=40),
    stats
)

if __name__ == "__main__":
    process.run()

    # Now that ftis has completed, pack the data into a fluid dataset
    ds = dataset.pack(stats.output) # pack() marshals the stats output into the FluidDataSet format (named ds to avoid shadowing the dataset module)
    dataset_path = Path(out) / "dataset.json" # create an output path
    write_json(dataset_path.expanduser(), ds) # write to disk

This script analyses every audio file inside the src folder and produces a dataset containing statistics of the chroma analysis. It uses librosa for the chroma analysis, flucoma for the slicing, pydub for the exploding/collapsing of audio and scipy for the statistics. All of this is multi-threaded and pretty fast: running the above script on a corpus of 1000 items completes for me in about 3 minutes. ftis also implements both high-level and low-level caching, so individual analyses are stored between runs. This means you can modify the original code or add additional processes after the analysis, re-run the script, and not have to wait on analysis that was already done. It also stores each step of the analysis as simple .json files, and if those exist it uses them to immediately load the whole analysis in one go.
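To make that caching idea concrete, here is a toy illustration of the pattern (this is the shape of the idea, not ftis's actual internals):

# Toy illustration of the json-caching pattern described above — not ftis internals
import json
from pathlib import Path

def cached(cache_path: Path, analyse, *args):
    if cache_path.exists():
        # a previous run left its result on disk, so load that as a chunk
        return json.loads(cache_path.read_text())
    result = analyse(*args)  # otherwise do the (expensive) analysis
    cache_path.write_text(json.dumps(result))  # and store it for next time
    return result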

For example, the output of running that script looks like this in the file system:

[screenshot of the output folder structure]

That dataset.json file can be read into SC, Pd, Max or wherever you prefer, and all of the analysis files in .json format can be read back into new ftis processes.
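For the curious, a FluidDataSet JSON file is just a "cols" count plus a "data" mapping of identifiers to flat value lists, so it is easy to poke at from Python too (the path below matches the example above):

import json
from pathlib import Path

path = Path("~/corpus-folder/chroma-dataset/dataset.json").expanduser()
ds = json.loads(path.read_text())

print(ds["cols"])  # dimensionality of each entry
first = next(iter(ds["data"]))  # grab the first identifier
print(first, ds["data"][first][:5])  # peek at its first few values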

I’ve also incorporated more novel dimension-reduction algorithms, clustering, and even a web server that boots up to let you visually browse your data once it is made (very, very alpha still though). The main interface, however, is currently terminal-based and looks a little like this:

[animated gif of the terminal interface]

If anyone with some Python chops wants to try it out, that would be awesome. I found with ReaCoMa that when other people used it, it became much more robust and useful for both myself and the wider community. My hope is that, although ftis is very specialised right now, it could reach a wider audience of people who want to work robustly on big datasets without breaking their existing workflow and comfortable ways of musicking.

If you do want to use it, it requires you to run pip install ftis and then look at the examples directory in the source code. I’d be more than up for working with anyone who has a specific goal in mind that they think ftis might help them achieve.


Thinking speculatively about chromagrams here, I am always a bit sad about the equal-temperament implications, and the octave assumptions too. Imagine a chromagram where the pitch classes were more discrete… I’m not sure yet, but I think there is a lot of potential beyond the built-in expectations of octaves, a fundamental, and equal ratios, no?

I wrap librosa up inside an ftis.analyser class (roughly as sketched after this list). This gives you the option of specifying:

  1. number of chroma
  2. number of octaves
  3. number of bins per octave (I think mostly what you are looking for) :slight_smile:
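Here is a minimal sketch of how that wrapping might look. The parameters are the real librosa.feature.chroma_cqt ones; the wrapper function itself is illustrative, not ftis's actual code:

import librosa

def chroma_frames(path, fmin=40, n_chroma=12, n_octaves=7, bins_per_octave=36, tuning=None):
    y, sr = librosa.load(path, sr=None, mono=True)  # load at the file's native sample rate
    return librosa.feature.chroma_cqt(
        y=y, sr=sr,
        fmin=fmin,  # lowest analysed frequency in Hz
        n_chroma=n_chroma,  # number of pitch classes
        n_octaves=n_octaves,  # analysis span in octaves
        bins_per_octave=bins_per_octave,  # CQT resolution within each octave
        tuning=tuning,  # deviation from A440, in fractions of a bin
    )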

Perhaps I should have mentioned the other analysers I’ve implemented so far too:

  • spectral flux
  • EBUR128 loudness
  • MFCC (librosa and flucoma implementations)
  • CQT
  • Agglomerative Clustering
  • HDBSCAN
  • KDTree
  • Uniform Manifold Approximation and Projection (UMAP)
  • Standardisation
  • Normalisation
  • Robust scaling with the interquartile range
  • FluidOnsetslice
  • FluidNoveltyslice
  • a fluid.bufstats~-like analysis machine

Actually, writing this out shows my biases towards certain algos and techniques. Quite fun!


You should be able to tune the chroma how you like, n’est-ce pas? If you know the piece has a G fundamental, why not tune the steps to it? Why not do a 19-TET analysis?

All I know is tritaves should not be accommodated. That is a garbage idea. Sorry, Max Mathews!

Yep, you can also set the tuning and the fmin.

Not to beat a dead 12-TET horse, but I was doing some pitch analysis on noisy FM synths. With even the tiniest bit of noise in the synth, FluidPitch (or any pitch analysis) has no idea what to do and starts spitting out questionable results. But a combination of chroma and centroid or mel bands would actually tell me something, while keeping harmony/pitch in the equation.
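Something along these lines, say (a hedged sketch: the file path is a stand-in, and stacking the centroid under the chroma rows is just one way you might combine them):

import numpy as np
import librosa

y, sr = librosa.load("path/to/noisy-fm-synth.wav", sr=None, mono=True)
chroma = librosa.feature.chroma_cqt(y=y, sr=sr)  # (12, n_frames) of pitch-class energy
centroid = librosa.feature.spectral_centroid(y=y, sr=sr)  # (1, n_frames) of brightness
# crude normalisation so the centroid row lives on a similar scale to the chroma
features = np.vstack([chroma, centroid / centroid.max()])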

I was more thinking of 2 things (in a way to challenge @a.harker's accusations of me being the equal-temperament police on the fretless)


As for pitch, I get good results with IQR and confidence weighting… There is even a tutorial coming!
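For a rough idea of what that means in practice, here is a sketch (my own illustration, not the tutorial's code): keep only frames above some confidence threshold, then discard outliers beyond 1.5× the interquartile range before taking a median. The 0.8 threshold is an arbitrary placeholder.

import numpy as np

def robust_pitch(pitch, confidence, threshold=0.8):
    pitch = np.asarray(pitch, dtype=float)
    confidence = np.asarray(confidence, dtype=float)
    kept = pitch[confidence >= threshold]  # drop low-confidence frames
    if kept.size == 0:
        return np.nan  # nothing confident enough to report
    q1, q3 = np.percentile(kept, [25, 75])
    iqr = q3 - q1
    inliers = kept[(kept >= q1 - 1.5 * iqr) & (kept <= q3 + 1.5 * iqr)]
    return np.median(inliers)  # robust central pitch estimate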

More soon.

Another idea that emerged overnight: in our paper on mapping we had good results with an autoencoder instead of MFCCs; in particular it seemed to behave well with pitched material as well as timbral material. At the time I did not understand what an AE was…

@groma I re-read our paper and it says that we can customise the 2 hidden layers but does not say much more about the innards. I went to the Python code and, if I understand it well, it runs through all FFT frames (at 1024), takes the magnitudes (513), and runs a first hidden layer of 513, then one of 13 (then back to 513), but as my Python is not super good I’m not 100% certain… maybe @jamesbradbury will want to have a look there too!

I tried implementing that part of the paper in FluCoMa land when we had the last plenary, but not in Python. It could be good to have a dimension-reduction autoencoder example somewhere in ftis, perhaps. I honestly have such good results with UMAP that I haven’t strayed :slight_smile:


Having discussed with @groma, it seems that for now one could use a large MelBands analysis and run an autoencoder on it… the structure was 513(-513-13-)513, and one could use, let’s say, 240 melbands and do something similar instead of MFCCs. I remember that in the research for the paper, harmonically related material ended up nearer each other, then material in fifths/fourths, which was exciting…
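To pin the shape of that idea down, here is a speculative PyTorch sketch of such an autoencoder on mel-band frames. The 240-band input and 13-dimensional bottleneck come from the discussion above; everything else (layer sizes, activations, the random stand-in data) is my assumption, not the paper's actual code:

import torch
import torch.nn as nn

class MelAutoencoder(nn.Module):
    def __init__(self, n_bands=240, hidden=240, bottleneck=13):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_bands, hidden), nn.ReLU(),
            nn.Linear(hidden, bottleneck),  # the 13-d latent space
        )
        self.decoder = nn.Sequential(
            nn.Linear(bottleneck, hidden), nn.ReLU(),
            nn.Linear(hidden, n_bands),
        )

    def forward(self, x):
        z = self.encoder(x)  # reduced representation, used in place of MFCCs
        return self.decoder(z), z

model = MelAutoencoder()
frames = torch.rand(1000, 240)  # stand-in for real mel-band analysis frames
recon, latent = model(frames)
loss = nn.functional.mse_loss(recon, frames)  # train by minimising reconstruction error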

anyway, another trail to explore

Can I ask for help here, @jamesbradbury? I tried to install it but it seems to be polluting my drive with stuff all over… is there a way to install everything I need in one folder, or at least the example code? Old people like me don’t like their hard drive being filled with stuff :slight_smile:


In what sense is it polluting the drive? Do you mean by running the examples or installing the module itself?

If it’s polluting your drive at install, I hate to say it, but that is a problem only you can fix by correctly configuring your Python environments. I can also address it by being more thorough in the documentation and making sure people are aware of this and know how to do it.

A typical workflow would go something like this:

python3 -m venv tutorial-env to create a new env called tutorial-env.

cd tutorial-env to get into that folder.

You then need to ‘activate’ the environment like so:

source tutorial-env/bin/activate

You would then install ftis here rather than into your system install (which will throw a red error saying you don’t have permission). Inside the virtual environment you have full permissions and can control dependency resolution and interpreter versions.

conda has its own set of methods for working with environments, and from what I understand it makes very large ones because it overzealously includes things like Qt, which may be what you are experiencing.

For conda though it looks like:

conda create --name learn-ftis python=3.8 to create the env

and then activate it by running conda activate learn-ftis

followed by pip install ftis

Sorry, that's not the cleanest answer, but I will update the docs to reflect the steps needed for a clean and functional install without assuming as much prior knowledge.


I just meant that dumb Python users like me will pollute their drives at install.

So, as a (dumb) conda user, I’ll try it with a temp environment and report back. This is so excitingly full of potential that I am eager to get it working :slight_smile:


Exactly. Python is dumb and dependency management is a nightmare… other languages get this very right, while Python is confusing. Your experience is not singular, so I now have it in mind to make installation super smooth, and perhaps to add some scripts for getting around this easily for those who have been baptised into the command line but not the woes of environments.


You’re so kind… the adjective was for the users, not Python… I was expecting more of an “ok boomer” :smiley: Explaining to my kids that, in years at least, I’m nearer to them than to the boomers is quite funny!
