Hi there -
I’ve been working with the example from Part 4 of @tedmoore’s 2D Corpus Explorer video series and had a few small questions…
The first: is there a standard way to add a new set of points to the currently existing plot? I’m sure someone has asked this before, but I’ve been having a hard time finding it documented anywhere…
The second, maybe less pressing, question: I’d like to manipulate the FluidKDTree at audio-rate. Since KDTree generally uses a buffer to compare coordinates, I’m not sure if there is a typical way of doing this. The code below is how I’ve been generating random points - but it would be nice to use waveforms to modulate this data.
See FluidPlotter’s addPoint_ and setPoint_ methods. Note that this doesn’t affect the FluidDataSet from which the plotter may have been initially created.
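A minimal sketch of what that could look like, assuming the plotter is already on screen as in the Corpus Explorer example; the variable name `~plotter` and the exact argument order of `addPoint_`/`setPoint_` are assumptions, so check the FluidPlotter helpfile:

```supercollider
(
// assumes ~plotter is a FluidPlotter already created from your DataSet
~plotter = FluidPlotter(bounds: Rect(0, 0, 400, 400));

// add a brand-new point: a unique identifier plus normalized x/y (0..1)
~plotter.addPoint_("extra-point-0", 0.25, 0.75);

// move a point by identifier (this only changes the view,
// not the FluidDataSet the plotter was made from)
~plotter.setPoint_("extra-point-0", 0.8, 0.2);
)
```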
Currently there is no way to do this. Can you describe in more detail what you’re trying to achieve? There may be a different strategy that would be more feasible at the audio rate.
So, if I run the same code again, it will generate a new FluidDataSet, populated with new points, referring to a separate concatenated reference buffer. I’m wondering if there is some kind of systematic way to blend the two sets together; maybe there’s a way to wrangle an iterative loop around .addPoint? I’m just a little uncertain about where to dig in.
I guess the thinking is that using an audio signal to control the location of the X and Y coordinates on the plotter would open up a lot of different possibilities. If it were at audio rate, I could imagine some kind of modulation across very small granules - but I suppose I would be interested in simply using an audio signal at control-rate to move the point around, as well.
When you say “run the same code again”, are you changing something, such as the audio file? You could load more audio into the one concatenated buffer and do all the analysis on all the audio at once. Or you could use FluidDataSet’s .merge method to combine multiple FluidDataSets – but check the documentation on the IDs: you’d need to make sure they don’t overlap between the two DataSets.
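A hedged sketch of the .merge route, with placeholder names (`~dsA`, `~dsB`) and identifier strings that are assumptions; the key point, per the docs, is that identifiers must not collide between the two DataSets or one set’s points will shadow the other’s:

```supercollider
(
~dsA = FluidDataSet(s);
~dsB = FluidDataSet(s);

// ...run the analysis twice, filling each DataSet, but give the second
// run distinct identifiers, e.g. "fileB-slice-0" instead of "slice-0"...

// copy ~dsA's points into ~dsB; overwrite: 0 keeps existing points
// in ~dsB if an identifier does happen to collide
~dsB.merge(~dsA, overwrite: 0);
~dsB.print;
)
```

You would then rebuild the KDTree and the plot from the merged DataSet, and keep a matching mapping from identifiers to positions in the two concatenated buffers.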
Audio rate would imply that the granules are 1 sample each, which conceptually seems like a stretch here. Using a waveform as a look-up is like the transfer function of a distortion algorithm; maybe something that would be interesting to check out.
If you want to use the audio waveform values at the control rate, check out A2K.kr. However, it will just sample (and hold, or maybe linearly interpolate) the waveform once every ControlDur.ir, which will have very little (or no) relation to our perception of the waveform (other than maybe a statistical distribution of the general loudness of the sound), so the relationship to the plot is tenuous. Have you seen this?
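For what it’s worth, here is one way that could be wired up: A2K.kr downsamples an audio signal to the control rate on the server, writes it to a control bus, and the language polls that bus to move a point on the plotter. The bus-polling routine and the point identifier ("probe") are assumptions, not part of the original example:

```supercollider
(
~xBus = Bus.control(s, 1);

{
    var sig = SinOsc.ar(0.2);          // slow audio-rate signal as a stand-in
    var x = A2K.kr(sig).range(0, 1);   // one sample taken per control period
    Out.kr(~xBus.index, x);
}.play;

// poll the bus from the language and update the plot, e.g.:
// Routine {
//     loop {
//         ~xBus.get({ |val| { ~plotter.setPoint_("probe", val, 0.5) }.defer });
//         0.05.wait;
//     }
// }.play;
)
```

The plotter update has to happen in the language (and on the AppClock, hence `.defer`), so the practical rate limit is the GUI, not the control rate.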
It’s in Max, but here’s some SuperCollider code. It might be more what you’re after?