What should it be like to make music with loads of recorded sound, with no plausibility hindrance?
Live performance is to me a process of ‘sound sculpting’ where the performer navigates through a system/instrument/assemblage of materials/technologies/relationships. So a corpus of samples would exist as a virtual 3-D mass/entity which I could navigate during the course of a performance using either physical manipulation (sculpting, like clay), or perhaps sound-descriptor matching in more than 2 dimensions driven by analysis of vocal input. Actually, having more than 3 dimensions would be interesting, as there would be the ability to move along many different planes at once based on a variety of inputs. This input would not be something I would be thinking about (who thinks about the spectral centroid of their voice while singing?) but would follow a ? model (intuitive, enactive, non-linear).
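One way to read the descriptor-matching idea: analyse both the corpus and the live input into small descriptor vectors and navigate by nearest neighbour. A minimal sketch in Python – all sample names and descriptor values here are made up, and a real system would normalise each dimension (Hz, dB and 0–1 values are not comparable raw) and use far more descriptors:

```python
import math

# Toy corpus: each (hypothetical) sample name maps to a descriptor vector,
# e.g. (spectral centroid in Hz, loudness in dB, spectral flatness 0..1).
# In practice these would come from an analysis pass over the sample files.
CORPUS = {
    "bell_01":  (3200.0, -18.0, 0.10),
    "voice_ah": (1100.0, -12.0, 0.05),
    "noise_sw": (5000.0, -20.0, 0.80),
    "cello_c2": (400.0,  -15.0, 0.03),
}

def nearest(query, corpus, k=2):
    """Return the k corpus samples closest to the query descriptor vector."""
    def dist(v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(query, v)))
    return sorted(corpus, key=lambda name: dist(corpus[name]))[:k]

# One descriptor frame analysed from live vocal input (values made up):
live_frame = (1000.0, -13.0, 0.04)
print(nearest(live_frame, CORPUS))  # ['voice_ah', 'cello_c2']
```

For a corpus of thousands of sounds the linear scan would be replaced by a spatial index (e.g. a k-d tree), and the query would be a continuous stream of frames, so the "position" in the mass moves as you sing.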
– lazy ideas –
I want to be able to resynthesize all of the work of Elvis Presley only using samples of Mahler Symphonies.
timbre matching of input to output
control a “synthesis” engine using cv
gesture mapping to timbre mapping space
press some buttons…sound comes out
- Negotiating various performances of “similar music” (i.e. performances of the same piece, or using the same piece of equipment, etc…), being able to calculate and extract interference patterns. (e.g. being able to extract/listen to only the sections that were “similar”, or sections that were “different”). Being able to do this in (non)realtime.
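A crude sketch of the "interference pattern" idea, assuming the two performances are already time-aligned and reduced to one feature value per analysis frame (the alignment itself, e.g. via dynamic time warping, is the hard part and is not shown). All names and numbers are illustrative:

```python
def interference(seq_a, seq_b, threshold=0.1):
    """Label each frame pair 'similar' or 'different' by feature distance.

    seq_a, seq_b: equal-length lists of per-frame feature values
    (e.g. loudness), one per analysis frame of each performance.
    """
    return ["similar" if abs(a - b) <= threshold else "different"
            for a, b in zip(seq_a, seq_b)]

def extract(seq, labels, keep="similar"):
    """Keep only the frames whose label matches, e.g. to audition
    just the sections where the two performances agree (or disagree)."""
    return [x for x, lab in zip(seq, labels) if lab == keep]

perf_a = [0.50, 0.52, 0.90, 0.20, 0.21]
perf_b = [0.51, 0.50, 0.30, 0.22, 0.20]
labels = interference(perf_a, perf_b)
print(labels)   # ['similar', 'similar', 'different', 'similar', 'similar']
print(extract(perf_a, labels, "different"))  # [0.9]
```

The same frame labels could drive a realtime gate (mute the "similar" frames, pass the "different" ones), which is what makes the non-realtime and realtime versions of the idea the same computation.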
- Be able to generate predictive versions of “the future” from a sufficiently large set of audio. (multi-level and multi-dimensional (microdetail-to-gestural-to-formal) generation and resynthesis).
The ultimate sound data set is just one tiny part of a whole cultural background. No need for sounds afterwards, just a personal, unique control; a huge WaveNet including your emails for the past 30 years.
I work with a lot of field recordings and it takes hours to listen to and catalogue things. I’d like software to do the leg work so I can do something more fun.
Recorded sound often feels like a solid chunk of stone – can we make it fluid?
I want them to be organized in a city. I want to enter their doors, I want to meet their neighbors, I want to have them walk, I want to have them mate and have children, I want them to…
listen to as many as you can, pick the ones you choose
lose all of them, record new ones.
Sounds suggest themselves for selection in real-time, as I play, presented in some sensible spatial relationship, based on what else is going on, what has happened. They make themselves grabbable.
Transformations adapt to the qualities of my touch, in relation to the sound(s) at hand. They become malleable.
I work easily and fluidly with high-level abstractions – computer, make it dirgier – and the machine works out how to turn this into sensible DSP
I’d like to come and start using a combination of ( in no preferred order ):
– gestures ( moving hands/arms, making shapes, pointing at stuff );
– verbal descriptions ( talked over and/or written ) of what I want;
– onomatopoeia ( screaming, mimicking what I hear in my head );
– drawing shapes ( in conjunction with any of the others ): “a kind of tonic and fresh vibe doing this and that ( me drawing stupid curves and shapes in the air or on screen ), and then blabla…”
And then seeing suggestions in order of relevance appearing on a timeline following my actual rendering of the idea; me clicking on each chop and seeing other suggestions.
** Subtitle: “Real sound modelling” **
- The true sonic analog of procedural graphics – that’s sort of like procedural sound, but not
- timbre copying but with variation
- timbral time shaping (morphology) (true divorce of timbral and temporal characteristics with finely mapped temporal events)
- modelling of temporal trajectories (multi-dimensional analysis) through analysis and gestural resynthesis
- what’s in between the gaps in the corpus
- meaningful (musical) control of temporal and timbral manipulation
- no artefacts
- musical segmentation of long sound sequences
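On "musical segmentation of long sound sequences": one standard route is a novelty curve – compare what came just before each point with what comes just after, and put boundaries at the peaks. A toy sketch over a 1-D feature stream (real segmentation would use multi-dimensional features and adaptive thresholds; all values here are invented):

```python
def novelty(features, w=2):
    """Novelty curve: distance between the mean of the w frames before
    and the w frames after each position in the feature stream."""
    curve = []
    for i in range(w, len(features) - w):
        past = sum(features[i - w:i]) / w
        future = sum(features[i:i + w]) / w
        curve.append(abs(future - past))
    return curve

def segments(features, w=2, threshold=0.3):
    """Candidate segment boundaries: frame indices where the novelty
    curve exceeds the threshold and is a local maximum."""
    curve = novelty(features, w)
    bounds = []
    for i in range(1, len(curve) - 1):
        if curve[i] > threshold and curve[i] >= curve[i - 1] and curve[i] >= curve[i + 1]:
            bounds.append(i + w)  # shift back to feature-frame indexing
    return bounds

# A feature stream (say, loudness) with one clear change at frame 5:
feats = [0.1, 0.1, 0.1, 0.1, 0.1, 0.9, 0.9, 0.9, 0.9, 0.9]
print(segments(feats))  # [5]
```

"Musical" (as opposed to merely acoustic) segmentation then becomes a question of which features feed the curve and at what time scale w operates – microdetail, gesture, or form.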
- all sounds would be analyzed and their aural features be available as mean values, as well as continuous data flows
- the sounds would be represented in a 3-dimensional space I could navigate with some VR system
- I could instantly change how the three dimensions are ordered, and the ‘similar’ sounds would cluster in the 3D space
- by approaching a cloud of sounds, I could trigger them with many methods (chords, arpeggios, rhythmically synced onsets, grain swarms, etc.)
- building sound streams from many disparate, but aurally related sources, while keeping a trace of their origin - this will become important in later steps of the process
- output would be written as symbolic notation, not yet as audio streams, to allow for post-editing and modification of sounds
- ability to search for sounds in the result and replace with sounds of other ‘similarities’
- rendering into high-order, high-density ambisonics images with precise movement patterns for each sound family
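The clustering step in the list above could be sketched with a tiny k-means over 3-D descriptor vectors, assuming each sound has already been reduced to three normalised descriptor means (all names and numbers below are made up):

```python
import random

def kmeans3d(points, k=2, iters=20, seed=1):
    """Tiny k-means over 3-D descriptor vectors, so that 'similar'
    sounds end up in the same cluster (cloud) of the navigable space."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each point to its nearest centroid
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[j].append(p)
        # move each centroid to the mean of its cluster (keep it if empty)
        centroids = [tuple(sum(xs) / len(xs) for xs in zip(*cl)) if cl else centroids[j]
                     for j, cl in enumerate(clusters)]
    return clusters

# Hypothetical descriptor means (centroid, flatness, loudness), normalised 0..1:
sounds = [(0.10, 0.20, 0.10), (0.12, 0.18, 0.11),   # two dark, pitched sounds
          (0.90, 0.80, 0.70), (0.88, 0.82, 0.69)]   # two bright, noisy sounds
clouds = kmeans3d(sounds, k=2)
print([len(c) for c in clouds])  # each cloud holds one pair
```

Reordering "the way the three dimensions are ordered" is then just picking a different triple of descriptor columns before clustering; the clouds regroup accordingly.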
I personally prefer technology to offer some sort of resistance. Without some limitations, I would not find it interesting to play with technology. It may be interesting to distinguish between “interface” limitations and “time-space” limitations. I still want interface limitations, self-imposed if necessary. We can assume we will soon reach the point (or we have already) where we have more storage than we can make use of. If we have an infinite amount of data, we should make it fun to explore it. Like a walk in the park where you find all sorts of interesting sounds. We should be allowed to create our own way to travel. Make our own maps. And our own satnav.
Like this!! Or maybe no need for organization anymore. And no audience either.
I designed an instrument called GrainCube that does this for me
GrainCube is a four-part granular processing instrument with numerous randomizing functions and modulation capabilities that allow for indescribable sonic mischief. The heart of GrainCube is a 400 MB sample map of exclusive sample material, or any sample content you want.
It’s free – try it here, if you have Reaktor of course:
But I like no artefacts!! One also needs transgression for novelty, no?
How cubic can your grains be without clicks?
I’m gonna try straight away.
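On how "cubic" a grain can be before it clicks: a rectangular (cubic) grain window keeps the raw edges of the cut, so the jump from silence into the signal is audible as a click; a tapered window such as a Hann brings both edges to zero. A small sketch (sample rate, frequency and grain positions are arbitrary):

```python
import math

def grain(src, start, length, window="hann"):
    """Cut a grain from a source buffer. A rectangular ('cubic') window
    keeps the raw edges; a Hann window tapers both ends to zero."""
    seg = src[start:start + length]
    if window == "rect":
        return seg
    n = len(seg)
    return [s * 0.5 * (1 - math.cos(2 * math.pi * i / (n - 1)))
            for i, s in enumerate(seg)]

# A source buffer: one plain sine tone at a toy sample rate.
sr = 1000
src = [math.sin(2 * math.pi * 50 * t / sr) for t in range(sr)]

rect = grain(src, 7, 64, "rect")
hann = grain(src, 7, 64, "hann")
# Edge discontinuity (jump from silence) at the grain start:
print(abs(rect[0]), abs(hann[0]))  # rect edge is nonzero -> click; hann is 0.0
```

In practice the trade-off is exactly the poster's question: the flatter (more rectangular) the window, the more of the source timbre survives, and the louder the edge clicks unless grain boundaries happen to land on zero crossings.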
I want to know more about the “interference patterns”.
musical segmentation of long sound sequences
https://www.sononym.net ??