What should it be like to make music with loads of recorded sound, with no plausibility hindrance?
Dimensional Space
Live performance is to me a process of "sound sculpting" where the performer navigates through a system/instrument/assemblage of materials/technologies/relationships. So a corpus of samples would exist as a virtual 3-D mass/entity which I could navigate through during the course of performance using either physical manipulation (sculpting like clay); or perhaps sound descriptor matching in more than 2 dimensions with analysis of vocal input. Actually, having more than 3 dimensions would be interesting as there would be the ability to move along many different planes at once based on a variety of input. This input would not be something I would be thinking about (who thinks about the spectral centroid of their voice while singing?) but would follow a ? model (intuitive, enactive, non-linear).
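The descriptor-matching part of this could be sketched very minimally: treat each corpus sample as a point in an N-dimensional descriptor space and pull up the nearest ones to whatever the analysed input is doing. Everything below (file names, descriptor values, the idea of four normalised dimensions) is an invented placeholder, not a real analysis.

```python
import numpy as np

# Hypothetical corpus: each sample is a vector of (made-up) audio
# descriptors, e.g. centroid / flatness / loudness / pitch, all 0..1.
corpus = {
    "bell.wav":   np.array([0.81, 0.10, 0.40, 0.70]),
    "breath.wav": np.array([0.30, 0.85, 0.20, 0.10]),
    "gong.wav":   np.array([0.55, 0.25, 0.90, 0.30]),
}

def nearest_samples(query, corpus, k=2):
    """Return the k corpus entries closest to the query point."""
    dists = {name: np.linalg.norm(vec - query) for name, vec in corpus.items()}
    return sorted(dists, key=dists.get)[:k]

# A (made-up) descriptor frame analysed from live vocal input:
voice_frame = np.array([0.60, 0.20, 0.80, 0.35])
print(nearest_samples(voice_frame, corpus))  # gong.wav is closest
```

Moving through "many different planes at once" then just means the query vector has as many dimensions as you have input analyses, without the performer thinking about any of them individually.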
– lazy ideas –
I want to be able to resynthesize all of the work of Elvis Presley only using samples of Mahler Symphonies.
timbre matching of input to output
control a "synthesis" engine using CV
gesture mapping to timbre mapping space
press some buttons… sound comes out
FutureNow
- Negotiating various performances of "similar music" (i.e. performances of the same piece, or using the same piece of equipment, etc…), being able to calculate and extract interference patterns. (e.g. being able to extract/listen to only the sections that were "similar", or sections that were "different"). Being able to do this in (non)realtime.
- Be able to generate predictive versions of "the future" from a sufficiently large set of audio. (multi-level and multi-dimensional (microdetail-to-gestural-to-formal) generation and resynthesis).
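One possible reading of the "interference pattern" idea: given two performances analysed into frame-wise feature vectors, mark each frame as "similar" or "different" by thresholding the distance between the takes. The features below are random stand-ins, and the two takes are assumed already time-aligned (real recordings would need something like dynamic time warping first).

```python
import numpy as np

rng = np.random.default_rng(0)
take_a = rng.random((100, 12))        # 100 frames x 12 features
take_b = take_a.copy()
take_b[40:60] += 1.0                  # the takes diverge in this region

frame_dist = np.linalg.norm(take_a - take_b, axis=1)
similar = frame_dist < 0.5            # boolean mask per frame

# "Listen to only the similar sections" = keep frames where the mask
# is True; "only the different sections" = invert the mask.
print("similar frames:", int(similar.sum()))
print("different frames:", int((~similar).sum()))
```

The masks are cheap to compute per frame, so the same comparison could run in realtime as well as offline.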
Multi-ulti-set
The ultimate sound data set is just one tiny part of a whole cultural background. No need for sounds afterwards, just a personal, unique control; a huge WaveNet including your emails for the past 30 years.
Rock
I work with a lot of field recordings and it takes hours to listen to and catalogue things. I'd like software to do the legwork so I can do something more fun.
Recorded sound often feels like a solid chunk of stone - can we make it fluid?
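A first, crude step towards "software doing the legwork" on a long field recording is automatic segmentation: split the file into events wherever the frame energy rises above a floor. The signal here is synthetic so the sketch is self-contained; a real workflow would load audio from file instead.

```python
import numpy as np

sr = 1000                                  # toy sample rate
sig = np.zeros(10 * sr)
sig[2*sr:3*sr] = np.sin(np.linspace(0, 440*2*np.pi, sr))    # event 1
sig[6*sr:8*sr] = np.sin(np.linspace(0, 220*4*np.pi, 2*sr))  # event 2

frame = 100
energy = np.array([np.mean(sig[i:i+frame]**2)
                   for i in range(0, len(sig), frame)])
active = energy > 0.01                     # crude activity threshold

# Turn the frame mask into (start, end) times in seconds:
edges = np.flatnonzero(np.diff(active.astype(int)))
bounds = (edges + 1) * frame / sr
print(bounds.reshape(-1, 2))               # one row per detected event
```

Each detected (start, end) pair could then be handed to a descriptor analysis for cataloguing, which is where the hours of listening actually go.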
City
• organize
• organize
• self-organize
• map
• atlas
• cartography.
-
I want them to be organized in a city. I want to enter their doors, I want to meet their neighbors, I want to have them walk, I want to have them mate and have children, I want them to…
-
listen to as many as you can, pick the ones you choose
-
lose all of them, record new ones.
AutotasteMaschine
Sounds suggest themselves for selection in real-time, as I play, presented in some sensible spatial relationship, based on what else is going on, what has happened. They make themselves grabbable.
Transformations adapt to the qualities of my touch, in relation to the sound(s) at hand. They become malleable.
I work easily and fluidly with high-level abstractions ("computer, make it dirgier") and the machine works out how to turn this into sensible DSP.
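One naive way to bridge "make it dirgier" and sensible DSP is a hand-made vocabulary mapping high-level words to parameter nudges. Every name and number below is invented for illustration; a serious system would learn such mappings rather than hard-code them.

```python
params = {"tempo_bpm": 120.0, "lowpass_hz": 8000.0, "reverb_mix": 0.2}

VOCAB = {
    # word: {parameter: multiplier} -- all values are made up
    "dirgier":  {"tempo_bpm": 0.8, "lowpass_hz": 0.5, "reverb_mix": 1.5},
    "brighter": {"lowpass_hz": 1.6},
}

def apply_word(word, params):
    """Scale each affected parameter by the word's multiplier."""
    return {k: v * VOCAB[word].get(k, 1.0) for k, v in params.items()}

# "computer, make it dirgier": tempo drops, the filter closes,
# the reverb swells.
print(apply_word("dirgier", params))
```

The interesting part of the wish is of course the machine inferring the mapping itself; the dictionary just makes the shape of the interface concrete.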
GESTICULATING
I'd like to come and start using a combination of (in no preferred order):
– gestures (moving hands/arms, making shapes, pointing at stuff);
– verbal descriptions (talked over and/or written) of what I want;
– onomatopoeia (screaming, mimicking what I hear in my head);
– drawing shapes (in conjunction with any of the others): "a kind of tonic and fresh vibe doing this and that (me drawing stupid curves and shapes in the air or on screen), and then blabla…"
And then seeing suggestions in order of relevance appearing on a timeline following my actual rendering of the idea; me clicking on each chop and seeing other suggestions.
TITLE
** Subtitle: "Real sound modelling" **
- The true sonic analog of procedural graphics: that's sort of like procedural sound, but not
That means:
- timbre copying but with variation
- timbral time shaping (morphology) (true divorce of timbral and temporal characteristics with finely mapped temporal events)
- modelling of temporal trajectories (multi-dimensional analysis) through analysis and gestural resynthesis
- what's in between gaps in the corpus
- realistic/organic
- meaningful (musical) control of temporal and timbral manipulation
- no artefacts
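The "divorce of timbral and temporal characteristics" in the list above can be illustrated at toy scale: take the time-shape of one signal and impose it on the timbre of another, so "when things happen" comes from one source and "what it sounds like" from the other. Real systems would separate spectral envelopes rather than just amplitude, but the principle of the separation is the same; all signals here are synthetic.

```python
import numpy as np

sr = 1000
t = np.arange(2 * sr) / sr

# "What it sounds like": a fixed two-partial timbre.
timbre = np.sin(2*np.pi*330*t) + 0.5*np.sin(2*np.pi*660*t)

# "When it happens": a pulsing amplitude envelope (half-wave
# rectified 2 Hz sine), standing in for the temporal morphology.
envelope = np.clip(np.sin(2*np.pi*2*t), 0, None)

# Recombine: timbre of one source, time-shape of the other.
hybrid = timbre * envelope
print(hybrid.shape)
```

Swapping either factor independently is exactly the "timbral time shaping" wish: keep the envelope, replace the timbre, or vice versa.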
CloudsOfSounds
- musical segmentation of long sound sequences
- all sounds would be analyzed and their aural features be available as mean values, as well as continuous data flows
- the sounds would be represented in a 3-dimensional space I could navigate with some VR system
- I could instantly change how the three dimensions are ordered, and the "similar" sounds would cluster in the 3D space
- by approaching a cloud of sounds, I could trigger them with many methods (chords, arpeggios, rhythmically synced onsets, grain swarms, etc.)
- building sound streams from many disparate, but aurally related sources, while keeping a trace of their origin - this will become important in later steps of the process
- output would be written as symbolic notation, not yet as audio streams to allow for post-editing and modification of sounds
- ability to search for sounds in the result and replace with sounds of other "similarities"
- rendering into high-order, high-density ambisonics images with precise movement patterns for each sound family
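The navigable 3-D space above could be sketched as a projection of per-sound feature vectors down to three coordinates, here via PCA done with a plain SVD so only numpy is needed. The feature matrix is random stand-in data; reordering or re-weighting the input features before projecting is one way to get the "instantly change how the three dimensions are ordered" behaviour.

```python
import numpy as np

rng = np.random.default_rng(1)
features = rng.random((200, 16))     # 200 sounds x 16 descriptors

# PCA by SVD: centre the data, project onto the top 3 components.
centered = features - features.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
coords3d = centered @ vt[:3].T       # one 3-D position per sound

print(coords3d.shape)                # (200, 3)
```

Sounds whose descriptors are close end up near each other in the projection, which gives the clustering-by-similarity the list asks for; the VR navigation and triggering layers would sit on top of these coordinates.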
Satnav
I personally prefer technology to offer some sort of resistance. Without some limitations, I would not find it interesting to play with technology. It may be interesting to distinguish between "interface" limitations and "time-space" limitations. I still want interface limitations, self-imposed if necessary. We can assume we will soon reach the point (or we have already) where we have more storage than we can make use of. If we have an infinite amount of data, we should make it fun to explore it. Like a walk in the park where you find all sorts of interesting sounds. We should be allowed to create our own way to travel. Make our own maps. And our own satnav.
Like this!! Or maybe no need for organization anymore. And no audience either.
I designed an instrument called GrainCube that does this for me.
GrainCube is a four-part granular processing instrument with numerous randomizing functions and modulation capabilities that allow for indescribable sonic mischief. The heart of GrainCube is a 400 MB sample map of exclusive sample material, or any sample content you want.
It's free; try it here, if you have Reaktor of course:
But I like no artefacts!! One also needs transgression for novelty. Isn't it?
How cubic can your grains be without clicks?
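On the clicks question: a raw rectangular ("cubic") grain starts and ends at nonzero sample values, and those discontinuities are exactly what clicks. The standard fix is to shape each grain with a window so it fades in and out; a minimal sketch with a Hann window (the grain itself is synthetic here):

```python
import numpy as np

sr = 1000
n = np.arange(200)
# A raw 200-sample grain; the phase offset makes its endpoints nonzero,
# which is what would click on playback.
grain = np.sin(2 * np.pi * 100 * n / sr + 1.0)

window = np.hanning(len(grain))   # Hann: fades from 0 up and back to 0
smooth = grain * window

# The windowed grain begins and ends at zero; the raw one does not.
print(float(abs(smooth[0])), float(abs(smooth[-1])))
```

So the grains can stay as "cubic" as you like in the middle, as long as the edges are tapered.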
I'm gonna try straight away.
I want to know more about the "interference patterns".