Offline decomposition of an audio file into files containing the frequencies of partials

Hi all,

I am new to FluCoMa and I am using the Max/MSP package. I was wondering whether it is possible to decompose audio offline into a number of wave files, each containing the frequency of one sinusoidal partial of the audio?

There is no direct way - but there is a roundabout one. If you use fluid.bufsines~ and ask for 1 partial only, and do that iteratively on the residual, you will get more and more material that null-sums to the original. Will it be hi-fi? No. To do that hi-fi, I’d use @a.harker 's FrameLib, which has an NRT mode now.
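Outside Max, the "peel one partial at a time" idea can be sketched in plain Python. To be clear, this is only an illustration of the iterative-residual approach, not FluCoMa's actual algorithm (fluid.bufsines~ does a proper STFT-based sinusoidal model); the naive DFT peak-pick here is just for readability:

```python
import math

def dominant_sinusoid(x, sr):
    """Naive DFT peak pick: return (freq, amp, phase) of the strongest bin."""
    n = len(x)
    best, best_mag = (0.0, 0.0, 0.0), -1.0
    for k in range(1, n // 2):
        re = sum(x[i] * math.cos(2 * math.pi * k * i / n) for i in range(n))
        im = -sum(x[i] * math.sin(2 * math.pi * k * i / n) for i in range(n))
        mag = math.hypot(re, im)
        if mag > best_mag:
            best_mag = mag
            # bin frequency, amplitude, and starting phase of that bin
            best = (k * sr / n, 2 * mag / n, math.atan2(im, re))
    return best

def peel_partials(x, sr, n_partials):
    """Iteratively estimate the loudest sinusoid and subtract it from the
    residual; the peeled partials plus the final residual null-sum back
    to the input (up to estimation error)."""
    residual = list(x)
    partials = []
    for _ in range(n_partials):
        f, a, ph = dominant_sinusoid(residual, sr)
        sine = [a * math.cos(2 * math.pi * f * i / sr + ph)
                for i in range(len(residual))]
        residual = [r - s for r, s in zip(residual, sine)]
        partials.append((f, a, sine))
    return partials, residual
```

On real audio you would do this per STFT frame with interpolated peak estimates; on a whole file in one window, as here, it only works cleanly for stationary test tones.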

thank you

Can mister @a.harker maybe provide some hints as to how this would be done in FrameLib?

I’ve spent the whole day checking out the tutorials and I get the basics of it and I get the NRT mode but I have no idea how to even start decomposing the audio into sines and frequencies.

It’s not fully possible to do a decomposition with phase info in an easy way with the present object set: whilst you can get a list of spectral peaks from fl.peaks~, you can’t get any info about phase, which you would need for a reconstruction that cancels. You also don’t have any data about how partials continue or track between frames at that point. I have an object that does tracking on Max messages, but it’s not public (or FrameLib-based - it’s just Max messages), so whilst FrameLib might have more objects to do this kind of thing in future, it doesn’t right now.

You might be able (in theory) to build the full analysis chain from raw framelib objects, but that is quite a low level DSP task, and you’d need to devise a specific analysis model. I’m not even sure the full task is possible and it’s probably not something I’d attempt.

Another option is to use SPEAR to do the sinusoidal decomposition and export the partials data as txt (SPEAR also enables some filtering to remove really short partials or other undesirable partials). Then use that txt data to render the sine wave wav files. It will sound more like “additive synthesis” than “IFFT”, but perhaps useful for you.
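For the rendering step, here is a rough Python sketch of parsing SPEAR's text export and additively rendering one partial. The parser assumes the layout of SPEAR's "par-text-partials-format" export (a header ending in a `partials-data` line, then per partial one index/length/time-span line followed by one line of time/freq/amp triples) - check your own export against this before relying on it:

```python
import math

def parse_spear_partials(text):
    """Parse a SPEAR par-text-partials-format export (layout assumed,
    see note above): returns a list of partials, each a list of
    (time, freq, amp) breakpoints."""
    lines = iter(text.strip().splitlines())
    for line in lines:
        if line.strip() == "partials-data":
            break
    partials = []
    for header in lines:                      # "index npoints tstart tend"
        vals = next(lines).split()            # flat time/freq/amp triples
        pts = [(float(vals[i]), float(vals[i + 1]), float(vals[i + 2]))
               for i in range(0, len(vals), 3)]
        partials.append(pts)
    return partials

def render_partial(pts, sr):
    """Additively render one partial's (time, freq, amp) breakpoints to
    samples, linearly interpolating freq/amp and accumulating phase."""
    t0, t1 = pts[0][0], pts[-1][0]
    n = int((t1 - t0) * sr)
    out, phase, j = [], 0.0, 0
    for i in range(n):
        t = t0 + i / sr
        while j < len(pts) - 2 and pts[j + 1][0] <= t:
            j += 1
        (ta, fa, aa), (tb, fb, ab) = pts[j], pts[j + 1]
        w = (t - ta) / (tb - ta) if tb > ta else 0.0
        out.append((aa + w * (ab - aa)) * math.sin(phase))
        phase += 2 * math.pi * (fa + w * (fb - fa)) / sr
    return out
```

Summing `render_partial` over all parsed partials gives you the additive resynthesis; writing each partial's samples to its own wav file gives the per-partial files asked about above.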

Indeed, and the same can be done with fluid.bufsinefeature~… the rendering is the hard part.

Maybe I was misunderstood.

I’m not trying to reconstruct the sound. All I need is the frequency curve (pitch line) of each partial (not the sine sounds themselves), stored in its own wav file, so I can “play” them along with the sound in Max in an mc framework and send them to the pitch of synths/VSTs of my own choosing.
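For what it's worth, writing a frequency curve into a wav file is easy to prototype outside Max too. A minimal Python sketch using the stdlib `wave` module - the scaling scheme (frequency divided by an assumed `max_hz` so it fits the -1..1 sample range) is my own convention, so you would rescale back on playback, e.g. with a [*~ 20000] after the buffer reader:

```python
import struct
import wave

def write_freq_curve_wav(freqs, path, sr=44100, max_hz=20000.0):
    """Store one partial's frequency track (a list of Hz values, one per
    hop) as a mono 16-bit PCM wav. Each sample is freq / max_hz clipped
    to -1..1, so full scale corresponds to max_hz."""
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)        # 16-bit samples
        w.setframerate(sr)
        frames = bytearray()
        for f in freqs:
            v = max(-1.0, min(1.0, f / max_hz))
            frames += struct.pack("<h", int(v * 32767))
        w.writeframes(bytes(frames))
```

Note the track is written at one value per analysis hop, so to stay time-aligned with the source you would either play it back at (sample rate / hop size) speed or upsample it to audio rate first.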

OK, so then I recommend doing what the helpfile of fluid.bufsinefeature~ does:

  1. you do the analysis and get a buffer of N freqs and N mags
  2. you resynthesise them as you scrub that buffer (you get the info and play along at whatever speed). You can even interpolate values, although voice allocation is not done, so you might get surprising voice jumps.

There is a voice allocator coming, but it won’t work with buffers yet. Soon. I just need time and brain space. Probably in the new year?

Will try this weekend.

thanks for the great work, keep it up


Actually, I couldn’t find the example I’m referring to … we have sooooo many helpfiles :slight_smile:

… so I hacked a short demo here


----------begin_max5_patcher----------
1393.3oc2ZkrbiaCD8rzWAJcbJaUDKbKmr+.xoI2RM0TPTfZfC2BInsblZ72
dvBosFaHQHKHMN4fIMg.YiWu75tA42mOawp5srtEfeC7mfYy997YyzCoFX1v
0yVTR2lUP6zSaQE6g5U2s3JyOIXaE5gKyV1UvWydB.CBBzGFmCesdFx655vz
wAq5K4UELg9gheYv5dw3nvgQanhruwq170VVlvrTwIAKCtBjDGpNAQ5qPnkA
fuLbSlmi3wFl4NVT1WH3YeiVUwJ53apnEK.eQM2eLet5vUWD7m3G7iRPZ7Sh
9nf+hZ55UzpM1.crUPCOVPGkpQabfF6gISiY8B5cgxRVWGcC6Mvj1zT738zh
dVG.BP.Lf.BAQfXPBHUZ0sA+HqvGczvOR6rGSfZadb5z3+ch8rkxE0JV6S1P
CYbvx50rWAj2ht75JQNMSOw.qHdz2W5SZV0Ze4i3vyv2VbhVWkFnOgHX0ILY
m6Qs953+id8AU9UtG8bEXQtzqWn9GdkTfiOy07tlB5ipYqt6vcDUEsT+DWba
KW9H7W3mbMY0Zg8SnGJQqAikztgRMHbZWuSgh4vNfv+K4.FhB++lG39nFyYT
QeqjWLuk82VLbjX+vCRPwFcol.LN5bQCtmHs0zLatkjvi.cAGJRyTRSPf9LN
7E340hUJ4aWW+PkrbEafg3GS0.sQJwDDf9HTk1mrZ8v9EvAC0kF9Q.wYOlUv
rhZjWQch4TzGByLuZMa6SfR5FvMJA0Yu3LbhmHkLUhmDjpRPBCIWNcv93iiQ
KwXYiBi+PAuhkU2WI1Ef6pJ7D+LLLznJz45BCFowtP7yeRlR1lgNzS95o5rO
wglD5Zy8gsylTzd0zlFmXChdh0FOjYkj7hA7BlfcCSPEhVPGsroCbSAuSvpr
llBiNhpawG.voDkYL13wBwNvZqJ4R+mOQdqTLLPScGWvqqr.XzwvVgljvNJU
agSfNh2yCEspbwCxQiB8aZpQN5KXdpU8BgcK5ytvMzVYo4BV6WYUzUEutgkS
n4sPSfLbfxhLd7RtuIOPumkW2VB1zyE1zCAdh6BmZ1cDy9ChuvbWq5yykssp
J6vBHgGyFgcfvWR5tDzwN.xmaS78aYm.w6ooOXjmfroouQHGbQf7diZgvKVT
KBi90E0NlJRsKGw3zHqkRheq9I0WURpsywvcpjTUahuCluqqmawJG6hQNmWv
ddOUxK54qWNx08je16sTS78PJaSsY3nKGmlATx37NoUeXSddBbSdtnSJT4Zs
CfBHIfa5p6ayXZFd4OK4C5YUYONjeWxHVwE8qYlVxjnugQ+qcy3OU+IQ9oXu
wrgIZEIIwk9SNoh81WvUKqoPswjVpq0OgOFGmjXCN+EkIT6M7GsrxUEzGu9V
5mu9VoQtSVU0mEs8kcW+613W8SRCiB.hP6regu6jF5aR6j9pWWpVvpw+YkjI
ZX34N3+BdQzqYRcPEUWn+KyQsGo6Lo510rVMzsZX7ojUozrI4fSRxpxamTzC
KuSRPXWfnGji5Y.mPNpZYOYAodS4S6tfMKmSS0k3fjd0xwSNlpN6lVzImCOy
PWPMA4CKYrKfzGAAgtDnq04PeaIcRzZOMn24XbxIhbN7eCwthZ+6+hc0V6cT
icRgiOGn1MtpyRpLhKgwwdHJlPbgYJ7bDJ4tn8tSEwE+4w02oIIjqYVO4hEf
tPO3ibLXWvD9rv7iRbUzmLHcg2Q8Vr7OHcS+RNGwjZQOUImPeT9fKDbnyRxr
wJlO6bqt.wzocfL8BRaZtm01Mb2ZYJaM9tZ8zStReIuxbo9SvQ1x+87w4aFg
1JaZUH6Xsu0rERaiLuEO8W2TaUOeP1RzJEotsa01M00L7MMo6Ne9Ol+u.eAZ
Ij.
-----------end_max5_patcher-----------

that’s that right there!!!


The SPEAR approach does voice allocation, if that ends up being important.

:star_struck: :heart_eyes: :pray: