How do you deal with different samplerates of input samples?

Is it important what sample rate the input sample comes with? Window and hop size are specified in samples, so they represent different amounts of time at different sample rates. Does that influence the results?
Sorry, I’m quite new to all this, so you’ll be seeing some fundamental understanding questions from me in the near future.

It will influence the results a tiny bit for sample rate differences in the same ballpark (44.1 vs 48 kHz, 88.2 vs 96 kHz, etc.), so if your logic relies on associating a particular bin with a particular frequency range, you’ll need to be alert to that. Generally:

  • if you’re building a big corpus, there’s something to be said for harmonising all the sample rates as part of prep, just to keep things simple
  • for objects that glue stuff together from multiple buffers (e.g. fluid.bufcompose~), no resampling happens, so using varied sample rates might get confusing
  • it’s important, if doing a calculation involving sample rate (e.g. translating the number of samples to a time unit) to use the buffer sampling rate, not Max’s (this is always true though)
  • the output buffers from the descriptors objects all assign a sample rate to their buffers which is source sample rate / hop size, so this will vary depending on the source.
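To make the bin/frequency and sample/time relationships above concrete, here’s a minimal Python sketch; the function names and the 1024-point FFT size are just illustrative, not anything from the FluCoMa API:

```python
# Frequency width of one FFT bin is sample_rate / fft_size, so the
# same bin index maps to different frequencies at different rates.
def bin_to_hz(bin_index, sample_rate, fft_size):
    return bin_index * sample_rate / fft_size

# Converting a position in samples to seconds: always use the
# buffer's own sample rate, not the host's.
def samples_to_seconds(n_samples, buffer_sample_rate):
    return n_samples / buffer_sample_rate

# Sample rate assigned to a descriptor output buffer:
# source sample rate divided by the hop size.
def descriptor_buffer_sr(source_sr, hop_size):
    return source_sr / hop_size

# Bin 10 of a 1024-point FFT lands in a different place per rate:
print(bin_to_hz(10, 44100, 1024))  # 430.6640625 Hz
print(bin_to_hz(10, 48000, 1024))  # 468.75 Hz
```

So a 44.1 vs 48 kHz source shifts that bin by almost 40 Hz, which is the kind of drift to watch for if bins are tied to frequency ranges in your logic.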

Ok, good information. I’m looking for an external to do sample rate conversion. I only know command line tools (e.g. SoX) and how to do it in Java. Do you know if something already exists?

Do you use Reaper? That has excellent conversion and a good batch processor.

Or, if you want to do it in the buffer itself, try the HIRT’s bufresample~ object; the package is available in the Max Package Manager.

No, I’m kind of stuck with Ableton Live.

Thanks, that’s it.

if you’re command line savvy:

ffmpeg -i input_file.wav -ar 48000 output_file.wav

to get 48000 out from whatever input (for example)
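If you have a whole folder to do, a small shell loop around that same command works; the `converted` output folder here is just an assumption, use whatever suits:

```shell
# Resample every .wav in the current folder to 48 kHz,
# writing the results into a separate "converted" folder.
mkdir -p converted
for f in *.wav; do
  [ -e "$f" ] || continue   # skip the literal "*.wav" when nothing matches
  ffmpeg -i "$f" -ar 48000 "converted/$f"
done
```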

I came up with this patcher to transform any input buffer format to mono / 44.1 kHz.
