In chasing the dragon for “less latency” I’ve eyeballed going to a higher sample rate as it’s pretty low hanging fruit. Going from 44.1k to 48k would be like a “free” 8% increase in speed!
Granted, stuff might get a bit sticky, particularly in the fluid.verse~, as samples and durations (in my patches at least, though I don’t think I’m alone) are used interchangeably. I’ve been meaning to go back and do some patch hygiene to clean this stuff up anyways.
44.1k and 48k are also close enough to be perceptually interchangeable for things like segmenting based on duration: a segment that lasts 100ms at 44.1k lasts 91.875ms at 48k, which is pretty much identical by ear. Less so if you start doubling sample rates.
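For what it’s worth, the arithmetic behind those numbers (a plain Python sketch, nothing Max-specific):

```python
# A fixed number of samples lasts less real time at a higher SR.
SR_441, SR_48 = 44100, 48000

samples = 0.100 * SR_441            # 100 ms at 44.1k -> 4410 samples
ms_at_48 = samples / SR_48 * 1000   # those same 4410 samples at 48k

print(samples, ms_at_48)            # 4410 samples, ~91.875 ms
```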
Does anyone run at 48k by default? Or even higher?
Do you experience any (noticeable) CPU increases?
Any other pitfalls?
I was forced to convert during a gig’s soundcheck because I wasn’t told the Dante network on the gig was synced at an unmovable 48k. Everything worked after a few tweaks: a bit more CPU, but not noticeable in that particular patch.
I think a bunch of my stuff should be ok (barring things happening “faster”), but I’m certain in a couple places in that Cloud M4L device I showed at the last plenary, I use ms and samples, so I need to figure out a solution for what I want to do there.
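If it helps, the usual fix for the ms/samples mixing is to route every duration through the current SR instead of hard-coding a sample count anywhere. A tiny sketch (the helper names are made up, and this is Python rather than Max, but the logic ports directly):

```python
# Hypothetical helpers for keeping a patch sample-rate agnostic:
# always convert through the *current* SR, never assume 44100.

def ms_to_samples(ms: float, sr: float) -> int:
    return round(ms / 1000.0 * sr)

def samples_to_ms(samples: float, sr: float) -> float:
    return samples / sr * 1000.0

# The same 100 ms segment is a different sample count at each SR:
print(ms_to_samples(100, 44100))  # 4410
print(ms_to_samples(100, 48000))  # 4800
```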
The original Party Van was actually sample rate agnostic. Someone early on mentioned that everything broke in 48k, so I went back and made it work for other SRs (which was unpleasant since all the loopers were
Remaking Cloud so it’s SR agnostic is on my shorter list of things to try, then I’ll see after that.
I get the feeling @tutschku is already the captain of the 48kHz ship. Everything he has ever sent me comes at this nice round number.
Pete (Dowling) also mentioned in passing that some of the anti-aliasing filters might behave better at 48k as well (as in the ADC process), with 44.1k being an engineering number, not exactly a musical one.
What are you expecting to get for free (or for a bit more CPU)? Perhaps you’re attracted to 8% more time resolution with an 8% loss of frequency resolution because it’s a better tradeoff than twice the time resolution for half the frequency resolution?
However, for most of the algos you can tune the time/frequency tradeoff by controlling the window size separately from the FFT size. The window size is what produces latency. When you do a larger FFT than your window size you get higher-resolution output, but it’s equivalent to ideal interpolation: you can’t actually resolve distinct frequencies at that finer grid. The window time is the factor that controls that.
Anyway, there’s a danger that the same FFT size at a higher sample rate looks like better time resolution with no downsides, but it also covers a wider bandwidth, so that’s not the case. If you just want to trade a bit of frequency resolution for time, try reducing your window size. That way you don’t have to spend CPU on bandwidth you don’t care about.
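To put rough numbers on the window-vs-FFT-size point (a sketch using the usual rule of thumb that resolvable frequency spacing is on the order of SR/window):

```python
# Zero-padding the FFT refines the *grid*, not the *resolution*.
SR = 44100
window = 256                 # analysis window in samples
fft = 4096                   # zero-padded FFT size

bin_spacing = SR / fft       # ~10.8 Hz between output bins
resolvable  = SR / window    # ~172 Hz actually resolvable

print(bin_spacing, resolvable)
# The ~10.8 Hz bin grid is just interpolation: tones closer together
# than ~172 Hz still merge into one lobe, because the 256-sample
# window is what sets the true frequency resolution.
```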
I guess I’m mainly thinking of brute-force latency/temporal resolution. With tiny analysis windows the frequency resolution isn’t super great anyways, so it being a bit faster would hopefully be better.
If I were to step up to even higher SRs then perhaps a move up in FFT settings would be in order for all the reasons that you mention (some of which I, obviously, would not have considered), while hopefully enjoying the I/O latency improvements (at the cost of CPU).
I think you misunderstand. If you double the SR and keep the FFT size the same, you have twice the time resolution but half the frequency resolution. If you double the FFT size to compensate, you have exactly the same resolution you started with, just over a greater bandwidth (the Nyquist frequency is doubled).
So, if you want to trade time resolution against frequency resolution (and it’s always a tradeoff), I’m suggesting you reduce the window size to favour time over frequency. Switching to 48k is in essence just a specific version of this tradeoff with a fixed ratio, whereas by changing the window size in relation to the FFT size you can choose whatever ratio of trade-off you want. To my mind that’s a better, more flexible approach.
Changing the SR can seem like you get some magical bonuses, but you don’t: you just get more bandwidth, period. The time/frequency trade-offs remain identical.
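The bookkeeping behind that, as a quick sketch (hypothetical `frame` helper, plain Python):

```python
# Frame time, bin width, and bandwidth for a given SR and FFT size.
def frame(sr, fft):
    return {
        "frame_ms": fft / sr * 1000,   # time span of one frame
        "bin_hz":   sr / fft,          # frequency resolution
        "nyquist":  sr / 2,            # bandwidth covered
    }

a = frame(44100, 1024)   # baseline
b = frame(88200, 1024)   # double SR, same FFT: half the frame time,
                         # double the bin width (worse freq resolution)
c = frame(88200, 2048)   # double SR *and* FFT: same frame time and
                         # bin width as the baseline, double the Nyquist
print(a, b, c, sep="\n")
```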
they do, but it might be perceptually less dramatic than the doubling of FFT size… I think that is what @rodrigo.constanzo means
I understand that, but if you reduce the window size you can get the same trade-off in a more controllable manner, without needing to switch sample rate.
Honestly, the “free” improvement I want is the temporal one from the I/O buffer size: 64 samples would simply happen quicker.
There are ramifications for FFT settings in loads of stuff, but 44.1 and 48 are “close enough” I think. Things will change moving much beyond that though. And if/when that day comes where I’m rocking 192k, I’ll definitely have to rethink how all of that works.
Just stay at 44.1k and make your window size 58 or 59 samples (with a hop related to that) and you should get the same benefit. In this case you could also take (for example) 50 samples, which is still more than half of 64, but noticeably shorter than 64 samples at 48k.
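Checking that suggestion numerically (a Python sketch of the same arithmetic):

```python
# How many 44.1k samples span the same real time as 64 samples at 48k?
equiv = 64 * 44100 / 48000
print(equiv)                # 58.8 -> round to 58 or 59 samples

# And the actual durations in milliseconds:
print(64 / 48000 * 1000)    # ~1.333 ms at 48k
print(59 / 44100 * 1000)    # ~1.338 ms at 44.1k -- essentially the same
print(50 / 44100 * 1000)    # ~1.134 ms -- noticeably shorter still
```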
Or, in other words - you are switching to 48k to take smaller time slices - so why not just take smaller time slices in your current SR?
It’s the Max/soundcard I/O setting I’m after there. Like the literal throughput latency of the ADC/DAC.
My computer can’t quite handle 32 there, and for weird-ish reasons feedback acts quite differently at that I/O size (feedback actually sounds best for me at 128, which is surprising). So I want to keep that at 64 samples, but have it take less real-world time than at 44.1k. (I’m only talking about going to 48k here.)
So the idea would be to have everything in Max stay the same (window sizes, FFT settings, etc.), but it would just “happen” quicker (with some loss of frequency resolution).
My bad - reading too quickly…
Have you thought about using a bigger window size?
All the info is super useful though. I hadn’t actually considered some of the other ramifications of moving up to 88.2k+ with regards to window/FFT settings and such.