I think he’s just running a batch process over many files, i.e. a single process run over and over.
I could be wrong, but I think that’s exactly the kind of problem this wouldn’t be good for: analyzing each individual file probably doesn’t take very long, there will just be many, many of them, so the cost of spawning and synchronizing a thread per file is likely greater than the cost of the analysis itself.
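To make that overhead concrete, here is a minimal sketch (the `tiny_task` function is a hypothetical stand-in for a cheap per-file analysis, not anything from FluCoMa) comparing running a tiny job inline many times versus spawning and joining a fresh thread for each one:

```python
import threading
import time

def tiny_task():
    # Stand-in for a very cheap per-file analysis step.
    sum(range(100))

N = 2000

# Run the tiny task inline, N times.
start = time.perf_counter()
for _ in range(N):
    tiny_task()
inline_time = time.perf_counter() - start

# Spawn (and join) one thread per task.
start = time.perf_counter()
for _ in range(N):
    t = threading.Thread(target=tiny_task)
    t.start()
    t.join()
threaded_time = time.perf_counter() - start

print(f"inline:   {inline_time:.4f}s")
print(f"threaded: {threaded_time:.4f}s")
```

On a typical machine the threaded loop comes out much slower, because thread creation and joining dwarfs the work itself; that overhead only pays off when each job is substantial.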
Way back around the first plenary (or maybe the second) there was some talk of a ‘pie-in-the-sky’ solution of having the FluCoMa Mac Pro set up as a server that people involved in the project could send jobs to, letting them render in a more powerful environment. That can obviously get complicated in terms of moving loads of samples around, but it’s another possible approach.
@tremblap also mentioned a cool idea of having a “robot” that keeps an up-to-date analysis of your samples at all times (kind of like what most DAWs do with their render files). If you add new files, or specify a new analysis algorithm, it would then update only what needs updating, so at all times you have current “analysis files” to work with.
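A minimal sketch of how such a “robot” could decide what needs re-analyzing, assuming a modification-time check per file (the `analyze` and `update_analyses` names are hypothetical placeholders, not a real FluCoMa API):

```python
import json
import os
from pathlib import Path

def analyze(path):
    # Placeholder for a real descriptor analysis; here, just the file size.
    return {"bytes": os.path.getsize(path)}

def update_analyses(sample_dir, cache_file):
    """Re-analyze only files that are new or changed since the last run."""
    cache = {}
    if os.path.exists(cache_file):
        with open(cache_file) as f:
            cache = json.load(f)

    for path in Path(sample_dir).glob("*.wav"):
        key = str(path)
        mtime = path.stat().st_mtime
        entry = cache.get(key)
        # Skip files whose cached entry is still fresh.
        if entry is None or entry["mtime"] != mtime:
            cache[key] = {"mtime": mtime, "analysis": analyze(key)}

    with open(cache_file, "w") as f:
        json.dump(cache, f)
    return cache
```

Run on a watched folder (or on a timer), only new or modified files get re-analyzed, so the “analysis files” stay warm without redoing the whole batch; swapping the analysis algorithm would just mean invalidating the cache.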