I was trying to find the exact code snippet and couldn't quickly, but I'm pretty sure it's in this thread where @weefuzzy worked out an approach where you specify the amount of variance you want to keep (e.g. 95%) and PCA gives you however many dimensions that takes. That beats manually requesting a given number of dimensions from PCA over and over.
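It's not @weefuzzy's snippet, but here's a minimal sketch of the same idea in Python with scikit-learn (where a fractional `n_components` means "keep this much variance") just to show the shape of it; the `mfccs` array is a placeholder for your own analysis data:

```python
import numpy as np
from sklearn.decomposition import PCA

mfccs = np.random.rand(500, 13)        # placeholder: 500 frames x 13 MFCC coefficients

pca = PCA(n_components=0.95)           # ask for 95% of the variance...
reduced = pca.fit_transform(mfccs)     # ...and PCA decides how many dimensions that takes

print(reduced.shape[1], "dimensions retained")
print(pca.explained_variance_ratio_.cumsum())  # cumulative variance per component
```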
All that being said, in my tests I always got better results just training the classifier directly on the raw MFCCs, but the approach @tremblap outlines above (pre-cooking, then PCA->UMAP->MLP) is more conventional.
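If it helps to see the two approaches side by side, here's a rough Python sketch with scikit-learn and umap-learn standing in for the actual tools; `mfccs`, `labels` and the parameter choices are placeholders, not what I actually ran:

```python
import numpy as np
import umap
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

mfccs = np.random.rand(500, 13)          # placeholder MFCC frames
labels = np.random.randint(0, 4, 500)    # placeholder class labels

X_train, X_test, y_train, y_test = train_test_split(mfccs, labels, test_size=0.25)

# Approach 1: train the MLP classifier directly on the raw MFCCs
raw_clf = MLPClassifier(max_iter=1000).fit(X_train, y_train)
print("raw MFCCs:", raw_clf.score(X_test, y_test))

# Approach 2: "pre-cook" (here just standardising), then PCA -> UMAP -> MLP
scaler = StandardScaler().fit(X_train)
Xtr, Xte = scaler.transform(X_train), scaler.transform(X_test)

pca = PCA(n_components=0.95).fit(Xtr)    # keep 95% of the variance
Xtr, Xte = pca.transform(Xtr), pca.transform(Xte)

reducer = umap.UMAP(n_components=3).fit(Xtr)   # unsupervised UMAP down to 3D
Xtr, Xte = reducer.transform(Xtr), reducer.transform(Xte)

mlp = MLPClassifier(max_iter=1000).fit(Xtr, y_train)
print("PCA->UMAP->MLP:", mlp.score(Xte, y_test))
```

With random placeholder data both scores will hover around chance, obviously; the point is just the order of operations in the second pipeline versus feeding the MFCCs straight to the classifier.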