It’s kind of a thing, yes. See, e.g., “3.2. Tuning the hyper-parameters of an estimator” in the scikit-learn 0.24.1 documentation.
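For instance, scikit-learn’s `GridSearchCV` will sweep a parameter grid for you. A minimal sketch of what that looks like (the grid values here are arbitrary, just to show the shape of the API):

```python
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPRegressor

# Candidate hyper-parameter values to sweep (arbitrary example values).
param_grid = {
    "hidden_layer_sizes": [(2,), (4,), (8,)],
    "learning_rate_init": [1e-5, 1e-4, 1e-3],
}

# GridSearchCV fits one model per grid point and cross-validates each,
# which is exactly the "time and computation investment" mentioned below.
search = GridSearchCV(MLPRegressor(max_iter=2000), param_grid, cv=3)
# search.fit(X, y)          # X, y: your training data
# print(search.best_params_)
```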
But, still, a fleshy brain remains responsible for gathering the good-quality data to feed such a beasty. Often, the time and computation investment for this sort of thing is overkill, and a disciplined manual approach can get you somewhere useful quickly enough, once the various moving parts make more sense. In this post and the one after, I pointed Alex towards some meatier guidelines and explanations. The bottom line, though, is to start with the hidden layers set to the smallest network you can get away with (i.e. for an autoencoder, one layer the size of the reduced space you desire) and a minuscule learning rate (like 0.00001 minuscule), and not to worry overly about the other parameters until later; a sketch of that starting point follows.
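As a concrete illustration, here’s a minimal sketch of that starting point using scikit-learn’s `MLPRegressor` trained to reconstruct its own input (the bottleneck size of 2 and the toy data shape are assumptions for the example, not anything from the original post):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Toy data: 200 samples, 10 features (stand-in for your real dataset).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))

# Smallest autoencoder that can do the job: one hidden layer sized to the
# reduced space you want (here, 2), plus a minuscule learning rate.
ae = MLPRegressor(
    hidden_layer_sizes=(2,),     # the bottleneck / reduced space
    learning_rate_init=0.00001,  # start tiny; nudge it up later if training stalls
    max_iter=5000,
)
ae.fit(X, X)  # an autoencoder learns to reproduce its input
```

From there, you can grow the network or raise the learning rate once you can see what the loss is doing, and leave the remaining knobs alone until then.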