In this work, the authors focus on the smooth nonconvex problem $$ \min_{x\in\Bbb{R}^d}\left\{f(x)=\Bbb{E}f_\xi(x)\right\}, $$ where the randomness comes from the selection of data points and is represented by the index $\xi$. If the number of indices $n$ is finite, say $\{\xi_1,\dots,\xi_n\}$, then we speak of empirical risk minimization, and $\Bbb{E}f_\xi(x)$ can be written in the finite-sum form $$ {\Bbb E}f_{\xi}(x)\coloneqq \frac{1}{n}\sum_{i=1}^nf_i(x), $$ where $f_i(x)\coloneqq f_{\xi_i}(x)$. \par The main contributions of this study are two new methods, Q-Geom-SARAH and E-Geom-SARAH. These algorithms are obtained by combining the techniques of [L.~Lei and M.~Jordan, in {\it Proceedings of the 20th International Conference on Artificial Intelligence and Statistics}, 148--156, Proc. Mach. Learn. Res., 54, JMLR, 2017] and [L.~M.~Nguyen et al., in {\it Proceedings of the 34th International Conference on Machine Learning (ICML'17). Vol. 70}, 2613--2621, JMLR, 2017]. \par The algorithms adapt to the Polyak-Łojasiewicz (PL) constant, the target accuracy, and the variance of the stochastic gradients. \par As reported in the study, the two methods compare strictly favorably with existing ones: they are the only methods that adapt simultaneously to multiple regimes, including low vs. high precision and the PL constant $\mu=0$ vs. $\mu>0$. These properties are obtained via the geometrization technique and a careful batch-size construction. Moreover, the authors show that the obtained complexity is close to, or even matches, the best achievable one in all regimes.
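\par For context (a sketch by the reviewer, not taken verbatim from the paper), the SARAH estimator of Nguyen et al. maintains a recursive gradient estimate: with $v_0$ computed as a full (or large-batch) gradient at the start of each outer loop, the inner loop samples an index $i_t$ and updates $$ v_t \coloneqq \nabla f_{i_t}(x_t)-\nabla f_{i_t}(x_{t-1})+v_{t-1}, \qquad x_{t+1}\coloneqq x_t-\eta v_t. $$ The geometrization device of Lei and Jordan replaces a fixed inner-loop length by a random one, $N\sim\mathrm{Geom}(p)$, which simplifies the telescoping in the analysis and underlies the adaptivity discussed above; the precise sampling and batch-size schedules of Q-Geom-SARAH and E-Geom-SARAH are given in the paper itself.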