This paper explores Higher-order Methods (HoM) for Matrix Factorization (MF) and Tensor Factorization (TF) models, which are powerful tools for high-dimensional data analysis and feature extraction. Unlike First-order Methods (FoM), which rely only on gradients, HoM exploit higher-order derivatives of the objective function; this yields faster convergence in terms of iteration count, at the price of a higher per-iteration cost. We develop efficient, implementable higher-order proximal point methods within the BLUM framework for large-scale problems. We introduce suitable objective functions, present the resulting algorithms, and report experimental results that demonstrate the advantages of our HoM-based algorithms over FoM-based algorithms for MF and TF models. In particular, we show that the reduction in iteration count achieved by our HoM-based algorithms more than offsets their higher per-iteration cost relative to FoM-based algorithms.
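For orientation, a generic higher-order proximal point step can be sketched as follows; this is the standard form from the higher-order proximal point literature, and the exact regularization and norm used within the BLUM framework in this paper may differ:

```latex
% Generic p-th order proximal point iteration: a sketch, not the exact
% scheme of this paper. For p = 1 this reduces to the classical
% (first-order) proximal point step.
x_{k+1} \in \arg\min_{x} \left\{ f(x) + \frac{H}{p+1}\, \|x - x_k\|^{\,p+1} \right\}
```

Here $f$ is the factorization objective, $H > 0$ a regularization parameter, and $p \ge 1$ the order of the method; larger $p$ typically reduces the number of iterations while making each subproblem more expensive to solve, which is the trade-off the abstract refers to.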