Multi-objective optimization is a prevalent challenge in deep learning. Robust multi-objective optimization methods that can train networks by simultaneously optimizing multiple conflicting loss functions are still lacking, even though applications span a wide range of deep neural network branches such as multi-loss, multi-task, multi-modal, and cross-modal learning. In this paper, we develop MAdam, a multi-objective extension of the well-known Adam optimization algorithm. MAdam is a classical population-based approach that uses the gradient information of multiple objectives to accelerate convergence of the population toward Pareto-optimal solutions. The method applies a non-dominated sorting algorithm to retain selected population members and improve diversity across the landscape. The performance of MAdam is evaluated on the standard ZDT test functions as a proof of concept. Promising results show the capability of this approach to converge toward an estimated Pareto front and to generate a well-distributed set of non-dominated solutions.
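
To make the high-level recipe concrete, the sketch below is a minimal, hypothetical Python/NumPy reading of the described procedure on ZDT1: each population member takes one Adam step along a randomly weighted combination of the two objective gradients, and non-dominated sorting then selects the survivors. The scalarization weights, hyperparameters, and finite-difference gradients are our assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def zdt1(x):
    """ZDT1 benchmark: two conflicting objectives to minimize on [0, 1]^n."""
    f1 = x[0]
    g = 1.0 + 9.0 * np.sum(x[1:]) / (len(x) - 1)
    return np.array([f1, g * (1.0 - np.sqrt(f1 / g))])

def num_grad(x, k, eps=1e-6):
    """Central-difference gradient of objective k at x (stand-in for autodiff)."""
    grad = np.zeros_like(x)
    for i in range(len(x)):
        xp, xm = x.copy(), x.copy()
        xp[i] += eps
        xm[i] -= eps
        grad[i] = (zdt1(xp)[k] - zdt1(xm)[k]) / (2.0 * eps)
    return grad

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return np.all(a <= b) and np.any(a < b)

def nds_select(F, n_keep):
    """Peel off successive non-dominated fronts until n_keep indices are kept."""
    remaining, kept = list(range(len(F))), []
    while remaining and len(kept) < n_keep:
        front = [i for i in remaining
                 if not any(dominates(F[j], F[i]) for j in remaining if j != i)]
        remaining = [i for i in remaining if i not in front]
        kept.extend(front)
    return kept[:n_keep]

rng = np.random.default_rng(0)
n_var, pop, steps, lr = 30, 40, 150, 0.02
b1, b2, eps = 0.9, 0.999, 1e-8

X = rng.uniform(0.01, 1.0, size=(pop, n_var))
M, V = np.zeros_like(X), np.zeros_like(X)  # per-member Adam moment estimates

for t in range(1, steps + 1):
    # One Adam step per member along a randomly weighted combination of the
    # two objective gradients (this scalarization scheme is an assumption).
    W = rng.uniform(size=pop)
    G = np.stack([w * num_grad(x, 0) + (1 - w) * num_grad(x, 1)
                  for x, w in zip(X, W)])
    M = b1 * M + (1 - b1) * G
    V = b2 * V + (1 - b2) * G ** 2
    step = lr * (M / (1 - b1 ** t)) / (np.sqrt(V / (1 - b2 ** t)) + eps)
    X_off = np.clip(X - step, 1e-4, 1.0)

    # Environmental selection: non-dominated sorting over parents + offspring.
    X_all = np.vstack([X, X_off])
    F_all = np.array([zdt1(x) for x in X_all])
    keep = nds_select(F_all, pop)
    X = X_all[keep]
    M, V = np.vstack([M, M])[keep], np.vstack([V, V])[keep]

front = sorted((zdt1(x) for x in X), key=lambda f: f[0])
print("approximate Pareto front (f1, f2), first 5 points:")
for f in front[:5]:
    print(f"  ({f[0]:.3f}, {f[1]:.3f})")
```

In this sketch the Adam moment estimates travel with each lineage through selection, so surviving members keep their adaptive step-size state; how MAdam actually handles this bookkeeping is not specified here and would follow the paper's own design.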