Neural architecture search (NAS) is a field that automates the architectural design of neural networks. NAS can be modeled as an optimization problem: it describes a space of possible architectures and searches for the best-performing one. NAS, however, is not limited to finding the most task-accurate neural network; it can take other objectives into account during the search to meet different demands and requirements (model size, latency, energy consumption, etc.). Several methods have been proposed for multi-objective NAS. Most of them either rely on the scalarization of objectives or use a Pareto-based approach. Scalarization-based methods require preference weighting between objectives and can suffer from suboptimality, while the Pareto-based methods commonly used in NAS require complex operators and many parameters to tune. The goal of our work is to offer an alternative Pareto-based method that addresses these issues. In this paper, we first formulate the NAS problem as a multi-objective optimization (MO) problem. We then design a dominance-based multi-objective local search (DMLS) to solve it. Unlike other NAS methods, our approach uses a simple encoding, few parameters, and does not require preference weighting of objectives. To assess its performance, we evaluate this algorithm on a specialized multi-objective NAS benchmark, optimizing both accuracy and network complexity, and compare it to the state-of-the-art MO methods of this benchmark. Results show that our method finds significantly more Pareto-optimal solutions than NSGA-II and outperforms single-objective local search for the same evaluation budget. We conclude that DMLS provides a more practical MO approach for NAS while delivering superior performance.