There are methods that can build a Bayesian network once an ordering of the variables is provided. Building on them, structural learning of Bayesian networks can be carried out by searching for the ordering that generates the best network, instead of searching directly in the space of directed acyclic graphs. In previous work, the authors proposed a simple hill-climbing algorithm defined over the space of orderings. Although it achieves results competitive with state-of-the-art algorithms such as GES, it is not very efficient when the number of variables is large. In this work, we propose a more efficient version of the method that improves its scalability in high-dimensional domains while remaining equivalent to the original. We also propose an asymptotically equivalent algorithm based not only on the orderings but also on properties of the underlying network. Furthermore, we prove the correctness of the operations used in both versions, as well as the correctness of the proposed improvements. The algorithms have been tested on a set of different domains, showing that the changes introduced lead to significant reductions in execution time, which become more pronounced as the number of variables grows.
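To make the search space concrete, the following is a minimal illustrative sketch of hill-climbing over variable orderings. It assumes a decomposable score and, purely for simplicity, single-parent families and an adjacent-swap neighbourhood; the scoring model, the swap operator, and all function names here are illustrative assumptions, not the method or operators actually used in the paper.

```python
def best_network_score(ordering, score_edge):
    """Best score of a network consistent with `ordering`.

    Toy model (an assumption, not the paper's score): each variable may
    take at most one parent, chosen among its predecessors; the empty
    parent set scores 0. With a decomposable score, the best network
    under a fixed ordering is found variable by variable.
    """
    total = 0.0
    for i, child in enumerate(ordering):
        best = 0.0  # score of the empty parent set in this toy model
        for parent in ordering[:i]:
            best = max(best, score_edge(parent, child))
        total += best
    return total

def hill_climb_orderings(variables, score_edge):
    """Hill-climbing in the space of orderings.

    The neighbourhood of an ordering is the set of orderings reachable
    by swapping two adjacent variables (a common choice in
    ordering-based search; assumed here for illustration).
    """
    current = list(variables)
    current_score = best_network_score(current, score_edge)
    improved = True
    while improved:
        improved = False
        for i in range(len(current) - 1):
            neighbour = current[:]
            neighbour[i], neighbour[i + 1] = neighbour[i + 1], neighbour[i]
            s = best_network_score(neighbour, score_edge)
            if s > current_score:
                current, current_score = neighbour, s
                improved = True
    return current, current_score

# Hypothetical edge score rewarding the chain A -> B -> C.
def score_edge(parent, child):
    return 1.0 if (parent, child) in {("A", "B"), ("B", "C")} else 0.0

order, score = hill_climb_orderings(["B", "A", "C"], score_edge)
```

Starting from the ordering B, A, C, the adjacent-swap moves lead to the ordering A, B, C, under which the chain A -> B -> C is representable and the toy score is maximised.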