Neural architecture search (NAS) has recently emerged as a prominent research topic: by generating deep neural network architectures automatically, it makes artificial intelligence techniques easier to apply and reduces the demand for expert knowledge. However, most existing NAS methods rely heavily on black-box controllers to generate candidate architectures, and consequently suffer from poor interpretability and low search efficiency. In this paper, we propose Disentangled Neural Architecture Search (DNAS), which addresses these two issues with a disentangled NAS controller and an efficient dense-sampling strategy. Specifically, DNAS learns disentangled factors of network architectures by explicitly encouraging the latent factors to be independent. This not only yields semantic interpretability but also allows us to conveniently identify the promising regions of the representation space that correspond to high-performance architectures. We further propose a dense-sampling strategy that conducts targeted architecture search within these promising regions to accelerate the search process. DNAS has several attractive features: 1) it successfully learns semantic representations of architectures, including operation selection, skip connections, and layer order; 2) it speeds up neural architecture search by more than 13× through dense sampling over the disentangled factors; 3) it achieves higher accuracy at lower computational cost: DNAS achieves state-of-the-art performance of 94.16% on NAS-Bench-101, and 22.7% top-1 test error on ImageNet in 1.6 GPU-days.
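The abstract states only that the latent factors of architectures are encouraged to be independent and that promising latent regions are then sampled densely; the paper's actual objective and controller are not reproduced here. The sketch below is a minimal illustration of both ideas under the assumption of a beta-VAE-style penalty on an architecture autoencoder; all names (ArchVAE, ARCH_DIM, LATENT_DIM, BETA, dense_sample) are hypothetical and not the paper's API.

```python
# Hypothetical sketch (not the paper's code): encouraging independent latent
# factors of an architecture encoding with a beta-VAE-style objective.
import torch
import torch.nn as nn
import torch.nn.functional as F

ARCH_DIM = 56    # e.g., a flattened one-hot encoding of ops + adjacency
LATENT_DIM = 8   # small latent so individual factors stay interpretable
BETA = 4.0       # >1 strengthens the independence pressure on the latents

class ArchVAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(ARCH_DIM, 128), nn.ReLU())
        self.mu = nn.Linear(128, LATENT_DIM)
        self.logvar = nn.Linear(128, LATENT_DIM)
        self.dec = nn.Sequential(
            nn.Linear(LATENT_DIM, 128), nn.ReLU(), nn.Linear(128, ARCH_DIM))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: sample z while keeping gradients.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.dec(z), mu, logvar

def loss_fn(recon, x, mu, logvar):
    # Reconstruction keeps the code informative; the KL term (scaled by BETA)
    # pulls the posterior toward an isotropic prior, penalizing correlated
    # latent dimensions and thereby encouraging disentangled factors.
    recon_loss = F.binary_cross_entropy_with_logits(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_loss + BETA * kl

def dense_sample(best_z, n_per_code=8, radius=0.1):
    # Hypothetical dense-sampling step: once promising latent regions are
    # identified, sample densely around the codes of the top architectures;
    # the decoder then maps these candidates back to architectures to evaluate.
    noise = radius * torch.randn(best_z.size(0) * n_per_code, LATENT_DIM)
    return best_z.repeat_interleave(n_per_code, dim=0) + noise
```

In this reading, interpretability comes from inspecting how decoded architectures change as each latent dimension is varied in isolation, and the search speedup comes from restricting candidate generation to neighborhoods of latent codes already known to decode into high-performance architectures.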