The switchable neural network makes it feasible to deploy a deep neural network on a specific hardware platform by adjusting its architecture without retraining parameters. However, searching for a suitable architecture to adjust is time-consuming and computation-intensive. To accelerate architecture search, this paper designs a recommender that searches and evaluates in a single round, without repetition. During training, the recommender is trained alternately with a meta network, which supplies gradients to the recommender as weak supervision. During search, the trained recommender extracts features from a specified layer's outputs and, for every layer, suggests which channels to prune. The recommender is deployed and evaluated on three common classification networks: ResNet, MobileNet-v1, and MobileNet-v2. Experimental results show that the proposed recommender obtains compressed architectures whose accuracy surpasses that of the switchable neural network, while searching faster than evolutionary-algorithm-based methods.
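
To make the search procedure concrete, the following is a minimal sketch of the recommender's inference step, assuming (hypothetically) that the recommender is a small network mapping a feature summary of a probe layer's outputs to a keep-ratio per prunable layer; the layer count, feature dimension, and channel counts below are illustrative placeholders, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# All sizes below are assumptions for illustration only.
N_LAYERS = 5                          # number of prunable layers
FEAT_DIM = 8                          # size of the feature summary
CHANNELS = [64, 128, 128, 256, 512]   # original channels per layer

# A one-layer "recommender": weights would normally come from the
# alternating training with the meta network described in the abstract.
W = rng.normal(scale=0.1, size=(FEAT_DIM, N_LAYERS))
b = np.zeros(N_LAYERS)

def recommend(features):
    """Map features extracted from a specified layer's outputs to the
    number of channels to keep in every layer (the rest are pruned)."""
    ratios = 1.0 / (1.0 + np.exp(-(features @ W + b)))  # sigmoid keep-ratios
    # Round to whole channels, keeping at least one channel per layer.
    return [max(1, int(round(r * c))) for r, c in zip(ratios, CHANNELS)]

features = rng.normal(size=FEAT_DIM)  # stand-in for the extracted features
keep = recommend(features)
print(keep)  # one suggested channel count per prunable layer
```

Because inference is a single forward pass, one round produces a pruning suggestion for every layer at once, which is why no repeated search-and-evaluate loop is needed.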