Deep learning has made significant strides in computer vision, particularly in image classification, largely owing to the widespread use of Convolutional Neural Networks (CNNs). However, as CNNs grow deeper and more complex, their substantial computational and storage costs make them difficult to deploy, which hinders their use on smaller mobile devices such as smartphones. This article investigates a strategy based on channel number transformation to reduce the computational and storage costs of CNNs while optimizing their performance on image classification tasks. An Adaptive Channel Number Transformation (ACT) framework is devised. Its core idea is that, in image classification, the rules governing how the model's channel numbers are transformed should be determined jointly by the number of classification labels and the classification task, with the labels fine-tuning these rules in the latter part of the model. The ACT framework is applied to the lightweight models of the ShuffleNet series, with the goal of reducing the deployment cost of CNNs: through adaptive channel number transformation, the network maintains its accuracy while reducing both the number of model parameters and the computation time.
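To make the idea concrete, the following is a minimal sketch, assuming one plausible form of the rule described above: the channel counts of the later stages of a ShuffleNet-style backbone are scaled by the ratio of the task's class count to a reference class count (ImageNet's 1000 is assumed here), with a floor on the width. The function name, parameters, and rounding scheme are illustrative assumptions, not the paper's actual method.

```python
def adaptive_channels(base_channels, num_classes, ref_classes=1000, min_width=0.5):
    """Hypothetical sketch of an adaptive channel-number rule.

    base_channels: per-stage output channels of the backbone
                   (e.g. ShuffleNetV2 1.0x: [24, 116, 232, 464, 1024]).
    num_classes:   number of labels in the target classification task.
    The later half of the stages is rescaled, reflecting the idea that the
    labels fine-tune the transformation in the latter part of the model.
    """
    # Scale factor driven by the class count, clamped to [min_width, 1.0].
    scale = max(min_width, min(1.0, num_classes / ref_classes))
    out = list(base_channels)
    half = len(out) // 2  # leave early stages untouched
    for i in range(half, len(out)):
        # Round scaled widths down to a multiple of 8 for hardware friendliness.
        out[i] = max(8, int(out[i] * scale) // 8 * 8)
    return out


# Example: a 10-class task shrinks only the later stages.
print(adaptive_channels([24, 116, 232, 464, 1024], 10))
# A 1000-class task leaves the widths unchanged.
print(adaptive_channels([24, 116, 232, 464, 1024], 1000))
```

Under these assumptions, a small-label task such as CIFAR-10 would keep the early stages at full width while halving the later ones, which is one way the parameter count and computation time could fall without touching the low-level feature extractor.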