Learned Image Compression (LIC), which compresses images with neural networks, has advanced rapidly in recent years. Hyperprior-based LIC models now achieve higher performance than classical codecs. However, LIC models are too heavy, in both computation and parameter count, to deploy on edge devices. To address this, prior work has applied structural pruning to LIC models, but these methods either cause noticeable performance degradation or neglect to find an appropriate pruning threshold for each LIC model, leaving their pruning results sub-optimal. This paper proposes a pruning-threshold search on the hyperprior module for LIC models of different qualities. Our method removes most of the parameters and computations while preserving the performance of the unpruned models: it removes at least 49.8% of parameters and 28.5% of computations from Channel-Wise-Context-Model-based models, and 29.1% of parameters from Cheng-2020 models.