In recent years, algorithms based on convolutional neural networks (CNNs) have shown great advantages in image denoising. However, existing state-of-the-art (SOTA) algorithms are too computationally complex to be deployed on embedded devices such as mobile phones. Knowledge distillation is an effective model compression method, but research on knowledge distillation has focused mainly on high-level vision tasks, such as image classification, with little attention paid to low-level vision tasks, such as image denoising. To address these problems, we propose a novel knowledge distillation method for U-Net-based image denoising algorithms. Experimental results show that the performance of the compressed model is comparable to that of the original model at a fourfold compression ratio.
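As background, a common way to apply knowledge distillation to a regression task such as denoising is to train the compact student network against a weighted combination of the ground-truth supervision loss and a loss matching the teacher's output. The sketch below is a minimal, hypothetical illustration of that idea; the abstract does not specify the actual loss, so the MSE form, the weighting factor `alpha`, and the function name are assumptions, not the paper's method.

```python
import numpy as np

def distillation_loss(student_out, teacher_out, target, alpha=0.5):
    """Hypothetical distillation objective for a denoising student network.

    Combines (1) a supervised MSE against the clean target image and
    (2) an MSE matching the (larger) teacher network's output.
    `alpha` trades off the two terms; its value here is an assumption.
    """
    supervised = np.mean((student_out - target) ** 2)
    distill = np.mean((student_out - teacher_out) ** 2)
    return (1 - alpha) * supervised + alpha * distill

# Toy usage with 1-D arrays standing in for images:
student = np.array([1.0, 2.0])
teacher = np.array([1.0, 2.0])
target = np.array([0.0, 2.0])
loss = distillation_loss(student, teacher, target, alpha=0.5)
```

In practice both outputs would be full image tensors produced by the teacher and student U-Nets, and the loss would be minimized with respect to the student's parameters only.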