Generative adversarial networks (GANs) have been widely used to generate diverse data such as images, audio, and video. However, because the training data of GANs often contain sensitive information, GANs are vulnerable to privacy attacks on the training dataset, such as membership inference attacks (MIAs). To improve the resistance of GANs to MIAs while preserving their generation performance, we design a novel GAN framework, PGAN-KD (member Privacy protection of GANs based on Knowledge Distillation). PGAN-KD prevents the discriminator from leaking membership information about the training data by combining knowledge distillation with gradient clipping. Specifically, it adopts an extra teacher discriminator that distills knowledge and transfers it to a student discriminator, thereby preventing an attacker from indirectly obtaining private information through the generator. In addition, the teacher discriminator protects itself against MIAs through gradient clipping. To evaluate the performance of PGAN-KD, we conducted experiments on both real and simulated datasets. The results show that PGAN-KD improves the level of privacy protection by 7.8% while maintaining generation performance comparable to the baselines.
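As an illustration only, and not the authors' implementation, the two mechanisms named above can be sketched in a few lines: a student discriminator is fit to a teacher discriminator's outputs (knowledge distillation), and each update gradient is clipped to a fixed L2 norm to bound per-step information leakage. The linear student, the squared-error distillation loss, and all function names here are simplifying assumptions for exposition.

```python
import numpy as np

def clip_gradient(grad, max_norm):
    # Rescale a gradient to have L2 norm at most max_norm
    # (the same clipping idea used to limit membership leakage).
    norm = np.linalg.norm(grad)
    if norm > max_norm:
        grad = grad * (max_norm / norm)
    return grad

def distill_step(student_w, x, teacher_logit_fn, lr=0.1, max_norm=1.0):
    # One distillation step for a toy linear "student discriminator":
    # the student is trained to match the teacher's logits, so it never
    # sees the training-set membership labels directly.
    pred = x @ student_w                 # student logits
    target = teacher_logit_fn(x)         # teacher logits to imitate
    grad = 2 * x.T @ (pred - target) / len(x)  # gradient of MSE loss
    grad = clip_gradient(grad, max_norm)       # bound the update's norm
    return student_w - lr * grad
```

A brief usage example: with a fixed teacher (here a known linear map, purely hypothetical), repeated `distill_step` calls drive the student's logits toward the teacher's while every applied gradient stays within the clipping bound.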