The target-embedding autoencoder (TEA) has been successfully applied to multi-label classification (MLC), where each instance is associated with multiple labels. However, most existing TEA-based approaches focus on latent-space alignment in the encoding phase, ignoring the output bias induced by overfitting during training. To address this issue, we propose a new approach named Self-Knowledge Distillation from TEA (SKDTEA), which replaces the latent-space alignment of TEA-based solutions with self-knowledge distillation in a simple yet effective manner. Unlike conventional self-knowledge distillation in multi-class learning, SKDTEA exploits the relationship between label smoothing and knowledge distillation. Specifically, an auxiliary module is designed to reconstruct the ground-truth targets; its recovered outputs serve as knowledge in a learned multi-label smoothing manner. The distillation process then provides an effective regularization that alleviates overfitting during training. To the best of our knowledge, this is the first attempt to introduce self-knowledge distillation into TEA-based approaches for MLC. Experimental results demonstrate that the proposed method significantly outperforms well-established MLC approaches.
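For concreteness, below is a minimal PyTorch-style sketch of the kind of objective described above: a main predictor fits the hard multi-label targets, while an auxiliary branch reconstructs the ground-truth label vector and its (detached) sigmoid outputs act as learned soft targets for self-distillation. The layer sizes, the `SKDTEASketch` name, the label-autoencoder form of the auxiliary branch, and the weights `alpha` and `beta` are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SKDTEASketch(nn.Module):
    """Illustrative sketch of an SKDTEA-style training objective."""

    def __init__(self, feat_dim: int, num_labels: int, hidden_dim: int = 256):
        super().__init__()
        # Main branch: instance features -> label logits.
        self.predictor = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, num_labels),
        )
        # Auxiliary branch (assumed form): a small autoencoder over the
        # ground-truth label vector; its imperfect reconstruction acts as
        # learned multi-label smoothing, i.e. the "recovered outputs"
        # used as knowledge.
        self.aux = nn.Sequential(
            nn.Linear(num_labels, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, num_labels),
        )

    def forward(self, x, y, alpha: float = 0.5, beta: float = 1.0):
        logits = self.predictor(x)                  # student predictions
        aux_logits = self.aux(y)                    # reconstruct targets
        soft = torch.sigmoid(aux_logits).detach()   # smoothed soft targets

        hard = F.binary_cross_entropy_with_logits(logits, y)        # fit labels
        distill = F.binary_cross_entropy_with_logits(logits, soft)  # self-KD
        recon = F.binary_cross_entropy_with_logits(aux_logits, y)   # train aux
        return hard + alpha * distill + beta * recon


# Example usage on random data (8 instances, 300 features, 20 labels).
model = SKDTEASketch(feat_dim=300, num_labels=20)
x = torch.randn(8, 300)
y = torch.randint(0, 2, (8, 20)).float()
loss = model(x, y)
loss.backward()
```

Detaching the soft targets ensures the distillation term regularizes only the main predictor, while the reconstruction term alone trains the auxiliary branch; this mirrors the role of the distillation process as a regularizer against overfitting, though the actual coupling in SKDTEA may differ.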