In this paper, we propose KGenre, a knowledge-embedded music representation learning framework for improved genre classification. We construct a knowledge graph from the metadata of the open-source FMA-medium and OpenMIC-2018 datasets, requiring no extra information or annotation effort. KGenre then mines the correlations between genres from the knowledge graph and embeds these correlations into the audio representation. To our knowledge, KGenre is the first method to fuse audio with high-level knowledge for music genre classification. Experimental results demonstrate that the embedded knowledge effectively enhances the audio feature representation, and that the resulting genre classification performance surpasses state-of-the-art methods.
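The general idea can be illustrated with a minimal sketch. This is not the paper's implementation: the toy metadata, the co-occurrence-based graph construction, and the concatenation fusion below are all simplifying assumptions, chosen only to show how genre correlations mined from metadata might be embedded and fused with an audio feature.

```python
import numpy as np

# Hypothetical sketch (not the published KGenre code): build a genre
# co-occurrence graph from track metadata, turn each genre's row into a
# correlation embedding, and fuse it with an audio feature vector.

# Toy metadata: each track carries one or more genre tags (as in FMA).
tracks = [
    {"genres": ["rock", "punk"]},
    {"genres": ["rock", "blues"]},
    {"genres": ["jazz", "blues"]},
    {"genres": ["jazz"]},
]
genres = sorted({g for t in tracks for g in t["genres"]})
idx = {g: i for i, g in enumerate(genres)}
n = len(genres)

# Adjacency matrix: A[i, j] counts how often genres i and j co-occur.
A = np.zeros((n, n))
for t in tracks:
    tagged = [idx[g] for g in t["genres"]]
    for i in tagged:
        for j in tagged:
            if i != j:
                A[i, j] += 1

# Row-normalize so each row encodes one genre's correlation profile.
K = A / np.maximum(A.sum(axis=1, keepdims=True), 1)

def fuse(audio_feat, track_genres):
    """Concatenate an audio feature with the mean correlation embedding
    of the track's genres (one simple fusion strategy)."""
    kg = np.mean([K[idx[g]] for g in track_genres], axis=0)
    return np.concatenate([audio_feat, kg])

fused = fuse(np.random.randn(8), ["rock", "punk"])  # 8 audio dims + n graph dims
```

In practice a learned graph embedding (e.g. a graph neural network over the metadata graph) and a learned fusion layer would replace the normalized co-occurrence rows and the concatenation used here.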