Despite the significant advances and promising performance of deep learning-based packet loss concealment (PLC) systems, their exclusive focus on modeling acoustic features when reconstructing lost packets is insufficient to achieve smooth transitions in the reconstructed speech. To address this limitation, we propose integrating linguistic information derived from an automatic speech recognition (ASR) system as auxiliary features in the PLC system. By extracting ASR-guided representations and incorporating them through an auxiliary loss, we demonstrate a substantial improvement in the perceptual quality and intelligibility of the reconstructed speech. Evaluations on the Wall Street Journal dataset, conducted across different packet loss rates and performance metrics, further validate the effectiveness of our approach.