Radio frequency (RF) fingerprinting has been used as an additional physical-layer authentication method for wireless devices. Unique fingerprints identify wireless devices and thereby prevent spoofing and impersonation attacks. With the development of deep learning (DL), many DL-based techniques have been applied to RF fingerprint identification. However, due to the openness of the wireless channel and the unexplainability of DL, such systems are vulnerable to adversarial attacks. In this paper, we investigate a hidden backdoor attack on deep learning-aided physical layer authentication, in which the adversary injects elaborately designed poisoned samples, constructed from IQ sequences, into the training dataset. In feature space, these poisoned samples are identical to triggered samples, i.e., samples patched with a trigger. We show that the hidden backdoor attack can significantly reduce the accuracy of RF fingerprint identification on patched samples.
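The trigger-patching step on IQ sequences can be sketched as follows. This is a toy illustration only, not the paper's actual trigger design: the additive low-amplitude tone trigger, its length, and its amplitude are all assumptions introduced for illustration.

```python
import numpy as np

def patch_trigger(iq: np.ndarray, trigger: np.ndarray, start: int = 0) -> np.ndarray:
    """Additively patch a short complex trigger pattern onto an IQ sequence.

    iq      : complex-valued IQ sample sequence
    trigger : short complex trigger pattern (hypothetical design)
    start   : index at which the trigger is embedded
    """
    patched = iq.copy()
    patched[start:start + len(trigger)] += trigger
    return patched

# Toy IQ sequence standing in for a captured transmission.
rng = np.random.default_rng(0)
iq = rng.standard_normal(256) + 1j * rng.standard_normal(256)

# Hypothetical low-amplitude complex tone used as the trigger.
trigger = 0.1 * np.exp(2j * np.pi * 0.05 * np.arange(16))

patched = patch_trigger(iq, trigger)
print(patched.shape)                              # same length as the clean sequence
print(np.allclose(patched[16:], iq[16:]))         # samples outside the trigger region unchanged
```

An adversary would mix such patched samples (with a chosen target label) into the training dataset; a model trained on the poisoned set then misidentifies any device whose signal carries the trigger.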