The integration of Automatic Modulation Classification (AMC) technology with deep learning has led to its widespread use in applications such as smart home wireless systems and mobile devices. If such a system is maliciously attacked, it poses a serious risk to user security. Most current research on backdoor attacks targets applications in computer vision. It has been shown that AMC systems are also vulnerable to backdoor attacks, but existing methods do not transfer directly to the signal domain because modulated signals differ fundamentally from images. In this paper, we propose a backdoor attack method for deep-learning-based AMC models, in which the adversary implants a backdoor by poisoning the amplitude at random locations of a very small fraction of the training data and changing the corresponding labels. At inference time, poisoned samples containing the trigger mislead the compromised model into producing wrong outputs, while benign samples are still classified correctly. We demonstrate that the method achieves a 96.7% attack success rate by infecting only 1% of the training samples, without degrading benign accuracy. Because the trigger locations are selected randomly for each sample, the concealment of the attack is further improved; we also quantitatively evaluate the degree of discrepancy between the temporal waveforms before and after the attack.
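The poisoning procedure described above (amplitude perturbation at random positions of a small fraction of I/Q training samples, with labels flipped to an attacker-chosen class) can be sketched as follows. This is a minimal illustrative sketch, not the paper's released code: the scale factor, number of perturbed points, and array layout `(2, L)` for the I and Q channels are all assumptions.

```python
import numpy as np

def poison_iq_sample(sample, amplitude_scale=2.0, n_points=8, rng=None):
    """Implant an amplitude trigger at random time indices of one I/Q sample.

    sample: array of shape (2, L) holding the I and Q channels.
    amplitude_scale and n_points are illustrative, not the paper's values.
    """
    rng = np.random.default_rng() if rng is None else rng
    poisoned = sample.copy()
    # Locations are drawn fresh for every sample, so the trigger has no
    # fixed position an inspector could look for.
    idx = rng.choice(sample.shape[1], size=n_points, replace=False)
    poisoned[:, idx] *= amplitude_scale  # amplify both I and Q at the trigger
    return poisoned, idx

def poison_dataset(X, y, target_label, poison_rate=0.01, rng=None):
    """Poison a fraction of the training set and relabel the poisoned samples."""
    rng = np.random.default_rng(0) if rng is None else rng
    Xp, yp = X.copy(), y.copy()
    n_poison = int(len(X) * poison_rate)
    victims = rng.choice(len(X), size=n_poison, replace=False)
    for i in victims:
        Xp[i], _ = poison_iq_sample(X[i], rng=rng)
        yp[i] = target_label  # dirty-label attack: flip to the target class
    return Xp, yp, victims
```

Training any AMC classifier on `(Xp, yp)` would then implant the backdoor, while the untouched 99% of samples keep benign accuracy intact.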