Recent advances in deep learning have driven remarkable progress in synthetic aperture radar automatic target recognition (SAR ATR). Because SAR ATR is deployed in applications with high security risks, strong reliability and safety are expected of it to avoid security incidents. Unfortunately, deep neural networks (DNNs) have been shown to be highly susceptible to adversarial examples finely crafted by an adversary, which can cause models to deliver inaccurate predictions with high confidence. In real-world scenarios, the adversary's access to and knowledge of the target SAR systems and classifiers are severely constrained, so for the SAR ATR task the adversarial examples produced must be highly transferable across mainstream DNN classifiers. Additionally, the redundant speckle noise in SAR images is non-robust and difficult to generalize across models. To this end, in this paper we propose an effective untargeted black-box attack algorithm for SAR target recognition, called the Positive Weighted Feature Attack (PWFA), which introduces novel positive weighted features associated with the region of interest underlying the model's decision. The primary goal of PWFA is to carry out adversarial attacks by altering the features that chiefly drive the victim model's decisions; in this way, PWFA produces adversarial examples that transfer better across different models. Furthermore, we present a random masking approach to reduce the influence of speckle noise. Experiments conducted on the MSTAR dataset demonstrate that the proposed PWFA is indeed capable of creating adversarial examples with superior transferability.
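To make the two ingredients concrete, the following is a minimal NumPy sketch of one attack step that (a) weights the input gradient by the positive part of a decision-relevant saliency map and (b) randomly masks the perturbation to damp the contribution of speckle noise. It assumes a precomputed input gradient and saliency map; all names, the sign-based update, and the masking rate are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def pwfa_step(image, grad, saliency, eps=0.03, mask_rate=0.3, seed=None):
    """One illustrative positive-weighted-feature attack step (a sketch,
    not the paper's exact algorithm).

    image    : clean SAR image, values assumed in [0, 1]
    grad     : loss gradient w.r.t. the input (assumed precomputed)
    saliency : map of the model's region of interest (assumed precomputed)
    """
    rng = np.random.default_rng(seed)
    # Keep only positively weighted features so the perturbation
    # concentrates on regions that drive the model's decision.
    pos_weight = np.clip(saliency, 0.0, None)
    weighted = grad * pos_weight
    # Random masking: zero out a fraction of perturbation pixels to
    # suppress non-robust speckle-noise contributions.
    mask = rng.random(image.shape) >= mask_rate
    perturbation = eps * np.sign(weighted) * mask
    return np.clip(image + perturbation, 0.0, 1.0)
```

In practice the gradient and saliency map would come from the surrogate model used for the black-box attack; here they are stand-in arrays so the step itself can be inspected in isolation.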