Deep neural networks are vulnerable to adversarial examples crafted by applying imperceptible perturbations to benign inputs. In contrast to the great success achieved by many adversarial attacks in the white-box setting, most attacks show weak transferability in the black-box scenario. Instead of applying data augmentation to the inputs, we propose a new target-oriented method (TOM) to improve attack transferability. Specifically, we select an appropriate target example for each input during preprocessing and then use information from the target class to guide the generation of perturbations at every iteration. Our approach requires only a small number of gradient computations and lightweight clipping operations, which greatly saves computing resources. Experiments on ILSVRC2012 show that our method achieves transferability comparable to attacks such as SIM and VTM at approximately one-third of the computational cost. To further demonstrate the effectiveness of the target-oriented method, we integrate it with other targeted attacks and obtain an average improvement of 3% in transferability.
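The iterative scheme described above can be sketched as follows. This is a minimal toy illustration, not the paper's actual algorithm: the function names (`target_oriented_attack`, `grad_fn`), the squared-distance surrogate loss, and all hyperparameter values are assumptions for demonstration; TOM's real target-selection rule and guidance loss are defined in the paper.

```python
import numpy as np

def target_oriented_attack(x, target, grad_fn, eps=0.05, alpha=0.01, steps=10):
    """Toy sketch: iteratively perturb x toward a pre-selected target
    example, using one gradient call and a lightweight clip per step.
    All names and hyperparameters here are illustrative assumptions."""
    x_adv = x.copy()
    for _ in range(steps):
        g = grad_fn(x_adv, target)           # gradient of a target-guided loss
        x_adv = x_adv - alpha * np.sign(g)   # step toward the target class
        x_adv = np.clip(x_adv, x - eps, x + eps)  # stay in the L_inf ball
        x_adv = np.clip(x_adv, 0.0, 1.0)     # keep a valid image range
    return x_adv

# Usage with a stand-in loss ||x_adv - target||^2, whose gradient is
# 2 * (x_adv - target); a real attack would differentiate a model loss.
x = np.full(4, 0.5)
t = np.ones(4)
adv = target_oriented_attack(x, t, lambda xa, tt: 2.0 * (xa - tt))
```

Because each iteration only evaluates one gradient and applies element-wise clipping, the per-step cost stays low, which is consistent with the resource savings claimed above.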