Deep neural networks have been proven vulnerable to deliberately crafted adversarial examples, which raise serious safety and security concerns. Many defense approaches have been proposed to resist such threats. However, existing defenses such as pre-compression or adversarial training degrade model performance on clean images or incur heavy computational costs. In this work, we propose a plug-and-play defensive module, Gradient Sign Inversion (GSI), to defend against gradient-based attacks. Essentially, GSI inverts the direction of the gradient backpropagated through the victim model, disrupting adversarial example generation while preserving the performance of the vanilla network on genuine inputs. Specifically, an additive model based on a periodic trigonometric function is constructed by investigating the necessary conditions that a suitable defensive module must satisfy. By enforcing constraints on the defensive module, the parameters of GSI are determined and accompanied by a theoretical justification. Interestingly, we observe that the proposed GSI not only prevents gradient-based adversarial attacks but can even increase the confidence of the ground-truth label when an attack is launched, turning the attack into a defense. Source code is publicly available at https://github.com/JidaDiao/GSI.
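To make the mechanism concrete, below is a minimal PyTorch sketch of one way an additive trigonometric module can invert the backpropagated gradient while leaving clean inputs untouched. It is an illustration under stated assumptions, not the paper's exact construction: the class name `GSISketch`, the choice of frequency matched to an 8-bit quantization grid, and the amplitude/gain values are all hypothetical placeholders.

```python
import math

import torch
import torch.nn as nn


class GSISketch(nn.Module):
    """Illustrative additive trigonometric layer: h(x) = x - a * sin(w * x).

    Assuming 8-bit inputs in [0, 1], we pick w = 2*pi*255 so that genuine
    pixel values x = k/255 satisfy sin(w * x) = 0: the forward pass is then
    an exact identity on clean images.  The derivative at those points is
    h'(x) = 1 - a * w, so choosing a * w > 1 (here a * w = 2, giving
    h'(x) = -1) flips the sign of every gradient backpropagated through the
    layer.  These parameter choices are illustrative, not the ones derived
    in the paper.
    """

    def __init__(self, levels: int = 255, gain: float = 2.0) -> None:
        super().__init__()
        self.w = 2.0 * math.pi * levels  # period matches the quantization grid
        self.a = gain / self.w           # a * w = gain > 1 inverts the gradient

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x - self.a * torch.sin(self.w * x)


if __name__ == "__main__":
    layer = GSISketch()
    # Quantized "image": pixel values are multiples of 1/255, as in 8-bit data.
    x = (torch.randint(0, 256, (1, 3, 8, 8)).float() / 255.0).requires_grad_(True)
    out = layer(x)
    print(torch.allclose(out, x))       # True: exact identity on clean pixels
    out.sum().backward()                # true gradient of sum(x) w.r.t. x is +1
    print(x.grad.min().item(), x.grad.max().item())  # both ~ -1: sign inverted
```

Under these assumptions, a white-box gradient attack such as FGSM that differentiates through the deployed model would step along the inverted gradient, i.e., in the loss-decreasing direction, which is consistent with the observation above that an attack can end up raising the confidence of the ground-truth label.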