The authors derive optimal convergence rates on the function values for gradient-related descent methods and inexact gradient methods with fixed step sizes, applied to smooth and strongly convex functions. The results are obtained via an elementary variable metric approach, in which each step of the perturbed method is interpreted as an exact gradient step in a suitably chosen metric. Compared with existing results, the proofs offer a more direct route to convergence rate estimates for perturbed gradient methods, given the convergence rate of their exact counterparts.
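For context, a brief sketch of the standard setting; the notation below is illustrative and not taken from the paper. Let $f$ be $L$-smooth and $\mu$-strongly convex, and consider a fixed-step inexact gradient iteration of the generic form
\[
x_{k+1} = x_k - \alpha \bigl(\nabla f(x_k) + e_k\bigr), \qquad \|e_k\| \le \delta,
\]
where $e_k$ models the gradient error. In the exact case ($e_k = 0$) with step size $\alpha = 1/L$, the classical function-value estimate reads
\[
f(x_k) - f(x^\ast) \le \Bigl(1 - \tfrac{\mu}{L}\Bigr)^{k} \bigl(f(x_0) - f(x^\ast)\bigr);
\]
results of the kind described above sharpen such rates to their optimal form and carry them over to the perturbed setting.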