In the first part of the paper we consider an accelerated first-order optimization method for convex functions with $L$-Lipschitz-continuous gradient that automatically adapts to problems satisfying the Polyak-{\L}ojasiewicz condition or that are strongly convex with parameter $\mu$. In these cases the method attains linear convergence with factor $\frac{\mu}{L}$ when $\mu$ is unknown. If the problem is strongly convex and $\mu$ is known, the method attains linear convergence with factor $\sqrt{\frac{\mu}{L}}$. Otherwise, the method converges at the rate $O(1/k^2)$. The second part generalizes the method to problems that admit alternating minimization and proves the same asymptotic convergence rates. We also consider the approach called Adaptive Catalyst, which allows the convergence rate to be improved to $O(1/k^2)$, and provide an experimental comparison of this approach with AGM, Alternating AGM, APDAGD, and Sinkhorn's algorithm on the dual of the discrete entropically regularized optimal transport problem. The work attempts to explain why Alternating AGM converges faster than AGM and Adaptive Catalyst despite their common asymptotic theoretical rate $O(1/k^2)$. The hypothesis is that Alternating AGM adapts to strong convexity; it was tested on quadratic functions, on which Alternating AGM also showed faster convergence.
Comment: 15 pages, in Russian