We provide a concise, self-contained proof that the Silver Stepsize Schedule proposed in Part I directly applies to smooth (non-strongly) convex optimization. Specifically, we show that with these stepsizes, gradient descent computes an $\epsilon$-minimizer in $O(\epsilon^{-\log_{\rho} 2}) = O(\epsilon^{-0.7864})$ iterations, where $\rho = 1+\sqrt{2}$ is the silver ratio. This rate is intermediate between the textbook unaccelerated rate $O(\epsilon^{-1})$ and the accelerated rate $O(\epsilon^{-1/2})$ due to Nesterov (1983). The Silver Stepsize Schedule is a simple explicit fractal: the $i$-th stepsize is $1+\rho^{\nu(i)-1}$, where $\nu(i)$ is the $2$-adic valuation of $i$. The design and analysis are conceptually identical to the strongly convex setting in Part I, but they simplify remarkably here.
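To make the schedule concrete, here is a minimal Python sketch of gradient descent with these stepsizes, assuming an $L$-smooth convex objective with stepsizes interpreted in units of $1/L$. The function names (`nu`, `silver_stepsizes`, `gradient_descent`) and the quadratic test problem are illustrative, not from the paper.

```python
import numpy as np

def nu(i: int) -> int:
    """2-adic valuation of i: exponent of the largest power of 2 dividing i."""
    return (i & -i).bit_length() - 1

def silver_stepsizes(n: int) -> list[float]:
    """First n stepsizes of the Silver Stepsize Schedule: h_i = 1 + rho^(nu(i) - 1)."""
    rho = 1 + np.sqrt(2)  # the silver ratio
    return [1 + rho ** (nu(i) - 1) for i in range(1, n + 1)]

def gradient_descent(grad_f, x0, n_iters: int, L: float = 1.0):
    """Gradient descent with the silver stepsizes, scaled by 1/L for an
    L-smooth objective (so the first stepsize is sqrt(2)/L)."""
    x = np.asarray(x0, dtype=float)
    for h in silver_stepsizes(n_iters):
        x = x - (h / L) * grad_f(x)
    return x

# Example: minimize the smooth convex quadratic f(x) = 0.5 * ||A x - b||^2.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 0.0])
grad_f = lambda x: A.T @ (A @ x - b)
L = np.linalg.norm(A.T @ A, 2)  # smoothness constant: largest eigenvalue of A^T A
x = gradient_descent(grad_f, np.zeros(2), n_iters=127, L=L)  # 127 = 2^7 - 1
```

Horizons of the form $n = 2^k - 1$ match the fractal structure of the schedule; note how the valuation formula makes the sequence self-similar, beginning $\sqrt{2},\, 2,\, \sqrt{2},\, 1+\rho,\, \sqrt{2},\, 2,\, \sqrt{2},\, \ldots$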
Comment: 10 pages, 3 figures