Convolution operations account for roughly 90% of the computation in a deep neural network, so accelerating convolution can broaden the use of deep learning in engineering applications. In this paper, we propose a method for implementing convolution on an FPGA. By exploiting the FPGA's shift registers, the proposed method reduces the time needed to fetch data while sliding the kernel window. Furthermore, it requires no extra memory, which makes it superior to the classic sliding-window method and the widely used im2col method.
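To make the shift-register idea concrete, the following is a minimal behavioral sketch (not the paper's actual RTL design) of a line-buffer front end: pixels stream in one per "cycle", and a full KxK kernel window is available every cycle using only K-1 line buffers plus KxK window registers, with no data duplication in memory. The function name `stream_windows` and all parameters are illustrative assumptions, not identifiers from the paper.

```python
from collections import deque
from copy import deepcopy

def stream_windows(image, K=3):
    """Simulate a shift-register sliding window: each incoming pixel is
    pushed through a chain of (K-1) line buffers of depth W, so the taps
    give the same column from the previous K-1 rows without re-reading
    the image. Yields (row, col, window) for each valid window position,
    where (row, col) is the bottom-right pixel of the window."""
    H, W = len(image), len(image[0])
    # (K-1) line buffers, each W deep: shift registers in hardware
    line_buffers = [deque([0] * W) for _ in range(K - 1)]
    # KxK window registers
    window = [[0] * K for _ in range(K)]
    for r in range(H):
        for c in range(W):
            # taps[d] = the pixel delayed by d full image rows
            taps = [image[r][c]]
            for lb in line_buffers:
                lb.append(taps[-1])       # push the newer sample in
                taps.append(lb.popleft()) # pop the sample from W cycles ago
            # shift each window row left by one register, load new column
            for i in range(K):
                for j in range(K - 1):
                    window[i][j] = window[i][j + 1]
                window[i][K - 1] = taps[K - 1 - i]
            # the window is valid once K rows and K columns have streamed in
            if r >= K - 1 and c >= K - 1:
                yield r, c, deepcopy(window)

img = [[r * 4 + c for c in range(4)] for r in range(4)]
first = next(stream_windows(img, K=3))
# the first valid window covers image rows 0..2, columns 0..2
```

In contrast, im2col would materialize every KxK patch as a matrix column, duplicating each interior pixel up to K*K times, while a naive sliding window re-reads overlapping pixels on every step; the line-buffer scheme reads each pixel exactly once.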