Spiking neural networks (SNNs) offer high computational energy efficiency and biological plausibility, making them promising for low-power and brain-inspired computing. To achieve hardware acceleration of convolutional SNNs (SCNNs), we propose a configurable and extensible clock-driven hardware architecture that exploits output-channel-wise parallelism and a hierarchical pipeline to speed up computation. To retain accuracy while reducing the quantization bit width, we propose an adaptive channel-wise logarithmic quantization (ACLQ) method for SCNNs, which preserves performance while significantly reducing hardware resource overhead such as on-chip memory. We configured and evaluated two sizes of LeNet models based on the proposed architecture on an FPGA. Experiments show that our architecture achieves a recognition accuracy of up to 99.26% on the MNIST dataset, a recognition speed of up to 1605 frames per second (FPS) at a 100 MHz clock, and an energy cost of only 0.65 mJ per image. With the proposed logarithmic quantization method, the weight bit width can be reduced to 3 bits with almost no accuracy loss, significantly reducing the on-chip resources and power consumption needed for weight storage.
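The abstract does not spell out the ACLQ algorithm, but the general idea of channel-wise logarithmic quantization can be sketched as follows: per output channel, each weight is snapped to a signed power of two relative to that channel's maximum magnitude, so a 3-bit code (one sign bit plus a small exponent index) suffices. The function name, the exponent budget, and the underflow-to-zero rule below are illustrative assumptions, not the authors' exact method.

```python
import numpy as np

def log_quantize_channelwise(w, bits=3):
    """Illustrative sketch of channel-wise logarithmic weight quantization.

    Assumes w has shape (out_channels, ...). Each weight is mapped to
    sign(w) * scale * 2**e, where scale is the channel's max magnitude
    and e is an integer exponent clipped to the bit budget. Details
    (exponent range, underflow handling) are assumptions for illustration.
    """
    n_exp = 2 ** (bits - 1)  # exponent levels; one bit is reserved for the sign
    wq = np.zeros_like(w, dtype=float)
    for c in range(w.shape[0]):
        ch = w[c]
        scale = np.max(np.abs(ch))
        if scale == 0:
            continue
        # Exponent relative to the channel maximum, clipped to the budget.
        e = np.clip(np.round(np.log2(np.abs(ch) / scale + 1e-12)),
                    -(n_exp - 1), 0)
        q = np.sign(ch) * scale * (2.0 ** e)
        # Weights below the smallest representable level underflow to zero.
        q[np.abs(ch) < scale * 2.0 ** (-(n_exp - 1) - 0.5)] = 0.0
        wq[c] = q
    return wq
```

Because every nonzero quantized weight is a power of two times a per-channel scale, the multiply in each convolution reduces to a bit shift, which is what makes such schemes attractive for FPGA implementations.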