Binary neural networks (BNNs) are widely used in speech recognition, image processing, and other fields to save memory and accelerate computation. However, existing binarization schemes suffer a noticeable loss of accuracy on realistic datasets, and their input layer typically retains 32-bit floating point to avoid excessive precision loss, which requires additional computing units and increases the computational burden. Improving the input layer is therefore important for saving computing resources while limiting precision loss. In this paper, we propose a parallel-convolution binary neural network accelerator architecture (PC-BNA). Based on our proposed BNN model, we design an efficient accelerator on a field-programmable gate array (FPGA). The input of the first layer is binarized, the traditional binary convolution layer is replaced by a parallel binary convolution, and the network building blocks are improved. Experimental results show that the proposed PC-BNA achieves higher accuracy and better performance on CIFAR-10: image-recognition accuracy reaches 91.4%, surpassing state-of-the-art BNN accelerators. Compared with the state-of-the-art BNN model of the same size, PC-BNA reduces look-up-table (LUT) usage by 9.08% and digital signal processing (DSP) block usage by 27.7%. These results suggest that PC-BNA is promising for future high-performance mobile artificial-intelligence applications.
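The efficiency gains described above come from replacing floating-point multiply-accumulate with 1-bit operations: when both activations and weights are constrained to {-1, +1}, a convolution reduces to XNOR plus popcount on their {0, 1} bit encodings. The following is a minimal illustrative sketch of that standard BNN trick; the function name and data layout are our own assumptions, not the paper's parallel-convolution implementation.

```python
def binary_conv2d(x_bits, w_bits):
    """'Valid' 2-D convolution between {-1,+1}-valued maps, computed
    on their {0,1} bit encodings via XNOR (equality test) + popcount.

    Illustrative sketch only -- not the PC-BNA hardware design.
    x_bits, w_bits: lists of lists holding 0/1, encoding -1/+1.
    """
    H, W = len(x_bits), len(x_bits[0])
    kh, kw = len(w_bits), len(w_bits[0])
    n = kh * kw
    out = []
    for i in range(H - kh + 1):
        row = []
        for j in range(W - kw + 1):
            # XNOR: count positions where input bit == weight bit.
            matches = sum(
                1
                for di in range(kh)
                for dj in range(kw)
                if x_bits[i + di][j + dj] == w_bits[di][dj]
            )
            # Dot product in {-1,+1} terms: matches - mismatches.
            row.append(2 * matches - n)
        out.append(row)
    return out
```

In hardware, the equality test maps to a LUT-implemented XNOR gate and the sum to a popcount tree, which is why a fully binarized first layer avoids the DSP blocks that a floating-point input layer would consume.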