A binarized neural network (BNN) is a neural network whose input activations and weights are quantized to 1 bit, so it requires far fewer parameters than full-precision networks. Moreover, the multiply-accumulate operations in the convolutional and fully connected layers of a BNN can be replaced by XNOR-POPCOUNT operations. The FPGA has become a major hardware deployment platform for BNNs due to its programmability and parallelism. However, the basic logic cell of an FPGA cannot directly complete the XNOR-POPCOUNT calculation. This paper proposes an architecture that saves numerous look-up tables (LUTs) and improves the efficiency of the logic cell by allowing each LUT to complete more logical operations. Compared with other architectures, the proposed architecture saves 8%-32% of logic resources.