Binarized Neural Networks (BNNs), a variant of Convolutional Neural Networks (CNNs) in which both the weights and the neuron outputs are binarized, have emerged in recent years as a promising approach for deploying artificial intelligence on resource-constrained devices. Because the weights are binary, multipliers are no longer required for computation, and the filters also exhibit relatively high similarity to one another. In this work, we propose a partial-filter sharing approach and integrate it with the state-of-the-art to reduce hardware cost and synthesis time on Field Programmable Gate Arrays (FPGAs). Compared with the state-of-the-art, our approach reduces LUT usage by 47.71% on average without any accuracy loss, and saves 62.5% of the synthesis time on average for the Tiny ImageNet layers.
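To illustrate why binary weights remove the need for multipliers, the following minimal sketch (an assumption for exposition, not the paper's implementation) computes a dot product of two {-1, +1} vectors using only XNOR and popcount on their bit-packed {0, 1} encodings:

```python
def binary_dot(a_bits: int, w_bits: int, n: int) -> int:
    """Dot product of two n-element {-1, +1} vectors packed as bitmasks
    (bit = 1 encodes +1, bit = 0 encodes -1) -- no multiplications."""
    mask = (1 << n) - 1
    xnor = ~(a_bits ^ w_bits) & mask   # bit = 1 where the two signs agree
    matches = bin(xnor).count("1")     # popcount of agreements
    return 2 * matches - n             # agreements minus disagreements

# Cross-check against explicit +/-1 arithmetic on small example vectors:
a = [+1, -1, +1, +1]
w = [+1, +1, -1, +1]
a_bits = sum(1 << i for i, v in enumerate(a) if v == +1)
w_bits = sum(1 << i for i, v in enumerate(w) if v == +1)
assert binary_dot(a_bits, w_bits, 4) == sum(x * y for x, y in zip(a, w))
```

Replacing every multiply-accumulate with this XNOR-popcount pattern is what makes BNN filters cheap to implement as LUT logic on FPGAs.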