In this paper, a floating-point multiply-accumulate (MAC) operator for neural network computation is designed on an FPGA. A custom 32-bit floating-point data format is adopted: by restructuring the data layout, the amount of computation is reduced and the performance of the operator is improved. FPGA simulation results are presented to verify the correctness of the design. Compared with a conventional implementation using the IEEE-standard 32-bit floating-point format, the proposed design saves hardware resources.