Symbol classification is a crucial task in many applications such as handwriting recognition, computer vision, and Optical Character Recognition (OCR). However, multiclass classification is computationally intensive and demands significant memory resources. In this paper, Field-Programmable Gate Array (FPGA) based acceleration of a neural network for symbol classification is implemented with hardware-optimised multiplication techniques. The approach uses the FPGA to perform the multiply-accumulate operations faster while reducing resource usage. The neural network is trained in MATLAB and then implemented in VHDL using Quartus Prime Lite. Two hardware multiplication techniques are compared: Karatsuba multiplication and iterative logarithmic multiplication. The comparative study showed that the iterative logarithmic technique gives the lowest overall hardware usage. The final optimised design is validated in ModelSim and deployed on an Altera Cyclone® V SE FPGA device.
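To make the comparison concrete, the following is a minimal software sketch of the iterative logarithmic multiplier the abstract refers to (the Mitchell-style approximation with iterative error correction). The paper's actual implementation is in VHDL; this Python model, with the illustrative function name `ilm` and an assumed `iterations` parameter, only shows the arithmetic idea: each operand is split into its leading power of two plus a residue, a cheap shift-and-add partial product is formed, and the remaining error term (the product of the two residues) is fed back into the next iteration.

```python
def ilm(a, b, iterations=3):
    """Iterative logarithmic multiplier (Mitchell-style), software model.

    Each iteration computes an approximate partial product using only
    shifts and adds, then leaves the residue product as the error term
    for the next iteration. With enough iterations the result is exact,
    because each step strips the leading 1-bit from both operands.
    """
    def approx_and_error(n1, n2):
        # k = position of the leading 1-bit, x = residue below it
        k1, k2 = n1.bit_length() - 1, n2.bit_length() - 1
        x1, x2 = n1 - (1 << k1), n2 - (1 << k2)
        # Shift-and-add partial product: 2^(k1+k2) + x1*2^k2 + x2*2^k1
        p = (1 << (k1 + k2)) + (x1 << k2) + (x2 << k1)
        return p, x1, x2

    total = 0
    n1, n2 = a, b
    for _ in range(iterations):
        if n1 == 0 or n2 == 0:
            break  # error term vanished: result is now exact
        p, n1, n2 = approx_and_error(n1, n2)
        total += p
    return total
```

With a single iteration the result is a pure Mitchell approximation (always an underestimate); allowing the loop to run until the residue product reaches zero recovers the exact product. In hardware this trade-off between iteration count, accuracy, and resource usage is what makes the technique attractive for neural-network inference, where small multiplication errors are often tolerable.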