An Efficient Architecture for Floating Point Based MISO Neural Networks on FPGA
- Resource Type
- Conference
- Authors
- Laudani, Antonino; Lozito, Gabriele Maria; Riganti Fulginei, Francesco; Salvini, Alessandro
- Source
- 2014 UKSim-AMSS 16th International Conference on Computer Modelling and Simulation (UKSim), pp. 12-17, Mar. 2014
- Subject
- Aerospace
Bioengineering
Communication, Networking and Broadcast Technologies
Components, Circuits, Devices and Systems
Computing and Processing
Engineered Materials, Dielectrics and Plasmas
Engineering Profession
Fields, Waves and Electromagnetics
General Topics for Engineers
Geoscience
Nuclear Engineering
Photonics and Electrooptics
Power, Energy and Industry Applications
Robotics and Control Systems
Signal Processing and Analysis
Transportation
Neurons
Random access memory
Computer architecture
Adders
Artificial neural networks
Delays
Pipelines
FPGA
Neural Networks
VHDL
embedded floating point
- Language
- Abstract
The present paper documents research towards the development of an efficient algorithm to compute the output of a multiple-input-single-output (MISO) Neural Network using floating-point arithmetic on FPGA. The proposed algorithm focuses on optimizing pipeline delays by splitting the multiply-and-accumulate operation into separate steps using partial products. It revisits the classical algorithm for NN computation and overcomes its main computational bottleneck in the FPGA environment. The proposed algorithm can be implemented in an architecture that fully exploits the pipeline performance of the floating-point arithmetic blocks, allowing very fast computation of the neural network. The performance of the proposed architecture is presented using a Cyclone II FPGA device as the target.