On-chip backpropagation training using parallel stochastic bit streams
- Resource Type
- Conference
- Authors
- Kollmann, K.; Riemschneider, K.-R.; Zeidler, H.C.
- Source
- Proceedings of the Fifth International Conference on Microelectronics for Neural Networks and Fuzzy Systems, 1996, pp. 149-156
- Subject
- Components, Circuits, Devices and Systems
- Computing and Processing
- Stochastic processes
- Arithmetic
- Silicon
- Process design
- Information processing
- Neural networks
- Backpropagation algorithms
- Neural network hardware
- Automata
- Field programmable gate arrays
- Language
- ISSN
- 1086-1947
It is proposed to use stochastic arithmetic computing for all arithmetic operations of training and processing backpropagation nets. In this way it is possible to design simple processing elements which fulfil all the requirements of information processing on values coded as independent stochastic bit streams. Combining such processing elements yields silicon-saving, fully parallel neural networks of variable structure and capacity, supporting a complete hardware implementation of the error backpropagation algorithm. A sign-considering method of coding is proposed which allows a homogeneous implementation of the net without separating it into an inhibitory and an excitatory part. Furthermore, parameterizable nonlinearities based on stochastic automata are used. Comparable to the momentum (pulse) term and improving the training of a net, a sequential arrangement of adaptive and integrative elements influences the weights; it, too, is implemented stochastically. Experimental hardware implementations based on PLDs/FPGAs and a first silicon prototype have been realized.
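To illustrate the kind of arithmetic the abstract refers to, the sketch below simulates standard stochastic-computing primitives in software: values coded as the 1-density of independent bit streams, multiplication of unsigned values by a bitwise AND, and multiplication of signed values in bipolar coding by an XNOR. These are textbook stochastic-computing constructions chosen for illustration; the paper's own sign-considering coding may differ in detail.

```python
import random

def bitstream(p, n, rng):
    """Unipolar coding: a value p in [0, 1] becomes a stream of n bits,
    each bit being 1 with probability p."""
    return [1 if rng.random() < p else 0 for _ in range(n)]

def estimate(bits):
    """Decode a unipolar stream: the coded value is the fraction of 1s."""
    return sum(bits) / len(bits)

rng = random.Random(42)
n = 100_000

# Unipolar multiplication: ANDing two independent streams gives a stream
# whose 1-density is the product of the input densities.
a, b = 0.8, 0.5
prod = [x & y for x, y in zip(bitstream(a, n, rng), bitstream(b, n, rng))]
print(estimate(prod))  # close to a * b = 0.4

# Bipolar coding handles signed values: v in [-1, 1] maps to
# P(bit = 1) = (v + 1) / 2, and multiplication becomes an XNOR.
def bipolar_stream(v, n, rng):
    return bitstream((v + 1) / 2, n, rng)

def bipolar_estimate(bits):
    return 2 * estimate(bits) - 1

u, w = -0.6, 0.5
xnor = [1 - (x ^ y) for x, y in zip(bipolar_stream(u, n, rng),
                                    bipolar_stream(w, n, rng))]
print(bipolar_estimate(xnor))  # close to u * w = -0.3
```

The hardware appeal is visible here: a multiplier collapses to a single AND or XNOR gate per stream, at the cost of accuracy that improves only with stream length (the estimate's standard error falls as 1/sqrt(n)).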