Convergence of a gradient algorithm with penalty for training two-layer neural networks
- Resource Type
- Conference
- Authors
- Shao, Hongmei; Liu, Lijun; Zheng, Gaofeng
- Source
- 2009 2nd IEEE International Conference on Computer Science and Information Technology (ICCSIT 2009), pp. 76-79, Aug. 2009
- Subject
- Computing and Processing
Convergence
Neural networks
Computer networks
Feedforward neural networks
Educational institutions
Petroleum
Mathematics
Cost function
Training data
Gradient methods
In this paper, a squared penalty term is added to the conventional error function to improve the generalization of neural networks. A weight boundedness theorem and two convergence theorems are proved for the gradient learning algorithm with penalty when it is used to train a two-layer feedforward neural network. To illustrate the above theoretical findings, numerical experiments are conducted on a linearly separable problem, and simulation results are presented.
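The training scheme described in the abstract can be sketched as ordinary gradient descent on an error function augmented with a squared (L2) penalty on the weights. The sketch below is a minimal illustration, not the paper's exact algorithm: the network architecture (one sigmoid hidden layer, one sigmoid output), the penalty coefficient `lam`, the learning rate `eta`, and the toy linearly separable data are all illustrative assumptions.

```python
import numpy as np

# Minimal sketch: gradient training of a two-layer feedforward network
# with a squared (L2) penalty added to the squared-error cost.
# Architecture, lam, eta, and the dataset are illustrative assumptions,
# not values taken from the paper.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy linearly separable problem: label by the sign of x1 + x2.
X = rng.uniform(-1.0, 1.0, size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

n_hidden = 4
W1 = rng.normal(scale=0.5, size=(2, n_hidden))  # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(n_hidden,))    # hidden -> output weights
eta, lam = 0.5, 1e-3                            # learning rate, penalty coefficient

for epoch in range(500):
    # Forward pass
    H = sigmoid(X @ W1)       # hidden activations, shape (n, n_hidden)
    out = sigmoid(H @ W2)     # network output, shape (n,)

    # Penalized cost: E(w) = 0.5*sum((out - y)^2) + lam*(||W1||^2 + ||W2||^2)
    err = out - y
    delta_out = err * out * (1.0 - out)               # backprop through output sigmoid
    grad_W2 = H.T @ delta_out + 2.0 * lam * W2        # penalty adds 2*lam*w to the gradient
    delta_hid = np.outer(delta_out, W2) * H * (1.0 - H)
    grad_W1 = X.T @ delta_hid + 2.0 * lam * W1

    # Gradient step (averaged over the training set)
    W1 -= eta / len(X) * grad_W1
    W2 -= eta / len(X) * grad_W2

acc = np.mean((sigmoid(sigmoid(X @ W1) @ W2) > 0.5) == (y > 0.5))
print(f"training accuracy: {acc:.2f}")
```

The penalty term contributes `2*lam*w` to each gradient, which continually shrinks the weights toward zero; this is the mechanism behind the weight boundedness result the paper proves for its algorithm.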