Research Article

Hybrid Optimized Back propagation Learning Algorithm for Multi-layer Perceptron

by Arka Ghosh, Mriganka Chakraborty
International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 60 - Number 13
Year of Publication: 2012
Authors: Arka Ghosh, Mriganka Chakraborty
10.5120/9749-3332

Arka Ghosh, Mriganka Chakraborty. Hybrid Optimized Back propagation Learning Algorithm for Multi-layer Perceptron. International Journal of Computer Applications 60, 13 (December 2012), 1-5. DOI=10.5120/9749-3332

@article{ 10.5120/9749-3332,
author = { Arka Ghosh, Mriganka Chakraborty },
title = { Hybrid Optimized Back propagation Learning Algorithm for Multi-layer Perceptron },
journal = { International Journal of Computer Applications },
issue_date = { December 2012 },
volume = { 60 },
number = { 13 },
month = { December },
year = { 2012 },
issn = { 0975-8887 },
pages = { 1-5 },
numpages = { 5 },
url = { https://ijcaonline.org/archives/volume60/number13/9749-3332/ },
doi = { 10.5120/9749-3332 },
publisher = {Foundation of Computer Science (FCS), NY, USA},
address = {New York, USA}
}
%0 Journal Article
%A Arka Ghosh
%A Mriganka Chakraborty
%T Hybrid Optimized Back propagation Learning Algorithm for Multi-layer Perceptron
%J International Journal of Computer Applications
%@ 0975-8887
%V 60
%N 13
%P 1-5
%D 2012
%I Foundation of Computer Science (FCS), NY, USA
Abstract

Standard neural networks trained with general back-propagation learning based on the delta rule or gradient descent suffer from serious shortcomings such as poor optimization of the error-weight objective function, a low learning rate, and instability. This paper introduces a hybrid supervised back-propagation learning algorithm that applies a trust-region approach to unconstrained optimization of the error objective function using a quasi-Newton method. This optimization yields a more accurate weight-update scheme for minimizing the learning error during the training phase of a multi-layer perceptron [13][14][15]. An augmented line search is used to find points that satisfy the Wolfe conditions. The resulting hybrid back-propagation algorithm has strong global convergence properties and is robust and efficient in practice.
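As a minimal illustration of the two ingredients named in the abstract, a quasi-Newton update of the error objective and a line search satisfying the Wolfe conditions, the Python sketch below trains a one-hidden-layer perceptron with a BFGS update and a crude Wolfe-condition step search. It is not the authors' implementation (which also involves a trust-region view of the optimization): the mean-squared-error objective, the network size, and all names such as mlp_loss_and_grad, wolfe_step, and train_bfgs are illustrative assumptions.

import numpy as np

def mlp_loss_and_grad(w, X, y, n_hidden):
    # Mean-squared error of a one-hidden-layer tanh MLP and its gradient
    # with respect to the flattened weight vector w.
    n_in = X.shape[1]
    W1 = w[:n_in * n_hidden].reshape(n_in, n_hidden)
    W2 = w[n_in * n_hidden:].reshape(n_hidden, 1)
    h = np.tanh(X @ W1)                     # hidden-layer activations
    out = h @ W2                            # linear output layer
    err = out - y
    loss = 0.5 * np.mean(err ** 2)
    d_out = err / len(y)                    # back-propagated output error
    gW2 = h.T @ d_out
    d_h = (d_out @ W2.T) * (1.0 - h ** 2)   # tanh derivative
    gW1 = X.T @ d_h
    return loss, np.concatenate([gW1.ravel(), gW2.ravel()])

def wolfe_step(f, w, p, g, c1=1e-4, c2=0.9, alpha=1.0, max_iter=20):
    # Crude search for a step length satisfying the sufficient-decrease
    # (Armijo) and curvature (Wolfe) conditions along direction p.
    loss0, gTp = f(w)[0], g @ p
    for _ in range(max_iter):
        loss_a, g_a = f(w + alpha * p)
        if loss_a > loss0 + c1 * alpha * gTp:   # not enough decrease: shrink
            alpha *= 0.5
        elif g_a @ p < c2 * gTp:                # curvature condition fails: grow
            alpha *= 2.0
        else:
            return alpha
    return alpha

def train_bfgs(X, y, n_hidden=5, n_iter=100):
    # Quasi-Newton (BFGS) training loop over the flattened MLP weights.
    n_w = X.shape[1] * n_hidden + n_hidden
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.1, size=n_w)
    H = np.eye(n_w)                             # inverse-Hessian estimate
    f = lambda v: mlp_loss_and_grad(v, X, y, n_hidden)
    loss, g = f(w)
    for _ in range(n_iter):
        p = -H @ g                              # quasi-Newton search direction
        alpha = wolfe_step(f, w, p, g)
        w_new = w + alpha * p
        loss, g_new = f(w_new)
        s, t = w_new - w, g_new - g
        if s @ t > 1e-10:                       # BFGS update of the inverse Hessian
            rho = 1.0 / (s @ t)
            I = np.eye(n_w)
            H = (I - rho * np.outer(s, t)) @ H @ (I - rho * np.outer(t, s)) \
                + rho * np.outer(s, s)
        w, g = w_new, g_new
    return w, loss

# Example usage: learn a noisy XOR-like mapping.
if __name__ == "__main__":
    X = np.random.default_rng(1).uniform(-1, 1, size=(200, 2))
    y = (X[:, :1] * X[:, 1:] > 0).astype(float)   # targets of shape (200, 1)
    w, final_loss = train_bfgs(X, y)
    print("final training loss:", final_loss)

The quasi-Newton direction p = -H g replaces the plain gradient step of standard delta-rule back-propagation; a step length is only accepted once both the sufficient-decrease and curvature (Wolfe) conditions hold, after which the inverse-Hessian estimate H is refreshed with the standard BFGS update.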

References
  1. Yegnanarayana, B. Artificial Neural Networks.
  2. Haykin, Simon. Neural Networks – A Comprehensive Foundation.
  3. McCulloch, W. and Pitts, W. (1943). A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics, 5:115-133.
  4. "Adaline (Adaptive Linear)". CS 4793: Introduction to Artificial Neural Networks. Department of Computer Science, University of Texas at San Antonio.
  5. Rosenblatt, Frank (1957), The Perceptron--a perceiving and recognizing automaton. Report 85-460-1, Cornell Aeronautical Laboratory.
  6. Bertsekas, D. P. , Tsitsiklis, J. N. (1996). Neuro-dynamic programming. Athena Scientific. pp. 512. ISBN 1-886529-10-8
  7. Nocedal, J. Math. Comput. 35, 773 (1980).
  8. Nicol N. Schraudolph, Jin Yu, Simon Günter. "A Stochastic Quasi-Newton Method for Online Convex Optimization".
  9. Mriganka Chakraborty. Article: Artificial Neural Network for Performance Modeling and Optimization of CMOS Analog Circuits. International Journal of Computer Applications 58(18):6-12, November 2012. Published by Foundation of Computer Science, New York, USA.
  10. Arka Ghosh. Article: Comparative Study of Financial Time Series Prediction by Artificial Neural Network Using Gradient Descent Learning. International Journal of Scientific & Engineering Research, Volume 3, Issue 1, pp. 1-7, ISSN 2229-5518, January 2012. Published by IJSER, France.
  11. The Numerical Algorithms Group. "Keyword Index: Quasi-Newton". NAG Library Manual, Mark 23. Retrieved 2012-02-09.
  12. Wolfe, Philip (1969). "Convergence conditions for ascent methods". SIAM Rev. 11 (2): 226-235. doi:10.1137/1011036.
  13. Rumelhart, D. E., Hinton, G. E., Williams, R. J. 'Learning Internal Representations by Error Propagation', chapter 8, Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Rumelhart, D. E. and McClelland, J. L., editors, MIT Press, Cambridge, MA, 1986.
  14. Fahlman, S. E. 'An Empirical Study of Learning Speed in Back-Propagation Networks', internal report CMU-CS-88-162, Carnegie Mellon University, Pittsburgh, June 1988.
  15. Jacobs, R. A. 'Increased rates of convergence through learning rate adaptation', Neural Networks, Vol. 1, pp. 295-307, 1988.
  16. Tollenaere, T. 'SuperSAB: Fast adaptive back propagation with good scaling properties', Neural Networks, Vol. 3, pp. 561-573, 1990.
  17. Rigler, A. K., Irvine, J. M., Vogl, T. P. 'Rescaling of variables in back propagation learning', Neural Networks, Vol. 4, pp. 225-229, 1991.
  18. Leonard, J. A., Kramer, M. A. 'Improvement of the backpropagation algorithm for training neural networks', Computers & Chemical Engineering, Vol. 14, No. 3, pp. 337-341, 1990.
  19. Van Ooyen, A. , Nienhuis, B. 'Improving the Convergence of the Back-Propagation Algorithm' Neural Networks, Vol. 5, pp. 465-471, 1992
  20. Dennis, J. E., Schnabel, R. B. Numerical Methods for Unconstrained Optimization and Nonlinear Equations. Prentice-Hall, 1983.
  21. The Power of Squares - http://mste.illinois.edu/patel/amar430/meansquare.html
Index Terms

Computer Science
Information Sciences

Keywords

Neural network, Back-propagation learning, Delta method, Gradient descent, Wolfe condition, Multi-layer perceptron, Quasi-Newton