Research Article

An Improved Gauss-Newtons Method based Back-propagation Algorithm for Fast Convergence

by Sudarshan Nandy, Achintya Das, Partha Pratim Sarkar
International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 39 - Number 8
Year of Publication: 2012
DOI: 10.5120/4837-7097

Sudarshan Nandy, Achintya Das, Partha Pratim Sarkar. An Improved Gauss-Newtons Method based Back-propagation Algorithm for Fast Convergence. International Journal of Computer Applications 39, 8 (February 2012), 1-7. DOI=10.5120/4837-7097

@article{10.5120/4837-7097,
  author     = {Sudarshan Nandy and Achintya Das and Partha Pratim Sarkar},
  title      = {An Improved Gauss-Newtons Method based Back-propagation Algorithm for Fast Convergence},
  journal    = {International Journal of Computer Applications},
  issue_date = {February 2012},
  volume     = {39},
  number     = {8},
  month      = {February},
  year       = {2012},
  issn       = {0975-8887},
  pages      = {1-7},
  numpages   = {7},
  url        = {https://ijcaonline.org/archives/volume39/number8/4837-7097/},
  doi        = {10.5120/4837-7097},
  publisher  = {Foundation of Computer Science (FCS), NY, USA},
  address    = {New York, USA}
}
Abstract

This work presents an improved back-propagation algorithm based on the Gauss-Newton numerical optimization method for fast convergence; the conventional back-propagation baseline uses the steepest descent method. The algorithm is tested on several datasets and compared against steepest-descent back-propagation, with optimization carried out on a multilayer neural network. The efficacy of the proposed method is demonstrated during training, where it converges faster than the baseline on the test datasets. The memory required to compute each step of the algorithm is also analyzed.
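The core of a Gauss-Newton weight update can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a single sigmoid unit trained by least squares with a small damping term for numerical stability (the paper's multilayer architecture and exact update schedule are not reproduced here).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gauss_newton_step(w, X, t, damping=1e-3):
    """One damped Gauss-Newton update: w <- w - (J^T J + damping*I)^-1 J^T e,
    where e is the residual vector and J its Jacobian w.r.t. the weights w."""
    y = sigmoid(X @ w)
    e = y - t                                  # residuals, shape (n,)
    J = (y * (1.0 - y))[:, None] * X           # de_i/dw_j = sigmoid'(z_i) * X_ij
    H = J.T @ J + damping * np.eye(len(w))     # damped normal-equations matrix
    return w - np.linalg.solve(H, J.T @ e)

# Toy regression task: recover a known weight vector from sigmoid outputs.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))
w_true = np.array([1.5, -0.8])
t = sigmoid(X @ w_true)

w = np.zeros(2)
for _ in range(20):
    w = gauss_newton_step(w, X, t)
```

Because the update uses curvature information from the Jacobian (J^T J approximates the Hessian of the squared error), it typically converges in far fewer iterations than a plain steepest-descent step, at the cost of forming and solving the normal-equations system each step.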

References
  1. D. E. Rumelhart, G. E. Hinton and R. J. Williams, 1986, "Learning representations by back-propagating errors", Nature, vol. 323, pp. 533-536.
  2. R. A. Jacobs, 1988, "Increased rates of convergence through learning rate adaptation", Neural Networks, vol. 1, no. 4, pp. 295-308.
  3. Bello, M. G., 1994, "Enhanced training algorithms, and integrated training/architecture selection for multilayer perceptron networks", IEEE Trans. Neural Netw., vol. 5, no. 6, pp. 989-993.
  4. Samad, T., 1990, "Back-propagation improvements based on heuristic arguments", Proceedings of the International Joint Conference on Neural Networks, Washington, vol. 1, pp. 565-568.
  5. Sperduti, A. and Starita, A., 1993, "Speed up learning and network optimization with extended back-propagation", Neural Networks, vol. 6, pp. 365-383.
  6. van Ooyen, A. and Nienhuis, B., 1992, "Improving the convergence of the back-propagation algorithm", Neural Networks, vol. 5, pp. 465-471.
  7. C. Charalambous, 1992, "Conjugate gradient algorithm for efficient training of neural networks", IEE Proceedings-G, vol. 139, no. 3.
  8. Levenberg, K., 1944, "A method for the solution of certain problems in least squares", Quart. Appl. Math., vol. 2, pp. 164-168.
  9. Marquardt, D., 1963, "An algorithm for least squares estimation of nonlinear parameters", SIAM J. Appl. Math., vol. 11, pp. 431-441.
  10. M. T. Hagan and M. B. Menhaj, 1994, "Training feedforward networks with the Marquardt algorithm", IEEE Trans. Neural Netw., vol. 5, no. 6, pp. 989-993.
  11. G. Lera and M. Pinzolas, Sep. 2002, "Neighborhood based Levenberg-Marquardt algorithm for neural network training", IEEE Trans. Neural Netw., vol. 13, no. 5, pp. 1200-1203.
  12. Saman R. and Bryan A. T., 2011, "A new formulation for feedforward neural networks", IEEE Trans. Neural Netw., vol. 22, no. 10, pp. 1588-1598.
  13. Bogdan M. Wilamowski, Serdar Iplikci, Okyay Kaynak and M. Onder Efe, 2001, "An algorithm for fast convergence in training neural networks", IEEE Proceedings, pp. 1778-1782.
  14. Balakrishnan, K. and Honavar, V., 1992, "Improving convergence of back propagation by handling flat-spots in the output layer", Proceedings of the Second International Conference on Artificial Neural Networks, pp. 1-20.
  15. Syed Muhammad Aqil Burney, Tahseen Ahmed Jilani and Cemal Ardil, 2003, "Levenberg-Marquardt algorithm for Karachi Stock Exchange share rate forecasting", World Academy of Science, Engineering & Technology, vol. 3, no. 41, pp. 171-177.
  16. UCI Machine Learning Repository: Iris Data Set - http://archive.ics.uci.edu/ml/datasets/Iris
  17. UCI Machine Learning Repository: Wine Data Set - http://archive.ics.uci.edu/ml/datasets/Wine
Index Terms

Computer Science
Information Sciences

Keywords

Back-propagation; Neural Network; Numerical optimization; Fast convergence algorithm