Software based Method to Specify the Extreme Learning Machine Network Architecture Targeting Hardware Platforms

International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Year of Publication: 2016
Authors:
Alaa M. Abdul-Hadi, Abdullah M. Zyarah, Haider M. Abdul-Hadi
DOI: 10.5120/ijca2016908967

Alaa M Abdul-Hadi, Abdullah M Zyarah and Haider M Abdul-Hadi. "Software based Method to Specify the Extreme Learning Machine Network Architecture Targeting Hardware Platforms." International Journal of Computer Applications 138(7):49-53, March 2016. Published by Foundation of Computer Science (FCS), NY, USA. BibTeX

@article{key:article,
	author = {Alaa M. Abdul-Hadi and Abdullah M. Zyarah and Haider M. Abdul-Hadi},
	title = {Software based Method to Specify the Extreme Learning Machine Network Architecture Targeting Hardware Platforms},
	journal = {International Journal of Computer Applications},
	year = {2016},
	volume = {138},
	number = {7},
	pages = {49-53},
	month = {March},
	note = {Published by Foundation of Computer Science (FCS), NY, USA}
}

Abstract

The extreme learning machine (ELM) is a biologically inspired feed-forward machine learning algorithm that offers significantly faster training than conventional approaches. ELM is typically used in classification applications, where achieving highly accurate results depends on increasing the number of ELM hidden layer neurons, which are randomly weighted independently of the training data and the environment. To this end, determining the rational number of hidden layer neurons in the ELM is an approach that can be adopted to maintain the balance between classification accuracy and the overall physical network resources. This paper proposes a software based method that uses the gradient descent algorithm to determine the rational number of hidden neurons needed to realize an application specific ELM network in hardware. The proposed method was validated with the MNIST standard database of hand-written digits and the Labeled Faces in the Wild (LFW) database of human faces. Classification accuracies of 93.4% for MNIST and 90.86% for LFW were achieved.
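To illustrate the ELM training scheme the abstract describes (random, untrained hidden weights followed by a closed-form solve for the output weights), the following is a minimal NumPy sketch. It is not the authors' hardware-targeted implementation; the function names, the sigmoid activation, and the pseudoinverse solve are illustrative assumptions based on the standard ELM formulation.

```python
import numpy as np

def train_elm(X, T, n_hidden, rng=None):
    """Train a single-hidden-layer ELM (illustrative sketch).

    X: (n_samples, n_features) inputs; T: (n_samples, n_classes) one-hot targets.
    The hidden-layer weights are drawn at random and never trained; only the
    output weights beta are obtained in closed form via the pseudoinverse,
    which is what makes ELM training fast.
    """
    rng = rng or np.random.default_rng(0)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))           # sigmoid hidden activations
    beta = np.linalg.pinv(H) @ T                     # least-squares output weights
    return W, b, beta

def predict_elm(X, W, b, beta):
    """Classify by picking the largest output-layer response."""
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return (H @ beta).argmax(axis=1)
```

In this formulation, `n_hidden` is the knob the paper's method tunes: larger values raise accuracy but cost more hardware resources, so a search (gradient descent in the paper) over `n_hidden` trades the two off.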

Keywords

Extreme Learning Machine, Gradient Descent, Random feature mapping