Research Article

Empirical Study of Least Sensitive FFANN for Weight-Stuck-at Zero Fault

by Amit Prakash Singh, Pravin Chandra, Chandra Shekhar Rai
International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 2 - Number 2
Year of Publication: 2010
DOI: 10.5120/627-881

Amit Prakash Singh, Pravin Chandra, Chandra Shekhar Rai. Empirical Study of Least Sensitive FFANN for Weight-Stuck-at Zero Fault. International Journal of Computer Applications. 2, 2 (May 2010), 47-51. DOI=10.5120/627-881

@article{ 10.5120/627-881,
author = { Amit Prakash Singh, Pravin Chandra, Chandra Shekhar Rai },
title = { Empirical Study of Least Sensitive FFANN for Weight-Stuck-at Zero Fault },
journal = { International Journal of Computer Applications },
issue_date = { May 2010 },
volume = { 2 },
number = { 2 },
month = { May },
year = { 2010 },
issn = { 0975-8887 },
pages = { 47-51 },
numpages = { 5 },
url = { https://ijcaonline.org/archives/volume2/number2/627-881/ },
doi = { 10.5120/627-881 },
publisher = {Foundation of Computer Science (FCS), NY, USA},
address = {New York, USA}
}
%0 Journal Article
%A Amit Prakash Singh
%A Pravin Chandra
%A Chandra Shekhar Rai
%T Empirical Study of Least Sensitive FFANN for Weight-Stuck-at Zero Fault
%J International Journal of Computer Applications
%@ 0975-8887
%V 2
%N 2
%P 47-51
%D 2010
%I Foundation of Computer Science (FCS), NY, USA
Abstract

An important consideration for neural hardware is its sensitivity to input and weight errors. In this paper, an empirical study is performed to analyze the sensitivity of feedforward artificial neural networks (FFANNs) to Gaussian noise applied to the inputs and weights. Thirty FFANNs are trained for each of four different classification tasks, and the network least sensitive to input and weight errors is selected for further study of the fault-tolerant behavior of FFANNs. The weight-stuck-at-zero (WSZ) fault is used to evaluate error metrics of fault tolerance, and empirical results for the WSZ fault are presented in this paper.
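The two measurements described above — output deviation under Gaussian weight perturbation, and the error caused by forcing a single weight to zero — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the single-hidden-layer architecture, the random placeholder weights (standing in for a trained network), the noise level `sigma`, and the mean-squared-error metric are all assumptions made here for concreteness.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder single-hidden-layer FFANN with tanh hidden units.
# In the paper's setting these weights would come from a trained network.
W1 = rng.normal(size=(4, 8))   # input -> hidden weights
b1 = rng.normal(size=8)
W2 = rng.normal(size=(8, 1))   # hidden -> output weights
b2 = rng.normal(size=1)

def forward(X, W1, b1, W2, b2):
    """One hidden layer with tanh activation, linear output."""
    return np.tanh(X @ W1 + b1) @ W2 + b2

X = rng.normal(size=(100, 4))          # evaluation inputs
y_ref = forward(X, W1, b1, W2, b2)     # fault-free reference outputs

# Sensitivity to Gaussian weight noise: mean absolute output
# deviation, averaged over several independent noise draws.
sigma, trials = 0.05, 50
dev = np.mean([
    np.mean(np.abs(
        forward(X,
                W1 + rng.normal(scale=sigma, size=W1.shape), b1,
                W2 + rng.normal(scale=sigma, size=W2.shape), b2)
        - y_ref))
    for _ in range(trials)
])

# Weight-stuck-at-zero (WSZ) fault: zero one hidden-layer weight at a
# time and record the resulting mean-squared output error.
wsz_errors = []
for i in range(W1.shape[0]):
    for j in range(W1.shape[1]):
        W1_faulty = W1.copy()
        W1_faulty[i, j] = 0.0          # inject the stuck-at-zero fault
        y_faulty = forward(X, W1_faulty, b1, W2, b2)
        wsz_errors.append(np.mean((y_faulty - y_ref) ** 2))

print(f"mean output deviation under weight noise: {dev:.4f}")
print(f"worst-case single-weight WSZ error (MSE): {max(wsz_errors):.4f}")
```

Repeating the WSZ loop over the output-layer weights `W2`, and the noise injection over the inputs `X`, gives the full set of perturbations considered in the study; ranking trained networks by `dev` is one way to pick a "least sensitive" candidate.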

References
  1. Stevenson, M., Winter, R., and Widrow, B. 1990. Sensitivity of feedforward neural networks to weight errors. IEEE Transactions on Neural Networks, vol. 1, no. 1, pp. 71-80.
  2. Zeng, X., and Yeung, D.S. 2001. Sensitivity analysis of multilayer perceptron to input and weight perturbations. IEEE Transactions on Neural Networks, vol. 12, no. 6, pp. 1358-1366.
  3. Tian, Z., Lin, T.T.Y., Yang, S. and Tong, S. 1997. The faulty behavior of feedforward neural networks with hard-limiting activation function. Neural Computation, vol. 9, no. 5, pp. 1109-1126.
  4. Oh, S. H. and Lee, Y. 1995. Sensitivity analysis of single hidden-layer neural networks with threshold functions. IEEE Transactions on Neural Networks, vol. 6, no. 4, pp. 1005-1007.
  5. Choi, J. Y. and Choi, C.-H. 1992. Sensitivity analysis of multilayer perceptrons with differentiable activation functions. IEEE Transactions on Neural Networks, vol. 3, pp. 101-107.
  6. Zurada, J.M., Malinowski, A., and Usui, S. 1997. Perturbation method for deleting redundant inputs of perceptron networks. Neurocomputing, vol. 14, pp. 177-193.
  7. Engelbrecht, A. P., Fletcher, L., and Cloete, I. 1999. Variance analysis of sensitivity information for pruning multilayer feedforward neural networks. In Proceedings of IEEE IJCNN'99, Washington, DC. Vol. 3, pp. 1829-1833.
  8. Zurada, J. M., Malinowski, A., and Cloete, I. 1994. Sensitivity analysis for minimization of input data dimension for feedforward neural network. IEEE International Symposium on Circuits and Systems, New York. pp. 447-450.
  9. Karnin, E. D. 1990. A simple procedure for pruning backpropagation trained neural networks. IEEE Transactions on Neural Networks, vol. 1, no. 2, pp. 239-242.
  10. Alippi, C., Piuri, V., and Sami, M. 1995. Sensitivity to errors in artificial neural networks: A behavioral approach. IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications, vol. 42, no. 6, pp. 358-361.
  11. Arrowsmith, D. K. and Place, C. M. 1992. Dynamical Systems. Chapman & Hall, London.
  12. Cherkassky, V. 1996. Comparison of adaptive methods for function estimation from samples. IEEE Transactions on Neural Networks, vol. 7, no. 4, pp. 969-984.
  13. Riedmiller, M. and Braun, H. 1993. A direct adaptive method for faster backpropagation learning: The RPROP algorithm. In Proceedings of the IEEE International Conference on Neural Networks (ICNN), San Francisco. pp. 586-591.
  14. Chandra, P. and Singh, Y. 2003. Fault tolerance of feedforward artificial neural networks - A framework of study. In Proceedings of the IEEE International Joint Conference on Neural Networks, vol. 1, pp. 489-494.
  15. Singh, A. P., Chandra, P., and Rai, C. S. 2009. Fault models for neural hardware. IEEE International Conference on Advances in System Testing and Validation Lifecycle (VALID 2009), Porto. pp. 7-12.
  16. Zurada, J. M., Malinowski, A., and Cloete, I. 1994. Sensitivity analysis for minimization of input data dimension for feedforward neural network. ISCAS’94, vol. 6, pp. 447-450.
  17. Aggarwal, K. K., Singh, Y., Chandra, P., and Puri, M. 2005. Sensitivity analysis of fuzzy and neural network models. ACM SIGSOFT Software Engineering Notes, vol. 30, no. 4. pp. 1-4.
  18. Sequin, C. H. and Clay, R. D. 1990. Fault tolerance in feedforward neural network. Technical Report TR-90-031. International Computer Science Institute, UC Berkeley, CA.
  19. Phatak, D.S. 1995. Fault tolerant artificial neural network. In Proceedings of the 5th Dual Use Technologies and Applications Conference, Utica/Rome, NY. pp. 193-198.
  20. Bolt, G. 1991. Investigating fault tolerance in artificial neural networks. Technical Report: YCS 154. University of York.
  21. Chiu, C. T., Mehrotra, K., Mohan, C. K., and Ranka, S. 1994. Training techniques to obtain fault tolerant neural networks. In Proceedings of the International Symposium on Fault-Tolerant Computing, Austin, TX, pp. 360-369.
  22. Piche, S. W. 1995. The selection of weight accuracies for Madalines. IEEE Transaction on Neural Networks, vol. 6, no. 2, pp. 432-445.
  23. Takenaga, H., Abe, S., Takatoo, M., Kayama, M., Kitamura, T., and Okuyama, Y. 1990. Optimal input selection of neural networks by sensitivity analysis and its application to image recognition. IAPR workshop on Machine Vision Applications, pp. 117-120.
Index Terms

Computer Science
Information Sciences

Keywords

Artificial Neural Network, Fault models, Sensitivity analysis