
An Implementation of Efficient Datamining Classification Algorithm using Nbtree

International Journal of Computer Applications
© 2013 by IJCA Journal
Volume 67 - Number 12
Year of Publication: 2013
A. Veeraswamy
S. Appavu
E. Kannan

A Veeraswamy, S Appavu and E Kannan. Article: An Implementation of Efficient Datamining Classification Algorithm using Nbtree. International Journal of Computer Applications 67(12):26-29, April 2013. Full text available. BibTeX

@article{veeraswamy2013nbtree,
	author = {A. Veeraswamy and S. Appavu and E. Kannan},
	title = {Article: An Implementation of Efficient Datamining Classification Algorithm using Nbtree},
	journal = {International Journal of Computer Applications},
	year = {2013},
	volume = {67},
	number = {12},
	pages = {26-29},
	month = {April},
	note = {Full text available}
}


Knowledge, not merely information, is not only power but also a significant competitive advantage. Data warehousing is not a new idea: the use of corporate data for strategic decision making, as opposed to its use for tracking and enabling day-to-day operations, is as old as computing itself. Because businesses today accumulate huge amounts of data, with users connected to their databases around the globe and around the clock, maintaining a separate database purely for analysis becomes a necessity. This paper proposes a feature selection method based on the NBTree algorithm. The proposed NBTree classifier reduces computational time and yields better accuracy than competing algorithms. In many applications, however, an accurate ranking of instances by class probability is more desirable than a bare class label. The dependence between two attributes is determined from the probabilities of their joint values that contribute to true and false classification decisions. The paper also evaluates the approach by comparing it with existing feature selection algorithms on 8 datasets from the University of California, Irvine (UCI) machine learning repository. The proposed method shows better results in terms of number of selected features, classification accuracy, and running time than most existing algorithms.

