Research Article

A Case Study on the Different Algorithms used for Sentiment Analysis

by Nikhil George Titus, Tinto Anto Alapatt, Niranjan Rao
International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 138 - Number 12
Year of Publication: 2016
10.5120/ijca2016909030

Nikhil George Titus, Tinto Anto Alapatt, Niranjan Rao. A Case Study on the Different Algorithms used for Sentiment Analysis. International Journal of Computer Applications. 138, 12 (March 2016), 18-20. DOI=10.5120/ijca2016909030

@article{ 10.5120/ijca2016909030,
author = { Nikhil George Titus, Tinto Anto Alapatt, Niranjan Rao },
title = { A Case Study on the Different Algorithms used for Sentiment Analysis },
journal = { International Journal of Computer Applications },
issue_date = { March 2016 },
volume = { 138 },
number = { 12 },
month = { March },
year = { 2016 },
issn = { 0975-8887 },
pages = { 18-20 },
numpages = {3},
url = { https://ijcaonline.org/archives/volume138/number12/24431-2016909030/ },
doi = { 10.5120/ijca2016909030 },
publisher = {Foundation of Computer Science (FCS), NY, USA},
address = {New York, USA}
}
%0 Journal Article
%A Nikhil George Titus
%A Tinto Anto Alapatt
%A Niranjan Rao
%T A Case Study on the Different Algorithms used for Sentiment Analysis
%J International Journal of Computer Applications
%@ 0975-8887
%V 138
%N 12
%P 18-20
%D 2016
%I Foundation of Computer Science (FCS), NY, USA
Abstract

This paper is a case study of the different algorithms and techniques used to build a robust sentiment analysis system. The problem considered is classifying movie reviews by their sentiment, for which machine learning algorithms based on naive Bayes and support vector machines were used. A naive Bayes classifier combined with the minimum-cuts algorithm [1] was also used to generate context-sensitive summaries before classifying a document. The process has four main phases. A tokenizer normalizes words and splits the text into sentences. A feature extraction model removes unwanted words and takes negation into account. The naive Bayes with minimum-cuts algorithm then selects the subjective sentences. Finally, a standard SVM or naive Bayes classifier assigns the overall sentiment.
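The first, second, and fourth phases described in the abstract can be illustrated with a minimal sketch: a tokenizer, a feature extractor that tags negated words (a common scheme in sentiment work, following Pang and Lee), and a multinomial naive Bayes classifier with add-one smoothing. This is an assumed, simplified reconstruction for illustration only; the paper's actual system also applies the minimum-cuts subjectivity filter and an SVM, which are omitted here. The training snippets, stopword list, and `NOT_` prefix convention are all hypothetical.

```python
import math
import re
from collections import Counter

NEGATIONS = {"not", "no", "never"}
STOPWORDS = frozenset({"the", "a", "an", "and", "of"})

def tokenize(text):
    """Phase 1: normalize case and split the text into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

def extract_features(tokens):
    """Phase 2: drop unwanted words and mark negation.

    After a negation word, later tokens get a NOT_ prefix so that
    "not good" and "good" become distinct features (assumed scheme)."""
    feats, negate = [], False
    for tok in tokens:
        if tok in NEGATIONS:
            negate = True
        elif tok not in STOPWORDS:
            feats.append("NOT_" + tok if negate else tok)
    return feats

class NaiveBayes:
    """Phase 4: multinomial naive Bayes with add-one (Laplace) smoothing."""

    def fit(self, docs, labels):
        self.prior = Counter(labels)                     # class frequencies
        self.counts = {c: Counter() for c in self.prior}  # per-class feature counts
        for feats, c in zip(docs, labels):
            self.counts[c].update(feats)
        self.vocab = set().union(*self.counts.values())
        return self

    def predict(self, feats):
        n = sum(self.prior.values())
        best_label, best_score = None, float("-inf")
        for c in self.prior:
            score = math.log(self.prior[c] / n)
            denom = sum(self.counts[c].values()) + len(self.vocab)
            for f in feats:
                score += math.log((self.counts[c][f] + 1) / denom)
            if score > best_score:
                best_label, best_score = c, score
        return best_label

# Hypothetical toy training data (the paper used full movie reviews).
train = [
    ("a great and moving film", "pos"),
    ("an absolutely wonderful story", "pos"),
    ("a dull and boring film", "neg"),
    ("not a good film", "neg"),
]
docs = [extract_features(tokenize(text)) for text, _ in train]
clf = NaiveBayes().fit(docs, [label for _, label in train])

print(clf.predict(extract_features(tokenize("a wonderful and moving story"))))
print(clf.predict(extract_features(tokenize("not a good film"))))
```

Note how negation tagging lets the classifier separate "good" from "NOT_good": without it, the negative review "not a good film" would share the positive-leaning feature "good" with genuinely positive reviews.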

References
  1. Bo Pang and Lillian Lee: A Sentimental Education: Sentiment Analysis Using Subjectivity Summarization Based on Minimum Cuts, Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL'04), Main Volume, 2004.
  2. M.F. Porter: An algorithm for suffix stripping, Program, Vol. 40, Iss. 3, pp. 211-218, 2006.
  3. Manish Agarwal and Sudeept Sinha: Polarity Detection in Reviews (Sentiment Analysis), CS498: B.Tech. Project Report
  4. Stanford Log-linear POS Tagger http://nlp.stanford.edu/software/tagger.shtml
  5. Daniel Jurafsky and James H. Martin: Speech and Language Processing, 2nd edition.
  6. Shilpa Dhanjibhai Serasiya, Neeraj Chaudhary: Simulation of Various Classifications Results using WEKA, International Journal of Recent Technology and Engineering (IJRTE) ISSN: 2277-3878, Volume-1, Issue-3, August 2012
Index Terms

Computer Science
Information Sciences

Keywords

Tokenizer, Feature Extraction, Classifier, Training, Minimum cuts