
A Novel Approach towards Rating Free-Text Responses in Job Recruitment

International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Year of Publication: 2021
Authors:
Sarvesh Relekar, Sayak Ray
DOI: 10.5120/ijca2021921048

Sarvesh Relekar and Sayak Ray. A Novel Approach towards Rating Free-Text Responses in Job Recruitment. International Journal of Computer Applications 174(14):1-8, January 2021.

@article{10.5120/ijca2021921048,
	author = {Sarvesh Relekar and Sayak Ray},
	title = {A Novel Approach towards Rating Free-Text Responses in Job Recruitment},
	journal = {International Journal of Computer Applications},
	issue_date = {January 2021},
	volume = {174},
	number = {14},
	month = {Jan},
	year = {2021},
	issn = {0975-8887},
	pages = {1-8},
	numpages = {8},
	url = {http://www.ijcaonline.org/archives/volume174/number14/31743-2021921048},
	doi = {10.5120/ijca2021921048},
	publisher = {Foundation of Computer Science (FCS), NY, USA},
	address = {New York, USA}
}

Abstract

The use of chatbots has become mainstream in staffing and recruitment, and candidates tend to be noticeably more at ease when interacting with them. The responses that candidates give to questions posed via chatbots are evaluated against various parameters by human evaluators, who may hold a subjective bias about what constitutes a good response. This is especially true when evaluating free-text responses to open-ended questions that have little to no domain constraint. To overcome this hurdle of human bias, we propose an alternative approach that uses modern techniques from Natural Language Processing and Deep Learning to develop an algorithm that rates free-text responses impartially, based on the mood/sentiment expressed by the candidate, the grammatical accuracy of the answer, and the relevance of the response to the question asked, while penalizing the response for any negation or grammatical error; the algorithm thus acts as a baseline model for the stated task. It sets a common standard for what can be considered a good response, thereby overcoming the hurdle arising from human perspective and establishing a criterion for evaluating free-text responses.
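The paper does not publish its implementation. As a rough illustration of the scoring idea described in the abstract, the following self-contained Python sketch combines three components: a lexicon-based sentiment score, a bag-of-words relevance score between question and answer, and penalties for negation words and a caller-supplied grammar-error count. All weights, word lists, and function names here are illustrative assumptions, not the authors' actual model, which uses trained NLP and deep-learning components.

```python
import math
from collections import Counter

# Tiny illustrative word lists; the paper's actual pipeline uses trained
# sentiment, grammar-checking, and embedding models instead.
POSITIVE_WORDS = {"enjoy", "great", "passionate", "excited", "love", "strong"}
NEGATION_WORDS = {"not", "never", "no", "don't", "cannot", "won't"}

def tokens(text):
    # Lowercase whitespace tokenization with punctuation stripped.
    return [w.strip(".,!?") for w in text.lower().split()]

def cosine(a, b):
    # Cosine similarity between two bag-of-words Counters.
    dot = sum(count * b[word] for word, count in a.items())
    norm_a = math.sqrt(sum(c * c for c in a.values()))
    norm_b = math.sqrt(sum(c * c for c in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def rate_response(question, answer, grammar_errors=0):
    """Return a score in [0, 1]: weighted relevance plus sentiment,
    minus penalties for negations and grammar errors."""
    answer_tokens = tokens(answer)
    if not answer_tokens:
        return 0.0
    sentiment = sum(w in POSITIVE_WORDS for w in answer_tokens) / len(answer_tokens)
    relevance = cosine(Counter(tokens(question)), Counter(answer_tokens))
    negation_penalty = 0.2 * sum(w in NEGATION_WORDS for w in answer_tokens)
    grammar_penalty = 0.1 * grammar_errors
    raw = 0.6 * relevance + 0.4 * sentiment - negation_penalty - grammar_penalty
    return max(0.0, min(1.0, raw))
```

The 0.6/0.4 weights and the per-negation and per-error penalties are arbitrary placeholders; in the paper's framing these components would be replaced by sentiment analysis, a grammar checker, and semantic-similarity models such as those cited in the references.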

References

  1. Rodrigues, F., Araújo, L. (2012). Automatic Assessment of Short Free Text Answers. In Proceedings of the 4th International Conference on Computer Supported Education (CSEDU 2012).
  2. Noorbehbahani, F., Kardan, A. (2011). The automatic assessment of free text answers using a modified BLEU algorithm. Computers & Education. 56. 337-345. 10.1016/j.compedu.2010.07.013.
  3. Chakraborty, U., Das, S. (2015). Automatic Free Text Answer Evaluation using Knowledge Network. IJCA. 117. 10.5120/20532-2876.
  4. Perez, D., Gliozzo, A., Strapparava, C., Alfonseca, E., Rodriguez, P., Magnini, B. (2005). Automatic assessment of students' free-text answers underpinned by the combination of a BLEU-inspired algorithm and latent semantic analysis. In Proceedings of the 18th International Florida Artificial Intelligence Research Society Conference (FLAIRS'05) (pp. 358-362).
  5. Mohler, M., Mihalcea, R. (2009). Text-to-Text Semantic Similarity for Automatic Short Answer Grading. In Proceedings of the 12th Conference of the European Chapter of the ACL (EACL 2009).
  6. Burstein, J., Leacock, C., Swartz, R. (2001). Automated Evaluation of Essays and Short Answers. Fifth International Computer Assisted Assessment Conference, Loughborough University, 2-3 July 2001.
  7. Leacock, C., Chodorow, M. (2003). C-rater: Automated Scoring of Short-Answer Questions. Computers and the Humanities 37, 389-405. https://doi.org/10.1023/A:1025779619903
  8. Rudner, L.M., Liang, T. (2002). Automated essay scoring using Bayes' Theorem. The Journal of Technology, Learning and Assessment, 1(2), pp. 3-21.
  9. Larkey, L.S. (1998). Automatic essay grading using text categorization techniques. In Proceedings of the 21st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (pp. 90-95).
  10. IBM Watson Platform, https://www.ibm.com/watson/
  11. GrammarBot: Grammar Check API, https://www.grammarbot.io/
  12. Norvig, P. How to Write a Spelling Corrector, https://norvig.com/spell-correct.html
  13. Le, Q., Mikolov, T. (2014). Distributed Representations of Sentences and Documents. In Proceedings of the 31st International Conference on Machine Learning (ICML 2014).
  14. Mikolov, T., Sutskever, I., Chen, K., Corrado, G.S., Dean, J. (2013). Distributed Representations of Words and Phrases and their Compositionality. Advances in Neural Information Processing Systems 26.
  15. Pennington, J., Socher, R., Manning, C. (2014). GloVe: Global Vectors for Word Representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP 2014).
  16. Hewlett Foundation: Automated Essay Scoring, https://www.kaggle.com/c/asap-aes
  17. Camacho-Collados, J., Pilehvar, M.T. (2017). On the Role of Text Preprocessing in Neural Network Architectures: An Evaluation Study on Text Categorization and Sentiment Analysis.
  18. Bickel, P., Doksum, K. (1981). An Analysis of Transformations Revisited. Journal of the American Statistical Association 76. 10.2307/2287831.
  19. Jurafsky, D, Martin, J.H., (2009), Speech and Language Processing (2nd Edition). Prentice-Hall, Inc., USA.
  20. Sherstinsky, Alex. (2020). Fundamentals of Recurrent Neural Network (RNN) and Long Short-Term Memory (LSTM) network. Physica D: Nonlinear Phenomena. 404. 132306. 10.1016/j.physd.2019.132306.
  21. Glasgow, B., Mandell, A., Binney, D., Ghemri, L., Fisher, D. (1997). MITA: An Information Extraction Approach to Analysis of Free-Form Text in Life Insurance Applications. Innovative Applications of Artificial Intelligence, Providence, RI, USA, July 27-31, 1997.
  22. Contreras, J.O., Hilles, S., Abubakar, Z.B. (2018). Automated Essay Scoring with Ontology based on Text Mining and NLTK tools. International Conference on Smart Computing and Electronic Enterprise (ICSCEE), Shah Alam, pp. 1-6. doi: 10.1109/ICSCEE.2018.8538399.
  23. Shah, C., Pomerantz, J. (2010). Evaluating and Predicting Answer Quality in Community QA. pp. 411-418. 10.1145/1835449.1835518.
  24. Mitchell, T., Russell, T., Broomhead, P., Aldridge, N. (2002). Towards robust computerised marking of free-text responses.
  25. Minaee, S., Kalchbrenner, N., Cambria, E., Nikzad, N., Chenaghlu, M., Gao, J. (2020). Deep Learning Based Text Classification: A Comprehensive Review. ArXiv, abs/2004.03705.

Keywords

Automated Free-Text Grading, Natural Language Processing, Deep Learning, Data Science