
Transformer based Neural Joke Generator

International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Year of Publication: 2021
Authors:
Taaha Kazi, Sameer Joshi, Steeve Kaitharath, Imran Ali Mirza
DOI: 10.5120/ijca2021921724

Taaha Kazi, Sameer Joshi, Steeve Kaitharath and Imran Ali Mirza. Transformer based Neural Joke Generator. International Journal of Computer Applications 183(34):1-4, October 2021.

@article{10.5120/ijca2021921724,
	author = {Taaha Kazi and Sameer Joshi and Steeve Kaitharath and Imran Ali Mirza},
	title = {Transformer based Neural Joke Generator},
	journal = {International Journal of Computer Applications},
	issue_date = {October 2021},
	volume = {183},
	number = {34},
	month = {Oct},
	year = {2021},
	issn = {0975-8887},
	pages = {1-4},
	numpages = {4},
	url = {http://www.ijcaonline.org/archives/volume183/number34/32150-2021921724},
	doi = {10.5120/ijca2021921724},
	publisher = {Foundation of Computer Science (FCS), NY, USA},
	address = {New York, USA}
}

Abstract

Humor is a complex and intrinsic part of human conversation, requiring both a deep understanding of grammatical structure and knowledge of the world. Building computational models that can identify and generate humor remains a challenging problem. This work presents a neural joke generator built on a transformer-based architecture. To improve the generator's performance, the model was further trained with Proximal Policy Optimization (PPO), a reinforcement learning algorithm. The model's performance was evaluated through human ratings and qualitative analysis.
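The paper does not include an implementation, but the training loop the abstract describes (sample a joke from a transformer generator, score it with a reward signal, apply a PPO update) can be sketched roughly as below using the classic PPOTrainer API of the Hugging Face trl library. The base model, hyperparameters, and humor-classifier reward are illustrative assumptions, not the authors' actual setup; "path/to/humor-classifier" is a placeholder, not a real checkpoint.

import torch
from transformers import AutoTokenizer, pipeline
from trl import AutoModelForCausalLMWithValueHead, PPOConfig, PPOTrainer

# Illustrative hyperparameters; the paper does not report its exact values.
config = PPOConfig(model_name="gpt2", learning_rate=1.41e-5,
                   batch_size=1, mini_batch_size=1)

tokenizer = AutoTokenizer.from_pretrained(config.model_name)
tokenizer.pad_token = tokenizer.eos_token

# Policy with a value head for PPO, plus a frozen reference copy whose
# KL penalty keeps the generator close to fluent language.
model = AutoModelForCausalLMWithValueHead.from_pretrained(config.model_name)
ref_model = AutoModelForCausalLMWithValueHead.from_pretrained(config.model_name)

ppo_trainer = PPOTrainer(config, model, ref_model, tokenizer)

# Hypothetical reward model: a transformer humor classifier in the
# spirit of references [5] and [6]; the identifier is a placeholder.
humor_scorer = pipeline("text-classification", model="path/to/humor-classifier")

query_tensor = tokenizer.encode("Why did the chicken", return_tensors="pt")[0]

for _ in range(256):  # illustrative number of PPO updates
    # Sample a candidate joke continuation from the current policy.
    response_tensor = ppo_trainer.generate(query_tensor, return_prompt=False,
                                           max_new_tokens=40, do_sample=True)[0]
    joke = tokenizer.decode(torch.cat([query_tensor, response_tensor]),
                            skip_special_tokens=True)

    # The classifier's "funny" probability becomes the scalar PPO reward.
    reward = torch.tensor(humor_scorer(joke)[0]["score"])

    # One PPO optimization step on the (query, response, reward) triple.
    ppo_trainer.step([query_tensor], [response_tensor], [reward])

In such a setup, the KL penalty against the frozen reference model (handled inside PPOTrainer) is what stops the policy from collapsing onto degenerate strings that happen to score highly with the reward classifier.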

References

  1. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. North American Chapter of the Association for Computational Linguistics.
  2. Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language Models are Unsupervised Multitask Learners.
  3. John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. 2017. Proximal Policy Optimization Algorithms.
  4. Diyi Yang, Alon Lavie, Chris Dyer, and Eduard Hovy. 2015. Humor Recognition and Humor Anchor Extraction. Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing.
  5. Orion Weller and Kevin Seppi. 2019. Humor Detection: A Transformer Gets the Last Laugh. Association for Computational Linguistics.
  6. Issa Annamoradnejad. 2020. ColBERT: Using BERT Sentence Embedding for Humor Detection. https://arxiv.org/abs/2004.12765
  7. Daniel M. Ziegler, Nisan Stiennon, Jeffrey Wu, Tom B. Brown, Alec Radford, Dario Amodei, Paul Christiano, and Geoffrey Irving. 2020. Fine-Tuning Language Models from Human Preferences.
  8. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention Is All You Need. 31st Conference on Neural Information Processing Systems.
  9. Abhinav Moudgil. 2016. Short Jokes. https://www.kaggle.com/abhinavmoudgil95/short-joke
  10. Taaha Kazi. 2021. Detoxifying Language Models with Proximal Policy Optimization. Manuscript in preparation.

Keywords

Natural Language Generation, Humor