Research Article

A Survey on methods of Trustworthiness towards Artificial Intelligence

by Hiralal B. Solunke, Sonal P. Patil, Shital S. Jadhav
International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 183 - Number 26
Year of Publication: 2021
Authors: Hiralal B. Solunke, Sonal P. Patil, Shital S. Jadhav
DOI: 10.5120/ijca2021921635

Hiralal B. Solunke, Sonal P. Patil, Shital S. Jadhav. A Survey on methods of Trustworthiness towards Artificial Intelligence. International Journal of Computer Applications 183, 26 (Sep 2021), 5-8. DOI=10.5120/ijca2021921635

@article{10.5120/ijca2021921635,
author = {Hiralal B. Solunke and Sonal P. Patil and Shital S. Jadhav},
title = {A Survey on methods of Trustworthiness towards Artificial Intelligence},
journal = {International Journal of Computer Applications},
issue_date = {Sep 2021},
volume = {183},
number = {26},
month = {Sep},
year = {2021},
issn = {0975-8887},
pages = {5-8},
numpages = {4},
url = {https://ijcaonline.org/archives/volume183/number26/32089-2021921635/},
doi = {10.5120/ijca2021921635},
publisher = {Foundation of Computer Science (FCS), NY, USA},
address = {New York, USA}
}
Abstract

This survey examines the role of trustworthiness in data analytics for artificial intelligence, viewed from the perspectives of data quality and privacy. Science fiction films such as ‘The Terminator’ and ‘I, Robot’ have depicted what might happen if artificial intelligence goes rogue, and such dystopian scenarios are widely discussed by experts and researchers in the field as well. Many of these experts believe that super-intelligent AI systems could pose a significant threat to humanity in the near future, and given the untold potential of AI, this concern cannot be dismissed. Developers of AI systems therefore need to understand society’s concerns about their development; there have been reported instances where developers neglected these warnings and built AI systems that behaved in ways harmful to society. This survey describes the risks and challenges of AI, and also shows how AI can be used to enhance the trustworthiness of a system. It is noteworthy that the same technologies that raise trust concerns can also be applied to improve trust in systems and to mitigate risks: safety, security, and reliability can be improved through the appropriate use of AI technologies, since they enable faster response and greater adaptability of a system to unforeseen situations.

Index Terms

Computer Science
Information Sciences

Keywords

Artificial Intelligence (AI), trustworthiness, super-intelligent, adaptability, predictability, explainability, trust, e-trust