Explainable artificial intelligence (XAI) for trustworthy decision-making
DOI:
https://doi.org/10.35335/cit.Vol15.2023.622.pp240-246
Keywords:
Ethical Decision Making, Fairness and Accountability, Loan Approval Optimization, Transparency in AI, Trustworthy AI
Abstract
This research optimizes loan approval decisions by integrating the Trustworthy Decision Making (TDM) framework into a mathematical model. The study aims to balance maximizing loan approvals against ensuring fairness, transparency, and accountability in AI-driven decision-making. The formulation emphasizes interpretable models so that each decision can be explained transparently, in line with trustworthy AI practices. Implementation results demonstrate that the model achieves a balanced approval rate across demographic groups while providing transparent explanations for its decisions. The study highlights the role of ethical considerations and mathematical formulation in fostering responsible AI implementations, while noting that continual refinement of such models remains essential as ethical standards and societal expectations evolve. Overall, this research contributes to the discourse on responsible AI by showcasing a methodological approach that combines ethical principles with mathematical formulation to promote fairness, transparency, and accountability in AI-driven decision-making.
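As an illustration of the kind of formulation the abstract describes, the sketch below shows one way such a model can be realized: an interpretable logistic-regression scorer whose coefficients serve as the decision explanation, and a search that maximizes the overall approval rate subject to a demographic-parity constraint. This is a minimal sketch on synthetic data, not the authors' actual model; the feature names (income, credit_score), the parity tolerance eps, and the grid search over group thresholds are all illustrative assumptions.

# Illustrative sketch only: a fairness-constrained loan-approval policy with
# transparent, coefficient-based explanations. Feature names, the parity
# tolerance, and the threshold search are assumptions, not the paper's
# exact formulation.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
income = rng.normal(50_000, 15_000, n)
credit = rng.uniform(300, 850, n)
group = rng.integers(0, 2, n)                           # hypothetical demographic attribute
y = (credit + rng.normal(0, 60, n) > 600).astype(int)   # synthetic repayment label

# Standardize features so the logistic regression converges and its
# coefficients are comparable in scale.
X = np.column_stack([income, credit])
X = (X - X.mean(axis=0)) / X.std(axis=0)

# Interpretable scorer: a linear model keeps every decision explainable.
model = LogisticRegression().fit(X, y)
scores = model.predict_proba(X)[:, 1]

def approval_rates(t0, t1):
    """Per-group approval rates under group-specific score thresholds."""
    return (np.mean(scores[group == 0] >= t0),
            np.mean(scores[group == 1] >= t1))

# Maximize the overall approval rate subject to demographic parity
# (|rate_0 - rate_1| <= eps) by grid-searching the two thresholds.
eps, best = 0.02, None
grid = np.linspace(0.2, 0.8, 61)
for t0 in grid:
    for t1 in grid:
        r0, r1 = approval_rates(t0, t1)
        if abs(r0 - r1) <= eps:
            overall = (r0 * (group == 0).sum() + r1 * (group == 1).sum()) / n
            if best is None or overall > best[0]:
                best = (overall, t0, t1, r0, r1)

overall, t0, t1, r0, r1 = best
print(f"overall approval rate {overall:.2%} (group 0: {r0:.2%}, group 1: {r1:.2%})")

# Transparency: the model's coefficients are the explanation for each decision.
for name, coef in zip(["income", "credit_score"], model.coef_[0]):
    print(f"  {name}: weight {coef:+.3f}")

A linear scorer is used here because its weights can be read directly as the transparent explanation the abstract calls for; any interpretable model that produces a calibrated score would fit the same pattern.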
License
Copyright (c) 2023 Deni Kurniawan, Lise Pujiastuti

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.