Explainable AI in Neural Networks: Investigating techniques for explaining the decisions and behaviors of neural network models to improve transparency and trust

Authors

  • Dr. Siarhei Katsevich, Associate Professor of Computer Science, Belarusian State University of Informatics and Radioelectronics (BSUIR)

Keywords

Explainable AI

Abstract

Explainable Artificial Intelligence (XAI) has emerged as a crucial area of research, particularly for complex models such as Neural Networks (NNs), whose decisions are difficult to interpret. This paper provides a comprehensive review of techniques for enhancing the explainability of NNs. We first discuss why explainability matters in AI, highlighting its role in building trust and supporting decision-making. We then survey methods for explaining NNs, including feature visualization, attribution methods, and model distillation. We also examine open challenges and future directions in XAI, emphasizing the need for interpretable models in critical applications. Through this review, we aim to give researchers and practitioners a deep understanding of XAI techniques for NNs, fostering the development of transparent and trustworthy AI systems.
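For readers unfamiliar with attribution methods, the minimal sketch below is illustrative only and is not taken from the published paper: it assumes PyTorch and uses a hypothetical toy classifier to show the simplest gradient-based variant, in which the gradient of the predicted-class score with respect to the input serves as a per-feature relevance estimate.

# Minimal sketch of a gradient-based attribution ("saliency map") for a
# neural-network prediction. Illustrative only: the model, input size, and
# class count are hypothetical stand-ins, not taken from the paper.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 3))  # toy classifier
model.eval()

x = torch.randn(1, 4, requires_grad=True)  # one input example with 4 features
logits = model(x)
target = logits.argmax(dim=1).item()       # class whose decision we explain

# Gradient of the target-class score with respect to the input features;
# large absolute values mark features the prediction is most sensitive to.
logits[0, target].backward()
attribution = x.grad.abs().squeeze()

print("Predicted class:", target)
print("Per-feature attribution:", attribution.tolist())

More refined attribution methods discussed in the literature (e.g. integrated gradients or perturbation-based scores) build on this same idea of tracing a prediction back to individual input features.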

Published

2023-12-30

How to Cite

Dr. Siarhei Katsevich, “Explainable AI in Neural Networks: Investigating techniques for explaining the decisions and behaviors of neural network models to improve transparency and trust”, Australian Journal of Machine Learning Research & Applications, vol. 3, no. 2, pp. 260–268, Dec. 2023. [Online]. Available: https://sydneyacademics.com/index.php/ajmlra/article/view/67
