Explainable Artificial Intelligence: A Study of Current State-of-the-Art Techniques for Making ML Models Interpretable and Transparent
- Resource Type
- Conference
- Authors
- Thakur, Ayush; Vashisth, Rashmi; Tripathi, Sudhanshu
- Source
- 2023 3rd International Conference on Technological Advancements in Computational Sciences (ICTACS), pp. 111-115, Nov. 2023
- Subject
- Communication, Networking and Broadcast Technologies
Components, Circuits, Devices and Systems
Computing and Processing
Engineering Profession
General Topics for Engineers
Photonics and Electrooptics
Robotics and Control Systems
Signal Processing and Analysis
Transportation
Industries
Ethics
Data privacy
Data handling
Decision making
Fabrics
Cognition
Explainable Artificial Intelligence
Human-in-the-loop systems
Model Bias
Transparency and Interpretability
- Language
Artificial intelligence and machine learning are becoming more prevalent across a variety of industries, increasing the demand for systems that can explain their decision-making processes to human users. The idea behind “Explainable AI” (XAI) is to create AI systems that can offer clear, reasoned arguments for the actions they perform. This research examines new approaches for increasing transparency and interpretability in machine learning models. Our focus is on the wide range of XAI approaches that have been put forth, with particular attention to how broadly applicable they are. In addition, we examine the significant challenges that must be overcome to support the development of ML models that are both transparent and accessible.
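As an illustration of the model-agnostic interpretability techniques surveyed in work like this, a minimal sketch of permutation feature importance follows. This example is not taken from the paper; the model, data, and function names are hypothetical, and the "black box" is a fixed linear model chosen so the expected ranking of features is known.

```python
# Illustrative sketch (not from the paper): permutation feature importance,
# a simple model-agnostic XAI technique. Shuffling a feature that the model
# relies on degrades predictions; shuffling an ignored feature does not.
import random

def predict(x):
    # Stand-in "black box": a fixed linear model.
    # Feature 0 dominates; feature 2 is ignored (weight 0).
    w = [3.0, 1.0, 0.0]
    return sum(wi * xi for wi, xi in zip(w, x))

def permutation_importance(model, X, y, n_repeats=10, seed=0):
    """Score each feature by the mean increase in squared error
    after randomly shuffling that feature's column."""
    rng = random.Random(seed)

    def mse(preds):
        return sum((p - t) ** 2 for p, t in zip(preds, y)) / len(y)

    base = mse([model(x) for x in X])
    importances = []
    for j in range(len(X[0])):
        deltas = []
        for _ in range(n_repeats):
            col = [x[j] for x in X]
            rng.shuffle(col)
            X_perm = [x[:j] + [v] + x[j + 1:] for x, v in zip(X, col)]
            deltas.append(mse([model(x) for x in X_perm]) - base)
        importances.append(sum(deltas) / n_repeats)
    return importances

random.seed(1)
X = [[random.random() for _ in range(3)] for _ in range(50)]
y = [predict(x) for x in X]  # labels generated by the same model
imp = permutation_importance(predict, X, y)
# Feature 0 (largest weight) should score highest; feature 2 near zero.
```

The same perturb-and-compare idea underlies several of the explanation methods discussed in the XAI literature: the model is treated strictly as a black box, so the procedure applies regardless of its internal architecture.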