ABSTRACT: The integration of artificial intelligence (AI) into clinical decision-making has accelerated over the past decade; however, a lack of interpretability continues to limit clinicians’ trust in automated predictions. This study evaluates contemporary explainability approaches—saliency maps, Grad-CAM, SHAP, LIME, attention-based Transformers, and diffusion-based counterfactuals—using a newly available open medical imaging dataset, CheXpert-Plus...
Keywords: Explainability, Clinical Trust, Medical AI, SHAP, Grad-CAM, Diffusion Models, Uncertainty Quantification, CheXpert-Plus Dataset, Transparency, Trustworthy AI.
[1]. Adadi, A., & Berrada, M. (2018). Peeking inside the black-box: A survey on explainable artificial intelligence (XAI). IEEE Access, 6, 52138–52160.
[2]. Aggarwal, R., Sounderajah, V., Martin, G., Ting, D. S. W., Karthikesalingam, A., King, D., Ashrafian, H., & Darzi, A. (2021). Diagnostic accuracy of deep learning in medical imaging: A systematic review and meta-analysis. npj Digital Medicine, 4(1), 1–23.
[3]. Arrieta, A. B., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., García, S., Gil-López, S., Molina, D., Benjamins, R., Chatila, R., & Herrera, F. (2020). Explainable artificial intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58, 82–115.
[4]. Begoli, E., Bhattacharya, T., & Kusnezov, D. (2019). The need for uncertainty quantification in machine-assisted medical decision making. Nature Machine Intelligence, 1(1), 20–23.
[5]. Borelli, P., Filippo, M., & Silva, L. (2022). Trustworthy artificial intelligence in healthcare: A comprehensive review of explainability, fairness, and transparency. Artificial Intelligence in Medicine, 128, 102–109.