
Explainability & Trusted AI

Caught between admirers and detractors, artificial intelligence is today at the center of our attention because of the prowess it demonstrates. Its enthusiasts will say that it can facilitate our daily lives, optimize our working time, and make the best decisions for us. Others will reply that it is unwise to entrust our decisions to a tool whose inner workings we do not really understand. After all, isn't AI a kind of “Black Box”, as we so often call it?



In this article, we explore the importance of AI explainability, especially in the field of medical imaging, and discuss current efforts to make these systems more understandable and trustworthy.



Keywords: Artificial Intelligence, Machine Learning, Deep Learning, explainability, interpretability, Grad-CAM, medical imaging.



AI: elusive at first glance


Artificial intelligence revolves around two key sub-disciplines: Machine Learning and Deep Learning. Despite some similarities, they differ in several ways. Machine Learning is an approach to artificial intelligence that allows machines to learn from data. It uses various techniques, such as regression and decision trees, to create predictive models. Deep Learning, on the other hand, is a subcategory of Machine Learning. It uses multi-layered neural networks, called deep neural networks, to learn from large amounts of data. What really sets Deep Learning apart is its ability to process and learn from unstructured and complex data, such as images or natural language, which traditional Machine Learning techniques handle with more difficulty.


While some Machine Learning models, such as decision trees, are fairly easy to understand, neural networks are often seen as "black boxes". The name comes from the opacity of their decision-making processes, which are particularly complex due to their sophisticated architecture and the non-linearity they introduce.
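To make the contrast concrete, here is a minimal sketch (assuming scikit-learn and the classic Iris dataset, both chosen only for illustration) showing how a decision tree's logic can be printed as plain if/else rules; no comparably readable summary exists for the millions of weights of a deep network.

# Minimal sketch: a decision tree's reasoning prints as human-readable rules.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)

# Each branch reads like a rule a person could check by hand,
# e.g. "petal width (cm) <= 0.80 -> class: 0".
print(export_text(tree, feature_names=list(data.feature_names)))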


In deep neural networks, millions of parameters interact with each other, and the process by which they are adjusted during the training phase is quite complex to decipher. To capture the complex relationships between the inputs and outputs of each layer, non-linearity must be introduced into these layers. This non-linearity is generated by a specific function, called the activation function, which lets information pass only once a stimulation threshold is crossed. In a non-linear model, slight variations in the input can lead to significantly different outputs, which can make some results quite difficult to interpret.
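As a concrete illustration, the short PyTorch sketch below (the two-layer network and its dimensions are arbitrary choices for this example) shows a ReLU activation in action: each unit passes a signal only above the zero threshold, and it is this non-linearity, stacked over many layers, that makes the mapping from input to output hard to trace by hand.

import torch
import torch.nn as nn

# A tiny two-layer network: linear map -> ReLU activation -> linear map.
# ReLU(x) = max(0, x): the unit "fires" only once the threshold 0 is crossed,
# which is what introduces the non-linearity discussed above.
torch.manual_seed(0)
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

x = torch.randn(1, 4)
x_nearby = x + 0.05 * torch.randn(1, 4)

# Two nearby inputs can fall on different sides of the activation threshold
# and therefore produce noticeably different outputs.
print(model(x))
print(model(x_nearby))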


Transparency and trust


AI no longer needs to prove itself, especially in certain medical applications where it outperforms doctors. Prof. Jin Mo Goo, from the Department of Radiology at Seoul National University Hospital in Korea, commenting on an AI system for detecting lung nodules whose detection rate rose from 0.25% without AI to 0.59% with AI assistance, notes:


"Our study provided strong evidence that AI could really help interpret chest X-ray. This will help to more effectively identify lung diseases, especially lung cancer, at an earlier stage."


However, despite AI's impressive performance, performance alone cannot be considered sufficient. The medical profession, which bears legal responsibility, must never validate an automated diagnosis without first having verified and understood it.


That is why AI should never replace the doctor: it cannot provide the judgment and presence needed to make critical decisions, nor the essential human support that goes with them. AI-based medical decisions can have a significant impact on patients' lives, so it is critical that users understand how and why these decisions are made.


When AI systems are able to provide clear and convincing explanations, they help build a higher level of trust and drive wider adoption of these technologies.



Tools to understand AI (explainable AI)


There are several popular solutions used to explain AI.


In Machine Learning, two commonly used libraries are LIME and SHAP. They explain model predictions by providing both global and local explanations: they assign importance values to the different input features, which helps to better understand how models reach their outputs.
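As a hedged illustration (the random forest and the breast-cancer dataset below are arbitrary choices for the example, not tools discussed in this article), SHAP can attribute a model's predictions to its input features in a few lines:

# Minimal SHAP sketch: per-feature contributions for a tree-based model.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(data.data, data.target)

# TreeExplainer computes a SHAP value per feature and per prediction:
# a local explanation of each individual decision.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(data.data[:100])

# Aggregating these values over many samples gives the global view:
# which features matter most across the explained predictions.
shap.summary_plot(shap_values, data.data[:100], feature_names=data.feature_names)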


In Deep Learning, a well-known method for explaining model predictions is Grad-CAM [1] (Gradient-weighted Class Activation Mapping). Grad-CAM generates a heat map that highlights the areas of the image the model relied on for its prediction.


This method is applied to the last convolutional layer of the model, which carries the highest-level spatial features used for the final prediction. Gradients are computed during back-propagation, by differentiating the class score predicted by the model with respect to the activations of this last convolutional layer. These gradients measure the relative importance of each activation for the predicted class.


The principle of Grad-CAM is based on the weighting of activations: the higher the gradient for a given activation map, the more important that map is considered to be for the predicted class. The feature maps of the last convolutional layer are then combined with these weights to produce a heat map that highlights the regions activated for the class of interest.


By superimposing this heat map on the original image, it is possible to visualize the areas of the image the model used to make its prediction. This visualization makes it easier to understand the model's regions of interest and the evidence on which it bases its decisions.
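The whole procedure fits in a few lines of PyTorch. The sketch below is a simplified illustration, not the reference implementation of [1]; the pretrained ResNet-18 and the choice of its last convolutional block are assumptions made only for the example.

import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1").eval()
activations, gradients = {}, {}

# Hook the last convolutional block to capture its activations (forward pass)
# and the gradients flowing back into them (backward pass).
target_layer = model.layer4[-1]
target_layer.register_forward_hook(lambda m, i, o: activations.update(a=o))
target_layer.register_full_backward_hook(lambda m, gi, go: gradients.update(g=go[0]))

image = torch.randn(1, 3, 224, 224)        # placeholder for a preprocessed input image
scores = model(image)
class_idx = scores.argmax(dim=1).item()
scores[0, class_idx].backward()            # gradient of the predicted class score

# Weight each feature map by the average of its gradients, sum, keep positive values.
weights = gradients["g"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["a"]).sum(dim=1, keepdim=True))

# Upsample to the input resolution and normalize to [0, 1] before overlaying
# the heat map on the original image.
cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)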


Figure 1. Grad-CAM for the detection of COVID-19 from chest X-rays [2]. Left: a chest X-ray from the COVID-19 dataset [3]. Right: the Grad-CAM attention map computed from the original image on the left, via the PyTorch library M3d-CAM [4].



References


[1] Selvaraju, R. R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., & Batra, D. (2017). Grad-cam: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE international conference on computer vision (pp. 618-626).


[2] Wang, L., Lin, Z. Q., & Wong, A. (2020). Covid-net: A tailored deep convolutional neural network design for detection of covid-19 cases from chest x-ray images. Scientific reports, 10(1), 19549.


[3] Cohen, J. P., Morrison, P., Dao, L., Roth, K., Duong, T. Q., & Ghassemi, M. (2020). Covid-19 image data collection: Prospective predictions are the future. arXiv preprint arXiv:2006.11988.


[4] Gotkowski, K., Gonzalez, C., Bucher, A., & Mukhopadhyay, A. (2020). M3d-CAM: A PyTorch library to generate 3D data attention maps for medical deep learning. arXiv preprint arXiv:2007.00453.


[5] Chattopadhay, A., Sarkar, A., Howlader, P., & Balasubramanian, V. N. (2018, March). Grad-cam++: Generalized gradient-based visual explanations for deep convolutional networks. In 2018 IEEE winter conference on applications of computer vision (WACV) (pp. 839-847). IEEE.


