2022/11/24 | Research | Artificial Intelligence

Using interpretability to improve medical AI

The Medical Image Analysis lab at the ARTORG Center has just published a study in which interpretability informed the selection of samples from a set of radiological images. The lab, which has focused on interpretability both as a means of making medical AI more transparent and as a tool to improve AI itself, reports that its novel sample selection approach, based on graph analysis to identify informative samples in a multi-label setting, improved model performance, learning rates, and robustness compared to state-of-the-art Active Learning methods.

Workflow of proposed GESTALT concept for Graph NodE BaSed InTerpretAbility Guided SampLe SelecTion approach (https://doi.org/10.1109/TMI.2022.3215017)

In supervised learning, selecting the most informative samples while labeling as few of them as possible contributes to optimal system performance with minimal expert intervention. Active learning (AL) methods can solve this to a certain extent in a single-label setting. In radiology, however, an image more often carries multiple disease labels at once.
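For readers unfamiliar with active learning, the sketch below illustrates the classic single-label, pool-based variant with uncertainty sampling; it is not code from the published paper, and the model, pool, and batch-size names are purely illustrative (an scikit-learn-style classifier is assumed).

```python
# Minimal sketch of classic pool-based active learning with uncertainty
# sampling in a single-label setting. All names are illustrative and do not
# come from the published paper.
import numpy as np

def uncertainty_sampling_round(model, X_labeled, y_labeled, X_pool, batch_size=16):
    """Pick the pool samples the current model is least certain about."""
    model.fit(X_labeled, y_labeled)                  # retrain on the labeled set
    probs = model.predict_proba(X_pool)              # class probabilities per pool sample
    uncertainty = 1.0 - probs.max(axis=1)            # least-confidence criterion
    query_idx = np.argsort(uncertainty)[-batch_size:]  # most uncertain samples
    return query_idx                                  # send these to the expert for labeling
```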

In a paper just published, the Medical Image Analysis (MIA) lab has proposed a new sample selection approach based on graph analysis to identify informative samples in such a multi-label setting. Building on findings from the interpretability of deep learning models, the edges of the graph encode the similarity between the interpretability saliency-map encodings of the corresponding samples.
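As a rough illustration of this idea (and only that), one might build a graph whose nodes are unlabeled samples and whose edge weights reflect how similar their saliency-map encodings are, then rank nodes with a graph measure. The cosine similarity, threshold, and centrality-based selection rule below are assumptions chosen for the sketch, not the exact method of the paper.

```python
# Illustrative sketch: a similarity graph over saliency-map encodings with a
# simple graph-based ranking. The encoding, similarity metric, and selection
# rule are assumptions, not the published method.
import numpy as np
import networkx as nx

def build_saliency_graph(saliency_encodings, threshold=0.7):
    """saliency_encodings: (n_samples, d) array, one vector per saliency map."""
    norms = np.linalg.norm(saliency_encodings, axis=1, keepdims=True)
    normed = saliency_encodings / np.clip(norms, 1e-8, None)
    sim = normed @ normed.T                          # cosine similarity matrix
    graph = nx.Graph()
    graph.add_nodes_from(range(len(saliency_encodings)))
    for i in range(len(sim)):
        for j in range(i + 1, len(sim)):
            if sim[i, j] > threshold:                # connect samples with similar saliency
                graph.add_edge(i, j, weight=float(sim[i, j]))
    return graph

def select_informative_nodes(graph, n_select=16):
    # Rank nodes by degree centrality as a stand-in informativeness score.
    centrality = nx.degree_centrality(graph)
    return sorted(centrality, key=centrality.get, reverse=True)[:n_select]
```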

MIA has pioneered interpretability in AI since 2018, when it founded iMIMIC, the first workshop dedicated to interpretability of AI for medical imaging. The lab has become a strong reference point for interpretable AI applications in neuroradiology and radiology and for interpretability technology in general. MIA proposes both to apply interpretability as a quality-control strategy for evaluating trained AI models and to use interpretability-derived information to guide the learning process of AI models itself. The latest trends in these areas will be discussed at a day-long symposium on the topic in March 2023: www.caim.unibe.ch/bias2023