Medical Image Analysis

The Medical Image Analysis (MIA) group develops advanced AI-based image analysis technologies and translational biomedical engineering solutions to quantify, diagnose, and monitor diseases.

Our core expertise lies in multimodal image segmentation and longitudinal analysis of brain pathologies (glioblastoma, brain metastases, ischemic stroke), as well as deep learning for thoracic imaging. We aim to identify robust, non-invasive imaging biomarkers to characterize disease evolution, guide therapy, and support clinical decision-making.

AI in Neuro-Oncology

Accurate, Robust and Clinically Translated AI

Magnetic Resonance Imaging (MRI) remains the cornerstone of brain tumour assessment. In recent years, the MIA group has co-developed AI models that have led to clinically deployed software in collaboration with Neosoma Inc., including FDA-cleared solutions for glioma segmentation.

Building on this foundation, 2025 marks an important strategic development: the creation of Neosoma GmbH in Bern. In this new structure, we serve as the Swiss-based hub for all current and future AI developments of Neosoma Inc., working in close partnership with the University of Bern and Inselspital to strengthen Switzerland’s position as a global innovation site for AI in brain oncology.

SNSF–MAPS: A-BEACON Award

A major highlight this year is the award of the SNSF Multi-Area Projects in Science (MAPS) grant for A-BEACON (AI-based Brain Metastases Tracking and Segmentation). This international consortium aims to address a critical unmet need: consistent and accurate monitoring of brain metastases across longitudinal MRI timepoints.

A-BEACON develops a "zero-miss" AI system focused on:

  • High detection sensitivity
  • Reliable tumour volume quantification
  • Precise longitudinal lesion tracking
  • Seamless integration into clinical workflows
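To make the lesion-tracking goal above concrete, the sketch below matches lesion centroids between two MRI timepoints by greedy nearest-neighbour assignment under a distance threshold. This is a minimal illustration only; the function name, the threshold value, and the greedy strategy are assumptions for exposition, not the matching algorithm developed in A-BEACON.

```python
import numpy as np

def match_lesions(prev_centroids, curr_centroids, max_dist=10.0):
    """Greedily match lesion centroids (in mm) between two timepoints.

    Returns (matches, new, resolved): index pairs for matched lesions,
    indices of unmatched current lesions (candidate new lesions), and
    indices of unmatched previous lesions (candidate resolved lesions).
    Illustrative sketch only -- not the A-BEACON tracking method.
    """
    prev = np.asarray(prev_centroids, dtype=float)
    curr = np.asarray(curr_centroids, dtype=float)
    # Pairwise Euclidean distances between all prev/curr centroid pairs.
    dists = np.linalg.norm(prev[:, None, :] - curr[None, :, :], axis=-1)
    matches, used_prev, used_curr = [], set(), set()
    # Accept the closest pairs first, within the distance threshold.
    for p, c in sorted(np.ndindex(*dists.shape), key=lambda pc: dists[pc]):
        if dists[p, c] <= max_dist and p not in used_prev and c not in used_curr:
            matches.append((p, c))
            used_prev.add(p)
            used_curr.add(c)
    new = [c for c in range(len(curr)) if c not in used_curr]
    resolved = [p for p in range(len(prev)) if p not in used_prev]
    return matches, new, resolved
```

In practice, consortium-grade tracking would also account for registration error, lesion volume, and confluent lesions; the greedy threshold here only conveys the basic bookkeeping of matched, new, and resolved lesions across timepoints.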

Featured: DeepBraTumIA
Featured: Lumiere

Interpretable and Clinically Relevant AI in Medical Imaging

AI in Radiotherapy

The group continues to advance AI methods that directly incorporate clinical dosimetric relevance into segmentation evaluation and contouring workflows. Recent work includes:

  • A multifaceted AI contouring evaluation framework benchmarking geometric and dosimetric performance.
  • Dose-prediction models that provide radiotherapy-aware quality assurance.
  • Automated systems for ranking segmentation variants based on dosimetric impact.

These developments help ensure that segmentation accuracy is assessed not only geometrically but also in terms of actual clinical consequences.
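The distinction between geometric and dosimetric assessment can be sketched with two toy metrics: a Dice overlap (purely geometric) and the difference in mean dose received inside the reference versus the predicted contour (dosimetric). The function names and the specific dose statistic are illustrative assumptions, not the group's evaluation framework.

```python
import numpy as np

def dice(ref_mask, pred_mask):
    """Geometric agreement: Dice overlap of two binary masks."""
    inter = np.logical_and(ref_mask, pred_mask).sum()
    return 2.0 * inter / (ref_mask.sum() + pred_mask.sum())

def mean_dose_error(ref_mask, pred_mask, dose):
    """Dosimetric agreement: absolute difference in mean dose (Gy)
    inside the reference vs. the predicted contour.
    Toy statistic for illustration -- real QA uses DVH-based metrics."""
    return abs(dose[ref_mask].mean() - dose[pred_mask].mean())
```

Two contours with identical Dice scores can differ sharply in dose error when one error lies in a high-dose gradient, which is exactly why geometric metrics alone can misrank segmentation variants.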

Interpretability of Deep Learning-based MIA

Trustworthy AI in medicine requires models that provide transparent, human-aligned decision pathways. A central theme this year is the advancement of human-aligned learning frameworks, where models are trained to attend to radiologically meaningful patterns.

We employ explanation-guided learning and transformer-based attention mechanisms to ensure internal representations align with clinical expectations. These advances extend beyond neuro-oncology into thoracic imaging, ensuring performance gains remain tied to medically meaningful explanations.
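One common form of explanation-guided learning augments the task loss with a penalty on attention mass that falls outside expert-annotated regions. The sketch below shows that idea in its simplest form; the function name, the penalty design, and the weighting are hypothetical choices for illustration, not the group's exact training objective.

```python
import numpy as np

def explanation_guided_loss(task_loss, attention, expert_mask, lam=0.1):
    """Combine a task loss with an attention-alignment penalty.

    attention:   model attention map (non-negative array)
    expert_mask: binary map of radiologically meaningful regions
    The penalty is the attention mass falling outside the expert mask,
    nudging the model to attend to clinically relevant areas.
    Hypothetical sketch of explanation-guided learning.
    """
    attn = attention / (attention.sum() + 1e-8)  # normalize to a distribution
    outside = attn[~expert_mask.astype(bool)].sum()
    return task_loss + lam * outside
```

In a real training loop this term would be differentiated alongside the task loss; the key design choice is that the penalty vanishes exactly when all attention lies inside the clinically annotated region.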

Our interpretability research is now embedded into major applied initiatives, including A-BEACON, contributing to AI systems that are safe-by-design, clinically explainable, and reliable.