AI in Radiotherapy
The group continues to advance AI methods that directly incorporate clinical dosimetric relevance into segmentation evaluation and contouring workflows. Recent work includes:
- A multifaceted AI contouring evaluation framework benchmarking geometric and dosimetric performance.
- Dose-prediction models that provide radiotherapy-aware quality assurance.
- Automated systems for ranking segmentation variants based on dosimetric impact.
These developments help ensure that segmentation accuracy is assessed not only geometrically but also in terms of actual clinical consequences, as illustrated in the sketch below.
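To make the dual geometric/dosimetric evaluation idea concrete, the following is a minimal sketch, not the group's actual framework: it scores a predicted contour both by Dice overlap with a reference contour and by the change in mean dose to the structure on a co-registered dose grid. All function names, array shapes, and the toy data are illustrative assumptions.

```python
# Minimal sketch of contour evaluation that reports geometric overlap
# alongside a dosimetric consequence (hypothetical names and shapes).
import numpy as np

def dice(ref: np.ndarray, pred: np.ndarray) -> float:
    """Geometric agreement: Dice similarity coefficient of two binary masks."""
    intersection = np.logical_and(ref, pred).sum()
    return 2.0 * intersection / (ref.sum() + pred.sum())

def mean_dose(mask: np.ndarray, dose: np.ndarray) -> float:
    """Mean dose (Gy) over the voxels inside a binary mask."""
    return float(dose[mask].mean())

def evaluate_contour(ref: np.ndarray, pred: np.ndarray, dose: np.ndarray) -> dict:
    """Score a predicted contour geometrically (Dice) and dosimetrically
    (shift in mean structure dose when substituting pred for ref)."""
    return {
        "dice": dice(ref, pred),
        "mean_dose_delta_gy": mean_dose(pred, dose) - mean_dose(ref, dose),
    }

# Toy example: a cubic structure and a spatially shifted prediction.
rng = np.random.default_rng(0)
dose = rng.uniform(0.0, 60.0, size=(32, 32, 32))        # stand-in dose grid
ref = np.zeros(dose.shape, dtype=bool); ref[10:20, 10:20, 10:20] = True
pred = np.zeros(dose.shape, dtype=bool); pred[12:22, 10:20, 10:20] = True
print(evaluate_contour(ref, pred, dose))
```

A ranking system in this spirit would sort candidate segmentation variants by the magnitude of the dosimetric delta rather than by geometric overlap alone, so that errors with little clinical impact are not over-penalized.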
Interpretability of Deep Learning-Based Medical Image Analysis (MIA)
Trustworthy AI in medicine requires models that provide transparent, human-aligned decision pathways. A central theme this year is the advancement of human-aligned learning frameworks, where models are trained to attend to radiologically meaningful patterns.
We employ explanation-guided learning and transformer-based attention mechanisms so that internal representations align with clinical expectations. These advances extend beyond neuro-oncology into thoracic imaging, keeping performance gains tied to medically meaningful explanations.
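As a rough illustration of explanation-guided learning, the sketch below combines a standard task loss with a penalty on attention mass that falls outside an expert-annotated region, assuming the model exposes a per-position attention map. The tensor names, shapes, and the weighting term `lam` are assumptions for illustration, not the group's implementation.

```python
# Minimal sketch of an explanation-guided loss: task loss plus a penalty on
# attention allocated outside radiologist-marked regions (illustrative only).
import torch
import torch.nn.functional as F

def explanation_guided_loss(logits, labels, attention, expert_mask, lam=0.5):
    """Cross-entropy task loss plus lam * (attention mass outside expert_mask)."""
    task_loss = F.cross_entropy(logits, labels)
    # Normalize raw attention scores to a distribution over spatial positions.
    attn = attention.flatten(1).softmax(dim=-1).view_as(attention)
    # Average fraction of attention falling outside the annotated region.
    outside = (attn * (1.0 - expert_mask)).flatten(1).sum(dim=-1).mean()
    return task_loss + lam * outside

# Toy usage with random tensors standing in for model outputs.
logits = torch.randn(4, 2)                      # batch of 4, binary task
labels = torch.randint(0, 2, (4,))
attention = torch.randn(4, 16, 16)              # raw spatial attention scores
expert_mask = torch.zeros(4, 16, 16)
expert_mask[:, 4:12, 4:12] = 1.0                # clinician-marked region
print(explanation_guided_loss(logits, labels, attention, expert_mask))
```

The design choice here is to regularize where the model looks rather than what it predicts, which is one common way to keep learned representations tied to radiologically meaningful patterns.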
Our interpretability research is now embedded in major applied initiatives, including A-BEACON, contributing to AI systems that are safe by design, clinically explainable, and reliable.