Generalist Models in Medical Image Segmentation:
A Survey and Performance Comparison with Task-Specific Approaches

Andrea Moglia1✦, Matteo Leccardi1✦,
Matteo Cavicchioli1, Alice Maccarini2, Marco Marcon1,
Luca Mainardi1, Pietro Cerveri1,2

1: Politecnico di Milano, 2: Università di Pavia
✦: Equally contributing authors
For correspondence: andrea.moglia@polimi.it

Abstract

Following the successful paradigm shift of large language models, which leverage pre-training on massive corpora of data and fine-tuning on various downstream tasks, generalist models have made their foray into computer vision. The introduction of the Segment Anything Model (SAM) marked a milestone in the segmentation of natural images and inspired the design of numerous architectures for medical image segmentation. In this survey, we offer a comprehensive and in-depth investigation of generalist models for medical image segmentation. We begin with an introduction to the fundamental concepts underpinning their development. We then provide a taxonomy, based on feature fusion, covering the different adaptations of SAM (zero-shot, few-shot, fine-tuning, and adapters), SAM 2, other innovative models trained on images alone, and models trained on both text and images. We thoroughly analyze their performance, both as reported in the primary research and at the best-in-literature level, followed by a rigorous comparison with state-of-the-art task-specific models. We emphasize the need to address challenges in terms of compliance with regulatory frameworks, privacy and security laws, budget, and trustworthy artificial intelligence (AI). Finally, we share our perspective on future directions concerning synthetic data, early fusion, lessons learnt from generalist models in natural language processing, agentic AI, physical AI, and clinical translation. We publicly release a database-backed interactive app with all survey data (https://hal9000-lab.github.io/GMMIS-Survey/).

Published in Information Fusion
26 Sep 2025
https://doi.org/10.1016/j.inffus.2025.103709

Preprint available on arXiv
12 Jun 2025
https://doi.org/10.48550/arXiv.2506.10825