Scientific and Technological (CyT) Production

MICCAI FAIMI Workshop (Fairness of AI in Medical Imaging) - Towards unraveling calibration biases in medical image analysis

Conference

Authors:

Maria Agustina Ricci Lara; Candelaria Mosquera; Enzo Ferrante; Rodrigo Echeveste

Date:

2023

Publisher and Place of Publication:

Springer Nature

Abstract

In recent years, the development of artificial intelligence (AI) for medical image analysis has gained enormous momentum. At the same time, a large body of work has shown that AI systems can systematically and unfairly discriminate against certain populations in various application scenarios, motivating the emergence of algorithmic fairness studies. Most research on healthcare algorithmic fairness to date has focused on the assessment of biases in terms of classical discrimination metrics such as AUC and accuracy. Potential biases in terms of model calibration, however, have only recently begun to be evaluated. This is especially important when working with clinical decision support systems, as predictive uncertainty is key to optimally evaluate and combine multiple sources of information. Here we study discrimination and calibration biases in models trained for automatic detection of malignant dermatological conditions from skin lesion images. Importantly, we show how several typically employed calibration metrics are systematically biased with respect to sample size, and how this can lead to erroneous conclusions if not taken into consideration. This is of particular relevance to fairness studies, where data imbalance results in drastic sample size differences between demographic sub-groups, which could act as confounders.
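The sample-size effect described in the abstract can be illustrated with a minimal simulation, sketched below under assumptions of my own (the paper's exact metrics and estimators may differ): even for a *perfectly calibrated* model, the standard binned Expected Calibration Error (ECE) estimate is inflated for small samples, so a small demographic sub-group can appear miscalibrated purely due to its size.

```python
import numpy as np

rng = np.random.default_rng(0)

def ece(probs, labels, n_bins=10):
    """Binned Expected Calibration Error with equal-width bins."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(probs, bins) - 1, 0, n_bins - 1)
    err = 0.0
    for b in range(n_bins):
        mask = idx == b
        if mask.any():
            conf = probs[mask].mean()          # mean confidence in bin
            acc = labels[mask].mean()          # empirical frequency in bin
            err += mask.sum() / len(probs) * abs(conf - acc)
    return err

def mean_ece(n, reps=200):
    """Average ECE over repeated samples of size n from a perfectly
    calibrated model: labels are drawn exactly at the predicted rates,
    so the true calibration error is zero by construction."""
    vals = []
    for _ in range(reps):
        p = rng.uniform(size=n)
        y = (rng.uniform(size=n) < p).astype(float)
        vals.append(ece(p, y))
    return float(np.mean(vals))

small = mean_ece(50)    # e.g. an under-represented sub-group
large = mean_ece(5000)  # e.g. the majority sub-group
print(small, large)     # the smaller sample shows a much larger apparent ECE
```

Comparing sub-groups of very different sizes with such an estimator therefore confounds genuine calibration bias with pure estimation bias, which is the caveat the abstract raises.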

Keywords

medical imaging; calibration; fairness