Improving the Robustness of Deep Learning Models in Predicting Hematoma Expansion from Admission Head CT [ARTIFICIAL INTELLIGENCE]

BACKGROUND AND PURPOSE:
Robustness against input data perturbations is essential for deploying deep learning models in clinical practice. Adversarial attacks involve subtle, voxel-level manipulations of scans that increase a deep learning model’s prediction errors. Testing a deep learning model’s performance on adversarial images provides a measure of its robustness, and including adversarial images in the training set can improve the model’s robustness. In this study, we examined ad…
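To make the idea of a voxel-level adversarial perturbation and adversarial training concrete, below is a minimal PyTorch sketch using the Fast Gradient Sign Method (FGSM). This is an illustration only: the attack, model, loss, and epsilon value are assumptions for the sketch, not the specific methods reported in the article.

```python
# Minimal FGSM-style adversarial perturbation and adversarial training step.
# All names (fgsm_perturb, adversarial_training_step, epsilon) are hypothetical
# and illustrative, not drawn from the article.
import torch
import torch.nn as nn

def fgsm_perturb(model, images, labels, epsilon=0.01):
    """Create adversarial images by nudging each voxel along the sign of the
    loss gradient with respect to the input (Fast Gradient Sign Method)."""
    images = images.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(images), labels)
    loss.backward()
    # Subtle, voxel-level manipulation bounded in magnitude by epsilon
    adv_images = images + epsilon * images.grad.sign()
    return adv_images.detach()

def adversarial_training_step(model, optimizer, images, labels, epsilon=0.01):
    """One training step that mixes clean and adversarial examples,
    which is one common way to improve robustness."""
    adv_images = fgsm_perturb(model, images, labels, epsilon)
    optimizer.zero_grad()
    loss = (nn.functional.cross_entropy(model(images), labels)
            + nn.functional.cross_entropy(model(adv_images), labels)) / 2
    loss.backward()
    optimizer.step()
    return loss.item()

# Example usage (hypothetical 3D CNN classifier and CT data loader):
#   for scans, labels in loader:
#       adversarial_training_step(model, optimizer, scans, labels)
```

Evaluating the trained model on images produced by fgsm_perturb gives one measure of robustness; stronger iterative attacks exist, and which attack the study used is not stated in this excerpt.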

Read the full article on ajnr.org