Fact-Checking of AI-Generated Reports

With advances in generative artificial intelligence (AI), it is now possible to produce realistic-looking automated reports for preliminary reads of radiology images. However, such models are also well known to hallucinate, leading to false findings in the generated reports. In this paper, we propose a new method for fact-checking AI-generated reports using their associated images. Specifically, the developed examiner differentiates real and fake sentences in reports by learning the association between an image and sentences describing real or potentially fake findings. To train such an examiner, we first created a new dataset of fake reports by perturbing the findings in the original ground-truth radiology reports associated with images. Text encodings of real and fake sentences drawn from these reports are then paired with image encodings to learn the mapping to real/fake labels. We then demonstrate the examiner on verifying automatically generated reports.
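The core idea of pairing image and sentence encodings and classifying the pair as real or fake can be illustrated with a minimal sketch. The snippet below is not the paper's architecture: it stands in synthetic random vectors for the image and text encoders (a "real" sentence encoding is correlated with its image encoding, a "fake" one is independent), uses an elementwise-product interaction feature, and trains a simple logistic-regression examiner by gradient descent. All dimensions, names, and the training setup are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 16  # hypothetical shared encoding dimension
N = 400   # synthetic training pairs per class

# Synthetic stand-ins for encoder outputs: a "real" sentence encoding is
# correlated with its image encoding; a "fake" one is independent of it.
imgs = rng.normal(size=(N, DIM))
real_txt = imgs + 0.3 * rng.normal(size=(N, DIM))
fake_txt = rng.normal(size=(N, DIM))

# Pair each image encoding with a sentence encoding via an elementwise
# product, which exposes image-text agreement to a linear classifier.
X = np.concatenate([imgs * real_txt, imgs * fake_txt])
y = np.concatenate([np.ones(N), np.zeros(N)])  # 1 = real, 0 = fake

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Logistic-regression "examiner" trained with plain gradient descent.
W = np.zeros(DIM)
b = 0.0
lr = 0.1
for _ in range(500):
    p = sigmoid(X @ W + b)
    W -= lr * (X.T @ (p - y)) / len(y)
    b -= lr * np.mean(p - y)

# Training accuracy of the toy examiner on its synthetic pairs.
acc = np.mean((sigmoid(X @ W + b) > 0.5) == y)
```

In practice the encodings would come from pretrained image and text encoders, and the classifier would typically be a learned neural head rather than logistic regression; the sketch only shows the pairing-and-labeling structure described above.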

Reference

R. Mahmood, G. Wang, M. K. Kalra, and P. Yan, "Fact-Checking of AI-Generated Reports," Proceedings of Machine Learning in Medical Imaging (MLMI), Vancouver, Canada, 2023.