
Cross-modal Attention for MRI and Ultrasound Volume Registration

In the past few years, convolutional neural networks (CNNs) have proven powerful in extracting image features crucial for medical image registration. However, challenging applications and recent advances in computer vision suggest that CNNs are limited in their ability to understand the spatial correspondence between features, which is at the core of image registration. The issue is further exacerbated in multi-modal image registration, where the appearances of the input images can differ significantly. This paper presents a novel cross-modal attention mechanism for correlating features extracted from the multi-modal input images and mapping such correlation to the image registration transformation. To efficiently train the developed network, a contrastive learning-based pre-training method is also proposed to help the network extract high-level features across the input modalities for the subsequent cross-modal attention learning. We validated the proposed method on transrectal ultrasound (TRUS) to magnetic resonance (MR) registration, a clinically important procedure that benefits prostate cancer biopsy. Our experimental results demonstrate that for MR-TRUS registration, a deep neural network embedded with the cross-modal attention block outperforms other advanced CNN-based networks that are ten times its size. We also incorporated visualization techniques to improve the interpretability of our network, which helps bring insights into deep learning-based image registration methods. The source code of our work is available at https://github.com/DIAL-RPI/Attention-Reg.
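To illustrate the general idea of correlating features across modalities, the following is a minimal sketch of a cross-modal attention block in PyTorch. It assumes 3D feature maps produced by separate MR and TRUS encoders; the layer names, channel sizes, and exact attention formulation are illustrative and are not taken from the paper or its released code.

```python
# Minimal cross-modal attention sketch (illustrative, not the paper's implementation).
import torch
import torch.nn as nn


class CrossModalAttention(nn.Module):
    """Correlates a 'primary' modality's features with a 'cross' modality's features.

    Each spatial location of the primary feature map attends to all locations of the
    cross-modality feature map, so the output encodes where features in one image
    correspond to features in the other.
    """

    def __init__(self, channels: int, key_dim: int = 32):
        super().__init__()
        self.query = nn.Conv3d(channels, key_dim, kernel_size=1)
        self.key = nn.Conv3d(channels, key_dim, kernel_size=1)
        self.value = nn.Conv3d(channels, channels, kernel_size=1)
        self.scale = key_dim ** -0.5

    def forward(self, primary: torch.Tensor, cross: torch.Tensor) -> torch.Tensor:
        b, c, d, h, w = primary.shape
        q = self.query(primary).flatten(2).transpose(1, 2)   # (B, N, key_dim)
        k = self.key(cross).flatten(2)                        # (B, key_dim, N)
        v = self.value(cross).flatten(2).transpose(1, 2)      # (B, N, C)
        attn = torch.softmax(q @ k * self.scale, dim=-1)      # (B, N, N) correlation map
        out = (attn @ v).transpose(1, 2).reshape(b, c, d, h, w)
        return out + primary  # residual connection keeps the original features


if __name__ == "__main__":
    mr_feat = torch.randn(1, 64, 8, 8, 8)    # hypothetical features from an MR encoder
    trus_feat = torch.randn(1, 64, 8, 8, 8)  # hypothetical features from a TRUS encoder
    block = CrossModalAttention(channels=64)
    fused = block(primary=mr_feat, cross=trus_feat)
    print(fused.shape)  # torch.Size([1, 64, 8, 8, 8])
```

In a registration network of this kind, the fused feature map would typically be passed to a regression head that predicts the transformation parameters; for the authors' actual architecture and training details, see the paper and the linked repository.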

Reference

X. Song, H. Guo, X. Xu, H. Chao, S. Xu, B. Turkbey, B.J. Wood, G. Wang, P. Yan, "Cross-modal Attention for MRI and Ultrasound Volume Registration," Medical Image Analysis (MEDIA), vol. 82, 102612, November 2022.

Bibtex

@article{SONG2022102612,
  title = {Cross-modal attention for multi-modal image registration},
  journal = {Medical Image Analysis},
  volume = {82},
  pages = {102612},
  year = {2022},
  issn = {1361-8415},
  doi = {10.1016/j.media.2022.102612},
  url = {https://www.sciencedirect.com/science/article/pii/S1361841522002407},
  author = {Xinrui Song and Hanqing Chao and Xuanang Xu and Hengtao Guo and Sheng Xu and Baris Turkbey and Bradford J. Wood and Thomas Sanford and Ge Wang and Pingkun Yan},
}