Sensorless Freehand 3D Ultrasound Reconstruction via Deep Contextual Learning

Transrectal ultrasound (US) is the most commonly used imaging modality for guiding prostate biopsy, and its 3D volume provides even richer contextual information. Current methods for 3D volume reconstruction from freehand US scans require external tracking devices to provide the spatial position of every frame. In this paper, we propose a deep contextual learning network (DCL-Net) that can efficiently exploit the image feature relationships between US frames and reconstruct 3D US volumes without any tracking device. The proposed DCL-Net applies 3D convolutions over a US video segment for feature extraction. An embedded self-attention module makes the network focus on the speckle-rich areas for better spatial movement prediction. We also propose a novel case-wise correlation loss to stabilize the training process and improve accuracy. The developed method achieves highly promising results, and experiments with ablation studies demonstrate its superior performance compared against other state-of-the-art methods. The source code of this work is publicly available at https://github.com/DIAL-RPI/FreehandUSRecon.
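The abstract does not spell out the form of the case-wise correlation loss; a common way to realize such a loss is to penalize low Pearson correlation between the predicted and ground-truth motion parameters across the frames of a case. The sketch below illustrates that assumed formulation only; the function name, shapes, and the exact definition are illustrative and not taken from the paper.

```python
import numpy as np

def case_wise_correlation_loss(pred, gt, eps=1e-8):
    """Hypothetical sketch of a case-wise correlation loss.

    pred, gt: arrays of shape (num_frames, num_params), e.g. the
    6-DoF transformation parameters predicted for one US sequence
    and their ground truth. Returns 1 minus the mean Pearson
    correlation over the parameter dimensions, so perfectly
    correlated predictions give a loss near 0.
    """
    # Center each parameter dimension over the frames of the case.
    pred_c = pred - pred.mean(axis=0)
    gt_c = gt - gt.mean(axis=0)
    # Per-dimension Pearson correlation coefficients.
    corr = (pred_c * gt_c).sum(axis=0) / (
        np.sqrt((pred_c ** 2).sum(axis=0))
        * np.sqrt((gt_c ** 2).sum(axis=0))
        + eps
    )
    return 1.0 - corr.mean()
```

In training, a loss of this kind would typically be combined with a frame-wise regression loss (e.g. MSE on the motion parameters), the correlation term encouraging the predicted motion trend over a whole case to match the ground truth rather than only matching frame by frame.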

Reference

H. Guo, S. Xu, B.J. Wood, P. Yan, "Sensorless Freehand 3D Ultrasound Reconstruction via Deep Contextual Learning," International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), Lima, Peru, Oct. 4-8, 2020.