Deep Learning-Based Three-Dimensional Segmentation of the Prostate on Computed Tomography Images

Abstract

Segmentation of the prostate in computed tomography (CT) is used for planning and guidance of prostate treatment procedures. However, due to the low soft-tissue contrast of the images, manual delineation of the prostate on CT is a time-consuming task with high interobserver variability. We developed an automatic, three-dimensional (3-D) prostate segmentation algorithm based on a customized U-Net architecture. Our dataset contained 92 3-D abdominal CT scans from 92 patients, of which 69 images were used for training and validation and the remaining 23 for testing the convolutional neural network model. Compared to manual segmentation by an expert radiologist, our method achieved 83% ± 6% for the Dice similarity coefficient (DSC), 2.3 ± 0.6 mm for mean absolute distance (MAD), and 1.9 ± 4.0 cm³ for signed volume difference (ΔV). The average recorded interexpert difference measured on the same test dataset was 92% (DSC), 1.1 mm (MAD), and 2.1 cm³ (ΔV). The proposed algorithm is fast, accurate, and robust for 3-D segmentation of the prostate on CT images. ©2019 Society of Photo-Optical Instrumentation Engineers (SPIE).
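
As a point of reference for the evaluation metrics reported in the abstract, the sketch below shows how the Dice similarity coefficient, signed volume difference, and a symmetric mean absolute surface distance are commonly computed from binary 3-D masks. This is an illustrative NumPy/SciPy implementation under assumed conventions (binary masks on the same voxel grid, voxel spacing given in mm), not the authors' evaluation code; the function names are hypothetical.

import numpy as np
from scipy import ndimage

def dice_coefficient(pred, ref):
    # Dice similarity coefficient (DSC) between two binary 3-D masks.
    intersection = np.logical_and(pred, ref).sum()
    return 2.0 * intersection / (pred.sum() + ref.sum())

def signed_volume_difference(pred, ref, voxel_volume_mm3):
    # Signed volume difference (prediction minus reference), reported in cm^3.
    return (pred.sum() - ref.sum()) * voxel_volume_mm3 / 1000.0

def _surface(mask):
    # Boundary voxels: foreground voxels with at least one background neighbour.
    return np.logical_and(mask, np.logical_not(ndimage.binary_erosion(mask)))

def mean_absolute_distance(pred, ref, spacing_mm):
    # Symmetric mean absolute surface distance (MAD) in mm between two masks.
    pred_surf, ref_surf = _surface(pred), _surface(ref)
    # Euclidean distance from every voxel to the nearest surface voxel of the other mask.
    dist_to_ref = ndimage.distance_transform_edt(np.logical_not(ref_surf), sampling=spacing_mm)
    dist_to_pred = ndimage.distance_transform_edt(np.logical_not(pred_surf), sampling=spacing_mm)
    return np.concatenate([dist_to_ref[pred_surf], dist_to_pred[ref_surf]]).mean()

For example, with 1 mm isotropic voxels one would pass voxel_volume_mm3 = 1.0 and spacing_mm = (1.0, 1.0, 1.0).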

Description

Due to copyright restrictions and/or publisher's policy, full-text access from Treasures at UT Dallas is limited to current UTD affiliates (use the provided Link to Article).

Keywords

Tomography, Neural networks (Computer science), Convolutions (Mathematics), Prostate, Urology, Three-dimensional imaging

Sponsorship

U.S. National Institutes of Health (NIH) Grants (R21CA176684, R01CA156775, R01CA204254, and R01HL140325).

Rights

©2019 Society of Photo-Optical Instrumentation Engineers
