Browsing by Author "Shahedi, Maysam"
Now showing 1 - 2 of 2
Item
Deep Learning-Based Framework for In Vivo Identification of Glioblastoma Tumor Using Hyperspectral Images of Human Brain (MDPI AG, 2019)
Fabelo, Himar; Halicek, Martin; Ortega, S.; Shahedi, Maysam; Szolna, A.; Piñeiro, J. F.; Sosa, C.; O’Shanahan, A. J.; Bisshopp, S.; Espino, C.; Márquez, M.; Hernández, M.; Carrera, D.; Morera, J.; Callico, G. M.; Sarmiento, R.; Fei, Baowei
ORCID: 0000-0002-9794-490X (Fabelo, H.); 0000-0002-9123-9484 (Fei, B.)

The main goal of brain cancer surgery is to perform an accurate resection of the tumor while preserving as much normal brain tissue as possible for the patient. The development of a non-contact, label-free method that provides reliable support for tumor resection in real time during neurosurgical procedures is a current clinical need. Hyperspectral imaging is a non-contact, non-ionizing, and label-free imaging modality that can assist surgeons during this challenging task without the use of any contrast agent. In this work, we present a deep learning-based framework for processing hyperspectral images of in vivo human brain tissue. The proposed framework was evaluated on our human image database, which includes 26 in vivo hyperspectral cubes from 16 different patients, among which 258,810 pixels were labeled. The framework generates a thematic map in which the parenchymal area of the brain is delineated and the location of the tumor is identified, providing guidance to the operating surgeon for a successful and precise tumor resection. The deep learning pipeline achieves an overall accuracy of 80% for multiclass classification, improving on the results obtained with traditional support vector machine (SVM)-based approaches. In addition, a visualization aid is presented in which the final thematic map can be adjusted by the operating surgeon to find the optimal classification threshold for the current situation during the surgical procedure. © 2019 by the authors. Licensee MDPI, Basel, Switzerland.
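The adjustable-threshold idea in the abstract above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the class count, the tumor class index, and the fallback rule are all assumptions chosen for the example. A pixel is labeled tumor only when its predicted tumor probability clears a surgeon-adjustable threshold; otherwise it falls back to the most likely non-tumor class.

```python
import numpy as np

# Hypothetical per-pixel class probabilities for a 4-class thematic map
# (e.g. normal tissue, tumor, blood vessel, background); the shape and
# class order are illustrative, not taken from the paper.
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(4), size=(8, 8))  # H x W x classes

TUMOR = 1  # illustrative tumor class index

def thematic_map(probs, tumor_threshold=0.5):
    """Argmax labeling, but mark a pixel as tumor only when its tumor
    probability exceeds an adjustable threshold."""
    labels = probs.argmax(axis=-1)
    low_conf = (labels == TUMOR) & (probs[..., TUMOR] < tumor_threshold)
    # For low-confidence tumor pixels, fall back to the best non-tumor class
    fallback = probs.copy()
    fallback[..., TUMOR] = -1.0
    labels[low_conf] = fallback.argmax(axis=-1)[low_conf]
    return labels

strict = thematic_map(probs, tumor_threshold=0.9)
lenient = thematic_map(probs, tumor_threshold=0.3)
# Raising the threshold can only shrink the region labeled as tumor
assert (strict == TUMOR).sum() <= (lenient == TUMOR).sum()
```

Exposing the threshold as the single tunable parameter matches the described workflow: the network runs once, and the surgeon tunes only the final labeling step interactively.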
Item
Deep Learning-Based Three-Dimensional Segmentation of the Prostate on Computed Tomography Images (SPIE, 2019)
Shahedi, Maysam; Halicek, Martin; Dormer, James D.; Schuster, D. M.; Fei, Baowei

Segmentation of the prostate in computed tomography (CT) images is used for planning and guiding prostate treatment procedures. However, because of the low soft-tissue contrast of the images, manual delineation of the prostate on CT is a time-consuming task with high interobserver variability. We developed an automatic, three-dimensional (3-D) prostate segmentation algorithm based on a customized U-Net architecture. Our dataset contained 92 3-D abdominal CT scans from 92 patients, of which 69 images were used for training and validation and the remainder for testing the convolutional neural network model. Compared with manual segmentation by an expert radiologist, our method achieved a Dice similarity coefficient (DSC) of 83% ± 6%, a mean absolute distance (MAD) of 2.3 ± 0.6 mm, and a signed volume difference (ΔV) of 1.9 ± 4.0 cm³. The average recorded interexpert difference measured on the same test dataset was 92% (DSC), 1.1 mm (MAD), and 2.1 cm³ (ΔV). The proposed algorithm is fast, accurate, and robust for 3-D segmentation of the prostate on CT images. © 2019 Society of Photo-Optical Instrumentation Engineers (SPIE).
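The evaluation metrics reported in the abstract above (DSC and ΔV) have standard definitions that can be computed directly from binary masks. The sketch below is illustrative only: the toy masks and voxel volume are invented for the example and are unrelated to the paper's data.

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    DSC = 2|A ∩ B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def signed_volume_difference(a, b, voxel_volume_cm3):
    """ΔV = (segmented volume of A) - (segmented volume of B), in cm³."""
    return float(a.sum() - b.sum()) * voxel_volume_cm3

# Toy 3-D masks standing in for an automatic and a manual segmentation
auto = np.zeros((10, 10, 10), dtype=bool)
auto[2:8, 2:8, 2:8] = True
manual = np.zeros((10, 10, 10), dtype=bool)
manual[3:9, 2:8, 2:8] = True  # shifted by one slice along the first axis

print(round(dice(auto, manual), 3))  # → 0.833
```

Mean absolute distance (MAD) is the remaining reported metric; it averages surface-to-surface distances between the two contours and needs a surface extraction step, so it is omitted from this minimal sketch.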