Fei, Baowei

Permanent URI for this collection: https://hdl.handle.net/10735.1/7074

Professor Baowei Fei holds the Cecil H. and Ida Green Chair in Systems Biology Science. He is also the Principal Investigator of the Quantitative BioImaging Laboratory. Fei's work "has transformed medical imaging and intervention for cancer care." His research interests include:

  • Biomedical and Digital Imaging
  • Image-Guided Interventions, including surgery, therapy, biopsy, and drug delivery
  • Machine Learning and Artificial Intelligence
  • Multimodality Imaging, including hyperspectral, MRI, PET, CT, and ultrasound
  • Virtual, Augmented and Mixed Realities for biomedical and clinical applications
  • Quantitative Imaging
  • Translational Imaging
  • Cancer Research, particularly of the prostate, head and neck, breast, pancreas, and brain
  • Cardiovascular Diseases

ORCID page

Recent Submissions

Now showing 1 - 4 of 4
  • Item
    Deep 3D Convolutional Neural Networks for Fast Super-Resolution Ultrasound Imaging
    (SPIE, 2019-03-15) Brown, Katherine; Dormer, James; Fei, Baowei; Hoyt, Kenneth
    Super-resolution ultrasound imaging (SR-US) is a new technique that breaks the diffraction limit and can help visualize microvascularity at a resolution of tens of microns. However, the image processing methods for spatiotemporal filtering needed in SR-US for microvascular delineation, such as singular value decomposition (SVD), are computationally burdensome and must be performed off-line. The goal of this study was to evaluate a novel and fast method for spatiotemporal filtering that segments the microbubble (MB) contrast agent from the tissue signal with a trained 3D convolutional neural network (3DCNN). In vitro data were collected using a programmable ultrasound (US) imaging system (Vantage 256, Verasonics Inc, Kirkland, WA) equipped with an L11-4v linear array transducer and obtained from a tissue-mimicking vascular flow phantom at flow rates representative of microvascular conditions. SVD was used to detect MBs and label the data for training. Network performance was validated with a leave-one-out approach. The 3DCNN demonstrated 22% higher sensitivity in MB detection than SVD on in vitro data. Further, in vivo 3DCNN results from a cancer-bearing murine model revealed a high level of detail in the SR-US image, demonstrating the potential for transfer learning from a neural network trained with in vitro data. The preliminary performance of segmentation with the 3DCNN was encouraging for real-time SR-US imaging, with computation time as low as 5 ms per frame. (A minimal sketch of the SVD filtering step that the network replaces appears after this list.)
  • Item
    Deep Learning-Based Three-Dimensional Segmentation of the Prostate on Computed Tomography Images
    (SPIE, 2019) Shahedi, Maysam; Halicek, Martin; Dormer, James D.; Schuster, D. M.; Fei, Baowei
    Segmentation of the prostate in computed tomography (CT) is used for planning and guidance of prostate treatment procedures. However, due to the low soft-tissue contrast of the images, manual delineation of the prostate on CT is a time-consuming task with high interobserver variability. We developed an automatic, three-dimensional (3-D) prostate segmentation algorithm based on a customized U-Net architecture. Our dataset contained 92 3-D abdominal CT scans from 92 patients, of which 69 images were used for training and validation and the remainder for testing the convolutional neural network model. Compared to manual segmentation by an expert radiologist, our method achieved 83% ± 6% for Dice similarity coefficient (DSC), 2.3 ± 0.6 mm for mean absolute distance (MAD), and 1.9 ± 4.0 cm³ for signed volume difference (ΔV). The average interexpert difference recorded on the same test dataset was 92% (DSC), 1.1 mm (MAD), and 2.1 cm³ (ΔV). The proposed algorithm is fast, accurate, and robust for 3-D segmentation of the prostate on CT images. (A sketch of the DSC and ΔV computations appears after this list.) ©2019 Society of Photo-Optical Instrumentation Engineers (SPIE).
  • Item
    Deep Learning-Based Framework for In Vivo Identification of Glioblastoma Tumor Using Hyperspectral Images of Human Brain
    (MDPI AG) Fabelo, Himar; Halicek, Martin; Ortega, S.; Shahedi, Maysam; Szolna, A.; Piñeiro, J. F.; Sosa, C.; O’Shanahan, A. J.; Bisshopp, S.; Espino, C.; Márquez, M.; Hernández, M.; Carrera, D.; Morera, J.; Callico, G. M.; Sarmiento, R.; Fei, Baowei; 0000-0002-9794-490X (Fabelo, H); 0000-0002-9123-9484 (Fei, B)
    The main goal of brain cancer surgery is to perform an accurate resection of the tumor while preserving as much normal brain tissue as possible for the patient. The development of a non-contact, label-free method to provide reliable support for tumor resection in real time during neurosurgical procedures is a current clinical need. Hyperspectral imaging is a non-contact, non-ionizing, and label-free imaging modality that can assist surgeons during this challenging task without using any contrast agent. In this work, we present a deep learning-based framework for processing hyperspectral images of in vivo human brain tissue. The proposed framework was evaluated on our human image database, which includes 26 in vivo hyperspectral cubes from 16 different patients, in which 258,810 pixels were labeled. The proposed framework is able to generate a thematic map in which the parenchymal area of the brain is delineated and the location of the tumor is identified, providing guidance to the operating surgeon for a successful and precise tumor resection. The deep learning pipeline achieves an overall accuracy of 80% for multiclass classification, improving on the results obtained with traditional support vector machine (SVM)-based approaches. In addition, a visualization aid is presented in which the final thematic map can be adjusted by the operating surgeon to find the optimal classification threshold for the situation at hand during the surgical procedure. (A sketch of threshold-adjustable per-pixel classification appears after this list.) © 2019 by the authors. Licensee MDPI, Basel, Switzerland.
  • Item
    Optical Biopsy of Head and Neck Cancer Using Hyperspectral Imaging and Convolutional Neural Networks
    (SPIE) Halicek, Martin; Little, J. V.; Wang, X.; Chen, A. Y.; Fei, Baowei; 0000-0002-9123-9484 (Fei, B)
    For patients undergoing surgical resection of squamous cell carcinoma (SCCa), cancer-free surgical margins are essential for good prognosis. We developed a method that uses hyperspectral imaging (HSI), a noncontact optical imaging modality, and convolutional neural networks (CNNs) to perform an optical biopsy of ex vivo surgical gross-tissue specimens collected from 21 patients undergoing surgical cancer resection. Using a cross-validation paradigm with data from different patients, the CNN can distinguish SCCa from normal aerodigestive tract tissues with an area under the receiver operating characteristic curve (AUC) of 0.82. Additionally, normal tissue from the upper aerodigestive tract can be subclassified into squamous epithelium, muscle, and gland with an average AUC of 0.94. After separately training on thyroid tissue, the CNN can differentiate between thyroid carcinoma and normal thyroid with an AUC of 0.95, 92% accuracy, 92% sensitivity, and 92% specificity. Moreover, the CNN can discriminate medullary thyroid carcinoma from benign multinodular goiter (MNG) with an AUC of 0.93. Classical-type papillary thyroid carcinoma is differentiated from MNG with an AUC of 0.91. Our preliminary results demonstrate that an HSI-based optical biopsy method using CNNs can provide multicategory diagnostic information for normal and cancerous head-and-neck tissue; more patient data are needed to fully investigate the potential and reliability of the proposed technique. (A sketch of these metric computations appears after this list.) ©2019 The Authors
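
The first item above replaces SVD-based spatiotemporal filtering with a 3DCNN. For context, here is a minimal NumPy sketch of the conventional SVD clutter filter it benchmarks against: frames are stacked into a space-by-time Casorati matrix and the most coherent (tissue) components are discarded. The function name and the component counts `n_tissue` and `n_noise` are illustrative assumptions, not values from the paper.

```python
import numpy as np

def svd_mb_filter(frames, n_tissue=5, n_noise=0):
    """Separate microbubble (MB) signal from tissue by SVD clutter filtering.

    frames   : (H, W, T) stack of beamformed frames.
    n_tissue : leading singular components discarded as coherent tissue.
    n_noise  : trailing components optionally discarded as noise.
    """
    H, W, T = frames.shape
    casorati = frames.reshape(H * W, T)            # space x time matrix
    U, s, Vt = np.linalg.svd(casorati, full_matrices=False)
    keep = np.ones(len(s), dtype=bool)
    keep[:n_tissue] = False                        # drop tissue clutter
    if n_noise:
        keep[len(s) - n_noise:] = False            # drop noise floor
    mb = (U[:, keep] * s[keep]) @ Vt[keep]         # rebuild MB-only signal
    return mb.reshape(H, W, T)
```

The cost of the SVD on an (H·W) × T matrix grows quickly with frame size and ensemble length, which is the off-line bottleneck the trained 3DCNN is meant to avoid.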
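The second item evaluates segmentation with DSC, MAD, and signed volume difference. Below is a minimal sketch of the DSC and ΔV computations on binary label volumes, assuming integer voxel masks and a known per-voxel volume in mm³ (the paper's voxel spacing is not given here).

```python
import numpy as np

def dice(seg, ref):
    """Dice similarity coefficient between two binary volumes."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    denom = seg.sum() + ref.sum()
    return 2.0 * np.logical_and(seg, ref).sum() / denom if denom else 1.0

def signed_volume_diff_cm3(seg, ref, voxel_mm3):
    """Signed volume difference; positive when seg over-segments the reference."""
    return (int(np.count_nonzero(seg)) - int(np.count_nonzero(ref))) * voxel_mm3 / 1000.0
```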
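The third item's pipeline renders a per-pixel thematic map whose classification threshold the surgeon can adjust intraoperatively. As a stand-in for the deep pipeline, the sketch below uses the SVM baseline mentioned in the abstract; the function name, the default threshold, and the use of -1 for "unclassified" pixels are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def thematic_map(cube, train_spectra, train_labels, threshold=0.5):
    """Classify every pixel of a hyperspectral cube (H, W, B).

    train_spectra : (N, B) labeled spectra; train_labels : (N,) integer class ids.
    Pixels whose top-class probability falls below `threshold` are set to -1,
    so the map can be tightened or loosened interactively.
    """
    H, W, B = cube.shape
    clf = SVC(probability=True).fit(train_spectra, train_labels)
    proba = clf.predict_proba(cube.reshape(-1, B))   # (H*W, n_classes)
    labels = clf.classes_[proba.argmax(axis=1)]
    labels[proba.max(axis=1) < threshold] = -1       # leave uncertain pixels blank
    return labels.reshape(H, W)
```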
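The fourth item reports AUC, sensitivity, and specificity for several binary tasks. For reference, all three can be computed directly from classifier outputs; the NumPy sketch below uses the pairwise (Mann-Whitney) formulation of AUC, which equals the trapezoidal area under the ROC curve.

```python
import numpy as np

def roc_auc(scores, labels):
    """AUC as the fraction of (positive, negative) pairs ranked correctly."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    diff = pos[:, None] - neg[None, :]               # all pairwise comparisons
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

def sensitivity_specificity(pred, labels):
    """Sensitivity and specificity of hard (0/1) predictions."""
    sens = np.sum((pred == 1) & (labels == 1)) / np.sum(labels == 1)
    spec = np.sum((pred == 0) & (labels == 0)) / np.sum(labels == 0)
    return sens, spec
```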

Works in Treasures @ UT Dallas are made available exclusively for educational purposes such as research or instruction. Literary rights, including copyright for published works held by the creator(s) or their heirs, or other third parties may apply. All rights are reserved unless otherwise indicated by the copyright owner(s).