
    Driver Behavior and Environment Interaction Modeling for Intelligent Vehicle Advancements

    View/Open: Dissertation (5.527 MB)
    Date: 2018-05
    Author: Zheng, Yang
    Abstract
    With continued progress in artificial intelligence, vehicle technologies have advanced significantly from human-controlled driving toward fully automated driving. During this transition, an intelligent vehicle should be able to understand both the driver's perception of the environment and the driver's control of the vehicle, and provide human-like interaction with the driver. To understand the complicated driving task, which incorporates interaction among the driver, the vehicle, and the environment, naturalistic driving studies and autonomous driving perception experiments are necessary to capture in-vehicle and out-of-vehicle signals, process their dynamics, and migrate the driver's decision-making into the vehicle. This dissertation focuses on intelligent vehicle advancements, including driver behavior analysis, environment perception, and advanced human-machine interfaces. First, using the UTDrive naturalistic driving corpus, the driver's lane-change events are detected from vehicle dynamic signals, achieving over 80% accuracy with CAN signals alone; human factors affecting lane-change detection are also analyzed. Second, a high-definition road map corpus is leveraged to retrieve driving-environment attributes and to provide road prior knowledge for drivable-space segmentation in images. Combining environment attributes with vehicle dynamic signals improves lane-change recognition accuracy from 82.22%-88.46% to 92.50%-96.67%. The road prior mask generated from the map data is shown to be an additional source to fuse with vision/laser sensors for autonomous-driving road perception, and it also supports automatic annotation and the compensation of virtual street views. Next, the vehicle-dynamics sensing functionality is migrated to a mobile platform, Mobile-UTDrive, which allows a smartphone to be freely positioned in the vehicle. As an application, the smartphone-collected signals are employed for unsupervised driving-performance assessment, giving the driver an objective rating score. Finally, a voice-based interface between the driver and the vehicle is simulated, and natural language processing tasks are investigated in the design of a navigation dialogue system. Intent detection (i.e., classifying whether a sentence is navigation-related or not) achieves 98.83% accuracy, and semantic parsing (i.e., extracting useful context information) achieves 99.60%. Taken collectively, these advancements contribute to improved driver-vehicle interaction modeling and improved safety, easing the transition from human-controlled to fully automated smart vehicles.
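    The intent-detection task described above is a binary text classification problem: decide whether an utterance is navigation-related. The abstract does not specify which model the dissertation uses, so the sketch below is only a minimal bag-of-words Naive Bayes illustration; the example sentences and the `nav`/`other` labels are invented for illustration, not drawn from the UTDrive corpus.

    ```python
    import math
    from collections import Counter, defaultdict

    # Hypothetical toy training data (not from the dissertation's corpus).
    TRAIN = [
        ("navigate to the nearest gas station", "nav"),
        ("take me to downtown dallas", "nav"),
        ("find a route to the airport", "nav"),
        ("how far is the next exit", "nav"),
        ("play some jazz music", "other"),
        ("call my mother", "other"),
        ("what is the weather today", "other"),
        ("turn up the volume", "other"),
    ]

    def train(data):
        """Count class frequencies and per-class word frequencies."""
        class_counts = Counter()
        word_counts = defaultdict(Counter)
        vocab = set()
        for text, label in data:
            class_counts[label] += 1
            for w in text.split():
                word_counts[label][w] += 1
                vocab.add(w)
        return class_counts, word_counts, vocab

    def predict(text, class_counts, word_counts, vocab):
        """Pick the class with the highest log posterior under Naive Bayes."""
        total = sum(class_counts.values())
        best, best_lp = None, float("-inf")
        for label in class_counts:
            lp = math.log(class_counts[label] / total)  # log prior
            denom = sum(word_counts[label].values()) + len(vocab)
            for w in text.split():
                # Laplace (add-one) smoothing avoids zero probabilities
                # for words unseen in a class.
                lp += math.log((word_counts[label][w] + 1) / denom)
            if lp > best_lp:
                best, best_lp = label, lp
        return best

    model = train(TRAIN)
    print(predict("navigate me to the airport", *model))  # → nav
    ```

    A production system would replace the toy corpus with labeled in-vehicle utterances and likely a stronger model, but the pipeline shape (tokenize, count, score per class) is the same.
    
    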
    URI
    http://hdl.handle.net/10735.1/5849
    Collections
    • UTD Theses and Dissertations

    DSpace software copyright © 2002-2016  DuraSpace
    Contact Us | Send Feedback
    Theme by Atmire NV