Finger-Tip Recognition for 3D Mixed Reality Applications
Kumar, Hiranya Garbha
Developments in depth sensors such as the Microsoft Kinect and Leap Motion have led to significant improvements in systems for Human-Computer Interaction (HCI). One of the areas benefiting from this advancement is hand gesture recognition. Gestures are among the most natural ways for humans to communicate, so a system that accurately recognizes hand gestures can be considered a key milestone in HCI. Among the several ways to recognize hand gestures, detection and tracking of fingertips is one of the most common approaches. In this thesis, we develop and implement an algorithm that detects and recognizes fingertips and generates framework-compatible skeletal information with reasonable accuracy in real time, using the Microsoft Kinect V2, the MS Kinect SDK, and the HUNA framework. The skeletal information is then mapped to a 3D space in Unity and can be used in various 3D mixed-reality applications. Because the algorithm is based on depth data, it is inherently tolerant to changes in illumination and to background clutter, conditions with which a significant proportion of other approaches struggle. The algorithm outperforms other contour-based approaches to the problem and provides reliable fingertip detection with a very low false-positive rate.
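The thesis details the full pipeline; as a rough illustration of the contour-based idea mentioned above, the sketch below flags sharp points along a hand outline as fingertip candidates by measuring the angle formed with neighboring contour points. The function name, the neighbor offset `k`, and the angle threshold are illustrative assumptions, not the author's implementation, and a real system would also distinguish convex peaks (fingertips) from concave valleys (between fingers).

```python
import math

def fingertip_candidates(contour, k=2, angle_thresh=40.0):
    """Flag contour points whose local angle is sharp (fingertip-like).

    contour: list of (x, y) points ordered along the hand outline,
             e.g. extracted from a depth-thresholded hand mask.
    k: neighbor offset used to form the two vectors at each point.
    angle_thresh: maximum angle in degrees to count as a candidate.
    """
    tips = []
    n = len(contour)
    for i in range(n):
        px, py = contour[i]
        ax, ay = contour[(i - k) % n]   # neighbor k steps behind
        bx, by = contour[(i + k) % n]   # neighbor k steps ahead
        v1 = (ax - px, ay - py)
        v2 = (bx - px, by - py)
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        norm = math.hypot(*v1) * math.hypot(*v2)
        if norm == 0:
            continue
        # Clamp to guard against floating-point drift outside [-1, 1].
        angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
        if angle < angle_thresh:
            tips.append((px, py))
    return tips

# Toy outline with one sharp spike at (2, 8) standing in for a finger.
outline = [(0, 0), (1, 0), (2, 0), (3, 0), (4, 0),
           (4, 1), (3, 1), (2, 8), (1, 1), (0, 1)]
print(fingertip_candidates(outline))  # → [(2, 8)]
```

On real depth frames the contour would come from segmenting the hand by depth range and tracing its boundary; the angle test above is the simplest form of the curvature criterion that contour-based fingertip detectors build on.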