An Unsupervised Visual-Only Voice Activity Detection Approach Using Temporal Orofacial Features

Publisher

International Speech Communication Association

Abstract

Detecting the presence or absence of speech is an important step toward building robust speech-based interfaces. While previous studies have made progress on voice activity detection (VAD), the performance of these systems degrades significantly when subjects employ challenging speech modes that deviate from normal acoustic patterns (e.g., whisper speech), or in noisy/adverse conditions. An appealing approach under these conditions is visual voice activity detection (VVAD), which detects speech using features characterizing orofacial activity. This study proposes an unsupervised approach that relies only on visual features and, therefore, is insensitive to vocal style or time-varying background noise. We estimate optical flow variance and geometrical features around the lips, extracting short-time zero crossing rates, short-time variances, and delta features over a small temporal window. These variables are fused using principal component analysis (PCA) to obtain a "combo" feature, which displays a bimodal distribution (speech versus silence). A threshold is automatically determined using the expectation-maximization (EM) algorithm. The approach can easily be transformed into a supervised VVAD, if needed. We evaluate the system on neutral and whisper speech. While speech-based VADs generally fail to detect speech activity in whisper speech, given its important acoustic differences, the proposed VVAD achieves near 80% accuracy in both neutral and whisper speech, highlighting the benefits of the system.
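
The following is a minimal sketch of the pipeline described in the abstract, not the authors' implementation: per-frame visual tracks (e.g., optical flow variance and lip geometry) are summarized with short-time zero-crossing rate, variance, and delta features, fused into a single "combo" feature with PCA, and split into speech/silence modes with an EM-fitted two-component Gaussian mixture. All function names, window sizes, and the choice of frame-level tracks below are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture


def short_time_stats(x, win=15):
    """Short-time zero-crossing rate, variance, and delta of one feature track."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    zcr, var, delta = np.zeros(n), np.zeros(n), np.zeros(n)
    centered = x - x.mean()
    for t in range(n):
        lo, hi = max(0, t - win // 2), min(n, t + win // 2 + 1)
        seg = centered[lo:hi]
        zcr[t] = np.mean(np.abs(np.diff(np.sign(seg))) > 0)  # crossings per frame
        var[t] = seg.var()
        delta[t] = x[min(n - 1, t + 1)] - x[max(0, t - 1)]    # simple delta feature
    return np.column_stack([zcr, var, delta])


def visual_vad(frame_features, win=15):
    """frame_features: (num_frames, num_tracks) array of visual measurements,
    e.g. optical-flow variance and lip width/height per video frame.
    Returns a boolean speech/silence mask per frame."""
    stats = np.hstack([short_time_stats(frame_features[:, k], win)
                       for k in range(frame_features.shape[1])])
    # Fuse the short-time features into a single "combo" feature with PCA.
    combo = PCA(n_components=1).fit_transform(stats).ravel()
    # Fit a two-component Gaussian mixture with EM; the two modes correspond
    # to speech and silence, which provides the decision threshold automatically.
    gmm = GaussianMixture(n_components=2, random_state=0).fit(combo[:, None])
    labels = gmm.predict(combo[:, None])
    speech_label = np.argmax(gmm.means_.ravel())  # assumes speech yields the higher mode
    return labels == speech_label
```

Because the mixture is fit per recording, the speech/silence boundary adapts to the speaker and session without labeled data; replacing the EM step with a classifier trained on labeled frames would turn the same features into a supervised VVAD.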

Keywords

Nonverbal communication, Automatic speech recognition, Speech, Principal components analysis, Expectation-maximization algorithms

Sponsorship

Funded in part by NSF grant #IIS-1217104

Rights

©2015 ISCA