Lingual Speech Motor Control Assessed by a Novel Visuomotor Tracking Paradigm
Abstract
Like other types of human motor control, speech production is thought to be accomplished
by receiving sensory feedback and continually refining predictive
feedforward models to achieve the desired articulatory movement. Visuomotor tracking (VMT)
has been an influential paradigm used to test this motor control theory. VMT tasks examine
articulatory movement by requiring participants to follow external signals visually presented on a
screen. By varying the predictability of the signal, information about processes involved in
speech motor planning and execution can be gained. Previous studies of healthy adults have
suggested that when tracking predictable frequencies, an internal model of the target movement
is formed to guide accurate movement. When the signal is unpredictable, no model can be
formed, and feedback information is used to detect errors and aid in tracking. Research from this
line of work has also suggested that apraxia of speech, a speech motor disorder, stems from a deficit in feedforward processing.
Speech VMT studies have focused on the lips and jaw (external articulators); therefore, little is
known about the tracking capabilities of the tongue, the primary articulator for speech. Although
tongue motor control may simply resemble that of other articulators, the tongue's unique biomechanics (it is a muscular hydrostat) and its braced position during speech suggest that it may not share the same control properties. In the present study, tongue motor
control was assessed using a novel VMT paradigm based on an electromagnetic articulography
system (Opti-Speech). In a first experiment, ten healthy young adults (mean age = 28.8 years)
used their tongue tip to track a virtual intra-oral moving target that varied in predictability, frequency (0.4, 0.6, 0.8 Hz), and direction (vertical, horizontal, lateral). These conditions tested feedforward/feedback control, the speed-accuracy tradeoff, and speech-like versus non-speech-like movement, respectively. In a second experiment, another group of ten healthy
young adults (mean age = 21.0 years) participated in a similar tracking experiment assessing
whether the tracking patterns observed for synchronous tongue-jaw movement extend to the tongue moving in isolation. In
both experiments, tracking accuracy was measured by computing correlation coefficients,
amplitude ratios, and phase differences. Experiment 1 demonstrated significantly higher
accuracy in the predictable condition than in the unpredictable condition, providing support for the
notion of an internal model guiding expected movement. In addition, a speed-accuracy tradeoff
was found, with significantly higher accuracy for the slowest frequency (0.4 Hz) compared with
the fastest (0.8 Hz). Amplitude ratio data revealed significantly higher accuracy in the lateral direction than in the vertical direction, suggesting a difference in the control of movement for speech-like (vertical) versus non-speech-like (lateral) directions.
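As an illustration of the tracking-accuracy measures mentioned above, the following minimal sketch (Python/NumPy) shows one way a correlation coefficient, amplitude ratio, and phase difference could be computed from a pair of target and tongue-tip traces. The function name, sampling rate, RMS-based amplitude ratio, and cross-correlation estimate of phase lag are assumptions for illustration, not the analysis pipeline actually used in these experiments.

import numpy as np

def tracking_accuracy(target, tongue, fs=100.0):
    """Tracking-accuracy metrics for one trial: correlation coefficient,
    amplitude ratio, and phase lag.  `target` and `tongue` are equal-length
    1-D position traces (e.g., vertical tongue-tip displacement in mm)
    sampled at `fs` Hz; names, units, and sampling rate are illustrative."""
    t0 = np.asarray(target, float) - np.mean(target)
    g0 = np.asarray(tongue, float) - np.mean(tongue)

    # Correlation coefficient: overall similarity of the two trajectories.
    r = np.corrcoef(t0, g0)[0, 1]

    # Amplitude ratio: produced movement extent relative to the target's
    # extent (1.0 = amplitude matched), here via RMS of the centered traces.
    amp_ratio = np.sqrt(np.mean(g0 ** 2)) / np.sqrt(np.mean(t0 ** 2))

    # Phase difference: lag (in seconds) at which the cross-correlation
    # peaks; positive values mean the tongue trailed the target.  Multiply
    # by 360 * target frequency to express the lag in degrees.
    lag = np.argmax(np.correlate(g0, t0, mode="full")) - (len(t0) - 1)
    phase_lag_s = lag / fs

    return r, amp_ratio, phase_lag_s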
Results from Experiment 2 showed that the basic motor control principles observed in Experiment 1 also hold for movement of the tongue alone. Taken together, the findings suggest that the motor control of the tongue shares basic properties with that of the limbs and of previously studied speech articulators. These results serve as a basis for expanded investigations into visual feedback and feedforward deficit theories.
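For a concrete picture of the predictability conditions described above, the sketch below shows one common way predictable and unpredictable tracking targets can be generated (a single sinusoid versus a sum of non-harmonic sinusoids with random phases). The refresh rate, amplitude, and component frequencies are assumptions for illustration, not the stimulus parameters of the study.

import numpy as np

FS = 60.0    # assumed refresh rate of the visual display (Hz); illustrative
DUR = 20.0   # assumed trial duration (s); illustrative

def predictable_target(freq_hz, amp_mm=10.0, fs=FS, dur=DUR):
    """Predictable condition: a single sinusoid at one tested frequency
    (0.4, 0.6, or 0.8 Hz)."""
    t = np.arange(0.0, dur, 1.0 / fs)
    return amp_mm * np.sin(2.0 * np.pi * freq_hz * t)

def unpredictable_target(amp_mm=10.0, fs=FS, dur=DUR, seed=None):
    """Unpredictable condition: a sum of non-harmonically related sinusoids
    with random phases, one common way to build an irregular tracking
    signal.  The component frequencies are assumptions."""
    rng = np.random.default_rng(seed)
    t = np.arange(0.0, dur, 1.0 / fs)
    freqs = np.array([0.35, 0.55, 0.85])            # Hz, non-harmonic set
    phases = rng.uniform(0.0, 2.0 * np.pi, size=freqs.size)
    sig = np.sin(2.0 * np.pi * freqs[:, None] * t + phases[:, None]).sum(axis=0)
    return amp_mm * sig / np.max(np.abs(sig))       # rescale to +/- amp_mm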