The Handbook of Speech Perception


Скачать книгу

interrupted flow of speech. In unaltered speech, the delay between speaking and hearing one’s own speech is about 1 millisecond (Yates, 1963). When this interval is artificially lengthened, numerous speech changes are introduced: vocal intensity rises, production speed slows, and stuttering or word repetitions become common (Chase et al., 1961). In birdsong, DAF yields errors similar to those observed in humans: zebra finches produce more frequent stuttering (more repetitions of introductory notes) and more syllable omissions when feedback is delayed (Cynx & von Rad, 2001).

      One of the unique aspects of DAF is that it cannot be readily compensated for. Unlike feedback about vocal pitch, loudness, spectral detail, or even the detailed timing of an utterance (e.g. Mitsuya, MacDonald, & Munhall, 2014), all of which define intentional characteristics of the signal, DAF is an indicator of the transmission speed of the sensorimotor system. As such, feedback timing acts as a constraint on the use of speech motor feedback. Recently, Mitsuya, Munhall, and Purcell (2017) showed that the amount of compensation for a perturbed formant frequency decreased linearly with feedback delay. In this study, a 200 Hz perturbation to F1 auditory feedback was introduced with a 100 ms feedback delay. Every 10 trials the delay was reduced by 10 ms, while the magnitude of the frequency perturbation remained constant. The magnitude of F1 compensation grew as the delay was reduced. These findings demonstrate that auditory feedback arriving outside a limited temporal window ceases to serve as an effective control signal for speech production.
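
      The schedule used in that study is simple enough to sketch. The Python fragment below reproduces the delay-reduction schedule described above (a constant 200 Hz F1 perturbation, a 100 ms starting delay, and a 10 ms reduction every 10 trials) and pairs it with a purely illustrative linear rule for how much compensation might remain at each delay; the MAX_COMPENSATION_HZ value and the linear falloff are assumptions made for illustration, not values reported by Mitsuya, Munhall, and Purcell (2017).

```python
# Toy simulation of the delay-reduction schedule described above.
# Only the perturbation size, starting delay, and step schedule come from
# the text; the compensation rule and its ceiling are illustrative guesses.

PERTURBATION_HZ = 200      # constant +200 Hz shift applied to F1 feedback
START_DELAY_MS = 100       # initial feedback delay
DELAY_STEP_MS = 10         # delay reduced by 10 ms every block
TRIALS_PER_BLOCK = 10      # trials over which the delay is held constant
MAX_COMPENSATION_HZ = 60   # hypothetical ceiling on compensation at zero delay


def toy_compensation(delay_ms: float) -> float:
    """Assumed linear falloff: full compensation at 0 ms, none at >= 100 ms."""
    usable = max(0.0, 1.0 - delay_ms / START_DELAY_MS)
    return MAX_COMPENSATION_HZ * usable


def run_schedule() -> None:
    delay = START_DELAY_MS
    block = 0
    while delay >= 0:
        comp = toy_compensation(delay)
        first = block * TRIALS_PER_BLOCK
        last = first + TRIALS_PER_BLOCK - 1
        print(f"trials {first:3d}-{last:3d}: delay = {delay:3d} ms, "
              f"toy predicted F1 compensation = {comp:4.1f} Hz")
        delay -= DELAY_STEP_MS
        block += 1


if __name__ == "__main__":
    run_schedule()
```

      Running the sketch simply prints a predicted compensation that grows from zero toward its ceiling as the delay is stepped down, mirroring the qualitative pattern described above.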

      Collectively, these findings provide consistent support for the importance of auditory feedback for the development and maintenance of spoken language. This feedback processing is evident for a variety of attributes of spoken language and the data imply the existence of some form of articulatory/acoustic goals that are supported by perceptual feedback. However, the mechanisms underlying this process remain unclear.

       Computational processing of feedback

      The term efference copy is a direct translation of the German Efferenzkopie, introduced by von Holst and Mittelstaedt in 1950 to explain how we distinguish changes in visual sensation caused by our own movement from those caused by movement of the world. Crapse and Sommer (2008) consider corollary discharge (coined by Sperry in the same year, 1950) to be the more general term: corollary discharges are viewed as copies of motor commands sent to any sensory structure, whereas efference copies were thought to be sent only to early or primary sensory structures.

      Two current types of neurocomputational models of speech production differ in how such corollary discharges and sensory feedback could influence speech. The Directions into Velocities of Articulators (DIVA) model and its extension, the Gradient Order DIVA (GODIVA) model, use the comparison of overt auditory feedback to auditory target maps as the mechanism to control speech errors (Guenther & Hickok, 2015). The auditory target maps can be understood as the predictions of the sensory state following a motor program. These predictions are also the goals represented in the speech‐sound map, where a speech sound is defined as a phonetic segment with its own motor program. This model requires two sensory‐to‐movement mappings to be learned in development. The speech‐sound map must be mapped to appropriate movements in what is considered a forward model. When errors are detected by mismatches between feedback and predicted sensory information, a correction must be generated. The sensorimotor mapping responsible for such corrective movements is considered an inverse model.
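
      As a concrete, deliberately simplified illustration of this error-correction loop, the sketch below compresses the DIVA-style architecture into a single scalar “articulator” whose forward consequence is an F1 value: overt feedback is compared with the auditory target, and the mismatch is mapped back to a corrective movement through a fixed gain standing in for the learned inverse model. All names, gains, and the one-dimensional plant are assumptions made for illustration and are not part of the published model.

```python
# Toy DIVA-style feedback correction: compare overt (perturbed) auditory
# feedback with an auditory target and convert the error into a corrective
# articulator movement. All numbers are illustrative assumptions.

AUDITORY_TARGET_F1 = 500.0    # Hz, the prediction stored in the target map
FORWARD_GAIN = 10.0           # Hz of F1 per unit of articulator position
INVERSE_GAIN = 0.8            # fraction of the error corrected per step
PERTURBATION_HZ = 200.0       # external shift applied to the feedback signal


def forward_model(articulator: float) -> float:
    """Toy forward mapping from an articulator position to produced F1."""
    return FORWARD_GAIN * articulator


def simulate(steps: int = 10) -> None:
    articulator = AUDITORY_TARGET_F1 / FORWARD_GAIN   # start on target
    for step in range(steps):
        produced = forward_model(articulator)
        feedback = produced + PERTURBATION_HZ          # overt (perturbed) feedback
        error = feedback - AUDITORY_TARGET_F1          # mismatch with the target map
        # Inverse model: convert the auditory error into a corrective movement.
        articulator -= INVERSE_GAIN * error / FORWARD_GAIN
        print(f"step {step}: produced F1 = {produced:6.1f} Hz, "
              f"heard = {feedback:6.1f} Hz, error = {error:6.1f} Hz")


if __name__ == "__main__":
    simulate()
```

      Over a few iterations the produced F1 falls until the heard (perturbed) value matches the target; that is, the toy system compensates for the perturbation, which is the kind of behavior the feedback subsystem of DIVA is meant to capture.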

      In contrast, the state feedback control model of speech production (SFC), and its extension, the hierarchical state feedback control model (HSFC), assume an additional internal feedback loop (Hickok, 2012; Houde & Nagarajan, 2011; Houde & Chang, 2015). Similar to the DIVA models, the SFC models incorporate a form of corollary discharge. One critical difference is that the corollary discharge in SFC models is checked against an internal target map rather than overt auditory feedback (i.e. a prediction of speech errors is generated, which provides a mechanism to prevent such errors before they occur). Overt auditory feedback is included in the model through its influence on how the speech‐error predictions are converted into corrections (Houde & Nagarajan, 2011).
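
      The contrast with the previous sketch can be made explicit in the same toy terms. In the fragment below, again a simplified illustration rather than the published model, the corollary discharge is compared with an internal target before any overt feedback is available, so a predicted error can be corrected pre-emptively, and the later-arriving overt feedback is used only to update an internal estimate of the produced sound. The gains, the scalar state, and the update rule are assumptions for illustration.

```python
# Toy SFC-style internal loop: correct a *predicted* error derived from the
# corollary discharge, and let overt feedback merely update the internal
# state estimate. All numbers and names are illustrative assumptions.

INTERNAL_TARGET_F1 = 500.0   # Hz, internal target map
FORWARD_GAIN = 10.0          # Hz of F1 per unit of articulator position
FEEDBACK_GAIN = 0.3          # weight given to (delayed) overt feedback
CORRECTION_GAIN = 0.8        # fraction of the predicted error corrected


def simulate(steps: int = 10) -> None:
    articulator = 48.0                             # start slightly off target
    state_estimate = FORWARD_GAIN * articulator    # internal estimate of produced F1
    for step in range(steps):
        # Corollary discharge: predict the sensory consequence of the command.
        predicted_f1 = FORWARD_GAIN * articulator
        # Internal loop: compare the prediction with the internal target map.
        predicted_error = predicted_f1 - INTERNAL_TARGET_F1
        articulator -= CORRECTION_GAIN * predicted_error / FORWARD_GAIN
        # Overt feedback arrives later and only nudges the internal estimate.
        overt_f1 = FORWARD_GAIN * articulator
        state_estimate += FEEDBACK_GAIN * (overt_f1 - state_estimate)
        print(f"step {step}: predicted error = {predicted_error:6.1f} Hz, "
              f"internal estimate = {state_estimate:6.1f} Hz")


if __name__ == "__main__":
    simulate()
```

      The structural difference from the DIVA sketch is that the correction here is driven by the internally predicted error, while overt feedback only adjusts the internal state, echoing in a very coarse way the internal-loop idea that the SFC models formalize.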

       Neural processing of feedback

      There is an extensive literature on the neural substrates supporting speech production (see Guenther, 2016, for a review). Much of this is based on mapping the speech‐production network using fMRI (Guenther, Ghosh, & Tourville, 2006). Our focus here is narrower – how speech sounds produced by the talker are handled in the nervous system. The neural processing of self‐produced sound requires mechanisms that differentiate sound produced by oneself from sound produced by others. Two coexisting processes may play a role in this: (1) perceptual suppression of external sounds and voices, and (2) specialized processing of one’s own speech (Eliades & Wang, 2008). Cortical suppression serves different adaptive functions depending on the species. In nonhuman primates, for example, the ability to discern self‐vocalization from external sound promotes antiphonal calling, whereby the animal must recognize its species‐specific call and respond by producing the same call (Miller & Wang, 2006). Takahashi, Fenley, and Ghazanfar (2016) have invoked the development of self‐monitoring and self‐recognition as essential for the emergence of coordinated turn taking in marmoset monkeys.