
where longer periods between the onset of profound hearing loss and implantation were associated with significantly poorer speech perception scores (Blamey et al., 1996). The model also shows that the postoperative performance ranking is better during the phase in which the patient learns to listen with their CI, provided that an HA had been used in the period prior to implantation.

      Source: Lazard, D. S., Vincent, C., Venail, F., Van de Heyning, P., Truy, E., Sterkers, O., et al. (2012). Pre‐, per‐ and postoperative factors affecting performance of postlinguistically deaf adults using cochlear implants: a new conceptual model over time. PLoS ONE, 7(11): e48739. doi:10.1371/journal.pone.0048739

      In terms of hardware, CIs consist of an external ear‐level processor situated in a housing that also contains the microphone(s), battery, and transcutaneous transducer. The implanted components include the receiver‐stimulator and the electrode array, which is surgically inserted into the scala tympani compartment of the cochlea. At the time of writing there are reports of fully implantable devices in which all external components are implanted subcutaneously, a trend with obvious cosmetic advantages.

       3.3.1 Sound Processing in Cochlear Implants and the Electrical–Neural Bottleneck

      There are different approaches to the conversion of sound into electrical stimulation, but some principles are common to all devices. These include pre‐emphasis filtering of the input signal, so that low‐frequency energy is attenuated relative to higher frequencies. After pre‐emphasis the input signal is decomposed by a band‐split filter bank so as to facilitate independent channel processing. There is often a one‐to‐one relationship between the number of independent processing channels and the number of electrodes. The signal in each channel can then be half‐wave rectified and low‐pass filtered to extract its envelope. The output of this processing is then used to modulate trains of electrical pulses presented at individual electrodes, within the clinically defined electrical dynamic range of the implant recipient. Commercially available electrode arrays may have up to 22 intracochlear electrodes, yet there is often considerable disparity between the center frequency of the band associated with the stimulating electrode and the tonotopically organized characteristic frequency of the neuronal population in closest proximity. CI recipients can often adapt to this mismatch, so that receptive communication is adequately supported in natural listening conditions.
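      To make the processing chain concrete, the sketch below illustrates the envelope‐extraction stages described above (pre‐emphasis, band‐split filtering, half‐wave rectification, and low‐pass smoothing) for a single channel. It is a minimal illustration in Python using NumPy/SciPy; the filter orders, cutoff frequencies, and band edges are illustrative assumptions, not the parameter values of any commercial processor.

```python
import numpy as np
from scipy.signal import butter, lfilter

def extract_channel_envelope(signal, fs, band, env_cutoff=200.0):
    """Simplified single-channel CI-style envelope extraction.

    band       -- (low, high) edges of the analysis band in Hz (illustrative)
    env_cutoff -- low-pass cutoff for envelope smoothing in Hz (assumed value)
    """
    # Pre-emphasis: first-order high-pass that attenuates low frequencies
    # relative to higher frequencies.
    pre_emphasized = lfilter([1.0, -0.95], [1.0], signal)

    # Band-split filtering: one band of the analysis filter bank.
    low, high = band
    b_bp, a_bp = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    band_signal = lfilter(b_bp, a_bp, pre_emphasized)

    # Half-wave rectification followed by low-pass filtering to obtain
    # the channel envelope.
    rectified = np.maximum(band_signal, 0.0)
    b_lp, a_lp = butter(2, env_cutoff / (fs / 2))
    envelope = lfilter(b_lp, a_lp, rectified)

    # The envelope would then modulate a fixed-rate biphasic pulse train,
    # mapped into the recipient's electrical dynamic range (not shown here).
    return envelope

# Example: extract the envelope of a 1 kHz tone within a 900-1100 Hz band.
fs = 16000
t = np.arange(0, 0.1, 1 / fs)
tone = np.sin(2 * np.pi * 1000 * t)
env = extract_channel_envelope(tone, fs, band=(900.0, 1100.0))
```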

      Another principle employed in sound processing strategies based on the Continuous Interleaved Sampling (CIS) stimulation strategy (Wilson, Finley, Lawson, Wolford, & Zerbi, 1993) is that only a subset of electrodes is used to convey the signal along the array. This processing is generically referred to as n‐of‐m, where n is the number of the total m electrodes that may be active in a given stimulation cycle. The selection of the specific n electrodes is usually made on the basis of spectral peaks in the input signal. In CIS‐derived processing strategies the temporal presentation of the electrical signal is staggered along the implant array, so that no two electrodes are active at exactly the same time.
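      A minimal sketch of the n‐of‐m idea follows, under the simplifying assumption that channel selection is driven purely by per‐frame envelope amplitude (i.e., spectral maxima). Actual strategies differ in how maxima are chosen and how pulses are timed; what is preserved here is the interleaving constraint that no two electrodes are active at the same instant, implemented by giving each selected channel its own time slot within the frame.

```python
import numpy as np

def n_of_m_selection(envelopes, n):
    """Select the n channels with the largest envelope amplitude per frame.

    envelopes -- array of shape (m_channels, n_frames)
    Returns a boolean mask of the same shape marking active channels.
    """
    m_channels, n_frames = envelopes.shape
    mask = np.zeros_like(envelopes, dtype=bool)
    for frame in range(n_frames):
        top = np.argsort(envelopes[:, frame])[-n:]   # spectral maxima
        mask[top, frame] = True
    return mask

def interleaved_schedule(mask, frame_period_us=1000.0):
    """Assign each selected channel a non-overlapping pulse slot per frame,
    so that no two electrodes are stimulated at exactly the same time."""
    pulses = []  # list of (time_us, channel) pairs
    m_channels, n_frames = mask.shape
    for frame in range(n_frames):
        active = np.flatnonzero(mask[:, frame])
        slot = frame_period_us / max(len(active), 1)
        for k, ch in enumerate(active):
            pulses.append((frame * frame_period_us + k * slot, int(ch)))
    return pulses

# Example: 8-of-22 selection on random envelopes for 5 frames.
rng = np.random.default_rng(0)
env = rng.random((22, 5))
mask = n_of_m_selection(env, n=8)
schedule = interleaved_schedule(mask)
```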

      The perceptual bottleneck imposed by CI listening also involves difficulties hearing in noise. Listening in noise is, of course, a common complaint even among listeners with normal audiometric results (for instance, see Tremblay et al., 2015); however, the extent to which noise affects the receptive communication of CI listeners is of a different magnitude. Noise that is innocuous for normally hearing listeners can pose difficulties for CI listeners. For instance, Shannon, Cruz, and Galvin (2011) reported that for CI listeners (n = 7) using their clinical processors, word recognition in IEEE sentences decreased from 42% in quiet to 13% at a 10 dB signal‐to‐noise ratio (SNR). Although the IEEE sentences are difficult, it is unlikely that the addition of noise at this SNR would produce similar performance declines in normally hearing listeners.
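      For readers less familiar with the SNR metric, the short sketch below shows one conventional way of mixing a speech signal with noise at a nominal SNR, based on overall RMS levels. This is a generic illustration of what a "10 dB SNR" condition means, not the calibration procedure used by Shannon et al. (2011).

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale the noise so that the ratio of overall RMS levels (speech to
    noise) equals snr_db, then return the speech-plus-noise mixture."""
    noise = noise[:len(speech)]                       # match lengths
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    target_noise_rms = rms(speech) / (10 ** (snr_db / 20.0))
    scaled_noise = noise * (target_noise_rms / rms(noise))
    return speech + scaled_noise

# Example: a 1 kHz tone standing in for speech, mixed with white noise
# at a 10 dB SNR.
fs = 16000
t = np.arange(0, 1.0, 1 / fs)
speech = np.sin(2 * np.pi * 1000 * t)
noise = np.random.default_rng(1).standard_normal(len(speech))
mixture = mix_at_snr(speech, noise, snr_db=10.0)
```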

      The deleterious effect of noise on CI listening is attributable to factors associated with the operation of CIs and to limitations in the electrical–neural interface, the sum total of which constitutes the perceptual bottleneck. Device factors include limited transmission of both spectral and temporal information. Electrical–neural interface limits also appear to restrict the number of effective spectral channels, so that, for example, improvements in speech intelligibility in noise plateau when the number of channels is increased above eight (Friesen, Shannon, Baskent, & Wang, 2001). This plateau may be attributable to a broad spread of in‐vivo excitation caused by the distance between the electrode and the receptor site, and also to patchy neuronal integrity in the vicinity of the electrode. The encoding of more rapid temporal information is also constrained by neuronal factors, over and above the limits imposed by the processing (Zeng, 2002).
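      One way to see why the effective number of channels saturates is to model the spread of excitation as a channel‐interaction matrix that smears each channel's output across its neighbors. The toy model below, which assumes a simple exponential decay of current with electrode separation (the decay rate in dB per electrode is an arbitrary illustrative parameter), is only a conceptual sketch of channel interaction, not a physiological model.

```python
import numpy as np

def spread_matrix(n_electrodes, decay_per_electrode_db=3.0):
    """Channel-interaction matrix from an assumed exponential current decay
    (decay_per_electrode_db dB of attenuation per electrode of separation)."""
    idx = np.arange(n_electrodes)
    distance = np.abs(idx[:, None] - idx[None, :])
    return 10 ** (-decay_per_electrode_db * distance / 20.0)

def smear_envelopes(envelopes, decay_per_electrode_db=3.0):
    """Pass per-channel envelopes through the interaction matrix, blurring
    spectral detail in the same way that broad excitation spread would."""
    spread = spread_matrix(envelopes.shape[0], decay_per_electrode_db)
    return spread @ envelopes

# Example: with a broad spread (small dB decay per electrode), a single
# active channel excites a wide region, so neighboring channels overlap
# and the number of independent (effective) channels is reduced.
env = np.zeros((22, 1))
env[10, 0] = 1.0
print(smear_envelopes(env, decay_per_electrode_db=1.0).ravel().round(2))
```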

      The perceptual bottleneck is, to a great extent, the reason for the variability that is reported in the speech‐perceptual and listening performance results of CI listeners. This variability means that few postlingually deafened CI recipients automatically experience an improvement in receptive communication abilities that places them wholly on a par with their NH counterparts. With this variability in mind, let us now review some of the work that has been done on speech‐perceptual performance and consider how these results can inform our understanding of candidacy issues and the setting of