in the lexical system creates undesirable side effects, including errors in pronouncing nonwords, especially lexicalizations, and exaggerated regularity effects for higher and lower frequency words.

       Relative difficulty of words and nonwords

      In general, words are read aloud more rapidly than nonwords (Forster & Chambers, 1973; Frederiksen & Kroll, 1976). This “lexicality effect” can be seen as an extension of the standard frequency effect: Pronounceable nonwords are essentially very low frequency words. Coltheart et al. (2001) counted the lexicality effect among the phenomena their model simulated correctly. Again, however, the model’s behavior differed substantially from people’s.

      This point can be illustrated by examining the distributions of naming latencies for words and nonwords from a study by Rastle and Coltheart (1999). Words were named faster than nonwords (the “lexicality effect”), but the distributions of naming times overlapped. As this case illustrates, a statistically significant difference between two means does not indicate that all members of one group differed from all members of the other. (See supplemental materials.)
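      The statistical point can be made concrete with a minimal sketch, assuming fabricated latency parameters rather than Rastle and Coltheart’s (1999) actual data: two reaction‐time distributions whose means differ reliably even though they overlap heavily, so that many individual nonwords are named faster than many individual words.

```python
# Minimal sketch: a reliable mean difference with heavily overlapping
# distributions. Parameters are hypothetical, not Rastle & Coltheart's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
words = rng.normal(loc=600, scale=80, size=500)     # simulated word naming latencies (ms)
nonwords = rng.normal(loc=660, scale=80, size=500)  # simulated nonword naming latencies (ms)

t, p = stats.ttest_ind(words, nonwords)             # the means differ significantly
overlap = np.mean(nonwords < words.mean())          # yet many nonwords beat the average word

print(f"mean difference = {nonwords.mean() - words.mean():.0f} ms, p = {p:.2g}")
print(f"{overlap:.0%} of simulated nonwords were faster than the mean word latency")
```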

      Accommodating these results again seems to require changing parameters in the model to slow the lexical route or to speed the nonlexical route, which produces undesirable side effects (lexicalizations of nonwords or regularizations of exception words). Moreover, changing parameters in the model to account for one phenomenon spoils the simulations of other phenomena (Seidenberg & Plaut, 2006).

       Length effects for words versus nonwords

      An additional phenomenon serves to illustrate that DRC’s treatment of nonwords yielded a broad range of anomalous results. Coltheart et al. (2001) simulated a widely cited study by Weekes (1997) that examined the effect of length (3–6 letters) on the pronunciation of high‐ and low‐frequency words and nonwords. Nonword reading times showed a linear increase with length. The time taken to read lower frequency words also showed a linear increase with length, but the slope was shallower. Higher frequency words showed no effect of length.

      For their simulation, Coltheart et al. (2001) collapsed the data for high and low frequency words, which then showed no reliable effects of length, in contrast to nonwords. The DRC simulation reproduced this word‐nonword difference, because the nonlexical route used for nonwords operates serially, whereas the lexical route does not. The length effect for nonwords but not words was taken as another finding the model successfully explained.

      This account is inaccurate. The DRC model produced length effects for nonwords but not words; however, Weekes (1997) found a length effect for lower frequency words, which was not examined in the simulation. Looking at all three conditions, the correct generalization is about the impact of frequency, not lexicality, on length effects: high‐frequency words < low‐frequency words < nonwords, which are tantamount to very low frequency words. Several studies predating Coltheart et al. (2001) found length effects on word naming that were modulated by frequency and reading skill: Mason (1978), Cosky (1976), Frederiksen and Kroll (1976); see New et al. (2006) and Barton et al. (2014). In Balota et al.’s (2007) large corpus of naming times, length effects are observed for both words and nonwords.
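      The frequency‐graded pattern can be illustrated with a toy analysis, assuming fabricated mean latencies chosen only to mirror the ordering described above: fitting a separate length slope for high‐frequency words, low‐frequency words, and nonwords yields slopes ordered from flattest to steepest.

```python
# Toy illustration with hypothetical latencies (not Weekes' 1997 data):
# the length slope grows as frequency drops, with nonwords steepest.
import numpy as np

lengths = np.array([3, 4, 5, 6])  # letter lengths, as in Weekes (1997)
latencies = {
    "high-frequency words": np.array([540, 542, 541, 544]),  # essentially flat
    "low-frequency words":  np.array([580, 588, 597, 606]),  # shallow increase
    "nonwords":             np.array([620, 645, 668, 693]),  # steepest increase
}

for condition, rt in latencies.items():
    slope, intercept = np.polyfit(lengths, rt, deg=1)  # least-squares fit: rt ~ slope*length + intercept
    print(f"{condition:22s} slope = {slope:5.1f} ms per letter")
```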

      Coltheart et al.’s treatment of the Weekes (1997) study is problematic. The DRC model did not accurately simulate the results of that study or others like it. The characterization of the findings as showing that length affects nonwords but not words was inaccurate. The explanation of a word‐nonword difference does not explain the difference between the two types of words. The narrow focus on Weekes’ (1997) experiment did not yield a deeper understanding of the relationship between lexicality and length. Coltheart et al.’s (2001) claim that the DRC model accounts for this relation is therefore unwarranted.

       Semantic effects on word naming

      Finally, we consider an important aspect of reading aloud that dual‐route models do not address at all: the role of semantic information. Although semantics plays no role in the dual‐route account of normal reading, behavioral and neuroimaging studies of skilled readers in English and other languages (e.g., Japanese, Chinese) indicate that semantic information is utilized in word naming (Evans et al., 2012; Taylor et al., 2015). In the triangle framework, the learner’s task is to find ways to generate pronunciations quickly and accurately. The solution involves developing an efficient division of labor between different parts of the triangle, orthography➔phonology (O➔P) and orthography➔semantics➔phonology (O➔S➔P). The exact contributions from these components are affected by characteristics of words, writing systems, and readers. In general, there is greater input from O➔S➔P for words that are difficult to pronounce via the O➔P computation. In English, those are lower frequency words with atypical spelling‐sound correspondences (Strain et al., 1995; Strain & Herdman, 1999). There is also greater semantic involvement in reading aloud in writing systems that are relatively deep, such as Chinese (Yang et al., 2009) and Japanese Kanji (Shibahara et al., 2003; Smith et al., 2021).

      These results follow from properties of the mappings between codes and their impact on learning in connectionist networks (Plaut et al., 1996; Harm & Seidenberg, 2004). The division of labor account has provided the basis for investigations of the development of reading in typical and dyslexic readers (e.g., Siegelman et al., 2020; Snowling & Hayiou‐Thomas, 2006; Harm & Seidenberg, 1999), the brain bases of the orthography➔phonology and orthography➔semantics➔phonology computations (Frost et al., 2005), and individual differences in reliance on the pathways in skilled readers (Graves et al., 2014; Woollams, 2005). Harm and Seidenberg (2004) developed a complementary model of the division of labor in computing meaning from print.
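      The division‐of‐labor idea can be sketched schematically, assuming hypothetical pathway strengths rather than an implemented triangle model: the indirect O➔S➔P pathway supplies whatever support the direct O➔P pathway leaves missing, so its contribution is largest for low‐frequency words with atypical spelling‐sound correspondences.

```python
# Schematic sketch with hypothetical numbers (not an implemented triangle
# model): after learning, the direct O->P pathway carries most of the work,
# and the indirect O->S->P pathway contributes most where O->P is weakest.
TARGET = 1.0  # support needed to settle on the correct pronunciation

def osp_contribution(op_strength: float, target: float = TARGET) -> float:
    """Residual support supplied by the O->S->P pathway (cartoon version)."""
    return max(0.0, target - op_strength)

# Assumed O->P strengths: high for frequent and/or regular words, low for
# low-frequency words with atypical spelling-sound correspondences.
op_strength = {
    "high-frequency regular":   0.95,
    "high-frequency exception": 0.80,
    "low-frequency regular":    0.85,
    "low-frequency exception":  0.45,
}

for word_type, op in op_strength.items():
    print(f"{word_type:26s} O->P {op:.2f}   O->S->P {osp_contribution(op):.2f}")
```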

      Every finding that semantics is used in normal reading aloud is a disconfirmation of the dual‐route model. In that architecture, semantics cannot be accessed until after a word is recognized (i.e., its entry in the orthographic or phonological lexicon is contacted). Accounting for semantic effects that arise in generating pronunciations would require rethinking basic tenets of the approach.

      Claims that the dual‐route model correctly reproduced basic behavioral phenomena were overstated, undermined by the inaccuracy of many simulations, the file‐drawer problem, and anomalous effects that could not be ameliorated. Researchers were unable to implement the two routes in a manner consistent with behavior. That is cause to focus on other approaches.

      The DRC models were succeeded by a series of hybrid, “connectionist dual process” (CDP) models. These models replaced the grapheme‐phoneme correspondence rules with connectionist networks, but retained a separate lexical route (Perry et al., 2007, 2010; Ziegler et al., 2014). The work was guided by several precepts (Perry