et al., 1984; Coltheart & Leahy, 1992; Jared, 1997; Jared et al., 1990; Seidenberg et al., 1984; Taraban & McClelland, 1987). Jared et al. (1990) showed that word‐body consistency was a good predictor of naming latencies across multiple studies. Later research (Treiman et al., 1995; Kessler & Treiman, 2001) showed that consistency over other units, such as the onset‐nucleus, also affects naming for some words. In recent work, Siegelman, Kearns, and Rueckl (2020) defined consistency as the surprisal of the vowel in a monosyllable (an information‐theoretic measure of the degree to which the pronunciation is unexpected), which produces similar predictions (see also Chee et al., 2020).
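One way to instantiate such a surprisal measure is to treat it as −log₂ P(pronunciation | vowel grapheme), with the probability estimated from token frequencies. The following sketch is our illustration of that idea only; the mini‐lexicon, transcriptions, and frequency counts are invented and are not the corpus or exact computation used by Siegelman et al. (2020).

```python
import math
from collections import defaultdict

# Hypothetical token-weighted mini-lexicon: (vowel grapheme, vowel phoneme,
# summed word frequency). All entries are invented for illustration.
LEXICON = [
    ("ea", "i",  1300),  # e.g., bead, heat, dream
    ("ea", "E",   300),  # e.g., head, dead
    ("a_e", "eI", 800),  # e.g., gave, cave
    ("a_e", "{",  600),  # e.g., have
]

def vowel_surprisal(grapheme, phoneme, lexicon):
    """Surprisal in bits of a vowel pronunciation given its grapheme:
    -log2 P(phoneme | grapheme), estimated from token frequencies."""
    counts = defaultdict(float)
    for g, p, freq in lexicon:
        if g == grapheme:
            counts[p] += freq
    return -math.log2(counts[phoneme] / sum(counts.values()))

# The dominant pronunciation has low surprisal (expected); the minority
# pronunciation has high surprisal (unexpected), i.e., it is less consistent.
print(vowel_surprisal("ea", "i", LEXICON))  # ~0.30 bits
print(vowel_surprisal("ea", "E", LEXICON))  # ~2.42 bits
```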
Coltheart et al. (2001) counted consistency effects among the phenomena the DRC model correctly simulated, based on the results of a “benchmark” study by Jared (1997) that manipulated frequency and consistency factorially. A consistency effect was found for both the high‐ and low‐frequency words (the atypical effect for the “higher frequency” words arose because they were only moderate in frequency and had higher‐frequency enemies). The DRC model reproduced these results, as did SM89. Coltheart et al. concluded:
[We] have shown that the DRC model can simulate the Jared (1997) results and have discovered why; hence we have shown that there is no conflict between her results and the DRC model. Even more generally, for these reasons we believe that the body of experiments showing effects of consistency on reading aloud are compatible with the DRC model despite the fact that this model contains no level of representation specific to orthographic bodies. (p. 233)
According to Coltheart et al., the “consistency” effect in the Jared study was due to other properties of the stimuli. In their view, the inconsistent words were a mix of regular and exception words. For example, Jared categorized doll as inconsistent because the –oll body has alternative pronunciations (contrast poll and doll). The DRC model, however, treats doll as an exception. A set of “inconsistent” words containing a significant proportion of exceptions will yield poorer performance than a set of entirely regular words. The “inconsistency effect” is therefore seen as an artifact of averaging the latencies of the regular and exception words used in this condition.
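The distinction at issue can be made concrete with a small sketch. In the following toy classification (our illustration, with an invented mini‐lexicon and transcriptions, not DRC’s actual rule set), a word is inconsistent if the words sharing its body disagree in pronunciation, and it is an exception if its own pronunciation departs from the dominant pattern:

```python
from collections import Counter

# Toy lexicon mapping words to (body, pronunciation of the body).
# Entries and transcriptions are invented for illustration.
LEXICON = {
    "doll": ("-oll", "Ql"),    # vowel as in "odd"
    "poll": ("-oll", "oUl"),   # vowel as in "pole"
    "toll": ("-oll", "oUl"),
    "dust": ("-ust", "Vst"),
    "must": ("-ust", "Vst"),
}

def classify(word):
    body, pron = LEXICON[word]
    neighbors = [p for b, p in LEXICON.values() if b == body]
    dominant = Counter(neighbors).most_common(1)[0][0]
    consistent = len(set(neighbors)) == 1  # all body neighbors agree
    regular = (pron == dominant)           # follows the dominant pattern ("rule")
    return {"consistent": consistent, "regular": regular}

print(classify("doll"))  # inconsistent AND an exception
print(classify("poll"))  # inconsistent but regular
print(classify("must"))  # consistent and regular
```

On this toy analysis, an “inconsistent” condition built from words like doll and poll mixes exceptions with regular words, which is exactly the confound Coltheart et al. alleged.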
If this account is correct, excluding the exception words should eliminate the consistency effect in both the human and DRC data, because the remaining words are regular (according to DRC). If the effect is instead due to consistency, it should still be obtained, because the remaining words are inconsistent (according to the connectionist account). Seidenberg and Plaut (2006) tested this prediction by recomputing the condition means in Jared’s (1997) study after removing the words that are exceptions in DRC and their matched control words (9 pairs of stimuli). Removing the exceptions eliminates the consistency effect in the DRC simulation; in the behavioral data, however, a statistically reliable consistency effect remains. Thus, Coltheart et al.’s analysis is a correct account of consistency effects in their model but not in people.
The claim that the DRC model is compatible with consistency effects rests entirely on the simulation of this single experiment. Coltheart et al.’s conjecture that consistency effects found in other studies are also compatible with their model could have been tested by conducting the relevant simulations. In fact, the model uniformly fails to reproduce the effects (e.g., Jared et al., 1990; Seidenberg et al., 1984). Further, Cortese and Simpson (2000) and Jared (2002) explicitly contrasted regularity and consistency effects. There were consistency effects in both studies, but only a small regularity effect in Cortese and Simpson (2000) and none in Jared (2002). DRC simulations of these studies yield regularity effects but no effects of consistency.
Recognizing the importance of consistency effects in evaluating models of word reading, Coltheart et al. (2001) observed:
[One] outcome that would refute the model would be to find that when body consistency, as defined by Glushko (1979), is taken into account, regularity has no effect on reading aloud. (p. 247)
Following this reasoning, Jared’s (2002) study refutes the DRC. More importantly, this is one of many experiments documenting the theoretically decisive consistency effect and DRC’s failure to account for it.
Nonword Pronunciation
Many studies have examined children’s and adults’ ability to read aloud novel letter strings such as nust and mave. This task provides information about readers’ knowledge of spelling‐sound correspondences and their ability to generalize beyond the words they already know. This ability is particularly important for beginning readers, for whom every letter string is initially novel. Generating the pronunciation (or covert phonological code) of an unfamiliar letter string can allow it to be recognized via its spoken form, an important mechanism in learning to read (Share, 1995). Nonword pronunciation may seem artificial, but it taps into the same knowledge and processes that are used in everyday reading.
The basis of our ability to generalize is an important issue in cognition. For many years, generalization was taken as the primary evidence that linguistic knowledge consists of rules (Pinker, 1994). Generalization – pronouncing novel letter strings – is the principal motivation for the grapheme‐phoneme correspondence rules (GPCs) in dual‐route models. If people lacked this capacity, the lexical route would suffice because words could simply be memorized. Whereas the ability to generate the past tense of a nonword such as wug was taken as evidence for a linguistic rule (Berko, 1958), the ability to pronounce it was taken as evidence for GPCs. The connectionist approach offered a novel account of generalization (Rumelhart & McClelland, 1986), a major theoretical advance: a neural network is trained to perform a task through exposure to examples, and the knowledge encoded in its weights can then produce correct output for novel (untrained) items (see Seidenberg & Plaut, 2006, 2014, for reviews).
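As a toy illustration of this idea (ours, not SM89’s architecture or representations), the sketch below trains a one‐layer network with the delta rule on a handful of spelling‐sound pairs coded as one‐hot letter slots. After training, the network produces the correct output for an item it never saw, because the untrained item’s components were learned from other words:

```python
import numpy as np

# Toy demonstration of generalization in a trained network. Spellings and
# pronunciations are coded as one-hot letters in three slots (onset, vowel,
# coda). The coding scheme, training set, and delta-rule learner are
# deliberately simplistic stand-ins, far simpler than SM89.
ONSETS, VOWELS, CODAS = "mngs", "au", "tv"

def code(word):
    vec = []
    for letter, inventory in zip(word, (ONSETS, VOWELS, CODAS)):
        slot = np.zeros(len(inventory))
        slot[inventory.index(letter)] = 1.0
        vec.append(slot)
    return np.concatenate(vec)

TRAIN = ["mat", "nut", "gut", "sav", "gat", "muv"]
X = np.stack([code(w) for w in TRAIN])
Y = X.copy()  # toy target: each letter maps consistently to "its" phoneme

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(X.shape[1], X.shape[1]))
for _ in range(2000):               # delta-rule (LMS) training, batch form
    W += 0.05 * X.T @ (Y - X @ W)

# "nat" was never trained, but its onset, vowel, and coda all were.
output = code("nat") @ W
print(np.round(output, 2))
print(np.allclose(output, code("nat"), atol=0.05))  # True: correct generalization
```

The point of the demonstration is that nothing rule-like was built in: correct performance on the novel item falls out of weights learned from the training examples.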
The GPCs in the Coltheart et al. (2001) model were handcrafted to produce plausible pronunciations of nonwords, so, unsurprisingly, the model produced accurate pronunciations for the items that were tested. However, the model’s nonword performance deviates from people’s when other phenomena are examined. Here, we briefly summarize issues in three areas: 1) consistency effects for nonwords; 2) the relative difficulty of word and nonword naming; and 3) length effects for words and nonwords.
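Before turning to these issues, it may help to fix ideas about what rule‐based pronunciation involves. The following is a toy fragment of our own devising; the rules, notation, and fallback behavior are invented for illustration and bear only a family resemblance to the much larger handcrafted rule set of Coltheart et al. (2001).

```python
# Toy fragment of grapheme-phoneme correspondence (GPC) application:
# parse the string left to right, preferring longer graphemes, and emit
# the corresponding phonemes. Rules are invented for illustration.
GPC_RULES = {
    "m": "m", "n": "n", "s": "s", "t": "t", "v": "v",
    "a": "{", "u": "V",
    "ee": "i",   # multi-letter grapheme, found by longest-match
}
MARKED_A = "eI"  # context rule: 'a' followed by consonant + final 'e'

def apply_gpcs(letters):
    # Context-sensitive rule for the a_e (final-e) pattern.
    if len(letters) >= 3 and letters[-1] == "e" and letters[-3] == "a":
        return apply_gpcs(letters[:-3]) + MARKED_A + GPC_RULES[letters[-2]]
    phonemes, i = "", 0
    while i < len(letters):
        for size in (2, 1):  # longest-match first
            chunk = letters[i:i + size]
            if chunk in GPC_RULES:
                phonemes += GPC_RULES[chunk]
                i += size
                break
        else:
            i += 1  # no rule for this letter; skip (toy fallback)
    return phonemes

print(apply_gpcs("nust"))  # -> nVst
print(apply_gpcs("mave"))  # -> meIv; rules alone never yield a have-like "m{v"
```

As the last line illustrates, a purely nonlexical procedure of this kind assigns every item its rule‐governed pronunciation, with no mechanism for word neighbors such as have to exert any influence.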
Nonword consistency effects
Spelling‐sound consistency also affects nonword pronunciation. Glushko (1979) found that naming was slower for nonwords such as mave, which is inconsistent because of have, an atypically pronounced neighbor, compared to nonwords such as nust, whose neighbors are pronounced alike. The effect of word neighbors on nonword pronunciation presents a particularly strong challenge to the dual‐route approach, which holds that nonwords are pronounced by nonlexical rules, independent of word knowledge.
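A minimal way to quantify what is at stake (again our illustration, with an invented mini‐lexicon and transcriptions) is to check whether the real words sharing a nonword’s body agree in pronunciation:

```python
# Toy measure of nonword body consistency: look up the words sharing the
# nonword's body and compute the proportion agreeing with the dominant
# pronunciation. Lexicon and transcriptions are invented for illustration.
WORDS = {
    "have": ("-ave", "{v"),
    "gave": ("-ave", "eIv"),
    "save": ("-ave", "eIv"),
    "dust": ("-ust", "Vst"),
    "must": ("-ust", "Vst"),
}

def body_consistency(body):
    prons = [p for b, p in WORDS.values() if b == body]
    friends = max(prons.count(p) for p in set(prons))
    return friends / len(prons)  # 1.0 = fully consistent neighborhood

print(body_consistency("-ave"))  # ~0.67: have is an enemy -> slower naming of mave
print(body_consistency("-ust"))  # 1.0: all friends -> faster naming of nust
```

Under the dual‐route account, this word‐neighbor information should be irrelevant to nonword naming, yet it reliably predicts latencies.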
Coltheart et al. (2001) discussed Glushko’s study in detail, suggesting that consistency effects for words and nonwords could arise from conflicts between the two routes, as in the case of exception words. Simulating such effects in the DRC model requires changing parameters to increase the activation of words in the lexical network to the point where an exception word such as have could influence both gave and mave. Coltheart et al. (2001) noted that this requirement motivated the “cascaded” property of the model, setting the timing of activity in the two routes to allow such conflicts to occur.
This discussion is odd because the authors did not report any simulations of word or nonword consistency effects employing the proposed mechanism; their account of the consistency effect in the Jared study was that it was due to the inclusion of exception words, not to conflicts between the routes. Zevin and Seidenberg (2006) examined nonword consistency effects reported in three representative studies: Glushko (1979), Andrews and Scarratt (1998), and Treiman, Kessler, and Bick (2003). The DRC model did not reproduce any of these effects, whereas a model based on Harm and Seidenberg (1999) generated them correctly.
In summary, the DRC model presented in Coltheart et al. (2001) does not produce consistency effects for words or nonwords. The authors did not test their proposal that the effects can be obtained by changing lexical activation parameters.