the objects with sound and without sound, as well as hearing the sounds alone. We then probed brain responses through blood-oxygen-level-dependent (BOLD) activation in regions of the multimodal network (see Section 2.5 and Figure 2.7). We observed significantly greater activation after the active learning experience compared to the passive learning experience [Butler and James 2013]. Only active learning served to recruit the extended multimodal network, including frontal (motor) and sensory brain regions. Thus, after active learning, simply perceiving the objects or hearing the sound that was learned automatically recruited the sensory and motor brain regions used during the learning episode. Furthermore, we were interested in whether or not the visual regions were functionally connected to the motor regions. Assessing functional connectivity allows one to investigate whether the active regions are recruited together because of the task, or whether the recruitment is due to other factors (such as increased physiological responses). Indeed, only after active learning were visual seed regions (in blue in Figure 2.7) functionally connected to motor regions (in orange in Figure 2.7).
2.5.2 Neural Systems Supporting Symbol Processing in Adults
Symbols represent an interesting category with which to study questions of interactions among sensory and motor brain systems. Just as multisensory experiences help shape the functioning of the subcortical region of the superior colliculus, active interactions and their inherent multisensory nature have been shown to shape the functioning of cortical brain regions. During handwriting, the motor experience is spatiotemporally coordinated with the visual and somatosensory input. Although we focus largely on the visual modality, it is important to note that, to the brain, an action such as handwriting is marked by a particular set of multisensory inputs along with motor-to-sensory mappings, or multimodal mappings. For example, the creation of a capital “A” results in a slightly different multimodal pairing than the creation of a capital “G”. Similarly, writing a capital “A” requires a different motor command than writing a capital “G”, and each action looks and feels different. If we were to extrapolate single-neuron data and apply it to human systems neuroscience, we could speculate that the visual stimulation of a capital “A” invokes subcortical multisensory neurons tuned through multisensory experiences (or, in the case of visual-only learning, unimodal neurons) that pass information to cortical regions associated with the multimodal experience of handwriting.
Figure 2.7 Functional connectivity between the visual Lateral Occipital Complex (LOC) regions and motor regions in the brain after active learning. Note that the left side of the brain in the figure is the right hemisphere, due to the radiological coordinate convention. (From Butler and James [2013])
Indeed, the perception of individual letters has been shown to be supported by a neural system that encompasses both sensory and motor brain regions, often including ventral-temporal, frontal, and parietal cortices [Longcamp et al. 2003, 2008, 2011, 2014, James and Gauthier 2006, James and Atwood 2009]. We investigated the neural overlap between writing letters and perceiving letters and found that even when participants did not look at the letters they were writing, significant overlap in brain systems emerged for writing and perception tasks with letters (see Figure 2.8). This network included not only the usual visual regions observed when one perceives letters, but also an extended frontal network that included dorsal and ventral regions that are known to code actions and motor programs. As depicted in Figure 2.8, the putative visual region was active during letter writing, and the traditional motor areas were active during letter perception. These results suggest that action and perception, even in the case of symbol processing, automatically recruit an integrated multimodal network.
Figure 2.8 A schematic of results from James and Gauthier [2006], showing the overlap in brain activation patterns as a result of handwriting, perceiving, and imagining letters. (From James and Gauthier [2006])
This overlap in activation led us to the next obvious question: What are the experiences that create such a system? It is possible that any motor act with symbols would recruit such a system, but it could also be that the creation of the symbols by hand, feature-by-feature, may serve to pair the visual input and motor output during this behavior. To test this idea, we had adult participants learn a novel symbol system, pseudoletters, through two types of multimodal-multisensory learning (writing + vision and typing + vision) and one type of multisensory learning (vision + audition) [James and Atwood 2009]. Results demonstrated that only after learning the pseudoletters through handwriting was the left fusiform gyrus active for these novel forms (Figure 2.9). This finding was the first to show that this “visual” region was affected by how a stimulus was learned. More specifically, it was not responding only to the visual presentation of letters, but was responding to the visual presentation of a letter with which the observer had a specific type of motor experience: handwriting experience. Furthermore, the dorsal precentral gyrus seen in the above study for letter perception and writing was also active only after training that involved multimodal learning (writing + vision and typing + vision), but not after training that involved only multisensory (vision + audition) learning. Thus, the network of activation seen for symbol processing is formed by the multimodal experience of handwriting.
Figure 2.9 The difference between trained and untrained pseudoletters in the left fusiform as a function of training experience. Note that the left fusiform gyrus is a visual processing region, in which only handwriting (labeled as motor in this figure) experience resulted in greater activation after learning. (From James and Atwood [2009])
2.6 Neuroimaging Studies in Developing Populations
Methodological limitations preclude the use of fMRI in infants and toddlers who are fully awake, because it requires that individuals stay still for a minimum of 5 min at a time, usually for 30 min in total. Nonetheless, because we are very interested in how multimodal-multisensory learning serves to create neural systems, we routinely scan 4–6-year-old children in the hopes that we can observe the development of these neural systems. In the summary that follows, we outline studies that have investigated how multimodal networks emerge. In other words, what experiences are required for this functional network to be automatically activated?
2.6.1 The Multimodal Network Underlying Object and Verb Processing
There is now ample evidence that reading verbs recruits the motor systems that are used for performing the action that a verb describes [Hauk et al. 2003, Pulvermüller 2005, 2012, 2013]. These findings suggest that the perception of verbs re-activates motor systems that were used when the word was encountered during action, and, by extension, also suggest that only actions that we have performed will recruit the motor system during perception. However, not all verbs describe actions with which we have personal experience (e.g., skydiving), which raises the question: Do our action systems become linked with perception if we simply watch an action? The research in this realm is controversial. Some studies have shown that action systems are indeed recruited during action observation (e.g., [Gallese et al. 1996, Rizzolatti and Craighero 2004]), while others have shown that we must have personal experience performing the action for these systems to be automatically recruited [Lee et al. 2001].
We wished to address this question with children, given their relatively limited personal experience with actions and their associated verbs. We asked