{"id":1238,"date":"2015-09-09T19:20:56","date_gmt":"2015-09-09T19:20:56","guid":{"rendered":"https:\/\/courses.candelalearning.com\/intropsychmaster\/?post_type=chapter&#038;p=1238"},"modified":"2015-09-09T19:32:34","modified_gmt":"2015-09-09T19:32:34","slug":"multi-modal-perception","status":"publish","type":"chapter","link":"https:\/\/courses.lumenlearning.com\/suny-hccc-ss-151-1\/chapter\/multi-modal-perception\/","title":{"raw":"Multi-Modal Perception","rendered":"Multi-Modal Perception"},"content":{"raw":"<section>\r\n<p class=\"lead\">Most of the time, we perceive the world as a unified bundle of sensations from multiple sensory modalities. In other words, our perception is multimodal. This module provides an overview of multimodal perception, including information about its neurobiology and its psychological effects.<\/p>\r\n\r\n<\/section><section>\r\n<h1 id=\"learning-objectives\">Learning Objectives<\/h1>\r\n<ul>\r\n\t<li>Define the basic terminology and basic principles of multimodal perception.<\/li>\r\n\t<li>Describe the neuroanatomy of multisensory integration and name some of the regions of the cortex and midbrain that have been implicated in multisensory processing.<\/li>\r\n\t<li>Explain the difference between multimodal phenomena and crossmodal phenomena.<\/li>\r\n\t<li>Give examples of multimodal and crossmodal behavioral effects.<\/li>\r\n<\/ul>\r\n<\/section><section class=\"content\">\r\n<h1 id=\"perception-unified\">Perception: Unified<\/h1>\r\nAlthough it has been traditional to study the various senses independently, most of the time, perception operates in the context of information supplied by multiple sensory modalities at the same time. For example, imagine if you witnessed a car collision. You could describe the stimulus generated by this event by considering each of the senses independently; that is, as a set of\u00a0unimodal stimuli. Your eyes would be stimulated with patterns of light energy bouncing off the cars involved. 
Your ears would be stimulated with patterns of acoustic energy emanating from the collision. Your nose might even be stimulated by the smell of burning rubber or gasoline. However, all of this information would be relevant to the same thing: your perception of the car collision. Indeed, unless someone were to explicitly ask you to describe your perception in unimodal terms, you would most likely experience the event as a unified bundle of sensations from multiple senses. In other words, your perception would be multimodal. The question is whether the various sources of information involved in this multimodal stimulus are processed separately by the perceptual system or not.\r\n\r\nFor the last few decades, perceptual research has pointed to the importance of multimodal perception: the effects on the perception of events and objects in the world that are observed when there is information from more than one sensory modality. Most of this research indicates that, at some point in perceptual processing, information from the various sensory modalities is integrated. In other words, the information is combined and treated as a unitary representation of the world.\r\n<h1 id=\"questions-about-multimodal-perception\">Questions About Multimodal Perception<\/h1>\r\nSeveral theoretical problems are raised by multimodal perception. After all, the world is a \u201cblooming, buzzing confusion\u201d that constantly bombards our perceptual system with light, sound, heat, pressure, and so forth. To make matters more complicated, these stimuli come from multiple events spread out over both space and time. To return to our example: Let\u2019s say the car crash you observed happened on Main Street in your town. Your perception during the car crash might include a lot of stimulation that was <em>not<\/em> relevant to the car crash. 
For example, you might also overhear the conversation of a nearby couple, see a bird flying into a tree, or smell the delicious scent of freshly baked bread from a nearby bakery (or all three!). However, you would most likely not make the mistake of associating any of these stimuli with the car crash. In fact, we rarely combine the auditory stimuli associated with one event with the visual stimuli associated with another (although, under some unique circumstances\u2014such as ventriloquism\u2014we do). How is the brain able to take the information from separate sensory modalities and match it appropriately, so that stimuli that belong together stay together, while stimuli that do not belong together get treated separately? In other words, how does the perceptual system determine which unimodal stimuli must be integrated, and which must not?\r\n\r\nOnce unimodal stimuli have been appropriately integrated, we can further ask about the consequences of this integration: What are the effects of multimodal perception that would not be present if perceptual processing were only unimodal? Perhaps the most robust finding in the study of multimodal perception concerns this last question. No matter whether you are looking at the actions of neurons or the behavior of individuals, it has been found that responses to multimodal stimuli are typically greater than the combined response to either modality independently. In other words, if you presented the stimulus in one modality at a time and measured the response to each of these unimodal stimuli, you would find that adding them together would still not equal the response to the multimodal stimulus. 
This superadditive effect of multisensory integration indicates that there are consequences resulting from the integrated processing of multimodal stimuli.\r\n\r\nThe extent of the superadditive effect (sometimes referred to as multisensory enhancement) is determined by the strength of the response to the single stimulus modality with the biggest effect. To understand this concept, imagine someone speaking to you in a noisy environment (such as a crowded party). When discussing this type of multimodal stimulus, it is often useful to describe it in terms of its unimodal components: In this case, there is an auditory component (the sounds generated by the speech of the person speaking to you) and a visual component (the visual form of the face movements as the person speaks to you). In the crowded party, the auditory component of the person\u2019s speech might be difficult to process (because of the surrounding party noise). The potential for visual information about speech\u2014lipreading\u2014to help in understanding the speaker\u2019s message is, in this situation, quite large. However, if you were listening to that same person speak in a quiet library, the auditory portion would probably be sufficient for receiving the message, and the visual portion would help very little, if at all (Sumby &amp; Pollack, 1954). In general, for a stimulus with multimodal components, if the response to each component (on its own) is weak, then the opportunity for multisensory enhancement is very large. However, if one component\u2014by itself\u2014is sufficient to evoke a strong response, then the opportunity for multisensory enhancement is relatively small. 
This finding is called the Principle of Inverse Effectiveness (Stein &amp; Meredith, 1993) because the effectiveness of multisensory enhancement is inversely related to the unimodal response with the greatest effect.\r\n\r\nAnother important theoretical question about multimodal perception concerns the neurobiology that supports it. After all, at some point, the information from each sensory modality is definitely separated (e.g., light comes in through the eyes, and sound comes in through the ears). How does the brain take information from different neural systems (optic, auditory, etc.) and combine it? If our experience of the world is multimodal, then it must be the case that at some point during perceptual processing, the unimodal information coming from separate sensory organs\u2014such as the eyes, ears, skin\u2014is combined. A related question asks where in the brain this integration takes place. We turn to these questions in the next section.\r\n<h1 id=\"biological-bases-of-multimodal-perception\">Biological Bases of Multimodal Perception<\/h1>\r\n<h2 id=\"multisensory-neurons-and-neural-convergence\">Multisensory Neurons and Neural Convergence<\/h2>\r\nA surprisingly large number of brain regions in the midbrain and cerebral cortex are related to multimodal perception. These regions contain neurons that respond to stimuli from not just one, but multiple sensory modalities. For example, a region called the superior temporal sulcus contains single neurons that respond to both the visual and auditory components of speech (Calvert, 2001; Calvert, Hansen, Iversen, &amp; Brammer, 2001). These multisensory convergence zones are interesting, because they are a kind of neural intersection of information coming from the different senses. 
That is, neurons that are devoted to the processing of one sense at a time\u2014say vision or touch\u2014send their information to the convergence zones, where it is processed together.\r\n\r\nOne of the most closely studied multisensory convergence zones is the superior colliculus (Stein &amp; Meredith, 1993), which receives inputs from many different areas of the brain, including regions involved in the unimodal processing of visual and auditory stimuli (Edwards, Ginsburgh, Henkel, &amp; Stein, 1979). Interestingly, the superior colliculus is involved in the \u201corienting response,\u201d which is the behavior associated with moving one\u2019s eye gaze toward the location of a seen or heard stimulus. Given this function for the superior colliculus, it is hardly surprising that there are multisensory neurons found there (Stein &amp; Stanford, 2008).\r\n<h2 id=\"crossmodal-receptive-fields\">Crossmodal Receptive Fields<\/h2>\r\nThe details of the anatomy and function of multisensory neurons help to answer the question of how the brain integrates stimuli appropriately. In order to understand the details, we need to discuss a neuron\u2019s receptive field. All over the brain, neurons can be found that respond only to stimuli presented in a very specific region of the space immediately surrounding the perceiver. That region is called the neuron\u2019s receptive field. If a stimulus is presented in a neuron\u2019s receptive field, then that neuron responds by increasing or decreasing its firing rate. If a stimulus is presented outside of a neuron\u2019s receptive field, then there is no effect on the neuron\u2019s firing rate. Importantly, when two neurons send their information to a third neuron, the third neuron\u2019s receptive field is the combination of the receptive fields of the two input neurons. This is called neural convergence, because the information from multiple neurons converges on a single neuron. 
In the case of multisensory neurons, the convergence arrives from different sensory modalities. Thus, the receptive fields of multisensory neurons are the combination of the receptive fields of neurons located in different sensory pathways.\r\n\r\nNow, it could be the case that the neural convergence that results in multisensory neurons is set up in a way that ignores the locations of the input neurons\u2019 receptive fields. Amazingly, however, these\u00a0crossmodal receptive fields overlap. For example, a multisensory neuron in the superior colliculus might receive input from two unimodal neurons: one with a visual receptive field and one with an auditory receptive field. It has been found that the unimodal receptive fields refer to the same locations in space\u2014that is, the two unimodal neurons respond to stimuli in the same region of space. Crucially, the overlap in the crossmodal receptive fields plays a vital role in the integration of crossmodal stimuli. When the information from the separate modalities is coming from within these overlapping receptive fields, then it is treated as having come from the same location\u2014and the neuron responds with a superadditive (enhanced) response. So, part of the information that is used by the brain to combine multimodal inputs is the location in space from which the stimuli came.\r\n\r\nThis pattern is common across many multisensory neurons in multiple regions of the brain. Because of this, researchers have defined the spatial principle of multisensory integration: Multisensory enhancement is observed when the sources of stimulation are spatially related to one another. A related phenomenon concerns the <em>timing<\/em> of crossmodal stimuli. 
Enhancement effects are observed in multisensory neurons only when the inputs from different senses arrive within a short time of one another (e.g., Recanzone, 2003).\r\n<h2 id=\"multimodal-processing-in-unimodal-cortex\">Multimodal Processing in Unimodal Cortex<\/h2>\r\nMultisensory neurons have also been observed outside of multisensory convergence zones, in areas of the brain that were once thought to be dedicated to the processing of a single modality (unimodal cortex). For example, the primary visual cortex was long thought to be devoted to the processing of exclusively visual information. The primary visual cortex is the first stop in the cortex for information arriving from the eyes, so it processes very low-level information like edges. Interestingly, neurons have been found in the primary visual cortex that receive information from the primary auditory cortex (where sound information from the auditory pathway is processed) and from the superior temporal sulcus (a multisensory convergence zone mentioned above). This is remarkable because it indicates that the processing of visual information is, from a very early stage, influenced by auditory information.\r\n\r\nThere may be two ways for these multimodal interactions to occur. First, it could be that the processing of auditory information in relatively late stages of processing feeds back to influence low-level processing of visual information in unimodal cortex (McDonald, Teder-S\u00e4lej\u00e4rvi, Russo, &amp; Hillyard, 2003). 
Alternatively, it may be that areas of unimodal cortex contact each other directly (Driver &amp; Noesselt, 2008; Macaluso &amp; Driver, 2005), such that multimodal integration is a fundamental component of all sensory processing.\r\n\r\nIn fact, the large number of multisensory neurons distributed all around the cortex\u2014in multisensory convergence areas and in primary cortices\u2014has led some researchers to propose that a drastic reconceptualization of the brain is necessary (Ghazanfar &amp; Schroeder, 2006). They argue that the cortex should not be considered as being divided into isolated regions that process only one kind of sensory information. Rather, they propose that these areas only <em>prefer<\/em> to process information from specific modalities but engage in low-level multisensory processing whenever it is beneficial to the perceiver (Vasconcelos et al., 2011).\r\n<h1 id=\"behavioral-effects-of-multimodal-perception\">Behavioral Effects of Multimodal Perception<\/h1>\r\nAlthough neuroscientists tend to study very simple interactions between neurons, the fact that they\u2019ve found so many crossmodal areas of the cortex seems to hint that the way we experience the world is fundamentally multimodal. As discussed above, our intuitions about perception are consistent with this; it does not seem as though our perception of events is constrained to the perception of each sensory modality independently. Rather, we perceive a unified world, regardless of the sensory modality through which we perceive it.\r\n\r\nIt will probably require many more years of research before neuroscientists uncover all the details of the neural machinery involved in this unified experience. In the meantime, experimental psychologists have contributed to our understanding of multimodal perception through investigations of the behavioral effects associated with it. These effects fall into two broad classes. 
The first class\u2014multimodal phenomena\u2014concerns the binding of inputs from multiple sensory modalities and the effects of this binding on perception. The second class\u2014crossmodal phenomena\u2014concerns the influence of one sensory modality on the perception of another (Spence, Senkowski, &amp; R\u00f6der, 2009).\r\n<h1 id=\"multimodal-phenomena\">Multimodal Phenomena<\/h1>\r\n<h2 id=\"audiovisual-speech\">Audiovisual Speech<\/h2>\r\nMultimodal phenomena concern stimuli that generate simultaneous (or nearly simultaneous) information in more than one sensory modality. As discussed above, speech is a classic example of this kind of stimulus. When an individual speaks, she generates sound waves that carry meaningful information. If the perceiver is also looking at the speaker, then that perceiver also has access to\u00a0<em>visual<\/em> patterns that carry meaningful information. Of course, as anyone who has ever tried to lipread knows, there are limits on how informative visual speech information is. Even so, the visual speech pattern alone is sufficient for very robust speech perception. Most people assume that deaf individuals are much better at lipreading than individuals with normal hearing. It may come as a surprise to learn, however, that some individuals with normal hearing are also remarkably good at lipreading (sometimes called \u201cspeechreading\u201d). In fact, there is a wide range of speechreading ability in both normal hearing and deaf populations (Andersson, Lyxell, R\u00f6nnberg, &amp; Spens, 2001). However, the reasons for this wide range of performance are not well understood (Auer &amp; Bernstein, 2007; Bernstein, 2006; Bernstein, Auer, &amp; Tucker, 2001; Mohammed et al., 2005).\r\n\r\nHow does visual information about speech interact with auditory information about speech? 
One of the earliest investigations of this question examined the accuracy of recognizing spoken words presented in a noisy context, much like in the example above about talking at a crowded party. To study this phenomenon experimentally, some irrelevant noise (\u201cwhite noise\u201d\u2014which sounds like a radio tuned between stations) was presented to participants. Embedded in the white noise were spoken words, and the participants\u2019 task was to identify the words. There were two conditions: one in which only the auditory component of the words was presented (the \u201cauditory-alone\u201d condition), and one in which both the auditory and visual components were presented (the \u201caudiovisual\u201d condition). The noise levels were also varied, so that on some trials, the noise was very loud relative to the loudness of the words, and on other trials, the noise was very soft relative to the words. Sumby and Pollack (1954) found that the accuracy of identifying the spoken words was much higher for the audiovisual condition than it was in the auditory-alone condition. In addition, the pattern of results was consistent with the Principle of Inverse Effectiveness: The advantage gained by audiovisual presentation was highest when the auditory-alone condition performance was lowest (i.e., when the noise was loudest). At these noise levels, the audiovisual advantage was considerable: It was estimated that allowing the participant to see the speaker was equivalent to turning the volume of the noise down by over half. Clearly, the audiovisual advantage can have dramatic effects on behavior.\r\n\r\nAnother phenomenon using audiovisual speech is a very famous illusion called the \u201cMcGurk effect\u201d (named after one of its discoverers). 
In the classic formulation of the illusion, a movie is recorded of a speaker saying the syllables \u201cgaga.\u201d Another movie is made of the same speaker saying the syllables \u201cbaba.\u201d Then, the auditory portion of the \u201cbaba\u201d movie is dubbed onto the visual portion of the \u201cgaga\u201d movie. This combined stimulus is presented to participants, who are asked to report what the speaker in the movie said. McGurk and MacDonald (1976) reported that 98 percent of their participants reported hearing the syllable \u201cdada\u201d\u2014which was in neither the visual nor the auditory components of the stimulus. These results indicate that when visual and auditory information about speech is integrated, it can have profound effects on perception.\r\n\r\nhttps:\/\/youtu.be\/G-lN8vWm3m0?t=32s\r\n<h2 id=\"tactilevisual-interactions-in-body-ownership\">Tactile\/Visual Interactions in Body Ownership<\/h2>\r\nNot all multisensory integration phenomena concern speech, however. One particularly compelling multisensory illusion involves the integration of tactile and visual information in the perception of body ownership. In the \u201crubber hand illusion\u201d (Botvinick &amp; Cohen, 1998), an observer is situated so that one of his hands is not visible. A fake rubber hand is placed near the obscured hand, but in a visible location. The experimenter then uses a light paintbrush to simultaneously stroke the obscured hand and the rubber hand in the same locations. For example, if the middle finger of the obscured hand is being brushed, then the middle finger of the rubber hand will also be brushed. This sets up a correspondence between the tactile sensations (coming from the obscured hand) and the visual sensations (of the rubber hand). After a short time (around 10 minutes), participants report feeling as though the rubber hand \u201cbelongs\u201d to them; that is, that the rubber hand is a part of their body. 
This feeling can be so strong that surprising the participant by hitting the rubber hand with a hammer often leads to a reflexive withdrawing of the obscured hand\u2014even though it is in no danger at all. It appears, then, that our awareness of our own bodies may be the result of multisensory integration.\r\n\r\nhttps:\/\/youtu.be\/sxwn1w7MJvk\r\n<h2 id=\"crossmodal-phenomena\">Crossmodal Phenomena<\/h2>\r\nCrossmodal phenomena are distinguished from multimodal phenomena in that they concern the influence one sensory modality has on the perception of another.\r\n<h2 id=\"visual-influence-on-auditory-localization\">Visual Influence on Auditory Localization<\/h2>\r\nA famous (and commonly experienced) crossmodal illusion is referred to as \u201cthe ventriloquism effect.\u201d When a ventriloquist appears to make a puppet speak, she fools the listener into thinking that the location of the origin of the speech sounds is at the puppet\u2019s mouth. In other words, instead of localizing the auditory signal (coming from the mouth of a ventriloquist) to the correct place, our perceptual system localizes it incorrectly (to the mouth of the puppet).\r\n\r\nWhy might this happen? Consider the information available to the observer about the location of the two components of the stimulus: the sounds from the ventriloquist\u2019s mouth and the visual movement of the puppet\u2019s mouth. Whereas it is very obvious where the visual stimulus is coming from (because you can see it), it is much more difficult to pinpoint the location of the sounds. In other words, the very precise visual location of mouth movement apparently overrides the less well-specified location of the auditory information. More generally, it has been found that the location of a wide variety of auditory stimuli can be affected by the simultaneous presentation of a visual stimulus (Vroomen &amp; De Gelder, 2004). 
In addition, the ventriloquism effect has been demonstrated for objects in motion: The motion of a visual object can influence the perceived direction of motion of a moving sound source (Soto-Faraco, Kingstone, &amp; Spence, 2003).\r\n<h2 id=\"auditory-influence-on-visual-perception\">Auditory Influence on Visual Perception<\/h2>\r\nA related illusion demonstrates the opposite effect: sounds influencing visual perception. In the double-flash illusion, a participant is asked to stare at a central point on a computer monitor. On the extreme edge of the participant\u2019s vision, a white circle is briefly flashed one time. There is also a simultaneous auditory event: either one beep or two beeps in rapid succession. Remarkably, participants report seeing <em>two<\/em> visual flashes when the flash is accompanied by two beeps; the same stimulus is seen as a single flash in the context of a single beep or no beep (Shams, Kamitani, &amp; Shimojo, 2000). In other words, the number of heard beeps influences the number of seen flashes!\r\n\r\nAnother illusion involves the perception of collisions between two circles (called \u201cballs\u201d) moving toward each other and continuing through each other. Such stimuli can be perceived as either two balls moving through each other or as a collision between the two balls that then bounce off each other in opposite directions. Sekuler, Sekuler, and Lau (1997) showed that the presentation of an auditory stimulus at the time of contact between the two balls strongly influenced the perception of a collision event. In this case, the perceived sound influences the interpretation of the ambiguous visual stimulus.\r\n<h2 id=\"crossmodal-speech\">Crossmodal Speech<\/h2>\r\nSeveral crossmodal phenomena have also been discovered for speech stimuli. 
These crossmodal speech effects usually show altered perceptual processing of unimodal stimuli (e.g., acoustic patterns) by virtue of prior experience with the alternate unimodal stimulus (e.g., optical patterns). For example, Rosenblum, Miller, and Sanchez (2007) conducted an experiment examining the ability to become familiar with a person\u2019s voice. Their first interesting finding was unimodal: Much like what happens when someone repeatedly hears a person speak, perceivers can become familiar with the \u201cvisual voice\u201d of a speaker. That is, they can become familiar with the person\u2019s speaking style simply by seeing that person speak. Even more astounding was their crossmodal finding: Familiarity with this <em>visual<\/em> information also led to increased recognition of the speaker\u2019s <em>auditory<\/em> speech, to which participants had never had exposure.\r\n\r\nSimilarly, it has been shown that when perceivers see a speaking face, they can identify the (auditory-alone) voice of that speaker, and vice versa (Kamachi, Hill, Lander, &amp; Vatikiotis-Bateson, 2003; Lachs &amp; Pisoni, 2004a, 2004b, 2004c; Rosenblum, Smith, Nichols, Lee, &amp; Hale, 2006). In other words, the visual form of a speaker engaged in the act of speaking appears to contain information about what that speaker should sound like. Perhaps more surprisingly, the auditory form of speech seems to contain information about what the speaker should look like.\r\n<h1 id=\"conclusion\">Conclusion<\/h1>\r\nIn this module, we have reviewed some of the main evidence and findings concerning the role of multimodal perception in our experience of the world. It appears that our nervous system (and the cortex in particular) contains considerable architecture for the processing of information arriving from multiple senses. 
Given this neurobiological setup, and the diversity of behavioral phenomena associated with multimodal stimuli, it is likely that the investigation of multimodal perception will continue to be a topic of interest in the field of experimental perception for many years to come.\r\n\r\n<\/section>\r\n\r\n<section>\r\n<h1 id=\"outside-resources\">Outside Resources<\/h1>\r\n<dl class=\"noba-chapter-resources\"><dt>Article: A review of the neuroanatomy and methods associated with multimodal perception:<\/dt><dd><a href=\"http:\/\/dx.doi.org\/10.1016\/j.neubiorev.2011.04.015\">http:\/\/dx.doi.org\/10.1016\/j.neubiorev.2011.04.015<\/a><\/dd><dt>Journal: Experimental Brain Research Special issue: Crossmodal processing<\/dt><dd><a href=\"http:\/\/www.springerlink.com\/content\/0014-4819\/198\/2-3\">http:\/\/www.springerlink.com\/content\/0014-4819\/198\/2-3<\/a><\/dd><dt>Video: McGurk demo<\/dt><dd><a href=\"https:\/\/www.youtube.com\/watch?v=aFPtc8BVdJk\">https:\/\/www.youtube.com\/watch?v=aFPtc8BVdJk<\/a><\/dd><dt>Video: The Rubber Hand Illusion<\/dt><dd>\r\n<div class=\"video\"><a href=\"https:\/\/www.youtube.com\/watch?v=sxwn1w7MJvk\">https:\/\/www.youtube.com\/watch?v=sxwn1w7MJvk<\/a><\/div>\r\n<\/dd><dt>Web: Double-flash illusion demo<\/dt><dd><a href=\"http:\/\/www.cns.atr.jp\/~kmtn\/soundInducedIllusoryFlash2\/\">http:\/\/www.cns.atr.jp\/~kmtn\/soundInducedIllusoryFlash2\/<\/a><\/dd><\/dl><\/section><section>\r\n<h1 id=\"discussion-questions\">Discussion Questions<\/h1>\r\n<ol>\r\n\t<li>The extensive network of multisensory areas and neurons in the cortex implies that much perceptual processing occurs in the context of multiple inputs. Could the processing of unimodal information ever be useful? 
Why or why not?<\/li>\r\n\t<li>Some researchers have argued that the Principle of Inverse Effectiveness (PoIE) results from ceiling effects: Multisensory enhancement cannot take place when one modality is sufficient for processing because in such cases it is not possible for processing to be enhanced (because performance is already at the \u201cceiling\u201d). On the other hand, other researchers claim that the PoIE stems from the perceptual system\u2019s ability to assess the relative value of stimulus cues, and to use the most reliable sources of information to construct a representation of the outside world. What do you think? Could these two possibilities ever be teased apart? What kinds of experiments might one conduct to try to get at this issue?<\/li>\r\n\t<li>In the late 17th century, a scientist named William Molyneux asked the famous philosopher John Locke a question relevant to modern studies of multisensory processing. The question was this: Imagine a person who has been blind since birth, and who is able, by virtue of the sense of touch, to identify three dimensional shapes such as spheres or pyramids. Now imagine that this person suddenly receives the ability to see. Would the person, without using the sense of touch, be able to identify those same shapes visually? Can modern research in multimodal perception help answer this question? Why or why not? How do the studies about crossmodal phenomena inform us about the answer to this question?<\/li>\r\n<\/ol>\r\n<\/section>","rendered":"<section>\n<p class=\"lead\">Most of the time, we perceive the world as a unified bundle of sensations from multiple sensory modalities. In other words, our perception is multimodal. 
This module provides an overview of multimodal perception, including information about its neurobiology and its psychological effects.<\/p>\n<\/section>\n<section>\n<h1 id=\"learning-objectives\">Learning Objectives<\/h1>\n<ul>\n<li>Define the basic terminology and basic principles of multimodal perception.<\/li>\n<li>Describe the neuroanatomy of multisensory integration and name some of the regions of the cortex and midbrain that have been implicated in multisensory processing.<\/li>\n<li>Explain the difference between multimodal phenomena and crossmodal phenomena.<\/li>\n<li>Give examples of multimodal and crossmodal behavioral effects.<\/li>\n<\/ul>\n<\/section>\n<section class=\"content\">\n<h1 id=\"perception-unified\">Perception: Unified<\/h1>\n<p>Although it has been traditional to study the various senses independently, most of the time, perception operates in the context of information supplied by multiple sensory modalities at the same time. For example, imagine if you witnessed a car collision. You could describe the stimulus generated by this event by considering each of the senses independently; that is, as a set of\u00a0unimodal stimuli. Your eyes would be stimulated with patterns of light energy bouncing off the cars involved. Your ears would be stimulated with patterns of acoustic energy emanating from the collision. Your nose might even be stimulated by the smell of burning rubber or gasoline. However, all of this information would be relevant to the same thing: your perception of the car collision. Indeed, unless someone were to explicitly ask you to describe your perception in unimodal terms, you would most likely experience the event as a unified bundle of sensations from multiple senses. In other words, your perception would be multimodal. 
The question is whether the various sources of information involved in this multimodal stimulus are processed separately by the perceptual system or not.<\/p>\n<p>For the last few decades, perceptual research has pointed to the importance of multimodal perception: the effects on the perception of events and objects in the world that are observed when there is information from more than one sensory modality. Most of this research indicates that, at some point in perceptual processing, information from the various sensory modalities is integrated. In other words, the information is combined and treated as a unitary representation of the world.<\/p>\n<h1 id=\"questions-about-multimodal-perception\">Questions About Multimodal Perception<\/h1>\n<p>Several theoretical problems are raised by multimodal perception. After all, the world is a \u201cblooming, buzzing confusion\u201d that constantly bombards our perceptual system with light, sound, heat, pressure, and so forth. To make matters more complicated, these stimuli come from multiple events spread out over both space and time. To return to our example: Let\u2019s say the car crash you observed happened on Main Street in your town. Your perception during the car crash might include a lot of stimulation that was <em>not<\/em> relevant to the car crash. For example, you might also overhear the conversation of a nearby couple, see a bird flying into a tree, or smell the delicious scent of freshly baked bread from a nearby bakery (or all three!). However, you would most likely not make the mistake of associating any of these stimuli with the car crash. In fact, we rarely combine the auditory stimuli associated with one event with the visual stimuli associated with another (although, under some unique circumstances\u2014such as ventriloquism\u2014we do). 
How is the brain able to take the information from separate sensory modalities and match it appropriately, so that stimuli that belong together stay together, while stimuli that do not belong together get treated separately? In other words, how does the perceptual system determine which unimodal stimuli must be integrated, and which must not?<\/p>\n<p>Once unimodal stimuli have been appropriately integrated, we can further ask about the consequences of this integration: What are the effects of multimodal perception that would not be present if perceptual processing were only unimodal? Perhaps the most robust finding in the study of multimodal perception concerns this last question. No matter whether you are looking at the actions of neurons or the behavior of individuals, it has been found that responses to multimodal stimuli are typically greater than the combined response to either modality independently. In other words, if you presented the stimulus in one modality at a time and measured the response to each of these unimodal stimuli, you would find that adding them together would still not equal the response to the multimodal stimulus. This superadditive effect of multisensory integration indicates that there are consequences resulting from the integrated processing of multimodal stimuli.<\/p>\n<p>The extent of the superadditive effect (sometimes referred to as multisensory enhancement) is determined by the strength of the response to the single stimulus modality with the biggest effect. To understand this concept, imagine someone speaking to you in a noisy environment (such as a crowded party). When discussing this type of multimodal stimulus, it is often useful to describe it in terms of its unimodal components: In this case, there is an auditory component (the sounds generated by the speech of the person speaking to you) and a visual component (the visual form of the face movements as the person speaks to you). 
In the crowded party, the auditory component of the person\u2019s speech might be difficult to process (because of the surrounding party noise). The potential for visual information about speech\u2014lipreading\u2014to help in understanding the speaker\u2019s message is, in this situation, quite large. However, if you were listening to that same person speak in a quiet library, the auditory portion would probably be sufficient for receiving the message, and the visual portion would help very little, if at all (Sumby &amp; Pollack, 1954). In general, for a stimulus with multimodal components, if the response to each component (on its own) is weak, then the opportunity for multisensory enhancement is very large. However, if one component\u2014by itself\u2014is sufficient to evoke a strong response, then the opportunity for multisensory enhancement is relatively small. This finding is called the Principle of Inverse Effectiveness (Stein &amp; Meredith, 1993) because the effectiveness of multisensory enhancement is inversely related to the unimodal response with the greatest effect.<\/p>\n<p>Another important theoretical question about multimodal perception concerns the neurobiology that supports it. After all, at some point, the information from each sensory modality is definitely separated (e.g., light comes in through the eyes, and sound comes in through the ears). How does the brain take information from different neural systems (optic, auditory, etc.) and combine it? If our experience of the world is multimodal, then it must be the case that at some point during perceptual processing, the unimodal information coming from separate sensory organs\u2014such as the eyes, ears, skin\u2014is combined. A related question asks where in the brain this integration takes place. 
We turn to these questions in the next section.<\/p>\n<h1 id=\"biological-bases-of-multimodal-perception\">Biological Bases of Multimodal Perception<\/h1>\n<h2 id=\"multisensory-neurons-and-neural-convergence\">Multisensory Neurons and Neural Convergence<\/h2>\n<p>A surprisingly large number of brain regions in the midbrain and cerebral cortex are related to multimodal perception. These regions contain neurons that respond to stimuli from not just one, but multiple sensory modalities. For example, a region called the superior temporal sulcus contains single neurons that respond to both the visual and auditory components of speech (Calvert, 2001; Calvert, Hansen, Iversen, &amp; Brammer, 2001). These multisensory convergence zones are interesting because they are a kind of neural intersection of information coming from the different senses. That is, neurons that are devoted to the processing of one sense at a time\u2014say vision or touch\u2014send their information to the convergence zones, where it is processed together.<\/p>\n<p>One of the most closely studied multisensory convergence zones is the superior colliculus (Stein &amp; Meredith, 1993), which receives inputs from many different areas of the brain, including regions involved in the unimodal processing of visual and auditory stimuli (Edwards, Ginsburgh, Henkel, &amp; Stein, 1979). Interestingly, the superior colliculus is involved in the \u201corienting response,\u201d which is the behavior associated with moving one\u2019s eye gaze toward the location of a seen or heard stimulus. Given this function for the superior colliculus, it is hardly surprising that there are multisensory neurons found there (Stein &amp; Stanford, 2008).<\/p>\n<h2 id=\"crossmodal-receptive-fields\">Crossmodal Receptive Fields<\/h2>\n<p>The details of the anatomy and function of multisensory neurons help to answer the question of how the brain integrates stimuli appropriately. 
In order to understand the details, we need to discuss a neuron\u2019s receptive field. All over the brain, neurons can be found that respond only to stimuli presented in a very specific region of the space immediately surrounding the perceiver. That region is called the neuron\u2019s receptive field. If a stimulus is presented in a neuron\u2019s receptive field, then that neuron responds by increasing or decreasing its firing rate. If a stimulus is presented outside of a neuron\u2019s receptive field, then there is no effect on the neuron\u2019s firing rate. Importantly, when two neurons send their information to a third neuron, the third neuron\u2019s receptive field is the combination of the receptive fields of the two input neurons. This is called neural convergence, because the information from multiple neurons converges on a single neuron. In the case of multisensory neurons, the convergence arrives from different sensory modalities. Thus, the receptive fields of multisensory neurons are the combination of the receptive fields of neurons located in different sensory pathways.<\/p>\n<p>Now, it could be the case that the neural convergence that results in multisensory neurons is set up in a way that ignores the locations of the input neurons\u2019 receptive fields. Amazingly, however, these\u00a0crossmodal receptive fields overlap. For example, a multisensory neuron in the superior colliculus might receive input from two unimodal neurons: one with a visual receptive field and one with an auditory receptive field. It has been found that the unimodal receptive fields refer to the same locations in space\u2014that is, the two unimodal neurons respond to stimuli in the same region of space. Crucially, the overlap in the crossmodal receptive fields plays a vital role in the integration of crossmodal stimuli. 
When the information from the separate modalities comes from within these overlapping receptive fields, it is treated as having come from the same location\u2014and the neuron responds with a superadditive (enhanced) response. So, part of the information that is used by the brain to combine multimodal inputs is the location in space from which the stimuli came.<\/p>\n<p>This pattern is common across many multisensory neurons in multiple regions of the brain. Because of this, researchers have defined the spatial principle of multisensory integration: Multisensory enhancement is observed when the sources of stimulation are spatially related to one another. A related phenomenon concerns the <em>timing<\/em> of crossmodal stimuli. Enhancement effects are observed in multisensory neurons only when the inputs from different senses arrive within a short time of one another (e.g., Recanzone, 2003).<\/p>\n<h2 id=\"multimodal-processing-in-unimodal-cortex\">Multimodal Processing in Unimodal Cortex<\/h2>\n<p>Multisensory neurons have also been observed outside of multisensory convergence zones, in areas of the brain that were once thought to be dedicated to the processing of a single modality (unimodal cortex). For example, the primary visual cortex was long thought to be devoted to the processing of exclusively visual information. The primary visual cortex is the first stop in the cortex for information arriving from the eyes, so it processes very low-level information like edges. Interestingly, neurons have been found in the primary visual cortex that receive information from the primary auditory cortex (where sound information from the auditory pathway is processed) and from the superior temporal sulcus (a multisensory convergence zone mentioned above). 
This is remarkable because it indicates that the processing of visual information is, from a very early stage, influenced by auditory information.<\/p>\n<p>There may be two ways for these multimodal interactions to occur. First, it could be that the processing of auditory information in relatively late stages of processing feeds back to influence low-level processing of visual information in unimodal cortex (McDonald, Teder-S\u00e4lej\u00e4rvi, Russo, &amp; Hillyard, 2003). Alternatively, it may be that areas of unimodal cortex contact each other directly (Driver &amp; Noesselt, 2008; Macaluso &amp; Driver, 2005), such that multimodal integration is a fundamental component of all sensory processing.<\/p>\n<p>In fact, the large number of multisensory neurons distributed all around the cortex\u2014in multisensory convergence areas and in primary cortices\u2014has led some researchers to propose that a drastic reconceptualization of the brain is necessary (Ghazanfar &amp; Schroeder, 2006). They argue that the cortex should not be considered as being divided into isolated regions that process only one kind of sensory information. Rather, they propose that these areas only <em>prefer<\/em> to process information from specific modalities but engage in low-level multisensory processing whenever it is beneficial to the perceiver (Vasconcelos et al., 2011).<\/p>\n<h1 id=\"behavioral-effects-of-multimodal-perception\">Behavioral Effects of Multimodal Perception<\/h1>\n<p>Although neuroscientists tend to study very simple interactions between neurons, the fact that they\u2019ve found so many crossmodal areas of the cortex seems to hint that the way we experience the world is fundamentally multimodal. As discussed above, our intuitions about perception are consistent with this; it does not seem as though our perception of events is constrained to the perception of each sensory modality independently. 
Rather, we perceive a unified world, regardless of the sensory modality through which we perceive it.<\/p>\n<p>It will probably require many more years of research before neuroscientists uncover all the details of the neural machinery involved in this unified experience. In the meantime, experimental psychologists have contributed to our understanding of multimodal perception through investigations of the behavioral effects associated with it. These effects fall into two broad classes. The first class\u2014multimodal phenomena\u2014concerns the binding of inputs from multiple sensory modalities and the effects of this binding on perception. The second class\u2014crossmodal phenomena\u2014concerns the influence of one sensory modality on the perception of another (Spence, Senkowski, &amp; Roder, 2009).<\/p>\n<h1 id=\"multimodal-phenomena\">Multimodal Phenomena<\/h1>\n<h2 id=\"audiovisual-speech\">Audiovisual Speech<\/h2>\n<p>Multimodal phenomena concern stimuli that generate simultaneous (or nearly simultaneous) information in more than one sensory modality. As discussed above, speech is a classic example of this kind of stimulus. When an individual speaks, she generates sound waves that carry meaningful information. If the perceiver is also looking at the speaker, then that perceiver also has access to\u00a0<em>visual<\/em> patterns that carry meaningful information. Of course, as anyone who has ever tried to lipread knows, there are limits on how informative visual speech information is. Even so, the visual speech pattern alone is sufficient for very robust speech perception. Most people assume that deaf individuals are much better at lipreading than individuals with normal hearing. It may come as a surprise to learn, however, that some individuals with normal hearing are also remarkably good at lipreading (sometimes called \u201cspeechreading\u201d). 
In fact, there is a wide range of speechreading ability in both normal-hearing and deaf populations (Andersson, Lyxell, R\u00f6nnberg, &amp; Spens, 2001). However, the reasons for this wide range of performance are not well understood (Auer &amp; Bernstein, 2007; Bernstein, 2006; Bernstein, Auer, &amp; Tucker, 2001; Mohammed et al., 2005).<\/p>\n<p>How does visual information about speech interact with auditory information about speech? One of the earliest investigations of this question examined the accuracy of recognizing spoken words presented in a noisy context, much like in the example above about talking at a crowded party. To study this phenomenon experimentally, some irrelevant noise (\u201cwhite noise\u201d\u2014which sounds like a radio tuned between stations) was presented to participants. Embedded in the white noise were spoken words, and the participants\u2019 task was to identify the words. There were two conditions: one in which only the auditory component of the words was presented (the \u201cauditory-alone\u201d condition), and one in which both the auditory and visual components were presented (the \u201caudiovisual\u201d condition). The noise levels were also varied, so that on some trials, the noise was very loud relative to the loudness of the words, and on other trials, the noise was very soft relative to the words. Sumby and Pollack (1954) found that the accuracy of identifying the spoken words was much higher for the audiovisual condition than it was in the auditory-alone condition. In addition, the pattern of results was consistent with the Principle of Inverse Effectiveness: The advantage gained by audiovisual presentation was highest when the auditory-alone condition performance was lowest (i.e., when the noise was loudest). At these noise levels, the audiovisual advantage was considerable: It was estimated that allowing the participant to see the speaker was equivalent to turning the volume of the noise down by over half. 
Clearly, the audiovisual advantage can have dramatic effects on behavior.<\/p>\n<p>Another phenomenon using audiovisual speech is a very famous illusion called the \u201cMcGurk effect\u201d (named after one of its discoverers). In the classic formulation of the illusion, a movie is recorded of a speaker saying the syllables \u201cgaga.\u201d Another movie is made of the same speaker saying the syllables \u201cbaba.\u201d Then, the auditory portion of the \u201cbaba\u201d movie is dubbed onto the visual portion of the \u201cgaga\u201d movie. This combined stimulus is presented to participants, who are asked to report what the speaker in the movie said. McGurk and MacDonald (1976) reported that 98 percent of their participants reported hearing the syllable \u201cdada\u201d\u2014which was in neither the visual nor the auditory components of the stimulus. These results indicate that when visual and auditory information about speech is integrated, it can have profound effects on perception.<\/p>\n<p><iframe loading=\"lazy\" id=\"oembed-1\" title=\"Try this bizarre audio illusion! \ud83d\udc41\ufe0f\ud83d\udc42\ud83d\ude2e - BBC\" width=\"500\" height=\"281\" src=\"https:\/\/www.youtube.com\/embed\/G-lN8vWm3m0?start=32&#38;feature=oembed\" frameborder=\"0\" allowfullscreen=\"allowfullscreen\"><\/iframe><\/p>\n<h2 id=\"tactilevisual-interactions-in-body-ownership\">Tactile\/Visual Interactions in Body Ownership<\/h2>\n<p>Not all multisensory integration phenomena concern speech, however. One particularly compelling multisensory illusion involves the integration of tactile and visual information in the perception of body ownership. In the \u201crubber hand illusion\u201d (Botvinick &amp; Cohen, 1998), an observer is situated so that one of his hands is not visible. A fake rubber hand is placed near the obscured hand, but in a visible location. The experimenter then uses a light paintbrush to simultaneously stroke the obscured hand and the rubber hand in the same locations. 
For example, if the middle finger of the obscured hand is being brushed, then the middle finger of the rubber hand will also be brushed. This sets up a correspondence between the tactile sensations (coming from the obscured hand) and the visual sensations (of the rubber hand). After a short time (around 10 minutes), participants report feeling as though the rubber hand \u201cbelongs\u201d to them; that is, that the rubber hand is a part of their body. This feeling can be so strong that surprising the participant by hitting the rubber hand with a hammer often leads to a reflexive withdrawing of the obscured hand\u2014even though it is in no danger at all. It appears, then, that our awareness of our own bodies may be the result of multisensory integration.<\/p>\n<p><iframe loading=\"lazy\" id=\"oembed-2\" title=\"The Rubber Hand Illusion - Horizon: Is Seeing Believing? - BBC Two\" width=\"500\" height=\"281\" src=\"https:\/\/www.youtube.com\/embed\/sxwn1w7MJvk?feature=oembed&#38;rel=0\" frameborder=\"0\" allowfullscreen=\"allowfullscreen\"><\/iframe><\/p>\n<h2 id=\"crossmodal-phenomena\">Crossmodal Phenomena<\/h2>\n<p>Crossmodal phenomena are distinguished from multimodal phenomena in that they concern the influence one sensory modality has on the perception of another.<\/p>\n<h2 id=\"visual-influence-on-auditory-localization\">Visual Influence on Auditory Localization<\/h2>\n<p>A famous (and commonly experienced) crossmodal illusion is referred to as \u201cthe ventriloquism effect.\u201d When a ventriloquist appears to make a puppet speak, she fools the listener into thinking that the location of the origin of the speech sounds is at the puppet\u2019s mouth. In other words, instead of localizing the auditory signal (coming from the mouth of a ventriloquist) to the correct place, our perceptual system localizes it incorrectly (to the mouth of the puppet).<\/p>\n<p>Why might this happen? 
Consider the information available to the observer about the location of the two components of the stimulus: the sounds from the ventriloquist\u2019s mouth and the visual movement of the puppet\u2019s mouth. Whereas it is very obvious where the visual stimulus is coming from (because you can see it), it is much more difficult to pinpoint the location of the sounds. In other words, the very precise visual location of mouth movement apparently overrides the less well-specified location of the auditory information. More generally, it has been found that the location of a wide variety of auditory stimuli can be affected by the simultaneous presentation of a visual stimulus (Vroomen &amp; De Gelder, 2004). In addition, the ventriloquism effect has been demonstrated for objects in motion: The motion of a visual object can influence the perceived direction of motion of a moving sound source (Soto-Faraco, Kingstone, &amp; Spence, 2003).<\/p>\n<h2 id=\"auditory-influence-on-visual-perception\">Auditory Influence on Visual Perception<\/h2>\n<p>A related illusion demonstrates the opposite effect: sounds influencing visual perception. In the double-flash illusion, a participant is asked to stare at a central point on a computer monitor. On the extreme edge of the participant\u2019s vision, a white circle is briefly flashed one time. There is also a simultaneous auditory event: either one beep or two beeps in rapid succession. Remarkably, participants report seeing <em>two<\/em> visual flashes when the flash is accompanied by two beeps; the same stimulus is seen as a single flash in the context of a single beep or no beep (Shams, Kamitani, &amp; Shimojo, 2000). In other words, the number of heard beeps influences the number of seen flashes!<\/p>\n<p>Another illusion involves the perception of collisions between two circles (called \u201cballs\u201d) moving toward each other and continuing through each other. 
Such stimuli can be perceived as either two balls moving through each other or as a collision between the two balls that then bounce off each other in opposite directions. Sekuler, Sekuler, and Lau (1997) showed that the presentation of an auditory stimulus at the time of contact between the two balls strongly influenced the perception of a collision event. In this case, the perceived sound influences the interpretation of the ambiguous visual stimulus.<\/p>\n<h2 id=\"crossmodal-speech\">Crossmodal Speech<\/h2>\n<p>Several crossmodal phenomena have also been discovered for speech stimuli. These crossmodal speech effects usually show altered perceptual processing of unimodal stimuli (e.g., acoustic patterns) by virtue of prior experience with the alternate unimodal stimulus (e.g., optical patterns). For example, Rosenblum, Miller, and Sanchez (2007) conducted an experiment examining the ability to become familiar with a person\u2019s voice. Their first interesting finding was unimodal: Much like what happens when someone repeatedly hears a person speak, perceivers can become familiar with the \u201cvisual voice\u201d of a speaker. That is, they can become familiar with the person\u2019s speaking style simply by seeing that person speak. Even more astounding was their crossmodal finding: Familiarity with this <em>visual<\/em> information also led to increased recognition of the speaker\u2019s <em>auditory<\/em> speech, to which participants had never had exposure.<\/p>\n<p>Similarly, it has been shown that when perceivers see a speaking face, they can identify the (auditory-alone) voice of that speaker, and vice versa (Kamachi, Hill, Lander, &amp; Vatikiotis-Bateson, 2003; Lachs &amp; Pisoni, 2004a, 2004b, 2004c; Rosenblum, Smith, Nichols, Lee, &amp; Hale, 2006). In other words, the visual form of a speaker engaged in the act of speaking appears to contain information about what that speaker should sound like. 
Perhaps more surprisingly, the auditory form of speech seems to contain information about what the speaker should look like.<\/p>\n<h1 id=\"conclusion\">Conclusion<\/h1>\n<p>In this module, we have reviewed some of the main evidence and findings concerning the role of multimodal perception in our experience of the world. It appears that our nervous system (and the cortex in particular) contains considerable architecture for the processing of information arriving from multiple senses. Given this neurobiological setup, and the diversity of behavioral phenomena associated with multimodal stimuli, it is likely that the investigation of multimodal perception will continue to be a topic of interest in the field of experimental perception for many years to come.<\/p>\n<\/section>\n<p>&nbsp;<\/p>\n<section>\n<h1 id=\"outside-resources\">Outside Resources<\/h1>\n<dl class=\"noba-chapter-resources\">\n<dt>Article: A review of the neuroanatomy and methods associated with multimodal perception:<\/dt>\n<dd><a href=\"http:\/\/dx.doi.org\/10.1016\/j.neubiorev.2011.04.015\">http:\/\/dx.doi.org\/10.1016\/j.neubiorev.2011.04.015<\/a><\/dd>\n<dt>Journal: Experimental Brain Research Special issue: Crossmodal processing<\/dt>\n<dd><a href=\"http:\/\/www.springerlink.com\/content\/0014-4819\/198\/2-3\">http:\/\/www.springerlink.com\/content\/0014-4819\/198\/2-3<\/a><\/dd>\n<dt>Video: McGurk demo<\/dt>\n<dd><a href=\"https:\/\/www.youtube.com\/watch?v=aFPtc8BVdJk\">https:\/\/www.youtube.com\/watch?v=aFPtc8BVdJk<\/a><\/dd>\n<dt>Video: The Rubber Hand Illusion<\/dt>\n<dd>\n<div class=\"video\"><a href=\"https:\/\/www.youtube.com\/watch?v=sxwn1w7MJvk\">https:\/\/www.youtube.com\/watch?v=sxwn1w7MJvk<\/a><\/div>\n<\/dd>\n<dt>Web: Double-flash illusion demo<\/dt>\n<dd><a href=\"http:\/\/www.cns.atr.jp\/~kmtn\/soundInducedIllusoryFlash2\/\">http:\/\/www.cns.atr.jp\/~kmtn\/soundInducedIllusoryFlash2\/<\/a><\/dd>\n<\/dl>\n<\/section>\n<section>\n<h1 id=\"discussion-questions\">Discussion 
Questions<\/h1>\n<ol>\n<li>The extensive network of multisensory areas and neurons in the cortex implies that much perceptual processing occurs in the context of multiple inputs. Could the processing of unimodal information ever be useful? Why or why not?<\/li>\n<li>Some researchers have argued that the Principle of Inverse Effectiveness (PoIE) results from ceiling effects: Multisensory enhancement cannot take place when one modality is sufficient for processing because in such cases it is not possible for processing to be enhanced (because performance is already at the \u201cceiling\u201d). On the other hand, other researchers claim that the PoIE stems from the perceptual system\u2019s ability to assess the relative value of stimulus cues, and to use the most reliable sources of information to construct a representation of the outside world. What do you think? Could these two possibilities ever be teased apart? What kinds of experiments might one conduct to try to get at this issue?<\/li>\n<li>In the late 17th century, a scientist named William Molyneux asked the famous philosopher John Locke a question relevant to modern studies of multisensory processing. The question was this: Imagine a person who has been blind since birth, and who is able, by virtue of the sense of touch, to identify three dimensional shapes such as spheres or pyramids. Now imagine that this person suddenly receives the ability to see. Would the person, without using the sense of touch, be able to identify those same shapes visually? Can modern research in multimodal perception help answer this question? Why or why not? 
How do the studies about crossmodal phenomena inform us about the answer to this question?<\/li>\n<\/ol>\n<\/section>\n\n\t\t\t <section class=\"citations-section\" role=\"contentinfo\">\n\t\t\t <h3>Candela Citations<\/h3>\n\t\t\t\t\t <div>\n\t\t\t\t\t\t <div id=\"citation-list-1238\">\n\t\t\t\t\t\t\t <div class=\"licensing\"><div class=\"license-attribution-dropdown-subheading\">CC licensed content, Shared previously<\/div><ul class=\"citation-list\"><li>Multi-Modal Perception. <strong>Authored by<\/strong>: Lorin Lachs. <strong>Provided by<\/strong>:  California State University, Fresno. <strong>Located at<\/strong>: <a target=\"_blank\" href=\"http:\/\/nobaproject.com\/modules\/multi-modal-perception\">http:\/\/nobaproject.com\/modules\/multi-modal-perception<\/a>. <strong>Project<\/strong>: The Noba Project. <strong>License<\/strong>: <em><a target=\"_blank\" rel=\"license\" href=\"https:\/\/creativecommons.org\/licenses\/by-nc-sa\/4.0\/\">CC BY-NC-SA: Attribution-NonCommercial-ShareAlike<\/a><\/em><\/li><\/ul><div class=\"license-attribution-dropdown-subheading\">All rights reserved content<\/div><ul class=\"citation-list\"><li>The McGurk Effect. <strong>Provided by<\/strong>: BBC. <strong>Located at<\/strong>: <a target=\"_blank\" href=\"https:\/\/youtu.be\/G-lN8vWm3m0?t=32s\">https:\/\/youtu.be\/G-lN8vWm3m0?t=32s<\/a>. <strong>License<\/strong>: <em>Other<\/em>. <strong>License Terms<\/strong>: Standard YouTube License<\/li><li>The Rubber Hand Illusion. <strong>Provided by<\/strong>: BBC. <strong>Located at<\/strong>: <a target=\"_blank\" href=\"https:\/\/youtu.be\/sxwn1w7MJvk\">https:\/\/youtu.be\/sxwn1w7MJvk<\/a>. <strong>License<\/strong>: <em>Other<\/em>. 
<strong>License Terms<\/strong>: Standard YouTube License<\/li><\/ul><\/div>\n\t\t\t\t\t\t <\/div>\n\t\t\t\t\t <\/div>\n\t\t\t <\/section>","protected":false},"author":74,"menu_order":13,"template":"","meta":{"_candela_citation":"[{\"type\":\"cc\",\"description\":\"Multi-Modal Perception\",\"author\":\"Lorin Lachs\",\"organization\":\" California State University, Fresno\",\"url\":\"http:\/\/nobaproject.com\/modules\/multi-modal-perception\",\"project\":\"The Noba Project\",\"license\":\"cc-by-nc-sa\",\"license_terms\":\"\"},{\"type\":\"copyrighted_video\",\"description\":\"The McGurk Effect\",\"author\":\"\",\"organization\":\"BBC\",\"url\":\"https:\/\/youtu.be\/G-lN8vWm3m0?t=32s\",\"project\":\"\",\"license\":\"other\",\"license_terms\":\"Standard YouTube License\"},{\"type\":\"copyrighted_video\",\"description\":\"The Rubber Hand Illusion\",\"author\":\"\",\"organization\":\"BBC\",\"url\":\"https:\/\/youtu.be\/sxwn1w7MJvk\",\"project\":\"\",\"license\":\"other\",\"license_terms\":\"Standard YouTube 
License\"}]","CANDELA_OUTCOMES_GUID":"","pb_show_title":"on","pb_short_title":"","pb_subtitle":"","pb_authors":[],"pb_section_license":""},"chapter-type":[],"contributor":[],"license":[],"class_list":["post-1238","chapter","type-chapter","status-publish","hentry"],"part":514,"_links":{"self":[{"href":"https:\/\/courses.lumenlearning.com\/suny-hccc-ss-151-1\/wp-json\/pressbooks\/v2\/chapters\/1238","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/courses.lumenlearning.com\/suny-hccc-ss-151-1\/wp-json\/pressbooks\/v2\/chapters"}],"about":[{"href":"https:\/\/courses.lumenlearning.com\/suny-hccc-ss-151-1\/wp-json\/wp\/v2\/types\/chapter"}],"author":[{"embeddable":true,"href":"https:\/\/courses.lumenlearning.com\/suny-hccc-ss-151-1\/wp-json\/wp\/v2\/users\/74"}],"version-history":[{"count":3,"href":"https:\/\/courses.lumenlearning.com\/suny-hccc-ss-151-1\/wp-json\/pressbooks\/v2\/chapters\/1238\/revisions"}],"predecessor-version":[{"id":1241,"href":"https:\/\/courses.lumenlearning.com\/suny-hccc-ss-151-1\/wp-json\/pressbooks\/v2\/chapters\/1238\/revisions\/1241"}],"part":[{"href":"https:\/\/courses.lumenlearning.com\/suny-hccc-ss-151-1\/wp-json\/pressbooks\/v2\/parts\/514"}],"metadata":[{"href":"https:\/\/courses.lumenlearning.com\/suny-hccc-ss-151-1\/wp-json\/pressbooks\/v2\/chapters\/1238\/metadata\/"}],"wp:attachment":[{"href":"https:\/\/courses.lumenlearning.com\/suny-hccc-ss-151-1\/wp-json\/wp\/v2\/media?parent=1238"}],"wp:term":[{"taxonomy":"chapter-type","embeddable":true,"href":"https:\/\/courses.lumenlearning.com\/suny-hccc-ss-151-1\/wp-json\/pressbooks\/v2\/chapter-type?post=1238"},{"taxonomy":"contributor","embeddable":true,"href":"https:\/\/courses.lumenlearning.com\/suny-hccc-ss-151-1\/wp-json\/wp\/v2\/contributor?post=1238"},{"taxonomy":"license","embeddable":true,"href":"https:\/\/courses.lumenlearning.com\/suny-hccc-ss-151-1\/wp-json\/wp\/v2\/license?post=1238"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true
}]}}