Research Article

Neural Correlates of Perceiving Emotional Faces and Bodies in Developmental Prosopagnosia: An Event-Related fMRI-Study

  • Jan Van den Stock,

    Affiliations: Laboratory of Cognitive and Affective Neuroscience, Tilburg University, Tilburg, The Netherlands, Old Age Psychiatry Department, University Hospitals Leuven, Leuven, Belgium

  • Wim A. C. van de Riet,

    Affiliation: Laboratory of Cognitive and Affective Neuroscience, Tilburg University, Tilburg, The Netherlands

  • Ruthger Righart,

    Affiliation: Laboratory of Cognitive and Affective Neuroscience, Tilburg University, Tilburg, The Netherlands

  • Beatrice de Gelder

    Affiliations: Laboratory of Cognitive and Affective Neuroscience, Tilburg University, Tilburg, The Netherlands, Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts, United States of America

  • Published: September 17, 2008
  • DOI: 10.1371/journal.pone.0003195


Abstract

Many people experience transient difficulties in recognizing faces, but only a small number of them cannot recognize their family members when meeting them unexpectedly. Such face blindness is associated with serious problems in everyday life. A better understanding of the neuro-functional basis of impaired face recognition may be achieved by a careful comparison with an equally unique object category and by adding a more realistic setting involving neutral faces as well as facial expressions. We used event-related functional magnetic resonance imaging (fMRI) to investigate the neuro-functional basis of perceiving faces and bodies in three developmental prosopagnosics (DPs) and matched healthy controls. Our approach involved materials consisting of neutral faces and bodies as well as faces and bodies expressing fear or happiness. The first main result is that the presence of emotional information has a different effect in the patient vs. the control group in the fusiform face area (FFA): neutral faces trigger lower activation in the DP group than in the control group, while activation for facial expressions is the same in both groups. The second main result is that, compared to controls, DPs have increased activation for bodies in the inferior occipital gyrus (IOG) and for neutral faces in the extrastriate body area (EBA), indicating that body- and face-sensitive processes are less categorically segregated in DP. Taken together, our study shows the importance of using naturalistic emotional stimuli for a better understanding of developmental face deficits.


Introduction

Recognizing faces of family and friends usually proceeds effortlessly. Yet a minority of people have difficulties telling apart whom they are meeting or remembering whom they met previously when they can only go by the visual memory of the face. These problems can be quite dramatic, even to the point of failing to recognize the face of one's own spouse or child, or for that matter one's own face. The original reports of face recognition deficits for which the term prosopagnosia [1] was coined concerned cases of brain damage sustained in adulthood. More recently there have been reports of face recognition deficits that do not appear to be associated with any known neurological history. Although there are still only a few systematic reports of this condition, many more cases are described now than a decade ago, and some authors have argued that as many as 2% of the population suffer from face recognition difficulties [2]. By analogy with developmental dyslexia, these cases are now commonly referred to as developmental prosopagnosia (DP), referring to the possible origin of the adult face recognition deficit in anomalous development of full face recognition skills. This behavioral deficit may include an anomaly in the putative congenital basis involved in the acquisition of the skill, but so far very little is known about this genetic basis and its importance for explaining behavioral deficits [3].

Recent research on behavioral face recognition deficits and their neural basis has followed the leads from reports on the neural basis of face recognition in normal observers, as mainly revealed in fMRI studies over the last decade. There is now a consensus in the literature that face recognition is implemented in a network of brain areas [4], [5]. Among these, an area in the fusiform gyrus (FG), labeled the fusiform face area (FFA) [6], [7], has attracted most attention. Next to this area, the role of the inferior occipital gyrus (IOG) is repeatedly stressed in normal (e.g. [8]–[10]) and anomalous [11] face recognition. But it is fair to say that the functional significance of these two main areas for person recognition and its deficits is not yet entirely clear.

Investigations of the neuro-functional correlates of DP with fMRI have yielded inconsistent results [11]–[16] (see Table 1 for an overview). The first fMRI-study including a DP case, by Hadjikhani and de Gelder [11], found no face-specific activation in these two areas. A similar pattern was observed with another DP case [15]. On the other hand, other studies reported normal face-specific activation in developmental prosopagnosics (DPs) despite their severe behavioral deficits in face recognition [12]–[14], [16]. These findings suggest that intact functioning of the FFA and IOG is necessary, but not sufficient, for successful face recognition.


Table 1. Results from fMRI-studies on prosopagnosia.


A face provides many different kinds of information (gender, age, emotion, familiarity, attractiveness, etc.), and this information is called upon and used in different ways in daily life: a context may require only rapid detection that a face is present or, on the contrary, full recognition of all facial attributes including name retrieval. It is therefore worth stressing that the contextual requirements and the task settings are very important for evaluating face recognition problems and for understanding their neuro-functional basis and possible deficits. A finely tuned comparison of face recognition skills with other object recognition skills at the behavioral and neuro-functional level requires comparable task settings, whether the object categories to be matched are faces or any other suitable category [17]–[21]. Since faces convey many different kinds of information, it has so far been a daunting task to find a matching category to use as control stimuli. Previous approaches to finding the best matching category have tended to explore the physical similarity dimension (for example, using a continuum of more or less face-like stimuli), the perceptual one, or the functional one (for example, expertise with one or another specific object category). This has fed an ongoing debate about whether face processing mechanisms are qualitatively different from the processing mechanisms for objects (modularity hypothesis) [22], or whether relative face specificity instead reflects the level of perceptual expertise with the stimulus category (expertise hypothesis) [9], [23]. As a matter of fact, there are very few objects other than faces for which strong claims about category-specific representation have been made. One exception concerns houses: several studies report that this object category differentially activates a region around the collateral sulcus [24]–[26].

An interesting object category not used so far is that of human bodies. It has been shown in normal subjects that perceiving human bodies or body parts activates an area in extrastriate cortex, labeled the extrastriate body area (EBA) [27]. More recently, a second body-specific area was defined in the FG [28], [29]. This body-sensitive area in FG overlaps at least partially with the face-sensitive one and has been termed the fusiform body area (FBA). In parallel, recent findings show that close similarities between face and body perception also exist at the level of perceptual mechanisms, as revealed by the inversion effect (a decline in performance for inverted compared to upright stimuli that is more pronounced for faces than for other object categories [30]): the same inversion effect has been reported for bodies [31], [32] (for reviews, see [33], [34]).

These behavioral and neuro-functional similarities between perceiving faces and bodies in normals, and the fact that bodies represent a distinct yet very closely related object category, raise the issue of how bodies are processed in DP. A study by Duchaine et al. [35] presented natural faces and computer-generated neutral body postures to test face and body identity recognition in a DP patient, using a sequential identity matching paradigm involving a minimal memory component. The performance of the patient was impaired for the faces but within the normal range for the bodies, suggesting a dissociation between face and body processing mechanisms under these task settings. Another study used event-related potentials (ERPs) to investigate face and body perception in four DPs and found abnormal brain activation in the early time windows of the EEG (around 170 ms) for both faces and bodies in three of the four DPs [36].

A second main objective of the present study is to investigate how the neural underpinnings of face and body processing in prosopagnosia are influenced by emotional information in the face and the body. As a matter of fact, the face-sensitive area in FG is well known from investigations of face recognition using neutral faces, but it also figures prominently in research on the neural basis of recognizing facial expressions. The presence of an emotional expression adds realism to the face but may also be an interesting developmental factor. Studies with younger subjects have predominantly reported higher activation for fearful faces compared to neutral faces [37]–[41], but a recent study with both adolescents and adults found a reverse pattern in the FFA, namely higher activation for neutral than for fearful faces [42]. The mechanism of this emotional modulation in the FFA may be based on feedback loops with the amygdala [39], [41]. A similar explanation has been proposed for the increased activation in FG sensitive to body images representing an emotional expression [28].

So far, the evidence concerning the neural correlates of processing emotional faces in DP is scarce. One study by de Gelder et al. [43] investigated this issue in acquired prosopagnosics (prosopagnosia occurring after brain damage). The included patients had lesions in the FG, the IOG, or both. The results showed that the patients more strongly activated other face-sensitive areas, like the superior temporal sulcus (STS) or the amygdala, when perceiving facial expressions compared to neutral faces. The patients were also more accurate and faster in processing emotional faces compared to neutral faces, a finding that has been reported previously [44]–[46]. Since the patients in de Gelder et al. [43] had lesions in the ventral occipito-temporal cortex, the question arises how these brain areas respond to emotional information in prosopagnosics with severe face recognition problems but no known brain anomalies. To investigate this issue we presented the participants with neutral, fearful and happy facial and bodily expressions.



Materials and Methods

Participants

The DPs were recruited after they had contacted us via our website or through reports in the popular press. All DPs report life-long problems in recognizing people and typically complain about difficulties when meeting familiar persons unexpectedly, and about the ensuing social problems. AM (female) is a 54-year-old housewife. She reports problems in recognizing others when meeting them outside the usual context, for example when she meets her parents in the supermarket. HV (male) is 43 years old and teaches writing and coaches in communication training. He has experienced severe face recognition problems for as long as he can remember. LW (male) is a 48-year-old university professor with longstanding difficulties, for example in recognizing colleagues at conferences and in recognizing students. None of the DPs had a neurological history, and their structural MR scans showed no abnormalities as judged independently by four experienced neurologists. The group of four control subjects was matched with the DP group on age, sex and educational level. All participants gave written informed consent according to the Declaration of Helsinki and the study was approved by the local ethics committee (CMO region Arnhem-Nijmegen, The Netherlands).

Neuropsychological testing

All participants were presented with an extensive face recognition battery. Visual object recognition and face recognition were assessed with standard clinical tests, and additional face and object perception experiments were run in sessions preceding the fMRI measurements. The neuropsychological tests and normative data are described elsewhere [36]. Face matching and face memory were tested with the Benton Face Recognition Test (BFRT) [47] and the Warrington Face Memory Test (WFMT) [48]. We used a computerized version of the latter test to obtain information about the speed-accuracy trade-off. Basic visual functions were measured with the Birmingham Object Recognition Battery (BORB) (line length, size, orientation, gap, minimal feature match, foreshortened views and object decision) [49]. To investigate different aspects of face perception in detail, all participants were administered additional face and object perception experiments which have proven useful in previous investigations of face recognition and have provided insight into processing strategies in prosopagnosia [4], [17], [36], [43], [50], [51].

As in our previous studies on prosopagnosia, the behavioral pattern of normal inversion effects for faces, compared to another single object category, was measured with the faces and shoes task [17]. Participants were required to select the probe that corresponded to the identity of a simultaneously presented target. The target was always a frontal picture, and the two probes underneath consisted of pictures in three-quarter profile. Faces and shoes were presented upright and inverted (for details, see [17], [50]). Feature-based processing was tested with a part-to-whole matching task, which required participants to select the face-part probe (i.e., mouth or eyes) that was the same as that in the simultaneously presented whole face. The same procedure was followed for house-part probes (i.e., door or upper window) that had to be matched to the corresponding part in a whole-house stimulus. Faces and houses were presented once upright and once inverted [4], [43]. Participants were instructed to respond as accurately and rapidly as possible. Accuracy and mean response times were calculated for each test. We compared accuracy and response times for upright stimuli with those for inverted stimuli in one-tailed paired-sample t-tests. A significantly lower accuracy or longer response time for inverted stimuli is defined as an inversion effect, whereas a higher accuracy or shorter response time for inverted stimuli is defined as a paradoxical inversion effect. Data of the control group were normalized and z-scores were obtained for every DP.
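As an illustration only (not the authors' analysis code), the one-tailed inversion-effect test and the single-case z-scores described above might be computed as follows; the function names and sample data are hypothetical:

```python
import numpy as np
from scipy import stats

def inversion_effect(upright, inverted):
    """One-tailed paired-sample t-test: is performance worse for inverted
    stimuli? A significant drop is the classic inversion effect; a
    significant gain would be a paradoxical inversion effect."""
    t, p_two_sided = stats.ttest_rel(upright, inverted)
    # Convert the two-sided p-value to a one-sided one (upright > inverted)
    p_one_sided = p_two_sided / 2 if t > 0 else 1 - p_two_sided / 2
    return t, p_one_sided

def patient_z(patient_score, control_scores):
    """Z-score of a single case relative to the control distribution."""
    controls = np.asarray(control_scores, dtype=float)
    return (patient_score - controls.mean()) / controls.std(ddof=1)
```

With accuracy data, a strongly negative `patient_z` (e.g. Z < −2) would flag performance impaired relative to controls.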

fMRI measurements

Stimulus materials.

The face and body stimuli were used previously in an fMRI investigation of the neural substrates of face and body perception in neurologically intact observers [52]. Pictures of fearful, happy and neutral faces were taken from the Karolinska Directed Emotional Faces database [53]. From our own database, pictures of fearful and happy bodily expressions, instrumental (emotionally neutral) bodily expressions (pouring water into a glass) and houses were used. We used houses as stimuli for the control condition, because they constitute a single object category that has been extensively explored in other studies and is known to elicit activation in specific brain areas [24]–[26]. Instrumental body expressions were used because, like emotional expressions, these displays elicit action representation and implicit movement [54], and hence constitute a balanced comparison category for the emotional expressions. All images of faces and bodies were previously validated regarding emotional expression (minimum recognition rate: 75%). For further details concerning the validation procedure, see [52].

A total of 42 images were used, six in every condition (fearful faces, happy faces, neutral faces, fearful bodies, happy bodies, neutral bodies and houses). There was no identity overlap between faces and bodies or between the emotions. Faces were fitted inside a gray oval shape, which masked external aspects of the faces. Body and house stimuli were cut out, removing all background. The faces of the body stimuli were covered with a gray opaque mask. Additionally, one picture of a chair was used as an oddball stimulus. All stimuli were resized to 300 pixels in height and presented on a gray background.


Design and procedure.

The design was adapted from our previous study [52]. In order not to exacerbate the face handicap of the DP group, we modified the experimental paradigm from a facial expression categorization task to an oddball detection task, thereby also avoiding selective attention to the faces with an emotional expression. Moreover, this procedure ensures that activation profiles in the conditions of interest are not contaminated by motor responses, while still providing control data on attention to the stimuli. A trial started with the presentation of a fixation cross (200 ms), followed by a stimulus (500 ms) and finally by a gray screen (2200 ms) (see Figure 1). All stimuli were presented six times in random order in an oddball paradigm (participants were instructed to press a response button when a chair was shown). The session consisted of 288 trials (7 conditions×6 identities×6 presentations, plus 36 oddball trials). Additionally, 96 null events consisting of a gray screen lasting the whole trial length were included to reduce stimulus onset predictability and to establish a baseline [55]. The experiment was preceded by a short practice session which used a different set of face and body stimuli.
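The trial bookkeeping above (7 conditions × 6 identities × 6 presentations, 36 chair oddballs, 96 null events, 2900-ms trials) can be sketched as follows; the condition labels and the seeded shuffle are illustrative assumptions, not the actual presentation script:

```python
import random

FIX_MS, STIM_MS, BLANK_MS = 200, 500, 2200  # fixation, stimulus, gray screen

def build_trial_list(seed=0):
    """Assemble the randomized event list: 7 conditions x 6 identities x 6
    presentations, plus 36 oddball (chair) trials and 96 null events."""
    conditions = ["fearful_face", "happy_face", "neutral_face",
                  "fearful_body", "happy_body", "neutral_body", "house"]
    trials = [(cond, identity) for cond in conditions
              for identity in range(6) for _ in range(6)]
    trials += [("oddball_chair", 0)] * 36
    trials += [("null", None)] * 96   # gray screen for the whole trial
    random.Random(seed).shuffle(trials)
    return trials
```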


Figure 1. Schematic representation of the experimental design.

Participants were instructed to press the response button when a chair was presented.


Participants lay supine in the scanner with head movements minimized by an adjustable padded head holder. Stimuli were projected onto a mirror above the participant's head. Responses were recorded via an MR-compatible keypad (MRI Devices, Waukesha, WI), positioned on the right side of the participant's abdomen. A PC running Presentation 9.70 (Neurobehavioral Systems, San Francisco, CA) controlled stimulus presentation and response registration.

Image Acquisition

Images were acquired using a 1.5 Tesla Sonata scanner (Siemens, Erlangen, Germany). Blood oxygenation level dependent (BOLD) sensitive functional images were acquired using a single-shot gradient echo-planar imaging (EPI) sequence [TR (repetition time) = 3790 ms, TE (echo time) = 40 ms, 43 transversal slices, ascending acquisition, 2.5 mm slice thickness with 0.25 mm gap, FA (flip angle) = 90°, FOV (field of view) = 32 cm]. An automatic shimming procedure was performed before each scanning session. A total of 312 functional volumes were collected for each participant. Following the experimental session, structural images were acquired using an MP-RAGE sequence [TR/TE/TI (inversion time) = 2250 ms/3.93 ms/850 ms, voxel size 1×1×1 mm].


Results

Neuropsychological testing

All DPs scored outside the normal range on the BFRT and/or the WFMT, but none showed an anomalous score on more than one subtest of the BORB, suggesting that the visual recognition difficulties of the DPs as measured by these two clinical tests are not due to the basic visual perception problems diagnosed by the BORB (see Table 2). AM scored significantly below the mean on the BFRT and WFMT, for both accuracies and response times. HV had a borderline performance on the BFRT and prolonged response times on the WFMT. LW scored within the normal range on the BFRT, but on the WFMT both his accuracy and response times were anomalous.


Table 2. Results from neuropsychological testing.


To measure face and object recognition in a comparable way and assess relative configural processing routines, we compared upright and inverted stimulus matching for each object category [17], [36]. The control group showed an inversion effect for matching faces in both accuracy (t(10) = 1.892, p<.05) and response time (t(10) = 3.164, p<.005). The controls showed no inversion effect for matching shoes. For the DPs, the response times were high, as previously reported [4], [36]. AM was impaired in matching both upright (Z<−5.75) and inverted (Z<−3.39) faces. Her response times showed a paradoxical inversion effect for matching faces and a normal inversion effect for matching shoes. HV had accuracies within the normal range, but displayed a normal inversion pattern in the response times for matching faces and a paradoxical inversion effect in the response times for matching shoes. LW showed reduced accuracy for matching inverted faces (Z<−2.82) and inverted shoes (Z<−2.74). His response times for matching upright faces were prolonged (Z>2.39), while his latencies for inverted faces were average. He displayed the normal inversion pattern for matching faces and shoes in both accuracy and response times.

Feature-based matching was tested with the faces and houses task (see [4] for details). The control group showed a normal inversion effect for matching face-parts in accuracy (t(10) = 1.746, p<.05) and in response time (t(10) = 4.754, p<.001). However, they showed a paradoxical inversion effect for matching house-parts in accuracy (t(10) = 1.743, p<.05) and response time (t(10) = 2.667, p<.01). AM showed lower accuracies for matching both upright (Z = −11.81) and inverted (Z = −5.36) face-parts. Her latencies for matching upright face-parts (Z = 2.51) and house-parts (Z = 2.06) were higher than normal. She displayed a paradoxical inversion effect in the accuracy data for matching face-parts and house-parts, and in the response times for matching house-parts. Her response times for matching face-parts showed a normal inversion pattern. HV had a reduced accuracy for matching upright face-parts (Z = −2.00). He also had highly prolonged response times for upright faces (Z = 13.46) and, to a lesser extent, for inverted faces (Z = 8.27). Latencies for upright houses (Z = 2.28) and inverted houses (Z = 3.31) were also prolonged, but less so than for faces. HV showed paradoxical inversion effects in both the accuracy and response times for face-part and house-part matching. LW's accuracy for matching upright (Z = −2.76) and inverted (Z = −3.16) faces was impaired. His responses for matching upright face-parts (Z = 8.87), inverted face-parts (Z = 5.13), upright house-parts (Z = 4.12) and inverted house-parts (Z = 4.61) were prolonged. LW's accuracy data showed a normal inversion pattern for matching face-parts and a paradoxical inversion pattern for matching house-parts. He displayed a paradoxical inversion effect in his response times for matching face-parts and house-parts.

fMRI analysis

All participants performed flawlessly on the oddball detection task.


Imaging data were analyzed using BrainVoyager QX (Brain Innovation, Maastricht, the Netherlands). The first five volumes of each functional run were discarded to allow for T1 equilibration. Pre-processing of the functional data included 3D motion correction, slice scan time correction, temporal smoothing (high-pass filter: 3 cycles per time course) and spatial smoothing with an isotropic 6-mm full-width-half-maximum (FWHM) Gaussian kernel. Images were spatially normalized to Talairach space [56] and resampled to a voxel size of 1×1×1 mm. Statistical analysis was based on the general linear model (GLM), with each condition defined as a separate predictor. Null events were modeled explicitly.
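As a toy sketch of the condition-wise GLM step (omitting the HRF convolution, filtering and motion correction that the actual BrainVoyager pipeline handles), beta estimates per predictor can be obtained by ordinary least squares; all names here are hypothetical:

```python
import numpy as np

def fit_glm(bold, design):
    """Ordinary least-squares GLM: one beta per predictor and voxel.

    bold   : (n_volumes, n_voxels) array of voxel time courses
    design : (n_volumes, n_predictors) matrix, one column per condition
             (null events modeled explicitly as their own column)
    """
    betas, *_ = np.linalg.lstsq(design, bold, rcond=None)
    return betas  # (n_predictors, n_voxels)
```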

ROI definition.

We used a “split-half” method for defining regions of interest (ROI), in order to be sure that the observed effects are not due to a selection bias [57]. The even trials were used to define the ROIs and the odd trials were used for the within ROI analysis. To localize face-sensitive activation in FG, i.e. FFA, we contrasted the even trials of all face conditions (fearful, happy and neutral) with houses (all trials) and identified significant voxels in each subject within a restricted region of the FG (Talairach y-coordinate between −25 and −65). The voxel set comprising this activation determined the ROI, in this case the FFA. The same procedure was followed in a restricted region of the IOG (Talairach y-coordinate <−70). To identify body sensitive areas, we compared the even trials of all bodies (fearful, happy and instrumental) with houses and mapped the selective activation in a restricted region of FG to determine the FBA (Talairach y-coordinate between −25 and −65) and the region around the junction of the middle temporal and middle occipital gyrus to determine the EBA (Talairach x-coordinate between 25 and 60; y-coordinate between −55 and −75; z-coordinate between −15 and 15). We used a liberal threshold (p<.05, uncorrected). Since previous studies reported that cortical face and body selective regions are often weaker or even absent in the left hemisphere [6], [29], we restricted the analysis to the right hemisphere.
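The split-half logic might be sketched as follows: even-trial responses define the contrast, and only suprathreshold voxels inside the anatomical search region survive. This is a simplified, hypothetical version of the per-subject procedure, with a two-sample t-test standing in for the GLM contrast:

```python
import numpy as np
from scipy import stats

def define_roi(face_even, house_even, anat_mask, p_thresh=0.05):
    """Boolean ROI mask from an even-trial faces-vs-houses contrast.

    face_even, house_even : (n_trials, n_voxels) response estimates
    anat_mask             : boolean (n_voxels,) anatomical search region,
                            e.g. FG voxels with Talairach y in [-65, -25]
    """
    t, p = stats.ttest_ind(face_even, house_even, axis=0)
    # Keep voxels responding more to faces at the liberal threshold
    return (t > 0) & (p < p_thresh) & anat_mask
```

The odd trials would then be analyzed only within this mask, which is what protects the within-ROI analysis from selection bias.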

Smoothed activation maps are projected on the inflated right hemisphere of one subject. For every ROI, the activation maps of the control subjects are collapsed and the result is displayed by the black contours. This procedure allows visualization of the spatial extent of the activation across different subjects. Activation of the individual DPs is plotted in color (see Figures 2 to 5). The Talairach coordinates of the activation maps are shown in Table 3.


Figure 2. Face-specific activation in right FG when comparing faces (fearful/happy/neutral) with houses.

Left: Areas are shown on an inflated right hemisphere. Activation maps of the control subjects are collapsed and displayed by the black contours. Activation of the individual DPs is plotted in color. Right: beta-values by condition, group and DP. Error bars represent one standard error of the mean (SEM). Conditions represent from left to right: fearful faces, happy faces, neutral faces, fearful bodies, happy bodies, neutral bodies and houses. White columns display the average value of the three patients. Black columns show the average value of the controls. Triangles represent the individual values of the DPs.


Figure 3. Face-specific activation in right IOG when comparing faces (fearful/happy/neutral) with houses.

Left: Areas are shown on an inflated right hemisphere. Activation maps of the control subjects are collapsed and displayed by the black contours. Activation of the individual DPs is plotted in color. Right: beta-values by condition, group and DP. Error bars represent one SEM. Conditions represent from left to right: fearful faces, happy faces, neutral faces, fearful bodies, happy bodies, neutral bodies and houses. White columns display the average value of the three patients. Black columns show the average value of the controls. Triangles represent the individual values of the DPs.


Figure 4. Body-specific activation in right FG when comparing bodies (fearful/happy/instrumental) with houses.

Left: Areas are shown on an inflated right hemisphere. Activation maps of the control subjects are collapsed and displayed by the black contours. Activation of the individual DPs is plotted in color. The purple indicates overlap between red (AM) and blue (LW). Right: beta-values by condition, group and DP. Error bars represent one SEM. Conditions represent from left to right: fearful faces, happy faces, neutral faces, fearful bodies, happy bodies, neutral bodies and houses. White columns display the average value of the three patients. Black columns show the average value of the controls. Triangles represent the individual values of the DPs.


Figure 5. Body-specific activation in right EBA when comparing bodies (fearful/happy/instrumental) with houses.

Left: Areas are shown on an inflated right hemisphere. Activation maps of the control subjects are collapsed and displayed by the black contours. Activation of the individual DPs is plotted in color. The purple indicates overlap between red (AM) and blue (LW). Right: beta-values by condition, group and DP. Error bars represent one SEM. Conditions represent from left to right: fearful faces, happy faces, neutral faces, fearful bodies, happy bodies, neutral bodies and houses. White columns display the average value of the three patients. Black columns show the average value of the controls. Triangles represent the individual values of the DPs.


Table 3. Number of voxels (N) and Talairach coordinates (range) of ROIs.

Effects of emotional content.

The analyses were performed on the beta-values of the odd trials of the conditions. To investigate differences between the DP group and the control group, we used independent-samples t-tests with degrees of freedom corrected for unequal variances.
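The group comparison described above corresponds to Welch's t-test (no equal-variance assumption, with Satterthwaite-corrected degrees of freedom, hence the fractional df reported below). A minimal sketch with hypothetical beta-difference scores:

```python
from scipy import stats

def group_difference(dp_scores, control_scores):
    """Independent-samples t-test corrected for unequal variances
    (equal_var=False selects Welch's test with fractional df)."""
    return stats.ttest_ind(dp_scores, control_scores, equal_var=False)
```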


Figure 2 shows the smoothed face-specific activation (left) and the beta-values of all conditions (right) in FG. The controls show the expected age-dependent higher activation for neutral than for fearful expressions [42]. We calculated the difference between fearful faces and neutral faces, and this difference was significantly larger in the control group (t(4.946) = −2.583, p<.05). The difference between happy faces and neutral faces was marginally significantly different between groups (t(4.906) = −2.051, p = .097). Since previous studies showed lower activation for faces in DPs compared to controls [11], [15], we used one-tailed post-hoc t-tests to compare the activation levels of the three face conditions between the groups. This revealed a marginally significant difference for the neutral faces (t(4.980) = 1.929, p = .051).


Figure 3 shows the smoothed face-specific activation (left) in IOG and the beta-values of all conditions (right). A t-test on the difference between fearful faces and neutral faces showed no significant difference between the groups (t(4.510) = .0233, p = .826). The difference between happy faces and neutral faces was also not significantly different between the DPs and controls (t(4.989) = −1.235, p = .272).


Figure 4 shows the smoothed body-specific activation (left) and the beta-values of all conditions (right) in FBA. The difference between either fearful bodies (t(4.475) = −.088, p = .934) or happy bodies (t(4.567) = .321, p = .762) and instrumental bodies was not significantly different between the groups.


Figure 5 shows the smoothed body-specific activation (left) and the beta-values of all conditions (right) in EBA. The difference between fearful bodies and instrumental bodies did not differ between groups (t(3.786) = 1.153, p = .317). A t-test on the difference between happy and instrumental bodies revealed no significant between-group difference (t(3.722) = .339, p = .573).

Effects of categorical selectivity.

To investigate the selectivity of face and body processing in the brain, we calculated the difference between the mean of the three face conditions and the mean of the three body conditions in FFA and IOG. A comparison using t-tests showed that in IOG this difference was smaller in the control group, but it did not reach statistical significance (t(3.961) = 2.122, p = .102). We also calculated the difference between the mean of all body conditions and the mean of all face conditions in FBA and EBA. Independent-samples t-tests showed no significant between-group differences.

Processing of neutral faces.

Since the main body of research on DP concerns neutral faces, we compared the activation level for neutral faces between the groups in all four ROIs, using t-tests. In addition to the above-mentioned difference in FFA, this revealed a marginally significantly higher activation for neutral faces in EBA in the DP group (t(4.955) = 2.044, p = .097).

Effects of emotion in amygdala.

Finally, we performed a post-hoc analysis in which we defined the amygdala in each subject based on the individual anatomy. In each hemisphere, this ROI consisted of a cube of 13×13×13 voxels centered on the amygdala, and we performed a second GLM in this area. The results are shown in Table 3. Contrasting fearful faces with neutral faces revealed significant activation in all three patients (left amygdala in AM; bilateral amygdala in HV; right amygdala in LW). Comparing happy with neutral faces showed activation in two patients (left amygdala in HV and right amygdala in LW). Fearful compared with neutral bodies differentially activated the amygdala in two patients (left amygdala in AM and bilateral amygdala in HV). Happy bodies triggered significantly more amygdala activity than neutral bodies in one DP (left amygdala in HV).
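The cubic ROI described above can be built as a boolean mask over the functional volume. This is a minimal illustrative sketch, not the study's analysis pipeline: the volume dimensions and the amygdala center coordinates are hypothetical placeholders:

```python
import numpy as np

def cubic_roi_mask(volume_shape, center, half_width=6):
    """Boolean mask for a (2*half_width+1)^3 voxel cube around `center`
    (13x13x13 for half_width=6), clipped at the volume boundaries."""
    mask = np.zeros(volume_shape, dtype=bool)
    lo = [max(c - half_width, 0) for c in center]
    hi = [min(c + half_width + 1, s) for c, s in zip(center, volume_shape)]
    mask[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]] = True
    return mask

# Hypothetical anatomical center of the left amygdala in voxel space
mask = cubic_roi_mask((64, 64, 40), center=(22, 30, 18))
print(mask.sum())  # 13**3 = 2197 voxels when the cube lies fully inside the volume
```

In practice the mask would then restrict the GLM to the time courses of the selected voxels; the clipping at the edges simply guards against centers near the volume boundary.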


The first major finding is that, compared to the control group, the DP group displays a similar activation level for emotional faces but lower activation in FFA for neutral faces. A lower activation level in DP for neutral face perception in FG is consistent with earlier reports [11], [15]. The present results are compatible with the theoretical perspective on face recognition difficulties argued for previously [18], [21], which suggests a higher threshold for neutral face recognition performance in prosopagnosics. This relative difficulty with neutral faces rests on the notion that faces are more difficult stimuli than many of the other categories with which they are routinely compared.

Emotional stimuli trigger a higher level of arousal (e.g., [58], [59]), and emotion in a face constitutes an additional feature that carries important communicative information and is therefore more salient. This saliency hypothesis is supported by a number of behavioral studies, using different visual tasks, demonstrating that adding emotional information to a face results in a greater tendency to capture attention [60]–[63]. Note, though, that the emotion effects we observe are not specific to emotions with a negative valence, since we obtain similar effects for both fearful and happy (although less pronounced) expressions.

However, normal FFA activation for facial expressions in the presence of lower than normal activation for neutral faces suggests that the activation boost originates in the emotion processing system rather than in the impaired face processing system in ventro-temporal cortex. Studies on the perception of emotional faces in normals have hypothesized the existence of a feedback mechanism between FG and amygdala [38], [39], [64]–[66]. The possibility that such feedback connections from the amygdala may be active in prosopagnosia and boost face processing was already suggested in an earlier study of emotional faces in prosopagnosia [43]. Two acquired prosopagnosics were presented with a part-to-whole face matching task using both neutral and emotional faces. The patients had lesions in FG and/or IOG, but the results showed normal activation in other face-sensitive areas (amygdala, superior temporal sulcus) for the contrast between emotional and neutral faces. The patients were also more accurate and faster when they performed the task with emotional faces compared to neutral ones. Moreover, the patients showed a normal inversion effect for matching emotional but not neutral faces.

Lower neural activity in the DPs for neutral but not emotional faces is compatible with a dual route model of face perception, as argued first in de Gelder and Rouw [4] and adapted in de Gelder et al. [43]. This model involves subcortical structures along a pathway that is able to process facial expressions (the pulvinar-superior colliculus-amygdala route) [67], which in turn may boost face representations in the cortical route in temporal cortex, even when those representations are weak, as shown by the lower activation for neutral faces in the DP group [43]. The pattern observed here is in line with this model and may also explain why emotional content facilitates the cortical processing of faces in prosopagnosia. Consistent with this, we observed a higher activity level in the amygdala for emotional faces compared to neutral ones. A related and more extreme phenomenon is observed in hemianopic patients, who are unable to consciously report the presentation of a face in the blind visual field and show no FG activation when presented with facial expressions there, yet perform well above chance when asked to guess the facial expression [68].

Our second main finding concerns the categorical specificity of face vs. body representation in DPs. We compared the activation for the body conditions in the face-selective regions and for the face conditions in the body-selective regions between the groups. On the one hand, our findings indicate that perceiving neutral faces results in higher activation of EBA in the DP group than in the control group. Combined with the lower activation for neutral faces in FFA, this increased activation in EBA might indicate an anomalous cerebral processing route in DP: (neutral) faces may be processed in areas more dominantly dedicated to body perception. On the other hand, we find higher activation for perceiving bodies in IOG. Together, these findings indicate that the neural correlates of perceiving faces and bodies, as manifested in IOG and EBA, show a lower degree of specificity in DP.

For body-triggered activity we find no difference between neutral and emotional expressions between the groups, in either FBA or EBA. This indicates that the anomalous neuro-functional substrate for neutral faces in our DP group does not extend to the processing of bodies and bodily expressions. This is in line with recent behavioral data showing no impairment in recognizing neutral body postures in one DP patient [35]. One of the DPs (HV) in the present study participated in a previous ERP study on the perception of neutral faces and neutral bodies [36], and the results of the two studies partly converge. Righart & de Gelder [36] measured the electrical brain correlates of the inversion effect as an index of configural processing (the ability to perceive stimuli as one configuration as opposed to an assemblage of features [69]). HV differed significantly from the control group in face processing in two respects: he displayed a paradoxical ERP inversion effect (the reverse pattern from the controls) around 100 ms after stimulus presentation (P1 amplitude) and no inversion effect around 170 ms after stimulus presentation (N170 latency). His results for bodies, however, did not differ from the controls.

An important and relevant difference between face and body perception concerns the coding of identity. A face contains all the information necessary to establish the identity of a person, and we are accustomed and trained to recognizing identity from the face. A person can readily be identified on the basis of the face, whereas identification based on the body alone is far less evident. The different pattern in FG for faces and bodies may therefore reflect the possibility that FG is more involved in processing person identity [7], which is typically based more on the face than on the body.

Notwithstanding the well-documented involvement of FG in face perception, its precise role in prosopagnosia is still a matter of debate. At present we do not clearly understand how factors such as the maturation of different cortical areas, like the FG, contribute to normal face recognition. Reduced volume of the right temporal lobe has previously been reported in a DP patient [70]. A structural imaging study in six DP subjects investigated volumetric and morphometric properties of occipito-temporal cortex and showed a decreased volume of the FG that correlated with face recognition deficits [71]. At the neuro-functional level, recent data collected from normals show a correlation between the volumetric size of the right FFA and recognition memory for neutral faces [72]. This study also investigated the development of category-specific brain areas, and the results suggest that the relative size of the FFA increases during development. Moreover, the development of the FFA takes longer than that of object-selective areas (lateral occipital complex) or face-sensitive areas in the superior temporal sulcus (see [73] for review and discussion). These findings support the notion that DP may be associated with abnormal development of FG, which may be either a consequence or a cause of anomalous face skills. Lesions in acquired prosopagnosia (AP) patients often include the FG (e.g., [43], [74]), although other cases have been reported with lesions posterior to the face-sensitive part of the FG (e.g., [75], [76]). Besides the heterogeneity of lesion localization in AP, considerable heterogeneity exists in the behavioral symptoms of DP [77]. Since successful face processing is likely to involve a variety of hierarchical and parallel processes, impairments in different processes will result in different types of behavioral and neuro-anatomical correlates.
The results from the present study clearly demonstrate the importance of emotional information in face processing and urge future imaging studies to take the modulatory effect of emotion into account, in order to further untangle the complex nature of DP.


We thank all the participants, especially the patients for their willingness to participate in the experiment. We are grateful to an anonymous reviewer for helpful suggestions, to the neurologists of the Onze Lieve Vrouw Hospital in Aalst, Belgium for their interpretation of the structural MR-scans, to the staff from Brain Innovation, Maastricht (NL), for advice on representation of the data, to J. Grèzes for advice on the design, to I. Cabello for comments on the manuscript and to T. Caus, R. A. Otte, R. Scheeringa and P. Gaalman for help in data collection.

Author Contributions

Conceived and designed the experiments: JVdS WACvdR RR BdG. Performed the experiments: JVdS WACvdR RR. Analyzed the data: JVdS BdG. Wrote the paper: JVdS BdG.


  1. Bodamer J (1947) Die Prosop-Agnosie. Archiv für Psychiatrie und Nervenkrankheiten 179: 6–53.
  2. Kennerknecht I, Grueter T, Welling B, Wentzek S, Horst J, et al. (2006) First report of prevalence of non-syndromic hereditary prosopagnosia (HPA). Am J Med Genet A 140: 1617–1622.
  3. Grueter M, Grueter T, Bell V, Horst J, Laskowski W, et al. (2007) Hereditary prosopagnosia: the first case series. Cortex 43: 734–749.
  4. de Gelder B, Rouw R (2000) Configural face processes in acquired and developmental prosopagnosia: evidence for two separate face systems? Neuroreport 11: 3145–3150.
  5. Haxby JV, Hoffman EA, Gobbini MI (2000) The distributed human neural system for face perception. Trends Cogn Sci 4: 223–233.
  6. Kanwisher N, McDermott J, Chun MM (1997) The fusiform face area: a module in human extrastriate cortex specialized for face perception. J Neurosci 17: 4302–4311.
  7. Grill-Spector K, Knouf N, Kanwisher N (2004) The fusiform face area subserves face perception, not generic within-category identification. Nat Neurosci 7: 555–562.
  8. Puce A, Allison T, Asgari M, Gore JC, McCarthy G (1996) Differential sensitivity of human visual cortex to faces, letterstrings, and textures: a functional magnetic resonance imaging study. J Neurosci 16: 5205–5215.
  9. Gauthier I, Skudlarski P, Gore JC, Anderson AW (2000) Expertise for cars and birds recruits brain areas involved in face recognition. Nat Neurosci 3: 191–197.
  10. Hoffman EA, Haxby JV (2000) Distinct representations of eye gaze and identity in the distributed human neural system for face perception. Nat Neurosci 3: 80–84.
  11. Hadjikhani N, de Gelder B (2002) Neural basis of prosopagnosia: an fMRI study. Hum Brain Mapp 16: 176–182.
  12. Williams MA, Berberovic N, Mattingley JB (2007) Abnormal fMRI adaptation to unfamiliar faces in a case of developmental prosopamnesia. Curr Biol 17: 1259–1264.
  13. Hasson U, Avidan G, Deouell LY, Bentin S, Malach R (2003) Face-selective activation in a congenital prosopagnosic subject. J Cogn Neurosci 15: 419–431.
  14. Avidan G, Hasson U, Malach R, Behrmann M (2005) Detailed exploration of face-related processing in congenital prosopagnosia: 2. Functional neuroimaging findings. J Cogn Neurosci 17: 1150–1167.
  15. Bentin S, Degutis JM, D'Esposito M, Robertson LC (2007) Too many trees to see the forest: performance, event-related potential, and functional magnetic resonance imaging manifestations of integrative congenital prosopagnosia. J Cogn Neurosci 19: 132–146.
  16. Degutis JM, Bentin S, Robertson LC, D'Esposito M (2007) Functional plasticity in ventral temporal cortex following cognitive rehabilitation of a congenital prosopagnosic. J Cogn Neurosci 19: 1790–1802.
  17. de Gelder B, Bachoud-Levi AC, Degos JD (1998) Inversion superiority in visual agnosia may be common to a variety of orientation polarised objects besides faces. Vision Res 38: 2855–2861.
  18. Damasio AR, Damasio H, Van Hoesen GW (1982) Prosopagnosia: anatomic basis and behavioral mechanisms. Neurology 32: 331–341.
  19. Farah M (1990) Visual agnosia: Disorders of visual recognition and what they tell us about normal vision. Cambridge: MIT Press.
  20. Gauthier I, Behrmann M, Tarr MJ (1999) Can face recognition really be dissociated from object recognition? J Cogn Neurosci 11: 349–370.
  21. Damasio AR, Tranel D, Damasio H (1990) Face agnosia and the neural substrates of memory. Annu Rev Neurosci 13: 89–109.
  22. Fodor J (1983) The Modularity of Mind. Cambridge, MA: MIT Press.
  23. Diamond R, Carey S (1986) Why faces are and are not special: an effect of expertise. J Exp Psychol Gen 115: 107–117.
  24. Aguirre GK, Zarahn E, D'Esposito M (1998) An area within human ventral cortex sensitive to “building” stimuli: evidence and implications. Neuron 21: 373–383.
  25. Epstein R, Kanwisher N (1998) A cortical representation of the local visual environment. Nature 392: 598–601.
  26. Levy I, Hasson U, Avidan G, Hendler T, Malach R (2001) Center-periphery organization of human object areas. Nat Neurosci 4: 533–539.
  27. Downing PE, Jiang Y, Shuman M, Kanwisher N (2001) A cortical area selective for visual processing of the human body. Science 293: 2470–2473.
  28. Hadjikhani N, de Gelder B (2003) Seeing fearful body expressions activates the fusiform cortex and amygdala. Curr Biol 13: 2201–2205.
  29. Peelen MV, Downing PE (2005) Selectivity for the human body in the fusiform gyrus. J Neurophysiol 93: 603–608.
  30. Yin RK (1969) Looking at upside-down faces. J Exp Psychol 81: 141–145.
  31. Reed CL, Stone VE, Bozova S, Tanaka J (2003) The body-inversion effect. Psychol Sci 14: 302–308.
  32. Stekelenburg JJ, de Gelder B (2004) The neural correlates of perceiving human bodies: an ERP study on the body-inversion effect. Neuroreport 15: 777–780.
  33. Peelen MV, Downing PE (2007) The neural basis of visual body perception. Nat Rev Neurosci 8: 636–648.
  34. de Gelder B (2006) Towards the neurobiology of emotional body language. Nat Rev Neurosci 7: 242–249.
  35. Duchaine BC, Yovel G, Butterworth EJ, Nakayama K (2006) Prosopagnosia as an impairment to face-specific mechanisms: Elimination of the alternative hypotheses in a developmental case. Cogn Neuropsychol 23: 714–747.
  36. Righart R, de Gelder B (2007) Impaired face and body perception in developmental prosopagnosia. Proc Natl Acad Sci U S A.
  37. Dolan RJ, Morris JS, de Gelder B (2001) Crossmodal binding of fear in voice and face. Proc Natl Acad Sci U S A 98: 10006–10010.
  38. Rotshtein P, Malach R, Hadar U, Graif M, Hendler T (2001) Feeling or features: different sensitivity to emotion in high-order visual cortex and amygdala. Neuron 32: 747–757.
  39. Vuilleumier P, Armony JL, Driver J, Dolan RJ (2001) Effects of attention and emotion on face processing in the human brain: an event-related fMRI study. Neuron 30: 829–841.
  40. Dolan RJ, Fletcher PC, Morris JS, Kapur N, Deakin JF, et al. (1996) Neural activation during covert processing of positive emotional facial expressions. Neuroimage 4: 194–200.
  41. Breiter HC, Etcoff NL, Whalen PJ, Kennedy WA, Rauch SL, et al. (1996) Response and habituation of the human amygdala during visual processing of facial expression. Neuron 17: 875–887.
  42. Guyer AE, Monk CS, McClure-Tone EB, Nelson EE, Roberson-Nay R, et al. (2008) A developmental examination of amygdala response to facial expressions. J Cogn Neurosci.
  43. de Gelder B, Frissen I, Barton J, Hadjikhani N (2003) A modulatory role for facial expressions in prosopagnosia. Proc Natl Acad Sci U S A.
  44. Duchaine BC, Parker H, Nakayama K (2003) Normal recognition of emotion in a prosopagnosic. Perception 32: 827–838.
  45. Jones RD, Tranel D (2001) Severe developmental prosopagnosia in a child with superior intellect. J Clin Exp Neuropsychol 23: 265–273.
  46. Nunn JA, Postma P, Pearson R (2001) Developmental prosopagnosia: should it be taken at face value? Neurocase 7: 15–27.
  47. Benton AL, Sivan AB, Hamsher K, Varney NR, Spreen O (1983) Contribution to neuropsychological assessment. NY: Oxford University Press.
  48. Warrington EK (1984) Recognition Memory Test. Nelson, Windsor: NFER.
  49. Riddoch MJ, Humphreys GW (1993) Birmingham Object Recognition Battery. Hove: Psychology Press.
  50. de Gelder B, Rouw R (2000) Paradoxical configuration effects for faces and objects in prosopagnosia. Neuropsychologia 38: 1271–1279.
  51. de Gelder B, Rouw R (2000) Structural encoding precludes recognition of parts in prosopagnosia. Cogn Neuropsychol 17: 89–102.
  52. van de Riet WAC, Grèzes J, de Gelder B (in press) Specific and common brain regions involved in the perception of faces and bodies and the representation of their emotional expressions. Soc Neurosci.
  53. Lundqvist D, Flykt A, Öhman A (1998) The Karolinska Directed Emotional Faces - KDEF. Stockholm: Karolinska Institutet.
  54. Johnson-Frey SH, Maloof FR, Newman-Norlund R, Farrer C, Inati S, et al. (2003) Actions or hand-object interactions? Human inferior frontal cortex and action observation. Neuron 39: 1053–1058.
  55. Friston KJ, Zarahn E, Josephs O, Henson RN, Dale AM (1999) Stochastic designs in event-related fMRI. Neuroimage 10: 607–619.
  56. Talairach J, Tournoux P (1988) Co-Planar Stereotaxic Atlas of the Human Brain. New York: Thieme Medical Publishers.
  57. Baker CI, Hutchison TL, Kanwisher N (2007) Does the fusiform face area contain subregions highly selective for nonfaces? Nat Neurosci 10: 3–4.
  58. Mehrabian A, Russell JA (1974) An approach to environmental psychology. Cambridge, MA: MIT Press.
  59. Lang PJ, Greenwald MK, Bradley MM, Hamm AO (1993) Looking at pictures: Affective, facial, visceral and behavioral reactions. Psychophysiology 30: 261–273.
  60. Fox E, Russo R, Bowles R, Dutton K (2001) Do threatening stimuli draw or hold visual attention in subclinical anxiety? J Exp Psychol Gen 130: 681–700.
  61. Pourtois G, Grandjean D, Sander D, Vuilleumier P (2004) Electrophysiological correlates of rapid spatial orienting towards fearful faces. Cereb Cortex 14: 619–633.
  62. Eastwood JD, Smilek D, Merikle PM (2003) Negative facial expression captures attention and disrupts performance. Percept Psychophys 65: 352–358.
  63. Vuilleumier P, Schwartz S (2001) Emotional facial expressions capture attention. Neurology 56: 153–158.
  64. Surguladze SA, Brammer MJ, Young AW, Andrew C, Travis MJ, et al. (2003) A preferential increase in the extrastriate response to signals of danger. Neuroimage 19: 1317–1328.
  65. Vuilleumier P, Richardson MP, Armony JL, Driver J, Dolan RJ (2004) Distant influences of amygdala lesion on visual cortical activation during emotional face processing. Nat Neurosci 7: 1271–1278.
  66. Rossion B, Caldara R, Seghier M, Schuller AM, Lazeyras F, et al. (2003) A network of occipito-temporal face-sensitive areas besides the right middle fusiform gyrus is necessary for normal face processing. Brain 126: 2381–2395.
  67. Morris JS, Öhman A, Dolan RJ (1999) A subcortical pathway to the right amygdala mediating “unseen” fear. Proc Natl Acad Sci U S A 96: 1680–1685.
  68. de Gelder B, Vroomen J, Pourtois G, Weiskrantz L (1999) Non-conscious recognition of affect in the absence of striate cortex. Neuroreport 10: 3759–3763.
  69. Young AW, Hellawell D, Hay DC (1987) Configurational information in face perception. Perception 16: 747–759.
  70. Bentin S, Deouell LY, Soroker N (1999) Selective visual streaming in face recognition: evidence from developmental prosopagnosia. Neuroreport 10: 823–827.
  71. Behrmann M, Avidan G, Gao F, Black S (2007) Structural imaging reveals anatomical alterations in inferotemporal cortex in congenital prosopagnosia. Cereb Cortex 17: 2354–2363.
  72. Golarai G, Ghahremani DG, Whitfield-Gabrieli S, Reiss A, Eberhardt JL, et al. (2007) Differential development of high-level visual cortex correlates with category-specific recognition memory. Nat Neurosci 10: 512–522.
  73. Grill-Spector K, Golarai G, Gabrieli J (2008) Developmental neuroimaging of the human ventral visual cortex. Trends Cogn Sci 12: 152–162.
  74. Barton JJ, Press DZ, Keenan JP, O'Connor M (2002) Lesions of the fusiform face area impair perception of facial configuration in prosopagnosia. Neurology 58: 71–78.
  75. Steeves JK, Culham JC, Duchaine BC, Pratesi CC, Valyear KF, et al. (2006) The fusiform face area is not sufficient for face recognition: evidence from a patient with dense prosopagnosia and no occipital face area. Neuropsychologia 44: 594–609.
  76. Sorger B, Goebel R, Schiltz C, Rossion B (2007) Understanding the functional neuroanatomy of acquired prosopagnosia. Neuroimage 35: 836–852.
  77. Le Grand R, Cooper PA, Mondloch CJ, Lewis TL, Sagiv N, et al. (2006) What aspects of face processing are impaired in developmental prosopagnosia? Brain Cogn 61: 139–158.