
Positive Facial Affect – An fMRI Study on the Involvement of Insula and Amygdala

  • Anna Pohl,

    anpohl@ukaachen.de

    Affiliations Department of Psychiatry, Psychotherapy, and Psychosomatics, Rheinisch-Westfälische Technische Hochschule Aachen University, Aachen, Germany, Central Service Facility “Functional Imaging” at the Interdisciplinary Center for Clinical Research, Rheinisch-Westfälische Technische Hochschule Aachen University, Aachen, Germany, Jülich Aachen Research Alliance – Translational Brain Medicine, Jülich/Aachen, Germany, Department of Neurology, Rheinisch-Westfälische Technische Hochschule Aachen University, Aachen, Germany

  • Silke Anders,

    Affiliation Department of Neurology, University Lübeck, Lübeck, Germany

  • Martin Schulte-Rüther,

    Affiliations Jülich Aachen Research Alliance – Translational Brain Medicine, Jülich/Aachen, Germany, Cognitive Neuroscience, Institute of Neuroscience and Medicine, Research Center Jülich, Jülich, Germany, Child Neuropsychology Section, Department of Child and Adolescent Psychiatry, Rheinisch-Westfälische Technische Hochschule Aachen University, Aachen, Germany

  • Klaus Mathiak,

    Affiliations Department of Psychiatry, Psychotherapy, and Psychosomatics, Rheinisch-Westfälische Technische Hochschule Aachen University, Aachen, Germany, Jülich Aachen Research Alliance – Translational Brain Medicine, Jülich/Aachen, Germany, Structural and Functional Organisation of the Brain, Institute of Neuroscience and Medicine, Research Center Jülich, Jülich, Germany

  • Tilo Kircher

    Affiliation Department of Psychiatry and Psychotherapy, Philipps-University Marburg, Marburg, Germany

Abstract

Imitation of facial expressions engages the putative human mirror neuron system as well as the insula and the amygdala as part of the limbic system. The specific function of the latter two regions during emotional actions is still under debate. The current study investigated brain responses during imitation of positive in comparison to non-emotional facial expressions. Differences in brain activation of the amygdala and insula were additionally examined during observation and execution of facial expressions. Participants imitated, executed and observed happy and non-emotional facial expressions, as well as neutral faces. During imitation, activation in the happy compared to the non-emotional condition was higher in the right anterior insula and the right amygdala, as well as in the pre-supplementary motor area, the middle temporal gyrus and the inferior frontal gyrus. Region-of-interest analyses revealed (i) that the right insula was recruited more strongly by imitation and execution than by observation of facial expressions, (ii) that the insula was activated significantly more strongly by happy than by non-emotional facial expressions during observation and imitation, and (iii) that the activation differences in the right amygdala between happy and non-emotional facial expressions were larger during imitation and execution than during observation alone. We suggest that the insula and the amygdala contribute specifically to the happy emotional connotation of the facial expressions, depending on the task. The pattern of insula activity might reflect increased bodily awareness during active execution compared to passive observation, and during visual processing of happy compared to non-emotional facial expressions. The amygdala activation specific to the happy facial expression during the motor tasks, but not in the observation condition, might reflect increased autonomic activity or feedback from the facial muscles to the amygdala.

Introduction

Imitation is an ability that encompasses a wide range of phenomena. Infants imitate almost from birth [1] and learn through imitation during development [2]. In social situations, humans perceive an interaction partner as more likable, and the interaction as more successful, if their non-verbal behaviour is imitated by their partner [3]. Being imitated, in contrast to not being imitated, increases neural activity in brain regions associated with reward [4]. Thus, the automatic tendency to mimic an interaction partner's body posture, tone of voice, and facial expression is likely to increase the success of social interactions and to function as ‘social glue’ [5]. Emotional cues are of high importance for imitation. Participants have been shown to mimic emotional facial expressions automatically, even when instructed to observe emotional faces without moving [6] or when the emotional faces were presented subliminally [7]. Performing incongruent facial expressions suppresses this automatic tendency and slows reaction times for the execution of the facial expressions [8].

In previous studies examining the neural basis of imitation, participants were explicitly asked to reproduce facial expressions (e.g. [9], [10]). Explicit imitation of movements has been shown to involve several brain regions beyond motor and sensorimotor cortex. Visual input is claimed to be forwarded via the superior temporal sulcus to the rostral inferior parietal lobule (IPL) and then to the pars opercularis of the inferior frontal gyrus (IFG op) [9], [11]. The latter two regions are believed to form the human ‘mirror neuron system’ and to play a key role in the transfer of perceptual information into motor output [12]. During observation of facial expressions, IPL and IFG op are believed to start to ‘resonate’, which means that an internal motor representation of the expression is generated [13], [14]. This ‘resonance’ may serve different functions, such as action understanding [14] and facilitation of motor output during imitation [15].

Beyond the putative human mirror neuron system and motor as well as sensorimotor cortices, the pre-supplementary motor area (pre-SMA) has been reported to be involved in imitation of emotional facial expressions [9], [13], [16]. Furthermore, the insula and the amygdala have been hypothesized to be part of an emotional perception-action matching system [11], [17] and therefore to “extend” the classical MNS (which was shown to respond to goal-directed hand movements) during emotion processing [13]. Both regions were previously reported to be involved in observation and imitation of emotional facial expressions (angry, happy, fearful, sad, disgusted, surprised [9]). Interestingly, right insula activation was predicted by the magnitude of participants' facial movements (angry, sad and happy; when exclusively masked with non-emotional facial expressions). Moreover, left amygdala activation was predicted by the extent of movement during imitation of happy facial expressions [10]. In contrast, no activation of insula and amygdala was reported during observation (angry, happy, fearful, disgusted, neutral [18]) and imitation of emotional facial expressions in two further fMRI studies (angry and happy [16]). Van der Gaag and colleagues [13] found increased bilateral anterior insula activation, but no increased amygdala activation, during observation of emotional facial expressions (happy, disgusted and fearful, all emotions pooled) contrasted with observation of non-emotional facial expressions (blowing cheeks). In a further analysis of their data, the authors compared the BOLD signal of the amygdala during observation of disgusted, fearful, happy and non-emotional (blowing cheeks) facial expressions but found no significant differences. This was true for passive observation, observation for discrimination of facial expressions, and observation for delayed imitation (motor aspects were not included in the general linear model). Significant differences in amygdala activation were noted only when observation of facial expressions was compared to observation of patterns; thus, amygdala activation was not specific to emotional processing [19]. Hennenlotter and colleagues [14] showed involvement of the left anterior insula during both observation and execution of pleasant facial affect. The bilateral amygdala was also involved in both tasks, but activation sites differed: during observation the bilateral ventral amygdala was activated, whereas the bilateral dorsal amygdala was involved in smile execution. In contrast, shared representations of observation and execution of happy facial expressions were found in the same part of the bilateral amygdala [20].

In sum, both regions seem to be involved in imitation, perception and execution of emotional facial expressions, but results differ across studies. This may be due to differences in task (observation with different instructions, execution, (delayed) imitation), stimulus material (pictures versus video clips of different lengths), control condition (fixation cross, neutral face without movement, non-emotional facial expression) or analysis (e.g. conjunction of two t-maps versus separate reporting of two t-tests, different thresholds).

We aimed to examine brain regions with increased BOLD response during imitation of positive facial affect. To control for effects due to motion, we compared the happy facial expression with a non-emotional facial expression. We hypothesized that the amygdala and insula would be involved in affective imitation. In subsequent analyses, we intended to compare the BOLD responses of the target regions during perception and execution of facial expressions as well. We focused on the insula and the amygdala because these structures have been assumed to be central to the “extended” MNS [13] and respond consistently to emotional stimuli (for a review see [21]).

Methods

Ethics Statement

The study was approved by the local Ethics Committee (Medical Faculty of the RWTH Aachen University; code: EK 099/08) in accordance with the Declaration of Helsinki, and all participants gave written informed consent prior to participation. They were paid for their participation.

Participants

Thirty-two healthy, right-handed [22] volunteers (17 women) with no history of neurological or psychiatric disease participated in the study. Participants had normal or corrected-to-normal vision. Five participants had to be excluded from the analysis because of excessive head movements or inaccurate completion of the task (see also Design/Task and fMRI Acquisition Parameters and Data Analyses). Accordingly, 27 of the 32 participants were included in the final analyses (mean age: 24.6 years (SD = 5.4); mean school education: 12.9 years (SD = 0.6)). Data from all subjects were published in a previous study focusing on shared representations during observation and execution of facial expressions [20].

Stimuli

We used video clips depicting facial expressions as dynamic stimulus material. The clips were recorded in an in-house media centre with a commercial video camera (Sony DVX 2000, spatial resolution 720×576 pixels). The video clips depicted 24 actors executing happy facial expressions (smile), non-emotional facial expressions (lip protrusion), or neutral faces (relaxed face without motion). Each video clip began with the actor showing a neutral face. After one second, the actor began to produce a facial expression. The video clip ended again with a neutral face (see Figure 1; the actors gave informed consent to the publication of the case details). The total length of each video clip was 5 seconds. We also created pixelated versions of each video, in which moving squares created a ‘scrambled’ version of the faces (downsampled pictures had 1/40 of the original resolution) so that only the silhouette of the actor was recognizable (Photoshop CS3 v.10.0® and Adobe Premiere Pro CS3®) (Figure 1). After one second, a colored fixation cross (red, blue, or green) was projected onto the scrambled video for 3 seconds to cue the participant for execution of a facial expression (see below). Altogether, 72 video clips showing facial expressions and 24 video clips with scrambled facial silhouettes were used in the present study. The stimuli were presented with MR-compatible goggles (Resonance Technology, Inc., Northridge, CA) using the Presentation© software package v.11.0 (Neurobehavioral Systems, Inc., Albany, CA). The evaluation of the stimulus material is described in detail in Kircher and colleagues [20].
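The pixelation step can be illustrated as follows. This is a minimal sketch that reimplements the mosaic effect in Python with Pillow; the authors actually used Photoshop and Premiere Pro, the factor of 40 follows the description above, and the file names are hypothetical.

```python
# Sketch of the "scrambling" (pixelation) of a single video frame: downsample
# to 1/40 of the original resolution, then upsample with nearest-neighbour
# interpolation so that only blocky squares and the silhouette remain.
from PIL import Image

FACTOR = 40  # downsampled frames had 1/40 of the original resolution

def scramble_frame(path_in: str, path_out: str) -> None:
    frame = Image.open(path_in)
    w, h = frame.size                                       # e.g. 720 x 576
    small = frame.resize((max(1, w // FACTOR), max(1, h // FACTOR)))
    mosaic = small.resize((w, h), Image.NEAREST)            # blocky squares
    mosaic.save(path_out)

scramble_frame("actor_frame.png", "scrambled_frame.png")   # hypothetical names
```

Applied frame by frame, this yields a moving stimulus with the same global luminance and motion envelope as the original clip but without recognizable facial features.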

Figure 1. Video clips were presented in all conditions.

In the first and second runs (A), participants observed facial expressions when an actor was displayed and executed facial expressions when scrambled faces were displayed; in the third and fourth runs (B), participants imitated the facial expressions when an actor was presented and, again, executed facial expressions in case of scrambled video clips. Within one block, four video clips of five seconds each were shown. Abbreviations: A) N_OBS: non-emotional observation, E_OBS: emotional observation, C_EXP: control execution; B) E_EXP: emotional execution, C_IMI: control imitation, E_IMI: emotional imitation.

https://doi.org/10.1371/journal.pone.0069886.g001

Design/Task

Before the fMRI experiment, participants were told that they would see either video clips depicting actors showing a facial expression (smile, lip protrusion, neutral face) or scrambled faces. For runs 1 and 2, they were instructed to (i) attentively observe the video clips depicting actors and to (ii) execute a facial expression (for the duration of the colored fixation cross) when scrambled faces were presented. The color of the fixation cross indicated which facial expression participants were to execute (smile, lip protrusion, neutral face). Before the third and fourth runs, participants were instructed to (iii) immediately imitate the facial expressions in the video clips (instead of solely observing them). They were also told that the task for the scrambled videos remained the same as before (execute) (Figure 1).

Accordingly, the imitation task was presented separately from the observation task in the last two runs of the experiment, and participants did not know about the imitation condition until run 3. This order was chosen to prevent participants from unintentionally imitating during the observation task.

The fMRI experiment had a two-factorial design with the factors task (Observation, Execution, Imitation) and facial expression (Happy Facial Expression, Non-Emotional Facial Expression, Neutral Face). Videos of neutral faces without motion served as a high-level baseline for the happy and non-emotional facial expressions. We included only one emotional facial expression because of the time restrictions of fMRI measurements. We chose happy facial expressions as representative of emotional facial expressions because the happy facial expression may be the easiest to identify and perform. As the variability in executing angry and sad facial expressions is rather high, the movement congruency between observation and execution is highest for happy facial expressions (see also [14]). Thus, differences in brain activation between tasks would not be due to effects of movement incongruence.

The videos were presented in blocks of four 5-s video clips. Each run consisted of 18 blocks of video clips (3 blocks of each condition). The task order within the single runs was pseudo-randomized across participants. A low-level baseline (fixation cross) was presented between the blocks for 6.4 seconds (one block plus low-level baseline corresponded to 12 scans). After every sixth block, an additional baseline (30.8 seconds, corresponding to 14 scans) was included to allow the hemodynamic response to return to baseline. Altogether, fMRI scanning lasted approximately 45 min.
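For the interested reader, the stated durations map onto scan counts as follows. This is a small worked-arithmetic sketch; the per-run totals are our own estimate derived from the numbers above (they are not reported in the paper).

```python
# Timing arithmetic for one run, based on the figures given above (TR = 2.2 s).
TR = 2.2                       # seconds per scan (see Acquisition Parameters)
block = 4 * 5.0                # four 5-s video clips = 20 s
baseline = 6.4                 # low-level baseline between blocks
extra_baseline = 30.8          # longer baseline after every sixth block

scans_per_block = (block + baseline) / TR   # (20 + 6.4) / 2.2 = 12 scans
scans_extra = extra_baseline / TR           # 30.8 / 2.2 = 14 scans

# Our own estimate: 18 blocks plus three extra baselines per run give
# 18 * 12 + 3 * 14 = 258 scans, i.e. about 9.5 min per run, so four runs
# account for most of the ~45 min of functional scanning.
scans_per_run = 18 * scans_per_block + 3 * scans_extra
print(scans_per_block, scans_extra, scans_per_run, scans_per_run * TR / 60)
```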

The faces of the participants were videotaped during the entire fMRI experiment to ensure that participants followed the task instructions. The videotapes were rated online and after the experiment by a certified Facial Action Coding System rater (FACS; [23]). The experiment was interrupted if a participant mimicked the actor's facial expression during the observation task or executed the wrong facial expression during the execution task. One participant had to be excluded from further analysis because of repeated imitation reactions during the observation task.

After fMRI scanning, participants completed a post-scanning rating of their subjective feeling of happiness during all conditions. The rating was given on a 7-point Likert scale (1 = ‘not at all’ to 7 = ‘very strong’). A paired t-test was calculated to test whether participants felt happier in the emotional than in the non-emotional imitation condition.
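A minimal sketch of this paired t-test in Python follows; the rating vectors below are made-up placeholders (the actual results are reported in the Results section), and SciPy stands in for whatever statistics package was used.

```python
# Paired t-test on the post-scan happiness ratings (sketch with made-up data;
# the paper reports M = 5.27 vs. M = 2.88, t(26) = 6.48, p < 0.001).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 27  # participants in the final analyses

# Hypothetical 7-point Likert ratings for the two imitation conditions.
happy_imitation = np.clip(rng.normal(5.3, 1.3, n).round(), 1, 7)
nonemo_imitation = np.clip(rng.normal(2.9, 1.4, n).round(), 1, 7)

t, p = stats.ttest_rel(happy_imitation, nonemo_imitation)
print(f"t({n - 1}) = {t:.2f}, p = {p:.4f}")
```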

fMRI Acquisition Parameters and Data Analyses

Functional T2*-weighted images were obtained with a Siemens 3-Tesla MR scanner using echo-planar imaging (EPI) (TR = 2200 ms, TE = 30 ms, flip angle 90°, FoV 224 mm, base resolution 64, voxel size 3.3×3.3 mm², 36 slices with slice thickness 3.5 mm, and distance factor 10%).

High-resolution T1-weighted anatomical 3-D Magnetization Prepared Rapid Gradient Echo (MP-RAGE) images (TR = 1900 ms, TE = 2.52 ms, TI = 900 ms, flip angle 9°, FoV 250 mm, 256×256 matrix, 176 slices per slab) were acquired at the end of the experimental runs.

Data processing and statistical analyses (see also [20]) of the imaging data were performed using SPM5 (Wellcome Department of Imaging Neuroscience, London, UK) implemented in Matlab 7.2 (Mathworks Inc., Sherborn, MA, USA). The first five EPI volumes were discarded to allow for T1 equilibration. The remaining functional images were realigned to the first image to correct for head motion [24]. Head motion of less than 4 mm and less than 3° was accepted. Four subjects did not meet these requirements and were therefore excluded from further analyses (see above). For each participant, the T1 image was co-registered to the mean image of the realigned functional images. The mean functional image was normalized to the MNI template (Montreal Neurological Institute [25], [26]) using a segmentation algorithm [27]. All images were normalized, resampled to 1.5×1.5×1.5 mm³ voxel size and spatially smoothed with an 8-mm full-width-at-half-maximum (FWHM) isotropic Gaussian kernel.
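The final smoothing step can be sketched as follows. The study used SPM5; the nibabel/SciPy reimplementation below is only an illustration of the FWHM-to-sigma conversion and assumes a 4D normalized image with a hypothetical file name.

```python
# Sketch of 8-mm FWHM Gaussian smoothing of a normalized functional image.
import numpy as np
import nibabel as nib
from scipy.ndimage import gaussian_filter

FWHM_MM, VOXEL_MM = 8.0, 1.5   # kernel width and resampled voxel size
# FWHM = 2 * sqrt(2 * ln 2) * sigma, so sigma ~ 3.40 mm ~ 2.27 voxels here.
sigma_vox = FWHM_MM / (2.0 * np.sqrt(2.0 * np.log(2.0))) / VOXEL_MM

img = nib.load("wsub01_epi.nii")        # normalized 4D image (hypothetical name)
data = img.get_fdata()
# Smooth in space only; the trailing 0 leaves the time axis untouched.
smoothed = gaussian_filter(data, sigma=(sigma_vox,) * 3 + (0,))
nib.save(nib.Nifti1Image(smoothed, img.affine, img.header), "swsub01_epi.nii")
```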

Data were subsequently analyzed by a two-level approach. Using a general linear model (GLM), each experimental condition was modeled on the single-subject level with a separate regressor convolved with a canonical hemodynamic response function and its first temporal derivative. The following conditions were included:

  1. Happy_Observation (H_OBS)
  2. Non-Emotional_Observation (NE_OBS)
  3. Neutral_Observation (N_OBS)
  4. Happy_Execution (H_EX)
  5. Non-Emotional_Execution (NE_EX)
  6. Neutral_Execution (N_EX)
  7. Happy_Imitation (H_IMI)
  8. Non-Emotional_Imitation (NE_IMI)
  9. Neutral_Imitation (N_IMI)

The low-level baseline between the experimental conditions (fixation cross) was modeled implicitly as baseline. The realignment parameters were included as six additional regressors in the GLM as head-motion nuisance covariates. The execution task was present in all four runs and therefore occurred twice as often as the other tasks. To avoid effects due to habituation and repetition, only the execution blocks of the first and second runs were included in the second-level analysis, resulting in an equal number of blocks for each experimental condition.
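To make the structure of this single-subject model concrete, here is a minimal sketch of one condition regressor and the resulting design matrix. The double-gamma HRF parameters follow the common SPM defaults (peak at 6 s, undershoot at 16 s, undershoot ratio 1:6); the onsets, scan count (one run, for brevity) and motion parameters are made up, and np.gradient only approximates the temporal derivative used by SPM.

```python
import numpy as np
from scipy.stats import gamma

TR, N_SCANS = 2.2, 258   # hypothetical scan count for one run

def canonical_hrf(tr):
    """Double-gamma HRF with common SPM defaults (peak 6 s, undershoot 16 s)."""
    t = np.arange(0.0, 32.0, tr)
    h = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0
    return h / h.sum()

def condition_regressor(onsets_s, duration_s=20.0):
    """Boxcar for the 20-s blocks, convolved with the HRF, plus its derivative."""
    box = np.zeros(N_SCANS)
    for onset in onsets_s:
        start = int(round(onset / TR))
        box[start:start + int(round(duration_s / TR))] = 1.0
    reg = np.convolve(box, canonical_hrf(TR))[:N_SCANS]
    return reg, np.gradient(reg)   # gradient approximates the temporal derivative

# Hypothetical block onsets for one condition, plus made-up motion parameters.
reg, dreg = condition_regressor([26.4, 290.4, 501.6])
motion = np.random.default_rng(1).normal(0.0, 0.1, (N_SCANS, 6))
X = np.column_stack([reg, dreg, motion, np.ones(N_SCANS)])   # design matrix

# Ordinary least-squares estimates for one voxel's (simulated) time course.
y = X @ np.concatenate([[1.0, 0.0], np.zeros(6), [100.0]])
y += np.random.default_rng(2).normal(0.0, 1.0, N_SCANS)
beta = np.linalg.lstsq(X, y, rcond=None)[0]
```

In the full model, each of the nine conditions contributes such a regressor pair, and the beta estimates per voxel feed the first-level contrasts described below.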

Parameter estimates for each voxel were calculated using maximum-likelihood estimation and corrected for non-sphericity. First-level contrasts were fed into a flexible factorial second-level group analysis with the factors condition and subject. The factor condition was modeled as a fixed effect and encompassed all 9 levels of the single-subject analysis. The subject factor was modeled as a random effect.

The first six conditions of the experiment are described in detail in Kircher et al. [20]. In this previous study, conjoint activations of observation and execution were presented. Shared representations for the happy facial expressions were contrasted with shared representations for the non-emotional facial expressions to examine the specificity of shared circuits of positive affect.

First, to show the reliability of our experimental design, we examined activations during imitation of happy and non-emotional facial expressions separately. Accordingly, two contrasts were computed: imitation of happy facial expressions was contrasted with the high-level baseline (H_IMI>N_IMI), and likewise imitation of the non-emotional facial expressions was contrasted with the high-level baseline (NE_IMI>N_IMI). We expected to replicate the activation patterns of previous studies examining the imitation of facial expressions.
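In terms of the nine condition regressors listed above, such contrasts are simple weight vectors. The following purely illustrative sketch assumes the condition ordering of that list:

```python
import numpy as np

# Condition order as listed in the GLM description above.
conditions = ["H_OBS", "NE_OBS", "N_OBS", "H_EX", "NE_EX", "N_EX",
              "H_IMI", "NE_IMI", "N_IMI"]

def contrast(pos, neg):
    c = np.zeros(len(conditions))
    c[conditions.index(pos)] = 1.0
    c[conditions.index(neg)] = -1.0
    return c

c1 = contrast("H_IMI", "N_IMI")    # imitation of happy vs. high-level baseline
c2 = contrast("NE_IMI", "N_IMI")   # imitation of non-emotional vs. baseline
```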

Next, we tested for activations specific to the imitation of happy facial expressions, namely activation during imitation of the happy facial expressions contrasted with activation during imitation of the non-emotional facial expressions (H_IMI>NE_IMI). All whole-brain analyses at the group level are reported as significant at a threshold of p<.05, FWE-corrected (k>15). Brain structures were labeled using the Anatomy Toolbox v 1.6 [28], [29] and the WFU PickAtlas software (Wake Forest University, Winston-Salem, NC; [30]). Moreover, we were interested in examining the involvement of the insula and the amygdala in facial expression observation and execution. Therefore, we extracted data at the peak voxel of the contrast H_IMI>NE_IMI for the six conditions of interest. The conditions encompassing neutral faces served as high-level baseline for the reliability analysis and the analyses presented in Kircher et al. [20] and were not of interest for the subsequent analyses, because participants did not move during execution and imitation. A repeated-measures two-way ANOVA (within-subject factors: task (observation, execution, imitation) and expression (happy, non-emotional)) was calculated for each region using SPSS 15.0 (Statistical Packages for the Social Sciences, SPSS Inc., USA).
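The peak-voxel extraction and the repeated-measures ANOVA can be sketched as follows. The study used SPSS; statsmodels is our stand-in here, the image paths and extracted values are hypothetical, and the MNI-to-voxel conversion via the image affine is the standard approach rather than the authors' documented procedure.

```python
# Sketch of the ROI analysis: convert a reported MNI peak to a voxel index,
# extract per-subject contrast estimates, and run the 3 (task) x 2 (expression)
# repeated-measures ANOVA.
import numpy as np
import pandas as pd
import nibabel as nib
from statsmodels.stats.anova import AnovaRM

def peak_value(img_path, mni_xyz):
    img = nib.load(img_path)
    ijk = np.linalg.inv(img.affine) @ np.append(mni_xyz, 1.0)  # mm -> voxels
    i, j, k = np.round(ijk[:3]).astype(int)
    return float(img.get_fdata()[i, j, k])

rows = []
for subj in range(1, 28):                       # 27 included participants
    for task in ("observation", "execution", "imitation"):
        for expr in ("happy", "non-emotional"):
            value = peak_value(f"con_sub{subj:02d}_{task}_{expr}.nii", (46, 8, 2))
            rows.append({"subject": subj, "task": task,
                         "expression": expr, "estimate": value})

df = pd.DataFrame(rows)
print(AnovaRM(df, depvar="estimate", subject="subject",
              within=["task", "expression"]).fit())
```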

Results

Rating

To identify differences in emotional experience during the imitation tasks, subjective ratings of happiness were compared using a paired t-test. Participants reported a stronger feeling of happiness during imitation of emotional facial expressions (M = 5.27, SD = 1.31) than during imitation of non-emotional facial expressions (M = 2.88, SD = 1.42; t(26) = 6.48, p<0.001). The same was true for the observation and execution conditions. Results of these conditions were already presented in Kircher and colleagues [20].

FMRI Data: Whole Brain Analyses

Imitation of both facial expressions revealed a distributed network mainly including frontal and parietal cortices (see Table 1 and Figure 2). Activations were found in the bilateral temporal, (pre-)motor and somatosensory cortex. In particular, the post- and precentral gyrus (BA 3, 4, 6) and the pre-supplementary motor area (pre-SMA, BA 6) extending to the medial cingulate cortex were involved. Furthermore, imitation was associated with bilateral activation in the posterior middle temporal gyrus (MTG), the cerebellum, and the bilateral insula. For emotional facial expressions, insula activation extended into the right pars opercularis of the inferior frontal gyrus (IFG op, BA 44), as well as into the right pars triangularis (BA 45). Additionally, the posterior parietal cortex, including area PFm of the inferior parietal lobule (IPL) and areas 7PC and hiP2 of the superior parietal lobule (SPL) (for localization of the cytoarchitectonic regions PFm, 7PC, and hiP2 see [31]), was activated.

Figure 2. Neural network underlying imitation of emotional and non-emotional facial expressions.

The analysis was FWE-corrected at a threshold of p<0.05 (k>15). Both facial expressions involved a widespread bilateral network including (pre-)motor areas, the insula, temporal areas, the brain stem, and the cerebellum.

https://doi.org/10.1371/journal.pone.0069886.g002

Table 1. Activation clusters for imitation of emotional and non-emotional facial expressions.

https://doi.org/10.1371/journal.pone.0069886.t001

Imitation of happy in contrast to non-emotional facial expressions (H_IMI>NE_IMI) was associated with right-hemispheric activation in the pre-SMA, in the right insula extending to the IFG op (BA 44) and the pars triangularis (BA 45), and in the right amygdala extending to the parahippocampal gyrus. Furthermore, increased activation was found in the right middle temporal gyrus (MTG) extending to the inferior temporal gyrus and visual area V5, in the right thalamus, in the right pallidum and in the right cuneus (see Table 2 and Figure 3).

Figure 3. Affect-specific activation during imitation as revealed by direct comparison of the emotional and the non-emotional facial expression (FWE-corrected at a threshold of p<0.05; k>15).

Significant right-hemispheric activation differences were found, amongst others, in the pre-SMA, the insula, the amygdala, and the MTG (A). Average parameter estimates were extracted from the main peak and are depicted with 90% confidence intervals (CI). The emotional conditions (E) are colored in red, the non-emotional (N) in rose, and the control condition (neutral face without movement, C) in grey. Significant post-hoc t-tests (p<0.0055 with Bonferroni correction) are marked with an asterisk (B).

https://doi.org/10.1371/journal.pone.0069886.g003

Table 2. Activation clusters specific for the emotional facial expressions (Contrast: Happy_Imitation>Non-Emotional_Imitation).

https://doi.org/10.1371/journal.pone.0069886.t002

FMRI Data: Analyses of Insula and Amygdala Activation

To compare the activation of the right anterior insula across the six conditions of interest, contrast estimates were extracted at the peak voxel of the whole-brain analysis H_IMI>NE_IMI (MNI [46 8 2]) from the normalized and 8-mm-smoothed images and entered into a repeated-measures ANOVA with the factors task (observation, execution, imitation) and facial expression (happy, non-emotional). This analysis revealed a significant main effect for the factor facial expression, F(1, 26) = 38.6, MSE = 0.45, p<0.001, as well as for the factor task, F(2, 52) = 30.96, MSE = 0.70, p<0.001. There was no significant interaction between the two factors, F(2, 52) = 0.64, MSE = 0.33, p = 0.53. The Bonferroni-corrected threshold for the dependent post-hoc t-tests was p<0.0055. Post-hoc tests between conditions revealed significantly stronger activation when facial expressions were imitated or executed than when they were only observed. Furthermore, execution of the non-emotional facial expressions revealed stronger insula activation than imitation of the non-emotional facial expressions. Observation of the happy facial expressions resulted in significantly stronger insula activation than observation of the non-emotional facial expressions. The same was true for imitation, as already shown in the whole-brain analysis (for means, standard deviations and post-hoc tests see Table 3).

For the amygdala, contrast estimates were extracted from the same contrast (H_IMI>NE_IMI) at the peak voxel of the whole-brain analysis (MNI [28 −2 −30]) from the normalized and 8-mm-smoothed images and entered into a repeated-measures ANOVA with the factors task (observation, execution, imitation) and facial expression (happy, non-emotional). The repeated-measures ANOVA revealed a significant main effect for the factor facial expression, F(1, 26) = 41.053, MSE = 0.14, p<0.001, but not for the factor task, F(2, 52) = 0.93, MSE = 0.23, p = 0.401. There was a significant interaction between facial expression and task, F(2, 52) = 6.77, MSE = 0.1, p = 0.002. Post-hoc t-tests (Bonferroni-corrected, p<0.0055) indicated that the right amygdala showed significantly stronger activation during observation than during execution of the non-emotional facial expressions. Execution of happy facial expressions led to significantly stronger right amygdala activation than execution of the non-emotional facial expressions. The same was true for the imitation condition, as already shown by the whole-brain analysis, but not for the observation condition, t(26) = 1.89, p = 0.07.

No other comparison survived the Bonferroni correction, but there were two trends: insula activation tended to be stronger during execution of the happy compared with execution of the non-emotional facial expressions, t(26) = 2.96, p = 0.006, and amygdala activation tended to be stronger during observation than during imitation of the non-emotional facial expressions, t(26) = 2.48, p = 0.020 (see also Table 3).

In summary, the analyses of the fMRI data revealed the following results (see Figure 3): First, imitation and execution of facial expressions involved the right insula significantly more strongly than observation of facial expressions did. Second, the right insula showed significantly stronger activity for the happy than for the non-emotional facial expressions, irrespective of whether the facial expressions were observed or imitated. Third, the right amygdala showed an increased difference between the happy and the non-emotional facial expressions during imitation and execution in comparison to observation.

Discussion

We examined neural correlates specific to happy in comparison to non-emotional facial expressions during imitation. In line with previous research, happy and non-emotional facial expressions activated a similar bilateral imitation network encompassing (pre-)motor, somatosensory, and superior and middle temporal cortices. Imitation of happy facial expressions contrasted with imitation of non-emotional facial expressions revealed right-hemispheric activity in the pre-SMA, the insula extending to premotor cortices, the amygdala and the medial temporal cortex.

Insula and IFG op

Involvement of the insula in emotional tasks has been shown in several studies (for reviews see [21], [32]). Anders and colleagues [33], for example, showed that increased activation of the right insula is correlated with an increase in the experience of negative valence. There is growing consensus that the insular cortex mediates the experience of emotions by representing changes in internal physical states [21]. Critchley and colleagues [34] showed insula involvement during explicit access to visceral information: activation of the right insula correlated positively with the accuracy of detecting one's own heartbeats, and efficiency in the heartbeat task was associated with trait ratings of negative affect. The involvement of the right anterior insula during subjective ratings of changes in bodily states (there, temperature changes) was also shown by Craig and colleagues [35].

We found significantly stronger insula activation during both imitation and observation of happy in comparison to non-emotional facial expressions (with a trend for the execution of facial expressions). The awareness of bodily states might be stronger when an emotionally salient stimulus is presented (here, the actor depicting a happy facial expression). Summing up our results, insula activation was significantly stronger during motor tasks and when an emotionally salient stimulus was presented. Both effects are in line with Craig [32], who claims a specific role for this region in awareness in general.

In a previous magnetoencephalography (MEG) study, Chen and colleagues [36] asked participants to put themselves into the emotion of visually presented facial expressions. An increase in insula activation was found for exposure to happy and disgusted facial expressions in comparison to neutral faces. Significantly stronger activation of the right anterior insula during disgust in comparison to happiness was also shown, but relatively late after stimulus onset. The authors speculated that insula activation may decline earlier during visually induced pleasant emotions than during observation of emotional facial expressions with negative valence. A faster decline of the insula response during observation in comparison to execution/imitation of facial expressions might also be due to differences in internal states between the tasks. This may explain the significant differences between the perceptive and active tasks in our study. Further MEG studies with high temporal resolution would be needed to examine differences in the time course of the insula response in the different tasks.

Interestingly, the activation of the insula specific to happy affect extended to the IFG op (BA 44) and the pars triangularis (BA 45). Several studies have found activation in IFG op during imitation of hand movements (e.g. [37]) or during imitation of emotional facial expressions [9], [13], [16]. IFG op is part of the putative human mirror neuron system, which is believed to mediate action understanding [12], [14] by decoding the goal of an observed action [38], [39]. Evidence for the importance of the IFG op for imitation came from a transcranial magnetic stimulation (TMS) study, which found that perturbation of the left and right IFG op significantly increased error rates in a finger imitation task [15]. However, it is still under debate whether this area plays a pivotal role in imitation (e.g. [9], [37], [40], [41]) or whether its involvement in imitation is overestimated [42]. For example, IFG op activity might be confounded by other cognitive functions such as execution timing [43]. Importantly, in our study execution timing was required during imitation of both facial expressions; therefore, timing issues cannot explain the increased activation for the happy (as compared to the non-emotional) facial expressions. In a recent study, a cluster encompassing the right anterior insula and the adjacent frontal operculum was triggered by IFG (BA 45) activation during observation and experience of disgust [44]. Combining these results with the model of the neural correlates of imitation in which visual input is forwarded through STS and IPL to IFG op [9], [11], we hypothesize that the IFG might trigger increased insula activation during affective imitation.

Amygdala

The amygdala, a central part of the emotion circuitry (for a review see [45]), was similarly activated during observation of emotional and non-emotional facial expressions. Thus, the finding of van der Gaag et al. [19] was replicated by our results. A meta-analysis by Sergerie and colleagues [45] revealed generally stronger effect sizes of amygdala activation during observation of faces compared with pictures. The amygdala might act as a ‘relevance detector’ during observation of biologically relevant stimuli [46].

While the difference between the emotional and non-emotional facial expressions was small during observation, we found an affect-specific increase of right amygdala activation during imitation and execution. Executing emotional facial expressions has been claimed to increase autonomic arousal and emotional experience [47]. Phillips and colleagues [48] argued that the autonomic response during the experience of an affective state might be mediated by amygdala activation. In line with this, infusing a γ-aminobutyric acid (GABA-A) antagonist into the amygdala of rats led to increased blood pressure and heart rate [46]. Moreover, the amygdala is reciprocally connected to the hypothalamus and brain-stem regions involved in autonomic control [49]. In the current study, the autonomic response might have been stronger during the execution of emotional facial expressions than during the execution of non-emotional facial expressions.

In another line of evidence, a recent study by Hennenlotter and colleagues [50] showed that facial feedback during imitation of emotional facial expressions modulates amygdala activation. When feedback from the facial muscles was inhibited by treatment with botulinum toxin, left amygdala activation decreased. Additionally, the contraction intensity of the facial muscles correlated with left amygdala activity when the muscles were not treated with botulinum toxin. Lee and colleagues [10] also found correlations between the magnitude of facial movements during imitation of happy facial expressions and activation of the left amygdala. Thus, the increased affect-specific amygdala activity during the motor tasks in the current study could also be a result of increased facial feedback during the emotional motor conditions.

Finally, it should be mentioned that the affect-specific activation of the amygdala during the emotional motor conditions might also be due to faster habituation of the amygdala during observation of facial expressions. While the amygdala response has been shown to habituate during observation of emotional facial expressions (e.g. [51]), experiencing emotions did not lead to amygdala habituation [52]. Thus, affect-specific amygdala activation during the emotional motor tasks in comparison with observation may be caused by habituation of the amygdala during observation, especially in blocked stimulus presentation designs.

Lateralization

We found bilateral activations during imitation of positive and non-emotional facial expressions. Although possible lateralization during imitation has been debated controversially [53], a bilateral network is in line with a recent meta-analysis on imitation. It has to be noted, however, that most studies included in this meta-analysis examined hand movements (29 out of 35) and only a few explored facial expressions [53].

When imitation of happy facial expressions was compared with imitation of non-emotional facial expressions, we found significant activation peaks in the right hemisphere only. Concerning the insula, previous studies on imitation, execution or observation of emotional facial expressions report conflicting results. Carr et al. [9] reported bilateral insula activation during observation but only left-hemispheric activation during imitation of emotional facial expressions. This finding was underlined by van der Gaag and colleagues [13], who found bilateral insula activation when observation of emotional facial expressions was contrasted with observation of non-emotional facial expressions. In contrast, left-hemispheric activation of the insula was found during observation and execution of pleasant facial affect by Hennenlotter and colleagues [14]. Lee et al. [10] noted significant left insula involvement during imitation of angry and licking facial expressions, but right insula involvement during imitation of chewing facial expressions. Nevertheless, the right insula activation in our study is in line with the finding that the magnitude of facial movement during imitation of pleasant affect predicts activation of the right insula [10] and with studies relating activation of the right insula to interoception [34], [35].

Previous imaging studies examining emotion processing reported increased amygdala activation more often in the left than in the right hemisphere (for a meta-analysis see [54]; for a review see [55]). Contrary to this, we found increased activation of the right amygdala. However, studies examining the putative human MNS have often reported activation of the right amygdala during observation [9], [13], [14], execution [14] or imitation [9] of emotional facial expressions as well. Note that our previous analysis of shared representations of observation and execution of positive facial affect also revealed increased activation of the right amygdala [20]. Moreover, right amygdala activation has been assumed to be associated with rather automatic emotion processing (there, mood induction), whereas increased left amygdala activation was found during cognitive, intentional emotion regulation [52].

Apart from these results, recent research suggests an influence of the acquisition parameters of EPI sequences on the laterality of amygdala activation. Mathiak and colleagues [56] investigated the influence of phase-encoding direction on the laterality of amygdala activation, assuming that lateralization in previous studies might have been due to strong medial temporal lobe susceptibility artefacts [56]. In that study, a significant interaction between hemisphere and phase-encoding direction was noted when participants observed fearful faces masked with neutral faces: increased left amygdala activation was found for left-to-right but not for right-to-left phase encoding. The authors stated that amygdala lateralization has to be discussed carefully until EPI sequences are optimized. However, we found the same lateralization pattern for the insula, which is not affected by susceptibility artefacts due to phase-encoding direction. In total, the lateralization of amygdala and insula activation as part of the perception-execution matching system might be influenced by valence, task or cognitive load.

Limitations

A limitation of this study is the use of a blocked design, as we cannot exclude bias due to differential habituation of the amygdala response in the different tasks. Furthermore, we compared a happy facial expression (a smile) to a non-emotional facial expression (lip protrusion). We tried to choose similar movements for the happy and non-emotional facial expressions (in both expressions the lips are moved). Nevertheless, the amount of motion or the differential usage of facial muscles to produce the two facial expressions may be possible confounds that are not excluded by our study design. Likewise, we cannot generalize the results to other emotions. Although both structures have been shown to be involved in observation (e.g. [13]) and imitation of different emotions [9], subsequent studies directly comparing the emotions in all three tasks are needed to clarify this issue. In particular, there is very little evidence concerning differences between emotions during pure execution of emotional facial expressions, as most studies investigated observation and/or imitation of facial expressions (e.g. [9], [19]).

Conclusion

Our findings support the notion that insula and amygdala activation is increased, in addition to activation in the putative human mirror neuron system, during the imitation of positive facial expressions. We found increased activation of the right insula during imitation and observation of positive facial expressions, which may be related to enhanced emotional awareness during the presentation of emotionally salient stimuli. In contrast, the increase of right amygdala activation to happy in comparison to non-emotional facial expressions was larger during imitation and execution than during observation. This might be due to an enhanced autonomic reaction, to feedback from the facial muscles to the amygdala, or to differences in the habituation of the amygdala response. In addition, we found that the insula, but not the amygdala, showed enhanced activity during execution and imitation compared with observation of happy facial expressions.

Acknowledgments

We thank all participants, Sebastian Kath for help with preparing the stimulus material, Simon Eickhoff and Thilo Kellermann for statistical advice, Gina Joue for editing the manuscript, and Cordula Kemper for assistance during scanning.

Author Contributions

Conceived and designed the experiments: TK AP SA. Performed the experiments: AP. Analyzed the data: AP KM. Contributed reagents/materials/analysis tools: AP MSR. Wrote the paper: AP. Revision of the manuscript: SA MSR KM TK.

References

  1. Meltzoff AN, Moore MK (1997) Explaining facial imitation: A theoretical model. Early Development & Parenting 6: 179–192.
  2. Williamson RA, Jaswal VK, Meltzoff AN (2010) Learning the Rules: Observation and Imitation of a Sorting Strategy by 36-Month-Old Children. Developmental Psychology 46: 57–65.
  3. Stel M, Vonk R (2010) Mimicry in social interaction: Benefits for mimickers, mimickees, and their interaction. British Journal of Psychology 101: 311–323.
  4. Kuhn S, Muller BCN, van Baaren RB, Wietzker A, Dijksterhuis A, et al. (2010) Why do I like you when you behave like me? Neural mechanisms mediating positive consequences of observing someone being imitated. Social Neuroscience 5: 384–392.
  5. Dijksterhuis A, Bargh JA (2001) The perception-behavior expressway: Automatic effects of social perception on social behavior. Advances in Experimental Social Psychology, Vol 33. San Diego: Academic Press Inc. pp. 1–40.
  6. Dimberg U, Thunberg M, Grunedal S (2002) Facial reactions to emotional stimuli: Automatically controlled emotional responses. Cognition & Emotion 16: 449–471.
  7. Dimberg U, Thunberg M, Elmehed K (2000) Unconscious facial reactions to emotional facial expressions. Psychol Sci 11: 86–89.
  8. Lee TW, Dolan RJ, Critchley HD (2008) Controlling emotional expression: behavioral and neural correlates of nonimitative emotional responses. Cereb Cortex 18: 104–113.
  9. Carr L, Iacoboni M, Dubeau MC, Mazziotta JC, Lenzi GL (2003) Neural mechanisms of empathy in humans: a relay from neural systems for imitation to limbic areas. Proc Natl Acad Sci U S A 100: 5497–5502.
  10. Lee T-W, Josephs O, Dolan RJ, Critchley HD (2006) Imitating expressions: emotion-specific neural substrates in facial mimicry. Social Cognitive and Affective Neuroscience 1: 122–135.
  11. Iacoboni M, Dapretto M (2006) The mirror neuron system and the consequences of its dysfunction. Nat Rev Neurosci 7: 942–951.
  12. Rizzolatti G, Craighero L (2004) The mirror-neuron system. Annu Rev Neurosci 27: 169–192.
  13. van der Gaag C, Minderaa RB, Keysers C (2007) Facial expressions: what the mirror neuron system can and cannot tell us. Soc Neurosci 2: 179–222.
  14. Hennenlotter A, Schroeder U, Erhard P, Castrop F, Haslinger B, et al. (2005) A common neural basis for receptive and expressive communication of pleasant facial affect. Neuroimage 26: 581–591.
  15. Heiser M, Iacoboni M, Maeda F, Marcus J, Mazziotta JC (2003) The essential role of Broca's area in imitation. European Journal of Neuroscience 17: 1123–1128.
  16. Leslie KR, Johnson-Frey SH, Grafton ST (2004) Functional imaging of face and hand imitation: towards a motor theory of empathy. Neuroimage 21: 601–607.
  17. Keysers C, Gazzola V (2006) Towards a unifying neural theory of social cognition. In: Anders S, Ende G, Junghoffer M, Kissler J, Wildgruber D, editors. Understanding Emotions. pp. 379–401.
  18. Montgomery KJ, Haxby JV (2008) Mirror neuron system differentially activated by facial expressions and social hand gestures: a functional magnetic resonance imaging study. J Cogn Neurosci 20: 1866–1877.
  19. van der Gaag C, Minderaa RB, Keysers C (2007) The BOLD signal in the amygdala does not differentiate between dynamic facial expressions. Soc Cogn Affect Neurosci 2: 93–103.
  20. Kircher T, Pohl A, Krach S, Thimm M, Schulte-Ruther M, et al. (2012) Affect-specific activation of shared networks for perception and execution of facial expressions. Social Cognitive and Affective Neuroscience.
  21. Phan KL, Wager T, Taylor SF, Liberzon I (2002) Functional neuroanatomy of emotion: a meta-analysis of emotion activation studies in PET and fMRI. Neuroimage 16: 331–348.
  22. Oldfield RC (1971) The assessment and analysis of handedness: the Edinburgh inventory. Neuropsychologia 9: 97–113.
  23. Ekman P, Friesen WV (1978) Facial action coding system. Palo Alto, CA: Consulting Psychologists Press.
  24. Ashburner J, Friston KJ (2003) Rigid Body Registration. In: Frackowiak RS, editor. Human Brain Function. 2nd ed. San Diego: Academic Press.
  25. Collins DL, Neelin P, Peters TM, Evans AC (1994) Automatic 3D intersubject registration of MR volumetric data in standardized Talairach space. J Comput Assist Tomogr 18: 192–205.
  26. Evans AC, Marrett S, Neelin P, Collins L, Worsley K, et al. (1992) Anatomical mapping of functional activation in stereotactic coordinate space. Neuroimage 1: 43–53.
  27. Ashburner J, Friston KJ (2005) Unified segmentation. Neuroimage 26: 839–851.
  28. Eickhoff SB, Paus T, Caspers S, Grosbras MH, Evans AC, et al. (2007) Assignment of functional activations to probabilistic cytoarchitectonic areas revisited. Neuroimage 36: 511–521.
  29. Eickhoff SB, Stephan KE, Mohlberg H, Grefkes C, Fink GR, et al. (2005) A new SPM toolbox for combining probabilistic cytoarchitectonic maps and functional imaging data. Neuroimage 25: 1325–1335.
  30. Maldjian JA, Laurienti PJ, Kraft RA, Burdette JH (2003) An automated method for neuroanatomic and cytoarchitectonic atlas-based interrogation of fMRI data sets. Neuroimage 19: 1233–1239.
  31. Caspers S, Geyer S, Schleicher A, Mohlberg H, Amunts K, et al. (2006) The human inferior parietal cortex: cytoarchitectonic parcellation and interindividual variability. Neuroimage 33: 430–448.
  32. Craig AD (2009) How do you feel - now? The anterior insula and human awareness. Nature Reviews Neuroscience 10: 59–70.
  33. Anders S, Lotze M, Erb M, Grodd W, Birbaumer N (2004) Brain activity underlying emotional valence and arousal: A response-related fMRI study. Human Brain Mapping 23: 200–209.
  34. Critchley HD, Wiens S, Rotshtein P, Ohman A, Dolan RJ (2004) Neural systems supporting interoceptive awareness. Nature Neuroscience 7: 189–195.
  35. Craig AD, Chen K, Bandy D, Reiman EM (2000) Thermosensory activation of insular cortex. Nat Neurosci 3: 184–190.
  36. Chen YH, Dammers J, Boers F, Leiberg S, Edgar JC, et al. (2009) The temporal dynamics of insula activity to disgust and happy facial expressions: a magnetoencephalography study. Neuroimage 47: 1921–1928.
  37. Iacoboni M, Woods RP, Brass M, Bekkering H, Mazziotta JC, et al. (1999) Cortical mechanisms of human imitation. Science 286: 2526–2528.
  38. Koski L, Wohlschlager A, Bekkering H, Woods RP, Dubeau MC, et al. (2002) Modulation of motor and premotor activity during imitation of target-directed actions. Cereb Cortex 12: 847–855.
  39. Rizzolatti G, Sinigaglia C (2010) The functional role of the parieto-frontal mirror circuit: interpretations and misinterpretations. Nat Rev Neurosci 11: 264–274.
  40. Bien N, Roebroeck A, Goebel R, Sack AT (2009) The brain's intention to imitate: the neurobiology of intentional versus automatic imitation. Cereb Cortex 19: 2338–2351.
  41. Fabbri-Destro M, Rizzolatti G (2008) Mirror neurons and mirror systems in monkeys and humans. Physiology (Bethesda) 23: 171–179.
  42. Molenberghs P, Cunnington R, Mattingley JB (2009) Is the mirror neuron system involved in imitation? A short review and meta-analysis. Neurosci Biobehav Rev 33: 975–980.
  43. Makuuchi M (2005) Is Broca's area crucial for imitation? Cereb Cortex 15: 563–570.
  44. Jabbi M, Keysers C (2008) Inferior frontal gyrus activity triggers anterior insula response to emotional facial expressions. Emotion 8: 775–780.
  45. Sergerie K, Chochol C, Armony JL (2008) The role of the amygdala in emotional processing: a quantitative meta-analysis of functional neuroimaging studies. Neuroscience & Biobehavioral Reviews 32: 811–830.
  46. Sander D, Grafman J, Zalla T (2003) The human amygdala: an evolved system for relevance detection. Rev Neurosci 14: 303–316.
  47. Adelmann PK, Zajonc RB (1989) Facial efference and the experience of emotion. Annual Review of Psychology 40: 249–280.
  48. Phillips ML, Drevets WC, Rauch SL, Lane R (2003) Neurobiology of emotion perception I: The neural basis of normal emotion perception. Biological Psychiatry 54: 504–514.
  49. LeDoux JE (2000) Emotion circuits in the brain. Annual Review of Neuroscience 23: 155–184.
  50. Hennenlotter A, Dresel C, Castrop F, Ceballos-Baumann AO, Wohlschlager AM, et al. (2009) The link between facial feedback and neural activity within central circuitries of emotion–new insights from botulinum toxin-induced denervation of frown muscles. Cereb Cortex 19: 537–542.
  51. Wright CI, Fischer H, Whalen PJ, McInerney S, Shin LM, et al. (2001) Differential prefrontal cortex and amygdala habituation to repeatedly presented emotional stimuli. Neuroreport 12: 379–383.
  52. Dyck M, Loughead J, Kellermann T, Boers F, Gur RC, et al. (2010) Cognitive versus automatic mechanisms of mood induction differentially activate left and right amygdala. Neuroimage.
  53. Caspers S, Zilles K, Laird AR, Eickhoff SB (2010) ALE meta-analysis of action observation and imitation in the human brain. Neuroimage 50: 1148–1167.
  54. Wager TD, Phan KL, Liberzon I, Taylor SF (2003) Valence, gender, and lateralization of functional brain anatomy in emotion: a meta-analysis of findings from neuroimaging. Neuroimage 19: 513–531.
  55. Baas D, Aleman A, Kahn RS (2004) Lateralization of amygdala activation: a systematic review of functional neuroimaging studies. Brain Research Reviews 45: 96–103.
  56. Mathiak KA, Zvyagintsev M, Ackermann H, Mathiak K (2012) Lateralization of amygdala activation in fMRI may depend on phase-encoding polarity. MAGMA 25: 177–182.