
Neural Signals Evoked by Stimuli of Increasing Social Scene Complexity Are Detectable at the Single-Trial Level and Right Lateralized

  • Carlos P. Amaral,

    Affiliation IBILI—Institute for Biomedical Imaging in Life Sciences, Faculty of Medicine, University of Coimbra, Coimbra, Portugal

  • Marco A. Simões,

    Affiliation IBILI—Institute for Biomedical Imaging in Life Sciences, Faculty of Medicine, University of Coimbra, Coimbra, Portugal

  • Miguel S. Castelo-Branco

    mcbranco@fmed.uc.pt

    Affiliations IBILI—Institute for Biomedical Imaging in Life Sciences, Faculty of Medicine, University of Coimbra, Coimbra, Portugal, ICNAS, Brain Imaging Network of Portugal, Coimbra, Portugal

Abstract

Classification of neural signals at the single-trial level and the study of their relevance in affective and cognitive neuroscience are still in their infancy. Here we investigated the neurophysiological correlates of conditions of increasing social scene complexity using 3D human models as targets of attention, which may also be important in autism research. Challenging single-trial statistical classification of EEG neural signals was attempted for the detection of oddball stimuli with increasing social scene complexity. Stimuli had an oddball structure and were as follows: 1) flashed schematic eyes, 2) simple 3D faces flashed between averted and non-averted gaze (only eye position changing), 3) simple 3D faces flashed between averted and non-averted gaze (head and eye position changing), 4) an animated avatar alternating its gaze direction to the left and to the right (head and eye position), 5) an environment with 4 animated avatars, all of which change gaze and one of which is the target of attention. We found a late (> 300 ms) neurophysiological oddball correlate for all conditions irrespective of their complexity, as assessed by repeated measures ANOVA. We attempted single-trial detection of this signal with automatic classifiers and obtained a significant balanced classification accuracy of around 79%, which is noteworthy given the amount of scene complexity. Lateralization analysis showed a specific right lateralization only for the more complex, realistic social scenes. In sum, complex ecological animations with social content elicit neurophysiological events which can be characterized even at the single-trial level, and these signals are right lateralized. These findings pave the way for neuroscientific studies in affective neuroscience based on complex social scenes, and the detectability at the single-trial level suggests the feasibility of brain-computer interfaces applicable to social cognition disorders such as autism.

INTRODUCTION

Investigating the sensitivity of neurophysiological responses to complex social scenes is an increasingly recognized topic in affective neuroscience [1,2]. It is also of paramount importance in fields such as developmental neuroscience [3] and autism research, where complex social attention deficits are present [2,4]. Moreover, if these responses could be studied at the single- or near-single-trial level, this might pave the way to brain-computer interfaces for training the social cognition deficits that characterize these disorders. The problems associated with the low signal-to-noise ratio of neurophysiological responses can now be overcome by advanced statistical classification methods that operate at the single-trial level [5,6].

Attention to social stimuli such as faces has often been studied with oddball paradigms which use simple face presentations as target stimuli and the average of many responses for the analysis (e.g., [7–10]). As an example, the P300 oddball signal is a well-known neural signature of the attention processes underlying detection of rare items in a series of distinct stimulus types. The P300 has classically been reported as an enhanced positive-going component with a latency of about 300 milliseconds (ms) and a scalp distribution typically over the midline electrodes (for a review see [11–13]).

The main goal of this study was to investigate attention to complex social stimuli and scenes at the single-trial level, and the relevance of hemispheric laterality in this process. This is an important qualitative step, because we focus on single or few trials in addition to averaged neurophysiological responses. The suitability of oddball paradigms for such single-trial analyses has been empirically proven and is well documented in the literature. The reason for their use is that it is possible to quickly “calibrate” and model the P300 in individual subjects and use it in statistical classification approaches [5,6]. Conflict paradigms or attentional-bias paradigms have not yet been proven suitable for single-trial analyses, unlike P300 approaches. Accordingly, P300-based oddball paradigms are often used in brain-computer interfaces (BCIs), systems that allow individuals to communicate without having to use verbal or motor means [14–17]. The fact that the P300 is a robust signal that can be identified even at the single-trial level makes it a favourable neurophysiological component for providing good communication speeds in BCIs. Wang and colleagues [18] achieved an information transfer rate of 12 characters per minute with this type of paradigm, which is very significant in this field. These approaches therefore take advantage of recent progress in statistical classification methods (for a review see [19]) to identify P300 waveforms even at the single-trial level (e.g., [6,20]).

The use of faces as targets of attention has already proven successful in oddball-based BCIs [21–23] and has also been used to study healthy social cognition [24] and disorders such as autism [25,26], prosopagnosia [27] and social phobia [28]. Moreover, a few studies aiming at BCI applications have tried to integrate three-dimensional (3D) stimuli in oddball paradigms [29–31]. They showed that it is possible to measure a P300 response to 3D stimuli, though none used realistic or complex social scenes as targets of attention.

The eyes are a powerful route of non-verbal information and draw most of our attentional resources during social interactions. They can be used to determine the focus of someone's attention, the basis of joint attention. This ability is established very early in development (for a review see [32]), which indicates the high relevance of this kind of gaze processing in human evolution. Gaze shifts of others towards any point in the environment can trigger a reflexive redirection of one's own attentional focus [33,34]. Thus, we believe that this reflexive attentional process can be studied in terms of the mechanisms involved in novelty processing embedded in oddball paradigms. Therefore, we envisaged the introduction of complex social scenes containing non-natural (flashed) or natural eye/head-gaze shifts as targets of attention in oddball paradigms.

Several studies have described the involvement of the superior temporal sulcus (STS) in the processing of relevant and familiar types of biological motion such as human body motion [35,36], expression of emotions [37–39], facial motion due to speech production [40,41], or in complex scenes such as movies [42,43]. Additionally, this region has been shown to respond to natural images of facial motion [44]. Taken together, such studies suggest that initial analysis of social cues occurs in the STS region, which is in a privileged anatomical location to integrate information derived from both the ventral “what” and the dorsal “where” visual pathways [45]. Hein and Knight [46] argued that the function of the STS depends largely upon the co-activations of connected areas. Haxby et al. [47] postulate that the posterior STS is responsible for the processing of quickly changing social features, such as facial expressions. Nummenmaa and Calder [48] propose that the posterior STS is related to processing the intentionality of others’ actions. Lahnakoski et al. [49] suggested that the posterior STS region is functionally tightly coupled with other brain regions and might work as a convergence (integration) point of social information processed in other functionally connected sub-systems.

To our knowledge, studies of complex social cognition close to the single-trial level have not been attempted from the cognitive neuroscience point of view. We based the current study on the introduction of hierarchically complex and realistic stimuli with social content. In this way we could dissect the cognitive networks underlying normal attention to stimuli of complex social significance. We believe, as Kingstone proposed [50], that studying the neural correlates of increasingly complex representations of social interactions can provide critical insights into the nature of cognitive processing in the domain of social attention. Furthermore, Mattout [51] highlighted the strong need for experiments that could help identify realistic and efficient models of social interactions that BCIs could then use to instantiate more productive interactions between an adaptive machine and a patient. If the neural signals related to complex social cognition could be detected at the single-trial level, this would potentially pave the way for using such stimuli in future approaches to cognitive training in disorders of social cognition such as autism. A recent review of innovative computer technology for teaching social skills to individuals with autism [52] reveals the promising potential of this type of approach, in particular if realistic scenes are used. The studies covered in that review reported significant improvements in the targeted social skills; however, they did not verify the transfer of these skills to more realistic and meaningful contexts. Other studies have explored methods for systematically teaching joint attention to children with autism [53–55]. These studies embedded motivating social interactions into the interventions, which effectively improved children’s social competences; however, the infrequent implementation of the protocols compromised carryover after the interventions ended. BCIs able to detect neural signals related to complex social cognition would therefore enable the use of structured, well-controlled, realistic and immersive social interactions in computerized systems, which would in turn allow the interventions to be repeated as often as required.

Thus, we directly attempted to test the lateralization and detect the neurophysiological correlates of attention to complex social stimuli at a single-trial level as a way to prove the usability of this concept in BCI applications.

METHODS

Ethics Statement

This study and all its procedures were reviewed and approved by the Ethics Commission of the Faculty of Medicine of the University of Coimbra (Comissão de Ética da Faculdade de Medicina da Universidade de Coimbra) and were conducted in accordance with the Declaration of Helsinki. All participants were recruited from our database of voluntary participants, with no monetary compensation, and all gave written informed consent.

Participants

All participants (n = 17, 11 males, 6 females, average age 22.8 years (SD = 4.1), range 20–33 years) had normal or corrected-to-normal vision and no history of neurological disorders or any other major health problems. All participants were naive regarding the purpose of the study. Participants took part in EEG recordings during five different experimental paradigms.

Experimental paradigms

We constructed five oddball experimental paradigms using the Vizard Virtual Reality Toolkit (WorldViz). They ranged from simple flashing-stimulus paradigms to realistic animations of human models as targets of attention. The flashing paradigms were labelled ‘Flashed Schematic Eyes’, ‘Flashed Face—Eye position change’ and ‘Flashed Face—Eye and Head position change’. The paradigms with animations as targets of attention were labelled ‘Animated 3D body—gaze change in 1 avatar’ and ‘Animated 4 avatar environment—gaze change in 4 avatars’.

Flashing paradigms description (Fig. 1):

  • ‘Flashed Schematic Eyes’: The non-target events of this paradigm consisted of the appearance of two 3D models of “balls” (resembling eyes) on a grey background, for 500 ms. In the target events the balls appeared slightly rotated relative to their position in the non-target events. The task was to mentally count the occurrences of these target events;
  • ‘Flashed Face—Eye position change’: The non-target events consisted of the appearance of a 3D model of a face on a grey background, “looking” at the participant for 500 ms. In the target events the face appeared with the eyes gazing to its right (the participant’s left). The task was to mentally count how many times the face appeared with the eyes gazing to the participant’s left;
  • ‘Flashed Face—Eye and Head position change’: The non-target events consisted of the appearance of a 3D model of a face looking at the participant for 500 ms. In the target events the face appeared turned towards the participant’s left side. The participant’s task was to mentally count the times the face appeared facing the participant’s left.

Fig 1. Schematic illustration of the 3 flashing oddball paradigms.

Top—‘Flashed Schematic Eyes’ paradigm: the subjects were asked to count the occurrences of slightly rotated balls. Middle—‘Flashed Face—Eye position change’ paradigm: the target event is a change in the direction of the eyes. Bottom—‘Flashed Face—Eye and Head position change’: the target event is the slight head rotation.

https://doi.org/10.1371/journal.pone.0121970.g001

Animated paradigms description:

  • ‘Animated 3D body—gaze change in 1 avatar’: One 3D model of a human-like avatar faces the participant (presented from the shoulders up) against a grey background. The events are animations of this avatar. The non-target event is the rotation of the avatar’s head to its left side (a continuous, realistic animation lasting 900 ms). The target event is the rotation of the head to its right side (also a continuous, realistic animation of 900 ms). The participant’s task was to mentally count “how many times the person looks to its right” (Fig. 2).
  • ‘Animated 4 avatar environment—gaze change in 4 avatars’: Four different avatars are arranged in a diamond against a grey background. The events consist of the rotation of the head of one of the four avatars to the right (a continuous, realistic animation lasting 900 ms). The target event is the rotation of the head of the avatar at the diamond’s top edge to the right (see Fig. 3). The task was to mentally count “how many times the person at the top looks to its right side”.

Fig 2. ‘Animated 3D body—gaze change in 1 avatar’ paradigm.

The participants were instructed to pay attention to the avatar’s head turning towards the participant’s left. The non-target animation is the head turning towards the participant’s right.

https://doi.org/10.1371/journal.pone.0121970.g002

Fig 3. ‘Animated 4 avatar environment—gaze change in 4 avatars’ paradigm.

The target of attention is the animation of the top avatar. The task was to count how many times the top avatar averted its gaze.

https://doi.org/10.1371/journal.pone.0121970.g003

We ran 10 blocks of 50 events for each paradigm and allowed the participants to rest after every 5 blocks. In all paradigms the target events were displayed randomly among the non-target events. Each block contained 10 target events, giving an occurrence probability of 1/5. Two target events never occurred consecutively, and in the ‘Animated 4 avatar environment—gaze change in 4 avatars’ paradigm the same avatar never turned its head twice in a row. The stimulus onset asynchrony (SOA) of the flashing paradigms was 1000 ms and the inter-stimulus interval (ISI) was 500 ms. In the animated paradigms the SOA was 1100 ms and the time between the animations of the avatars was 200 ms (thus the animation time was 900 ms). This animation time balances the time needed to maintain a realistic movement of the avatar’s head against the need to keep the total experiment time comfortable. A sketch of how such constrained event sequences can be generated is shown below.
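For illustration, the sequence constraints above can be reproduced with a few lines of code. The following is a minimal Python sketch of our own (the actual experiments were scripted with the Vizard toolkit; function and variable names are ours), using rejection sampling to enforce the no-consecutive-targets rule:

```python
import random

def make_block(n_events=50, n_targets=10, seed=None):
    """Generate one oddball block as a list of 'T' (target) / 'N' (non-target)
    labels with exactly n_targets targets and no two targets in a row."""
    rng = random.Random(seed)
    while True:  # rejection sampling; a valid shuffle is found quickly
        labels = ['T'] * n_targets + ['N'] * (n_events - n_targets)
        rng.shuffle(labels)
        if all(not (a == 'T' and b == 'T') for a, b in zip(labels, labels[1:])):
            return labels

block = make_block(seed=1)
assert block.count('T') == 10  # target occurrence probability of 1/5
```

The four-avatar paradigm would add one more rejection rule of the same kind: resample the distractor-avatar assignment whenever the same avatar would animate twice in a row.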

We introduced hierarchical social scene complexity in these paradigms to uncover the degree of complexity that can be introduced in oddball paradigms while the neural correlates of attention to social stimuli remain detectable at the single-trial level. The social scene complexity of these paradigms was organized according to a matrix ordering three objective parameters of the scene: the number of items in the scene (including multipart social objects), the elements defining the trajectory of the social object, and the presence of movement. See Table 1.

Table 1. Matrix describing the ordinal criteria for the paradigms’ social scene complexity. These criteria define the hierarchy of social complexity of the paradigms.

https://doi.org/10.1371/journal.pone.0121970.t001

After each block the participants were asked how many target events they counted.

Data Acquisition

Participants were seated about 60 centimetres from the screen (HP L1710 17-inch LCD monitor; frame rate 60 Hz), and the EEG was recorded using a Brain Products package.

Each participant’s scalp was first cleaned using abrasive gel and the actiCAP cap was then placed on their head. Data were recorded from 16 Ag/AgCl active electrodes (Brain Products), placed at the Fp1, F3, Fz, F4, FCz, C3, Cz, C4, T4, P7, P3, Pz, P4, P8, O1, and O2 locations of the international 10–20 system. The ground electrode was placed at the AFz position and the reference electrode at the T3 position. Impedances were kept under 10 kΩ. The electrodes were connected directly to the 16-channel Brain Products V-Amp amplifier and sampled at 1000 Hz. EEG data were recorded with the Brain Products Brain Recorder software using a notch filter at 50 Hz while the stimuli were presented. For each paradigm the participants were informed about the respective task. Each experimental procedure (preparation + 5 paradigms) took around 70 minutes to complete.

Data analysis

We performed an off-line analysis with Brain Vision Analyzer 2 (Brain Products). The average of the T4 and T3 channels was used as a new reference, simulating as closely as possible a linked-ears reference given the proximity of these electrodes to the ears. This re-reference was applied to all remaining electrodes.

The data were filtered with a low-pass filter at 30 Hz (24 dB/octave) and a high-pass filter at 0.16 Hz (24 dB/octave). Data segmentation was based on the SOA of each paradigm. For the flashed-stimuli paradigms, segmentation used epochs of 1100 ms with a 100 ms pre-stimulus interval and a 1000 ms post-stimulus interval. For the animated-stimuli paradigms, segmentation used epochs of 1200 ms with a 100 ms pre-stimulus interval and an 1100 ms post-stimulus interval. Segments contaminated with eye blinks or excessive muscular activity were excluded from further analysis, with artefact rejection set at 100 microvolts (μV). Both target and non-target conditions yielded more than 60 segments per condition after artefact rejection. Next, a DC trend correction was performed on each individual segment using the first 100 ms at segment start and the last 100 ms at segment end [56]. A baseline correction was then applied using the 100 ms pre-stimulus interval.
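The original preprocessing was performed in Brain Vision Analyzer 2; as a rough equivalent, the sketch below shows how the filtering, epoching, artefact-rejection and baseline steps could look in Python with NumPy/SciPy. It is a minimal illustration under our own assumptions (a zero-phase Butterworth band-pass rather than the Analyzer’s 24 dB/octave filters; re-referencing and DC trend correction omitted for brevity; array and function names are ours):

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000  # sampling rate (Hz)

def preprocess(eeg, onsets, pre=100, post=1000, reject_uv=100.0):
    """Filter, epoch, reject artefacts and baseline-correct continuous EEG.

    eeg:    (n_channels, n_samples) continuous recording in microvolts
    onsets: stimulus-onset sample indices (assumed >= `pre` samples from start)
    """
    # 0.16-30 Hz band-pass (zero-phase)
    b, a = butter(4, [0.16, 30.0], btype='bandpass', fs=FS)
    eeg = filtfilt(b, a, eeg, axis=-1)

    epochs = []
    for t in onsets:
        seg = eeg[:, t - pre:t + post]
        if np.abs(seg).max() > reject_uv:        # artefact rejection at 100 uV
            continue
        seg = seg - seg[:, :pre].mean(axis=-1, keepdims=True)  # 100 ms baseline
        epochs.append(seg)
    return np.stack(epochs)  # (n_clean_epochs, n_channels, pre + post)
```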

Averages of the target and non-target segments were then calculated and a conventional P300 analysis was performed. For this purpose, the largest positive peak occurring within 250–800 ms that increased in amplitude from frontal to parietal scalp areas was identified as the P300 peak. For the amplitude comparisons we took the non-target waveform amplitudes at the same latency as the P300 peak.
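As a minimal sketch of this peak-picking rule (our own illustration, not the Analyzer routine), the P300 peak of an averaged waveform can be located as the maximum within the 250–800 ms window:

```python
import numpy as np

def p300_peak(avg, fs=1000, pre_ms=100, window_ms=(250, 800)):
    """Return (latency_ms, amplitude) of the largest positive deflection of a
    1-D averaged waveform `avg` that includes `pre_ms` ms of baseline."""
    smp = lambda ms: int(ms * fs / 1000)          # milliseconds -> samples
    lo, hi = smp(pre_ms + window_ms[0]), smp(pre_ms + window_ms[1])
    idx = lo + int(np.argmax(avg[lo:hi]))
    return (idx - smp(pre_ms)) * 1000 / fs, avg[idx]
```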

General statistical analysis was performed with IBM SPSS Statistics 19 (SPSS, Inc.) after verifying normality assumptions, with the significance level set at 0.05. When the normality assumption was met we performed a 3 (area: frontal (F3, Fz, F4), central (C3, Cz, C4), parietal (P3, Pz, P4)) × 3 (location relative to midline: left (F3, C3, P3), midline (Fz, Cz, Pz) and right (F4, C4, P4)) × 2 (stimulus type: non-target, target) repeated measures ANOVA for all paradigms, and a more detailed 3 (area: frontal, central, parietal) × 2 (location relative to midline: left, right) repeated measures ANOVA of the target averaged amplitudes. Post-hoc tests were performed with Bonferroni correction. When the normality assumptions were not met we used Friedman tests and post-hoc Wilcoxon signed-rank tests with Bonferroni correction.
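For readers who prefer a scriptable equivalent of the SPSS analysis, the same 3 × 3 × 2 repeated-measures design can be expressed, for instance, with statsmodels. The sketch assumes a hypothetical long-format table with one mean amplitude per subject, area, laterality and stimulus type (file and column names are ours):

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical long-format table: one row per subject x area x side x stimulus,
# with columns: subject, area (frontal/central/parietal),
# side (left/midline/right), stim (target/non-target), amplitude (uV)
df = pd.read_csv('amplitudes_long.csv')

# 3 (area) x 3 (side) x 2 (stimulus type) repeated-measures ANOVA
res = AnovaRM(df, depvar='amplitude', subject='subject',
              within=['area', 'side', 'stim']).fit()
print(res.anova_table)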

The automatic classification was performed in MathWorks Matlab using the PRTools toolbox [57]. The EEG data were split into segments from 200 ms to 800 ms after stimulus onset. Each epoch was decimated by a factor of 20 and the 16 channels were concatenated, forming the feature vector. Classification of each event as target or non-target was performed with a Support Vector Machine (SVM) classifier with a first-degree polynomial (i.e., linear) kernel [58]. Data were classified using several numbers of averaged ERPs (1, 2, 3, 4, 5, 10 and 15). Classification performance measures for target detection were obtained through leave-one-out 6-fold cross-validation. The metrics used were accuracy, (TP + TN) / N; specificity, TN / (TN + FP); sensitivity, TP / (TP + FN); and balanced accuracy, (SP + SS) / 2, where TP, TN, FP, FN, SP, SS and N are true positives, true negatives, false positives, false negatives, specificity, sensitivity and the total number of events, respectively. The unbalanced nature of the data set (there are four times as many non-target segments as target segments, because of the different occurrence probabilities) makes balanced accuracy the most reliable metric for assessing classifier performance. A sketch of this pipeline is given below.
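The original pipeline used Matlab and PRTools; the sketch below re-expresses it in Python with scikit-learn under the same design choices (decimation by 20, channel concatenation, linear SVM, 6-fold cross-validation). Function and variable names are our own assumptions:

```python
import numpy as np
from scipy.signal import decimate
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix

def balanced_accuracy(epochs, labels):
    """epochs: (n_events, 16, n_samples) array of 200-800 ms post-stimulus
    segments; labels: 1 for target, 0 for non-target."""
    X = decimate(epochs, 20, axis=-1)        # decimate each channel by 20
    X = X.reshape(len(X), -1)                # concatenate channels -> feature vector
    clf = SVC(kernel='linear')               # first-degree polynomial == linear SVM
    pred = cross_val_predict(clf, X, labels, cv=6)
    tn, fp, fn, tp = confusion_matrix(labels, pred).ravel()
    sp, ss = tn / (tn + fp), tp / (tp + fn)  # specificity, sensitivity
    return (sp + ss) / 2                     # balanced accuracy
```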

Permutation tests were performed in order to evaluate the statistical significance of the classification results, following the permutation-based p-value definition presented in [59]:

Permutation-based p-value—Let $D = (X, y)$ be the original data set, where $X$ contains the observations of a series of features and $y$ the class labels associated with each observation, and let $\hat{D} = \{D_1, \ldots, D_k\}$ be a set of $k$ randomized versions of $D$ obtained by permuting the labels, sampled from a given null distribution. The empirical p-value for the function $f$ learned by the classification algorithm is

$$p = \frac{\left|\{D' \in \hat{D} : e(f, D') \le e(f, D)\}\right| + 1}{k + 1},$$

with $e$ being the error function, i.e., the ratio of wrongly classified observations. This p-value represents the fraction of randomized versions on which the classifier performed at least as well on the randomly labelled data as on the original data. If the p-value is small enough the null hypothesis is rejected; in this case the null hypothesis was that the classifier performs at chance level.
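A direct transcription of this definition (our own sketch; scikit-learn’s permutation_test_score implements the same idea) is:

```python
import numpy as np
from sklearn.base import clone
from sklearn.model_selection import cross_val_score

def permutation_p_value(clf, X, y, k=100, seed=0):
    """Empirical p-value of [59]: fraction of label-permuted data sets on which
    the classifier's cross-validated error is at most the original error."""
    rng = np.random.default_rng(seed)
    err = lambda yy: 1 - cross_val_score(clone(clf), X, yy, cv=6).mean()
    e_orig = err(y)
    better = sum(err(rng.permutation(y)) <= e_orig for _ in range(k))
    return (better + 1) / (k + 1)
```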

RESULTS

Statistical analysis revealed a significant main effect of stimulus type in all paradigms. The lateralization analysis revealed significant main effects of location relative to midline only in the target peaks of the animated paradigms, with amplitudes significantly higher at right electrode sites (for control analyses concerning gaze direction see below). Regarding area effects, the amplitudes of the P300 peaks were generally significantly higher at parietal sites in all paradigms. Latency analysis showed that latencies were significantly longer at frontal and central sites compared to parietal sites in the less salient condition (‘Flashed Face—Eye position change’ paradigm). Detailed statistical results are presented next.

The significant main effect of stimulus type that emerged in all conditions can be summarized as follows: ‘Flashed Schematic Eyes’—F(1,16) = 73.2, p < 0.0001; ‘Flashed Face—Eye position change’—F(1,16) = 25.9, p < 0.0001; ‘Flashed Face—Eye and Head position change’—F(1,16) = 25.5, p < 0.0001; ‘Animated 3D body—gaze change in 1 avatar’—F(1,16) = 52.2, p < 0.0001; ‘Animated 4 avatar environment—gaze change in 4 avatars’—F(1,16) = 110.6, p < 0.0001. Accordingly, the P300 waveform amplitudes for target stimuli were significantly higher than the non-target waveform amplitudes in all paradigms (see Table 2).

Table 2. Means (SEM) of the non-target and target peak amplitude (in microvolts) responses for the different paradigms.

https://doi.org/10.1371/journal.pone.0121970.t002

Grand-average responses at parietal sites for the flashing paradigms are shown in Fig. 4.

Fig 4. Target and non-target grand-average ERP waveforms at parietal sites.

Top: ‘Flashed Schematic Eyes’ paradigm. Middle: ‘Flashed Face—Eye position change’. Bottom: ‘Flashed Face—Eye and Head position change’.

https://doi.org/10.1371/journal.pone.0121970.g004

More detailed analysis of the target averaged amplitudes revealed marginal effects of area (F(2, 32) = 3.513, p = 0.073) and location (F(1, 16) = 4.439, p = 0.051) for the ‘Flashed Schematic Eyes’ paradigm. Concerning the ‘Flashed Face—Eye position change’ paradigm, we observed area effects (F(2, 32) = 18.953, p < 0.0001) but no location effects (F(1, 16) = 1.457, p = 0.245). Area effects were also detected for the ‘Flashed Face—Eye and Head position change’ paradigm (F(2, 32) = 16.857, p < 0.0001), but location effects were not (F(1, 16) = 4.965, p = 0.321). The area effects were also significant for the ‘Animated 4 avatar environment—gaze change in 4 avatars’ paradigm (F(2, 32) = 19.350, p < 0.0001), but not for the ‘Animated 3D body—gaze change in 1 avatar’ condition. Still, as visible in Fig. 5 and Fig. 6, a location effect was found in both animated paradigms: ‘Animated 3D body—gaze change in 1 avatar’—F(1, 16) = 20.518, p < 0.0001; ‘Animated 4 avatar environment—gaze change in 4 avatars’—F(1, 16) = 14.549, p = 0.002.

Fig 5. Target and non-target grand-average ERP waveforms for the ‘Animated 3D body—gaze change in 1 avatar’ paradigm.

https://doi.org/10.1371/journal.pone.0121970.g005

Fig 6. Target and non-target grand-average ERP waveforms for the ‘Animated 4 avatar environment—gaze change in 4 avatars’ paradigm.

https://doi.org/10.1371/journal.pone.0121970.g006

Post hoc tests with Bonferroni correction for the ‘Flashed Schematic Eyes’ paradigm revealed that peak amplitudes were larger at parietal sites (7.09 ± 0.82 μV) than at central sites (5.26 ± 0.44 μV, p = 0.038). For ‘Flashed Face—Eye position change’ the amplitudes were higher at parietal sites (8.33 ± 0.93 μV) than at central (3.93 ± 0.40 μV, p < 0.0001) and frontal sites (3.99 ± 0.52 μV, p = 0.004). For ‘Flashed Face—Eye and Head position change’ we observed greater P300 peak amplitudes at parietal sites (9.54 ± 1.22 μV) than at central (4.49 ± 0.62 μV, p < 0.0001) and frontal sites (3.78 ± 0.65 μV, p = 0.003). The analysis showed equivalent hemispheric responses, consistent with the symmetry expected from more conventional P300 paradigms.

For the animated paradigms (‘Animated 3D body—gaze change in 1 avatar’ and ‘Animated 4 avatar environment—gaze change in 4 avatars’) the post hoc tests confirmed an unexpected asymmetry in the amplitude distribution. For the ‘Animated 3D body—gaze change in 1 avatar’ paradigm the P300 peak amplitudes were higher at right sites (6.98 ± 0.65 μV) than at left sites (5.01 ± 0.44 μV, p < 0.0001). Also, for ‘Animated 4 avatar environment—gaze change in 4 avatars’ the amplitudes were higher at right electrode positions (4.54 ± 0.45 μV) than at left sites (3.34 ± 0.36 μV, p = 0.002). Amplitudes were higher at parietal sites (5.41 ± 0.54 μV) than at central (3.65 ± 0.40 μV, p = 0.001) and frontal areas (3.77 ± 0.39 μV, p = 0.001). In sum, the P300 peak amplitudes of the animated paradigms were significantly higher at right sites, suggesting that a new component was superimposed on the P300 signal for these social animated stimuli. Grand averages for these paradigms are shown in Fig. 5 and Fig. 6, respectively.

Concerning latencies, Friedman tests of the mean latencies at the defined areas (frontal—F3, Fz, F4; central—C3, Cz, C4; parietal—P3, Pz, P4) and locations relative to midline (left—F3, C3, P3; right—F4, C4, P4) showed no effects of area or location for the ‘Flashed Schematic Eyes’ paradigm (χ2(2) = 2.471, p = 0.291; χ2(1) = 0.059, p = 0.808).

For ‘Flashed Face—Eye position change’ the area effects were significant (χ2(2) = 8.588, p = 0.014) but location effects were not observed (χ2(1) = 2.882, p = 0.090).

For the ‘Flashed Face—Eye and Head position change’ paradigm neither area effects (χ2(2) = 3.294, p = 0.193) nor location effects (χ2(1) = 0.250, p = 0.617) were significant.

The ‘Animated 3D body—gaze change in 1 avatar’ condition showed significant differences between areas (χ2(2) = 9.882, p = 0.007); location effects were not observed (χ2(1) = 0.059, p = 0.808). We observed area effects in the ‘Animated 4 avatar environment—gaze change in 4 avatars’ paradigm (χ2(2) = 6.706, p = 0.035), but not location effects (χ2(1) = 1.471, p = 0.225).

Post hoc Wilcoxon signed-rank tests with Bonferroni correction were applied, resulting in a significance level of p < 0.017. For ‘Flashed Face—Eye position change’ the post hoc analysis revealed shorter latencies at parietal areas (374.78 ± 107.38 ms) compared to central (505.65 ± 164.42 ms, Z = -2.675, p = 0.007) and frontal (566.96 ± 201.55 ms, Z = -2.438, p = 0.015) sites. For the ‘Animated 4 avatar environment—gaze change in 4 avatars’ and ‘Animated 3D body—gaze change in 1 avatar’ paradigms there were no significant differences among the mean latencies of the defined areas.

Fig. 7 provides an overall summary of the main results reported in this study.

Fig 7. Summary of results.

Top: comparison of target and non-target waveform amplitudes. Bottom: P300 amplitudes were significantly higher in the right hemisphere for realistic animations (‘Animated 3D body—gaze change in 1 avatar’ and ‘Animated 4 avatar environment—gaze change in 4 avatars’). Error bars are the standard error of the mean.

https://doi.org/10.1371/journal.pone.0121970.g007

All participants were able to detect the 10 target events of each block in more than 99% of the cases. We have previously shown that non-detections imply the absence of a P300, matching the absence of behavioural report [60]. Therefore, the potential confound of undetected target events is not present in our experiment.

We also performed control analyses comparing conditions in which avatars gazed either to the left or to the right. Amplitude values for both types of gaze were virtually identical and therefore not significantly different.

Waveform classification

Significant classification performance was found already at the single-trial level (see Fig. 8), using the metrics defined in the Methods (accuracy, specificity, sensitivity and balanced accuracy). Permutation tests for each subject and paradigm yielded p-values below 0.05 in 99.9% of the cases, indicating that the classifier performed reliably above chance. As expected, classification performance improved when increasing the number of averaged EEG single-trial epochs, due to the noise-reduction effect of averaging. This improvement is no longer relevant beyond 5 averaged epochs, probably because of the loss of relevant information in averaging and the decreasing dataset size. Also as expected, classification results were worse, but still significant, for the animated paradigms, due to the complexity of the scene. The results are presented in Fig. 8.

Fig 8. Classification results.

Top: Results for several metrics, averaged over all paradigms, for the different numbers of trials used. Bottom: Comparison of balanced accuracy between paradigms for different numbers of averaged events.

https://doi.org/10.1371/journal.pone.0121970.g008

Comparison of classification results between stimulus conditions revealed a significant main effect (F(4, 80) = 2.483, p = 0.050). Post hoc analysis with Bonferroni correction showed that classification results were significantly better for ‘Flashed Schematic Eyes’ (0.91 ± 0.02) compared to the more complex ‘Animated 4 avatar environment—gaze change in 4 avatars’ condition (0.82 ± 0.19, p = 0.039).

DISCUSSION

The goal of the present study was to identify the neurophysiological correlates of attention to realistic social scenes whose degree of complexity was defined in an ordinal manner (Table 1), taking into account the number of items in the scene (including multipart social objects), the elements defining the trajectory of the social object, and the presence of movement. This was attempted at a single- or near-single-trial level, which would potentially allow pinpointing complex directed-attention responses at the single-event level. We found an oddball response for all conditions and, importantly, we showed that the processing of realistic multi-agent actions (which may or may not be interpreted as social) can be detected in human oddball responses both in averaged responses and at the single-trial level.

We found an oddball response for all the tested paradigms, irrespective of their complexity. Our waveform analysis revealed a distinct P300-like waveform elicited by animated stimuli representing realistic gestures. This signal differs from the classic P300 in being right lateralized, which is not explained by low-level features of the stimuli, given that such lateralization was neither found with the simpler paradigms in our study nor reported in the literature (see the discussion of potential confounds below).

We hypothesize that the right lateralization is due to high-level characteristics introduced by the realism of the animated paradigms, such as the reflexive attention generated by social gaze orientation. This hypothesis is supported by recent fMRI findings [61–63] about the regions involved in gaze orienting. These studies described the influence of the right STS and other regions in the processing of dynamic social attention cues (for a review see [48,64]).

There is solid evidence for hemispheric asymmetries underlying the domain of social perception (for a review see [65]). Our results are consistent with the idea that the neural substrates of the perception of gaze, faces and related gestures are characterized by a general pattern of right-hemispheric functional asymmetry. It has been postulated that such substrates might benefit from other aspects of hemispheric lateralization in affective processing, rather than constituting an isolated specialization for social information. Individual recognition and social judgment are prioritized by the human brain and may benefit from being localized in one hemisphere [65]. Even simple face detection is lateralized to the right hemisphere [66,67], and processing of facial expressions of emotion, which is also relevant for social cognition, is known to be lateralized [68,69].

Concerning social cognition, gaze plays a central role in decoding others’ attention, goals and intentions, supporting the idea of a right-hemispheric bias for eye gaze perception. Hemispheric asymmetries in the neural substrates of gaze perception are present even in domestic chicks [70] and are confirmed by human data [71]; the latter work showed that eye gaze is processed better when presented in the left visual field. In line with this idea, right-hemisphere involvement in social responses seems to be well established across all vertebrates studied so far [72,73].

Biological motion was also a feature of our displays, in particular the more complex animated versions. Such stimuli are of evolutionary importance, since social animals like humans take decisions based on the interpretation of the actions of others. Saygin [74] examined 60 unilateral stroke patients and found no evidence of lateralization for basic biological motion perception. Nevertheless, Pelphrey and colleagues [75] found such a lateralization when studying the perception of naturalistic social movements in a complex context, leading Saygin et al. to infer that the right lateralization of biological motion perception may be explained by the ‘social’ aspects elicited by human motion rather than by body movement per se. The BOLD response to dynamic faces has been shown to be higher than to static faces in the right STS [38], and Puce et al. [44] described higher right STS activation during perception of moving eyes and mouth within a face. This is consistent with our results, since lateralization emerges for the more complex displays. Our data support the idea that social gaze orientation coupled with the ecological nature of our conditions elicited an increased asymmetric temporo-parietal response that adds to the reflexive attentional processes inherent to the ‘oddball’ structure of the paradigm.

Altogether, the available evidence indicates that social orienting relies on asymmetric cortical mechanisms. Future studies should elucidate the nature of the realistic social processing network underlying the identified neurophysiological component. In any case, our results suggest that these signals generalize well to realistic oddball contexts.

One could argue that the P300-like ERP lateralization is related not to the complexity of the presented stimulus but to the direction of gaze in the target stimuli. It should be noted that the viewer and avatar frames of reference (left-right reversed) are distinct and simultaneously present, which makes predictions based on gaze direction not obvious (and any such effect would mask rather than emphasize our results). In any case, our analyses comparing conditions where avatars gaze either to the left or to the right revealed that the amplitude values for both types of gaze were virtually identical. This potential confound is therefore ruled out under the conditions of the experiment.

An important validation of the relevance of these oddball signals was their data-driven identification with high fidelity from few or even single trials. The success of single-trial classification of P300 signals evoked by realistic gestures is important not only for cognitive neuroscience but also for the potential development of clinical applications, including BCIs. Since cognitive processes depend critically on the specific situational context in which a subject is embedded [50], this kind of stimulus can be used to create realistic, structured and efficient models of social interactions that are detectable with excellent temporal resolution. This potentiates the use of BCIs in clinical applications of adaptive social behaviour in normal subjects and in disorders of social cognition such as autism [11,51].

The use of 16 electrodes can be viewed as a potential limitation of this study; on the other hand, the identification of these neural signals with very few electrodes at the single-trial level reinforces the idea that they can serve as markers of attention within complex social events/scenes in BCI applications using as few electrodes as possible. Further work should identify the features that best define this signal and the electrode positions best suited to provide them. This might drastically reduce the number of electrodes needed in clinical applications such as autism, reducing session preparation time, usually one of the major drawbacks of EEG applications in clinical BCI.

To conclude, we have shown that realistic animated oddballs with social content generate a specific response that can be successfully classified even at the single-trial level, and we verified that this response is right lateralized for more complex scenes. We believe that this work paves the way to studying social cognition at the single- or near-single-trial level and opens the door to future forms of cognitive training in disorders of social cognition.

Author Contributions

Conceived and designed the experiments: MCB CA MS. Performed the experiments: CA MS. Analyzed the data: CA MS MCB. Contributed reagents/materials/analysis tools: CA MS. Wrote the paper: CA MS MCB.

REFERENCES

  1. Kaspar K. What Guides Visual Overt Attention under Natural Conditions? Past and Future Research. ISRN Neurosci. 2013;2013: 868491. pmid:24959568
  2. Semrud-Clikeman M, Fine JG, Zhu DC. The role of the right hemisphere for processing of social interactions in normal adults using functional magnetic resonance imaging. Neuropsychobiology. 2011;64: 47–51. pmid:21606658
  3. Stoesz BM, Jakobson LS. Developmental changes in attention to faces and bodies in static and dynamic scenes. Front Psychol. 2014;5: 193. pmid:24639664
  4. Rice K, Moriuchi JM, Jones W, Klin A. Parsing heterogeneity in autism spectrum disorders: visual scanning of dynamic social scenes in school-aged children. J Am Acad Child Adolesc Psychiatry. 2013;51: 1–17.
  5. Pires G, Nunes U, Castelo-Branco M. Comparison of a row-column speller vs. a novel lateral single-character speller: assessment of BCI for severe motor disabled patients. Clin Neurophysiol. 2012;123: 1168–81. pmid:22244868
  6. Pires G, Nunes U, Castelo-Branco M. Statistical spatial filtering for a P300-based BCI: Tests in able-bodied, and patients with cerebral palsy and amyotrophic lateral sclerosis. J Neurosci Methods. 2011;195: 270–281. pmid:21129404
  7. Chai H, Chen WZ, Zhu J, Xu Y, Lou L, Yang T, et al. Processing of facial expressions of emotions in healthy volunteers: an exploration with event-related potentials and personality traits. Neurophysiol Clin. 2012;42: 369–75. pmid:23181967
  8. Campanella S, Gaspard C, Debatisse D, Bruyer R, Crommelinck M, Guerit J-M. Discrimination of emotional facial expressions in a visual oddball task: an ERP study. Biol Psychol. 2002;59: 171–186. pmid:12009560
  9. Fishman I, Ng R, Bellugi U. Do extraverts process social stimuli differently from introverts? Cognitive Neuroscience. 2011. pp. 67–73. https://doi.org/10.1080/17588928.2010.527434
  10. Susac A, Ilmoniemi RJ, Pihko E, Supek S. Neurodynamic studies on emotional and inverted faces in an oddball paradigm. Brain Topogr. 2004;16: 265–268. pmid:15379225
  11. Duncan CC, Barry RJ, Connolly JF, Fischer C, Michie PT, Näätänen R, et al. Event-related potentials in clinical research: guidelines for eliciting, recording, and quantifying mismatch negativity, P300, and N400. Clin Neurophysiol. 2009;120: 1883–908. pmid:19796989
  12. Patel SH, Azzam PN. Characterization of N200 and P300: Selected studies of the Event-Related Potential. International Journal of Medical Sciences. 2005. pp. 147–154. https://doi.org/10.7150/ijms.2.147
  13. Polich J. Updating P300: an integrative theory of P3a and P3b. Clin Neurophysiol. 2007;118: 2128–48. pmid:17573239
  14. Farwell LA, Donchin E. Talking off the top of your head: toward a mental prosthesis utilizing event-related brain potentials. Electroencephalogr Clin Neurophysiol. 1988;70: 510–23. pmid:2461285
  15. Mak JN, Arbel Y, Minett JW, McCane LM, Yuksel B, Ryan D, et al. Optimizing the P300-based brain-computer interface: current status, limitations and future directions. J Neural Eng. 2011;8: 025003. pmid:21436525
  16. Wolpaw J, Wolpaw EW. Brain–Computer Interfaces: Principles and Practice. Oxford University Press; 2012. https://doi.org/10.1093/acprof:oso/9780195388855.001.0001
  17. Kleih SC, Kaufmann T, Zickler C, Halder S, Leotta F, Cincotti F, et al. Out of the frying pan into the fire—the P300-based BCI faces real-world challenges. Prog Brain Res. 2011;194: 27–46. pmid:21867792
  18. Wang PT, King CE, Do AH, Nenadic Z. Pushing the Communication Speed Limit of a Noninvasive BCI Speller. arXiv preprint arXiv:1212.0469. 2012; 1–9. Available: http://arxiv.org/abs/1212.0469
  19. Tangermann M, Müller K-R, Aertsen A, Birbaumer N, Braun C, Brunner C, et al. Review of the BCI Competition IV. Front Neurosci. 2012;6: 55. pmid:22811657
  20. Pires G, Nunes U, Castelo-Branco M. Comparison of a row-column speller vs. a novel lateral single-character speller: Assessment of BCI for severe motor disabled patients. Clin Neurophysiol. 2012;123: 1168–81. pmid:22244868
  21. Zhang Y, Zhao Q, Jin J, Wang X, Cichocki A. A novel BCI based on ERP components sensitive to configural processing of human faces. J Neural Eng. 2012;9: 026018. pmid:22414683
  22. Kaufmann T, Schulz SM, Grünzinger C, Kübler A. Flashing characters with famous faces improves ERP-based brain-computer interface performance. J Neural Eng. 2011;8: 056016. pmid:21934188
  23. Onishi A, Zhang Y. Fast and reliable P300-based BCI with facial images. Proceedings of the 5th International Brain-Computer Interface Conference. 2011. pp. 192–195. Available: http://www.researchgate.net/profile/Andrzej_Cichocki2/publication/228993222_Fast_and_Reliable_P300-Based_BCI_with_Facial_Images/links/0912f5149804de44d2000000.pdf
  24. Conty L, N’Diaye K, Tijus C, George N. When eye creates the contact! ERP evidence for early dissociation between direct and averted gaze motion processing. Neuropsychologia. 2007;45: 3024–37. pmid:17644145
  25. Senju A, Tojo Y, Yaguchi K, Hasegawa T. Deviant gaze processing in children with autism: an ERP study. Neuropsychologia. 2005;43: 1297–306. pmid:15949514
  26. Gunji A, Goto T, Kita Y, Sakuma R, Kokubo N, Koike T, et al. Facial identity recognition in children with autism spectrum disorders revealed by P300 analysis: A preliminary study. Brain Dev. 2013;35: 293–8. pmid:23398956
  27. Bobes MA, Lopera F, Comas LD, Galan L, Carbonell F, Bringas ML, et al. Brain potentials reflect residual face processing in a case of prosopagnosia. Cogn Neuropsychol. 2004;21: 691–718. pmid:21038228
  28. Sachs G, Anderer P, Margreiter N, Semlitsch H, Saletu B, Katschnig H. P300 event-related potentials and cognitive function in social phobia. Psychiatry Res. 2004;131: 249–61. pmid:15465294
  29. Bayliss JD, Ballard DH. Single trial P3 epoch recognition in a virtual environment. Neurocomputing. 2000;32–33: 637–642.
  30. Donnerer M, Steed A. Using a P300 Brain–Computer Interface in an Immersive Virtual Environment. Presence Teleoperators Virtual Environ. 2010;19: 12–24.
  31. Piccione F, Priftis K, Tonin P, Vidale D. Task and stimulation paradigm effects in a P300 brain computer interface exploitable in a virtual environment: a pilot study. PsychNology. 2008;6: 99–108. Available: http://www.psychnology.org/File/PNJ6(1)/PSYCHNOLOGY_JOURNAL_6_1_PICCIONE.pdf
  32. Emery NJ. The eyes have it: The neuroethology, function and evolution of social gaze. Neuroscience and Biobehavioral Reviews. 2000. pp. 581–604. https://doi.org/10.1016/S0149-7634(00)00025-7
  33. Driver J, Davis G, Ricciardelli P, Kidd P, Maxwell E, Baron-Cohen S. Gaze Perception Triggers Reflexive Visuospatial Orienting. Vis Cogn. 1999;6: 509–540.
  34. Friesen CK, Kingstone A. Covert and overt orienting to gaze direction cues and the effects of fixation offset. Neuroreport. 2003;14: 489–493. pmid:12634510
  35. Bonda E, Petrides M, Ostry D, Evans A. Specific involvement of human parietal systems and the amygdala in the perception of biological motion. J Neurosci. 1996;16: 3737–3744. pmid:8642416
  36. Grossman E, Donnelly M, Price R, Pickens D, Morgan V, Neighbor G, et al. Brain areas involved in perception of biological motion. J Cogn Neurosci. 2000;12: 711–20. pmid:11054914
  37. LaBar KS, Crupain MJ, Voyvodic JT, McCarthy G. Dynamic perception of facial affect and identity in the human brain. Cereb Cortex. 2003;13: 1023–1033. pmid:12967919
  38. Schultz J, Pilz KS. Natural facial motion enhances cortical responses to faces. Exp Brain Res. 2009;194: 465–75. pmid:19205678
  39. Pelphrey KA, Morris JP, McCarthy G, LaBar KS. Perception of dynamic changes in facial affect and identity in autism. Soc Cogn Affect Neurosci. 2007;2: 140–149. pmid:18174910
  40. Campbell R, MacSweeney M, Surguladze S, Calvert G, McGuire P, Suckling J, et al. Cortical substrates for the perception of face actions: An fMRI study of the specificity of activation for seen speech and for meaningless lower-face acts (gurning). Cogn Brain Res. 2001;12: 233–243. pmid:11587893
  41. Hall DA, Fussell C, Summerfield AQ. Reading fluent speech from talking faces: typical brain networks and individual differences. J Cogn Neurosci. 2005;17: 939–53. pmid:15969911
  42. Bartels A, Zeki S. Functional brain mapping during free viewing of natural scenes. Hum Brain Mapp. 2004;21: 75–85. pmid:14755595
  43. Hasson U, Nir Y, Levy I, Fuhrmann G, Malach R. Intersubject synchronization of cortical activity during natural vision. Science. 2004;303: 1634–40. pmid:15016991
  44. Puce A, Allison T, Bentin S, Gore JC, McCarthy G. Temporal cortex activation in humans viewing eye and mouth movements. J Neurosci. 1998;18: 2188–2199. pmid:9482803
  45. Allison T, Puce A, McCarthy G. Social perception from visual cues: Role of the STS region. Trends in Cognitive Sciences. 2000. pp. 267–278. https://doi.org/10.1016/S1364-6613(00)01501-1
  46. Hein G, Knight RT. Superior temporal sulcus—It’s my area: or is it? J Cogn Neurosci. 2008;20: 2125–36. pmid:18457502
  47. Haxby JV, Hoffman EA, Gobbini MI. The distributed human neural system for face perception. Trends in Cognitive Sciences. 2000. pp. 223–233. https://doi.org/10.1016/S1364-6613(00)01482-0
  48. Nummenmaa L, Calder AJ. Neural mechanisms of social attention. Trends Cogn Sci. 2009;13: 135–43. pmid:19223221
  49. Lahnakoski JM, Glerean E, Salmi J, Jääskeläinen IP, Sams M, Hari R, et al. Naturalistic fMRI mapping reveals superior temporal sulcus as the hub for the distributed brain network for social perception. Front Hum Neurosci. 2012;6: 233. pmid:22905026
  50. Kingstone A. Taking a real look at social attention. Curr Opin Neurobiol. 2009;19: 52–6. pmid:19481441
  51. Mattout J. Brain-computer interfaces: a neuroscience paradigm of social interaction? A matter of perspective. Front Hum Neurosci. 2012;6: 114. pmid:22675291
  52. Wainer AL, Ingersoll BR. The use of innovative computer technology for teaching social communication to individuals with autism spectrum disorders. Res Autism Spectr Disord. 2011;5: 96–107.
  53. Jones EA, Carr EG, Feeley KM. Multiple effects of joint attention intervention for children with autism. Behav Modif. 2006;30: 782–834. pmid:17050765
  54. Whalen C, Schreibman L, Ingersoll B. The collateral effects of joint attention training on social initiations, positive affect, imitation, and spontaneous speech for young children with autism. J Autism Dev Disord. 2006;36: 655–64. pmid:16810564
  55. Koegel RL, Vernon TW, Koegel LK. Improving social initiations in young children with autism using reinforcers with embedded social interactions. J Autism Dev Disord. 2009;39: 1240–51. pmid:19357942
  56. Hennighausen E, Heil M, Rösler F. A correction method for DC drift artifacts. Electroencephalogr Clin Neurophysiol. 1993;86: 199–204. pmid:7680996
  57. van der Heijden F, Duin RPW, de Ridder D, Tax DMJ. Classification, parameter estimation and state estimation: an engineering approach using Matlab. Chichester: John Wiley & Sons, Ltd; 2004.
  58. Cortes C, Vapnik V. Support-vector networks. Mach Learn. 1995;20: 273–297.
  59. Ojala M, Garriga GC. Permutation Tests for Studying Classifier Performance. 2009 Ninth IEEE International Conference on Data Mining. IEEE; 2009. pp. 908–913. https://doi.org/10.1109/ICDM.2009.108
  60. Teixeira M, Pires G, Raimundo M, Nascimento S, Almeida V, Castelo-Branco M. Robust single trial identification of conscious percepts triggered by sensory events of variable saliency. PLoS One. 2014;9: e86201. pmid:24465957
  61. Mosconi MW, Mack PB, McCarthy G, Pelphrey KA. Taking an “intentional stance” on eye-gaze shifts: a functional neuroimaging study of social perception in children. Neuroimage. 2005;27: 247–52. pmid:16023041
  62. Carlin JD, Rowe JB, Kriegeskorte N, Thompson R, Calder AJ. Direction-sensitive codes for observed head turns in human superior temporal sulcus. Cereb Cortex. 2012;22: 735–44. pmid:21709175
  63. Laube I, Kamphuis S, Dicke PW, Thier P. Cortical processing of head- and eye-gaze cues guiding joint social attention. Neuroimage. 2011;54: 1643–53. pmid:20832481
  64. Carlin JD, Calder AJ. The neural basis of eye gaze processing. Curr Opin Neurobiol. 2013;23: 450–5. pmid:23266245
  65. Brancucci A, Lucci G, Mazzatenta A, Tommasi L. Asymmetries of the human social brain in the visual, auditory and chemical modalities. Philos Trans R Soc Lond B Biol Sci. 2009;364: 895–914. pmid:19064350
  66. Graewe B, Lemos R, Ferreira C, Santana I, Farivar R, De Weerd P, et al. Impaired processing of 3D motion-defined faces in mild cognitive impairment and healthy aging: an fMRI study. Cereb Cortex. 2013;23: 2489–99. pmid:22879351
  67. Rebola J, Castelhano J, Ferreira C, Castelo-Branco M. Functional parcellation of the operculo-insular cortex in perceptual decision making: an fMRI study. Neuropsychologia. 2012;50: 3693–701. pmid:22771785
  68. Demaree HA, Everhart DE, Youngstrom EA, Harrison DW. Brain lateralization of emotional processing: historical roots and a future incorporating “dominance”. Behav Cogn Neurosci Rev. 2005;4: 3–20. pmid:15886400
  69. Narumoto J, Okada T, Sadato N, Fukui K, Yonekura Y. Attention to emotion modulates fMRI activity in human right superior temporal sulcus. Cogn Brain Res. 2001;12: 225–231. pmid:11587892
  70. Rosa Salva O, Regolin L, Vallortigara G. Chicks discriminate human gaze with their right hemisphere. Behav Brain Res. 2007;177: 15–21. pmid:17174412
  71. Ricciardelli P, Ro T, Driver J. A left visual field advantage in perception of gaze direction. Neuropsychologia. 2002;40: 769–777. pmid:11900727
  72. Siniscalchi M. Divided brains: The biology and behaviour of brain asymmetries. Laterality. Cambridge University Press; 2013. pp. 1–3. https://doi.org/10.1080/1357650X.2013.833214 pmid:23387932
  73. Salva O, Regolin L, Mascalzoni E, Vallortigara G. Cerebral and Behavioural Asymmetries in Animal Social Recognition. Comp Cogn Behav Rev. 2012;7: 110–138.
  74. Saygin AP. Superior temporal and premotor brain areas necessary for biological motion perception. Brain. 2007;130: 2452–2461. pmid:17660183
  75. Pelphrey KA, Viola RJ, McCarthy G. When strangers pass: processing of mutual and averted social gaze in the superior temporal sulcus. Psychol Sci. 2004;15: 598–603. pmid:15327630