International Workshop on Facial Expression, Gaze, and Emotion

Program and Abstracts: The 1st day (October 19), The 2nd day (October 20)

Schedule for the 1st day

Place: Shiran Kaikan (芝蘭会館)
Program
10:00-10:15 Eiko Shimojo (Bunkyo Gakuin Univ. Japan) Opening Remarks
10:15-11:15 Claudiu Simion (California Institute of Technology, USA)
11:15-12:15 Shoji Itakura (Kyoto Univ. Japan)

13:30-14:30 Jamin Halberstadt (Univ. of Otago, New Zealand)
14:30-15:30 Sakiko Yoshikawa (Kyoto Univ. Japan)
15:45-16:45 David Perrett (Univ. of St. Andrews, UK)
16:45-17:15 Shinsuke Shimojo (California Institute of Technology, USA) Closing Remarks

Organizers: Eiko Shimojo (Bunkyo Gakuin Univ. Japan)
Sakiko Yoshikawa (Kyoto Univ. Japan)
Shinsuke Shimojo (California Institute of Technology, USA)
Titles and Abstracts of the Speakers
Seeing and liking: interaction between cognition and gaze in preference decisions.
Claudiu Simion (California Institute of Technology, USA)
We have examined the intricate relationship between bodily orienting responses and attractiveness judgments. Our early studies revealed a bias in gaze direction when participants compared two faces for attractiveness. The faces in a pair were always matched for baseline attractiveness to make the task difficult. Participants were more likely to gaze at the face they eventually chose, with the bias gradually increasing from chance level (50%) to 84% just prior to the decision, in what we called a "gaze cascade effect". Thus, a cognitive causal pathway (preference for an object draws gaze) and a perceptual-motor pathway (exposure to a stimulus renders it more attractive) seem to coexist and form a positive feedback loop, which is necessary to make a conscious preference decision. We further examined the robustness and generality of the cascade effect. We show that although the effect is present in alternative tasks (judging the roundness or unattractiveness of faces), it is of significantly lower magnitude. We also found the effect with abstract shapes (figures generated from Fourier descriptors), for judgments of both attractiveness and complexity. In all conditions described, the size of the effect was proportional to the difficulty of the task and enhanced when the task involved stimulus attractiveness. We propose a model in which the gaze bias is a necessary part of the preference/selection mechanism.
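[Editor's note: to give a concrete feel for the feedback-loop model, the sketch below is a minimal toy simulation, not the authors' model or data; all parameter values and function names are illustrative assumptions. Gaze is drawn probabilistically toward the currently preferred face, each glance nudges preference for the gazed-at face upward, and a decision is reached once the preference gap crosses a threshold. Under these assumptions, the probability of gazing at the to-be-chosen face rises from near 50% toward a high value just before the decision, qualitatively reproducing the cascade.]

```python
import math
import random

def simulate_trial(gain=0.02, noise=0.5, threshold=0.5, max_steps=2000):
    """One two-alternative preference trial under a toy gaze-preference loop.

    Returns the sequence of gaze targets (0 or 1) and the chosen face.
    All parameter values are illustrative, not fitted to the reported data.
    """
    pref = [0.0, 0.0]          # latent preferences, matched at baseline
    gaze_history = []
    for _ in range(max_steps):
        # Cognitive pathway: current preference biases where gaze goes.
        p_gaze_0 = 1.0 / (1.0 + math.exp(-(pref[0] - pref[1]) / 0.1))
        target = 0 if random.random() < p_gaze_0 else 1
        gaze_history.append(target)
        # Perceptual-motor pathway: exposure makes the gazed-at face more liked.
        pref[target] += gain * (1.0 + random.gauss(0.0, noise))
        if abs(pref[0] - pref[1]) > threshold:   # conscious decision reached
            break
    chosen = 0 if pref[0] > pref[1] else 1
    return gaze_history, chosen

def cascade_curve(n_trials=2000, window=50):
    """Fraction of gaze on the eventually chosen face, time-locked to decision."""
    hits = [0] * window
    totals = [0] * window
    for _ in range(n_trials):
        gaze, chosen = simulate_trial()
        tail = gaze[-window:]
        offset = window - len(tail)
        for i, g in enumerate(tail):
            totals[offset + i] += 1
            hits[offset + i] += (g == chosen)
    return [h / t if t else float("nan") for h, t in zip(hits, totals)]

if __name__ == "__main__":
    curve = cascade_curve()
    # The bias climbs from ~0.5 far from the decision toward a high value near it.
    print("gaze bias at -50, -25, -1 steps:",
          [round(curve[i], 2) for i in (0, 25, -1)])
```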
Understanding others' gaze and mind by human infants and nonhuman primates
Shoji Itakura (Kyoto University)

In recent studies, developmental psychologists and primatologists have focused on human infants' ability to follow gaze and establish joint visual attention, and have demonstrated that infants possess sophisticated skills for these tasks. In my presentation, I will talk about gaze-following behavior and the understanding of the seeing-knowing relationship in human infants and nonhuman primates, as follows: 1) Human infants begin gaze-following behavior quite early, and establishing joint attention with their mother has great developmental importance. 2) Nonhuman primates, especially apes, show gaze following even with a human experimenter. Chimpanzees have demonstrated an understanding of the seeing-knowing relationship in a socially competitive situation: a submissive individual approached food only when the food could not be seen by a dominant individual. Such behavior was interpreted as showing that the chimpanzee knew what the dominant chimpanzee did and did not see. 3) Young human children attributed a false belief even to a humanoid robot in a standardized False Belief Task. However, they did not seem to attribute mental verbs, such as "think" or "consider", to the robot.
Effects of explanation on memory for emotional expressions.
Jamin Halberstadt (University of Otago)

People like to talk about and explain the emotions of others, and different explanations of the same ambiguous emotional expression will motivate and justify different responses to it. However, results of several studies in my laboratory show that people's explanations can also bias their memory for the emotions they are explaining. Experimental participants viewed faces expressing emotion blends (e.g., of anger and sadness) and explained why each target person was expressing one of the two blended emotions. Later, participants attempted to identify the facial expressions in computer movies in which the previously seen faces changed continuously from anger to sadness. Faces explained in terms of anger were remembered as angrier than the same faces explained in terms of sadness. I propose a model, supported by new data, in which explanation decomposes the configural face representation, leaving facial features vulnerable to reintegration with the emotion concepts available in the explanation itself.
Anger face advantage is not the whole story: Interaction between emotional facial expression and face/gaze direction
Sakiko Yoshikawa (Kyoto University)

An angry face feels much more threatening when the gaze is directed toward the observer than when it is averted. Human beings seem to possess a psychological mechanism by which 'social messages' (e.g., "I am angry at you, not him") are extracted from another person's face by processing the combination of facial expression and face/gaze direction. Our behavioral studies showed that interactive processing of emotional facial expression and face/gaze direction occurs. While briefly presented angry faces were perceived more accurately than faces showing other emotions, accuracy for angry faces was higher still when the face/gaze was directed toward the observer. In a rating task in which subjects assessed the intensity of their own emotions while looking at angry faces, subjects registered higher fear scores when the gaze was directed at them than when it was averted. Consistent with this behavioral evidence, an fMRI study demonstrated that observing angry faces directed toward the observer produced higher activation in the amygdala than did the other conditions. This result suggests that threatening expressions evoke stronger emotional activation when the face/gaze is directed toward the observer. Based on these findings, I will discuss the psychological mechanisms that participate in processing biologically relevant stimuli such as angry faces.
Interpreting a person's face when we think that person is communicating with us.
D.I. Perrett, B. Wicker, J. O'Doherty, M. Burt, & E. Frigerrio (University of St. Andrews, UK)

Facial signals interact to specify an observed person's intention to interact positively or negatively with us. Studies of facial expression that examine only faces making eye contact confound the ability to understand emotional states (independent of where they are directed) with the ability to understand who is the recipient of the expressed emotions. This distinction may have implications for clinical populations with difficulties in social interactions. Frigerrio et al. found that alcoholic subjects were inclined to interpret negative emotions as representing hostility when the emotions were expressed toward the observer. Functional imaging (Wicker et al., Neuropsychologia, in press) reveals brain regions whose activity depends on the interpretation of emotion and the direction of facial expression. The temporal pole was activated when subjects judged emotion and were the recipients of expressed hostility and friendship. We find it particularly pleasurable when an attractive person smiles and makes eye contact with us, since these signals indicate an intention to interact positively. This experience is reflected in brain systems evaluating reward (ventral thalamus and striatum: Kampe et al., 2001, Nature 413:589; medial orbito-frontal cortex: O'Doherty et al., Neuropsychologia, in press). Activity in these systems depends on facial attractiveness but is also enhanced by eye contact and a smiling expression. Thus, behavioural and imaging studies demonstrate the importance of considering gaze direction when interpreting emotions and intentions.



Schedule for the 2nd day

Place: Faculty of Education Building, Kyoto University
Program
10:00 a.m.-
Hiroyuki Sasaki (Tohoku University)
Effects of gaze perception on response to location and feature.
Ruth S. Nagayama (Hiroshima Prefectural College of Health Sciences)
Body direction influences judgments of face/eye direction
Takashi Okada (Kyoto University)
Gaze-triggered reflexive attentional orientation in individuals with autism
Wataru Sato (Kyoto University)
Neuro-cognitive system underlying the perception of dynamic facial expressions
Eiko Shimojo (Bunkyo Gakuin University)
Emotional Priming by Recognition of Facial Expressions.
Miyuki Kamachi (Advanced Telecommunications Research Institute)
Can we predict faces from voices, and vice versa?
Jamin Halberstadt (University of Otago)
Averageness and attractiveness
Shinsuke Shimojo (California Institute of Technology & NTT Communication Science Laboratories)
Bias in Preference Induced by Gaze Manipulation
Titles and Abstracts of the Speakers
Sasaki, H., Ishi, H., & Gyoba, J. (Tohoku University)
Effects of gaze perception on response to location and feature.

Some psychological studies have shown that gaze direction triggers reflexive shifts of attention. The cortical region that responds to gaze direction has connections with both of the visual pathways responsible for spatial orienting and for feature analysis. The purpose of the present study was to examine the effect of attention triggered by gaze on these two visual functions. Participants engaged in both location and orientation discrimination tasks with uninformative gaze cues. Gaze direction facilitated performance in the location discrimination task, but not in the orientation discrimination task. These results suggest a specific contribution of gaze perception to the information processing involved in spatial orienting. Moreover, further analysis of the data revealed that cue compatibility is retained in an implicit short-term memory.
Ruth S. Nagayama (Hiroshima Prefectural College of Health Sciences) & Jun'ichiro Seyama (Tokyo University)
Body direction influences judgments of face/eye direction

In our psychophysical experiments, we presented subjects with human figures whose eyes, head, and torso were visible. The eyes, the head, and the torso were each directed either toward the subject or to the right. Thus, the combinations of the two directions for the three body components (eyes, head, and torso) yielded eight poses (2 x 2 x 2). In one series of experiments, subjects judged as quickly as possible whether the eyes were directed to the front or to the right, and the RTs for the judgments were measured. Subjects were instructed to ignore the directions of the head and the torso. Nevertheless, the directions of the head and the torso affected the judgments of eye direction. A Stroop-type interference account (Langton, 2000, Quart. J. Exp. Psych.) predicts that the shortest RTs should be obtained for the poses in which the directions of the eyes, the head, and the torso are identical. However, this prediction was not confirmed: subjects' judgments were much faster for the 'twisted' poses. Recently, we have started an fMRI study in which subjects observe the human figures described above. Do the congruent poses and the twisted poses produce different brain activity? We will present preliminary fMRI data and analyses.
Takashi Okada, Wataru Sato, and Toshiya Murai (Kyoto University)
Gaze-triggered reflexive attentional orientation in individuals with autism

Defective joint attention behavior is one of the most prominent social disabilities in individuals with autism. Few studies, however, have carefully examined this impairment, so its biological basis remains unclear. On the other hand, recent psychological studies have identified a rapid and automatic component of joint attention in healthy adults. In this preliminary study, we examined whether gaze-triggered reflexive attentional orientation is impaired in autism. Experiment 1: The subjects were three adults with autism who lacked apparent joint attention behaviors in interpersonal situations and failed false belief tasks. Targets were presented either to the left or to the right of a photograph of a gazing face. The subjects were instructed to localize the targets by pressing keys as quickly and accurately as possible, regardless of the direction of the gaze. Gazes and targets were presented with a stimulus onset asynchrony (SOA) of 300 or 700 msec. Mean reaction times were shorter in the valid condition (targets presented in the gazed-at direction) than in the invalid condition for both SOAs (p < 0.05). Experiment 2: Individuals with autism (three males and one female) and individuals with mental retardation (four males) were examined on the same gaze-cuing localization task as in Experiment 1, with SOAs of 100, 300, 500, 700, 900, or 1100 msec. Mean reaction times in the valid condition were shorter than those in the invalid condition at the 100 and 300 msec SOAs in both groups (all ps < 0.05). These results suggest that individuals with autism, even those lacking joint attention behaviors in interpersonal situations, respond reflexively to another person's gaze direction just as non-autistic individuals do. Recent neuropsychological and neuroimaging studies have indicated that a neural circuit including the superior temporal sulcus, amygdala, and intraparietal sulcus is responsible for this process. Our current results suggest that this neural circuit is intact in individuals with autism.
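[Editor's note: for readers unfamiliar with gaze-cuing paradigms, the following is a minimal sketch of the trial structure and analysis described above; the function names and trials-per-cell count are our own illustrative choices, not the authors' code. The reported finding corresponds to the "valid" cells showing shorter mean RTs than the "invalid" cells at the 100 and 300 msec SOAs.]

```python
import random
from statistics import mean

SOAS_MS = [100, 300, 500, 700, 900, 1100]   # the SOAs used in Experiment 2

def make_trials(n_per_cell=10):
    """Fully crossed design: gaze-cue direction x target side x SOA."""
    trials = []
    for soa in SOAS_MS:
        for gaze in ("left", "right"):
            for target in ("left", "right"):
                for _ in range(n_per_cell):
                    trials.append({"soa": soa, "gaze": gaze, "target": target})
    random.shuffle(trials)
    return trials

def validity(trial):
    """A trial is 'valid' when the target appears on the gazed-at side."""
    return "valid" if trial["gaze"] == trial["target"] else "invalid"

def mean_rt_by_condition(trials, rts_ms):
    """Group measured reaction times (one per trial, in ms) by SOA and validity."""
    cells = {}
    for trial, rt in zip(trials, rts_ms):
        cells.setdefault((trial["soa"], validity(trial)), []).append(rt)
    return {key: mean(v) for key, v in sorted(cells.items())}
```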
Wataru Sato and Sakiko Yoshikawa (Kyoto University)
Human brain areas involved in the analysis of dynamic facial expressions of emotion

Dynamic facial expressions of emotion are natural and powerful media of emotional communication between individuals. However, little is known about the specific neural substrate underlying the processing of dynamic facial expressions of emotion. We investigated the brain areas involved using functional magnetic resonance imaging (fMRI) with twenty right-handed healthy subjects. The facial expressions were dynamically morphed from neutral faces to fearful or happy faces. Two types of control stimuli were presented: (i) dynamic mosaic images, which provided dynamic information with no facial features; and (ii) static facial expressions, which provided sustained emotional (fearful or happy) expressions. Subjects passively viewed these stimuli. Dynamic facial expressions elicited stronger emotional experiences of fear and happiness than static expressions. When dynamic faces were compared with mosaics and static faces, a broad bilateral region of the occipital and temporal cortices, the right premotor cortex, and the intraparietal sulcus were activated irrespective of emotion type. Observation of dynamic expressions of fear, but not of happiness, activated the right amygdala more strongly than either control condition. These results suggest that dynamic facial expressions evoke stronger emotional impressions by enhancing brain activity. The activity in motor areas during observation of dynamic facial changes may reflect the human function of facial mimicry in communication.
Eiko Shimojo (Bunkyo Gakuin University)
Emotional Priming by Recognition of Facial Expressions.

Emotional states are said to be affected by experiencing or role-playing emotionally activating materials. However, the majority of evidence so far has been obtained with verbal materials. Could we demonstrate the same type of emotional modulation merely by having observers view emotionally biased facial expressions? Considering the allegedly nonverbal, autonomous effects of emotional contagion, this seems likely. Here, we show such evidence by manipulating exposure to emotional facial expressions (happy/sad/neutral) and comparing measures of the participants' emotional states before and after exposure. We used a set of adjective questionnaires (adapted from the MCCL) to measure the participants' baseline emotional states one month before the emotional manipulation stage. In that stage, the participants (N=120) were asked to perform emotionally neutral dummy tasks (such as judging the size of the mouth or eyes, or the roundness of the face). There were three conditions with regard to the ratio of emotional expressions: (1) mainly happy (80% happy and 20% sad faces), (2) mainly sad (80% sad and 20% happy faces), and (3) neutral (80% neutral and 20% happy and sad faces). The emotional priming effects were highly significant in both directions: positive mood was enhanced after exposure to mainly happy faces and reduced after exposure to mainly sad faces (p<0.001). Negative mood was modulated in a mirror-symmetrical fashion, as expected, and the overall effect was somewhat weaker yet significant (p<0.01). Thus, exposure to facial expressions strongly modulates the observer's own mood.
Miyuki Kamachi (ATR HIS), Harold Hill (ATR HIS), Karen Lander (Univ. of Manchester) & Eric V-Bateson (ATR HIS)
Can we predict faces from voices, and vice versa?

We explored whether there is common audio and visual information for speaker identification. As our basic scheme, we used XAB tasks in which a face (or a voice) speaking a short sentence was learned as X, and people then chose between two voices (or faces) at test. The sentences used at learning and at test were similar but not identical. Experiment 1 showed that performance was significantly better than chance for both face and voice learning. In Experiment 2, however, performance dropped to chance when the stimuli were presented backwards, suggesting that the critical information is spatiotemporal and direction specific. In Experiment 3, we used sinewave speech to limit the information available in the auditory signal. People were still able to match this to a previously seen face, again consistent with the importance of coarse-grained spatiotemporal information for this task. In this experiment, people were at chance when going from face to voice, suggesting that it is difficult to encode identity-specific information from sinewave speech, although such information can be recovered from it.
Jamin Halberstadt (University of Otago) & Gillian Rhodes (University of Western Australia)
Preference for prototypes and its implications for an evolutionary account of facial attractiveness.

People find average faces to be attractive, a phenomenon sometimes attributed to an evolved psychological mechanism to identify high quality mates. We challenge this simple account on the basis of a series of studies in which participants judged the averageness, attractiveness, and familiarity of a wide variety of positive and negative, natural and artificial categories. Almost every category revealed a strong averageness-attractiveness relationship, which was fully explained by subjective familiarity only in the case of artificial categories. We speculate that at least two mechanisms contribute to the attractiveness of average exemplars: a general preference for familiar stimuli, and, for natural stimuli only, a preference for averageness per se, possibly signaling genetic quality in living organisms.
Shinsuke Shimojo 1),2) & Claudiu Simion 1)
1) California Institute of Technology, 2) NTT Communication Science Laboratories
Bias in Preference Induced by Gaze Manipulation.

We have shown elsewhere that gaze shifts toward the face (or geometric shape) that will be chosen as more attractive in a two-alternative forced-choice preference judgment (the "gaze cascade" effect). We interpreted this finding as indicating bidirectional interactions between orienting and the facilitation of visual memory traces, which provide a basis for conscious liking. To obtain further positive evidence, we attempted to influence observers' preference decisions by introducing a bias in prior gaze direction and in which face was exposed. As a result: (a) we obtained up to 60% choice of the longer-gazed face; (b) repeated cycles of biased gaze/exposure strengthened the effect; and (c) control experiments indicated that this could not be attributed to the classical mere exposure effect. Thus, the brain relies on bodily orienting responses to make a feeling felt.
