From Berlin to Shanghai, it is everywhere the same: people are more successful when they can read others. Micro expressions are split-second facial expressions, and almost everyone shows them. With the app 'Micro Expression Training' you can train yourself to read and judge these instinctive expressions, gaining deep insight into the real feelings of the person opposite you. 'Micro Expression Training' is the first complete training tool for iPhone/iPad, covering all 7 fundamental emotions with 15 different faces. What micro expressions are good for: micro expressions flash for only 40 to 200 milliseconds.
Almost immediately after the true emotion is shown, the face changes to a conscious facial expression. The benefit of recognizing these expressions in a split second is the ability to react more empathetically. Wouldn't it be awesome to have the right instinct in every social situation? Anybody who can detect real emotions will not ask the boss for a raise while he or she is in a grumpy mood. The body can't lie.
Professional lie experts at the FBI have long used non-verbal signals to solve their cases. Learn to read micro expressions: the app 'Micro Expression Training' trains you to read other people's unconscious facial expressions. Fifteen people flash an emotion for a brief moment.
Your task is to recognize and judge these expressions. The emotion library guides the way and teaches you what to look for. Watch yourself get better at reading people's true emotions in a snap.

Full Specifications
- Release Date: September 14, 2011
- Date Added: September 15, 2011
- Version: 1.0
- Operating Systems: iOS
- Additional Requirements: Compatible with iPhone, iPod Touch, and iPad; requires iOS 3.1 or later; iTunes account required
- File Size: 13.13MB
- File Name: External File
- Total Downloads: 493
- Downloads Last Week: 1
- License Model: Purchase
- Limitations: Not available
- Price: $4.99
Abstract

Micro-expressions are often embedded in a flow of expressions including both neutral and other facial expressions. However, it remains unclear whether the types of facial expressions appearing before and after the micro-expression, i.e., the emotional context, influence micro-expression recognition. To address this question, the present study used a modified METT (Micro-Expression Training Tool) paradigm that required participants to recognize target micro-expressions presented briefly between two identical emotional faces.
The results of Experiments 1 and 2 showed that negative context impaired the recognition of micro-expressions regardless of the duration of the target micro-expression. Experiment 3 then accounted for the stimulus difference between the context and the target micro-expression; the results showed that a context effect on micro-expression recognition persisted even when the stimulus similarity between the context and target micro-expressions was controlled.
Therefore, our results not only provide evidence for a context effect on micro-expression recognition but also suggest that the context effect might result from both stimulus and valence differences.

Citation: Zhang M, Fu Q, Chen Y-H, Fu X (2014) Emotional Context Influences Micro-Expression Recognition. PLoS ONE 9(4): e95018.
Editor: Philip Allen, University of Akron, United States of America
Received: November 9, 2013; Accepted: March 21, 2014; Published: April 15, 2014
Copyright: © 2014 Zhang et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Funding: This research was supported in part by grants from the National Basic Research Program of China (2011CB302201) and the National Natural Science Foundation of China (61375009). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: The authors have declared that no competing interests exist.

Introduction

Micro-expressions are extremely quick facial expressions that usually last for 1/25 s to 1/5 s. Like ordinary facial expressions, micro-expressions can convey basic emotions, such as anger and fear.
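The duration range above can be made concrete by converting it to milliseconds and to whole display frames. This is an illustrative sketch; the 100 Hz refresh rate matches the CRT monitor described later in the Methods, and other rates are only examples.

```python
# The reported micro-expression range, 1/25 s to 1/5 s, expressed in
# milliseconds and in whole display frames at a given refresh rate.

def frames_spanned(duration_s, refresh_hz):
    """Number of refresh frames covered by a stimulus of duration_s seconds."""
    return round(duration_s * refresh_hz)

shortest_ms = 1000 / 25   # 40.0 ms
longest_ms = 1000 / 5     # 200.0 ms

print(shortest_ms, longest_ms)        # 40.0 200.0
print(frames_spanned(1 / 25, 100))    # 4 frames at 100 Hz
print(frames_spanned(1 / 5, 100))     # 20 frames at 100 Hz
```

At 100 Hz one frame lasts 10 ms, so even the shortest micro-expression spans several frames, which is what makes frame-accurate presentation feasible.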
Normally, a micro-expression is embedded in the flow of expressions and occurs when people try to conceal or repress their emotions. Previous research suggests that micro-expressions are important cues for revealing true feelings and detecting deceptive behaviors. However, people usually have difficulties detecting or recognizing micro-expressions. Synthesized micro-expressions refer to artificially created micro-expressions in which an emotional expression is inserted between two neutral expressions (See ).
Synthesized micro-expressions are commonly used both in micro-expression recognition research and as training materials, such as those in Ekman's micro-expression training tool (METT), which aims to improve people's ability to recognize such expressions. Synthesized micro-expressions have also been employed to investigate the characteristics of, and influencing factors in, micro-expression recognition. For example, Shen et al. (2012), using the neutral-emotional-neutral paradigm, found that recognition accuracy gradually increased as presentation duration lengthened within 200 ms. Although previous studies have placed neutral expressions before and after the emotional expression, research has indicated that micro-expressions may be embedded not only in neutral expressions but also in other facial expressions, such as happiness and sadness.
To date, it remains unknown whether the recognition of micro-expressions is influenced by the types of facial expressions appearing before and after the micro-expression, i.e., the emotional context.

[Figure: a disgust micro-expression occurring between facial expressions, created according to the METT.]

Previous studies have demonstrated that the current context influences recognition of facial expressions. For example, negative facial expressions are recognized more quickly and accurately in a negative context than in a positive context. Moreover, it has also been shown that emotional valence information appearing before a facial expression influences facial expression recognition. Some affective priming studies have found that primes play different roles in the recognition of different types of facial expressions. For example, anger expressions are recognized more quickly and accurately when the prime is an angry face than when it is a happy face.
It has also been observed that happy faces are recognized more accurately after positive primes than after negative ones, whereas sad expressions are recognized more accurately after negative primes. The effects of emotional context on facial expression recognition have also been confirmed in numerous cognitive neuroscience studies (see, for a detailed overview). In priming tasks, the prime is often presented for a short duration and the target for a long duration, whereas the reverse holds in synthesized micro-expression tasks. Both priming and synthesized micro-expression tasks, however, involve the processing of a preceding emotional stimulus and a target stimulus.
Primes presented for long durations may produce a greater priming effect; moreover, emotional information has been observed to influence attention, according to emotion regulation theory. These findings led us to predict that micro-expression recognition would be influenced by emotional context. The purpose of the current study was to investigate whether emotional context plays a role in micro-expression recognition.
To achieve this goal, we extended the METT design to determine whether emotional context influences micro-expression recognition. Furthermore, we assessed the underlying factors (i.e., valence differences and stimulus differences) that may give rise to the context effect on micro-expression recognition.
Experiment 1

Experiment 1 was designed to investigate whether there is an effect of emotional context on micro-expression recognition. We chose the facial expressions of anger, disgust, happiness, fear, and surprise as the five target micro-expressions and sad, neutral, and happy faces as the three context expressions. On the basis of previous findings of context influence on facial expression recognition and affective priming, we hypothesized that recognition accuracy for the target micro-expressions would be influenced by the context.
One hundred and twenty-eight images of 16 models with basic facial expressions of anger, disgust, fear, happiness with mouth closed, happiness with mouth opened, sadness, surprise, and neutral were chosen from the MUG face database. Taking into account the potential difficulty people have in judging the facial expressions of cultural outgroup members, we first ran an expression recognition test to select the most easily recognizable facial expressions. Thirty subjects voluntarily took part in the test, during which each expression was presented for 2000 ms and they were asked to report the expression in each image. To select target expressions, subjects chose one of five emotion terms (anger, disgust, fear, happiness, and surprise) in the first session of the test. To select context expressions, subjects chose one of three emotion terms (sadness, happiness, and neutral) in the second session. Using a criterion of mean accuracy above 80%, 48 images of 6 models (3 females, 3 males) were selected as the experimental materials. Among them, the images of happiness with mouth closed, sadness, and neutral expressions were used as the positive, negative, and neutral contexts, respectively.
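The selection step can be sketched as a simple accuracy filter: keep only images whose mean recognition accuracy across raters exceeds the criterion. The function and data below are hypothetical illustrations, not the authors' scripts.

```python
# Hypothetical sketch of the stimulus-selection step: keep images whose mean
# recognition accuracy across raters exceeds a criterion (80% in Experiment 1).
# `ratings` maps an image id to the 0/1 correctness scores of the 30 raters.

def select_images(ratings, criterion=0.80):
    selected = []
    for image_id, scores in ratings.items():
        mean_accuracy = sum(scores) / len(scores)
        if mean_accuracy > criterion:
            selected.append(image_id)
    return selected

ratings = {
    "model1_anger": [1] * 27 + [0] * 3,   # 90% correct -> kept
    "model1_fear":  [1] * 21 + [0] * 9,   # 70% correct -> dropped
}
print(select_images(ratings))  # ['model1_anger']
```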
The remaining images (anger, disgust, fear, happiness with mouth opened, and surprise) were used as the target micro-expressions. All of the images were displayed on a uniform silver-gray background in the center of the screen with a visual angle of 13.4°×13.4°. Stimuli were presented at the center of a 17-inch cathode-ray tube (CRT) monitor (frequency 100 Hz, resolution 1024×768) with the E-Prime 2.0 software package. On each trial, a black fixation cross was first presented for 500 ms in the center of the screen, followed by either a happy (with mouth closed), sad, or neutral expression context for 2000 ms.
Then, one of the five target micro-expressions (anger, disgust, fear, surprise, or happiness with mouth opened) was presented for 200 ms, after which the same context was presented again for 2000 ms. Finally, the labels of the five target expressions (anger, happiness, fear, disgust, and surprise) were presented, and participants were asked to indicate the fleeting expression by clicking one of the five labels with the mouse. The locations of the five labels were randomized on each trial.
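The trial sequence just described, and the factorial trial list it is drawn from, can be written out as a minimal sketch. The experiment itself was run in E-Prime; the Python below is only an illustration.

```python
import itertools

# Experiment 1 trial structure: the timed segments before the response screen,
# and the full crossing of 3 contexts x 5 targets x 6 models = 90 unique
# trials, repeated 3 times for 270 trials in total. The response screen stays
# up until the participant clicks, so it has no fixed duration here.

TRIAL_SEGMENTS = [
    ("fixation cross", 500),
    ("context expression", 2000),
    ("target micro-expression", 200),
    ("context expression", 2000),
]
total_before_response = sum(d for _, d in TRIAL_SEGMENTS)
print(total_before_response)  # 4700 ms from fixation onset to response screen

contexts = ["happy", "sad", "neutral"]
targets = ["anger", "disgust", "fear", "happiness", "surprise"]
models = [f"model{i}" for i in range(1, 7)]

unique_trials = list(itertools.product(contexts, targets, models))
full_list = unique_trials * 3
print(len(unique_trials), len(full_list))  # 90 270
```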
There were three types of contexts for each of the five target expressions of the 6 models, for a total of 90 different trials. The 90 different trials were repeated three times, for a total of 270 trials. All trials were presented in random order, with at least a 1-minute break after each block of 90 trials.

Results and Discussion

The accuracy rates for each target micro-expression in the three contexts are shown below. To examine whether there was an effect of emotional context on micro-expression recognition, a two-way repeated-measures ANOVA with context and target micro-expression as within-subject variables was conducted. There was a significant effect of context, F(2, 58) = 9.18, p < .05.
The mean accuracy rates for each target micro-expression with different contexts in Experiment 1.

The results demonstrated an effect of emotional context on micro-expression recognition: recognition of the target micro-expression was overall lower with the negative (sad) context than with the neutral or positive (happy) contexts, and the effect appeared stronger for anger than for the other expressions. The lower recognition of target micro-expressions in the negative context might have arisen because three of the target micro-expressions were negative, amounting to 60% of the targets, whereas there was only one positive and one neutral target micro-expression, each amounting to only 20% of the total. To examine this possibility, we chose only anger, happiness, and neutral as the target micro-expressions in Experiment 2.
Experiment 2

To explore whether the context effect was limited to the specific materials, the stimuli used in Experiment 2 were selected from the NimStim database instead of the MUG face database. To correct the imbalance between negative and positive expressions, only three expressions (anger, happiness, and neutral, all with mouth opened) were selected as the target micro-expressions, and images of the same three expressions (all with mouth closed) were selected as the contexts. In addition, to investigate whether the context effect was influenced by the duration of the micro-expression, the presentation durations were reduced to 40 ms, 60 ms, and 80 ms on the basis of previous research. We predicted, on the basis of previous findings, that the effect of emotional context would still be observed.
Moreover, this effect might be modulated by the duration of the target micro-expressions. Two hundred and ten images of 35 models (16 females, 19 males) with three types of basic facial expressions (mouth closed: anger, neutral, and happiness; mouth opened: anger, neutral, and happiness) were chosen from the NimStim database. To ensure that most Chinese subjects could recognize the facial expressions, we first ran an expression recognition test to select the most easily recognizable facial expressions. Thirty subjects voluntarily took part in the test, during which each expression was presented for 2000 ms and they were asked to report the expression in each image. To select target and context expressions, subjects responded by clicking one of the emotion labels (happiness, anger, or neutral) with the mouse. One hundred and twenty images of 20 models (10 females, 10 males) were selected as the experimental materials, with a criterion of mean accuracy above 85%.
The closed-mouth versions of the anger, happiness, and neutral expressions were used as contexts, and the remaining open-mouthed images (anger, happiness, and neutral) were used as the target micro-expressions. All images were displayed on a uniform silver-gray background in the center of the screen with a visual angle of 11.8°×15.1°. The experimental procedures in Experiment 2 were identical to those in Experiment 1, except that the target micro-expression was presented for 40 ms, 60 ms, or 80 ms in each trial and participants were asked to indicate the fleeting expression by clicking one of three labels (anger, happiness, or neutral). There were three types of contexts for each of the three target expressions of the 20 models, for a total of 180 different trials. Each participant was tested with only one of the three presentation durations, with at least a 1-minute break after each block of 60 trials.
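The Experiment 2 design can be sketched the same way. The participant count and the round-robin group assignment below are assumptions for illustration only; 90 participants (30 per duration group) would be consistent with the F(2, 87) degrees of freedom reported in the Results.

```python
import itertools

# Experiment 2 design sketch: 3 contexts x 3 targets x 20 models = 180 unique
# trials, with presentation duration (40/60/80 ms) varied between subjects.

contexts = ["anger", "happiness", "neutral"]   # closed-mouth context faces
targets = ["anger", "happiness", "neutral"]    # open-mouthed target faces
models = [f"model{i:02d}" for i in range(1, 21)]

trials = list(itertools.product(contexts, targets, models))
print(len(trials))  # 180 unique trials

# Between-subjects factor: each participant sees only one target duration.
# 90 participants and round-robin assignment are illustrative assumptions.
durations_ms = [40, 60, 80]
participants = [f"P{i:02d}" for i in range(1, 91)]
groups = {p: durations_ms[i % len(durations_ms)]
          for i, p in enumerate(participants)}
```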
Results and Discussion

The mean accuracy rates for each micro-expression with different contexts in each presentation-duration group are shown below (see ). To explore the effect of emotional context on micro-expression recognition, a three-way mixed ANOVA with context and target micro-expression as within-subject variables and presentation duration as a between-subject variable was conducted. It revealed a significant effect of context, F(2, 86) = 41.09, p < .05.

The mean accuracy rates for each target micro-expression with different contexts in Experiment 2. (A) The mean accuracy rates for the 40 ms presentation group. (B) The mean accuracy rates for the 60 ms presentation group.
(C) The mean accuracy rates for the 80 ms presentation group.

There was a significant main effect of presentation duration, F(2, 87) = 22.16, p < .05. The three-way interaction was also significant, F(8, 170) = 2.57, p < .05.

Experiment 3

Although micro-expression recognition was influenced by emotional context in both Experiments 1 and 2, it remains unclear why the context effect occurred.
The results of Experiment 2 indicated that the context effect might have resulted from the differences between the context and target expressions. However, previous research on facial expression recognition has found that the emotional valence of the context can influence recognition performance; thus, the emotional valence of the context might also contribute to the context effect. To examine this possibility, in Experiment 3 we used morphed facial expressions as target micro-expressions that could be judged as anger, happiness, or anger plus happiness. If recognition performance were based purely on the differences between the context and target micro-expressions, the response proportions would vary with the stimulus differences between them.
In particular, when the similarity between the target expression and the negative context and the similarity between the target expression and the positive context are both 50%, there should be no difference in response rates for the target micro-expressions between the two contexts. The stimuli were identical to those used in Experiment 2 except that the target micro-expressions were morphed expressions, created from the forty target micro-expression images (open-mouthed anger and happiness) of the 20 models in Experiment 2. We used FantaMorph 5.8 to generate three groups of morphed facial expressions based on each image's similarity to happiness and anger: expressions with a morph ratio of 75% happiness plus 25% anger (75% happiness), 50% happiness plus 50% anger (50% happiness), and 25% happiness plus 75% anger (25% happiness). Thirty subjects volunteered to identify the expression in each picture by selecting one of the three emotion labels (happiness, anger, or happiness plus anger) and to evaluate the intensity of each expression by rating the similarity of the morphed expression to the selected emotional label (ranging from 0% to 100%).
Eight images with a morph proportion of 75% happiness and 25% anger, a choice rate of 90%–100% for happiness, and happiness intensity between 65%–80% were selected. Eight images with a morph proportion of 50% happiness and 50% anger, a choice rate of 47%–60% for happiness plus anger, and anger intensity between 45%–55% were selected. Eight images with a morph proportion of 25% happiness and 75% anger, a choice rate of 80%–100% for anger, and anger intensity between 65%–80% were selected.
In total, 24 images of 8 models (4 females, 4 males) were selected from the forty images according to the response proportions and intensity ratings. All images were displayed on a uniform silver-gray background in the center of the screen with a visual angle of 10°×12.9°. The experimental procedures were identical to those in Experiment 1, except that the target micro-expressions were morphed expressions and participants were asked to indicate the fleeting expression by clicking one of three labels (happiness, anger, or happiness plus anger) on each trial. There were three types of emotional contexts for each of the three target expressions of the 8 models, for a total of 72 different trials. The 72 different trials were repeated three times, for a total of 216 trials. All trials were presented in random order, with at least a 1-minute break after each block of 72 trials.
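FantaMorph performs feature-based warping, but the idea of a morph ratio can be illustrated with a simpler pixel-wise cross-dissolve between aligned images. Everything below, pixel values included, is a hypothetical stand-in rather than the authors' procedure.

```python
# Simplified stand-in for morphing: a pixel-wise cross-dissolve between
# aligned happy and angry images of the same model. A happiness_ratio of
# 0.75 corresponds to the "75% happiness" morph condition.

def cross_dissolve(happy_px, angry_px, happiness_ratio):
    """Blend two aligned pixel sequences by the given happiness ratio."""
    return [happiness_ratio * h + (1 - happiness_ratio) * a
            for h, a in zip(happy_px, angry_px)]

happy = [200.0, 180.0]   # dummy grayscale pixels of a "happy" face
angry = [100.0, 120.0]   # dummy grayscale pixels of an "angry" face

for ratio in (0.75, 0.50, 0.25):
    print(ratio, cross_dissolve(happy, angry, ratio))
```

A real morph also interpolates facial landmark positions before blending, which is why a dedicated tool was used; the cross-dissolve only conveys how the 75/50/25 ratios weight the two source expressions.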
The Mean Response Proportions of Morphed Micro-expressions in Experiment 3.

First, a repeated-measures ANOVA on happiness response proportions with context (negative, neutral, positive) and target micro-expression (75% happiness, 50% happiness, 25% happiness) as within-subject variables was performed. It revealed a significant effect of context, F(2, 28) = 71.62, p < .05.

The mean response proportions for happiness and anger for each target micro-expression in Experiment 3. (A) The mean response proportions for happiness. (B) The mean response proportions for anger.
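Response proportions of this kind can be computed from raw trial records as follows. The helper and the records shown are made-up examples for illustration, not the paper's data.

```python
from collections import Counter

# Compute per-condition response proportions from raw trial records.
# Each record is a (context, target, response) tuple.

def response_proportions(records):
    """Return {(context, target): {response: proportion}}."""
    counts = {}
    for context, target, response in records:
        cell = counts.setdefault((context, target), Counter())
        cell[response] += 1
    return {
        cell_key: {resp: n / sum(c.values()) for resp, n in c.items()}
        for cell_key, c in counts.items()
    }

# Made-up example records, not actual data from the experiment:
records = [
    ("negative", "50% happiness", "anger"),
    ("negative", "50% happiness", "anger"),
    ("negative", "50% happiness", "happiness"),
    ("positive", "50% happiness", "happiness"),
]
props = response_proportions(records)
print(props[("negative", "50% happiness")])
```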
General Discussion

The results of Experiments 1 and 2 showed that negative context impaired micro-expression recognition regardless of the duration of the target micro-expression. In addition, the context effect on micro-expression recognition could have resulted from the stimulus differences between the context and target micro-expressions. The results of Experiment 3 showed that a context effect on micro-expression recognition remained even when the stimulus similarity between the context and target micro-expressions was controlled. Therefore, our results suggest that the context effect on micro-expression recognition might be attributable to both stimulus and valence differences. Our findings are consistent with previous findings that facial expression recognition is influenced by emotional context. More notably, lower micro-expression recognition accuracy was observed in the negative context condition than in the positive or neutral context conditions.
This might be because the negative context appearing before the micro-expression captures more attention. Previous research has shown that attention allocation is related to the emotional valence of stimuli and that more attentional resources are directed to negative facial expressions even when the emotional expressions are irrelevant to the task.
The results of Experiment 3 also showed that participants recognized the target micro-expressions with a morph ratio of 50% happiness plus 50% anger as anger more frequently than as happiness, confirming that negative stimuli can attract more attention than positive ones. In addition, the individual's expectation stemming from the emotional context may also interfere with judgment of the target. We found that micro-expressions were better recognized when the emotional valences of context and target were inconsistent; that is, anger was easier to recognize in a positive context, whereas happiness was easier to recognize in a negative context. However, previous research found that happy faces were recognized more accurately when primed by a happy face than by an angry face, whereas sad expressions were recognized more accurately when primed by an angry face than by a happy face.
Similar results were also observed when the facial expressions were primed by affective scenes. These seemingly contradictory findings might be due to the different presentation durations of prime and target. In previous studies, the prime was displayed for a relatively shorter duration than the target faces, whereas in our study the context expression was displayed much longer than the target micro-expression. Hence, it is likely that a briefly flashed prime facilitates recognition of a similar target facial expression, whereas the longer presentation of the context facial expression impaired recognition of a similar target because of the smaller changes between the context and target expressions. Moreover, the lower accuracy rates for valence-inconsistent than for valence-consistent trials might have been owing to differences between the stimuli. Consistent context expressions differed from target expressions only in the mouth region (closed vs. opened), whereas the differences between inconsistent context and target expressions lay in both the mouth region and other parts of the face. That is, the stimulus differences between the context and target micro-expressions might have led to the context effect. Previous studies have shown that a target is more easily recognized when the differences between targets and non-targets are obvious. However, it is important to note that when the target was a micro-expression with a morph ratio of 50% happiness plus 50% anger, negative context led to more happiness responses and fewer anger responses than did neutral context, whereas positive context led to more anger responses than did neutral context. These results reveal that the valence differences between contexts also contributed to the effect of emotional context. Therefore, the context effect on micro-expression recognition may be owing not only to the stimulus differences between the context and target micro-expressions but also to the valence differences between contexts.
Previous research has shown that facial expression recognition is not simple classification but a cognitive process involving sequential and cumulative stimulus evaluations that take context information into account. However, it remains unclear exactly how emotional context influences micro-expression recognition. The current study provides behavioral evidence for the role of emotional context information in micro-expression recognition. Further studies should use neuroimaging techniques to reveal which stages of micro-expression processing are influenced by emotional context. In summary, the present study provides evidence that emotional context influences micro-expression recognition.
The context effect on micro-expression recognition might be attributable to both the stimulus and valence differences.