The problem with imitating others, though, is that unless the other person in the conversation is a mirror, we cannot see what we ourselves are expressing. In other words, our own expressions are felt, but unseen. When we look at the other person's face, we can visually perceive that they are smiling without actually feeling what they are feeling. How, then, does one imitate?
There are two models that attempt to explain how we match what we see to what we do. The first is the Associative Sequence Learning (ASL) model, which holds that what we see becomes directly linked with its proprioceptive representation through association: we learn that a felt action matches a seen action when we experience contingencies between the two. This occurs when we are given visual feedback on our own performance, either by a mirror or by a mirroring person. The Active Intermodal Mapping (AIM) model, on the other hand, proposes that imitation is mediated by an internal mechanism that performs this matching for us (Meltzoff & Moore, 1997).
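To make the ASL idea concrete, here is a toy sketch (my own illustration, not taken from either paper): a simple Hebbian rule strengthens the link between a seen expression and the proprioceptive pattern felt at the same time, so that the sight alone can later retrieve the matching feeling. The patterns below are hypothetical stand-ins.

```python
import numpy as np

# Toy sketch of Associative Sequence Learning: repeatedly pairing a seen
# expression with the felt (proprioceptive) pattern strengthens a Hebbian
# link between them.
n_expressions = 4
visual = np.eye(n_expressions)          # one distinct visual pattern per expression
proprio = np.eye(n_expressions)[::-1]   # an arbitrary felt pattern paired with each

W = np.zeros((n_expressions, n_expressions))
for v, p in zip(visual, proprio):
    W += np.outer(p, v)                 # Hebbian rule: strengthen co-occurring pairs

# After contingent experience (e.g., practicing in front of a mirror),
# seeing expression 0 retrieves the proprioceptive pattern felt with it.
print(W @ visual[0])                    # equals proprio[0]
```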
In a recent Psychological Science report, a team of researchers studied how visual feedback relates to facial self-imitation (Cook, Johnston, & Heyes, 2012). They filmed each participant performing a 16-second joke sequence, then selected four target expressions from the clip for the participant to imitate. Participants captured their attempts on camera by clicking a computer mouse. In the first experiment, half of the participants received visual feedback after making their imitation attempts: after each attempt, they were immediately shown their screen capture next to the target expression. The other half received no visual feedback and were shown only the target expression after each attempt.
In Experiment 2, half of the participants again received contingent visual feedback, whereas the other half received non-contingent visual feedback: they were shown feedback after each attempt, but the feedback came from a different attempt altogether. The accuracy of self-imitation was measured with principal component analysis (PCA): images of the targets and the attempts were projected into a multidimensional expression space, where the shorter the distance between a target expression and an attempt, the more accurate the imitation.
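To illustrate the kind of measurement involved, here is a minimal sketch (my own reconstruction, not the authors' actual pipeline): flatten face images into vectors, fit a PCA "expression space," and score each attempt by its distance from the target. The image size and the random data are stand-ins for real face images.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Hypothetical stand-ins for real data: 200 grayscale face images of
# 48x48 pixels, each flattened into a 2304-dimensional vector.
faces = rng.random((200, 48 * 48))

# Project all images into a low-dimensional principal-component space.
pca = PCA(n_components=20)
pca.fit(faces)

def imitation_error(target_vec, attempt_vec):
    """Euclidean distance in PCA space; smaller = more accurate imitation."""
    t = pca.transform(target_vec.reshape(1, -1))
    a = pca.transform(attempt_vec.reshape(1, -1))
    return float(np.linalg.norm(t - a))

# Example: compare one target expression with two imitation attempts.
target, attempt_1, attempt_2 = faces[0], faces[1], faces[2]
print(imitation_error(target, attempt_1))
print(imitation_error(target, attempt_2))
```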
The results of both experiments supported the ASL model of imitation. Imitation accuracy improved for those who received visual feedback, but deteriorated for those who had to rely on proprioception alone (no visual feedback). The experiments also showed that the feedback must be relevant: the group given visual feedback from a different attempt showed no improvement in accuracy either.
In conclusion, we imitate, and get better at imitating, by experiencing relevant visual feedback: the kind that directly associates visual representations with our proprioceptive representations in real life. In light of this finding, certain skill-training and rehabilitative programs may be improved by providing patients with useful, contingent feedback. As for the rest of us (and Kristen Stewart), our mimicking skills can be improved by imitating those who give us the best visual feedback, or simply by practicing in front of a mirror.
Cook, R., Johnston, A., & Heyes, C. (2012). Facial self-imitation: Objective measurement reveals no improvement without visual feedback. Psychological Science, XX(X), 1–6.
Explorable.com (2010). Chameleon effect. Retrieved 28 December 2012 from http://explorable.com/chameleon-effect.html
Yamamoto, N., & Shelton, A. L. (2005). Visual and proprioceptive representations in spatial memory. Memory & Cognition, 33(1), 140–150.