Humans inherently focus on the face to understand other people's emotional states. Facial expressions are used to communicate worldwide, suggesting that they are innate and universal. This essay addresses the various ways in which the brain processes facial information. Specific evidence is given for the part-based, gestalt, and configural models. Evidence presented also analyzes the time course of processing identity, gaze, and expression. This essay also weighs categorical emotion perception against two-dimensional emotion theories. Computational models shed light on the inner workings of our perception of emotion in categories.
Keywords: Russell, Fernández-Dols, Ellison, Massaro
[...] In truth, subjects did place sharp boundaries between the categories, leading some researchers to conclude that people cannot see a face without perceiving a distinct emotion (p. 1159). Other scholars hold that emotion is perceived most basically along two dimensions, arousal-sleep and displeasure-pleasure, with the categories derived from these dimensions (Russell, p. 304). If these dimensions were visualized as a set of axes, depressed expressions would fall in the quadrant mixing sleep and displeasure, while excited expressions would fall in the quadrant mixing arousal and pleasure. [...]
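To make the geometry of this dimensional account concrete, the following minimal sketch (in Python, with invented coordinate values; the function name and example ratings are illustrative, not taken from Russell or any cited study) places an expression's rating on the two axes and reads off its quadrant.

    # Toy illustration of the two-dimensional account of emotion perception.
    # Axes: pleasure runs from -1 (displeasure) to +1 (pleasure);
    # arousal runs from -1 (sleep) to +1 (arousal). All values are invented.

    def quadrant(pleasure: float, arousal: float) -> str:
        """Name the quadrant that a (pleasure, arousal) rating falls into."""
        if pleasure >= 0 and arousal >= 0:
            return "arousal + pleasure (e.g., excitement)"
        if pleasure < 0 and arousal >= 0:
            return "arousal + displeasure (e.g., distress)"
        if pleasure < 0 and arousal < 0:
            return "sleep + displeasure (e.g., depression)"
        return "sleep + pleasure (e.g., relaxation)"

    # Hypothetical ratings for the two expressions described above.
    print(quadrant(pleasure=-0.6, arousal=-0.7))  # a depressed expression
    print(quadrant(pleasure=0.7, arousal=0.8))    # an excited expression

On this view, the discrete emotion categories are labels attached after the fact to regions of a continuous two-dimensional space.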
[...] Three models have been developed to explain how emotional information is conveyed in a face. The purely categorical model treats individual expressions as acting like words in a language: by moving an eyebrow or crinkling a nose, one displays “arbitrary symbols whose meanings are determined by convention” (p. 230). Evidence against this theory comes from newborns, who react appropriately to different facial expressions before they could have learned any such convention. A second model, the componential model, theorizes that some components contributing to a given expression are inherently meaningful. [...]
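The contrast between these models can be pictured as a lookup table: a strictly componential account would assign each muscle movement a fixed meaning that survives any combination. The toy sketch below encodes that assumption; the component names and meaning glosses are invented placeholders, not drawn from the cited studies.

    # Toy encoding of a strictly componential model: each facial component
    # carries a fixed, inherent meaning, and an expression's meaning is
    # simply the collection of its components' meanings.
    # Component names and glosses are invented placeholders.

    COMPONENT_MEANINGS = {
        "brow_raise": "attention to novelty",
        "brow_furrow": "effort against an obstacle",
        "nose_wrinkle": "aversion or rejection",
        "lip_corner_pull": "pleasure or affiliation",
    }

    def expression_meaning(components):
        # A strictly componential model predicts these glosses do not
        # change when the components appear together.
        return [COMPONENT_MEANINGS[c] for c in components]

    print(expression_meaning(["brow_furrow", "nose_wrinkle"]))

The finding discussed next, that most single muscle movements change meaning when combined, is evidence against exactly this fixed-lookup picture.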
[...] Similarly, a study by Wallbott and Ricci-Bitti that manipulated neutral faces with single facial muscle movements, and with combinations thereof, showed that the meaning of most single muscle movements changes when they are presented in combination. Only a few muscle movements retain the same emotional meaning across different contexts (p. 529). In the actual experiment, Calder, Young, Keane, and Dean found more evidence supporting the configural model. The team constructed a series of composite faces by slicing face images in half horizontally through the nose. [...]
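As a rough illustration of how such composite stimuli can be assembled (a sketch only, not the authors' actual procedure or software; the file names and split height are placeholders), the following Python snippet splices the top half of one face image onto the bottom half of another using the Pillow imaging library.

    # Sketch: build a composite face from the top half of one image and
    # the bottom half of another, split horizontally at roughly nose
    # height. File names and the split fraction are placeholders.
    from PIL import Image

    def make_composite(top_path, bottom_path, split_frac=0.5):
        top_img = Image.open(top_path)
        bottom_img = Image.open(bottom_path).resize(top_img.size)
        width, height = top_img.size
        split_row = int(height * split_frac)  # ~nose height in aligned portraits
        composite = top_img.copy()
        # Paste the lower half of the second face below the split line.
        lower_half = bottom_img.crop((0, split_row, width, height))
        composite.paste(lower_half, (0, split_row))
        return composite

    make_composite("face_a.png", "face_b.png").save("composite.png")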
[...] Meanings in motion and faces: Developmental associations between the processing of intention from geometrical animations and gaze detection accuracy. Development and Psychopathology, 99-118.
Dailey, M. N., Cottrell, G. W., Padgett, C., & Adolphs, R. (2002). EMPATH: A neural network that categorizes facial expressions. Journal of Cognitive Neuroscience, 1158-1173. Retrieved Nov from Academic Search Premier, EBSCOhost.
Ganel, T., Goshen-Gottstein, Y., & Goodale, M. A. (2005). Interactions between the processing of gaze direction and facial expression. Vision Research, 1191-1200. Retrieved Nov from Academic Search Premier, EBSCOhost. [...]
[...] Infants perceive emotions on a continuous spectrum by recognizing patterns of facial movements (as cited in Russell, p. 309). Children do not recognize distinct emotions until the age of 3 or 4 (p. 309). The last key to social intelligence, gaze perception, develops around the age of seven (Campbell et al., p. 107). Gaze perception allows observers to identify the recipient of an expression. Once all of these skills are acquired, children over seven years old process faces as adults do, through mentalization. [...]