Classifying Facial Expressions (Funded 2009-2012)

TUBITAK, project number 109E061.

Facial Action Coding System (FACS)


The human face houses most of the sense organs and important expressive mechanisms. As the interface that makes interpersonal communication possible, the face has long attracted attention in many fields, especially psychology.

Facial expressions are usually involuntary, and they are the most direct and natural means of expressing emotion. For that reason, the analysis of these expressions is the most prominent method of emotion recognition. Accurate analysis of a system user's emotions will enable quick and effective human-computer interaction. Detecting psychological states such as boredom, fatigue, and stress will help us develop warning systems for staff with critical duties. Once we reach the level of human security experts in facial expression recognition, we will be able to ‘suspect’ deception simply by analyzing a video of a subject.

Photographer and neurophysiologist Duchenne de Boulogne was one of the first to study facial expressions (see “The Mechanism of Human Facial Expression”, published in 1862). The real potential of this area was realized in the 1990s, building on the earlier studies of Ekman and Friesen (1978). Ekman and Friesen offered a systematic study of facial actions (the Facial Action Coding System, or FACS), which later became an important milestone in automatic facial expression recognition research.

Figure 1 – The experiment of induced facial expressions (Duchenne de Boulogne, 1862)

One interesting finding of psychology research is that there are at least six universal facial expressions: happiness, sadness, surprise, anger, fear, and disgust. With FACS, Ekman and Friesen provide a method for the systematic measurement, or coding, of facial behaviors. Individual facial behaviors are identified and coded as Action Units (AUs).

The premise of the FACS coding system is that every facial expression can be described by a combination of AUs at various intensities. Examples of AUs can be found on the FACS homepage listed below.
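
As a toy illustration of how AU-based coding can feed recognition, the Python sketch below matches a set of detected AUs against a few prototypical expressions. The AU combinations and the match_expression helper are illustrative assumptions, not the FACS specification or our project code; exact prototype codings vary across sources.

    # Illustrative only: a few commonly cited AU prototypes (e.g. happiness as
    # roughly AU6 + AU12); the codings are simplified and differ between sources.
    PROTOTYPES = {
        "happiness": {6, 12},        # cheek raiser + lip corner puller
        "surprise":  {1, 2, 5, 26},  # brow raisers + upper lid raiser + jaw drop
        "sadness":   {1, 4, 15},     # inner/outer brow action + lip corner depressor
    }

    def match_expression(observed_aus):
        """Return the prototype whose AU set is best covered by the observed AUs."""
        def coverage(name):
            proto = PROTOTYPES[name]
            return len(proto & observed_aus) / len(proto)
        return max(PROTOTYPES, key=coverage)

    # Hypothetical detector output for a smiling face: AU6, AU12 and AU25 active.
    print(match_expression({6, 12, 25}))   # -> happiness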

We argue that FACS AUs have important limitations. We propose to extract and use facial muscle forces as features in facial expression recognition.

Facial Expression Recognition Based on Facial Anatomy


In our studies in PILAB we develop algorithms that analyze facial expressions without human intervention. The analysis is done by tracking feature points on the face and decomposing the resulting deformations into muscle forces. In other words, we map the observed expression to muscle forces under the constraints of a high-precision anatomical face model. Our eventual goal is to automatically recognize genuine and subtle facial expressions.
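
The sketch below illustrates one way such a decomposition could be set up, assuming a linear muscle model in which each muscle contributes a fixed displacement pattern to the tracked points; activations are then recovered with non-negative least squares. The matrix B, the point and muscle counts, and the synthetic data are assumptions for illustration only, not our actual anatomical model.

    # Minimal sketch, assuming a *linear* muscle model: column j of B holds the
    # displacement pattern that unit activation of muscle j produces on the
    # tracked feature points. B and the counts below are hypothetical.
    import numpy as np
    from scipy.optimize import nnls

    def muscle_activations(B, d):
        """Solve min ||B a - d||^2 with a >= 0 (muscles can only contract)."""
        a, residual = nnls(B, d)
        return a, residual

    rng = np.random.default_rng(0)
    n_points, n_muscles = 20, 5                    # 20 tracked points (x and y), 5 muscles
    B = rng.normal(size=(2 * n_points, n_muscles)) # hypothetical muscle-influence basis
    a_true = np.array([0.8, 0.0, 0.3, 0.0, 0.5])   # ground-truth activations
    d = B @ a_true + 0.01 * rng.normal(size=2 * n_points)  # observed, noisy displacements

    a_hat, _ = muscle_activations(B, d)
    print(np.round(a_hat, 2))                      # close to a_true

The recovered activation vector is the kind of muscle-force feature we propose to feed into an expression classifier.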

Figure 2 – Happy Uraan and her muscle activations

Journals and Conferences


  • Image and Vision Computing
  • Pattern Recognition
  • IEEE Transactions on Image Processing
  • IEEE Transactions on Pattern Analysis and Machine Intelligence
  • IEEE Transactions on Systems, Man, and Cybernetics
  • The First International Conference on Advances in Computer-Human Interaction (ACHI 08)
  • IEEE Workshop on Neural Networks for Signal Processing MLSP 2007
  • International Conference on Acoustics, Speech and Signal Processing (ICASSP 2007)
  • International Conference on Image Processing (ICIP 2006)
  • IEEE International Conference on Image Processing (ICIP 2005)
  • Visual Communications and Image Processing (VCIP 2005)
  • IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2005)

Reference Web Sites


  • Face Detection – Face Detection Homepage
  • Face Recognition – Face Recognition Homepage
  • FACS – Facial Action Coding System Homepage
  • OpenCV (Open Source Computer Vision) – A library of programming functions for real-time computer vision
  • CANDIDE (a parametrized face) – Model-based coding of human faces