Deep learning for facial expression and/or emotion recognition

Description

This project analyses large volumes of emotional videos and images to recognise human facial expressions and/or emotions using deep learning methods. The videos and images come from databases published in the literature, such as UvA-NEMO, Cohn-Kanade, AFEW, MMI, and MAHNOB. Initially, a Convolutional Neural Network (CNN) combined with a Recurrent Neural Network (RNN) was applied to distinguish posed from genuine smiles in the UvA-NEMO smile database, and a 3D mesh model with a CNN was applied to recognise emotions in the Cohn-Kanade (CK and CK+) databases. Beyond these preliminary results [1, 2], further experiments are needed to develop high-performing models for recognising or classifying human emotions and/or facial expressions from emotional videos and images.
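To illustrate the overall shape of such a pipeline, the sketch below shows a toy, untrained CNN+RNN classifier in plain NumPy: a per-frame convolutional feature extractor followed by a tanh recurrence over the frame sequence and a sigmoid readout for a posed-vs-genuine decision. All weights, sizes, and the random "video" are hypothetical stand-ins chosen for this illustration; a real system would use a trained deep learning framework model and actual face crops from the databases above.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(img, kernel):
    """Valid 2D cross-correlation of a single-channel image with one kernel."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def frame_features(frame, kernels):
    """CNN stage: convolution -> ReLU -> global average pool, per filter."""
    return np.array([np.maximum(conv2d(frame, k), 0).mean() for k in kernels])

def rnn_classify(frames, kernels, Wx, Wh, w_out):
    """RNN stage: tanh recurrence over per-frame features, sigmoid readout."""
    h = np.zeros(Wh.shape[0])
    for frame in frames:
        x = frame_features(frame, kernels)
        h = np.tanh(Wx @ x + Wh @ h)
    logit = w_out @ h
    return 1.0 / (1.0 + np.exp(-logit))  # toy P(genuine smile)

# Hypothetical "video": 8 frames of 16x16 grayscale noise standing in for face crops.
video = rng.standard_normal((8, 16, 16))
kernels = rng.standard_normal((4, 3, 3)) * 0.1  # 4 random conv filters (untrained)
Wx = rng.standard_normal((6, 4)) * 0.5          # feature-to-hidden weights
Wh = rng.standard_normal((6, 6)) * 0.5          # hidden-to-hidden weights
w_out = rng.standard_normal(6) * 0.5            # hidden-to-logit weights

p = rnn_classify(video, kernels, Wx, Wh, w_out)
print(f"P(genuine) = {p:.3f}")
```

The recurrence is what lets the model use temporal dynamics (e.g. smile onset and offset speed), which is the usual motivation for pairing an RNN with a per-frame CNN on smile videos.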

 

Requirements

Recommended Reading:

  1. Liwei Hou (2019). Distinguishing genuine and posed smiles using computer vision deep learning approaches. Link: http://courses.cecs.anu.edu.au/courses/CSPROJECTS/19S2/reports/u6343089_report.pdf
  2. Jialin Yang (2019). Detecting Human Emotions From 3D Mesh. Link: http://courses.cecs.anu.edu.au/courses/CSPROJECTS/19S2/reports/u5894100_report.pdf

Updated: 10 August 2021 / Responsible Officer: Dean, CECS / Page Contact: CECS Marketing