Abstract

The face is a powerful channel of non-verbal communication. Anatomically based facial action units, alone and in combination, can convey nearly all possible facial expressions, regulate social behavior, and communicate emotion and intention. Human-observer-based approaches to measuring facial actions are labor-intensive, qualitative, and thus not feasible for real-time applications or large datasets. For these reasons, objective, reliable, valid, and efficient automated approaches to facial action measurement are needed. Moreover, synthesizing realistic expressions can be useful for generating large and balanced facial expression databases. Recent advances in machine learning and computer vision offer a powerful way to automatically detect and synthesize facial actions. In this talk, I will present my work on novel deep-learning-based computational approaches for automated facial action detection and synthesis, and the use of automated facial annotation in two applications. One is deep brain stimulation (DBS) of the human subcortex for treatment-resistant obsessive-compulsive disorder. The other is infant response to perturbations of mothers’ behavior in mother-infant interaction. These findings suggest that automated measurement of facial actions can directly reflect stimulation of neural circuits related to reward and the abilities of human infants to cope with interactive challenges.

Biography

Itır Önal Ertuğrul is an Assistant Professor in the Department of Cognitive Science and Artificial Intelligence at Tilburg University. Previously, she was a postdoctoral researcher at the Robotics Institute at Carnegie Mellon University and at the Affect Analysis Group at the University of Pittsburgh. She was a visiting Ph.D. student in the Pattern Recognition and Bioinformatics Group at Delft University of Technology. She received her B.Sc., M.Sc., and Ph.D. degrees from the Department of Computer Engineering at Middle East Technical University in 2011, 2013, and 2017, respectively. Her research interests are in the broad areas of machine learning and affective computing, with a specific focus on the automated analysis and synthesis of facial actions to understand human behavior, emotion, and psychopathology.

Please click the link below to join the webinar:
https://zoom.us/j/91617631819?pwd=d21zOU5pKzlVN1ltcHc4ZU1jSGJOdz09

Passcode: 173144