Michel Valstar is an Associate Professor at the University of Nottingham, School of Computer Science, and a researcher in Automatic Visual Understanding of Human Behaviour. He is a member of both the Computer Vision Lab and the Mixed Reality Lab. Automatic Human Behaviour Understanding encompasses Machine Learning, Computer Vision, and a sound understanding of how people behave in the world.

Dr Valstar is currently the coordinator of the H2020 LEIT project ARIA-VALUSPA, which will create the next generation of virtual humans. He has also recently been awarded a prestigious Bill & Melinda Gates Foundation award to automatically estimate babies' gestational age after birth using a mobile phone camera, for use in countries with no access to ultrasound scans. Michel was a Visiting Researcher at the Affective Computing group at the MIT Media Lab, and a Research Associate with the iBUG group, part of the Department of Computing at Imperial College London. Michel's expertise is in facial expression recognition, in particular the analysis of FACS Action Units. He recently proposed a new field of research called 'Behaviomedics', which applies Affective Computing and Social Signal Processing to the field of medicine to help diagnose, monitor, and treat medical conditions that alter expressive behaviour, such as depression.

Dr Valstar co-organised the first-of-its-kind and premier Audio/Visual Emotion Challenge (AVEC) and workshop series from 2011 to 2015, which has advanced the field of Affective Computing.
He also initiated and organised the FERA competition series (two editions to date), another first-of-its-kind event in the Computer Vision community, focusing on the recognition of facial Action Units.
He further serves as an Associate Editor for the IEEE Transactions on Affective Computing. Dr Valstar has also contributed substantially to freely accessible data (e.g., SEMAINE, MMI Facial Expression, and GEMEP-FERA) and code (e.g., LAUD and BoRMan) that have repeatedly advanced the field.
In the United Kingdom, he kicked off AC.UK in 2012, a meeting of the Affective Computing community in the UK that has been a great success ever since.
Lastly, his more than 80 high-quality publications have so far attracted over 3,200 citations (h-index = 27; source: Google Scholar).
He also frequently serves on Technical Program Committees in the field.

20/11/2015: Shashank Jaiswal's WACV paper has attained the highest scores on the FERA 2015 AU detection sub-challenge, using a dynamic Deep Learning architecture that combines shape and appearance features with CNNs and LSTMs. See the results below:

[Figure: Weighted average performance on BP4D and SEMAINE, Occurrence Sub-Challenge (Nottingham)]
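For readers curious what such a shape-plus-appearance CNN-LSTM pipeline for AU occurrence detection can look like, here is a minimal PyTorch sketch. It is not the architecture from the WACV paper: the layer sizes, the 64x64 face-crop resolution, the 68-landmark shape vector, and the number of Action Units are all illustrative assumptions.

```python
# Minimal sketch (PyTorch) of a CNN+LSTM multi-label AU occurrence detector.
# NOT the published FERA 2015 architecture; all dimensions are illustrative.
import torch
import torch.nn as nn


class CnnLstmAuDetector(nn.Module):
    def __init__(self, n_aus=12, n_shape_feats=136, hidden=128):
        super().__init__()
        # Appearance stream: small CNN over grey-scale face crops (assumed 64x64).
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 256), nn.ReLU(),
        )
        # Temporal model over concatenated appearance + shape features.
        self.lstm = nn.LSTM(256 + n_shape_feats, hidden, batch_first=True)
        # One sigmoid output per Action Unit (multi-label occurrence detection).
        self.head = nn.Linear(hidden, n_aus)

    def forward(self, frames, shapes):
        # frames: (batch, time, 1, 64, 64); shapes: (batch, time, n_shape_feats)
        b, t = frames.shape[:2]
        app = self.cnn(frames.reshape(b * t, *frames.shape[2:])).reshape(b, t, -1)
        seq, _ = self.lstm(torch.cat([app, shapes], dim=-1))
        return torch.sigmoid(self.head(seq))  # per-frame AU occurrence probabilities


# Toy usage with random data, just to show the expected tensor shapes.
model = CnnLstmAuDetector()
frames = torch.randn(2, 10, 1, 64, 64)   # 2 clips, 10 frames each
shapes = torch.randn(2, 10, 136)         # 68 facial landmarks -> 136 (x, y) values
print(model(frames, shapes).shape)       # torch.Size([2, 10, 12])
```

The point the sketch illustrates is that per-frame appearance and shape features are fused before the LSTM, so the temporal model sees both cues jointly when predicting AU occurrence for each frame.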

01/09/2015: As part of the ARIA-VALUSPA EU project we are collecting the largest-ever crowd-sourced face database. Come be a Citizen Scientist and teach social robots and virtual humans to understand faces at the Faces of the World Zooniverse page!

01/09/2015: Two ICCV 2015 papers accepted: Wang et al. 'TRIC-track: Tracking by Regression with Incrementally Learned Cascades' and Almaev et al. 'Learning to transfer: transferring latent task structures and its application to person-specific facial action unit detection'.

06/07/2015: Dr Valstar is giving a tutorial on facial expression recognition at the Visum Summer School on Computer Vision in Porto. Please download the package with the tutorial manual, toy data, and LGBP-TOP Matlab code before starting the tutorial. This is a self-contained tutorial for anyone who wants to get started with automatic facial expression recognition.

16/11/2012: AVEC test labels publicly available: Now that AVEC 2012 has concluded successfully, we are releasing the test labels for both AVEC 2011 and 2012. Please download them from the AVEC database site using your existing account.

11/06/2012: MayFest activities: The School of Computer Science had a booth during the University of Nottingham’s 2012 MayFest, where we let children and parents wear an Emotiv EEG reader to control a player in a tug-of-war contest. Use your mental power to pull your opponent towards you!