Michel Valstar is an Assistant Professor at the University of Nottingham, School of Computer Science, and a researcher in Automatic Visual Understanding of Human Behaviour. He is a member of both the Computer Vision Lab and the Mixed Reality Lab. His research encompasses Machine Learning, Computer Vision, and a good understanding of how people behave in this world. He is currently coordinator of the H2020 LEIT project ARIA-VALUSPA, which will create the next generation of virtual humans. He has also recently been awarded a prestigious Bill & Melinda Gates Foundation award to automatically estimate babies' gestational age after birth using a mobile phone camera, for use in countries with no access to ultrasound scans. Michel was a Visiting Researcher at the Affective Computing group at the Media Lab, MIT, and a research associate with the iBUG group, part of the Department of Computing at Imperial College London. Michel's expertise is in facial expression recognition, in particular the analysis of FACS Action Units. He recently proposed a new field of research called 'Behaviomedics', which applies Affective Computing and Social Signal Processing to the field of medicine to help diagnose, monitor, and treat medical conditions that alter expressive behaviour, such as depression.
18/12/2014: Open Positions: There are two 3-year positions open on our recently awarded ARIA-VALUSPA EU project to build the next generation of Virtual Humans. We are looking for a top-rate Facial Expression Analysis Research Fellow and a Research Assistant/Scientific Programmer to design and build the virtual human architecture.
28/01/2014: eMax released! eMax is both an API and a set of tools to analyse faces. It currently comes with models that predict the six basic emotions and a set of FACS Action Units (AUs) using Local Gabor Binary Patterns from Three Orthogonal Planes (LGBP-TOP). It is freely available for non-commercial use. To request a copy, contact Timur Almaev with your GitHub account and a signed EULA, along with a brief description of what you intend to use eMax for.
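For readers unfamiliar with the descriptor behind eMax, the sketch below illustrates the LBP-TOP idea at the core of LGBP-TOP: Local Binary Patterns are computed on the XY, XT, and YT planes of a video volume, and the per-plane histograms are concatenated into one spatio-temporal descriptor. This is a simplified, hedged illustration, not eMax's actual code: the Gabor filtering step of LGBP-TOP is omitted, and only one central slice per plane is used rather than accumulating over all slices. All function names are illustrative.

```python
def lbp_code(plane, r, c):
    """8-neighbour Local Binary Pattern code for pixel (r, c) of a 2-D plane."""
    centre = plane[r][c]
    # Neighbours visited clockwise from the top-left corner.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if plane[r + dr][c + dc] >= centre:
            code |= 1 << bit
    return code

def plane_histogram(plane):
    """256-bin histogram of LBP codes over a plane's interior pixels."""
    hist = [0] * 256
    for r in range(1, len(plane) - 1):
        for c in range(1, len(plane[0]) - 1):
            hist[lbp_code(plane, r, c)] += 1
    return hist

def lbp_top(volume):
    """volume[t][y][x] -> concatenated XY/XT/YT histograms (768 values).

    Simplification: one central slice per plane; the published LBP-TOP
    operator accumulates codes over every slice of the volume.
    """
    T, H, W = len(volume), len(volume[0]), len(volume[0][0])
    xy = volume[T // 2]                                            # spatial appearance
    xt = [[volume[t][H // 2][x] for x in range(W)] for t in range(T)]  # horizontal motion
    yt = [[volume[t][y][W // 2] for y in range(H)] for t in range(T)]  # vertical motion
    return plane_histogram(xy) + plane_histogram(xt) + plane_histogram(yt)
```

The XT and YT histograms are what give the descriptor its sensitivity to facial motion over time, which is why TOP-style features suit expression analysis better than per-frame LBP.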
22/01/2014: AVEC 2014 announced. The fourth Audio/Visual Emotion Challenge is out: AVEC 2014, 3D Dimensional Affect and Depression, to be held as a satellite workshop of ACM Multimedia 2014.
12/08/2013: We’re organising BMVC 2014 in Nottingham! Webpage now up and running.
12/08/2013: I’m giving an atelier on facial expression recognition during the Cambridge-based social robotics summer school. Please download the package with the tutorial manual, toy data, and LGBP-TOP Matlab code before the atelier starts.
28/02/2013: AVEC 2013: The programme for AVEC 2013 is up, including an exciting keynote by Jeff Cohn and four papers on challenge submissions. Researchers interested in recognising the level of depression can still use the challenge data, which comprises over 150 recordings of clinically depressed people.
24/01/2013: G53MLE: From next year, I’ll be teaching the third-year course in Machine Learning, G53MLE, which unfortunately isn’t running this year.
16/11/2012: AVEC test labels publicly available: Now that AVEC 2012 has concluded successfully, we are releasing the test labels for both AVEC 2011 and 2012. Please download them from the AVEC database site using your existing account.
11/06/2012: MayFest activities: The School of Computer Science had a booth during the University of Nottingham’s 2012 MayFest, where we let children and parents wear an Emotiv EEG reader to control a player in a tug-of-war contest, using their mental power to pull their opponent towards them!