Lecturing

I'm currently convenor of the third-year spring module in Machine Learning, G53MLE. From 2014/2015, Machine Learning will be a 20-credit module.

If you want to make an appointment to see me about projects or otherwise, please suggest a date and time that at least doesn't conflict with my formal diary.

Student projects

The following project ideas can be used as inspiration for final-year BSc/MSc student projects. They are geared towards individual projects, but with some tinkering I'm sure you could create a project for two people out of most of these as well.


Human-eyed Robotic Rover


keywords: Robot vision, data collection, planning, control

key modules: Computer vision, Robotics

The human vision system uses two eyes. By virtue of having two eyes rather than one, we have the ability to sense depth. However, 2-4% of the population has a problem coordinating the two eyes properly. This is called Amblyopia, or lazy eye. Together with a clinician at QMC we are developing models to find out how Amblyopia develops in the first place. To do this, we would like to collect a database of binocular images collected by a robotic rover (i.e. a robot car).

Your task would be to equip our existing Lego robotic rover with two cameras, which can swivel independently just like human eyes. You will then need to program the robot to rove around the building taking images of certain objects and people. This will include programming basic visual object detection, face detection, and of course robot planning and control modules. This project assumes you use off-the-shelf software wherever possible; the challenge is to integrate it into a working human-eyed robotic rover.
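
To give a flavour of the kind of off-the-shelf building block you could start from, here is a minimal sketch that detects faces in a single camera frame using OpenCV's bundled Haar cascade. The camera index and the choice of detector are illustrative assumptions, not part of the project specification.

    # Minimal sketch: detect faces in one frame from a USB camera using the
    # frontal-face Haar cascade that ships with OpenCV. Camera index 0 and the
    # cascade choice are assumptions for illustration only.
    import cv2

    cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    face_detector = cv2.CascadeClassifier(cascade_path)

    camera = cv2.VideoCapture(0)   # first attached camera (assumed index)
    ok, frame = camera.read()
    camera.release()

    if ok:
        grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_detector.detectMultiScale(grey, scaleFactor=1.1, minNeighbors=5)
        for (x, y, w, h) in faces:
            print(f"face at x={x}, y={y}, width={w}, height={h}")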


Mirror my expression


keywords: Computer vision, machine learning, facial expression recognition, data collection, human-computer interaction

key modules: Machine Learning, HCI, Computer vision

Training facial expression recognition systems is hard. Part of the reason for that is that all people look different, and they can produce around 7,000 different expressions! That makes for a lot of different faces. One way to improve facial expression recognition software is to build person-specific machine learning models.

This project proposes to use a mirror and a computer screen to collect data and automatically learn better Machine Learning models. On a screen, a digital face would show you what expression to make, and you can then use the mirror to practise your expression until you're happy with the result. Using a camera, you can then store that expression and update your Machine Learning model. The result of this project would be a physical system that can help train personalised systems. All hardware is available for the project. You are expected to use existing Machine Learning algorithms, but you should understand how they work. The project has a heavy focus on HCI, i.e. the final system must be proven to be highly usable.
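
To illustrate what updating a person-specific model could look like in practice, here is a minimal sketch, assuming each captured face has already been converted into a fixed-length feature vector. The label set, the feature extraction step, and the incremental classifier are all illustrative placeholders, not the required design.

    # Minimal sketch: incrementally update a person-specific expression
    # classifier each time a new mirrored expression is captured and labelled.
    # The label set is illustrative; feature extraction from the camera image
    # is assumed to happen elsewhere.
    import numpy as np
    from sklearn.linear_model import SGDClassifier

    EXPRESSIONS = ["neutral", "happy", "sad", "surprised"]   # illustrative labels
    model = SGDClassifier()                                  # supports partial_fit

    def update_model(feature_vector, expression_label, first_call=False):
        X = np.asarray(feature_vector).reshape(1, -1)
        y = [expression_label]
        if first_call:
            # partial_fit needs the complete label set on the first call
            model.partial_fit(X, y, classes=np.array(EXPRESSIONS))
        else:
            model.partial_fit(X, y)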


Kinect to your respiration


keywords: 3D Computer Vision, human sensing, biosignals

key modules: Computer vision, Image Processing

The Kinect was a revolution for the computer vision research community. From one day to the next, sensing depth was possible at very low cost with reasonable accuracy. The new Kinect 2.0, which was released in 2014, is even more accurate than its predecessor. People have used it for many things, including measuring respiration.

In this project you will use the Kinect to measure where people are, and measure their respiration rate. You will need to read up on existing methodologies, suggest improvements, implement them, and evaluate them on a small set of data. This project is relatively research-focussed, but does include a fair amount of software implementation to control the Kinect.
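
As a rough illustration of one common approach in the literature, the sketch below estimates breaths per minute from the dominant frequency of a chest-depth signal. How you obtain that signal from the Kinect, and the exact breathing band used here, are assumptions you would need to revisit in the project.

    # Minimal sketch: estimate respiration rate (breaths per minute) from a 1-D
    # signal of mean chest depth over time, sampled at a known frame rate.
    # Extracting the chest region from the Kinect depth stream is not shown.
    import numpy as np

    def respiration_rate_bpm(chest_depth, fps):
        """chest_depth: 1-D array with one mean depth value per frame."""
        signal = chest_depth - np.mean(chest_depth)        # remove constant offset
        spectrum = np.abs(np.fft.rfft(signal))
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)  # frequencies in Hz
        band = (freqs >= 0.1) & (freqs <= 0.7)             # roughly 6-42 breaths/min
        dominant = freqs[band][np.argmax(spectrum[band])]
        return dominant * 60.0                             # convert Hz to breaths/min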


Concept-Focused Brain Attention Model for Virtual Humans


keywords: Artificial Intelligence, Dialogue Management, Automatic Speech Recognition, Virtual Humans

key modules: AI, Machine Learning

For decades researchers have struggled with ways to make Virtual Humans engage in human-like conversations. The Turing Test, which is designed to determine whether an AI communicates like a human or not, is famous for gauging progress in this field. However, even the best systems don't seem to make very coherent conversation. This is partly because the AIs don't have a proper 'train of thought'. In this project you will use the idea of a concept-focused brain attention model to imbue existing Virtual Humans with a degree of ability to 'stay on topic' and to make only sensible topic switches.

A second aspect of the project is to use the concept models to train specialised automatic speech recognition models. Please contact me for more information.

While this is a research focused project, the resulting dialogue manager must be implemented in an existing Virtual Human of the ARIA-VALUSPA EU project.


Funny Face Detector


keywords: Computer Vision, Machine Learning, Facial Expression Recognition

key modules: Computer vision, Machine Learning

We have a large collection of recordings of people trying to make a Virtual Human smile by pulling funny faces at it. At this moment, the Virtual Human cannot really detect when someone pulls a funny face, or indeed decide whether one face is funnier than another. In this project you will give the Virtual Human that capability.

Your task is to build computer vision and machine learning algorithms that can assess whether an image of a face is funny or not. We have a large database of face video, and we can help give you ideas of how to build facial expression recognition systems (we have a lot of existing code and are world experts in this area), but the algorithm design and methodology are down to you. In the final stage of your project you must integrate your system into an existing Virtual Human that can play the smile game, and show how your funny-face detector improves play.
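
Purely as a sketch, and under the assumption that each frame has already been reduced to a feature vector (for example a landmark- or AU-based descriptor) with a funny/not-funny label, a first baseline could look like this. The choice of classifier and evaluation protocol are illustrative, not a prescribed methodology.

    # Minimal sketch: train and cross-validate a binary 'funny / not funny'
    # classifier on pre-extracted face features. X and y are assumed to come
    # from the smile-game recordings; the SVM is just one possible baseline.
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    def evaluate_funny_face_baseline(X, y):
        """X: (n_frames, n_features) array; y: 1 for funny, 0 for not funny."""
        classifier = SVC(kernel="rbf", C=1.0)
        scores = cross_val_score(classifier, X, y, cv=5)   # 5-fold cross-validation
        return scores.mean(), scores.std()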


Virtual Human Interviewer


keywords: HCI, Dialogue Management, Knowledge representation

key modules: HCI, AI

This is a unique opportunity to work as part of the ARIA-VALUSPA EU Project with world-leading experts on Virtual Humans.

We are looking for excellent HCI students with a keen interest in AI who appreciate the complexities of human non-verbal language, to build a Virtual Human Interviewer with the aim of collecting domain knowledge automatically.


Intelligent Machine Learning Data Reduction Techniques for Continuous Affect Recognition


keywords: Regression, Machine Learning, Automatic Human Behaviour Understanding

The field of affect/emotion recognition is moving away from describing affect in discrete classes such as the emotions happy or sad, and towards a continuous description of affect, where each emotion is a point in a two- or higher-dimensional space. The most common space is that of Valence (positive/negative emotion) and Arousal (low/high energy). In that space, happiness would have high valence and high arousal, while sadness would have low valence and low arousal.

We have collected a huge database of over 40 hours of interactions between two people (dyads). A large portion of that data has been labelled for continuous affect by multiple raters (up to 8 raters per clip). The large amount of data means that we have to make a selection of what data to use in our Machine Learning techniques (i.e. regression methods). The fact that raters sometimes largely disagree on what label to give to a particular frame could be exploited for this: if raters agree strongly, then the corresponding frame should be easy to detect and can be considered a prototypical example of the labelled emotion, so we could include it in the dataset. If raters disagree, we may decide to discard the relevant frame.
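
One very simple instantiation of this idea, shown here in Python for brevity (the project itself asks for C or Matlab), is to keep only frames whose labels have a small spread across raters. The agreement measure (standard deviation) and the threshold below are illustrative choices, not the mathematical model the project asks you to devise.

    # Minimal sketch of agreement-based frame selection: keep a frame for
    # training only if the raters' continuous labels (e.g. valence) are close
    # to one another. Standard deviation and the threshold are illustrative.
    import numpy as np

    def select_frames(ratings, max_std=0.1):
        """ratings: (n_frames, n_raters) array of continuous labels.
        Returns indices of frames with sufficiently low rater disagreement."""
        disagreement = np.std(ratings, axis=1)   # per-frame spread across raters
        return np.where(disagreement <= max_std)[0]

    # Example: the first frame has high agreement, the second does not
    ratings = np.array([[0.80, 0.82, 0.79],
                        [0.10, 0.60, 0.35]])
    print(select_frames(ratings))                # -> [0]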

A student working on this project should devise a mathematical model for this data selection strategy, and implement it in C or Matlab. Experiments should then be run to see if the approach was successful. We are looking for a student with a good grasp of mathematics, in particular probability theory. If the project is a success, we aim to publish the results in a scientific paper.


Semi-Automatic Facial Muscle Action Annotation Tool


keywords: HCI/UI, XML, software engineering, Automatic Human Behaviour Understanding

The field of Automatic Facial Expression Recognition was born from the need of psychologists to annotate large amounts of face video in terms of facial expressions. In particular, they want to code expressions in terms of individual facial muscle actions, called Action Units (AUs), as described in the Facial Action Coding System (FACS). As a rule, it takes a human rater approximately three hours to annotate one minute of face video, a ratio of 180:1.

Unfortunately, the holy grail of fully automatic AU detection, which would cut the human annotator out of the loop, is far from being reached. What does exist is software that detects some AU occurrences, but not all. Integrating this software into a custom-built AU Annotation Tool would speed up AU annotation considerably.

We are looking for a student with an affinity for user interfaces, an interest in automatic human behaviour understanding, and a willingness to work with people from other disciplines, in particular psychologists. The goal of the project is to create an AU annotation tool from scratch that incorporates our latest version of automatic AU detection, thus creating a Semi-Automatic Facial Muscle Action Annotation Tool. If the project is successful, this tool is likely to be used by many researchers all over the world!