Affective Computing and Social Signal Processing in the UK


Thank you

The AC.UK event is now over. It was a great success, thanks to the brilliant talks by the invited speakers, the discussion-inspiring posters brought by delegates, and of course the presence of all the delegates themselves.

What?

A get-together of researchers and interested parties in the field of Affective Computing and Social Signal Processing in the UK.

When?

On Friday 7 September 2012, between 10 am and 5 pm. An optional dinner may prolong the day for you...

Where?

On the brand-new Jubilee Campus of the University of Nottingham, in the Business School South building. See below (soon) for a map and additional info.


Programme

This programme is preliminary, and small modifications/additional detail will follow.

Time          Room   Event
9:30-10:00    A24    Registration and welcome coffee
10:00-10:15   A25    Introduction
10:15-11:00   A25    Seminal talk: Roddy Cowie
11:00-12:00   A25    Seminal talk: Maja Pantic
12:00-13:00   A24    Lunch and poster session
13:00-14:00   A25    Seminal talk: Joanna Bryson
14:00-14:40   A25    Seminal talk: Daniela Romano
14:40-15:20   A25    Seminal talk: Paul Tennent
15:20-15:45   A24    Coffee break
15:45-16:30   TBD    UKRC panel session: Multi-disciplinary funding now and in the future

Poster session

Everybody is welcome to bring a poster presenting either recent research or their broader research interests. Posters will be hung in the foyer and the lunch/coffee area. When registering, please indicate if you want to bring a poster, including a (preliminary) title and abstract.

Registration

The event is free, but we would kindly ask you to register well in advance, so we can prepare our logistics. Please email michel.valstar@nottingham.ac.uk if you intend to attend. Please indicate in the email whether you:
  • Would like to bring a poster (in which case, please provide title and abstract)
  • Intend to join us for dinner in town afterwards
  • Are OK with us adding your name to the online list of attendees (see below)
  • Would like to make use of the on-site accommodation (indicate the nights required: Thursday and/or Friday nights are available)

Accommodation

There is on-site accommodation available, only 20 metres away from the workshop venue. Please indicate if you would like to make use of this when registering, or afterwards by emailing michel.valstar@nottingham.ac.uk. Bed & Breakfast rate: £38.90 plus VAT. This includes:
  • Bed in single study bedroom
  • En-suite bathroom
  • Tea/coffee making supplies in bedroom
  • Bathroom toiletries
  • Full English breakfast

Attendees

In order of registration time.

  1. Roddy Cowie, Queen's University Belfast
  2. Maja Pantic, Imperial College London
  3. Joanna Bryson, University of Bath
  4. Daniela Romano, University of Sheffield
  5. Paul Tennent, University of Nottingham
  6. Michel Valstar, University of Nottingham
  7. Alessandro Vinciarelli, University of Glasgow
  8. Richard Gunn, EPSRC
  9. Rachel Tyrrell, ESRC
  10. Christian Wagner, University of Nottingham
  11. Stuart Reeves, University of Nottingham
  12. Hatice Gunes, Queen Mary, University of London
  13. Genovefa Kefalidou, University of Nottingham
  14. Michael Hurst, Loughborough University
  15. Etienne Roesch, University of Reading
  16. Holger Schnadelbach, University of Nottingham
  17. Tony Belpaeme, University of Plymouth
  18. Paul Brunet, Queen's University Belfast
  19. Bihan Jiang, Imperial College London
  20. Kirsty Smith, University of Nottingham
  21. Niklas Hambüchen, Imperial College London
  22. Shakya Ganguly, Oxford University
  23. Hongying Meng, Brunel University
  24. Steve Benford, University of Nottingham
  25. Gary McKeown, Queen's University Belfast
  26. Joseph Connor, Meet Insight
  27. Ginevra Castellano, University of Birmingham
  28. Brendan Walker, University of Nottingham
  29. Timothy Cootes, University of Manchester
  30. Sandy Louchart, Heriot-Watt University
  31. Hiroshi Shimodaira, University of Edinburgh
  32. Atef Ben Youssef, University of Edinburgh
  33. Paul Harter, University of Nottingham
  34. John Crowe, University of Nottingham
  35. Christopher Peters, Coventry University
  36. Raphael Lamas, University of Nottingham
  37. Nils Jaeger, University of Nottingham
  38. Marwa Mahmoud, University of Cambridge
  39. Ntombi Banda, University of Cambridge
  40. Vaiva Imbrasaitė, University of Cambridge
  41. Chao Chen, University of Nottingham
  42. Min Zhang, Horizon Nottingham
  43. Andruid Kerne, Texas A&M University
  44. Patrick Moratori, University of Nottingham

Roddy Cowie, Queen's University Belfast

Psychology and human-like interfaces: a marriage of true minds?

The advent of human-like interfaces means that human beings interact with devices that are modelled on human beings. That raises an enormous range of issues for psychology. Its readiness to engage with them will crucially influence its future.

It is easy to think of the interaction as an extension of HCI, but that is only partly true. The key point is that existing theory not only informs understanding of the user, as in traditional HCI, but also indicates directly what the device might be like. That has wide-ranging implications.

Obviously, there need to be channels that allow psychological theory to inform model building. That has happened in some areas, for instance with widespread awareness of sophisticated emotion representations (appraisal or dimensional) and relevant behavioural features (FACS or mirroring). In other areas, a great deal remains to be absorbed, e.g. interactive phenomena (e.g. alignment and complementarity) and complex states that shape interaction (e.g. stances such as politeness or antagonism).
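As a loose illustration of what absorbing such representations can mean in model building, the sketch below shows a dimensional (valence-arousal) state paired with FACS action-unit evidence. The class and field names are invented for illustration; this is not a representation from the talk or from any particular system.

```python
from dataclasses import dataclass, field

@dataclass
class EmotionalState:
    """A hypothetical dimensional (valence-arousal) affective state.

    Values are conventionally scaled to [-1, 1]: valence runs from
    unpleasant to pleasant, arousal from calm to excited.
    """
    valence: float
    arousal: float
    # FACS Action Units observed as evidence, as {AU number: intensity},
    # e.g. {12: 0.8} for a strong lip-corner pull (AU12, typical of smiling).
    action_units: dict = field(default_factory=dict)

# A mildly positive, fairly calm state supported by a smile (AU6 + AU12).
state = EmotionalState(valence=0.4, arousal=0.2,
                       action_units={6: 0.5, 12: 0.8})
print(state)
```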

At another level, psychology's ability to specify devices that interact acceptably opens the way for a new kind of test for existing theories, which has been called 'go and do thou likewise'. Trying to model humans makes us ask whether psychology's statements and formulae actually tell us how people achieve what they do in everyday activities, or whether essential issues have been glossed over. The answer is often uncomfortable.

High on the list of issues that the test highlights is the contrast between deliberately stylised laboratory scenarios and everyday situations which are graded, ambivalent, and dynamic. It is a real problem if theories turn out only to fit data because the data have been systematically selected and simplified. For instance, does research on the recognition of stylised stimuli clarify situations where indicators are distributed across time and modality, interspersed with others, and could signify different things?

All that presupposes ways of evaluating human-like systems. Evaluation will certainly involve speed and accuracy, along standard HCI lines; but there are also very different issues involving the quality of the interaction. It is a major challenge to conceptualise the relevant kinds of quality, let alone to measure them.

With the challenges come new resources. Obviously, the new devices make it possible to carry out experiments that are vastly more precisely controlled than experiments involving two human interactants can be. Less obviously, it becomes realistic to see the development and annotation of large multimodal databases as a significant way of accumulating and communicating understanding of complex human behaviours, and, of course, of testing theories.

All in all, engaging with human-like interfaces has the potential to transform psychology rather than simply adding an application area. It is another matter whether the potential will be realised.

Roddy Cowie went to Stirling University in 1968, spent a junior year abroad at UCLA, and graduated with joint honours in Philosophy and Psychology. His DPhil, at Sussex, studied the relationship between human vision and machine vision. He was appointed to Queen’s University, Belfast in 1975, and became professor in 2003. His research focus is perception that creates impressions that are subtle and hard to describe exactly. The approach is to look for formal techniques that can capture what these elusive impressions are like, and the way they relate to the physical events and patterns of energy that underlie them. The work has spanned several areas, including picture perception, the experience of hearing loss, and the perception of music. The recent focus has been the perception of emotion – the emotional colouring that pervades everyday life, rather than stereotypical outbursts.

The work on emotion has been done through projects funded by the European Commission, in collaboration with a range of partners from across Europe. PHYSTA (1998-2001), ORESTEIA (2000-2003) and ERMIS (2001-2003) laid the groundwork for a psychologically informed approach to affective computing. Key outputs were:
  • R. Cowie, E. Douglas-Cowie, N. Tsapatsoulis, G. Votsis, S. Kollias, W. Fellenz & J. Taylor (2001). Emotion Recognition in Human-Computer Interaction. IEEE Signal Processing Magazine, January 2001, 32-80.
  • E. Douglas-Cowie, R. Cowie & N. Campbell (2003), editors. Special double issue of Speech Communication on 'Speech and Emotion', vols 40 (1-2), pp 1-257.

HUMAINE (2004-2007), which he co-ordinated, established a broad network of teams with compatible approaches. Key outputs were:
  • Taylor, J., Scherer, K. & Cowie, R. (2005). Special issue of Neural Networks on Emotion and Brain, vol 18(4), April 2005.
  • Cowie, R. (2009). Perception of emotion: towards a realistic understanding of the task. Phil. Trans. R. Soc. B 364, 3515-3525.
  • Petta, P., Pelachaud, C. & Cowie, R. (eds) (2011). Emotion-Oriented Systems: The HUMAINE Handbook. Springer: Heidelberg.

SEMAINE (2008-2010) translated ideas from the earlier projects into a system capable of holding emotionally coloured interactions with a user. Key outputs were:
  • McKeown, G., Valstar, M., Cowie, R., Pantic, M. & Schröder, M. (2012). The SEMAINE Database: Annotated Multimodal Records of Emotionally Coloured Conversations between a Person and a Limited Agent. IEEE Transactions on Affective Computing, 3(1). doi:10.1109/T-AFFC.2011.20
  • Schröder, M., Bevacqua, E., Cowie, R. et al. (2012). Building Autonomous Sensitive Artificial Listeners. IEEE Transactions on Affective Computing. doi:10.1109/T-AFFC.2011.34

Current projects are SSPNet (2009-2014), aiming to develop social signal processing (including emotional signals as part of a wider pattern); SIEMPRE (2010-2013), studying the signals and patterns of communication that make live musical performance a unique experience; and ILHAIRE (2011-2014), studying laughter.

Maja Pantic, Imperial College London

Machine Analysis of Facial Behaviour

Facial behaviour is our preeminent means of communicating affective and social signals. There is now evidence that patterns of facial behaviour can also be used to identify people. This talk discusses a number of components of human facial behaviour, how they can be automatically sensed and analysed by computer, the past research in the field conducted by the iBUG group at Imperial College London, and how far we are from enabling computers to understand human facial behaviour.
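As a minimal sketch of the first "sensing" step only (this is not the iBUG group's pipeline, which is far more sophisticated), the snippet below uses OpenCV's bundled Haar-cascade detector to locate faces in an image; the input filename is an assumption for illustration.

```python
import cv2

# Load OpenCV's stock frontal-face Haar cascade (ships with opencv-python).
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

frame = cv2.imread("frame.jpg")  # hypothetical input image
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Returns (x, y, w, h) bounding boxes for detected faces.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("frame_faces.jpg", frame)
```

Detecting a face is of course only the start: analysing facial behaviour then requires tracking facial points over time and classifying expressions, which is where the real research lies.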

Maja Pantic is Professor of Affective & Behavioural Computing and the leader of the Intelligent Behaviour Understanding Group (iBUG) in the Department of Computing at Imperial College London. She has received various awards for her research on automatic vision-based analysis of facial behaviour, including a European Research Council Starting Grant (ERC StG) in 2007, awarded to the top 2% of junior scientists in any research field in Europe, and the BCS Roger Needham Award 2011, awarded annually to a UK-based researcher for a distinguished research contribution in computer science within ten years of their PhD. She is the Editor-in-Chief of the Image and Vision Computing Journal (IVCJ/IMAVIS), Associate Editor of the IEEE Transactions on Systems, Man, and Cybernetics - Part B: Cybernetics (IEEE TSMC-B), Associate Editor of the IEEE Transactions on Pattern Analysis and Machine Intelligence (IEEE TPAMI), and a member of the Steering Committee of the IEEE Transactions on Affective Computing (IEEE TAC). Prof. Pantic is a Fellow of the IEEE. For more information, see: http://ibug.doc.ic.ac.uk/~maja/

Joanna Bryson, University of Bath

Roles for Emotions in Artefacts and Other Agents

Following the Cognitive Sciences, Artificial Intelligence has (largely) moved on from dismissing emotions as "irrational" aspects of natural intelligence that can be improved on, to treating them as trendy, flashy and desirable. But what are emotions really for? I will begin this talk by reviewing common applications in human-robot interaction, where emotions have proved useful for making interfaces to AI more intuitive for human users. I will also examine emotions as the core component of natural action selection. I will present evidence that synthetic drives and emotions may form a useful component of authored intelligence, and be a simplifying design feature that should be part of the standard synthetic agent builder's toolkit. Finally, I will look at the ethical considerations of building apparent and/or functional synthetic emotions into artefacts, and discuss the impact of this on the public perception and utility of AI.

Joanna Bryson is an academic specialising in two areas: the advancement of systems artificial intelligence (AI), and the use of AI simulations to further the understanding of natural intelligence, including human culture. She holds degrees in behavioural science, psychology and artificial intelligence from Chicago (BA), Edinburgh (MSc and MPhil), and MIT (PhD). She joined the University of Bath in 2002, where she was made a Reader in 2010. Between 2007 and 2009 she held the Hans Przibram Fellowship for EvoDevo at the Konrad Lorenz Institute for Evolution and Cognition Research in Altenberg, Austria. In 2010 she was a visiting research fellow in the University of Oxford's Department of Anthropology, and since 2011 she has been a visiting research fellow at the Mannheimer Zentrum für Europäische Sozialforschung. At Bath she leads the Intelligent Systems research group, one of four in the Department of Computer Science. She also heads Artificial Models of Natural Intelligence, where she and her colleagues publish in biology, anthropology, cognitive science and systems AI.

Paul Tennent, University of Nottingham

Lies, Damned Lies and Biodata – Lessons from the field

Biosensors have long been an active part of psychology-driven HCI (and indeed other areas). However, a recent zeitgeist of affective, or at least emotional, computing has brought biosensing back into the focus of the modern HCI community. Sensors are becoming smaller, cheaper and more available – indeed they are fast becoming consumer objects, moving out of the lab and into the “real world.” Even EEG, long the province of the men and women in white coats, requiring gels and head shaving and intimate knowledge of the brain, has had a recent revolution with the introduction of several consumer-grade EEG systems like the Emotiv. Now you can pop one on your head in a matter of minutes and… and… well, there’s the question. We are surrounded by a plethora of such sensors, but what can we do with them? Has our perception of their uses changed to match their availability? Traditionally biosensors have been rather isolated: we measured arousal with GSR or ECG, but probably not both. But now, sensor fusion is the new black. We want to combine the outputs of loads of sensors and start pinning labels on the results. In practice, however, sensors don’t all talk the same language, and even those that do may not be saying what you think they are. And while you, as a scientist, may have a handle on what they are saying, what does their output mean to a lay person, and how can this be made meaningful? This talk will outline some history of biosensor usage in the Mixed Reality Lab, going back to around 2006, and look at how our understanding of that data, and in particular “thrill”, has changed. It will discuss the tools, tricks and abstractions we have applied to the data to make it useful, meaningful, playful and understandable. It will also involve gas masks, bucking broncos, food deprivation and Buffy the Vampire Slayer.
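As a rough sketch of the "sensors don't all talk the same language" problem (purely illustrative, not the Mixed Reality Lab's method: the sample rates, synthetic signals and equal fusion weights below are all assumptions), the code resamples two heterogeneous streams onto a common clock, z-scores them, and averages them into a crude fused arousal index.

```python
import numpy as np

def zscore(x):
    """Normalise a signal to zero mean, unit variance."""
    return (x - x.mean()) / x.std()

# Two hypothetical streams at different rates over 60 seconds:
# GSR sampled at 4 Hz, heart rate at 1 Hz (synthetic data for illustration).
t_gsr = np.arange(0, 60, 0.25)
gsr = np.random.default_rng(0).normal(5.0, 0.5, t_gsr.size)   # microsiemens
t_hr = np.arange(0, 60, 1.0)
hr = np.random.default_rng(1).normal(70.0, 5.0, t_hr.size)    # beats/min

# Step 1: resample both onto a shared 4 Hz clock via linear interpolation.
t = np.arange(0, 60, 0.25)
gsr_r = np.interp(t, t_gsr, gsr)
hr_r = np.interp(t, t_hr, hr)

# Step 2: a crude fused "arousal index" as an equal-weight mean of
# z-scored signals, so neither sensor's units dominate.
arousal = 0.5 * zscore(gsr_r) + 0.5 * zscore(hr_r)
print(arousal[:5])
```

Even this toy example shows where the trouble starts: the choice of clock, interpolation method and weights all shape the "label" you end up pinning on the data.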

Paul Tennent set up camp in the Mixed Reality Lab in 2006 after completing a PhD at the University of Glasgow and has been unwilling to move on ever since. Initially focussing on tools to support mixed-methods social science, he worked on the development of Replayer and the Digital Replay System, both designed to make quantitative data accessible and accountable within a qualitative framework. Several years ago he developed an interest in the study of thrill. As a key member of the Thrill Laboratory he has been involved in the staging of a plethora of thrilling events, all with a view to measuring, defining and ultimately objectifying the traditionally subjective idea of thrill, focussing primarily on the use of biosensors to capture, measure, represent and even adapt people’s thrilling experiences. For the last nine months he has been a principal developer on Horizon’s Vicarious project, constructing flexible tools to gather, process, display and make use of data from heterogeneous sensors and sources. He has been involved in several media projects over the last few years, all aimed at promoting a public understanding of the science of thrill, and was last seen in public performing a virtual skydive in the name of science.