Collaborations with machines

This position was presented at the CSCW 2016 panel on “Innovations in autonomous systems: Challenges and opportunities for human-agent collaboration”. For an overview of the positions presented, see the abstract in the ACM DL.

The time could not be more pertinent for the CSCW community to engage with the many challenges posed by increasing machine autonomy: fears about AI once deemed science fiction are becoming reality. Many of the jobs currently done by humans are likely to be automated in the near future. Led by distinguished scientists and entrepreneurs such as Stephen Hawking, Elon Musk and Steve Wozniak, more than 20,000 people have signed an open letter against autonomous weapons.

The CSCW community should do its bit to address some of these challenges, from ethics to appropriate systems and interaction design. The One Hundred Year Study on AI (AI100) identifies ‘collaborations with machines’ as one of the key challenges [1]. In this position I argue that CSCW is well placed to take an active role here by drawing on decades of expertise in systems design and collaborative work. Particularly relevant are key themes such as accountability, awareness, situated action, and division of labour in socio-technical systems. I will draw on examples from our own work in energy and disaster response to illustrate how humans and agents can work together in a variety of settings. My hope is that this will stimulate discussion and uptake of relevant topics in the CSCW community.

I want to begin with the idea of Human-Agent Collectives (HACs). Members of the multi-institution research project ORCHID have worked on HACs for the past five or so years. According to a recent article in the Communications of the ACM (Jennings et al., 2014), “HACs are a new class of socio-technical systems in which humans and smart software (agents) engage in flexible relationships in order to achieve both their individual and collective goals. Sometimes the humans take the lead, sometimes the computer does and this relationship can vary dynamically”. The rhetoric here makes HACs sound quite abstract, so I want to provide some concrete examples. The researchers of the ORCHID project (including myself) have spent the past several years building, deploying, and studying instances of HACs in various real-world domains, including Energy in the Home, Disaster Response and Citizen Science.

In Energy, this has included the design and study of an agent-based system that charges a home battery when the dynamic electricity price is low, and notifies users when the price goes up (Costanza et al. 2014). As with much research on energy use in everyday life, we found there is a fine line between how much people want to interact with such rationalising systems, even if they help save money, and how much they simply want to get on with their lives. There are interesting design challenges in balancing complete automation at one extreme against full ‘manual’ control at the other. What we also found in our energy-related research is that energy use doesn’t happen in a vacuum; it happens within a network of contingent, everyday practices and concerns. Issues such as who has access to the technology, which data is generated, where it is kept, and who ultimately owns it are key concerns (Rodden et al. 2013). In particular, trust in the providers of services will be an essential factor in the adoption of ‘smart’ consumer systems.
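To make this balance between automation and manual control concrete, here is a minimal sketch, in Python, of the kind of agent logic described above. It is not the system from Costanza et al. 2014; the class name, thresholds and notification message are assumptions for illustration only. The agent charges when the dynamic price is at or below a threshold, and keeps the user in the loop by flagging price rises rather than silently adapting.

```python
# Illustrative sketch only: a toy charging agent, not the ORCHID system.
from dataclasses import dataclass

@dataclass
class ChargingAgent:
    low_price_threshold: float   # price (p/kWh) at or below which we charge (assumed)
    notify_threshold: float      # price above which the user is alerted (assumed)

    def step(self, current_price: float, battery_level: float) -> dict:
        """Decide one control action for the current price tick."""
        action = {"charge": False, "notify": None}
        if current_price <= self.low_price_threshold and battery_level < 1.0:
            action["charge"] = True
        if current_price > self.notify_threshold:
            # Keep the human informed rather than silently changing behaviour.
            action["notify"] = f"Electricity price is now {current_price:.1f}p/kWh"
        return action

agent = ChargingAgent(low_price_threshold=10.0, notify_threshold=20.0)
print(agent.step(current_price=8.0, battery_level=0.4))   # charge, no alert
print(agent.step(current_price=25.0, battery_level=0.4))  # no charge, alert
```

The design point is that the notification is part of the control loop: the agent acts autonomously only within bounds the user can see and adjust.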

In Disaster Response, we have taken a different approach: we have both conducted ethnographic fieldwork with disaster response organisations (Fischer et al. 2015) and designed a mixed-reality game to study agent-assisted teamwork in field trials. Here, we have looked at classic CSCW topics such as coordination, awareness, and division of labour (e.g., Jiang et al. 2014). A key finding was that while agent support works well for the predictable and repetitive elements of the work setting, in our case assigning people to tasks, the human ability to deal with the unpredictable contingencies that inevitably arise is crucial for success. This has important implications for systems design. Rather than attempting to provide the agent with a model of the entire world, perhaps we should accept that humans are better at certain things than machines. This changes the starting point from one of automation to one of collaboration between humans and machines. We can then ask what interfaces have to look like to keep humans in the loop and enable them to deal with contingencies and support situated action: for example, handling human error, avoiding interruption, and implementing alternative courses of action.
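As a minimal sketch of collaboration rather than automation, the following Python snippet has an agent propose a task assignment that a human coordinator can then override to handle contingencies outside the agent’s model. The greedy assignment rule, the function names and the example data are assumptions for illustration, not the planning support studied in Jiang et al. 2014.

```python
# Illustrative sketch only: agent proposes, human disposes.
def propose_assignment(responders, tasks, travel_time):
    """Agent proposal: greedily assign each task to the nearest free responder."""
    proposal, free = {}, set(responders)
    for task in tasks:
        if not free:
            break
        nearest = min(free, key=lambda r: travel_time[(r, task)])
        proposal[task] = nearest
        free.discard(nearest)
    return proposal

def apply_overrides(proposal, overrides):
    """Human step: the coordinator overrides assignments the agent could not foresee."""
    return {task: overrides.get(task, responder)
            for task, responder in proposal.items()}

travel_time = {("alice", "fire"): 5, ("alice", "flood"): 9,
               ("bob", "fire"): 7, ("bob", "flood"): 3}
proposal = propose_assignment(["alice", "bob"], ["fire", "flood"], travel_time)
# The coordinator knows Bob's route to the flood is blocked, so reassigns it.
final = apply_overrides(proposal, overrides={"flood": "alice"})
print(proposal)  # {'fire': 'alice', 'flood': 'bob'}
print(final)     # {'fire': 'alice', 'flood': 'alice'}
```

The point of the sketch is the interface seam: the agent’s proposal is explicit and inspectable, so the human can amend it without having to explain the contingency to the machine.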

With regard to the key issues I have only had time to touch upon lightly, I would suggest there is no one-size-fits-all design solution; these issues need to be worked through for each and every application that is built. Rather than attempting to come up with ‘how-tos’, I would suggest building up a catalogue of essential questions we need to ask of our designs and in our research, to enable informed decisions and an ethical approach to the design of systems that feature some level of autonomy. As an important starting point, I would suggest we treat these kinds of systems as socio-technical rather than autonomous; even the Mars rover Curiosity has been designed to allow human interaction at key points. With this said, perhaps we could use this panel as an opportunity to begin a dialogue about the questions we should ask to enable us as a community to contribute to this timely topic.

References

Costanza, E., Fischer, J.E., Colley, J.A., Rodden, T., Ramchurn, S.D. and Jennings, N.R. (2014). Doing the Laundry with Agents: a Field Trial of a Future Smart Energy System in the Home. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’14). ACM Press.

Fischer, J.E., Reeves, S., Rodden, T., Reece, S., Ramchurn, S.D. and Jones, D. (2015). Building a Birds Eye View: Collaborative Work in Disaster Response. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’15). ACM Press.

Jennings, N.R., Moreau, L., Nicholson, D., Ramchurn, S.D., Roberts, S., Rodden, T. and Rogers, A. (2014). Human-Agent Collectives. Communications of the ACM, 57(12), 80-88.

Jiang, W., Fischer, J.E., Greenhalgh, C., Ramchurn, S.D., Wu, F., Jennings, N.R. and Rodden, T. (2014). Social Implications of Agent-based Planning Support for Human Teams. In: Proc. of the 2014 Int. Conference on Collaboration Technologies and Systems (CTS ’14). IEEE.

Rodden, T., Fischer, J.E., Pantidi, N., Bachour, K. and Moran, S. (2013). At Home with Agents: Exploring Attitudes Towards Future Smart Energy Infrastructures. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’13). ACM Press.

 

[1] https://ai100.stanford.edu/reflections-and-framing
