When robots die: The existential challenges of human-robot interaction

Adriana Tapus has studied human-robot interaction (HRI) long enough to expect the unexpected.
"When you work with people, you have many, many surprises. You cannot know exactly what the human will do next and many things come up that you didn't expect," says Tapus, an associate professor at ENSTA-ParisTech, who builds assistive robotics architectures and investigates HRI in therapeutic environments.
Tapus tells of a patient recovering from a stroke who tried to cheat a therapeutic bot during a musical game designed to improve the rehabilitation process. There was no personal gain involved, says Tapus: beyond the satisfaction of outwitting the bot, that is.
"These are things that we couldn't imagine when we designed the system [...] Humans are unpredictable," says Tapus.
For example, young children interacting with the humanoid NAO bot -- a central figure in Tapus' current research -- often instinctively kiss the bot's head, says Tapus. And, if NAO's eyes turn red, the children ask researchers why the bot is upset.
But one elderly woman's relationship with Bandit (a humanoid bot developed by the University of Southern California's Interaction Lab) provides the most tantalising glimpse into the kind of small, human drama that could be played out hundreds of thousands of times in the future, if robots become more commonplace in our daily lives.
As part of a long-term study of the ways people and bots interact in therapeutic settings, Tapus had Bandit play a musical game with people suffering from dementia. The "Song Discovery" game (loosely based on the television game show "Name That Tune") was specially designed to help dementia patients maintain their attention levels.
The elderly patient played with Bandit twice a week over a period of about eight months. Each session lasted about an hour.
"She looked forward to the interaction. She really liked to hear the music and play the game," explains Tapus.
The fieldwork was a success. Many participants demonstrated improved cognitive performance over the course of the eight months and the research demonstrated some of the benefits of using bots in therapeutic settings. Their fieldwork complete, the team took Bandit back to the lab to assess the data collected and start work on their scientific papers.
But the story didn't end there for the elderly patient.
Faced with the reality that Bandit wasn't coming back, she became very distressed.
"She was depressed," says Tapus. "The robot had shown nice, encouraging behavior towards her."
But Bandit was gone.
As things turned out, over a period of about three weeks, the elderly patient's distress faded and everything went back to normal.
But is it possible that in the future, when humans and bots live side by side, day after day, sharing experiences and forming relationships of sorts with each other, the elderly patient's sense of loss will be not just a normal experience but a typical one?
How will we react when a loyal humanoid bot that has served us for years suddenly fails? Will we genuinely grieve for no-longer working bots that have accompanied us through so much of our lives?
And when bots die, will we treat them like cherished family pets and bury them in our gardens (or robot graveyards)? Will we place the deceased bot's body in a glass case in the living room? Or keep a head on the mantelpiece as a memento?
"No," says Tapus. "We get upset when our coffee machine is not working anymore but we don't bury it. Robots are machines that can help us and that's all."
Tapus argues that the old lady became distressed because an activity she enjoyed had been terminated, not because of any "attachment to the robot itself."
"Robots show intentional behavior, which we otherwise only associate with animals. Hence, we encounter a situation in which robots appear to be alive and only our rational thinking tells us that they are not," says Christoph Bartneck, an HRI researcher who has studied people's willingness to switch off and destroy robots. "But our rational thinking is the weaker part of our brain. I think that we will have a similar relationship with robots as we have today with pets."
In a 2007 experiment, Bartneck asked participants to turn off a desk-mounted bot as it pleaded with them to keep it turned on. In another experiment, he asked participants to kill a bot with a hammer for (apparently) under-performing in a brief set of competency tests.
Both experiments indicated that people are more reluctant to kill apparently intelligent bots than apparently incompetent ones. A participant in the second experiment at one point exclaimed: "This is inhumane! You are sick!" (But not until after she had successfully killed the bot.)
In the future, our categorical distinction between alive and dead will have to be redefined to accommodate our humanoid companions, says Bartneck.
"I do not think that robots will necessarily be the solution for many of our societal problems, such as the aging society, but they will certainly become household members. We are lazy and vain. Building human like robots fulfills a deeper need for humanity. We want slaves and we want to play god," says Bartneck.
A crucial difference between bots and pets (and, presumably, family members) remains, however.
"Robots can live forever. We can back up and transfer their brains into a new body. So we will not have to say goodbye to them...I do not know how long it will take for us to get used to the idea that the ghost in the machine can be transferred," says Bartneck.
In this sense, bots can be reincarnated time and again, each bot's "identity" being transferred, like so much digital soul, from one body to another when the need arises.
The "rituals of farewell" that might develop remain to be seen, says Nikolaos Mavridis, assistant professor of computer engineering at New York University Abu Dhabi, but chances are that, initially, they will be similar to those we have for pets or even humans.
"Notice, for example, how easily we attribute human-like beliefs, emotions or intentions to pets, and even to insects, when scientific evidence clearly shows that what we would call an 'intention', for example, is quite different between humans and insects," says Mavridis, who examines human-robot relationships via the FaceBots experiment and the Ibn Sina bot, the world's first Arabic-language dialogic android.
We approach different entities in different ways, however: we spray insects and slaughter sheep, for example. How we treat robots, then, will depend on the way we interact with them, on their appearance and behavioural repertoire, and on our pre-existing expectations.
Long-term, meaningful and sustainable human-robot relations, however, rest on constructing and maintaining a strong metaphorical "common locus" of shared memories, friendships, interests and tastes. And it might not matter if the friendship is all one-way, says Mavridis.
"What might really matter in a human-machine relationship is not necessarily the question of whether shared memories, friends, and language do exist bilaterally. It is sometimes enough that unilateral memories held by the human of moments with the machine exist, together with their 'imaginary' completion -- that is, by knowingly or unknowingly fooling ourselves that the machine remembers the moments we had together," explains Mavridis.
Initially, when it comes to dealing with the (so-called) death of a robot, we will re-use elements of the existing mental models we have for dealing with humans, pets, and machines, biased by our sci-fi and film-induced expectations of what robots should behave like.
"Then, slowly, as robots enter our everyday life more and more, and as they become more and more intelligent, there will arise a new special kind of mental model for them. However, this will still be based on elements from the existing kinds of mental models, mixed, adapted and extended with new elements. Thus, slowly it will become something new, with strong human elements, but neither a copy nor something new entirely," says Mavridis.
In the future, people will become attached to bots' service designs rather than to the bots themselves, says Jodi Forlizzi, associate professor at the Human-Computer Interaction Institute at Carnegie Mellon University.
"My vision is somewhere between attachment to a pet and getting rid of a robot like you would an old coffeemaker. I think the service design of the robot is what we will develop attachment with -- a virtual thing that can be transferred from body to body of a robot," says Forlizzi.
"You have to remember that individual differences do help shape the relationships we make with technology," adds Forlizzi, who is currently working on a study about the attachments people form to virtual objects, such as their Facebook data.
While a bot's appearance and behavior shape our expectations of it (humanoid bots are treated more like humans than non-humanoid ones), people also make social attributions towards bots that aren't human-like at all, says Forlizzi.
This tendency even extends to the vacuum-cleaning Roomba bot: a participant in one study affectionately nicknamed the bot "Manuel" after the Fawlty Towers character.
Forlizzi's team recently completed a four-month field study with Snackbot, a humanoid snack-delivery robot. Analysis of this data is not yet complete, but something interesting about human-robot relationships is already emerging: people seem to treat bots differently in private than they do in public.
We probably won't discard broken bots like old toasters, but we probably won't treat them like pets either, says Bilge Mutlu, director of the Human-Computer Interaction Laboratory at the University of Wisconsin-Madison.
"People get attached to products and things that are not in the biological sense alive and I don't see any reason why robots can't be a part of that," says Mutlu, citing some people's attachment to their Tamagotchis as an example.
"There is something special about robots that enables meaningful relationships. Whether that means that when the meaningful relationship ends, people are really upset about it, is something to be answered by empirical research […] We have to see how society builds these relationships and what kind of behaviours and responses emerge," says Mutlu, adding that while we don't replace parts on our pets, we may have to do so for our bots.
