Focus on research: Dr Ilaria Torre, Adapt

31 July 2018

Dr Ilaria Torre is an Edge fellow and postdoctoral researcher at the SFI-backed Adapt centre for research into digital content. With a background in psychology, she now uses machine simulation to examine how we interact with virtual humans.

Your work looks at how we interact with machines through speech and facial recognition ‘channels’. Why did you choose to work with both channels instead of one or the other?
I am actually a speech scientist by background, so I generally know more about one of these two channels than the other. However, they are naturally interlinked: to produce speech you have to move ‘articulators’ (such as the tongue and lips), and moving them produces facial movements as well as speech. Since I am working with artificial characters – such as virtual humans or robots – that have both a face and a voice, it is important to consider how both of these channels work in the interaction.

Once you ‘align’ voice and face channels how quickly do we start to create trust and how quickly can that be broken when they misalign?
Trust is an important phenomenon in society, at all sorts of different levels (eg we trust people at the individual level, but we also trust institutions), so it’s important both to form a first impression of trustworthiness and to maintain it over time. In the case of artificial characters, I think it’s important both to design them to appear trustworthy at first glance and to have them actually behave truthfully in the long run.

However, machines can malfunction and this can break the first impression of trustworthiness. So perhaps, if we know that a machine is not going to work perfectly all the time, we might want to design it so that it doesn’t seem infallible, and people who will interact with it can form first impressions which are congruent with the machine’s actual abilities – which might be discovered only later in the interaction.

When it comes to building trust with a virtual human, do people combine the best bits of dealing with a real person (speech and face) with the best bits of dealing with a robot, or are we more inclined to attach human qualities as soon as something becomes convincing?
We humans are funny in that we tend to anthropomorphise anything with even a vaguely human feature. For example, doesn’t a cheese grater sometimes look like it’s smiling? When it comes to interacting with a machine, research by Clifford Nass and colleagues has shown that people unconsciously apply to human-machine interaction the same social rules that they apply in human-human interactions – such as politeness, reciprocity, and even gender stereotypes.

When it comes to trust, I think it might boil down to whether the different channels (eg face, voice, body) are congruent with an impression of trustworthiness, but also with the machine’s function. For example, a big manufacturing robot might need a voice that reassures people about its safety, while a child’s tutor robot might need a different voice to appear trustworthy, and so on.

Are people as sensitive to negative experiences with avatars as they are to positive ones? Do people dislike being given out to? How about being told we are looking well?
I would say that people are more sensitive to negative experiences in human-machine interactions. When everything in the interaction is working well, nothing stands out; as soon as something doesn’t work, people are more likely to remember it.

In my previous research, I found that not much was happening when the artificial agents I was using were behaving well, but it became interesting when they behaved badly. It might be similar to the way people are more likely to leave a negative review of a restaurant than a positive one.

In the case of being given out to, I think that people would actually be upset, or pleased, as if it were a human telling them. There is a lot of research showing that people interact with artificial agents as though they were social agents. In psychology there is the ‘white coat effect’, where test subjects act in a way they know to be wrong because they are guided by an expert.

Have you noticed a similar effect with avatars where people will defer to a virtual person because of some assumed impartiality?
In my experiment at the Science Gallery in Dublin I found that people trusted one type of avatar more than the others. So while people do trust avatars in certain conditions, the avatars are not assumed to be more knowledgeable just because they are machines. In that experiment, the avatars gave reasons for a certain behaviour, but people still didn’t trust them if they thought that the reason wasn’t good enough, or if something about the avatar was off-putting.

In what medium would you like to see your work explored next?
I would really like to carry out the same experiment with robots. Robots have bodies, so it would be really interesting to see how this added dimension contributes to trust.
