
People trust humans and robots similarly but emotions differ

Leslie

On 9 March, researchers at Yale University published a study suggesting that humans may warm up to social robots more when the robots make mistakes, or admit to them.

Published in the Proceedings of the National Academy of Sciences, the study also reported a more positive team experience when the robots made or admitted to a mistake rather than being silent or making neutral statements (like reciting the game’s score).

New research published in the Journal of Economic Psychology reinforces the finding that people can extend similar levels of trust to humans and robots.

However, people’s emotional reactions in trust-based interactions vary depending on the type of partner, reveals the study, which was led by Chapman University‘s Eric Schniter and Timothy Shields, along with the University of Montreal’s Daniel Sznycer.

Trust game

The researchers used an anonymous trust game experiment in which a human (the trustor) decided how much of a $10 endowment to give to a trustee: a human, a robot, or a robot whose payoffs went to another human.

The robots were programmed to mimic previously observed reciprocation by human trustees.
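As a concrete illustration of this setup, here is a minimal simulation sketch of such a trust game. It is not the study’s actual procedure: the tripling multiplier, the pool of observed return rates, and all names (play_round, robot_trustee, payoff_to_third_party) are illustrative assumptions rather than details from the paper.

```python
import random

# Minimal sketch of an anonymous trust game (illustrative assumptions only).
# ASSUMED: the invested amount is tripled before reaching the trustee,
# a common trust-game convention that the article does not specify.
ENDOWMENT = 10
MULTIPLIER = 3

# ASSUMED: a hypothetical pool of return fractions previously observed
# from human trustees, which the robot trustee samples from to mimic
# observed human reciprocation.
OBSERVED_HUMAN_RETURN_RATES = [0.0, 0.25, 0.4, 0.5, 0.6]


def human_trustee(received: float) -> float:
    """Human trustee decides how much to send back (random here, for illustration)."""
    return random.choice(OBSERVED_HUMAN_RETURN_RATES) * received


def robot_trustee(received: float) -> float:
    """Robot trustee mimics previously observed human reciprocation."""
    return random.choice(OBSERVED_HUMAN_RETURN_RATES) * received


def play_round(invested: float, trustee, payoff_to_third_party: bool = False):
    """One round: the trustor invests part of the endowment, the trustee returns some of it."""
    kept = ENDOWMENT - invested
    received = invested * MULTIPLIER
    returned = trustee(received)
    trustor_payoff = kept + returned
    trustee_payoff = received - returned
    # In one condition of the study, the robot's payoff went to another human.
    beneficiary = "third-party human" if payoff_to_third_party else "trustee"
    return trustor_payoff, trustee_payoff, beneficiary


if __name__ == "__main__":
    print(play_round(invested=5, trustee=robot_trustee, payoff_to_third_party=True))
```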

The experimental design allowed the researchers to examine two important aspects of trust in explainable robots: first, how much humans trust robots compared with fellow humans; and second, how humans react emotionally after interactions with robots versus other humans.

Results

The experiment shows that people extend similar levels of trust to humans and robots. Because the robots reproduced reciprocation previously observed in people, participants’ trust interactions with the robots also taught them about the cooperativeness of fellow humans.


Social emotions are more than feelings; they regulate social behavior. More specifically, social emotions such as guilt, gratitude, anger, and pride affect how we treat others and influence how others treat us in trust-based interactions.

Participants in this experiment experienced social emotions differently depending on whether their partner was a robot or a human. A trustee’s failure to reciprocate the trustor’s investment triggered more anger when the trustee was a human than when it was a robot.

Similarly, reciprocation triggered more gratitude when the trustee was a human than when the trustee was a robot.

Further, participants’ emotions finely discriminated among robot types. They reported feeling more intense pride and guilt when the robot trustee’s payoff went to a human than when the robot acted alone.

Implications

  • Driving, for instance, will present interaction opportunities where it will matter whether decisions are made by humans or robots, and whether those decisions serve humans or not.
  • Analogous interactions occur with automated or robotic check-in agents, bank tellers, surgeons, etc.
  • Partnerships with consistent reciprocators may consolidate into stronger, more productive partnerships when the reciprocators are fellow humans, because humans elicit more gratitude than robots do.
  • Conversely, partnerships with inconsistent reciprocators may be more stable when the reciprocators are robots, because robots elicit less anger than humans do.
  • Further, humans experience pride and guilt more intensely in interactions where robots serve a beneficiary. This suggests people will be more likely to re-extend trust to similar partners.