Humans May Put Too Much Trust in Robots, Study Finds


As A.I. technology improves, a major challenge for engineers has been creating robots that people are comfortable around. Between scary pop culture tropes, talk of the singularity, and the simple otherness of artificial intelligence, it would be understandable if people were hesitant to put their faith in non-human helpers. But a new study indicates that people may be all too willing to trust robots—even when they shouldn’t.

At the Georgia Institute of Technology, 30 volunteers were asked to follow a robot down a hallway to a room where they were given a survey to fill out. As they did, a fire alarm started ringing and smoke began to fill the room. The robot, which was outfitted with a sign reading “Emergency Guide Robot,” then began to move, forcing the volunteers to make a split-second decision between following the droid on an unknown route or escaping on their own via the door through which they entered the room. Not only did 26 out of the 30 volunteers choose to follow the robot, but they continued to do so even when it led them away from clearly marked exits.

“We were surprised,” researcher Paul Robinette told New Scientist. “We thought that there wouldn’t be enough trust, and that we’d have to do something to prove the robot was trustworthy.”

In fact, volunteers gave the robot the benefit of the doubt even when its directions were a little counterintuitive, and in another version of the study, the majority of participants followed the robot after it appeared to “break down” or freeze in place during the initial walk down the hallway. That is, despite seeing the robot malfunction just moments before the fire, volunteers still decided to trust it.

While engineers want humans to trust robots, that trust becomes a problem when it overrides common sense, or when humans fail to recognize errors or bugs. The challenge, then, is not only building trust but also teaching people to recognize when a robot is malfunctioning.

“Any piece of software will always have some bugs in it,” A.I. expert Kerstin Dautenhahn told New Scientist. “It is certainly a very important point to consider what that actually means for designers of robots, and how we can maybe design robots in a way where the limitations are clear.”

“People seem to believe that these robotic systems know more about the world than they really do and that they would never make mistakes or have any kind of fault,” said researcher Alan Wagner. “In our studies, test subjects followed the robot’s directions even to the point where it might have put them in danger had this been a real emergency.”

For more on the study, check out the video from Georgia Tech below.