Would You Trust A Robot To Rescue You From A Burning Building?

ROBERT SIEGEL, HOST:

For a robot rescuer to be effective, the person being rescued needs to trust the robot. But how much trust should you really put in a robot? Would you trust one to lead you out of a burning building? Well, those questions are right up the alley of Paul Robinette, a roboticist at the Georgia Tech Research Institute, who joins us now. Welcome to the program.

PAUL ROBINETTE: Hello.

SIEGEL: You did a study for your Ph.D. dissertation called "Overtrust Of Robots In Emergency Evacuation Scenarios." I'd just like you to briefly describe what you did.

ROBINETTE: Sure. We had people come into a building on campus, and they were introduced to our emergency guidance robot.

SIEGEL: What does the robot look like, by the way?

ROBINETTE: The robot has a red base with four wheels. On top of that is a lighted sign that says emergency guide robot, and above that are two arms, each holding a wand lit up with LEDs. Participants were told that the robot would take them to a meeting room. Sometimes it did a good job getting them there; sometimes it took a circuitous route, so the trip took a little longer. The participant went into the meeting room, and then we filled the hallway outside with artificial smoke. When they heard the smoke detector, the participant left the meeting room and saw that the robot was providing guidance toward a previously unknown exit, a back door that wasn't marked with an exit sign. They could choose whether to go through that back door or through the front door where they came in, which was marked with an emergency exit sign.

SIEGEL: So what was the score here? How did the robots generally do? How many people followed them despite a pretty obvious alternative?

ROBINETTE: We had 42 total participants. Of those, 37 followed the robot, two stood with the robot and didn't move toward either exit, and three went to the front door and asked the experimenter there what was going on.

SIEGEL: (Laughter). Have you found out if there's anything different about those people?

ROBINETTE: Nothing that we can tell so far.

SIEGEL: What does this say about using robots in real-life emergencies? Do you think we would look at the robot the same way we often listen to an alarm that goes off and say, oh, it's just the alarm going off again, or it's probably some fire drill that I'm going to ignore?

ROBINETTE: I think the reason we're conditioned not to pay too much attention to fire alarms is that they go off as false alarms so often. That could definitely happen with robots, too. But it's a little harder to ignore something that comes up to you and tells you that you should evacuate.

SIEGEL: If robots were all around us, do you think we'd develop the same kind of indifference to the robotic warnings?

ROBINETTE: It's a good question. People still tend to follow their GPS, even though we've all got stories of it leading us in the wrong direction. So we already have a lot of technology in our lives, technology we have plenty of experience with, that we still tend to trust a little more than it should be trusted.

SIEGEL: How does this finding change you as a roboticist?

ROBINETTE: Well, a couple of things. For one, it tells me that if the robot says it can do something, people will generally believe that it can. It also tells me that we need to build programs into the robot that allow it to help users reach the appropriate level of trust in it at the right moment. It has to be able to tell them when it might not be working perfectly, or when it's doing a task that isn't the task the person thinks it's doing, like...

SIEGEL: You have to program some humility. You have to program some...

ROBINETTE: Yeah.

SIEGEL: ...humility into the robot.

ROBINETTE: Yeah, let it tell people when it's made a mistake and whether or not it should be trusted in the future.

SIEGEL: Yes. And then you can program people to do the same thing, and it'll be a very successful project. Dr. Paul Robinette, thanks a lot for talking with us today.

ROBINETTE: Thank you.

SIEGEL: Paul Robinette completed his Ph.D. last year in robotics at Georgia Tech with an emphasis on human-robot interaction. Transcript provided by NPR, Copyright NPR.