A Scottish robotics researcher is working on robot-human interaction and trust. Her team is working on the use of robots in harsh conditions, such as extreme cold, unforgiving climates, and underwater locations. This project, a news story claims, “has unveiled a new method of communication that allows machines and humans to speak the same language and understand each other’s actions in real time.”
The project is further described as a way of building trust between the humans and the robots, a way of teaching robots to recognize when they have lost their human coworker’s trust, and a method of teaching people to trust robots.
MIRIAM (Multimodal Intelligent inteRactIon for Autonomous systeMs), the system being developed, is intended to cause humans to trust robots and AI systems’ decisions, even if the humans involved can’t understand the reasoning behind those decisions. Research conducted on this system focuses at least in part on how to program robots to make statements that human beings find convincing.
Do you trust your robots?
Helen Hastie has done research on the question of trust and robots. While the research is interesting and has turned up insights that could help program robots to appear more trustworthy, or even help people estimate the likelihood of robot error more accurately, those findings are a far cry from “speaking the same language,” let alone “understanding each other’s actions.”
Programming a robot to be more convincing is closer to writing a magazine ad to be more convincing than it is to improving the robots’ language skills or developing trusting relationships. In fact, it may have most in common with developing a telemarketer’s script to respond to common objections.
It is at its foundation a question of human beings manipulating other human beings. The outcome may be positive or it may not. One example in the research involves medical triage. If you can get the humans to accept the judgement of the robot in spite of occasional errors, you can get through the triage process faster than if the humans insist on speaking with a human doctor before accepting treatment.
On the other hand, you may also get humans to accept erroneous diagnoses from a machine that can’t recognize when it has made an error, since it is programmed only to recognize when a human is not accepting its diagnoses.
Whom should you trust?
It may not be the researchers who have oversold the robots in this case, but the story is part of a pattern of making it sound as though robots are capable of much more than they can actually do.
Headlines about robots often make it sound as though research is further along than it really is, or as though robots are acting with greater autonomy than they really are. At the very least, reporting should recognize that machines are far from using human language as humans do.