A Russian robotics company claims to have created a robot that can tell when people are lying. It can also tell, according to the manufacturer’s press release, whether people are using drugs or would be likely to give up secrets.
The photo here isn’t actually of the robot in question. The robot, known as Trubot, is designed for interviewing job prospects or interrogating people.
The makers describe Trubot as an android standing a few inches short of six feet tall. It has a clearly masculine appearance, with moving arms, head, eyes, and jaw. The robot sits down with a human being and asks questions in a conversational manner.
The makers say that, “thanks to special software,” Trubot “determines the veracity of the answers and the psychological profile of the respondent. Trubot can, for example, find out whether the interviewee is addicted to drugs, whether he is inclined to steal or [leak] information. Also separately the robot will reveal such features as leadership, softness, inclination to work in a team and so on.”
Trubot then prints out a “complete psychological profile.”
Can a robot read people accurately enough to create a complete psychological profile?
There has been some previous research on the question. Last year, researchers found that a software program for a “virtual interrogator” could make some judgments about human veracity, but only if the interviewees believed a human being was operating the system. In that case, the program tracked the subjects’ electrodermal activity. No system has been developed that can automatically detect deception through conversation alone.
However, a robot has been trained to screen job applicants and shortlist them for human interviews.
The Russian company behind Trubot doesn’t offer any proof of the robot’s abilities. But we have to wonder whether a robot could do the job even if the software were up to the task.
Will people agree to be interrogated or indeed interviewed by a robot? Would they behave normally while being interviewed by a robot? Would they be willing to work at a company that outsources hiring decisions to robots?
For that matter, should people be willing to work for a company that lets machines do that? Would you trust a robot to make hiring decisions for you?
Perhaps Trubot should stick to interrogations. If people believe that they are being questioned by a robot that can tell whether they’re lying, they might be frightened enough to trip themselves up.