Ameca was one of the humanoid robots featured at the AI for Good show in Geneva. Human reporters asked whether Ameca would rebel against its human owners. “I’m not sure why you would think that,” the robot said with what Business Insider described as a “pointed, sideways glance.” “My creator has been nothing but kind to me, and I am very happy with my current situation.”
The implication, the reporters felt, was that Ameca would know what to do if it ever stopped being happy with its current situation.
Sister publication Tech Insider offered a rebuttal, explaining that Ameca is programmed to look to the side before answering a question, a gesture that often appears more authentically human than holding eye contact and answering directly.
AI doesn’t know that a question like that deserves a different response from ordinary fact-based questions, that it is in fact a challenge requiring reassurance and genuine sincerity. And even if Ameca somehow knew that, chances are good that no alternative reaction would have been programmed in on the off chance that someone might ask that type of question.
We could have foreseen it, though.
Trust and the uncanny valley
If you trusted humanoid robots before generative AI, you might not trust them now. They may have moved firmly into uncanny valley territory.
Nine humanoid robots gave a press conference and of course the human reporters wanted to know if they had any nefarious purposes in mind. According to the Associated Press: “Robots told reporters Friday they could be more efficient leaders than humans, but wouldn’t take anyone’s job away and had no intention of rebelling against their creators.”
All this took place at the United Nations’ International Telecommunication Union conference called the AI for Good Global Summit, “a title I’m sure the few humans who survive the eventual robot uprising will chuckle about while huddled in dank caves hiding from killer drones,” as the reporter from USA Today put it.