Frank Pasquale, author of New Laws of Robotics: Defending Human Expertise in the Age of AI (excerpted in The Boston Globe), has proposed four new laws of robotics.
These are new laws because the original Three Laws of Robotics were devised by Isaac Asimov:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Here are Pasquale’s proposed new laws.
Robotic systems and AI should complement professionals, not replace them.
The big fear that robots will take over our jobs shouldn’t be a problem, because we should not plan for robots to replace human workers. “Robotic meatcutters relieve workers of dangerous work,” says Pasquale; “robotic day care gives us pause.”
This is a good example, because it features something human beings are particularly good at (taking care of baby human beings) and wouldn’t want to give up. But if we agree that robots should help rather than replace people, and build that into their programming as a matter of course, we would not have to worry about being replaced.
Of course, we live in a world in which Instagram Influencer is an actual paying job. We can probably be confident that there will be no end of new jobs for people to do, even if we can’t imagine them right now.
Robotic systems and AI should not counterfeit humanity.
Pasquale frames this law as a question: “Do we want to live in a world where human beings do not know whether they are dealing with a fellow human or a machine?”
Right now, people make money on the premise that it’s okay to trick people into thinking that a real, caring human being is chatting with them about their car’s warranty. Those folks would probably answer yes; they spend all their energy trying to trick people in just the way Pasquale describes.
But the rest of us don’t want that future, and laws are already being made and used to shut it down.
Robotic systems and AI should not intensify zero-sum arms races.
Pasquale is talking about literal wars here, and it makes sense that human beings should not let robots push them into war. But he’s also talking about competitions like classroom tests or loan applications.
Agreeing ahead of time that decisions like those can’t be handed over to AI might be wise. Otherwise, we could find ourselves sliding toward the literal-war examples.
Robotic systems and AI must always indicate the identity of their creators, controllers, and owners.
Human beings, in other words, should continue to be responsible for the machinery they build and program.
We know that building and programming have far-reaching consequences. We know that humans are capable of inserting their biases into robots; in fact, they seem incapable of not doing so.
It makes sense that humans should take and keep the responsibility.