The smart factory of the future will use sensors to gather information and then take actions based on that information. The machines will recognize the character of a workpiece and adjust their actions to meet the task. They’ll get information about the operator’s native language and provide appropriate communication. They’ll get alerts about needed maintenance and convey that information to the right operators.
But none of these actions will be autonomous. The machinery won’t whimsically decide to make a new product or choose a packaging option based on its personal tastes. The machines will just be doing what they’re told.
Why not have autonomous machinery?
San Francisco now has semi-autonomous delivery robots carrying food along its sidewalks, and city Supervisor Norman Yee wants to ban them before they start causing trouble. The city government first tried to devise ways to regulate the robots, but couldn’t arrive at any reasonable rules. The proposed solution is a regulation that keeps them off the sidewalks entirely.
And that encapsulates the problem. Those robots are only semi-autonomous — they have a human chaperone walking alongside them, like early automobiles. Even so, it’s hard to trust them, because we don’t know what they might do. City administrators are imagining crushed toes and toppled senior citizens, but we can also imagine kids deciding to take a ride on one of these robots, or hackers confusing them and rerouting them down alternate pathways. We can imagine a lot of things.
People tend to react emotionally to robots. The more autonomous they seem, the more we treat them like people, or at least dogs. People get irritated by the semi-autonomous food delivery vehicles. They also get fascinated by them. Some news stories have described these fairly utilitarian boxes with wheels as “adorable.” So, even if the programming of food and package delivery vehicles continues to improve significantly, there’s still lots of room for highly unpredictable human behavior.
In an industrial setting, there’s less room for error and far more potential for danger.
Smart, or autonomous?
Smart doesn’t always go along with autonomous. You can have stupid autonomous machines and smart machines that are not autonomous. Add in connectivity among machines, and you’re looking at a complex situation.
Machine learning, operators’ growing ability to modify industrial machines, and connections among those machines all mean that the original machine maker can’t be fully responsible for safety. Even machines that aren’t autonomous are open to change.
In an industrial setting, banning robots isn’t the solution. It’s time to standardize safety protocols and think differently about how to keep machines and people safe in a changing environment. That’s up to us humans, since we’re both smart and autonomous.