Many people worry that automation will take over too many human jobs and leave human beings unemployed. During the pandemic, when automation jumped ahead much more than expected, this fear became even more common.
But a new group of observers is less worried that machines will become more like humans, able to do more human things, and more worried that humans will become more like machines.
Kevin Roose, author of Futureproof: 9 Rules for Humans in the Age of Automation, claims that people should not try to be as productive or as technical as a machine. Instead, humans should hone their ability to deal with surprises.
A computer can play chess, he says, because chess is all about predictable actions and inflexible rules. Put a computer in charge of a class of kindergartners, though, and you will have trouble. Kindergartners are very unpredictable and different from one another. The rules for dealing with them are changeable and hard to quantify.
Robots will be at a loss with small children for the foreseeable future.
Here’s a real-world example: sorting produce. Machines are not good at sorting produce. Fruits and vegetables have higgledy-piggledy shapes and can present themselves from all different angles on a conveyor belt. The solution? Certainly, people are working on improving robots, but they’ve also been trying to grow more regularly shaped tomatoes.
Should we try to reduce the surprises in our jobs? What would that do to our lives? What will it do to our tomatoes?
Roose worries about machines making moral decisions, and others outside the field are worrying, too. Adam Garfinkel wrote, “In science fiction, the typical worry is that machines will become human-like; the more pressing problem now is that, through the thinning out of our interactions, humans are becoming machine-like…More troubling are the moral issues that could potentially arise: mainly ceding to machines programmed by others the right to make moral choices that ought to be ours.”
Roose points out that this is already happening. Algorithms determine who should receive benefits and who should get parole. These decisions affect the lives of actual human beings without making human beings accountable for those decisions.
Ask nearly any human being what they would do if they walked by a river, nicely dressed, and saw a child drowning. All of them will say that they would save the child, even though it would ruin their clothes.
Step away from that directness, though, and things change. Will you invest in a new product that could limit child drownings, if doing so would mean you had to reduce your clothing budget? Maybe not. There are hundreds of examples like this suggesting that we kind of like leaving moral decisions up to someone else. If we don’t actually see anybody drowning, we don’t have to suffer over it.
Jobs we don’t want machines to do
Research shows that people do not want machines to make moral decisions. However, we don’t always recognize moral decisions that are handled by algorithms.
Roose also points out that there are jobs that we don’t want robots to do for more visceral reasons. We don’t want to talk to a robot when we make a 911 call. A recent medical use case caused a scandal when a robot told a patient they would not survive. Some things just should be done for people by people.
In manufacturing, which is our specialty, we know that automation can protect people from physical danger, increase production, reduce waste, and free human beings from repetitive work.
We’re for that use of automation. If you need service or support for your Rexroth industrial motion control, we can help. Contact us right away for immediate assistance.