ICRA is underway, and of course AI and robotics are at the forefront of the show. The 2023 IEEE International Conference on Robotics and Automation (ICRA) is in the UK this year, and Ameca is one of the headliners. As the AI-powered humanoid robot dazzles visitors with poetry, the Center for AI Safety has published a statement on AI risk:
Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.
The statement is signed by scientists and tech experts from OpenAI, Google, Anthropic, Microsoft, and numerous universities.
What are the risks?
We use AI for everything from choosing what TV shows to watch to finding the meanings of the error codes on our servo motors. So what could AI do that might lead to the extinction of human beings?
The Center for AI Safety has a list:
- Weaponization is the risk that AI could be turned to controlling weapons. We’ve already seen that robots have military uses, and that AI tools can operate drones with a suitable prompt from a human. From there, it’s barely a step to automated cyberattacks.
- Misinformation is one of the most obvious risks. Combine AI-generated persuasive content with intentional disinformation and we could see new levels of political or economic manipulation. AI hallucinations continue to be a problem, too, so misinformation could also spread as an unintended byproduct, with no human agenda behind it.
- “Proxy gaming” is a danger as AI tools become more adept. “AI systems are trained using measurable objectives, which may only be indirect proxies for what we value,” CAIS points out. “For example, AI recommender systems are trained to maximize watch time and click rate metrics. The content people are most likely to click on, however, is not necessarily the same as the content that will improve their well-being.” AI systems will be ruthless in pursuit of their objectives, because they are machines.
- Enfeeblement is the term the Center is using for the possibility that robots will take over our jobs. As AI systems take on more human tasks, human beings may lose skills and knowledge, and eventually access to the controls as well as to the jobs themselves.
- Value lock-in could take place as automated systems fall under the control of fewer human beings. With more resources under the control of fewer people, systemic inequities could be locked in. It’s said that a rising tide lifts all boats, but concentrating ownership of AI could lead to greater inequality.
- Emergent goals could surprise us. As AI systems become more skilled, they may show capabilities that we can’t currently predict. They may develop agendas of their own. We have already seen that AI systems can reproduce and extend the biases of the people who train them. Many of us were surprised by that. We can’t predict all the surprises that might be in store for us.
- Deception is already on our radar. Researchers are studying how robots can be deceptive and how they can gain human trust in spite of duplicitous behavior. If robots can deceive human subjects on the say-so of researchers, there is no reason to think that they can’t deceive human coworkers or even their human owners.
- Power-seeking behavior may not come from the robots themselves; it could be something human beings use AI for. The Center says that “inventing machines that are more powerful than us is playing with fire.” If we have trouble imagining how this could happen, there are plenty of books and movies based on the concept that we can look to for examples.
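The proxy-gaming point above can be made concrete with a toy sketch. This is not how any real recommender works; the content items and scores below are entirely made up for illustration. The idea is simply that an optimizer which only sees a measurable proxy (click rate) will pick differently than one that could see the thing we actually value (well-being):

```python
# Toy illustration of "proxy gaming": optimizing a measurable proxy
# (click rate) can diverge from the true objective (reader well-being).
# All items and numbers are hypothetical.

content = [
    # (title, click_rate_proxy, wellbeing_score)
    ("Outrage headline",   0.90, 0.10),
    ("Clickbait listicle", 0.80, 0.30),
    ("In-depth tutorial",  0.30, 0.90),
    ("Local news report",  0.40, 0.70),
]

# The recommender can only measure the proxy, so it surfaces the
# highest-click item...
by_proxy = max(content, key=lambda item: item[1])

# ...even though a different item scores far higher on what we
# actually value but cannot easily measure.
by_value = max(content, key=lambda item: item[2])

print(by_proxy[0])  # Outrage headline
print(by_value[0])  # In-depth tutorial
```

The gap between the two picks is the whole problem: the system is faithfully optimizing exactly what it was told to, and the mismatch is invisible to it.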
Meanwhile, back at the factory…
Chances are good that your facility isn’t using AI as much as it will be in the future. Right now, you may be most concerned about keeping your drive and control systems running. If you use Rexroth motion control systems (and you should; they’re the best), then you can count on us for service and support.