The U.S. Chamber of Commerce is usually against regulation, but it has stepped away from its usual position and called for regulation of artificial intelligence.
What is AI?
Pew Research recently surveyed Americans to find out whether they could recognize AI when they saw it. The quiz asked a cross-section of adults six questions about which everyday examples involve AI, from taking a temperature with an under-the-tongue thermometer to filtering spam emails. Just 30% of respondents answered all six correctly.
Most of us deal with AI every day, from our morning Spotify recommendations to the gizmo at work that flags which valves are most likely to malfunction. But few of us realize that these things are AI.
What most people think of when they hear the term “AI” is generative AI like ChatGPT, DALL-E, and our old friend Replika.
AI has the potential to revolutionize the way we live, work, and play, but it also carries a number of risks and ethical considerations. We have already seen generative AI tools used to create deepfake pornography. We’ve seen AI hiring tools discriminate against applicants. Several intellectual property infringement cases are already moving through the courts. Regulation might be a good idea.
Who would regulate?
In the United States, the regulation of AI is primarily handled at the federal level. The lead agency is the Office of Science and Technology Policy (OSTP), part of the Executive Office of the President. The OSTP develops and implements AI policies and strategies, and it works closely with other government agencies, such as the Department of Defense, to ensure that AI is used responsibly and ethically.
The OSTP has developed a number of initiatives to regulate AI. For example, it has issued a series of principles to guide the development and use of AI, covering transparency, privacy, and safety. It has also developed a framework for reviewing AI applications that weighs the potential risks and benefits of each use.
In addition to the OSTP, several other government agencies have a role in regulating AI. The Federal Trade Commission (FTC) enforces laws and regulations related to the use of AI; it has issued guidance on how companies should use AI responsibly and has taken enforcement actions against companies that violated the law. The Department of Homeland Security (DHS) is responsible for the security and safety of AI-related technologies, and the National Institute of Standards and Technology (NIST) develops standards and guidelines for using AI.
State and local governments have also begun to regulate AI. California, for example, has enacted a series of laws and regulations covering the development and use of AI, including data privacy, safety, and liability. The city of San Francisco has passed an ordinance requiring companies to obtain a permit before deploying AI-based services, and the state of New York has enacted a law requiring companies to disclose how they are using AI and the potential risks associated with it.
Overall, the regulation of AI in the United States is still in its infancy, and much work remains to ensure that AI is used responsibly and ethically. But federal, state, and local governments have taken important first steps, and those steps lay the groundwork for responsible and ethical use of AI in the future.