Regulating artificial intelligence has been a hot topic in Washington in recent months, with lawmakers holding hearings and news conferences and the White House announcing voluntary AI safety commitments by seven technology companies on Friday.
But a closer look at the activity raises questions about how meaningful the action is in setting policies around the rapidly evolving technology.
The answer is that it is not very meaningful right now. Lawmakers and policy experts said the United States is only at the beginning of what will likely be a long and difficult road toward creating AI rules. While there have been hearings, meetings with top tech executives at the White House and speeches introducing AI bills, it is too early to predict even the roughest sketch of rules to protect consumers and contain the risks the technology could pose to jobs, the spread of disinformation and security.
“It’s still early days, and no one knows what the legislation will look like,” said Chris Lewis, president of the consumer group Public Knowledge, which has called for the creation of an independent agency to regulate AI and other tech companies.
The United States lags far behind Europe, where lawmakers are preparing to enact an AI law later this year that would impose new restrictions on what are seen as the technology's riskiest uses. In contrast, there remains much disagreement in the United States over the best way to handle a technology that many American lawmakers are still trying to understand.
That suits many tech companies, policy experts said. While some of the companies have said they welcome rules around AI, they have also argued against tougher regulations like those being created in Europe.
Here’s a rundown on the state of AI regulations in the United States.
In the White House
The Biden administration has been on a fast-track listening tour with AI companies, academics and civil society groups. The effort began in May when Vice President Kamala Harris met at the White House with the chief executives of Microsoft, Google, OpenAI and Anthropic and implored the tech industry to take safety more seriously.
On Friday, representatives from seven tech companies appeared at the White House to announce a set of principles to make their AI technologies safer, including third-party security checks and watermarking of AI-generated content to help prevent the spread of misinformation.
Many of the practices that were announced were already in place, or on the way to being implemented, at OpenAI, Google and Microsoft. They are not enforceable by law. And the promises of self-regulation fell short of what consumer groups had hoped for.
“Voluntary commitments are not enough when it comes to Big Tech,” said Catriona Fitzgerald, deputy director of the Electronic Privacy Information Center, a privacy group. “Congress and federal regulators must put in place meaningful, enforceable guardrails to ensure that the use of AI is fair, transparent, and protects the privacy and civil rights of individuals.”
Last year, the White House presented a blueprint for an AI Bill of Rights, a set of guidelines on consumer protections related to the technology. The guidelines are also not regulations and are not enforceable. This week, White House officials said they were working on an executive order on AI but did not disclose details or timing.
In Congress

The loudest drumbeat on regulating AI has come from lawmakers, some of whom have introduced bills on the technology. Their proposals include the creation of an agency to oversee AI, liability for AI technologies that spread misinformation, and licensing requirements for new AI tools.
Lawmakers have also held hearings about AI, including a hearing in May with Sam Altman, the chief executive of OpenAI, which makes the ChatGPT chatbot. Some lawmakers floated ideas for other rules during the hearing, including nutrition-style labels to inform consumers of AI risks.
The bills are in their earliest stages and have not yet gained the support needed to advance. Last month, the Senate leader, Chuck Schumer, Democrat of New York, announced a monthslong process to create AI legislation that includes educational sessions for members in the fall.
“In many ways we are starting from zero, but I believe Congress is up to the challenge,” he said during a speech at the Center for Strategic and International Studies.
In Federal Agencies
Regulatory agencies are starting to crack down on some of the issues arising from AI.
Last week, the Federal Trade Commission opened an investigation into OpenAI’s ChatGPT, seeking information about how the company secures its systems and how the chatbot could harm consumers through the creation of false information. The FTC’s chair, Lina Khan, has said she believes the agency has ample power under consumer protection and competition laws to police problematic behavior by AI companies.
“Waiting for congressional action is not ideal given the normal timelines for congressional action,” said Andres Sawicki, a law professor at the University of Miami.