The White House announced on Friday that seven major AI companies in the United States have agreed to voluntary safeguards on the development of the technology, pledging to manage the risks of new tools as they compete over the potential of artificial intelligence.
Seven companies — Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI — will formally announce their commitment to new standards in the areas of safety, security and trust at a meeting with President Biden at the White House on Friday afternoon.
The announcement comes as companies race to outdo one another with versions of AI that offer powerful new ways to create text, photos, music and video without human input. But the technological leaps have stoked fears about the spread of disinformation and dire warnings of an “extinction threat” if self-aware computers are developed.
The voluntary safeguards are only an initial, tentative step as governments in Washington and around the world work to create a legal and regulatory framework for the development of artificial intelligence. They reflect an urgency by the Biden administration and lawmakers to respond to the rapidly evolving technology, even as Washington has struggled to regulate social media and other technologies.
The White House offered no details of a forthcoming presidential executive order that will tackle a bigger problem: how to control the ability of China and other competitors to obtain new artificial intelligence programs, or the components used to develop them.
That will involve new restrictions on advanced semiconductors and on the export of large language models. Both are hard to control: much of the software can fit, compressed, on a thumb drive.
An executive order could provoke more opposition from the industry than Friday’s voluntary commitments, which experts said were already reflected in the practices of the companies involved. The promises will not stifle the plans of AI companies or hinder the development of their technologies. And as voluntary commitments, they will not be enforced by government regulators.
“We are pleased to be making these voluntary commitments with others in this area,” Nick Clegg, president of global affairs at Facebook parent company Meta, said in a statement. “They are an important first step in ensuring responsible guardrails are in place for AI and they create a model for other governments to follow.”
As part of the safeguards, the companies agreed to:
- Security testing of their AI products, in part by independent experts, and sharing information about their products with governments and others who are trying to manage the risks of the technology.
- Ensuring that consumers are able to identify AI-generated content, by implementing watermarks or other means of flagging generated material.
- Publicly reporting the capabilities and limitations of their systems on a regular basis, including evidence of security risks and bias.
- Deploying advanced artificial intelligence tools to address society’s biggest challenges, such as curing cancer and combating climate change.
- Researching the risks of bias, discrimination and invasion of privacy from the proliferation of AI tools.
In a statement announcing the agreements, the Biden administration said the companies must ensure “innovation does not come at the expense of the rights and safety of Americans.”
“The companies that are developing these emerging technologies have a responsibility to ensure that their products are safe,” the administration said in a statement.
Brad Smith, the president of Microsoft and one of the executives attending the White House meeting, said his company supported the voluntary safeguards.
“By moving quickly, the White House’s commitments lay a foundation to help ensure that the promise of AI continues to outweigh its risks,” Mr. Smith said.
Anna Makanju, vice president of global affairs at OpenAI, described the announcement as “part of our ongoing collaboration with governments, civil society organizations and others around the world to advance AI governance.”
For the companies, the standards described on Friday serve two purposes: as an attempt to prevent, or shape, legislative and regulatory moves with self-policing, and as a signal that they are thoughtfully and proactively dealing with this new technology.
But the rules they agreed to are largely the lowest common denominator and can be interpreted differently by each company. For example, the companies committed to strict cybersecurity around the data and code used to create the “language models” on which generative AI programs are developed. But there are no specifics about what that means, and the companies would have an interest in protecting their intellectual property anyway.
And even the most careful companies are vulnerable. Microsoft, one of the companies attending the White House event with Mr. Biden, scrambled last week to counter a Chinese government-organized hack of the private emails of US officials who deal with China. It now appears that China stole, or somehow obtained, a “private key” held by Microsoft that is the key to authenticating email — one of the company’s most closely guarded pieces of code.
As a result, the agreement is unlikely to slow efforts to pass legislation and enforce regulation on the emerging technology.
Paul Barrett, deputy director of the Stern Center for Business and Human Rights at New York University, said more needs to be done to protect society from the dangers posed by artificial intelligence.
“The voluntary commitments announced today are not enforceable, which is why it is critical that Congress, together with the White House, immediately enact legislation requiring transparency, privacy protections, and advancing research on the wide range of risks posed by generative AI,” Mr. Barrett said in a statement.
European regulators are set to adopt AI laws later this year, which has prompted many of the companies to encourage US regulations. Several lawmakers have introduced bills that would require licenses for AI companies to release their technologies, create a federal agency to oversee the industry, and impose data privacy requirements. But members of Congress are far from agreement on the rules and are rushing to educate themselves on the technology.
Lawmakers are grappling with how to address the rise of AI technology, with some focused on the risks to consumers, while others worry about falling behind rivals, especially China, in the race for dominance in the sector.
This week, the House Select Committee on Strategic Competition with China sent bipartisan letters to US-based venture capital firms demanding that they account for investments made in Chinese AI and semiconductor companies. Those letters come on top of months in which various House and Senate panels have been questioning the AI industry’s most influential entrepreneurs and critics to determine what kind of legislative guardrails and incentives Congress should explore.
Several of those witnesses, including Sam Altman of the San Francisco start-up OpenAI, have urged lawmakers to regulate the AI industry, pointing to the potential for undue harm from the new technology. But that regulation has been slow to take shape in Congress, where many lawmakers are still struggling to understand what AI technology really is.
In an effort to improve lawmakers’ understanding, Senator Chuck Schumer, Democrat of New York and the majority leader, launched a series of listening sessions this summer for lawmakers to hear from government officials and experts in a range of fields about the merits and dangers of artificial intelligence.
Mr. Schumer has also drafted an amendment to this year’s Senate version of the defense authorization bill to encourage Pentagon employees to report potential issues with AI tools through a “bug bounty” program, create a Pentagon report on how to improve AI data sharing, and improve reporting on AI in the financial services industry.
Karoun Demirjian contributed reporting from Washington.