This week, the White House announced it had secured “voluntary commitments” from seven leading AI companies to manage the risks posed by artificial intelligence.
Getting the companies — Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI — to agree on anything is a step forward. They include bitter rivals with subtle but important differences in the way they approach AI research and development.
For example, Meta is so eager to get its AI models into the hands of developers that it has open-sourced many of them, putting their code out in the open for anyone to use. Other labs, such as Anthropic, have taken a more cautious approach, releasing their technology in more limited ways.
But what exactly do these commitments mean? And are they likely to change much about the way AI companies operate, given that they are not backed by the force of law?
Given the potential stakes of AI regulation, details matter. So let’s take a closer look at what is being agreed upon here and assess the potential impact.
Commitment 1: Companies commit to internal and external security testing of their AI systems before release.
Each of these companies already conducts security testing – often referred to as “red-teaming” – of its models before releasing them. So on one level, this isn’t really a new commitment. And it’s a vague promise: it doesn’t go into much detail about what kind of testing is required, or who will do it.
In the statement accompanying the commitments, the White House said only that AI models would be tested “in part by independent experts” and that the testing would focus on AI risks “such as biosecurity and cyber security, as well as its wider societal implications”.
It’s a good idea to get AI companies to publicly commit to continuing this type of testing and encourage more transparency in the testing process. And there are certain types of AI risks – such as the danger that AI models could be used to develop bioweapons – that government and military officials are probably better suited than companies to evaluate.
I’d love to see the AI industry agree on a standard battery of safety tests, such as the “autonomous replication” test that the Alignment Research Center conducts on pre-release models from OpenAI and Anthropic. I would also like to see the federal government fund these kinds of tests, which can be expensive and require engineers with significant technical expertise. Right now, many safety tests are funded and overseen by the companies themselves, which raises obvious conflict-of-interest questions.
Commitment 2: Companies commit to sharing information on managing AI risks across industry and with governments, civil society and academia.
This commitment is also a bit vague. Many of these companies already publish information about their AI models – usually in academic papers or corporate blog posts. Some of them, including OpenAI and Anthropic, also publish documents called “system cards” that outline the steps they take to make those models secure.
But these companies have also withheld information at times, citing safety concerns. When OpenAI released its latest AI model, GPT-4, this year, it broke with industry custom and chose not to disclose how much data it was trained on, or how large the model was (a figure known as its “parameter count”). It said it declined to release this information because of competition and safety concerns – but it’s also exactly the kind of data that tech companies prefer to keep away from competitors.
Under these new commitments, will AI companies be forced to make that kind of information public? What if doing so risks accelerating an AI arms race?
I suspect the White House’s goal is less about forcing companies to disclose their parameter counts and more about encouraging them to trade information with one another about the risks that their models do (or don’t) pose.
But even sharing that kind of information can be risky. If Google’s AI team prevents a new model from being used to engineer a deadly bioweapon during pre-release testing, should it share that information outside Google? Would doing so risk giving bad actors ideas about how to get a less protected model to do the same thing?
Commitment 3: Companies commit to investing in cyber security and insider threat safeguards to protect proprietary and unreleased model weights.
This one is pretty straightforward, and uncontroversial among the AI insiders I’ve talked to. “Model weights” is the technical term for the mathematical instructions that give AI models their ability to function. Weights are what you’d want to steal if you were an agent of a foreign government (or a rival corporation) who wanted to build your own version of ChatGPT or another AI product. And they are something AI companies have a vested interest in keeping tightly controlled.
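To make that a little less abstract, here is a toy sketch – plain NumPy, hypothetical and unrelated to any company’s actual systems – of why a weights file is worth protecting: the learned arrays it contains are, for practical purposes, the model itself.

```python
import numpy as np

# A toy two-layer network. The "weights" are nothing more than these arrays of
# learned numbers (random stand-ins here for values produced by training).
rng = np.random.default_rng(0)
weights = {
    "layer1": rng.standard_normal((784, 256)),
    "layer2": rng.standard_normal((256, 10)),
}

def predict(x, w):
    """Run the toy model: the arrays in `w` fully determine its behavior."""
    hidden = np.maximum(x @ w["layer1"], 0)  # ReLU activation
    return hidden @ w["layer2"]

# Saving the weights produces a single file that captures everything the model
# "knows" -- which is why a leaked or stolen weights file is, in effect, a
# stolen model.
np.savez("model_weights.npz", **weights)
```

Frontier models have billions of such parameters rather than a few hundred thousand, but the principle is the same: whoever holds the weights file holds the model.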
The issue of leaked model weights has already been widely publicised. The weights for Meta’s original LLaMA language model, for example, were leaked on 4chan and other websites just days after the model was released. Given the risks of more leaks – and the interest other countries may have in stealing this technology from US companies – asking AI companies to invest more in their own security seems like a sensible request.
Commitment 4: Companies commit to facilitating third-party discovery and reporting of vulnerabilities in their AI systems.
I’m not entirely sure what this means. Every AI company has discovered vulnerabilities in its models after releasing them, usually because users fiddle with the models or circumvent their guardrails (a practice known as “jailbreaking”) in ways the companies had not envisioned.
The White House’s commitment calls on companies to establish “robust reporting mechanisms” for these vulnerabilities, but it’s unclear what that might mean. An in-app feedback button, similar to the ones that let Facebook and Twitter users report rule-violating posts? A bug bounty program, like the one OpenAI started this year to reward users who find flaws in its systems? Something else? We’ll have to wait for more details.
Commitment 5: Companies commit to developing robust technical mechanisms to ensure users know when content is AI generated, such as watermarking systems.
It’s an interesting idea, but it leaves a lot of room for interpretation. So far, AI companies have struggled to build tools that let people tell whether or not they are looking at AI-generated content. There are good technical reasons for this, but it’s a real problem when people can pass off AI-generated work as their own. (Ask any high school teacher.) And many of the tools currently promoted as being able to detect AI output can’t really do so with any degree of accuracy.
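For a sense of how a watermark might be detected in principle, here is a minimal sketch of one approach from the research literature – a statistical “green list” test – offered purely as an illustration, not as any company’s actual method. The idea: the text generator quietly favors a pseudo-random subset of tokens, and a detector later checks whether that subset shows up more often than chance would explain.

```python
import hashlib
import math

def _is_green(prev_token: int, token: int, vocab_size: int, gamma: float) -> bool:
    """Deterministically assign `token` to a pseudo-random "green list" seeded by `prev_token`."""
    digest = hashlib.sha256(f"{prev_token}:{token}".encode()).hexdigest()
    return (int(digest, 16) % vocab_size) < gamma * vocab_size

def watermark_z_score(token_ids, vocab_size=50257, gamma=0.5):
    """Return a z-score for how far the text's green-token fraction exceeds
    the ~gamma fraction expected in unwatermarked text."""
    pairs = list(zip(token_ids, token_ids[1:]))
    if not pairs:
        return 0.0
    green = sum(_is_green(p, t, vocab_size, gamma) for p, t in pairs)
    n = len(pairs)
    expected = gamma * n
    std = math.sqrt(n * gamma * (1 - gamma))
    return (green - expected) / std

# Usage: a z-score well above ~4 suggests the tokens came from a generator that
# boosted "green" tokens; ordinary human-written text should hover near zero.
print(watermark_z_score([101, 2023, 2003, 1037, 3231, 102]))
```

Schemes like this can work on long passages of text, but they are easy to weaken by paraphrasing, and they don’t help with content from models that never embedded a watermark in the first place – part of why detection remains an open problem.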
I’m not optimistic that this problem can be completely fixed. But I am glad that companies are promising to work on it.
Commitment 6: Companies commit to publicly reporting the capabilities, limitations, and areas of appropriate and inappropriate use of their AI systems.
Another sensible-sounding pledge with plenty of wiggle room. How often will companies be required to report on the capabilities and limitations of their systems? How detailed will that information have to be? And given that many of the companies building AI systems have been surprised by the capabilities of their own systems after the fact, how well can they really be expected to describe them in advance?
Commitment 7: Companies commit to prioritizing research on the social risks posed by AI systems, including avoiding harmful bias and discrimination and protecting privacy.
A commitment to “prioritizing research” is about as fuzzy as a commitment gets. Still, I’m sure it will be welcomed by many in the AI ethics crowd, who want AI companies to make preventing near-term harms like bias and discrimination a priority over worrying about doomsday scenarios, as the AI safety folks do.
If you’re confused about the difference between “AI ethics” and “AI safety,” just know that there are two warring factions within the AI research community, each of which thinks the other is focused on preventing the wrong kinds of harm.
Commitment 8: Companies commit to developing and deploying advanced AI systems to help address society’s biggest challenges.
I don’t think many people would argue that advanced AI should not be used to help tackle society’s biggest challenges. The White House lists “cancer prevention” and “mitigating climate change” as two of the areas where it would like AI companies to focus their efforts, and it will get no disagreement from me there.
What complicates this goal somewhat, however, is that in AI research, what starts out looking trivial often turns out to have more serious implications. Some of the technology that went into DeepMind’s AlphaGo – an AI system trained to play the board game Go – later proved useful in predicting the three-dimensional structures of proteins, a major discovery that spurred basic scientific research.
Overall, the White House’s deal with AI companies seems more symbolic than substantive. There is no enforcement mechanism to ensure that companies follow through on these commitments, and many of them reflect precautions AI companies are already taking.
Still, it’s a reasonable first step. And agreeing to abide by these rules shows that AI companies have learned from the failures of earlier tech companies, which waited to engage with the government until they got into trouble. In Washington, at least where tech regulation is concerned, it pays to show up early.