Meta calls for industry effort to label AI-generated content
Speaking at the World Economic Forum in Davos, Switzerland last month, Nick Clegg, Meta's president of global affairs, described an early effort to detect artificially generated content as “the most urgent task” facing the tech industry today.
On Tuesday, Mr. Clegg proposed a solution. Meta said it would promote technical standards that companies across the industry could use to recognize markers in photo, video and audio content signaling that the material was generated with artificial intelligence.
The standards could allow social media companies to quickly identify AI-generated content posted to their platforms and add a label to it. If widely adopted, the standards could help flag AI-generated content from Google, OpenAI, Microsoft, Adobe, Midjourney and other companies that offer tools for quickly and easily creating artificial posts.
Mr. Clegg said in an interview, “Although it's not a perfect answer, we didn't want to let the perfect be the enemy of the good.”
He said he hoped the effort would be a rallying cry for companies across the industry to adopt standards for detecting and signaling that content is artificial, making it easier for all of them to recognize it.
As the United States enters a presidential election year, industry watchers believe AI tools will be widely used to post fake content to misinform voters. Over the past year, people have used AI to create and spread fake videos of President Biden making false or inflammatory statements. The attorney general's office in New Hampshire is also investigating a series of robocalls that used Mr. Biden's AI-generated voice to urge people not to vote in the recent primary.
Meta, which owns Facebook, Instagram, WhatsApp and Messenger, is in a unique position: it is developing technology to spur wider consumer adoption of AI tools, while also being the world's largest social network, capable of distributing AI-generated content at scale. Mr. Clegg said Meta's position gave it particular insight into both the generation and distribution sides of the issue.
Meta is focusing on a pair of technical specifications, the IPTC and C2PA standards. They are ways of embedding information in a piece of digital media's metadata that specifies whether the content is authentic. Metadata is the underlying information embedded in digital content that gives a technical description of it. Both standards are already widely used by news organizations and photographers to describe photos or videos.
Adobe, which makes the Photoshop editing software, and a host of other tech and media companies have spent years lobbying their peers to adopt the C2PA standard and have formed the Content Authenticity Initiative. The initiative is a partnership among dozens of companies, including The New York Times, to combat misinformation and “add a layer of tamper-evident provenance to all types of digital content, starting with photos, videos and documents,” according to the initiative.
Companies that offer AI generation tools could add the standards' markers to the metadata of the video, photo or audio files their tools helped create. That would signal to social networks like Facebook, Twitter and YouTube that such content was artificial when it was uploaded to their platforms. Those companies, in turn, could add labels noting that the posts were AI-generated, to inform users who see them on the social networks.
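The pipeline described here, in which generators embed provenance markers and platforms check for them at upload time, can be sketched in a few lines. This is a simplified illustration and not a real C2PA implementation: the actual standard stores a cryptographically signed manifest in a JUMBF container that must be parsed and verified, and the marker strings and function names below are assumptions chosen for demonstration.

```python
# Hypothetical sketch of upload-time provenance detection. A real system
# would parse and cryptographically verify a C2PA manifest rather than
# scan for byte patterns; these markers are illustrative assumptions.

PROVENANCE_MARKERS = [
    b"c2pa",                          # label used for C2PA manifest boxes (assumed)
    b"http://ns.adobe.com/xap/1.0/",  # XMP packet header, which can carry IPTC fields
]

def has_provenance_metadata(data: bytes) -> bool:
    """Return True if any known provenance marker appears in the raw file bytes."""
    return any(marker in data for marker in PROVENANCE_MARKERS)

def label_upload(data: bytes) -> str:
    """Decide what label, if any, a platform might attach when a file is uploaded."""
    if has_provenance_metadata(data):
        return "Labeled: AI info / provenance metadata present"
    return "No provenance metadata found"
```

The design point the article makes is visible even in this toy version: detection only works if generators cooperate by writing the metadata in the first place, which is why Meta is pushing for industry-wide adoption rather than shipping a detector alone.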
Meta and others will also require users who post AI-generated content to disclose that they have done so when uploading it to the companies' apps. Failing to do so could bring penalties, though the companies have not detailed what those penalties might be.
Mr. Clegg also said that if the company determined that a digitally created or altered post “poses a particularly high risk of deceiving the public as a matter of importance,” Meta could add a more prominent label to it, giving the public more information and context about its origin.
AI technology is advancing rapidly, and researchers have raced to develop tools that can spot fake content online. Although companies like Meta, TikTok and OpenAI have developed ways to detect such content, technologists have quickly found ways to circumvent those tools. Artificially generated video and audio have proved even more challenging to detect than AI photos.
(The New York Times Co. is suing OpenAI and Microsoft for copyright infringement over the use of Times articles to train artificial intelligence systems.)
“Bad actors always try to circumvent any standards we put in place,” Mr. Clegg said. He described the technology as both a “sword and shield” for the industry.
Part of that difficulty stems from the fragmented way tech companies are approaching the problem. Last fall, TikTok announced a new policy requiring its users to add labels to videos or photos they upload that were created using AI. YouTube announced a similar initiative in November.
Meta's new proposal would attempt to tie some of those efforts together. Other industry efforts, such as the Partnership on AI, have brought together dozens of companies to discuss common solutions.
Mr. Clegg said he hoped more companies would agree to participate in the standards, especially in the run-up to the presidential election.
“We felt particularly that during this election year, it would not be appropriate to wait for all the pieces of the puzzle to fall into place before acting,” he said.