Google joins effort to help spot AI-made content
Google, whose work in artificial intelligence has made it much easier to create and spread AI-generated content, now wants to ensure that such content can also be detected.
The tech giant said Thursday it is joining an effort to develop credentials for digital content, a kind of “nutrition label” that identifies when and how a photo, video, audio clip or other file was created or changed, including with AI. The company will collaborate with Adobe, the BBC, Microsoft, Sony and others to refine the technical standards.
The announcement follows a similar pledge made Tuesday by Meta, which, like Google, has made it easier to create and distribute artificially generated content. Meta said it would promote standardized labels identifying such content.
Google, which has spent years pouring money into its artificial intelligence initiatives, said it would explore how to incorporate the digital credentials into its products and services, though it did not specify timing or scope. Its Bard chatbot is connected to some of the company's most popular consumer services, like Gmail and Docs. On YouTube, which is owned by Google and will be included in the digital credential effort, users can quickly find videos featuring realistic digital avatars discussing current events in voices powered by text-to-speech services.
Identifying where online content originates and how it changes is a high priority for lawmakers and technology watchdogs in 2024, when billions of people will vote in major elections around the world. After years of disinformation and polarization, realistic images and audio produced by artificial intelligence, along with unreliable AI detection tools, have made people more skeptical about the authenticity of what they see and hear on the internet.
According to supporters of a universal authentication standard, configuring digital files to include verified records of their history could make the digital ecosystem more trustworthy. Google is joining the steering committee of one such group, the Coalition for Content Provenance and Authenticity, or C2PA. The C2PA standard has been endorsed by news organizations such as The New York Times, as well as camera manufacturers, banks and advertising agencies.
Laurie Richardson, Google's vice president of trust and safety, said in a statement that the company hoped its work would “provide important context to people, helping them make more informed decisions.” She noted Google's other efforts to give users more information about the online content they encounter, including labeling AI content on YouTube and providing details about images in Search.
Efforts to attach credentials to metadata, the underlying information embedded in digital files, are not flawless.
OpenAI said this week that its AI image-generation tools would soon add watermarks to images in line with the C2PA standards. Starting Monday, the company said, images generated by its online chatbot, ChatGPT, and its stand-alone image-generation technology, DALL-E, will include both a visual watermark and hidden metadata designed to identify them as created by artificial intelligence. The move, however, “is not a silver bullet to address provenance issues,” OpenAI said, adding that the tags “can easily be removed either accidentally or intentionally.”
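OpenAI's caveat is easy to demonstrate. Image formats like PNG carry metadata in ancillary chunks that any tool can silently drop when it re-encodes the file. The sketch below, using only Python's standard library, builds a tiny PNG containing a hypothetical "provenance" text tag (the tag name and value are illustrative, not the actual C2PA manifest format) and then strips every text chunk from it:

```python
import struct
import zlib

def chunk(ctype: bytes, data: bytes) -> bytes:
    """Serialize one PNG chunk: 4-byte length, type, data, CRC-32."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

# Build a minimal 1x1 grayscale PNG with a tEXt metadata chunk.
ihdr = chunk(b"IHDR", struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0))
text = chunk(b"tEXt", b"provenance\x00generated-by-ai")  # hypothetical tag
idat = chunk(b"IDAT", zlib.compress(b"\x00\x00"))  # filter byte + 1 pixel
iend = chunk(b"IEND", b"")
png = b"\x89PNG\r\n\x1a\n" + ihdr + text + idat + iend

def strip_text_chunks(data: bytes) -> bytes:
    """Return the PNG with all textual metadata chunks removed."""
    out, pos = data[:8], 8  # keep the 8-byte PNG signature
    while pos < len(data):
        (length,) = struct.unpack(">I", data[pos:pos + 4])
        ctype = data[pos + 4:pos + 8]
        end = pos + 12 + length  # length + type + data + CRC
        if ctype not in (b"tEXt", b"iTXt", b"zTXt"):
            out += data[pos:end]
        pos = end
    return out

stripped = strip_text_chunks(png)
print(b"provenance" in png)       # True
print(b"provenance" in stripped)  # False
```

The image itself survives intact; only the label disappears. This fragility is why C2PA pairs metadata with cryptographic signing, so that tampering can at least be detected even if the credentials cannot be made permanent.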
(The New York Times Co. is suing OpenAI and Microsoft for copyright infringement, and is accusing the tech companies of using Times articles to train AI systems.)
There is a “shared sense of urgency” to boost trust in digital content, according to a blog post last month from Andy Parsons, senior director of the Content Authenticity Initiative at Adobe. The company released AI tools last year, including its art-generation software Adobe Firefly and a Photoshop tool known as Generative Fill, which uses AI to expand a photo beyond its borders.
“The stakes have never been higher,” Mr. Parsons wrote.
Cade Metz contributed reporting.