Fake explicit Taylor Swift images dominate social media
Fake, sexually explicit images of Taylor Swift, likely generated by artificial intelligence, spread rapidly across social media platforms this week, dismaying fans who saw them and renewing calls for lawmakers to protect women and to crack down on the platforms and technology that spread such images.
An image shared by a user on X, formerly Twitter, was viewed 47 million times before the account was suspended on Thursday. X suspended several accounts that had posted fake photos of Ms. Swift, but the photos were shared on other social media platforms and continued to spread despite efforts by those companies to remove them.
While X said it was working to remove the images, fans of the pop superstar flooded the platform in protest, posting waves of content with the phrase “Protect Taylor Swift” in an effort to bury the explicit images and make them more difficult to find.
Reality Defender, a cybersecurity company that focuses on AI detection, determined that the images were likely created using a diffusion model, an AI-powered technique available in more than 100,000 apps and publicly released models, said Ben Coleman, the company's co-founder and chief executive.
As the AI industry has boomed, companies have raced to release tools that let users create images, videos, text, and audio recordings from simple prompts. The tools are hugely popular, but they have made it easier and cheaper than ever to create so-called deepfakes, which depict people doing or saying things they have never done.
Researchers now fear that deepfakes are becoming a powerful disinformation force, enabling everyday internet users to create non-consensual nude images or embarrassing depictions of political candidates. Artificial intelligence was used to make fake robocalls of President Biden during the New Hampshire primary, and Ms. Swift was featured in deepfake ads selling cookware this month.
“It's always been a dark undercurrent of the Internet, various kinds of non-consensual pornography,” said Oren Etzioni, a computer science professor at the University of Washington who has worked on detecting deepfakes. “Now this is a new strain of it that is particularly harmful.”
“We are going to see a tsunami of these AI-generated explicit images. The people who generated these see it as a success,” Mr. Etzioni said.
X said it has a zero-tolerance policy toward such content. “Our teams are proactively removing all identified images and taking appropriate action against the accounts responsible for posting them,” a representative said in a statement. “We are monitoring the situation closely to ensure that any further violations are promptly addressed and the content removed.”
Although many companies making generative AI tools prevent their users from creating explicit imagery, people find ways to break the rules. “It's an arms race, and it seems like every time someone comes up with a guardrail, someone else figures out how to break out of jail,” Mr. Etzioni said.
The photos originated from a channel on the messaging app Telegram dedicated to creating such images, according to 404 Media, a technology news site. But the deepfakes gained widespread attention after being posted on X and other social media services, where they spread rapidly.
Some states have banned pornographic and political deepfakes. But the restrictions have had little effect, and there are no federal regulations covering such deepfakes, Mr. Coleman said. He said platforms have tried to address deepfakes by asking users to report them, but that approach has not worked: by the time the images are flagged, millions of users have already seen them.
“The toothpaste is already out of the tube,” he said.
Ms. Swift's publicist, Tree Paine, did not immediately respond to requests for comment late Thursday.
The deepfakes of Ms. Swift renewed calls for action from lawmakers. Representative Joseph Morelle, a Democrat from New York who introduced a bill last year that would make sharing such images a federal crime, said on X that the spread of the images was “appalling,” adding: “It's happening to women everywhere, every day.”
“I have repeatedly warned that AI could be used to generate non-consensual intimate photos,” Senator Mark Warner, a Democrat from Virginia and chairman of the Senate Intelligence Committee, said of the images on X. “This is a deplorable situation.”
Representative Yvette D. Clarke, Democrat of New York, said advances in artificial intelligence have made it easier and cheaper to create deepfakes.
“What happened to Taylor Swift is nothing new,” she said.