Artist Stephanie Dinkins has long been a pioneer in combining art and technology in her Brooklyn-based practice. In May she was awarded $100,000 by the Guggenheim Museum for her groundbreaking innovations, including an ongoing series of interviews with Bina48, a humanoid robot.
For the past seven years, she has experimented with AI’s ability to realistically depict Black women smiling and crying, using a variety of word prompts. The first results were lackluster but not alarming: her algorithm produced a pink-hued humanoid shrouded in a black cloak.
“I was hoping for something that reflected a little more of Black womanhood,” she said. And although the technology has improved since her first experiments, Dinkins found herself using workaround terms in her text prompts to help the AI image generators produce her desired image, “to give the machine a chance to give me what I want.” But whether she uses the term “African American woman” or “Black woman,” the machine distorts facial features and hair textures at high rates.
“These improvements obscure some of the deeper questions we should be asking about discrimination,” said Dinkins, who is Black. “Bias is embedded deep in these systems, so it becomes ingrained and automatic. If I’m working within a system that uses an algorithmic ecosystem, I want that system to know who Black people are in nuanced ways, so that we can feel better supported.”
She is not alone in asking tough questions about the troubling relationship between AI and race. Many Black artists are finding evidence of racial bias in artificial intelligence, both in the large data sets that teach machines to generate images and in the underlying programs that run the algorithms. In some cases, AI technologies seem to ignore or distort artists’ text prompts, affecting how Black people are depicted in images; in others, they seem to stereotype or censor Black history and culture.
Discussion of racial bias within artificial intelligence has grown in recent years, as studies have shown that facial recognition technologies and digital assistants have trouble identifying the images and speech patterns of nonwhite people. The studies raised broader questions of fairness and bias.
The major companies behind AI image generators – including OpenAI, Stability AI and Midjourney – have pledged to improve their tools. “Bias is a significant, industry-wide problem,” Alex Beck, a spokesperson for OpenAI, said in an email interview, adding that the company is continuously trying to “improve performance, reduce bias, and reduce harmful outputs.” Beck declined to say how many employees were working on racial bias, or how much money the company had allocated to the problem.
To prove her point during an interview with a reporter, the artist Linda Dounia Rebeiz, 28, asked OpenAI’s image generator, DALL-E 2, to imagine buildings in her hometown, Dakar. The algorithm produced arid desert landscapes and ruined buildings, which Rebeiz said were nothing like the coastal homes in the Senegalese capital.
“It’s demoralizing,” Rebeiz said. “The algorithms lean toward a cultural image of Africa that the West has created. They default to the worst stereotypes that already exist on the internet.”
Last year, OpenAI said it was establishing new techniques to diversify the images produced by DALL-E 2, so that the tool would “generate images of people that more accurately reflect the diversity of the world’s population.”
Minne Atairu, an artist featured in Rebeiz’s exhibition and a Ph.D. candidate at Teachers College, Columbia University, had planned to use image generators with young students of color in the South Bronx. But now she worries “that this could cause students to generate offensive images,” Atairu explained.
The Feral File exhibition includes images from her “Blonde Braids Studies,” which explore the limits of Midjourney’s algorithm in creating images of Black women with naturally blond hair. When the artist asked for an image of Black identical twins with blond hair, the program instead produced a sibling with lighter skin.
“It tells us where the algorithm is pooling images from,” Atairu said. “It’s not necessarily drawing from a corpus of Black people, but from one geared toward white people.”
She said she worries that young Black children might try to generate images of themselves and instead see children whose skin has been lightened. Atairu recalled some of her past experiments with Midjourney before a recent update improved its capabilities. “It would generate images that were like blackface,” she said. “You would see a nose, but it wasn’t a human’s nose. It looked like a dog’s nose.”
In response to a request for comment, Midjourney founder David Holz said in an email, “If anyone finds an issue with our systems, we request they please send us specific examples so we can investigate.”
Stability AI, which provides image generator services, said it planned to collaborate with the AI industry to improve bias evaluation techniques across a greater diversity of countries and cultures. Bias, the company said, is caused by “overrepresentation” in its general data sets, though it did not specify whether the overrepresentation of white people was the issue.
Earlier this month, Bloomberg analyzed more than 5,000 images generated by Stability AI and found that its program reinforced stereotypes about race and gender, typically depicting people with lighter skin tones as holding higher-paying jobs while labeling subjects with darker skin tones “dishwasher” and “housekeeper.”
These problems have not stopped the investment frenzy in the tech industry. A recent report from the consulting firm McKinsey predicted that generative AI would add $4.4 trillion to the global economy annually. Last year, nearly 3,200 start-ups received $52.1 billion in funding, according to the GlobalData deals database.
Technology companies have struggled with accusations of bias in depictions of dark skin since the early days of color photography in the 1950s, when companies like Kodak used white models in their color development. Eight years ago, Google disabled its AI program’s ability to let people search for gorillas and monkeys through its Photos app because the algorithm was incorrectly sorting Black people into those categories. As recently as May of this year, the problem still had not been fixed; two former employees who worked on the technology told The New York Times that Google had not trained the AI system with enough images of Black people.
Other experts who study artificial intelligence say the bias goes deeper than data sets, pointing to the technology’s early development in the 1960s.
“The issue is more complicated than data bias,” said James E. Dobson, a cultural historian at Dartmouth College and author of a recent book on the birth of computer vision. According to his research, there was little discussion of race in the early days of machine learning, and most of the scientists working on the technology were white men.
“It’s hard to separate today’s algorithms from that history, because engineers are building on those earlier versions,” Dobson said.
To reduce the appearance of racial bias and hateful images, some companies have banned certain words from text prompts that users submit to generators, such as “slave” and “fascist.”
But Dobson said companies hoping for simple solutions, such as censoring the prompts users can submit, were avoiding more fundamental issues of bias in the underlying technology.
“It’s a worrying time as these algorithms become more complicated. And when you see garbage coming out, you have to wonder what kind of garbage process still exists inside the model,” the professor said.
Auriea Harvey, an artist included in a recent Whitney Museum exhibition on “refiguring” digital identity, ran into these restrictions for a recent project that used Midjourney. “I wanted to query the database to see what it knew about slave ships,” she said. “I received a message saying that Midjourney would suspend my account if I continued.”
Dinkins faced similar problems with NFTs she created and sold showing how okra was brought to North America by enslaved people and settlers. She was censored when she tried to use a generative program, Replicate, to make images of slave ships. Eventually she learned to outwit the censors by using the term “pirate ship.” The image she received was an approximation of what she wanted, but it also raised troubling questions for the artist.
“What is this technology doing with history?” Dinkins asked. “You can see that someone is trying to correct a bias, yet at the same time that erases a piece of history. I find those erasures as dangerous as any bias, because we will forget how we got here.”
Naomi Beckwith, chief curator of the Guggenheim Museum, credited Dinkins’ nuanced approach to issues of representation and technology as the reason the artist received the museum’s first Art and Technology Award.
“Stephanie has become part of a tradition of artists and cultural activists who poke holes in these overarching and totalizing theories about how things work,” Beckwith said. The curator added that her own initial trepidation about AI programs replacing human creativity was greatly diminished when she realized that these algorithms knew virtually nothing about Black culture.
But Dinkins isn’t quite ready to give up on the technology. She continues to employ it for her artistic projects, with skepticism. “Once the system can generate a truly high-fidelity image of a Black woman crying or smiling, can we rest?” she asked.