Have you ever talked to someone who is “into consciousness”? How did that conversation go? Did they make vague gestures in the air with both hands? Did they reference the Tao Te Ching or Jean-Paul Sartre? Did they say that, in fact, there is nothing that scientists can be certain about, and that reality is only as real as we make it out to be?
The vagueness of consciousness, its imprecision, has made its study anathema in the natural sciences. At least until recently, the project was largely left to philosophers, who were often only marginally better than others at articulating the object of their study. Hod Lipson, a roboticist at Columbia University, said that some people in his field refer to consciousness as the “C-word”. “The idea was that you couldn’t study consciousness until you had tenure,” said Grace Lindsay, a neuroscientist at New York University.
Nonetheless, a few weeks ago, a group of philosophers, neuroscientists, and computer scientists, Dr. Lindsay among them, proposed a rubric with which to determine whether an AI system like ChatGPT can be considered conscious. The report, which surveys what Dr. Lindsay calls the “brand-new” science of consciousness, pulls together elements from half a dozen nascent empirical theories and proposes a list of measurable properties that might suggest the presence of some presence in a machine.
For example, recurrent processing theory focuses on the difference between conscious perception (for example, actively studying an apple in front of you) and unconscious perception (for example, your sense of an apple flying toward your face). Neuroscientists have argued that we unconsciously perceive things when electrical signals pass from the nerves of our eyes to the primary visual cortex and then to deeper parts of the brain, like a baton handed from one cluster of nerves to another. These perceptions seem to become conscious when the baton is passed back, from the deeper parts of the brain to the primary visual cortex, creating a loop of activity.
Another theory describes specialized sections of the brain that are used for particular tasks – the part of your brain that can balance your top-heavy body on a pogo stick is different from the part of your brain that can take in an expansive view. We are able to put all this information together (you can bounce on a pogo stick while admiring a nice view), but only to a certain extent (it’s hard to do). So neuroscientists have postulated the existence of a “global workspace” that allows for control and coordination over what we pay attention to, what we remember, even what we perceive. Our consciousness may arise from this integrated, shifting workspace.
But it can also arise from your ability to be aware of your own awareness, create virtual models of the world, predict future experiences, and locate your body in space. The report argues that any one of these characteristics could, potentially, be an essential part of being conscious. And, if we are able to recognize these traits in a machine, we may be able to consider the machine conscious.
One of the difficulties of this approach is that the most advanced AI systems are deep neural networks that “learn” how to do things on their own, in ways that are not always interpretable by humans. We can get some types of information from their internal structure, but only in limited ways, at least for the time being. This is the black box problem of AI, so even if we had a complete and accurate rubric of consciousness, it would be difficult to apply it to the machines we use every day.
And the authors of the recent report are quick to note that theirs is not a definitive list of what makes one conscious. They rely on an account of “computational functionalism”, according to which consciousness is reduced to pieces of information passed back and forth within a system, like in a pinball machine. In principle, according to this view, a pinball machine could be conscious, if it were made much more complex. (This might mean it’s no longer a pinball machine; let’s cross that bridge if we come to it.) But others have proposed theories that take our biological or physical features, or our social and cultural contexts, as essential pieces of consciousness. It’s hard to see how these things could be coded into a machine.
And even for researchers who are largely concerned with computational functionalism, no existing theory seems adequate to account for consciousness.
“For any of the report’s conclusions to be meaningful, the theories have to be sound,” Dr. Lindsay said. “Which they are not.” This might just be the best we can do for now, she added.
After all, does any one of these characteristics, or all of them combined, comprise what William James described as the “warmth” of conscious experience? Or, in Thomas Nagel’s words, “what it is like” to be you? There is a gap between the ways in which we can measure subjective experience with science and the subjective experience itself. This is what David Chalmers has called the “hard problem” of consciousness. Even if an AI system has recurrent processing, a global workspace, and a sense of its physical location – what if it still lacks the thing that makes it feel like something?
When I brought up this emptiness to Robert Long, a philosopher at the Center for AI Safety who led the work on the report, he said, “That feeling is something that happens whenever you try to scientifically explain, or reduce to physical processes, some high-level concept.”
The stakes are high, he added; the advances in AI and machine learning are coming faster than our ability to explain what is going on. In 2022, Blake Lemoine, a Google engineer, argued that the company’s LaMDA chatbot was conscious (although most experts disagreed); the further integration of generative AI into our lives means the topic may become more contentious. Dr. Long argues that we need to start making some claims about what might be conscious, and laments the “vague and sensationalist” way in which we have gone about it, often conflating subjective experience with general intelligence or rationality. “This is an issue we face right now, and over the next few years,” he said.
As Megan Peters, a neuroscientist at the University of California, Irvine, and author of the report, said, “Whether someone is there or not makes a big difference in how we treat it.”
We already do this kind of research with animals, and it requires careful study to make even the most basic claim that the experiences of other species are similar to ours, or even comprehensible to us. This can resemble a fun-house activity, like shooting empirical arrows from moving platforms at shape-shifting targets, with bows that sometimes turn out to be spaghetti. But sometimes we also get a shock. As Peter Godfrey-Smith writes in his book “Metazoa”, cephalopods probably have a strong but markedly different type of subjective experience from humans. Each arm of an octopus contains about 40 million neurons. What is that like?
We rely on a series of observations, inferences, and experiments – both systematic and not – to address this problem of other minds. We talk, touch, play, hypothesize, test, control, X-ray, and dissect, but, ultimately, we still don’t know what makes us conscious. All we know is that we are.