Determining Consciousness
By Connor Retana
One of the key concepts in ethics is consciousness, as we tend to assign moral value to beings that possess it. But how do we determine consciousness? Consider an analogy: how do we tell the temperature outside? Well, there are two ways: we can walk outside and feel how hot it is, or we can check the temperature on the nearest device. The first strategy interprets a personal experience of heat or cold; the second measures the average kinetic energy of particles in the air, mapped onto a defined interval scale.
Our understanding of consciousness is likely similar to the first strategy. Our friends are conscious. Dogs are conscious. Everybody we know is conscious, and because we regularly encounter conscious beings as we go about our lives, we rely on intuitive assumptions about what constitutes consciousness. In most instances, these assumptions will be correct. But what about cases where the determination is harder, like when someone has sustained severe brain damage? Unlike temperature, where we understand the underlying nature of what is hot or cold, we lack this understanding of consciousness. Consequently, we also lack an objective metric by which to measure it and point to an instance of its existence.
What if we could start from our original, folk-psychological understanding of consciousness and develop tests that accurately measure its presence or absence? Could we use those tests to build a theory of consciousness that outstrips our intuitive understanding? Could we apply that theory beyond animals and non-responsive humans, to AI and other populations that may be conscious?
Many tests for consciousness, or c-tests, are developed in a similar way. According to an article by Tim Bayne, a professor of philosophy at Monash University who studies the measurement of consciousness, researchers begin with some function that is considered evidence of consciousness and design an experiment to measure its presence. They start by experimenting on subjects who are generally considered conscious, like average adult humans. Once they find a way to measure the intended aspect of consciousness, executive functioning for example, they develop versions of the test that can be administered both to typical adult humans and to other populations, like people with brain damage. They then extend those tests to the new group of subjects, using data from the original group as a baseline or control. Because our pre-theoretical framework for consciousness applies confidently to the original group, we don't have to prove that these tests detect consciousness. We can assume the consciousness of the first test subjects and work backwards, matching the presence of the measured aspect of consciousness in the new group to the original.
One great example of this process is the narrative comprehension test. As explained in an article by Lorina Naci, a psychologist and neuroscientist at Trinity College Dublin, this test examines whether someone has the executive functioning capacity to understand a short film. Researchers consider executive functioning to be a “process that is integral to our conscious experience of the world... a marker for conscious awareness,” according to Naci. Neurotypical adults who were shown a short film all displayed highly correlated brain activity at certain moments throughout. Researchers then analyzed this activity to determine what all participants had in common. The brain activity of behaviorally non-responsive subjects watching the same film could then be compared against this common model, to determine whether they displayed executive functioning similar to that of neurotypical adults.
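To make the logic concrete, here is a minimal sketch of that comparison in Python. This is not Naci's actual analysis pipeline, which used fMRI data and far more sophisticated statistics; the data, variable names, and decision rule below are all hypothetical, and a real study would hold each subject out of the average to avoid circularity.

```python
import numpy as np

# Hypothetical data: 20 healthy subjects, each with a 200-timepoint
# brain-activity signal recorded while watching the film.
rng = np.random.default_rng(0)
stimulus_driven = rng.standard_normal(200)            # film-locked component
healthy = stimulus_driven + 0.5 * rng.standard_normal((20, 200))

# The "common model" is the group-average time course. (A real analysis
# would average with each subject held out, among other refinements.)
model = healthy.mean(axis=0)

def similarity(subject, model):
    """Pearson correlation between one subject's signal and the model."""
    return np.corrcoef(subject, model)[0, 1]

# Baseline: how strongly does each healthy subject track the model?
healthy_scores = np.array([similarity(s, model) for s in healthy])

# A behaviorally non-responsive patient watching the same film.
patient = stimulus_driven + 0.5 * rng.standard_normal(200)
patient_score = similarity(patient, model)

# Crude decision rule: does the patient fall within the healthy range?
print(f"patient r = {patient_score:.2f}, "
      f"healthy minimum = {healthy_scores.min():.2f}")
if patient_score >= healthy_scores.min():
    print("Patient tracks the film the way healthy subjects do.")
```

The point of the sketch is the shape of the inference: the healthy group defines what film-tracking brain activity looks like, and the patient is judged by how closely they match it.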
So how can people use models like these to build a theoretical understanding of consciousness? How do we apply successful consciousness tests to less familiar systems like AI? After all, ChatGPT can follow requests and even display comprehension of a movie plot given the right input. As of today, though, no test extends from the human (or animal) manifestation of consciousness to this possible synthetic one. How can we test these systems for consciousness?
Bayne and his colleagues believe something called the “iterative natural kind” strategy is our best tactic for extending our understanding to manifestations of consciousness that are highly distinct from our own. With this strategy, multiple tests are first developed using typical adult humans. If these tests genuinely detect consciousness, their results should be highly correlated: a positive result on one test strongly predicts a positive result on the others. The tests are then used on another population, like people with brain damage or adult octopuses. During this round of testing, a further test is designed that is specific to the new population. The key is to make a test that translates to other, similar populations, regardless of whether it works with humans; if it does, we can use it on less familiar populations that may not respond to tests designed for us. The new test is then run and its results correlated with the old ones. If they are highly correlated, meaning the other tests predict the same answer as the new one and vice versa, we have a novel test whose accuracy is statistically supported. Using this method, we could in theory develop tests that measure markers of consciousness in relatively alien systems, like AI, and build a theory of consciousness that applies universally.
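As a toy illustration of that validation step, here is a hypothetical sketch in Python. The test names, outcome data, and agreement threshold are invented for the example, not drawn from Bayne's work; the point is only how correlation between test verdicts builds trust in a new test.

```python
import numpy as np

# Hypothetical binary verdicts (1 = "conscious") from three established
# c-tests, each run on the same 12 subjects.
established = np.array([
    [1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1],  # test A (e.g. command-following)
    [1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0],  # test B (e.g. narrative comprehension)
    [1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1],  # test C (e.g. a neural-complexity measure)
])

# Candidate test designed for a new population, run on the same subjects.
candidate = np.array([1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1])

def phi(a, b):
    """Correlation between two binary verdict vectors (phi coefficient)."""
    return np.corrcoef(a, b)[0, 1]

# Sanity check: the established tests should agree with one another...
for i in range(len(established)):
    for j in range(i + 1, len(established)):
        print(f"test {i} vs test {j}: phi = {phi(established[i], established[j]):.2f}")

# ...and the candidate earns trust by agreeing with all of them.
scores = [phi(candidate, t) for t in established]
print("candidate vs established:", [f"{s:.2f}" for s in scores])
if min(scores) > 0.8:  # illustrative threshold, not from Bayne et al.
    print("Candidate correlates highly; extend it to the new population.")
```

Each round adds a test that has been cross-validated against the rest, which is what lets the battery creep outward toward populations none of the original tests could reach.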
This research is very new, and there is much more to discover and develop as theories continue to evolve. Questions that once seemed the purview of ethics seminars and sci-fi are destined to become more pressing as we continue to probe the inner lives of the technologies that surround us.