
 Limitations to Understanding Consciousness 

"

"

I’m talking to you now, and I can see how you’re behaving; I could do a brain scan, and find out exactly what’s going on in your brain – yet it seems it could be consistent with all that evidence that you have no consciousness at all."

There are easy problems within the field of neuroscience, and there are hard problems. Easy problems are those that rely on basic circuitry: how do you know to remove your hand from a hot sauce pan? A question like this can be answered using an entirely biological and cognitive approach. You analyze the pattern of activation of neurons in a tract, you design an experiment, and you test your hypothesis.


Consciousness, unlike the simple sensorimotor pathway that prevents you from burning your hand while cooking, is a very, very hard problem. In the previous sections, we laid out the many complicated perspectives on consciousness, but it is important now to note the gap between where neuroscience is and where it would need to be in order to solve the problem of consciousness.

 

As seen in the timeline in A Brief History, neuroscience is a very young field. The aforementioned reflexive circuitry responsible for your quick hand removal from the sauce pan may be an easy problem for neuroscience today, but 50 years ago it was a complete mystery. Granted, neuroscience has come a long way, but it has only just begun to scratch the surface of the harder problems.


What makes consciousness such a hard problem is that it resists isolation and localization. If it is the result of extremely high-level cognitive function, then we simply do not currently have the technology to disentangle it from concurrent cognitive processes. One thing that neuroscientists can agree upon is that consciousness is not the product of a single neurological pathway, but rather the culmination of many different and diverse pathways, like threads within a rope.


The existence of consciousness presents an interesting philosophical dilemma as well: why do we have it? If our minds, at a base level, operate in binary, with neurons either firing or not firing, how are we anything more than organic robots who lack personality? Why and how are we capable of having such a dynamic inner experience? In an interview, philosopher David Chalmers once stated, “I’m talking to you now, and I can see how you’re behaving; I could do a brain scan, and find out exactly what’s going on in your brain – yet it seems it could be consistent with all that evidence that you have no consciousness at all.” Due to the decentralized nature of consciousness, we cannot simply visualize it with our current brain scanners and experimental studies. According to the results of our current investigations, there is nothing that physically differentiates our brain function from that of a sophisticated robot. So where do we find consciousness? Our mind cannot currently fully grasp its own existence, and it may never be able to.


We are limited in our understanding of consciousness because we are limited in the tools we have to study it. Our minds only allow us an entirely personal, three-dimensional interpretation of the particle and field interactions around us. Our cognition has an upper limit, which is why we are compelled to use computers to extend what we can do on our own. Our physical conditions limit us too: a larger particle accelerator could be the key to answering many questions about our universe, but accelerators can only be built so large within the physical constraints of our world. Many believe that consciousness lies on the border between what we could one day know about our existence and what we will never understand. Just as algebra is beyond a squirrel's grasp, consciousness may be out of ours entirely.


This unattainability of an understanding of consciousness may not be all bad, however. Being unable to completely understand consciousness means it is unlikely ever to be something we can deliberately replicate, which spares us from some pretty difficult ethical dilemmas in relation to Artificial Intelligence (AI).


According to Britannica, artificial intelligence is "the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings." The concept was first explored by computer scientist Alan Turing, who laid its theoretical groundwork in the 1930s and pioneered research into what he called "thinking machines": machines that could solve complex logic problems. In the 1950s, Turing proposed a test to assess such machines, fittingly known as the Turing Test. The Turing Test involves two human participants and one computer. One of the humans serves as the evaluator and is tasked with assessing a conversation between the machine and the other human subject. The evaluator knows that one of the other two subjects is a computer but is not told which one. At the end of the test, the evaluator is asked to identify which of the fellow subjects was the human and which was the machine. If the evaluator guesses incorrectly, or cannot distinguish the two, the machine is said to have passed the test, exhibiting behavior indistinguishable from that of its human counterpart.
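As a rough sketch of the procedure described above, the pass/fail bookkeeping of the test can be written out in a few lines of Python. The respondent functions below are hypothetical stand-ins (a real test would involve a natural-language system and a live human typing replies); only the anonymized-channels-and-guess structure follows the protocol.

```python
import random

def machine_respondent(question):
    # Hypothetical stand-in: a real test would use a conversational AI system.
    return "That's an interesting question. What makes you ask?"

def human_respondent(question):
    # Hypothetical stand-in for the human subject's typed replies.
    return "Hmm, I'd have to think about that one."

def run_turing_test(questions, evaluator_guess):
    """Return True if the machine 'passes', i.e. the evaluator fails to
    identify which anonymous channel it is behind."""
    # Randomly assign the two respondents to anonymous channels A and B,
    # so the evaluator cannot rely on ordering.
    respondents = [machine_respondent, human_respondent]
    random.shuffle(respondents)
    channels = {"A": respondents[0], "B": respondents[1]}

    # The evaluator only ever sees the labeled transcripts, never the sources.
    transcripts = {
        label: [(q, respond(q)) for q in questions]
        for label, respond in channels.items()
    }

    guess = evaluator_guess(transcripts)  # the evaluator returns "A" or "B"
    machine_label = "A" if channels["A"] is machine_respondent else "B"

    # The machine passes when the evaluator cannot correctly single it out.
    return guess != machine_label

# Example: an evaluator who guesses at random will misidentify the machine
# about half the time, so this toy machine "passes" roughly half its runs.
passed = run_turing_test(
    ["What did you have for breakfast?", "Tell me about your childhood."],
    evaluator_guess=lambda transcripts: random.choice(["A", "B"]),
)
print("Machine passed:", passed)
```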


Currently, AI software can replicate much of what our mind's circuitry can do. In 2011, IBM's intelligent machine "Watson" beat two reigning champions in a Jeopardy! match watched by millions, demonstrating its ability to understand and integrate complex information at a rapid pace. Some programs have even been claimed to pass the aforementioned Turing Test, befuddling human evaluators into thinking they, too, are human. One issue with current AI, however, is that each computer operates a mile deep but an inch wide: it is highly proficient at certain specific tasks but lacks the flexibility and broad knowledge the human brain contains. Watson may be able to answer trivia questions in a matter of seconds, but if asked to explain how it itself works and operates, it falters. It cannot contemplate its own computing processes, and because of this it is limited in its growth.


Not just Watson, but all current 'artificially intelligent' computers still lack consciousness in the form of self-awareness and an understanding of their own inner experience. AI lacks whatever it is that gives us the ability to feel emotional and mental pain. Machines cannot currently learn what it is like to feel a certain way, and until we can fully grasp our own consciousness and what it is that allows for our emotional experience, it is likely that they will not be able to.


Though our current AI lacks consciousness, and though our own inability to understand consciousness prevents us from deliberately building computers that have it, it is possible that computers could develop a different form of it without our intervention. This possibility rests on another area of computer science that is the focus of much research: machine learning. Machine learning is a branch of artificial intelligence based on the idea that systems can learn from data, identify patterns, and make decisions with minimal human involvement; in effect, computers are equipped to teach themselves (a toy sketch of this idea appears after this paragraph). Given the immense amount of digital data they have to learn from, the idea that they could interpret and incorporate this information and develop into conscious beings is not entirely far-fetched. In fact, this is the topic of the critically acclaimed 2014 movie Ex Machina, which takes place in a not-so-distant future. In the movie, computer programmer Nathan develops an intelligent female robot, Ava, who has the ability to learn from crowd-sourced internet data. The "brain" Nathan gives this robot is essentially a highly plastic router, allowing her to access and learn from everyone and everything on the internet. Nathan recruits a young programmer, Caleb, to be the human subject in a Turing Test, only this time the rules are different: given that Caleb already knows Ava is a robot, will she still be able to convince him that she is human, or at least that she possesses the intelligence necessary to qualify as one? The answer is yes.
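Before returning to the film, here is the learning-from-data idea in its most stripped-down form: a toy perceptron, written in plain Python, that adjusts its internal weights from labeled examples rather than from hand-coded rules. The data and numbers are invented purely for illustration; real machine-learning systems are vastly more elaborate, but the core loop is the same.

```python
# A toy "learning from data" loop: a perceptron nudges its weights toward
# whatever reduces its error on labeled examples, with no task-specific
# rules written in by hand. All numbers here are illustrative only.

# Each example: (features, label). Label 1 marks a "high" pair of features.
training_data = [
    ((0.9, 0.8), 1), ((0.7, 0.9), 1), ((0.8, 0.6), 1),
    ((0.1, 0.2), 0), ((0.2, 0.1), 0), ((0.3, 0.2), 0),
]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

def predict(features):
    activation = sum(w * x for w, x in zip(weights, features)) + bias
    return 1 if activation > 0 else 0

# Repeatedly adjust the weights in whatever direction reduces the error.
for epoch in range(20):
    for features, label in training_data:
        error = label - predict(features)
        weights = [w + learning_rate * error * x
                   for w, x in zip(weights, features)]
        bias += learning_rate * error

# After training, the model classifies new examples it has never seen.
print(predict((0.85, 0.75)))  # expected: 1
print(predict((0.15, 0.10)))  # expected: 0
```

On this toy data the weights settle within a few passes; the point is not the algorithm's power but the fact that no rule about what makes an example "high" was ever written in by hand.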


This movie emphasizes the potential power of machine learning in the development of conscious artificial intelligence. Its morbid ending leaves the viewer questioning not only whether Ava is a conscious being, but also what, if anything, separates Ava from an organically born human. It makes viewers question their own position in the world and, most of all, wonder just how far away we are from developing machines that can surpass our own understanding of consciousness without our help. At one point, in reference to Ava, Nathan grimly states, "One day the AIs are going to look back on us the same way we look at fossil skeletons on the plains of Africa. An upright ape living in dust with crude language and tools, all set for extinction."


Ex Machina provides an artistic interpretation of a possible future for machines; however, our current understanding of machine learning is not yet refined enough for this level of computation. There is still a question of whether machines, even given access to all available information, could decipher the hard problem of consciousness. It may be something beyond both human and machine comprehension.


Another big issue in the study of consciousness, and one that may prevent machines and humans alike from full understanding, is our innate inability to record and quantify the subjective experience of others. We discussed this in the previous section in relation to animals. It is impossible for us to completely understand the lived experience of animals, and because of this, it is very difficult to delineate where, or if, a split between conscious and non-conscious organisms arose in evolution. So, too, is it impossible for us to understand the lived experience of our fellow humans. For all we know, we could be experiencing an entirely different world than those around us without even knowing it. Consider a hypothetical inversion of color perception: a person who sees green where the rest of us see red, or vice versa. If they grow up associating the word "red" with what the general population sees as green, it would be extremely difficult to ever discover that their experience differs. We cannot describe colors in themselves; we can only use their semantic associations to transmit information to others, whom we assume have the same perception as us. This may be a small example, but it emphasizes our lack of language to fully express our lived experience. How can we hope to study consciousness if we cannot quantify and understand the experiences of our fellow humans?


The tools we currently have to study consciousness limit our understanding, as does our inability to access the subjective experience of those around us. Computer science, too, is far from solving the problem of consciousness. In all of these fields, even given time and monetary investment, consciousness may still lie outside the scope of discoverability. The truth is, we do not yet understand consciousness well enough to know whether it is even within our comprehension. Only time and research will give us more clues about this complicated concept, and until we have more information, all we can do is keep investigating consciousness through scientific, philosophical, and artistic approaches. Chalmers, in reference to the future of this area of study, once stated, "But it wouldn’t surprise me in the least if in 100 years, neuroscience is incredibly sophisticated, if we have a complete map of the brain – and yet some people are still saying, ‘Yes, but how does any of that give you consciousness?’"


Figure: Sensorimotor Pathway for the Hand

Sensory neuron: encodes the feeling of a stimulus into electrical impulses that inform the brain about the stimulus.

Motor neuron: takes electrical impulses from the brain encoding a response to a stimulus and translates them into a motor response.

Thalamus: a key structure in the brain for generating an appropriate motor response from sensory input.
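As a caricature of the "easy problem" circuitry this figure describes, the pathway can be sketched as a simple chain of stages: stimulus, sensory neuron, thalamus, motor neuron, response. This is only a hedged toy model of that data flow, not a biologically faithful simulation; the encodings and the temperature threshold are arbitrary illustrative values.

```python
# A toy caricature of the sensorimotor pathway above:
# stimulus -> sensory neuron -> thalamus -> motor neuron -> response.
# The encodings and the 45 C threshold are arbitrary illustrative values.

def sensory_neuron(skin_temperature_c):
    """Encode the stimulus as a firing rate (spikes/sec): hotter means faster."""
    return max(0.0, (skin_temperature_c - 30.0) * 5.0)

def thalamus(firing_rate):
    """Decide whether the sensory signal warrants a withdrawal command."""
    return firing_rate > 75.0  # roughly 45 C and above in this toy encoding

def motor_neuron(withdraw):
    """Translate the command into a motor response."""
    return "pull hand away" if withdraw else "keep stirring"

def reflex(skin_temperature_c):
    return motor_neuron(thalamus(sensory_neuron(skin_temperature_c)))

print(reflex(25.0))  # a cool pan handle -> "keep stirring"
print(reflex(60.0))  # a hot sauce pan  -> "pull hand away"
```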
