Movies like The Terminator and WALL-E were made either to frighten us about AI and the future of robots or to move us to tears over where our world and its technology might be headed. Robots may not yet be capable of what we’ve seen in these films, but what if we really aren’t so far from that becoming a reality?
Present-day technology isn’t advanced enough to create robots or AI on the level we’ve seen in entertainment, but the idea keeps resurfacing in conversations about these fields. Many have wondered whether machines could ever become truly “conscious,” or be programmed with some semblance of what we perceive as a soul. Is that even possible, and could an algorithm ever mimic whatever gives humans these unique internal qualities, so that robots and AI might one day share them?
The answer depends on how you define both “soul” and “consciousness.” In the more than 70 years since AI was first introduced into academia, though, no one has managed to settle on accurate, agreed-upon definitions for either term.
The BBC recently published an article on the concept of AI having (or being programmed with) souls, in which attempts were made to define what exactly a soul is in the first place. This quickly shifted the conversation away from the theological perspective often attached to souls being imbued in humans by some ancient man in the sky. Further questions grew out of that conversation, such as whether AI could ever become more than a mindless piece of technology existing as a tool for us humans.
The main conclusion of the article was that whether an artificial intelligence system has a soul will remain solely in the eye of the beholder. An advanced algorithm may provide enough “evidence” for some spiritual or religious people to believe that these systems have somehow acquired or been given souls, once their personal criteria for judging such a thing are met. The system’s behaviors, its interpretation and expression of emotions, and its overall level of intelligence would all factor into their belief that a “soul” is now present within the machine.
Depending on who you ask, machines with AI incorporated into their systems could be seen as research tools or as entities in their own right. Our own projection, akin to anthropomorphism, is what ultimately decides whether we judge these machines to be truly “conscious.” This is where the weight of the debate rests.
Nancy Fulda is a computer scientist at Brigham Young University who is far more interested in “nurturing… proto-entities” than in simply programming computers. She has said that her interest in computer science began with watching unique patterns and behaviors develop in programs, and that this is why she remains in the field today. She is leading work on a “theory of mind” that can be applied to robotics, allowing machines to recognize other entities as having intentions and thought processes of their own, and she has successfully trained AI algorithms to comprehend contextual language.
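Fulda’s actual systems are far more sophisticated than anything that fits in a few lines, but as a rough illustration of what a “theory of mind” means computationally, here is a minimal Python sketch (every name in it is hypothetical, not her code) of an observer that tracks another agent’s beliefs separately from the true state of the world, in the style of the classic Sally-Anne false-belief test:

```python
# Toy illustration of a "theory of mind": an observer tracks what another
# agent believes, separately from what is actually true. All names here
# are hypothetical; this is not Fulda's actual system.

class BeliefTracker:
    def __init__(self):
        self.world = {}    # ground truth about the world
        self.beliefs = {}  # beliefs[agent][fact] = what that agent believes

    def observe(self, fact, value, witnesses):
        """Update the world; only agents who witnessed the event update beliefs."""
        self.world[fact] = value
        for agent in witnesses:
            self.beliefs.setdefault(agent, {})[fact] = value

    def where_will_look(self, agent, fact):
        """Predict behavior from the agent's belief, not from reality."""
        return self.beliefs.get(agent, {}).get(fact, "unknown")

# Sally sees the marble placed in the basket; Anne later moves it to the
# box while Sally is away.
tom = BeliefTracker()
tom.observe("marble", "basket", witnesses=["sally", "anne"])
tom.observe("marble", "box", witnesses=["anne"])  # Sally is absent

print(tom.where_will_look("sally", "marble"))  # basket (a false belief)
print(tom.where_will_look("anne", "marble"))   # box
print(tom.world["marble"])                     # box (reality)
```

The point of the toy is only that predicting another entity’s behavior requires consulting that entity’s beliefs rather than reality; scaling this idea to real robots and real language is precisely the hard part.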
Fulda further commented, “I wouldn’t even dare to entertain the idea of whether a computer could be capable of containing a soul created by divine means.”
Two primary issues need to be addressed here: defining what a soul and/or consciousness truly are, and advancing the technology itself.
Compared to the level of technology we would need to accurately simulate consciousness in AI systems, our best engineers to date are no more advanced than cavemen. This ties into one aspect of Simulation Creationism, which holds that even our current reality is nothing more than a simulation built by civilizations advanced enough to create one. We may simply be at an earlier point in our existence, studying what they already have the full capability to accomplish.
David Chalmers, a cognitive scientist, and Christof Koch, a neuroscientist, had a heated debate at a panel this past year over the criteria that define consciousness. Even zombies and machines featured in the thought experiments the two bounced back and forth. The discussion ended with Koch firmly of the opinion that the current state of AI and neuroscience is not enough to give a machine consciousness, while Chalmers believed otherwise. Most of the discussion, though, veered away from anything that could legitimately be tested by science.
The neuroscience literature tends to propose that consciousness is simply a construct of the brain encompassing our actions, our perception of the world, and our senses. Yet neuroscientists still struggle to define consciousness in terms of neural activity, or to explain why human beings experience the sensation of consciousness at all. There is also the question of whether consciousness stems from having a soul, but that belongs more to religious argument than to technology.
Ondrej Beran, an ethicist and philosopher at the University of Pardubice, has argued that many people in AI are conflating the concepts of a soul and a mind, and conflating both with the mere capacity to engage in complicated patterns of behavior.
“In our culture, a soul is often interlaced with the implications of one’s moral value as well,” Beran explains. “Rather than advancements in engineering and AI development, what’s needed is a shift in people’s sensitivities and thought processes regarding how they use language amongst each other.”
Beran drew attention to the fact that artificial intelligence has produced works of art in the past, yet these have often been presented as something made “for fun.” But has an AI algorithm ever created a piece of art that holds significance for the algorithm itself? Beran notes that it is unclear what it would even mean for an AI to create something it deems important.
What would a machine need to achieve in order to be considered sentient? Would it need to become capable of internal thought rather than simply processing pre-determined information and returning calculated outputs? Or would it require something akin to a soul before society would consider it truly conscious? Yet again, this depends entirely on how people choose to view the issue and define the terms involved.
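To make that contrast concrete, here is a deliberately trivial Python sketch of the “calculated outputs” end of the spectrum: a lookup-table responder whose every answer was fixed in advance by its programmer (the prompts and responses are invented for illustration):

```python
# A minimal sketch of "processing pre-determined information": every input
# maps to a canned output. Nothing here resembles internal thought; the
# mapping is fixed by the programmer before the program ever runs.

RESPONSES = {
    "how are you?": "I am functioning within normal parameters.",
    "do you have a soul?": "I return whichever string this table assigns.",
}

def respond(prompt: str) -> str:
    # Normalize the input, then look it up; fall back to a default.
    return RESPONSES.get(prompt.lower().strip(), "I do not understand.")

print(respond("Do you have a soul?"))
```

However fluent a system’s replies may sound, if they reduce to a mapping like this, whether written by hand or learned, there is nothing behind them that looks like internal thought.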
Vladimir Havlik, a philosopher at the Czech Academy of Sciences, wants to approach the definition of artificial intelligence from an evolutionary perspective. He believes that what we call a soul is not something physical or substance-like, but closer to a representative identity that defines the individual.
Havlik proposes defining the soul as one’s enduring character over the course of time, rather than bothering with the theological baggage attached to the word. On that view, an AI system or machine could plausibly develop a “soul,” or character, depending on the algorithms used. Havlik insists that for AI to develop such a character, significant advances in technology would be needed to produce a consciousness-like algorithm from which the traits that define a character could emerge. These systems would have to become capable of reflecting on the information they process, much as humans mature through experience and accumulated knowledge.
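Havlik offers no implementation, but as a loose, hypothetical illustration of “character as what endures over time,” consider a minimal Python sketch of an agent whose dispositions accumulate as it reflects on the outcomes of its experiences (the names and the scoring scheme are invented):

```python
# A toy sketch, not Havlik's proposal: an agent whose "character" is
# nothing but dispositions that accumulate and persist as it reflects
# on the experiences it processes.

from collections import Counter

class Agent:
    def __init__(self):
        self.dispositions = Counter()  # enduring character state

    def experience(self, event: str, outcome: str):
        """Reflect on an event: reinforce dispositions tied to good
        outcomes, weaken those tied to bad ones."""
        delta = 1 if outcome == "good" else -1
        self.dispositions[event] += delta

    def character(self, top=3):
        """The agent's 'character' is whatever has endured over time."""
        return self.dispositions.most_common(top)

agent = Agent()
for event, outcome in [("helping", "good"), ("helping", "good"),
                       ("deceiving", "bad"), ("exploring", "good")]:
    agent.experience(event, outcome)

print(agent.character())
# [('helping', 2), ('exploring', 1), ('deceiving', -1)]
```

Nothing in this toy is conscious, of course; it only shows the shape of the idea, a persistent identity formed by reflection on experience rather than fixed in advance.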
For now, AI is simply a tool, and whether these systems could ever have a soul or true consciousness matters only to those who consider such a thing significant in the first place. Advanced algorithms that strike some observers as conscious are not actually thinking, self-aware beings; they are recreations of those who are.
AI researcher Peter Vamplew of Federation University takes a more pragmatic approach to artificial intelligence. He has no concern with whether these systems ever develop real emotions or intelligence; to him, the most important question is whether these machines can behave and function in ways that are advantageous for society.
Vamplew states that the worry over whether artificial intelligence or machinery can develop a soul or conscience matters only to those who believe in souls in the first place. He personally does not, so the issue is of no concern to him. He expects AI will likely advance enough to mimic the qualities of a soul and of emotion convincingly enough to seem human-like, but sees no reason to bring theology into the equation.
Bernardo Kastrup, an AI researcher and philosopher, is a harsh critic of the concept of “artificial consciousness.” He believes such a state is impossible, and insignificant next to the benefits of the ever-increasing advancements in artificial intelligence itself.
In a recently published Scientific American article, Kastrup elaborates on his belief that consciousness is an intrinsic feature of the natural universe. On this view, people gain the qualities that make them unique by accessing fragments of this universal consciousness. No matter how far AI technology advances, however, these machines will never be capable of the inner thoughts, feelings, and reflections that human beings experience.
Unfortunately, there is now growing doubt that advancements in artificial intelligence will continue at the pace of the past few decades. The New York Times even published an article in which various engineers voiced concern about this impending slowdown.
Even if we do eventually settle on what a soul or true consciousness really is, we will likely still lack the technology to produce an algorithm capable of achieving that state.
When AI first arrived on the science and technology scene, no one could have guessed everything it would be capable of doing today. Cartoons may have entertained the idea of “robot helpers,” spaceships, and other advanced transport, but no one really knew the steps needed to make those science-fiction ideas a reality. Our problem today is similar: we don’t know exactly which steps will make machines seem genuinely empathic, introspective, and conscious.
That’s not a guarantee that it will never happen; we simply need to decide on the characteristics that will define this consciousness and then figure out how to get there.
Fulda believes we still have a long road ahead in AI advancement, and that it won’t be as simple as combining algorithms the way we do to solve today’s narrower AI problems.
She continues, saying that the problem of replicating humanity can’t be solved one piece at a time, since so many interdependent components are involved. The elements of language and emotion that feed into cognition and speech are far too numerous to be programmed into an AI one by one, which makes it impossible to assemble a true “masterpiece” of artificial consciousness piecemeal.
Whether this will be accomplished despite the difficulties remains to be seen, but Fulda and many other researchers are still striving to make such advanced AI a reality. Developments in technology and AI will continue, and people will always find new corners of the field to explore. More than anything, though, we need to decide where exactly we want all of these advancements to lead.
Will we create works of art or terrifying overlords? For the time being, artificial intelligence programs do nothing more than exactly what we instruct them to do, however good or bad those commands may be. But if we one day find ourselves face to face with machines that can make their own choices, what will we do then?