Anyone interested in artificial and machine intelligence should have read Ray Kurzweil’s The Age of Spiritual Machines by now. The book made waves when it debuted in the late 1990s, and its topic has since taken off. Kurzweil argued that machines were set to match and then surpass human cognition, a claim that became the theme of his later books. People everywhere began chattering about machines that could mirror human consciousness.
He went on to write two more books on the subject, The Singularity Is Near and How to Create a Mind, and he is credited with popularizing the idea that machines will one day exceed the powers of the human brain and AI will prevail. Extrapolating from Moore’s Law, he believes that machines will eventually acquire the computational power needed to simulate brain function. Then we will have to keep pace with these super machines!
His detractors have much to say in response. In an early discussion of the book, John Searle, Thomas Ray, and Michael Denton were his major critics. Searle revived his Chinese Room thought experiment to argue that computers cannot actually understand anything, while Denton pointed out how little we know about the neurons of the human brain, given their complexity and richness.
He went on to say that we certainly cannot model them with a computer program! Despite these objections, Kurzweil continues to expand the discussion. He has every confidence that computers can do the job, and his trust in AI is undiminished. He was still going strong at a tech conference in Seattle in 2019; time has not altered his convictions.
Apparently, critics don’t matter to Kurzweil. Yet there is Erik Larson, author of The Myth of Artificial Intelligence, who challenges him with elegance and credibility. Larson has not succumbed to the hype like so many others. He is not a strict contrarian hoping AI fails, nor is he out to undermine its latest incarnation, the merger of machine learning and big data, or to sabotage new research. He has contributed a great deal to the field and to what is called AGI, or artificial general intelligence, and he keeps an open mind where so many others do not. His purpose, however, is to separate fantasy from fact regarding AI.
Parable of Keys Under a Light Post
AI as a field has been around a long time now – more than six decades. The main language back in the 1980s was LISP, and innovators were trying to integrate it with Fortran code. It was a thorny problem, but it marked the beginning of a shift from AI ruled by expert systems to the approach of computational intelligence: evolutionary computing, neural nets, and “fuzzy” sets. Today it is all about big data and machine learning.
The old rule-based AI was fading in the face of computational intelligence research. The new methods offered fresh solutions to old problems, but no one was talking about artificial minds that resemble human minds. Still, the sector was ablaze with advocates and generated huge profits for tech. Imagine the audacity of assuming that funding alone would produce a comparable degree of innovation!
AI continues to garner criticism. Larson uses the old parable of the drunk searching for his keys under a light post – not because he dropped them there, but because that is where the light is. The keys are actually far away in the dark. Similarly, AI researchers search where their current methods shine, while machine cognition, if it is possible at all, lies elsewhere; a machine does not have a cognitive life. Going a step beyond the parable, Larson maintains that AGI does not yet have even the outline of a solution: human intelligence cannot simply be reduced to machine intelligence. So, if in fact there are keys to unlocking AGI, we seem to be looking for them in the wrong places, and there may be no such keys at all.
Larson does not argue that artificial general intelligence is impossible, only that we have no grounds to think it inevitable. He is therefore directly challenging the inevitability narrative promoted by people like Ray Kurzweil, Nick Bostrom, and Elon Musk. At the same time, Larson leaves AGI as a live possibility throughout the book, and he seems genuinely curious to hear from anybody who might have good ideas about how to proceed – though, as he shows, recent research has barely scratched the surface of the subject.
His central point, however, is that such good ideas are for now wholly lacking — that research on AI is producing results only when it works on narrow problems and that this research isn’t even scratching the surface of the sorts of problems that need to be resolved in order to create an artificial general intelligence. Larson’s case is devastating, and I use this adjective without exaggeration.
AI remains a contested enterprise whose grand ambitions are yet to be affirmed. Decades of work have gone into it already, and more will be needed to put it on its rightful pedestal. Computer programming has shifted enormously in the meantime, yet even with machine learning taking off, creating artificial minds to rival human minds is not on anyone’s plate. The profits for big tech, and the pride of the inventors, would be enormous if it were.
A Philosopher and a Programmer
Larson is trained as both a philosopher and a programmer, a vital combination for research in AI. His background in philosophy gives him a more reflective perspective: his thinking is measured, and he avoids the inflated claims and shameless promises of the techies about AI. AGI is not just around the corner. Such prophecies will fail like those of the Watchtower Society over the second coming. Tech egos have no boundaries; they expect imminent results, and this kind of wishful thinking keeps them going.
What’s more, beyond his impressive credentials, Larson has a literary and humanistic side. He is not the usual lab rat, and he is unlikely to set the bar too low. The mathematician George Polya used to quip that if you can’t solve a given problem, you should find an easier problem that you can solve. This is sound advice if the easier problem meaningfully illuminates the harder one (ideally, by actually helping you solve it). Much of AI has followed Polya’s advice – start with something simple before confronting the really tough problems facing AGI – but Larson argues that the easy, narrow problems we have solved shed little light on the hard ones, so a lot of time has been wasted in false pursuits.
As for Larson, world-class chess-, Go-, and Jeopardy-playing programs are impressive as far as they go, but they prove nothing about whether computers can be made to achieve AGI. He has stellar arguments for thinking we are not about to solve the AI problem, two of which are salient. The first addresses the nature of inference, the second human language – two key areas of intelligence. Abductive inference, or inference to the best explanation, is the kind humans use effortlessly and for which we have no computational recipe. Computer scientists are well aware that the quest for AGI requires it, but try as they might, they are groping in the dark, resorting instead to hybrids of deductive and inductive inference.
While such hybrids sound impressive, in Larson’s view no combination of deduction and induction can reconstruct abduction. Abductive inference in principle requires generating hypotheses that illuminate the facts needing explanation, and the space of candidate hypotheses is virtually infinite. How does human intelligence sift through them effectively and uncover the most relevant? For Larson, we don’t have a clue – at least not one we can give to a computer.
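To make the contrast concrete, here is a toy sketch of “inference to the best explanation” in Python. It is my own illustration, not Larson’s: the observation, the hand-picked hypotheses, and the plausibility scores are all invented for the example.

```python
# Toy "inference to the best explanation" (abduction) - a hedged sketch.
# The observation and candidate hypotheses below are illustrative inventions.

observation = "the grass is wet"

# A real abductive reasoner faces a virtually infinite hypothesis space;
# here we hand-pick a tiny one, marking whether each hypothesis explains
# the observation and giving it a crude prior plausibility.
hypotheses = {
    "it rained last night": {"explains": True,  "prior": 0.6},
    "the sprinkler ran":    {"explains": True,  "prior": 0.3},
    "a water main burst":   {"explains": True,  "prior": 0.05},
    "the moon is full":     {"explains": False, "prior": 0.5},
}

def best_explanation(obs, hyps):
    # Keep only hypotheses that explain the observation, then pick the
    # most plausible one. The hard, unsolved part - generating and ranking
    # *relevant* hypotheses in the first place - is assumed away here.
    candidates = {h: v["prior"] for h, v in hyps.items() if v["explains"]}
    return max(candidates, key=candidates.get)

print(best_explanation(observation, hypotheses))
```

The sketch “works” only because the hypothesis space and its scores were supplied by hand; Larson’s point is precisely that no known algorithm produces them.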
Larson’s second argument concerns human language, and it shows why AGI is so far a rather limited concept and nowhere near lift-off. Syntax (the fitting of words and letters together) must be related to semantics, the meanings of words in context – and of course words change their meanings all the time. Then there is pragmatics, the speaker’s intent in using language. In Larson’s view, we cannot at the moment computationally represent our knowledge of the semantics and pragmatics of language, and it is vital that we do.
Of note, computers today cannot even solve the linguistic puzzles that fascinated people fifty or more years ago – puzzles any person understands easily. These are the single-sentence Winograd schemas: a sentence contains a pronoun with two possible antecedents, and while the right one is immediately obvious to a human reader, it remains out of reach for modern “machines”, which perform little better than chance. This explains why Alexa and Siri are not good at conversation!
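A classic schema (due to Terry Winograd himself) makes the point vividly. The little Python sketch below is my own illustration, not Larson’s; the data structure and names are invented for the example.

```python
# A Winograd schema: two sentences differing in a single "special" word,
# where that one change flips which antecedent the pronoun refers to.
# The example sentence is Terry Winograd's classic.

schema = {
    "template": "The city councilmen refused the demonstrators a permit "
                "because they {verb} violence.",
    "pronoun": "they",
    "antecedents": ("the city councilmen", "the demonstrators"),
    # Swapping one verb flips the correct referent:
    "variants": {
        "feared": "the city councilmen",
        "advocated": "the demonstrators",
    },
}

for verb, referent in schema["variants"].items():
    sentence = schema["template"].format(verb=verb)
    print(f'{sentence}\n  -> "{schema["pronoun"]}" refers to {referent}\n')
```

No statistical pattern over the words alone settles the question: resolving “they” requires commonsense knowledge about who fears and who advocates violence, which is exactly the kind of knowledge Larson argues we cannot yet represent computationally.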
We have to admit that The Myth of Artificial Intelligence is both insightful and timely; it is also rather funny. Larson is a clever writer, and his references are often delightfully preposterous. His stories are colorful, like that of Eugene Goostman, a chatbot posing as a 13-year-old Ukrainian boy who deploys sarcasm and misdirection on the judges in a Turing test, convincing some of them that he is human. Of course, Goostman did not genuinely pass the test – and computers are not in the vicinity of passing it in any case – and the whole episode is recounted with tongue in cheek. The story of Tay, the Microsoft chatbot, is of the same ilk: how funny that it learned to make racist tweets so quickly that Microsoft had to retire it.
Distinguishing Humans from Gorillas
Among the best stories is Larson’s recounting of the Google image recognizer that identified humans as gorillas. The incident is darkly funny in itself, but more so in how Google tried to solve the issue. The tech giant could just as easily have worked to improve the recognizer’s ability to distinguish gorillas from humans. But no… Google elected to remove all references to the big apes.
It is akin to going to a doctor to treat an infected finger, wanting full use restored. Google, on this analogy, was the doctor who chops the finger off to solve the problem. Yes, the infection is gone, but so is the finger!
Whatever your position on AI, we are all part of a machine-loving culture. The promise of AGI is wondrous, and we expect a lot – it has risen to the level of a religion. Humans as machines and machines as humans: it can get preposterous. But the real diehards believe it is possible, and coming, and that we should welcome it with open arms.
We give kudos to Larson for exposing and deflating this inevitability narrative in The Myth of Artificial Intelligence. How odd to imagine humans as pets for machines, or machines as ruling overlords! It is all fanciful right now, without backing from science or philosophy. But who knows…
We end with Simulation Creationism, a theory that addresses many of these questions along with the unsolved riddle of the nature and existence of the universe. The debates above are all very interesting, but they leave a lot of issues dangling.
Thanks to the theory’s creator, world expert Nir Ziso, we have a plausible account of creation. Ziso has the answer! Reality and existence, he argues, can be explained by accepting the possibility that we are all part of a simulation – one with the explicit objective of researching and monitoring events surrounding our notion of creation and human life. A divine deity has created our cosmos to study the processes involved.
The model holds that everything mankind sees, smells, does, and thinks is predetermined. In fact, a relay station transmits this data to an end observer, whose reality is a “movie”. The observer’s consciousness is seen as the simulation component appointed to record his emotional responses to the transmitted events and stimuli.