The Artificial General Intelligence (AGI) projects we are going to talk about today are considered less ambitious than strong AI. However, some scientists argue that computers will never be able to match the capabilities of human intelligence.
Both supporters and opponents of the idea that computers can solve any intellectual problem accessible to a human have plenty of arguments in defense of their positions. We will look at the arguments each side brings and try to figure out whether AGI has a chance now or in the future.
This article is based on a recent paper by Professor Ragnar Fjelland, "Why general artificial intelligence will not be realized", but we will be looking at more than just the arguments against.
Tacit knowledge
Since human intelligence is general (that is, it can solve almost any intellectual problem), human-like AI is often called artificial general intelligence (AGI). Although AGI would share this important property of human intelligence, it can still be classified as weak AI. It differs, however, from traditional weak AI, which is limited to specific tasks or domains. For this reason, traditional weak AI is sometimes called artificial narrow intelligence (ANI).
The ability to run algorithms at tremendous speed is a hallmark of ANI, but it does not bring it any closer to natural intelligence. In his famous book The Emperor's New Mind: Concerning Computers, Minds, and the Laws of Physics, published in 1989, the mathematician and physicist Roger Penrose suggested that human thinking is largely non-algorithmic and therefore cannot be modeled by an ordinary computer such as a Turing machine.
More than two decades before Penrose, the philosopher Hubert Dreyfus expressed similar ideas in his paper "Alchemy and Artificial Intelligence". He also wrote the book What Computers Can't Do, in which he argued that human knowledge is largely implicit (non-verbal) and cannot be formulated in a computer program.
In 1958, the chemist and philosopher Michael Polanyi formulated the concept of "personal" (or implicit, tacit) knowledge. Most of the knowledge we use in everyday life is tacit: we do not know what rules we apply when performing a task. As examples, Polanyi cited swimming and cycling, in which all the movements are performed automatically.
The problem is that much of our expertise remains tacit. Many of us, for example, are experts at walking, but if we try to formulate exactly how we walk, we give only an extremely vague description of the real process.
AI milestones
In the 1980s, the arguments of Polanyi, Dreyfus, and Penrose began to lose force thanks to the advent of neural networks that could learn on their own, without explicit external instruction.
Although large-scale projects have failed (for example, Japan's Fifth Generation Computer project, launched in 1982, promised to create AI through massively parallel logic programming), history tends to remember only the successes. The most notable AI achievement at the end of the 20th century was demonstrated by IBM: in 1997, Deep Blue defeated world chess champion Garry Kasparov in a six-game match.
The IBM supercomputer was built to solve a specific problem on a chessboard, and not everyone saw it as a success for AI. In 2011, however, IBM Watson beat humans on the quiz show Jeopardy! (known in Russia as "Svoya Igra", "Own Game"). Compared with Deep Blue, Watson was a colossal step forward: the system understood natural-language queries and found answers across different fields of knowledge.
It seemed that a new era of expert systems was about to begin. IBM planned to harness the power of the computer in medicine. The idea lay on the surface: if Watson had access to all the medical literature, it could offer better diagnoses and treatments than any physician. In the following years IBM took part in several medical projects but achieved only modest success. Today the company's efforts are focused on developing AI assistants that perform routine tasks.
Of course, we cannot omit the flagship achievement of today's AI developers: the AlphaGo system, which seemed to demolish the arguments against AGI once and for all. AlphaGo showed that computers can handle tacit knowledge. DeepMind's approach was successfully applied to Atari Breakout, Space Invaders, and StarCraft, but it turned out that the system lacks flexibility and cannot adapt to changes in a real environment. Because real problems arise in an ever-changing world, deep reinforcement learning has so far found few commercial applications.
Cause and effect
In recent years, AI advocates have gained a powerful new tool: applying mathematical methods to huge amounts of data to find correlations and estimate probabilities. Although big data does not reflect the ambition of creating strong AI, its proponents argue that strong AI is unnecessary. In Big Data: A Revolution That Will Transform How We Live, Work, and Think, Viktor Mayer-Schönberger and Kenneth Cukier say that we may not need to raise computers to the level of human intelligence; on the contrary, we can change our own thinking to become more like computers.
With big data we operate on correlations, but we do not always understand which is the cause and which is the effect. In The Book of Why: The New Science of Cause and Effect, Judea Pearl and Dana Mackenzie argue that to create real AI, a computer must be able to handle causation. Can machines represent causal relationships in a way that lets them quickly obtain the information they need, answer questions correctly, and do so with the ease of a three-year-old child?
Neural networks have a related flaw: we do not really know why a system makes a particular decision. A few years ago, a team from the University of Washington built a program trained to distinguish huskies from wolves. The task is quite difficult since, as you can see in the illustration, the animals look alike. Despite the difficulty, the system achieved 90% accuracy. After analyzing the results, the team realized that the network performed so well only because the images with wolves mostly contained snow.
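The husky study is easy to mimic on synthetic data. Below is a minimal sketch (the features, numbers, and "snow" variable are invented for illustration and are not taken from the Washington experiment): a plain logistic regression trained on data where the background correlates with the label ends up relying on the background rather than the animal, and a hypothetical "wolf on grass" is then misclassified.

```python
import math
import random

random.seed(0)

# Synthetic "husky vs wolf" data (invented for illustration).
# Feature 0: a weak, noisy cue from the animal itself.
# Feature 1: "snow in the background", strongly correlated with the label.
def make_example():
    is_wolf = random.random() < 0.5
    animal_cue = (1.0 if is_wolf else 0.0) + random.gauss(0, 2.0)
    p_snow = 0.9 if is_wolf else 0.1
    snow = 1.0 if random.random() < p_snow else 0.0
    return [animal_cue, snow], (1.0 if is_wolf else 0.0)

data = [make_example() for _ in range(2000)]

# Plain logistic regression trained by stochastic gradient descent.
w, b, lr = [0.0, 0.0], 0.0, 0.05
for _ in range(20):
    for x, y in data:
        p = 1.0 / (1.0 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))
        g = p - y
        w[0] -= lr * g * x[0]
        w[1] -= lr * g * x[1]
        b -= lr * g

accuracy = sum(
    ((w[0] * x[0] + w[1] * x[1] + b) > 0) == (y > 0.5) for x, y in data
) / len(data)
print(f"accuracy={accuracy:.2f}, animal weight={w[0]:.2f}, snow weight={w[1]:.2f}")

# A "wolf photographed on grass": typical wolf cue, but no snow.
wolf_on_grass = 1.0 / (1.0 + math.exp(-(w[0] * 1.0 + w[1] * 0.0 + b)))
print("P(wolf | wolf on grass) =", round(wolf_on_grass, 2))
```

The model reaches roughly the accuracy reported in the story, yet the learned weight on the snow feature dwarfs the weight on the animal itself, which is exactly the failure the researchers uncovered.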
What if…
The historian Yuval Harari argues that between 70 and 30 thousand years ago a cognitive revolution took place, whose hallmark was the ability to imagine something that does not exist. As an example he cites the Lion-man (or Lion-woman), the oldest known ivory figurine, found in the Stadel Cave in Germany. The figure has a human body and a lion's head.
Pearl and Mackenzie refer to Harari and add that the creation of the Lion-man was the forerunner of philosophy, scientific discovery, and technological innovation. The key prerequisite for such creation was the ability to ask questions of the form "What happens if I do...?" and to answer them.
However, computers still do poorly with causal relationships. Just as 30 years ago, machine learning programs, including those based on deep neural networks, operate almost entirely in associative mode. But that is not enough: to answer causal questions, we must be able to intervene in the world.
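The gap between association and intervention can be shown with a toy simulation in the spirit of Pearl's do-operator (a sketch with invented numbers, not an example from the book): a hidden common cause makes two variables strongly correlated, yet forcing one of them to a value by intervention has almost no effect on the other.

```python
import random

random.seed(1)

def simulate(n, do_icecream=None):
    """Toy structural causal model (all numbers invented for illustration):
    temperature -> ice cream sales, temperature -> drownings.
    Ice cream sales have NO causal effect on drownings."""
    rows = []
    for _ in range(n):
        temp = random.gauss(20, 8)
        if do_icecream is None:
            icecream = 2.0 * temp + random.gauss(0, 4)  # observe the system
        else:
            icecream = do_icecream                      # intervene: do(X = x)
        drownings = 0.5 * temp + random.gauss(0, 2)
        rows.append((icecream, drownings))
    return rows

def corr(rows):
    xs, ys = zip(*rows)
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in rows)
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Association: purely observational data show a strong correlation.
observed = simulate(5000)
print("correlation:", round(corr(observed), 2))

# Intervention: force sales to fixed values and compare outcomes.
mean_drown = lambda rows: sum(d for _, d in rows) / len(rows)
effect = mean_drown(simulate(5000, do_icecream=100)) - mean_drown(simulate(5000, do_icecream=0))
print("effect of do(icecream):", round(effect, 2))
```

A system that only fits the observed rows would happily predict drownings from ice cream sales; answering "what happens if we ban ice cream?" requires the structural model, which is exactly what purely associative learners lack.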
According to Ragnar Fjelland, the root of the problem is that computers have no model of reality: they cannot change the world around them and do not interact with it in any way.
Terminological impasse
A less obvious problem is that, according to some experts, we cannot succeed in a field without understanding its "rules of the game". There are difficulties even with terminology: we do not know what exactly to call artificial intelligence. Moreover, our understanding of natural intelligence is far from complete; we simply do not fully know how the brain works.
Consider, for example, the thalamus, which relays sensory and motor information. This part of the brain was first described by the ancient Roman physician Galen. In 2018, an atlas of the thalamus was created: based on histology, 26 thalamic nuclei were isolated and then identified on MRI. This is a great scientific achievement, but scientists believe there are more than 80 nuclei in the thalamus (the exact number has not been established to this day).
In On the Measure of Intelligence, François Chollet (an AI researcher at Google, creator of the Keras deep learning library, and a contributor to the TensorFlow machine learning framework) points out that until there is a global consensus on what intelligence is, attempts to compare different intelligent systems with human intelligence are doomed to fail.
Without clear metrics it is impossible to record achievements and, consequently, to determine where exactly the development of artificial intelligence systems should go. Even the notorious Turing test cannot serve as a lifeline: we know about its weakness from the "Chinese room" thought experiment.
Presence as a sign of intelligence
Most proponents of AGI (and strong AI) today follow Yuval Harari's arguments. In 21 Lessons for the 21st Century, he appeals to neuroscience and behavioral economics, which allegedly showed that our decisions are not the result of "some mysterious free will" but of "billions of neurons in the brain calculating all possible probabilities before making a decision."
Therefore, Harari argues, AI can do many things better than humans. As examples he cites driving on a street full of pedestrians, lending money to strangers, and negotiating commercial deals, all tasks that require the ability to "correctly assess the emotions and desires of other people." The rationale: "If these emotions and desires are really nothing more than biochemical algorithms, there is no reason why computers cannot decipher these algorithms, and do so much better than any Homo sapiens."
This quote echoes a thought of Francis Crick's in The Astonishing Hypothesis: "'You', your joys and your sorrows, your memories and your ambitions, your sense of personal identity and free will, are in fact no more than the behavior of a vast assembly of nerve cells and their associated molecules."
There is also an alternative view: even the most abstract theories are rooted in our everyday world. The philosopher Edmund Husserl, founder of phenomenology, mentions Einstein's theory of relativity, noting that it depends on "Michelson's experiments" and their confirmation by other researchers. To conduct experiments of this kind, scientists must be able to move around, handle instruments, and communicate with colleagues.
As Hubert Dreyfus noted, we are bodily and social beings living in a material and social world. To understand another person, one does not need to study the chemistry of his brain; rather, one needs to be in that person's "skin", to understand his life-world.
To illustrate Dreyfus's point, the writer Theodore Roszak proposed a thought experiment. Imagine watching a psychiatrist at work. He is a hard-working, experienced man with what is obviously a thriving practice. The waiting room is full of patients with various emotional and mental disorders: one is on the verge of hysteria, another is tormented by thoughts of suicide, others suffer from hallucinations or the most severe nightmares, and some are driving themselves mad with the conviction that they are being watched by people who want to harm them. The psychiatrist listens carefully to each of them and does his best to help, but without much success. On the contrary, the patients' condition only seems to get worse, despite his heroic efforts.
Now Roszak asks us to place the situation in a broader context: the psychiatrist's office is in Buchenwald, and the patients are concentration camp inmates. Biochemical algorithms will not help us understand them; what is really needed is knowledge of the broader context. The example simply makes no sense if we do not know that the office is inside a concentration camp. Few of us can put ourselves in the shoes of a prisoner in Nazi Germany, and we cannot fully understand people in situations far removed from our own experience. But we can still understand something, because we exist in the same world as other people.
Computers, in turn, exist in their own machine world, which at least partly explains the problems that prevent IBM Watson Health and Alphabet's DeepMind from solving real-world tasks. IBM ran into a fundamental mismatch between how machines learn and how doctors work. DeepMind found that solving the problems of Go did not bring it any closer to answering questions about curing cancer.
Conclusion: computers go out into the world
The problems are known not only to AGI's critics. Researchers around the world are looking for new approaches, and there is already some success in overcoming the barriers.
Although, in the words of AI theorist Roger Schank, even a dog is smarter than IBM Watson, the future of medicine undoubtedly belongs to computer systems. A paper published in June 2020 demonstrates a remarkable success by Pharnext: its AI found a simple and affordable candidate treatment for Charcot-Marie-Tooth disease, a genetic disorder.
The AI assembled a surprising cocktail of three approved drugs to alleviate this hereditary motor and sensory neuropathy. The composition of the new "medicine" is baffling: the first component is a drug used to treat alcoholism, the second acts on opioid receptors and is used against alcohol and opioid dependence, and the third is, of all things, a sugar substitute.
After sifting through millions of combinations, the AI settled on exactly this one. And it worked: experiments in mice and humans showed strengthened connections between nerves and muscles. Importantly, the patients' well-being improved, and the side effects were minor.
Speaking of the problem of presence in the world, it is worth mentioning an ambitious study recently conducted at the Technical University of Munich, where a robot was taught to perceive its own body and act in the real world. The study is part of the large-scale European project SELFCEPTION, which combines robotics and cognitive psychology to develop more perceptive machines.
The researchers set out to give robots, and artificial agents in general, the ability to perceive their bodies the way humans do. The main goal was to improve their ability to interact under uncertainty. As a foundation they took the active inference theory of the neuroscientist Karl Friston, who lectured in Russia last year (for those interested, recordings are available in both Russian and English).
According to this theory, the brain constantly makes predictions, checks them against information coming from the senses, makes adjustments, and restarts the cycle. For example, if a person walking toward an escalator suddenly finds the way blocked, he adapts his movements accordingly.
An algorithm based on Friston's free energy principle (a mathematical formalization of one of the predictive theories of the brain) treats perception and action as working toward a common goal: reducing prediction error. Under this approach, for the first time in machines, sensory data is brought into agreement with the predictions of an internal model.
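The prediction-update cycle can be sketched as a loop that descends the squared prediction error (a heavily simplified illustration of the predictive-brain idea, not Friston's actual free-energy formalism; all numbers are invented):

```python
import random

random.seed(2)

# Minimal predictive loop: an internal estimate mu is repeatedly
# corrected toward noisy sensory observations by descending the
# squared prediction error (observation - mu)**2.
hidden_cause = 4.0   # the true state of the world (unknown to the agent)
mu = 0.0             # the internal model's current estimate
lr = 0.1             # update rate for each correction

for step in range(200):
    observation = hidden_cause + random.gauss(0, 0.5)  # noisy sense data
    error = observation - mu     # prediction error
    mu += lr * error             # adjust the model, then restart the cycle

print("final estimate:", round(mu, 2))
```

After enough cycles the estimate settles near the hidden cause: perception (measuring the error) and action on the model (the update) jointly minimize the same quantity, which is the core intuition behind the approach described above.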
In the long term, this research may help develop an AGI with human-like adaptability and interactivity. It is with this approach that the future of artificial intelligence is associated: if we release AI from cramped servers into the real world, perhaps someday we will be able to trigger self-discovery in machines.