Although the term “artificial intelligence” (AI) is not strictly defined, it is the collective designation of attempts to give machines an intellect similar to that of humans.
AI can be said to include individual machines, such as translation devices and the TV quiz show champion Watson, that use technology born from pertinent research, as well as equipment for analyzing quantities of data so vast that they are beyond human capacity.
It would appear that we are entering a new age in which the achievements of AI research are being applied not only in the field of computing, but also in robotics, cerebral neuroscience and communications technology.
Originally, the concept of machines that think emerged from computer research following World War II, and was advocated in 1947 by British mathematician Alan Turing. In 1956, American scholars John McCarthy and Marvin Minsky held a conference at Dartmouth College in Hanover, N.H., where the term artificial intelligence was first used. It became the dream of many a researcher.
In the 1950s, development began on a machine that could play chess. The science fiction classic film “2001: A Space Odyssey,” released in 1968, features a scene in which the artificially intelligent computer HAL 9000 plays chess with a crew member of the spaceship it controls. In 1997, some 40 years after development began, the IBM supercomputer Deep Blue won a match against Garry Kasparov, then the world chess champion, and the objective was finally achieved.
However, a machine that wins a chess match against a human is not thereby capable of thinking like a human. Machines that are proficient at the game can neither converse with their human opponents nor gauge those opponents' emotions. They have been given only a limited intelligence.
One obstacle to AI development is a fundamental weakness of computers: they cannot perform tasks they have not been programmed to do. The instant an unanticipated situation arises, a machine becomes incapable of responding. This is called the “frame problem,” and it was a particular headache for researchers in the 1980s.
This proved to be a turning point, and from the 1990s new trends emerged in AI research. One of them focused on the body, as with efforts to give the humanoid robot iCub the intelligence of a 3-year-old child. In Japan, there have also been attempts to discover the mechanism by which intelligence develops by building robots whose structures function much like human bone and muscle.
Another trend is the “neural network” approach, based on the results of research into human and animal brains. The idea is to build into a computer a mechanism analogous to the brain's network of nerve cells, so that the computer functions like a brain. In living creatures, the more often signals pass back and forth between particular nerve cells, the stronger the connection between them gradually becomes; by reproducing this property in computers, researchers enable them to gain the benefits of repetitive learning.
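The strengthening-through-repetition principle described above can be sketched in a few lines of code. This is a minimal illustration of a Hebbian-style weight update, not anything from the article itself; the function name, learning rate and activity values are all invented for the example.

```python
# Illustrative sketch: a connection ("weight") between two model neurons
# grows stronger each time both are active together, mimicking the
# repetitive-learning property described in the text.

def hebbian_update(weight, pre, post, rate=0.1):
    """Strengthen the connection in proportion to joint activity."""
    return weight + rate * pre * post

weight = 0.0
for _ in range(100):  # repeated co-activation strengthens the link
    weight = hebbian_update(weight, pre=1.0, post=1.0)

print(round(weight, 2))  # after 100 co-activations: 10.0
```

Real neural networks layer many such connections and adjust them against training data, but the core idea, repetition strengthening connections, is the same.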
In fact, neural network research also enjoyed a boom in the 1980s, only to fade by the 1990s without producing many concrete accomplishments.
Why are neural networks now coming under the spotlight once more? Andrew Ng, director of Stanford University's Artificial Intelligence Lab, points to two factors: technological innovations in semiconductors and advances in software.
Over the two decades beginning in the 1990s, computer chip performance increased roughly 30,000-fold, and together with advances in software, what was impossible then has become possible today.
Adding to this is the accumulation of information due to the commercialization of the Internet in the 1990s.
For example, by collecting from the Internet all official United Nations documents, each translated into at least five or six languages, translation computers can acquire a massive stock of example sentences for reference. Before the Internet, no one could have imagined that it would one day be possible to track trends from generation to generation, and gauge public security from nation to nation, by accumulating and analyzing every word posted on Facebook and Twitter. An environment is taking shape that could foster the development of an intelligence surpassing that of human beings.
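The translation approach sketched above works by reusing stored example sentences rather than reasoning about grammar. The toy below shows the idea in miniature; the sentence pairs and the word-overlap matching rule are illustrative inventions, far cruder than any real system.

```python
# Toy example-based translation: given a tiny "parallel corpus" of
# stored sentence pairs, translate by finding the stored example that
# shares the most words with the input and reusing its translation.

EXAMPLES = {
    "the meeting is adjourned": "la séance est levée",
    "the resolution is adopted": "la résolution est adoptée",
}

def translate(sentence):
    """Return the stored translation of the best-overlapping example."""
    words = set(sentence.lower().split())
    best = max(EXAMPLES, key=lambda ex: len(words & set(ex.split())))
    return EXAMPLES[best]

print(translate("The meeting is adjourned"))  # → "la séance est levée"
```

With millions of UN sentence pairs instead of two, and statistical matching instead of word overlap, this lookup-and-reuse strategy becomes the corpus-based translation the article describes.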
THE HISTORY OF AI, AS SEEN BY ITS PROPONENT: DR. MARVIN MINSKY
At the conference held at Dartmouth College in 1956, Minsky became one of the first people to propose artificial intelligence as a field of research. Today he is a professor emeritus at the Massachusetts Institute of Technology (MIT), and we asked him to take a look back at the history of AI.
* * *
I believe that intelligence is the ability to think up a solution when a new problem arises. When thinking about artificial intelligence, it's important to study the common knowledge that humans acquire naturally, and how it is developed, and then theorize it. It's that kind of common knowledge that forms the basis of our thought processes for finding a solution when we fail at something.
Even so, just over half a century after I proposed the concept, I'm disappointed in the current state of artificial intelligence. Progress has been very slow lately.
Until the 1980s, under the Cold War paradigm, countries competed over research findings. Today, the focus has shifted to private sector research, and there is no longer the leeway to keep studying a single aspect over a long period. I feel as if researchers have been too attached to old-fashioned methods. There are some examples of accomplishments in research becoming commercialized, but only for specific usages.
I still think that the dream of artificial intelligence will come true someday. It will be very difficult, however, and may take hundreds of years.
(This article was written by Etsushi Tsuru and Ikuya Tanaka.)