With the invention of computers, man found an implement which relieved him
of mental burden in the same way as machines relieved man of physical
effort during the industrial revolution. However, some people have sought to expand the computer’s capabilities to emulate human behavior more fully.
In the minds of some mathematicians, philosophers, psychologists, and computer scientists there has been a nagging question: can a computer act and think like a man? Can computers display moods, emotions, awareness, and understanding, attributes that come easily to man?
Many scientists assert that it is not possible for computers to emulate human thought, since thought cannot be divided into small procedures or algorithmic processes, which is the only way a computer can be programmed. Their argument is based on biological and psychological studies of the brain. At the least, present-day computers cannot duplicate the intricate processes of the human brain, in which billions of neurons interact with one another in precise and well-defined ways, creating for us the image of the external world.
Artificial intelligence is defined by the dictionary as ‘the ability of a machine to perform those activities that are normally thought to require intelligence.’ There is a branch of computer science by this name, concerned with the development of machines having that ability. Put simply, it is the ability of a machine to learn, to organize sensory patterns, to play games (chess, for example, as IBM’s showpiece computer Deep Blue did), and to find proofs of mathematical theorems. The approach taken by computer scientists is to use discrete symbolic manipulation, a function that present-day digital computers perform very aptly, in order to emulate intelligence.
Alan Turing, one of the greatest mathematicians of the twentieth century, believed that machines could be programmed to exhibit intelligent behavior. In 1950 he proposed the Turing Test, an ‘imitation game’, to see whether machines could actually think. In this test, a questioner seated in a closed room interrogates two sources outside the chamber. One source is a human being, the other a computer. If the questioner cannot distinguish between the answers given by the man and the machine, one can infer that the machine is simulating human intelligence. To date, no machine has come close to passing this test.
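The structure of the test can be sketched in code. The judge, respondents, and questions below are invented placeholders; Turing specified only the protocol, not the participants.

```python
def imitation_game(judge, respondents, questions):
    """One round of the imitation game.

    respondents maps an anonymous label ('A', 'B') to an answering
    function; the judge sees only the labeled transcripts, never
    which label hides the machine. Returns the label the judge
    believes belongs to the human.
    """
    transcripts = {
        label: [(q, answer(q)) for q in questions]
        for label, answer in respondents.items()
    }
    return judge(transcripts)

# Hypothetical participants: a machine that answers in rigid capitals
# and a human who does not.
machine = lambda q: "QUERY RECEIVED. PROCESSING."
human = lambda q: "Let me think about that for a moment."

# A naive judge: pick the respondent whose answers are not all capitals.
def judge(transcripts):
    for label, qa_pairs in transcripts.items():
        if not all(ans.isupper() for _, ans in qa_pairs):
            return label
```

A machine passes only when no judge can reliably pick out the human's label; this toy machine gives itself away at once.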
We need to take a closer look at how the computer functions.
A present-day digital computer solves problems by executing a program that expresses an algorithm. An algorithm, in computer terminology, is the precise statement of a sequence of steps, a procedure, required for performing a particular function. So far, every such procedure has been developed by a human programmer.
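Euclid's procedure for the greatest common divisor is a classic example of an algorithm in this sense: a finite, precisely stated sequence of steps that a machine can follow blindly.

```python
def gcd(a, b):
    """Euclid's algorithm: repeat one precise step until done."""
    while b != 0:
        a, b = b, a % b  # replace (a, b) with (b, a mod b)
    return a

print(gcd(48, 18))  # → 6
```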
Thus we are led to the following question: can the functioning of the human brain be reduced to a series of logical steps, that is, to an algorithm? So far, work on AI has shown only that computers can solve problems in narrow, specialized domains. One example is the expert system known as MYCIN, which diagnoses bacterial infections of the blood and prescribes drug therapy. Its knowledge consists of hundreds of rules. There are other programs like this, but all have very limited capability and applicability.
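The rule-based style of a system like MYCIN can be sketched as forward chaining over if–then rules. The rules and facts below are invented for illustration and are not MYCIN's actual medical knowledge (which also attached certainty factors to each rule).

```python
# Invented toy rules: (set of conditions, conclusion).
RULES = [
    ({"gram_negative", "rod_shaped"}, "likely_enterobacteriaceae"),
    ({"likely_enterobacteriaceae", "hospital_acquired"}, "consider_aminoglycoside"),
]

def forward_chain(facts, rules):
    """Fire every rule whose conditions hold, until nothing new follows."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts
```

Chaining the two rules turns raw observations into a recommendation, yet the system ‘knows’ nothing outside its rule set, which is exactly the limitation described above.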
The main problem with computer-created intelligence is that it cannot create a system which assimilates knowledge from diverse fields and integrates it into a "common sense knowledge-base".
Some scientists believe that AI has bypassed the capability known as common sense. In the words of one such scientist, Hubert Dreyfus, "A typical human would expect that a glass of water when broken would splatter its contents". But a computer does not know that unless it is specifically programmed to.
Expert systems, which are rule-based, cannot compete with human common sense. Scientists like Dreyfus suggest that not every aspect of understanding can be rule-based; much of it is experience-based. For example, a person who learns that a glass of water splatters on breaking can conclude, without being instructed, that a jug of water will do the same thing on breaking; a computer cannot draw a similar conclusion. Dreyfus asserts that an AI-programmed computer of today lacks the common sense of even a two-year-old child.
This leads to the question of what exactly this phenomenon of ‘understanding’ is, whether it can be measured, and, with reference to computers, whether it can be programmed. John Searle, in 1980, devised the ‘Chinese Room’ thought experiment to address this issue. Imagine a person who does not understand Chinese sitting in a room; he knows only English. He does, however, have a comprehensive procedure, an algorithm, for answering questions in Chinese. He is given a story in Chinese. Using his algorithm he produces the correct Chinese squiggles to answer any question put to him and slips the answer under the door.
To the person outside the room it would appear that he ‘understands’ Chinese, but does he understand the story? No. He has merely answered the questions given to him. One can surmise that this is probably how a machine would do it.
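Searle's point can be made concrete with a lookup table. The story, questions, and answers below are invented; the program produces fluent Chinese by matching symbols to symbols, with no meaning involved at any step.

```python
# A fragment of the room's hypothetical rule book: question -> answer.
RULE_BOOK = {
    "故事的主角是谁？": "主角是一位老渔夫。",  # "Who is the protagonist?" -> "An old fisherman."
    "故事发生在哪里？": "故事发生在海边。",    # "Where does it take place?" -> "By the sea."
}

def room_occupant(question):
    """Match input squiggles to output squiggles; nothing is understood."""
    return RULE_BOOK.get(question, "请再问一遍。")  # "Please ask again."
```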
Let us look at something closer to home for computers: the proving of mathematical theorems. Computers are products of ‘discrete mathematics’, and it is helpful to keep in mind that they know only an algorithmic, procedural way of solving problems. Can computers prove mathematical theorems? Basically the question can be reduced to asking whether proving mathematical assertions requires more than a procedural approach. It seems that it does.
Kurt Goedel, an eminent Austrian mathematician, proved an incompleteness theorem in 1931, which states that ‘every sufficiently powerful formal theory allows a true but unprovable proposition’. Goedel showed that, even with its symbols and rules of syntax fully specified, such a theory contains within its framework a proposition P that cannot be proven by following those rules. This raises a crucial question: do mathematicians function non-algorithmically?
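A standard modern paraphrase of the theorem, stated slightly more carefully than the one-line version above:

```latex
\begin{theorem}[Goedel, 1931]
  Let $T$ be a consistent, effectively axiomatized formal theory strong
  enough to express elementary arithmetic. Then (by Rosser's later
  strengthening) there is a sentence $P$ in the language of $T$ with
  \[
    T \nvdash P \quad\text{and}\quad T \nvdash \neg P ,
  \]
  and if $T$ proves only true arithmetic statements, $P$ is true.
\end{theorem}
```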
Roger Penrose, the renowned English mathematical physicist and author of the books ‘Shadows of the Mind’ and ‘The Emperor’s New Mind’, concerns himself with the non-algorithmic nature of mathematics. By very complex mathematical reasoning, which is beyond the scope of this overview, he argued that mathematicians do get around Goedel’s proposition P, and concluded that mathematical reasoning by human beings is non-algorithmic in nature. In other words, the human brain does not function merely algorithmically; it does much more than that. He further asserts that ‘consciousness’ has come into being to make judgements that cannot be made ‘algorithmically’.
Penrose believes that new discoveries in physics such as the quantum gravity theory are required to provide understanding and sensible explanation for intelligence and consciousness. Hence, in the future such discoveries may possibly pave the way for machines to emulate human consciousness.
John Haugeland, a proponent of AI, wrote in 1985 that ‘the fundamental goal of AI research is not merely to mimic intelligence or produce some clever fake. AI wants only the genuine article: machines with minds, in the full and literal sense.’ An interesting aspect of the field is that its proponents have always claimed that breakthroughs were just a few years off. Fortunately or unfortunately, these have not yet materialized. Whether or when we will have machines with consciousness is anyone's guess at this juncture.