by Charlton D. Rose
In the Star Trek episode "The Measure Of A Man," a villainous cybernetics researcher obtains authorization to dismantle Data, the android, to learn more about its construction. Data's friends resist, and a legal battle ensues to determine whether Data is a life form with rights.
Riker, appointed to present the researcher's case, argues that because Data is composed of circuits and wires, he is nothing more than a sophisticated computing machine. (His case seems almost rock-solid when he forcibly switches Data off during the trial.) Later, Data's defense provides testimony to show that because Data has had many human-like experiences, including even an intimate relationship with another crew member, he must therefore be ruled a sentient life form, with all the rights of a human.
Normally, Star Trek has a reputation for portraying future society as having solved the problems that vex us today. "The Measure Of A Man," however, raises issues that are still debated in the 20th century. If Star Trek is any reliable predictor of our world's future (hah!), then the issue of whether machines can be alive won't be resolved any time soon.
Akin to the debate over whether machines can live is the debate over whether machines can be intelligent. This is the great Artificial Intelligence debate, one which has not been resolved and probably never will be.
Advocates of a view called "strong AI" believe that computers are capable of true intelligence. These "optimists" argue that what humans perceive as consciousness is strictly algorithmic, i.e. a program running in a complex, but predictable, system of electro-chemical components (neurons). Although the term "strong AI" has yet to be conclusively defined [Sloman 1992], many supporters of strong AI believe that the computer and the brain have equivalent computing power, and that with sufficient technology, it will someday be possible to create machines that enjoy the same type of consciousness as humans.
Some supporters of strong AI expect that it will some day be possible to represent the brain using formal mathematical constructs [Fischler 1987]. However, strong AI's dramatic reduction of consciousness into an algorithm is difficult for many to accept.
The "weak AI" thesis claims that machines, even if they appear intelligent, can only simulate intelligence [Bringsjord 1998], and will never actually be aware of what they are doing. Some weak AI proponents [Bringsjord 1997, Penrose 1990] believe that human intelligence results from a superior computing mechanism which, while exercised in the brain, will never be present in a Turing-equivalent computer. If weak AI can ever be proven, it might lead to a refutation of Church's thesis (as implied in [Bringsjord 1997]).
To promote the weak AI position, John R. Searle, a prominent and respected philosopher, offered the "Chinese room parable" [Searle 1980]. This parable, summarized by [Baumgartner 1995], is as follows:
He imagines himself locked in a room, in which there are various slips of paper with doodles on them, a slot through which people can pass slips of paper to him and through which he can pass them out; and a book of rules telling him how to respond to the doodles, which are identified by their shape. One rule, for example, instructs him that when squiggle-squiggle is passed in to him, he should pass squoggle-squoggle out. So far as the person in the room is concerned, the doodles are meaningless. But unbeknownst to him, they are Chinese characters, and the people outside the room, being Chinese, interpret them as such. When the rules happen to be such that the questions are paired with what the Chinese people outside recognize as a sensible answer, they will interpret the Chinese characters as meaningful answers. But the person inside the room knows nothing of this. He is instantiating a computer program -- that is, he is performing purely formal manipulations of uninterpreted patterns; the program is all syntax and has no semantics.
With this parable, Searle argues that although the system may appear intelligent, it is in fact just following rules, with neither intent nor knowledge of what it is accomplishing. Searle's argument has been influential in the AI community and is referenced in much of the literature.
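The rule book in Searle's parable can be caricatured as a lookup table. Here is a minimal Python sketch of the room as pure symbol manipulation; the doodle names come from the parable, while the default reply and the extra rule are illustrative assumptions:

```python
# Searle's "rule book": a lookup table keyed purely on the shape of the
# incoming doodle. Nothing here represents the meaning of any symbol.
RULE_BOOK = {
    "squiggle-squiggle": "squoggle-squoggle",  # the rule Searle cites
    "squoggle-squiggle": "squiggle-squiggle",  # a made-up second rule
}

def person_in_room(slip: str) -> str:
    """Match the incoming slip by shape and pass out the prescribed reply.
    No step involves understanding what the symbols mean."""
    # Unknown doodles get an arbitrary default reply (an assumption,
    # not part of the parable).
    return RULE_BOOK.get(slip, "squiggle")

print(person_in_room("squiggle-squiggle"))  # squoggle-squoggle
```

To a Chinese speaker outside, the replies may look like sensible answers; inside, the program is all syntax and no semantics, which is exactly Searle's point.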
It is tempting for spiritually-inclined people to conclude that the weak AI vs. strong AI debate is about mind-body duality, or the existence of a soul, and whether a phenomenon separate from the body is necessary for intelligence. Far from it: the predominant opinion in the AI community, on both sides of the strong/weak issue, is that the mind is a strictly physical phenomenon [Fischler 1987]. Even Searle, a weak AI advocate, believes that
Mental phenomena whether conscious or unconscious, visual or auditory, pains, tickles, itches, thoughts, indeed, all of our mental life are caused by processes going on in the brain. [Searle 1984]
The AI debate is primarily concerned with whether our current, algorithmic computing paradigm is sufficient to achieve intelligence, once "the right algorithm" has been found. The prevailing attitude, in favor of weak AI, asserts that "the syntax of the program is not by itself sufficient for the semantics of the mind" [Baumgartner 1995, quoting Searle].
The apparent failure of traditional (strong) AI has led researchers to consider new computing paradigms. For example, researchers have noted that the traditional Von Neumann "stored-program" architecture, which is the basis of most of the world's computers today, is radically different from the neural structure of the brain. "Connectionists" hope to build machines whose organization more closely resembles that of the brain, by using numerous, simple processing components connected in a massively parallel manner. An early attempt at this was the Connection Machine, introduced by Thinking Machines Corporation in 1986, which had up to 65,536 processors, massively connected and capable of fully parallel operation.
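The connectionist idea can be sketched in a few lines: many identical, simple units, each computing a weighted sum of its inputs and firing past a threshold. The weights and inputs below are made-up illustrative values, not a model of any real machine:

```python
def unit(inputs, weights, threshold=0.5):
    """One simple processing element: fires (1) if the weighted sum
    of its inputs exceeds the threshold, otherwise stays quiet (0)."""
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation > threshold else 0

# A "layer" of units all observing the same inputs. On a massively
# parallel machine, each unit could run on its own processor at once;
# here we simulate them sequentially.
inputs = [1, 0, 1]
layer_weights = [
    [0.2, 0.9, 0.4],   # unit 0: activation 0.6 -> fires
    [0.1, 0.1, 0.1],   # unit 1: activation 0.2 -> quiet
    [0.6, 0.3, 0.5],   # unit 2: activation 1.1 -> fires
]
outputs = [unit(inputs, w) for w in layer_weights]
print(outputs)  # [1, 0, 1]
```

The contrast with the stored-program model is that no central instruction stream drives the computation; the "program" is distributed across the connection weights.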
Still, there are those who cling desperately to the strong AI dream. Searle says in [Baumgartner 1995] that these
people have built their professional lives on the assumption that strong Artificial Intelligence is true. And then it becomes like a religion. Then you do not refute it; you do not convince its adherents just by presenting an argument. With all religions, facts do not matter, and rational arguments do not matter. In some quarters, the faith that the mind is just a computer program is like a religious faith.
Although Searle's statement is biased, and his generalization about religion is decidedly illogical, it seems to represent the feelings of many weak AI advocates. In an article entitled "Strong AI Is Simply Silly" [Bringsjord 1997], Bringsjord appears openly contemptuous of those who still argue in favor of strong AI (whom he calls "Strong AIniks"). It appears that this intolerance is spreading.
At the same time, however, strong AI intolerance is being met with fierce resistance. In 1990, Roger Penrose published The Emperor's New Mind, a 450-page book which has been viewed by many as an attack on strong AI. Sloman, who appears to be one of those stubborn "Strong AIniks," quickly responded with a 42-page rebuttal [Sloman 1992] which, if read by a neutral party, is fairly effective at making Penrose look like an idiot.
However, upon closer examination of both Penrose's and Sloman's arguments -- and arguments by many other cognitive scientists as well -- it becomes painfully clear that both camps are stuck in an unresolvable debate over assumptions, guesses, and semantics. When philosophers cannot even agree on a definition of intelligence, it seems rather unfruitful to debate whether machines can achieve it. In [Tang 1998], we find a 17-page treatise on various definitions of intelligence, along with the author's recommendation of how the term ought to be extended, in order to give computers a fighting chance at achieving it.
The most widely referenced definition of intelligence is the one proposed by Turing in [Turing 1950], which challenges us to label intelligent any machine that can fool us into believing it is human. Unfortunately, most leaders in the AI debate seem to think that the Turing test is basically useless [Baumgartner 1995]. According to Hubert L. Dreyfus, a professor of philosophy at UC Berkeley,
Something can pass the Turing Test and still not have consciousness or intentionality. But I have always liked the Turing Test, because unlike philosophers like Searle and others I want to pose my questions in ways that are most favorable for the people in Artificial Intelligence and cognitivism and see if they can succeed by their own standards. I want to make it as easy as possible for them and show that they still cannot pass the test. . . . I would not impose any higher standard on them by saying, "Look, your machine has got no consciousness." I accept their ground rules and want to show that they are failing on their own terms. [Baumgartner 1995]
Dreyfus, too, is decidedly biased. Indeed, he has called AI a "great failure" [Baumgartner 1995]. However, his opinion about the Turing test raises an interesting, if pragmatic, question: why worry about strong vs. weak at all? Since the current state of AI is nowhere near the point of creating machines that can pass any reasonable test for general intelligence, why waste time bickering about whether machines can be conscious? After all, neither the proponents nor the opponents of strong AI have done anything more than speculate and make broad statements which, at least in the present, are neither provable nor refutable.
Perhaps the issue is important because of the ramifications of whatever conclusion we reach. In "The Measure Of A Man," the Star Trek episode narrated at the beginning of this essay, the debate over whether the cybernetics researcher should be allowed to dissect Data eventually turns into an ethical issue. Neither side disputed Data's ability to pass the Turing test, and for that reason the defense could argue that Data was a conscious, intelligent being. The prosecution, however, could still cite Searle's Chinese room parable, concluding that the android was nothing more than a sophisticated calculator, the property of Star Fleet.
In a desperate emotional appeal, the defense argues that if Data is ruled to be nothing more than a machine, the researcher's work will become the foundation for a race of "disposable people," once again fostering the slave-master mentality that plagued societies many centuries ago. Swayed by this plea, the judge rules in Data's favor (of course), and declares him to be a sentient life form with all the rights and privileges of a human being.
As this story demonstrates, the argument over strong AI may eventually be linked to the argument over whether machines can be alive. Ultimately, this argument is destined to revolve around the definition of life, becoming a semantic issue which will never be resolved to everyone's satisfaction. And so it goes with the great artificial intelligence debate. Until we can agree upon a definition for intelligence, the issue may never be settled.