There are two important concepts first articulated by Prof. John McCarthy of Stanford University, neither of which actually implies that computers will ever evolve into intelligent, rational creatures. One is that electronic machines can learn functions and processes. In the 56 years since this concept was introduced, it has been declared an undeniable fact numerous times, only for someone to subsequently move the bar for what qualifies as “learning.”
The other is that artificial intelligence (AI) is implied by any process which, when done well, appears to have required human intelligence. In other words, like legislative gridlock, you don’t have to see it yourself to know it exists.
The brilliance of that notion is that it implies the value of faith. This is an amazing conviction, especially coming from a self-declared atheist. I hope the professor, whose passing we note with remembrance today, would not object to this self-declared Christian paying respect to his work and reinforcing his faith.
“Intelligence” itself is a term whose definitions vary wildly, and which may apply to me only loosely. Personally, I define it as the use of a combination of available resources to achieve a desired and explicit objective. Artificial intelligence — the concept which McCarthy pioneered — requires only logic. By the professor’s reasoning, the achievement of the objective should appear to require more than logic, which to me has always implied the belief in influences beyond the obvious.
In a 1955 proposal for a Dartmouth College summer research project, then-Dartmouth professor McCarthy suggested participants divide into groups to study the problem of how a computer could be programmed to simulate tasks reserved for humans, and whether language could be employed as a tool for enabling “self-improvement” in the machine. Together with giants such as Claude Shannon and Marvin Minsky, McCarthy threw out a handful of ideas he conceded were incomplete on their face, and in gathering researchers’ interest in studying them, created what might today be described as an open source project.
“If a machine can do a job, then an automatic calculator can be programmed to simulate the machine,” read McCarthy’s 1955 proposal. “The speeds and memory capacities of present computers may be insufficient to simulate many of the higher functions of the human brain, but the major obstacle is not lack of machine capacity, but our inability to write programs taking full advantage of what we have… It may be speculated that a large part of human thought consists of manipulating words according to rules of reasoning and rules of conjecture. From this point of view, forming a generalization consists of admitting a new word and some rules whereby sentences containing it imply and are implied by others. This idea has never been very precisely formulated nor have examples been worked out.”
A little while later, McCarthy tossed out this zinger: “Probably a truly intelligent machine will carry out activities which may best be described as self-improvement.” The inherent oxymoronic quality of the phrase “truly intelligent machine” may be lost on the audience of 2011; in 1955, it had the weight of “truly omnipotent battery.”
The question for the age, and the one that transformed McCarthy from a mathematics professor into a philosopher in the public mind, was whether a machine, or robot, or android (the format keeps changing) must have intelligence in order to exhibit something that resembles it enough to qualify as virtually human. What we learned from the experiment over the past six decades is that we can answer yes or no to this question, and change our minds any number of times, and yet still experience the thrill of participating in the search for the answer. And what we, as scientists at heart if not by trade, have learned through observation is that it would take the most brilliant human programmer alive to enable a machine to simulate the same stupid things that certain elements of humankind have done over this same period.
As a philosopher and, as some labeled him, a humanist, Prof. John McCarthy demonstrated an unusual degree of faith in something — perhaps not God, or a god per se, but in something greater. “I don’t claim to have a proof that God cannot exist,” he wrote in 1997, for the forerunner of what would later be called a “blog.” “It’s just that I consider the state of the evidence on the God question to be similar to that on the werewolf question.”
And yet he believed in something he called “progress.” “Humanity has progressed over hundreds of thousands of years, but until about the seventeenth century, progress was a rare event,” he wrote for his personal Web site. “There were novelties, but a person would not expect a whole sequence of improvements in his lifetime. Since then scientific progress has been continual, and in the advanced parts of the world, there has also been continued technological progress. Therefore, people no longer expect the world to remain the same as it is.” His argument for the possibility that humanity could continue to exist for billions of years into the future was more intuitive than scientific: He failed to see any reliable evidence that it wouldn’t.
A sensible conclusion in the absence of facts to the contrary. Some would call this “intelligence.” My take on this question is similar to my take on the God question. I would call it “faith.”