Cyberstarts

[Image: Alan Turing in 1930]

Alan Turing’s “Computing Machinery and Intelligence” and Norbert Wiener’s “Men, Machines, and the World About” represent the beginning of a paradigm shift not only in the way we think about technology, but also in the ways we think about ourselves. Turing’s essay, while about as dry as yesterday’s biscuits, implicitly compares human interaction and biology to his universal machines, while Wiener suggests that we need to understand our technology before we use it to blindly end our days on the planet. After World War II, both men saw the need to define the human lest we be reduced to the machinery of the human program.

Turing’s universal machine — a digital computer that “could mimic the behaviour of any discrete machine”[1] — concerns mimesis, or the ability of a digital machine to mimic the behavior of a human, or any algorithmic system for that matter. Turing speculates that within fifty years “it will be possible to programme computers with a storage capacity of 10⁹ to make them play the imitation game so well that an average interrogator will not have more than 70 percent chance of making the right identification after five minutes of questioning.”[2] The implications of this prediction are numerous.
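
To make the parameters of that prediction concrete, here is a small sketch of the imitation game’s structure in Python. It is purely illustrative: the respondent functions, the A/B labels, and the naive random-guessing interrogator are hypothetical stand-ins, not anything Turing specifies.

    import random

    # An illustrative sketch of the imitation game's structure (hypothetical stand-ins):
    # an interrogator questions two hidden respondents and must say which is the machine.

    def machine(question: str) -> str:
        return "I would rather not say."      # stand-in for a programmed respondent

    def human(question: str) -> str:
        return "Let me think about that."     # stand-in for a human respondent

    def one_round(questions):
        respondents = [("machine", machine), ("human", human)]
        random.shuffle(respondents)           # hide who is who behind labels A and B
        labels = dict(zip("AB", respondents))
        # the five minutes of questioning (unused by this naive interrogator)
        transcript = {k: [fn(q) for q in questions] for k, (_, fn) in labels.items()}
        guess = random.choice("AB")           # a naive interrogator guessing at random
        return labels[guess][0] == "machine"  # True if the machine is correctly identified

    # Turing's prediction: with ~10**9 bits of storage (roughly 125 MB), an average
    # interrogator questioning for five minutes should succeed no more than ~70% of the time.
    trials = [one_round(["Please write me a sonnet."]) for _ in range(1000)]
    print(sum(trials) / len(trials))          # ~0.5 here, comfortably under the 0.70 threshold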

While we are not quite at the level that Turing predicted, we have come a long way in computer simulation. The very notion of simulation is interesting here: the idea that the human can be simulated suggests that humans can be predicted — that humanity operates by a series of predictable (programmable?) algorithms that can be mimicked by a digital computer, Turing’s universal machine. With faith in science’s ability to find out just how things operate (the why here does not matter), we could eventually program the perfect ersatz human, as if there were such a thing to begin with.

This faith is based on the Enlightenment supposition of the human mind’s ability to reason and figure out the mysteries of the universe through scientific experimentation, by an application of reason to the ostensibly chaotic universe. Order can be gleaned through temerity and reason. Yet, our postmodern and poststructural zeitgeist has suggested that this faith in reason is itself a product, perhaps, of wishful thinking: that it’s no more credible than the superstition it was supposed to replace. Indeed, after the culmination of scientific progress produced the nuclear bomb — ripping apart an order that science was supposed to bolster — many artistic expressions after WWII suggested the impossibility of humanity ever achieving even a predictable understanding of itself. Yet, with the development of the war’s technology, scientists like Turing and Wiener saw the necessity of defining just who we are and what we hold important, for technology would change us in numerous ways, perhaps even precipitating our own extinction.

Wiener’s article compares technology to magic — the genie being let out of the bottle. Before making the wish, Wiener suggests, we had better know just what it is we want: “sorcery was not the use of the supernatural, but was the use of human power for other purposes than the greater glory of God.”[3] By the “greater glory of God,” Wiener means an end that has a “justifiable human value.”[4] A quality of humanity that Wiener seems to value — as does Turing — is our ability to adapt, to learn from our environments. Along these lines, Wiener posits that we must understand our technology, not worship it. Gods demand obedience and sacrifice, but understanding involves questioning and revision.

Within Wiener’s argument is the implication that we humans are machines that can learn, though sometimes we fail to. If we do not learn from our mistakes, then Wiener sees little hope for humanity’s survival. Fortunately, tape can be erased and recorded on again.

Turing’s machine can also learn, despite its programming and Lady Lovelace’s objection: “The Analytical Engine has no pretensions to originate anything. It can do whatever we know how to order it to perform.”[5] Indeed, machines that are programmed cannot exceed that programming. Seems very logical. Yet, if the program is complex enough, with enough capacity and processing power, it can seem, in Kurzweil’s language, spiritual — as if it has a soul, is self-aware, and is more than the sum of its parts. Humans seem like gestalts because we have not figured out the program yet. Faith in scientific endeavors, like the Human Genome Project, may just unravel the human program in such a way that we could reproduce it and even significantly alter it.
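
A toy example may help make the learning point concrete. The sketch below assumes nothing from Turing’s own proposals; the predictor and its names are hypothetical. It shows a program whose code never changes but whose behaviour is shaped by the experience it is given.

    from collections import Counter

    # A toy "learning machine": the program is fixed, yet its output
    # depends on the experience it accumulates (hypothetical example).

    class FrequencyPredictor:
        def __init__(self):
            self.counts = Counter()

        def learn(self, symbol: str) -> None:
            self.counts[symbol] += 1          # experience re-records the "tape"

        def predict(self) -> str:
            if not self.counts:
                return "?"                    # no experience yet
            return self.counts.most_common(1)[0][0]

    p = FrequencyPredictor()
    for s in "ababbabbb":
        p.learn(s)
    print(p.predict())                        # "b": behaviour shaped by input, not by re-programming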

Just what makes a human a human? Body? Soul? Both? Neither? While we still do not understand just why or how we’re conscious, that does not mean that we won’t sometime soon have the ability to replicate or simulate consciousness. What will we tell our technological children when they, like Frankenstein’s monster, ask us why?

Notes
  1. Turing, Alan (2003) [1950]. "Computing Machinery and Intelligence". In Wardrip-Fruin, Noah; Montfort, Nick. The New Media Reader. Cambridge: The MIT Press. p. 54.
  2. Turing 2003, p. 55.
  3. Wiener, Norbert (2003) [1954]. "Men, Machines, and the World About". In Wardrip-Fruin, Noah; Montfort, Nick. The New Media Reader. Cambridge: The MIT Press. p. 72.
  4. Wiener 2003, p. 72.
  5. Turing 2003, p. 59.