March 29, 2005

From Gerald R. Lucas

Kurzweil’s Consciousness

In rereading Ray Kurzweil’s The Age of Spiritual Machines, aspects of his argument have become more obvious to me. I’m interested, this time, in the implications his definitions have for how we define “human.” Indeed, he states in the prologue that “the primary political and philosophical issue of the next century will be the definition of who we are.”[1]


This should be a hint that Spiritual Machines will attempt to do just that. Kurzweil goes to great lengths in the first part of his book to define the human as an intelligent, sentient, and spiritual creature, while at the same time arguing that we are the product of the idiot programmer, evolution. Human beings are machines, designed and programmed by evolution, but the next stage of our evolution will involve our relationship with our computer technology.

Kurzweil is somewhat specific in his definitions:

  • technology: “evolution by other means”[2]
  • intelligence: using limited resources optimally to achieve goals;[3] the “grandest creation” of evolution;[4] the ability to recognize and create patterns and order[5]
  • knowledge: facts based on context[5]

Kurzweil’s definition of technology has always intrigued me, perhaps because it is so general as to seem inscrutable. It would seem that technology, then, is a product of human intelligence and knowledge (I’ll get to those in a minute) that seeks to better the species: that all technology rests on an Enlightenment assumption that humans always do what is best and noble for the species; that paradise will come through technological innovation; that computers will change the species for the better. Maybe, but I can’t help thinking of Dostoyevsky’s Underground Man and his spiteful cynicism: humans are out for themselves and will often choose to do what is worst for them. Indeed, technology has produced the cigarette, the nuclear missile, and airport security. Are these technologies really “evolution by other means”? If not, are they even technologies at all? What about the internal combustion engine? Yes, it’s a boon for producing cheap and relatively efficient energy and has helped humanity better itself in innumerable ways, but do those benefits outweigh even its environmental impact? Oh, that’s right: we’re interested in the human here. The Enlightenment privileges human reason above all else. Perhaps Kurzweil’s definition would be more accurate if it read “the evolution of the human mind by other means.” This does seem to be what he privileges, though he does give some attention to the body — one chapter, to be precise.

Also fascinating is his view of the intelligence/knowledge team; neither seems to mean much without the other. These dimensions of the mind address our ability to read and understand written documents, our mobility, our ability to share our insights, and our ability to store and recall. If this sounds like we’re describing a computer, we are. However, Kurzweil suggests that humans are also machines, and he sees a day at the end of this century when we will “port our minds to a more capable computing medium,”[6] becoming primarily software rather than hardware. As we make this transition, we will “vastly extend ourselves”: “We will be software, not hardware.”[7]

Yes, Kurzweil does discuss new bodies, but, like humanists before him, he privileges the mind as separate and distinct from the body. He seems to argue that technology will give us the freedom to design our own bodies in any way our minds can conceive, and that our consciousness will be able to inhabit any body we want it to. Science fiction? Perhaps, but that does not mean we can dismiss it as nonsense. Kurzweil is a smart guy, and his ideas are intriguing. However, there’s a place where he loses me: just how does my consciousness (call it a soul, if you’d like) get from my body to a machine?

Now, I’m right there with him in thinking that we don’t have to understand how human consciousness works in order to replicate the mechanisms that produce it. Will we make artificial consciousness? I have no doubt. But it seems to me that that consciousness will be inextricably linked to the body that produces it. This notion seems to complicate Kurzweil’s argument that the human is a machine whose software can move from one medium to another, like copying a computer’s operating system from a slower, lower-capacity hard drive to one that’s faster and more capacious.

So, with a complete understanding of the human hardware, does it follow that we will be able to move the software to an upgrade? If consciousness is a product of the intelligence/knowledge equation, then will it soon be possible to leave the meat behind? The irony — and perhaps the flaw — is that in order to get to that point, we first must concentrate on the meat.

One more question: how much will this cost? Kurzweil’s book is alarmingly free of economic concerns. Try Bill Joy or Bruce Sterling for views on technoclass.

Citations

  1. Kurzweil 1999, p. 2.
  2. Kurzweil 1999, p. 14.
  3. Kurzweil 1999, p. 73.
  4. Kurzweil 1999, p. 5.
  5. Kurzweil 1999, p. 91.
  6. Kurzweil 1999, p. 126.
  7. Kurzweil 1999, pp. 126, 129.

Work Cited

  • Kurzweil, Ray (1999). The Age of Spiritual Machines: When Computers Exceed Human Intelligence. New York: Viking Penguin.