Wednesday, February 22, 2012

Fear of Ourselves


            Previously, we discussed the short documentary “Future Shock,” based on the book by Alvin Toffler, and whether, as the film implies, this leap into technological advancement is a large step backward for society. I would like to take this a step further and examine the effect a radically new and different technology, specifically artificial intelligence, would have on society now. Setting aside for the moment whether such a thing is possible, or practical, or requires far more knowledge than we have today: would an AI be human? Would it be a person, distinct from humans but still equal, and what would its rights be?

            Before these questions can be explored, a working definition of artificial intelligence, for my purposes, needs to be established. In a sense, AIs are already very much in existence – there are computers that can beat the greatest chess players in history, for example, or that can recognize changing conditions and adapt their actions to fit them. In science, however, and in science fiction, much of which is dedicated to examining such possibilities, artificial intelligence refers to a machine that is not merely capable of “thinking,” but one that is sentient, self-aware, and at least as complex as the human brain. It is for this reason that many writers turn to devices like the ‘positronic brain,’ which can stand in for the things our technology is not yet capable of.
            So, would such a machine be considered human? By the anthropological definition of humanity, which seems as close to definitive as any such answer can be, no, it would not. Though it may have the capacity to interact on the level of a human, with all of humanity’s culture, quirks, and foibles, it would remain biologically inhuman. This does not, however, mean that it would not be a person. If, as John Pollock states, “we can construct an accurate and complete computer model of rational architecture,” such as the aforementioned positronic brain, “a computer running the resulting program will thereby be a person” (Pollock 462). In other words, if we can create a complete reconstruction of the process of thought, experience, and reason, whether physically or electronically, then anything making use of such a technology would be a person.
            If we can create a person, will they have rights like ours, the rights protected by the U.N. and the U.S. Constitution, or, being our creations, will they be no more than slave labor to us, as their distant ancestors, today’s computers, already are? Though our rational minds tell us that if they think like us, they should have the same rights as us, our psyches tell us otherwise. This begins to address some of the most common fears about artificial intelligence. If artificial intelligences, which have the potential to be hundreds of times more intelligent than man, are considered people, with all of the rights that go along with that, how long would it be until they become the dominant force on the planet?
            However, intelligence and the ability to think do not equate to the ability to emote. In an op-ed for the New York Times, Astro Teller writes about how revolutions in science have cost human beings their high and mighty place in the world. In the past, “the Copernican revolution showed that Earth circles the sun” and “the Darwinian revolution undermined” the belief that humans were the center of the world, metaphorically if not physically; because of this, “A.I. threatens one of the last remaining things separating us from the ‘lesser’ animals,” our mental superiority (Teller 1). This fear of losing our superiority is, I believe, one of the reasons everyone professes to fear the robot apocalypse. It is also my belief, however, that without a capacity for emotion, AI will never be able to compete with humans. As one of the very few animals, if not the only one, capable not just of basic emotions like fear and anger but of far more complex ones, such as compassion, hatred, love, and disgust, humans remain on top. And without the ability to feel any anger at being, possibly, beneath us ignorant humans, an artificial intelligence would have little to no reason to become something that should indeed be feared.
Works Cited
Pollock, John. "Philosophy and Artificial Intelligence." Philosophical Perspectives 4 (1990): 461-98. JSTOR. Web. 21 Feb. 2012. <http://www.jstor.org/stable/2214201>.
Teller, Astro. "Smart Machines, and Why We Fear Them." Editorial. New York Times 21 Mar. 1998. School of Computer Science. Carnegie Mellon University. Web. 21 Feb. 2012. <http://www.cs.cmu.edu/~astro/nytimes.html>.

1 comment:

  1. This is an interesting topic! I also find it interesting that Lisp, one of the earliest languages, was written with AI specifically in mind. Before most operating systems were developed, computers were already beating humans at chess.

    But I find it highly improbable that actual emotions could be recreated. At best, a machine could only discern what emotion a human would be likely to show in reaction to something, and display the appropriate traits. AI as we know it by definition works by collecting data and making the most probable decision based on that data. BigDog is an interesting example of this: it learns from terrain.

    The disconnect between data and emotions remains great, yet it's an interesting topic, and something that's a great fuel for sci-fi. Recommend any good books on the topic?
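    The “most probable decision” idea described above can be sketched in a few lines of Python. This is only a toy frequency count of my own devising, not how any real system like BigDog actually works: the machine records which action succeeded in past situations and simply picks the one that worked most often.

    ```python
    from collections import Counter

    def most_probable_decision(observations):
        """Pick the outcome that appears most often in past data.

        A toy illustration of an AI 'collecting data and making the
        most probable decision': nothing but a frequency count.
        """
        counts = Counter(observations)
        decision, _ = counts.most_common(1)[0]
        return decision

    # Hypothetical history: which action worked on rough terrain before.
    history = ["step", "step", "jump", "step", "pause"]
    print(most_probable_decision(history))  # prints "step"
    ```

    The point of the sketch is the commenter’s: no matter how much data goes in, what comes out is a statistical choice, not a feeling.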
