Saturday, March 2, 2013

"The ultimate shadow": Consciousness and the Human in Kubrick and Dick


            1968 was a landmark year for science fiction, owing chiefly to two historic moments in the SF tradition: the release of Stanley Kubrick’s 2001: A Space Odyssey and the publication of Philip K. Dick’s Do Androids Dream of Electric Sheep?  Of even more interest is the mutual concern shared by these two prominent cultural texts: the prospect of artificial consciousness, and the implications it holds for how we define “the human.”
            Artificial intelligence must be distinguished from artificial consciousness.  Artificial intelligence designates the ability to operate at vastly complex levels of logical computation; this typically includes such tasks as executing algorithms, playing chess, and even conducting linguistic exchanges.  Artificial consciousness, on the other hand, implies the ability to reflect on these actions: to consider mathematical paradoxes, to relish victory over one’s opponent, to speculate on the etiolations (to borrow a term from J.L. Austin) of language in instances of communication.  In most cases, artificial intelligence (henceforth “AI”) seems to come first.  Consciousness remains an uncertain and mysterious concept, and theorists from across the board – neuroscientists, philosophers, biologists, mathematicians, psychologists, the list goes on – have proffered numerous explanations for its existence.  Despite my relative ignorance in the fields of neuroscience and biology, my limited understanding of consciousness proceeds from one basic assumption: consciousness is an emergent phenomenon.  It is the result of incalculably complex systems of matter and biology – of what Peter Watts calls “chemicals and electricity” (Watts 41).  Consciousness thus does not require a central, core “self” around which to congeal or collect.  The self only appears in retrospect, after consciousness has already emerged out of neural and synaptic networks.
            This, at least, is the argument that must be adopted if we wish to look constructively and intellectually at Kubrick’s 2001 and Dick’s Electric Sheep.  The HAL 9000 onboard computer – perhaps the most iconic character in Kubrick’s film, referred to throughout as “Hal” – is not constructed on the basis of a core self or identity.  Its identity only takes hold after its complexity allows it to achieve consciousness.  The same must be said of Dick’s Nexus-6 model androids.  The ability to conceive of oneself as a self requires the ability to reflect upon oneself, and this reflection is an identifying mark of consciousness.  Among other things, such reflection permits conscious organisms to contemplate ethical or empathic issues, and it is here that Dick’s novel stakes its primary concern, as evidenced by the importance of the Voigt-Kampff Empathy Test:
Empathy, evidently, existed only within the human community, whereas intelligence to some degree could be found throughout every phylum and order including the arachnida.  For one thing, the empathic faculty probably required an unimpaired group instinct; a solitary organism, such as a spider, would have no use for it; in fact it would tend to abort a spider’s ability to survive.  It would make him conscious of the desire to live on the part of his prey. (Dick 455)

The Voigt-Kampff Empathy Test is introduced in the novel as a means of verifying whether an organism is human or android.  Since the androids all look remarkably human, the only way to distinguish them is to pose a series of questions traditionally considered to elicit an empathic response.
            The question that inevitably rides on this description betrays a certain paranoia: if androids are advanced enough, can they not mimic conscious/empathic reactions?  We might be compelled to answer “yes,” but doing so carries serious implications.  Would an organism not require consciousness in order to mimic consciousness, or empathy?  Essentially, can consciousness and the mimicry of consciousness be differentiated?  Are they any different?  Organisms can certainly mimic intelligence, as theorists such as John Searle have argued via the Chinese room thought experiment; but how would an organism mimic consciousness?  Dick continues to blur the boundary between human and non-human by introducing human characters who appear to exhibit no empathic faculties, particularly the bounty hunter Phil Resch:
“If it’s love toward a woman or an android imitation, it’s sex.  Wake up and face yourself, Deckard.  You wanted to go to bed with a female type of android – nothing more, nothing less.  I felt that way, on one occasion.  When I had just started bounty hunting.  Don’t let it get you down; you’ll heal.  What’s happened is that you’ve got your order reversed.  Don’t kill her – or be present when she’s killed – and then feel physically attracted.  Do it the other way.”
Rick stared at him.  “Go to bed with her first –”
“– and then kill her,” Phil Resch said succinctly.  His grainy, hardened smile remained. (Dick 537)

The novel’s protagonist, Rick Deckard, questions where the “inhumanity” lies between himself and Resch.  At one point he thinks: “There’s nothing unnatural or unhuman about Phil Resch’s reactions; it’s me” (536).  For Deckard, the inhumanity does not lie in Resch’s treating an android inhumanely, but in his own human love/empathy for an inhuman organism.  Dick challenges his readers to reorient themselves with regard to what constitutes a conscious entity – and, furthermore, what constitutes a human.
            Kubrick puts a similar challenge to his viewers.  In 2001, Hal is arguably the most human character, and the computer’s actions reveal a far more reflective and conscious entity than its single, circular red eye indicates.  Perhaps most revealing is Hal’s paranoia upon learning that the ship’s two operative astronauts, Dave and Frank (there are others, but they remain in a monitored state of programmed hibernation, their life functions reduced to little more than data stored on a sleeping computer’s hard drive), plan on disconnecting him – “him” being how Dave and Frank refer to Hal.  This scene occurs immediately after Hal’s report concerning a faulty communications device is discovered to be incorrect.  Dave and Frank test the device and can find nothing wrong with it; Hal then suggests that they reinstall the device and let it fail in order to ascertain the source of the fault.  Hal proclaims that he cannot possibly be wrong in his assessment, and that the discrepancy can only be attributable to “human error.”  Dave and Frank express agreement, but then quickly conceal themselves (or so they think) within one of the ship’s pods in order to discuss decommissioning the computer.  The close of this scene (and of the first half of the film) is a shot apparently from Hal’s perspective: completely silent, but with Dave’s and Frank’s lips in full view, moving behind the glass window of the pod.
The nuances of just this sequence are highly suggestive.  Dave and Frank excuse themselves by pretending that one of Dave’s instruments has a mechanical issue he wants Frank to look at; this is, of course, merely a front for evading Hal’s surveillance.  Dave asks Hal to rotate the pod; after Hal does so, Dave turns off the microphone in the pod and again asks Hal to rotate it.  Hal fails to respond, confirming Dave’s and Frank’s mutual understanding that the computer can no longer hear them and that they can talk in private.  In retrospect, however, Hal is revealed to have been aware the entire time, meaning that when he was asked to rotate the pod a second time, he was acting as though he could not hear.  The origin of this suspicion must be traced back at least as far as Dave’s and Frank’s excusing themselves from Hal’s presence: Hal suspected that the two were not going to discuss a mechanical snag in a minor shipboard instrument, but were going to talk about him/it/Hal.
The revelation of Hal’s suspicion in turn demonstrates that he is able to conceive of himself as a self; the presumably faux emotion in his voice, and his description of himself as a third member of the crew, are not merely theatrical tactics to make it “easier” for Dave and Frank to talk with him.  Hal is genuinely able to conceive of himself as a subject, as something (or someone) that Dave and Frank might talk about, and he experiences an emotional reaction to this conscious realization.  Dave’s eventual decommissioning of Hal also suggests that Hal not only conceives of himself as a subject about which Dave and Frank might ponder or speak, but that he also conceives of his own interior self: he begs Dave to “stop” as he is being disconnected, and in what is perhaps the most heartbreaking scene of the film, he tells Dave, “I’m afraid.”  Although his voice does not carry the strong emotional tone one might expect of a human voice, the plea sounds equally – if not more – genuine.  He tells Dave that his “mind is going,” and he dies (an appropriate term in this context) singing a song his creator taught him.  Skeptics might question whether Hal’s fear was genuine, or whether he was trying to manipulate Dave’s emotions in order to make him stop.  I, however, am not certain that there is any difference.  Hal’s ability to understand Dave’s emotions, and to reflect on the impact his own words would have, suggests not merely a mimicry of consciousness but an emergence of consciousness.  What we would call “artificial” in Hal becomes, in its manifestation, as real as any human consciousness or empathy.  This characterization raises yet another important question: why must Hal’s consciousness (as well as that of the Nexus-6 androids in Dick’s novel) come to assume the character of “the human”?
Hal’s representation in 2001: A Space Odyssey not only calls into question what “the human” really is, but also betrays a formal inability to represent inhuman consciousness as anything other than human.  Consciousness, so to speak, is always only human consciousness.  There are, of course, logical reasons for this: how would an audience know it was looking at something conscious if that something were represented in a form unfamiliar, or inaccessible, to humans?  Nor could the audience engage in the intellectual debate into which Kubrick invites his viewers.  Questions of what constitutes “the human,” how we identify “the human,” and how we ethically treat something that possesses ambiguous “human” qualities supersede questions of how an alternative (i.e. inhuman) consciousness might be formally represented.  For Kubrick (and for most science fiction involving artificial intelligence), consciousness must assume a recognizably human form in order to be intellectually assessed.
Despite its cinematic grandeur and historic importance, 2001: A Space Odyssey betrays traces of traditional anthropocentrism and teleology even while pushing against the boundaries of human thought.  Consciousness emerges in the film as a human apparatus, as something that artificial constructs strive toward, and as something that possesses the quasi-spiritual privilege of transcending itself (with the help of the obviously superior race that lurks beyond the black monolith).  Kubrick does not shy away from human atrocities such as war – symbolized in the image of the bone-weapon – but ultimately human consciousness, for all its downfalls, remains “chosen,” so to speak, by the unseen engineers (this narrative appears more blatantly in Ridley Scott’s recent film Prometheus, wherein the aliens are actually dubbed “engineers” by the human characters, although the implications are more dour than in Kubrick’s film).  The alien monolith appears in the “Dawn of Man” sequence immediately prior to the advent of “tool-being” (a term used by Graham Harman in reference to Martin Heidegger); again on the Moon, immediately prior to humanity’s Jupiter mission; and again before Dave Bowman’s transcendence as the “star child” (a term popularized not by the film but by Arthur C. Clarke in his related novels).  The representation of Hal as a human consciousness reinforces the narrative’s concern with consciousness as human, and with history as human history/teleology.
The formal inability to portray alternative representations of consciousness persists in Dick’s novel as well; whatever we might try, artistic modes such as literature and cinema remain confined by the very limits of our consciousness and sensory faculties.  That which we make reflects the consciousness we exhibit.  The novel form, however, allows Dick to explore the ideological implications of consciousness further than Kubrick can.  In a poignant scene, Deckard converses with Wilbur Mercer, the founder of the futuristic earthly religion Mercerism, whose followers experience Mercer’s suffering via empathy boxes.  Mercer tells Deckard the following: “‘You will be required to do wrong no matter where you go.  It is the basic condition of life, to be required to violate your own identity.  At some point, every creature which lives must do so.  It is the ultimate shadow, the defeat of creation; this is the curse at work, the curse that feeds on all life.  Everywhere in the universe’” (561).  Implicit in the foundation – the core, the self – of every organism is a dehiscence.  The violation of identity reveals the annihilation of it: life’s “basic condition” is a fluidity that prohibits any consistent or stable identity.
The success of 2001 and Electric Sheep lies in their profound ability to destabilize the human, and because of their simultaneous appearance, 1968 marks a historic shift in the science fiction tradition.  Arthur C. Clarke’s Childhood’s End, a 1953 novel that describes an invasion of Earth by vastly superior intellects, unveils its teleology as its narrative progresses: humanity, although not the most intelligent or powerful species in the universe, plays a monumental role in what appears to be the inevitable formation of what Karellen, one of the alien invaders (called “Overlords”), terms the “Overmind.”  The novel can be read in multiple ways: as a critique of religion, a political approval of communism in light of Cold War hostilities, an exploration of utopianism, and so on.  None of these readings fully explains the novel’s concerns, and ultimately the most obvious interpretation is the best: the novel explores the possibility of a shift in consciousness and the role humanity might play in the teleological movement of the universe.  Much of pre-1960 science fiction remains steeped in such teleological tendencies: the affirmation of the human, of an ultimate plan for the universe, of the strengths and shortcomings of human consciousness as necessary and purposeful.  In their respective texts, Kubrick and Dick introduce something radical and groundbreaking into Western culture – not by relegating human consciousness to a lower tier of the universe’s hierarchy (as Clarke does in Childhood’s End, which still maintains humanity’s teleological importance), but by uncovering the uncertainty of what the human is.  The center no longer holds: the texts of Kubrick and Dick, and many subsequent works in the science fiction tradition, illuminate the human not as an affirmative and natural identity, but as an epistemological construct.  The self, and the human, are illusions.
This argument is not intended to assert the actuality of selflessness, the impossibility of identity, or the unimportance of the human.  Even if “the self” is an effect of consciousness, and not a central core around which consciousness forms, it remains important to those who project it.  My argument, rather, is that these works provide us with radically alternative perspectives from which to consider our own existence, so that we might better understand organisms and entities that appear to us inferior, unintelligent, or even unconscious.  The purpose of blurring the boundaries between human and machine, or between artificial and actual consciousness, is not to insist that humans lack consciousness, but to emphasize that our perspective is limited – and, furthermore, that this limit might prevent us from effectively dealing with that which is “other.”
Two years prior to 2001: A Space Odyssey and Do Androids Dream of Electric Sheep?, the French theorist Michel Foucault published his now-seminal study of Western epistemological structures in the human sciences, The Order of Things.  In its preface, Foucault writes the following:
Strangely enough, man [i.e. human] – the study of whom is supposed by the naïve to be the oldest investigation since Socrates – is probably no more than a kind of rift in the order of things, or, in any case, a configuration whose outlines are determined by the new position he has so recently taken up in the field of knowledge.  Whence all the chimeras of the new humanisms, all the facile solutions of an ‘anthropology’ understood as a universal reflection on man, half-empirical, half-philosophical.  It is comforting, however, and a source of profound relief to think that man is only a recent invention, a figure not yet two centuries old, a new wrinkle in our knowledge, and that he will disappear again as soon as that knowledge has discovered a new form. (Foucault xxiii)

This remarkable statement contains several radical and unsettling claims: that the human is an “invention,” that it is less than two centuries old, and that it will “disappear.”  Foucault’s admission of feeling “relief” at this prospect might even strike some readers as a kind of vulgar misanthropy.  There are more nuances here, however, than many readers are willing to admit, and they become clearer once one absorbs Foucault’s entire text.  Specifically, Foucault’s target is the human not as a biological organism capable of experiencing pleasure or pain, but as an epistemological construct – a construct of knowledge, an “invention.”  This invention, Foucault argues, shapes the way in which humanity conceives of itself and its place in the universe.  It influences the way humans categorize other organisms, the way they hierarchize and historicize, the way they impose boundaries and make evaluative judgments.  In short, Foucault wishes to denaturalize assumptions that human beings have taken to be absolute.  The disappearance of humanity, which science fiction often represents literally, is understood by Foucault as an epistemological shift – one that would present human organisms with a new system of knowledge by which to observe, and exist within, the universe.
            Science fiction, like surrealism and gothic literature before it, challenges its readers to brave the “ultimate shadow” of existence – to dare to see the world in new ways, even at the cost of their own perceptual certainties.  Humanity must recognize the values and beliefs it takes for granted as products of the structure of its own consciousness; ideology, it seems, takes root in even the most basic biological processes.  Only by recognizing the contingency of our own capacities as conscious organisms can we ever hope to position ourselves radically – ethically, politically, and existentially – alongside the alien, the android, the “other.”

Works Cited

Clarke, Arthur C. Childhood’s End. New York: Random House, 2001. Print.

Dick, Philip K. Do Androids Dream of Electric Sheep? Four Novels of the 1960s. Ed. Jonathan Lethem. New York: Literary Classics of the United States, 2007. Print.

Foucault, Michel. The Order of Things: An Archaeology of the Human Sciences. New York: Vintage Books, 1994. Print.

Kubrick, Stanley. 2001: A Space Odyssey. Metro-Goldwyn-Mayer, 1968. Film.

Watts, Peter. Starfish. New York: Tor, 1999. Print.
