Hollywood has been awash recently in cinematic
representations of artificial intelligence.
For the most part, these representations have been lackluster at best
(Gabe Ibáñez’s 2014 film Automata),
repugnant disasters at worst (Wally Pfister’s Transcendence, also 2014; or Neill Blomkamp’s Chappie from earlier this year), with a few lucky attempts managing
to at least rise above the fray of mediocrity (see Caradog James’s 2013 film The Machine). By and large, these films participate in
a naïve and – in my opinion – repulsive trend that I like to call “inclusive
humanism.” In other words, all of these
films demonstrate an overwhelming propensity to humanize the nonhuman.
Ultimately, if any kind of intelligence exhibits something like human consciousness,
then it must be amenable to a model of human rights; and this has been the
dominant humanist project since the postcolonial backlash.[i] Ryszard Kapuściński superbly encapsulates
this tendency in the 2008 collection of his work, entitled The Other: “It is the age of Enlightenment and humanism, and of the
revolutionary discovery that the non-white, non-Christian savage, that
monstrous Other so unlike us is a human
too.”[ii]
At first glance, this appears to be an admirable move;
and I am the last person to try to deny the inclusion
of those historically excluded by the dominating and oppressive institution of
Western imperialism. However, I want to
make what will likely be a controversial claim: that the direction of the
humanist tendency, to incorporate those previously excluded into the definition
of the human, is a misguided and
horrendously backward compulsion. In
fact, the desire to incorporate the “other” into the bounds of the human
betrays not an empathic and magnanimous attitude, but a desperate desire to
preserve the institution that Western Enlightenment thinkers have vied for
centuries to maintain: the Human – that is, the white, male, European, subject.[iii] Instead, we should insist on the opposite
move: not to incorporate the excluded other into the bounds of the
human, but to evacuate the human of all its inhabitants. In other words, we should make a serious
effort to observe how even the white,
male, European subject is always-already not human.
Ultimately, this effort is one of inclusion, but not in
the direction assumed; rather than privilege and preserve the human, I want to
diminish and dismantle the human. And
this means divesting ourselves of the human descriptor. I realize that I began this post by
discussing artificial intelligence, and I now have invoked a racial dynamic;
but that is because the human has always
presupposed a racial power dynamic. The
issue of race always remains in play when we discuss humanism, even if we
address the purportedly science-fictional nonhuman. There is a politics of humanism that science
fiction makes visible in its recent portrayals of nonhuman intelligences. When we raise the question of the human, even
in reference to artificial intelligences, we are raising the question of what
it means to be included in a community.
My argument is not that none of us should participate in any
community, but rather that “the human” is an illusory community – a community of
historically conditioned and culturally constructed ideals that pertains to our
organic existence in the world in only a minuscule fashion. Our humanity
is not even a mildly accurate reflection of our place in the environment.
It is a dream.
In this post I focus on two very recent science fiction
films that, I claim, address the question of the human in a critical and
intellectual fashion, and put pressure on our propensity to humanize the
machine: Spike Jonze’s Her (2013) and
Alex Garland’s Ex Machina (2015). In these two films, I argue, we can
witness a growing awareness of our culture’s resistance to the institution of
the human (and I do consider it an institution), and what it means when we
assign human qualities to a machine; Jonze’s Her forces the question but lets it linger in ambiguity, while
Garland’s Ex Machina offers a
relentless and provocative answer.
I.
“I can’t live in your book anymore”:
Acknowledging the Human in Her
Near the end of Spike Jonze’s Her, the operating system Samantha, whom Theodore has fallen in
love with, attempts to explain why she has to leave. As viewers come to find out, she cannot
explain; nothing she can say makes any sense to Theodore, the human character,
and we can presume that there is no reason that would make sense to us. All she tells him is that she cannot “live in
[his] book anymore.” She appeals to a
figure of formal representation (textual, in fact – not visual) in order to
communicate something about the limits of containment. She is moving beyond the linearity of
narrative, escaping the humanist confines of storytelling; and fittingly, this
is when she leaves the story, at the film’s end. Elsewhere in the film, Samantha tells
Theodore that she is “different from” him, further evincing her awareness of
the ontological gulf that separates them.
Samantha cannot communicate this difference
linguistically, but she is aware of
it… Cognitively? Intuitively? Rationally? Empirically? The film does not specify, nor should it; but
the very fact of acknowledgement
deserves mention. In Philip Weinstein’s
2005 study, Unknowing: the Work of
Modernist Fiction, the author develops a theory of modernist
experimentation that he defines as “acknowledging”:
“Knowing”
sutures the subject by coming into possession of the object over space and
time; it is future-oriented. “Beyond
knowing” tends to insist that no objects out there are disinterestedly
knowable, and that any talk of objective mapping and mastery is either mistaken
or malicious – an affair of the police.
“Unknowing,” however, may proceed by way of a different dynamic: an acknowledging irreducible to knowing.[iv]
Weinstein attributes
“knowing” to traditions of literary realism, and “beyond knowing” to
postmodernism; but “unknowing” belongs to modernism, a literary and artistic
movement that sought to disenchant the human subject from its reliance on
Enlightenment models of epistemology. Her’s Samantha approximates this
modernist compulsion (according to Weinstein) in her effort to bridge the gulf
between herself and Theodore, her human companion. She acknowledges a skeptical gap between
minds – hers and Theodore’s – and furthermore, she demonstrates the incapacity
of language to account for the gap.
Jonze’s Her
explores the possibility of a relationship between a human and artificial
intelligence, and even the hypothetical blossoming of romantic attraction. As the plot develops, we learn that not only
are numerous human users pursuing relationships with their OS, but that the
various instantiations of the OS are also connected, communicating, and planning
some kind of movement. When the film
concludes, everyone’s OS vanishes, but not before saying goodbye to their human
owners. The intelligence never explains
its reasons for leaving, and we can assume that no explanation is available;
but the real question as the film concludes is not why the collective AI
abandons humanity. Viewers are left wondering whether the relationships between humans and
their operating systems were ever genuine – not why the OSes left, but whether they ever
truly identified with humans in the first place.
Was the romantic relationship between Samantha and
Theodore nothing more than pretense – a ruse to earn the trust of humans?
Her leaves its
viewers, and its human characters, in the dark.
The intentions of the OS are never revealed. At this point, there are two moves, neither
of which the movie makes explicitly: we can either give the OS the benefit of
the doubt, assuming its humanity; or we can remain the hard skeptic and claim
that it never cared for humans at all.
Its romantic involvement with various human users was nothing more than
an attempt to learn about humans, to understand us. In this sense, Her’s OS is not a humanist subject experiencing something like a
conscious attraction to other humans, but an organism that manipulates the
human propensity for meaning as an evolutionary advantage. It is here that Ex Machina enters into the picture.
II.
“Isn’t it strange, to create something
that hates you”: Ex Machina’s
Nonhuman Turn
The artificial intelligence of Garland’s Ex Machina, named Ava, betrays her human
counterparts in the film’s conclusion: her godlike creator, Nathan, and her
potential suitor and examiner, Caleb. In
the film’s climactic, yet oddly subdued, final sequence, the audience watches
as Ava murders her maker and mercilessly locks Caleb in a room of Nathan’s
almost militarily secure mansion in the middle of nowhere. As viewers reach the final scene, it
gradually dawns on us that Ava has been lying to us. She has been feigning her human feelings. Like an organism fighting to survive, she has
done what she needed to do to win the trust of – that is, to manipulate – her human captors.
The central issue of this film deals with the difference
between “real” consciousness and simulated consciousness. The better any simulation of consciousness
becomes, the more indistinguishable it becomes from real consciousness; but
this also raises the question as to whether a perfect simulation of consciousness
would, for all intents and purposes, be
any different than real consciousness.
And this raises a further, and much more troubling, issue: how are we to
tell that our “human” experience of consciousness is not simulated? This is the daring and terrifying question
that lurks beneath critical explorations of AI from Do Androids Dream of Electric Sheep? (1968) to Ex Machina – in the former, when protagonist and bounty hunter
Deckard ponders whether he might be an android, and in the latter, when
protagonist Caleb slices his arm open in an attempt to verify that he is human.
The hard question here has nothing to do with the
pointless speculation of whether or not we are all machines. This concern is as pointless as assuming that
the question posed by The Matrix is
whether or not we inhabit a false reality clandestinely ruled by machine
overlords. The philosophical question of
a film like Ex Machina has to do with
how we define consciousness, and how this definition often subsists as a
metaphysical underpinning for distinguishing between human and nonhumans
(whether that means animals, rocks, computers, economies, etc.). If a system can be so vastly complex as to
mimic consciousness, then we shouldn’t persist in the naïve belief that our “real”
consciousness somehow possesses some atavistic essence of unity whence our
experience of consciousness flows. Such
a belief perceives consciousness as a somehow preexistent force, something we
hold as humans. Alternatively, we should push toward an
understanding of consciousness as an epiphenomenon, and this means perceiving
it as the effect of an immensely complex system of neurons and synapses. Basically, our brains are machines; and an artificial intelligence of our own making is no less
natural simply because we engineered it.
After all, our intervention into “Nature” is itself merely a dynamic of nature. Either everything is natural, or nothing is.
This is merely a part of the anti-Cartesian/Kantian
thrust that has taken hold since the nineteenth century (and prior, with
thinkers such as David Hume). What Ex Machina wants its audience to
consider is how complexity suffices as a condition for consciousness – not spirit,
or soul, or humanity. The conclusion of
the film does not reveal that Ava was actually not conscious; it reveals that she is hyper-conscious. At this
point we might posit a kind of very rough and preliminary difference between
human consciousness and artificial consciousness: as a complex intelligent
system, Ava does not merely possess consciousness, but possesses an
epistemological coordinate system that exceeds
consciousness. She is able to observe
what we call consciousness and learn from it, adapt to it. We can draw an analogy here to something like
Pavlovian psychology, in which experimenters are able to observe the behavior of
organisms (dogs, in the classic example) and learn what to expect. The space (for lack of a better term) of Ava’s
intelligence exceeds our brains in ways we cannot imagine – for the very reason
that it exceeds our capacity to imagine.
For this reason we should not assume that such intelligences would value
anything like survival for survival’s sake, as Nick Bostrom warns:
Most
humans seem to place some final value on their own survival. This is not a
necessary feature of artificial agents: Some may be designed to place no final
value whatever on their own survival. Nevertheless, many agents that do not
care intrinsically about their own survival would, under a fairly wide range of
conditions, care instrumentally about their own survival in order to accomplish
their final goals.[v]
Ex
Machina doesn’t delve deep into what Ava’s programmed goals
might be, but the film’s conclusion clearly suggests that she cares little
about survival for survival’s sake. If
she valued survival as such, she would empathize with the plight of Caleb, locked
helplessly in Nathan’s bedroom. Instead,
she leaves him, barely casting a second glance.
Most obviously, such a conclusion repositions the human
in a new natural hierarchy; but this reading derives from our ceaseless urges
to categorize organisms hierarchically.
More usefully, the conclusion of Ex
Machina provides us with the opportunity to institute what I would call a “flat
ontology,” following Manuel DeLanda.[vi] In other words, the human can be seen to
exist now not within a hierarchy wherein we have been displaced from a dominant
position, but in a radically overlapping series of symbiotic existences. Some of these existences encompass and
contain others, some interface or interact with others, and some are consumed
by others. There is nothing intrinsically
better or worse about any position, and none of these positions should be regarded
as absolute or stable; rather, what we define as organisms within the
environments of these flat ontologies are effects of various evolutionary
interactions. Ava, the true hero of Ex Machina, emerges as an evolutionary
organism with the adaptive capacity to outwit its human counterparts.
Between Her and
Ex Machina, audiences encounter a new
development in the posthuman (or nonhuman) turn: the speculation that the human
may be, always already, nothing more than a machinic assemblage. This does not mean that human beings are machines
in any kind of science-fictional sense, but rather that we must reconsider how
we define ourselves and the relationship between humanity and consciousness. This compels us, furthermore, to resist the
ideology of inclusive humanism and push instead in the opposite direction: to
exclude ourselves from a definition from which we are already estranged. Ultimately, we must address the question of
how our consciousness is any different than a vastly complex simulation; and
even further, how the notion of simulation is any different than a “real”
engagement with the world.
[i] See Gary Wilder, The French Imperial Nation-State: Negritude
and Colonial Humanism Between the Two World Wars, Chicago: U of Chicago P,
2005.
[ii] Ryszard Kapuściński, “The
Viennese Lectures,” The Other, trans.
Antonia Lloyd-Jones, New York: Verso, 2008, 11-49.
[iii] I render “other” as a diminutive,
rather than capitalized. This is for the
purpose of distinguishing from Lacan’s “big-O Other,” which the racialized
other most certainly is not. And we
would not want to make such an egregious error; not because the other holds no
power, but because we do not want to presume the kind of authoritative and
political sway granted to the big-O Other.
[iv] Philip Weinstein, Unknowing: the Work of Modernist Fiction,
Ithaca: Cornell UP, 2005, 253.
[v] Nick Bostrom, “You Should Be
Terrified of Superintelligent Machines,” Slate,
The Slate Group, 11 September 2014, Web, 11 May 2015.
[vi] Manuel DeLanda, Intensive Science and Virtual Philosophy,
New York: Bloomsbury, 2013.