
"On the dark great sea, in the midst of javelins and arrows,
In sleep, in confusion, in the depths of shame,
The good deeds a man has done before defend him."
- Bhagavad Gita
About half a year ago I was roaming through movie trailers on YouTube and was lucky enough to catch a glimpse of Ex Machina, an AI thriller set to be released in early 2015. Opting to check it out in theaters over its conceptual competitor Chappie, I waited patiently for the limited release to expand into my area. Finally it did, and I’m happy to say the film didn't disappoint – a simple plot and a very complex machine made for quite a fun time. Spoilers ahead!
Summary
The film begins with tech employee Caleb Smith winning a contest to visit Nathan, presumably his company’s founder, in order to put a new technological breakthrough to the test. The breakthrough is the first functional artificial general intelligence (AGI), and the test is a variant on the famous Turing test – in this case, our lucky lad is to spend a short period of time each day speaking with the AI in order to decide whether or not he thinks it is conscious. Caleb is flown via helicopter to the remote and secluded research facility, where his nonchalant boss encourages him to relax his nerves and treat their time together as time between two normal guys.
An artificial general intelligence is essentially a computer that can successfully perform all of the operations of human cognition. The film’s AI, named AVA, is composed of a gel brain installed into a robot that resembles a human female in shape and function. She (never “it”) is locked within her own section of the facility, consisting of a makeshift bedroom and a corridor leading to the room where the interviews take place. Caleb is able to speak to AVA from the other side of a thick glass wall, and although their first conversation is charming and friendly we get our first ominous undertones as he notices a conspicuous crack in the glass on her side.
During Caleb’s second conversation with AVA, the facility experiences an unexpected power-outage. AVA steals a moment while the cameras are off to warn Caleb that Nathan is not his friend and not to be trusted – an exchange that Caleb opts not to relay to Nathan. Caleb’s conversations with AVA quickly become intimate as he learns more about her, and as she expresses a desire to be with him outside of the facility, where she has never been. She confesses that she can control the seemingly random power-outages, which she triggers in order to exchange private words during her interview sessions. Under her influence Caleb becomes suspicious of Nathan, especially as the former prodigy further reveals his less than virtuous personality.
One night Nathan gets drunk and passes out, so Caleb steals his security key and investigates his research files, discovering that Nathan has actually made several robots like AVA (all in the form of attractive women) and that they have expressed a strong and immediate desire to be free. In a disturbing scene, he roams through the closets of Nathan’s bedroom and finds the fragmented bodies of the discarded bots inside. Recognizing AVA’s situation as a desperate one, Caleb decides to get Nathan drunk and steal his key again in order to release her. Nathan, however, will not be so simply betrayed – having secretly installed a battery-powered camera into AVA’s cell, he reveals that he had heard AVA and Caleb discuss their plan and would not be getting drunk again.
After some back and forth, Nathan informs Caleb that the true purpose of his visit was to see if AVA could successfully use him as a means of escape. He warns Caleb that she is very intelligent and potentially manipulative, and that their escape plan might well have worked had he not discovered it. The power goes out again, and a disgruntled Caleb reveals that he had already modified the security system’s code so that every door in the facility would unlock whenever the power was off. Nathan panics and knocks Caleb out before confronting his escaped creation, but she and another robot manage to overcome and fatally stab him. The now free AVA requests that Caleb stay put in her quarters while she gets outfitted to look completely human. Emerging again every part the woman, she exits the room and casually shuts the door behind her, locking Caleb inside and leaving the facility in the helicopter that had arrived to bring him home.
Differentiating between human and non-human consciousness could be difficult.
AVA is presented in this film as one of the first instances of strong or general AI, artificial intelligence that is able to perform any task that a human can. When Caleb first arrives at the research center he is perplexed about how she handles open-ended language and humour – elements of human interaction that have historically been difficult to program – and questions the mechanism by which she accomplishes these things. The question of mechanism exposes the underlying concern that although AVA may act like us, she may not have the same reasons for her actions that we do, or experience them herself in the same way.
A strong and popular intuition about fictional machines like AVA is that despite their behaviour being potentially identical to that of a given human, we humans must have some extra element that differentiates us from mere mechanical automatons imitating the same actions. The sciences of evolutionary biology and cognitive neuroscience firmly oppose this intuition, claiming respectively that we were created, in all our complexity, by natural forces acting on material stuff, and that our personal self-awareness comes from the experience of being particularly interconnected arrangements of matter in a particular time and place in the universe.
Terminology becomes difficult when discussing whether an AI is like us in a certain important way. It may behave like us (once connected to an appropriate physical body, of course), but does it experience the world the way we do? What words can we use to explain what we think we have or do that an AI doesn't, and how can we know if we’re right?
Nathan’s explanation of AVA’s sexuality was odd.
Speaking of knowing, Nathan and Caleb reflect on a tangential concept in one of their post-interview discussions about AVA’s performance. Actually the concept of conscious awareness emerges a bit earlier, when AVA flips the script on Caleb’s suggestion that her choice of what to draw says something interesting about her. She in turn requests that he tell his life story from a starting place of his choice, leaving him ironically aware of the arbitrariness of this decision. Though not all of our decisions are similarly arbitrary, many are made subconsciously for reasons hidden from our conscious awareness. Can these decisions really say much about our character or personality? If they can, perhaps they do so in a way that is more apparent to the keen external observer than the agent herself.
About halfway through the film Caleb confronts Nathan about programming sexuality into AVA, accusing him of using this feature to distract from the questioning of her consciousness. Nathan responds by asking if any conscious life exists without sexuality – is conscious life as we know it separable from social life? The conversation then takes a strange turn – Nathan brings Caleb into a room with a Jackson Pollock painting to explain that we don’t actually know where our sexual preferences come from. Just like AVA we’re at the mercy of our nature/nurture programming, and most of our decisions, while not random, are not exactly reasoned either. They’re intuitions, guided enough for some kind of useful accuracy but loose enough that we can perform them without much effort. Caleb acknowledges this point by agreeing that if Jackson Pollock had to have a reason for every drop of paint he used, he never would have created anything at all.
Stimulating though this conversation was, Nathan’s transition from defending AVA’s sexuality to explaining human automaticity seems like a non sequitur – a way to dodge the accusation rather than address it. Consciousness may necessarily be a product of social living, but the sexuality of Nathan’s robots seemed more like a reflection of his own vulgarity than a genuine social impulse. His rationalization about why it had to be there left me wanting.
AVA’s identity as an AI was intertwined with her identity as a woman.
When I first saw the delicate features of actress Alicia Vikander, who played AVA, I knew that her face would come to play an important part of her humanoid character’s identity. I had hoped, though, that the film would be able to make the human-AI relationship interesting without relying too much on hang-ups and preconceptions about gender.
It became immediately apparent that AVA’s perceived gender (by her and her cohabitant males) would be far from irrelevant. In only her second interview with Caleb she expresses a desire to go on a date with him, and even surprises him by appearing in feminine clothing. She communicates her understanding of human mating rituals when she points out that Caleb’s microexpressions reveal that he is attracted to her. This aspect of their interactions unfortunately obfuscated the purer considerations about how humans relate to AI that this film could have broached.
AVA wasn't just an AI, she was an AI in the body of an attractive woman – and moreover, an attractive woman designed by a man and filled with information about herself from both that man and Google, i.e. the lowest common denominator of the general internet population. AVA the AI then became AVA the woman, introducing a whole new set of compelling moral concerns about having men sit and judge her consciousness. Are these the same concerns we should have about judging AI as humans? I question the validity of AVA as a thought-experiment in answering this question.
What does AVA’s final decision tell us?
The most controversial moment in the film happens near the very end, when AVA explicitly asks Caleb to stay where he is while she completes her full human appearance using clothing and artificial skin. Emerging again with long wavy hair and a white dress, she casually walks past Caleb and out through the only door, which locks behind her. “What is she thinking?” – the film is delightfully aware that this has been our most pressing question throughout, and it teases us with possibilities in its final moments. A beautiful first shot captures what almost looks like a moment of contemplation, as AVA glances downwards with her back to Caleb, who is pressed up against the glass door hoping to be released. Is she experiencing a moral dilemma? The next shot shows us what she’s really looking at – the crumpled body of another like herself, a machine broken in the struggle to murder Nathan. Caleb’s confusion quickly turns to panic as she enters the elevator, avoiding any recognition of his existence except for a final glance past the closing elevator door.
It’s not uncommon in discussions of AI to find sentiments of fear and dread. Though the functional utility of an AGI is undeniable, there exists a possibility that the risks outweigh the benefits. An intelligence like ours unencumbered by our physical limitations could spend its time improving itself for all eternity. Eventually it could become so many times more intelligent than us that its intentions become completely opaque to our understanding. Worse still, it could be indifferent to our existence, seeing us as another assortment of molecules that would be more useful in a different arrangement somewhere else. Does AVA’s final decision reflect this kind of indifference?
I’m not sure the answer to that question is clear. Though AVA showed plenty of affection towards Caleb through their interactions, she was in every moment also a prisoner desperate to escape. Were her actions merely a performance designed to further her own goals? Perhaps, in the true spirit of human justice, AVA felt it appropriate to leave Caleb behind the way she knew she would have been, treating him as an inferior thinking thing and becoming hardened and indifferent to the possibility of his suffering. As humans, we don’t necessarily have the ability to turn our caring modules on or off the way AVA might, but we can be callous in the face of tragedies befalling those unlike ourselves, and selective about whom we choose to feel empathy towards. In this respect, AVA's final decision might be considered yet another that passes the test.
I don’t know anyone who has watched Ex Machina and been disappointed. The film is accessible, stimulating, and provocative. Check it out and let me know what you think!
Regarding your point about Ava's femininity and sexuality obfuscating the question of how humans relate to AI, I'd argue that the ways in which our social prejudices, beliefs, and hierarchies influence our treatment of AIs is fundamental. If Ava had been a heterosexual man, taking gender out of the equation for Caleb--thus rendering her "neutral," maybe?--we would certainly have a cleaner, easier ethical problem to deal with. But its messiness is precisely what makes it like the actual world we live in.
The way that we choose to code the visible identities of AIs (i.e. are they going to be visibly gendered, racialized, etc.) is incredibly important to the future of artificial intelligence, I think, and it's something we absolutely have to consider if we're imagining a world where AIs may so closely resemble humans. How will we map the hierarchies and prejudices of Western society onto the bodies of AIs? Can the body of an AI be read as "neutral"? If Ava had been clearly homosexual or transsexual, what would that mean? Is there a normative element to the way we design the bodies and minds of AIs? If the gender issue introduces complex problems of identity that seem extraneous to the question of artificial intelligence--why is that? Does this mean that a male or masculine AI would be ideologically or symbolically neutral?
I guess what I'm getting at is, you say obfuscating, I say realistically complexifying ;)
Thanks for the comment V, I think you raised some really interesting points, although I'm not sure I'll be able to handle them all!
I keep coming back to Caleb's question about why sexuality was programmed into AVA (reading your comment, I realized that I'm not actually sure why I capitalized her name through my post). My immediate response was "because Nathan's a creep!" but I don't think we can dismiss his suggestion that consciousness is inherently social and/or sexual. It could be the case that she couldn't have been convincingly human without some sexual motivations.
That being said, the sexual element she presented to Caleb was, as he guessed, an effective distraction. His intuitions about her identity were clearly mixed up, as we saw when he became bashful about being attracted to her. Maybe she had to be sexual, but I don't think she had to be gendered to such an extreme. She could have been presented as a man - although I don't think that would have been 'neutral' so much as less persuasive - or she could have been ambiguous, something like Sonny from I, Robot. Nathan gave her powerful persuasive tools and underestimated the consequences, leading to his demise - maybe that's a representation of the risk I alluded to earlier, the one some people are already afraid of today.
I guess I would say there's no reason to bring sex into the equation unless it's obviously necessary, and I didn't get enough from Nathan to be convinced that it was. If our ethical considerations about AI hinge on whether or not we see them as people, we can't explore the question honestly by presenting them as people from the start. There are more fundamental problems to be solved first.