The first of this week’s films is a thought-provoking account of the ‘dying thoughts’ of a self-aware robot. Robbie estimates that he has been floating in space for 6000 years. Now, as his batteries fail, he records his last words in the hope that someone will hear them and ‘wake him up’ again.
The issues explored in this film have obvious parallels with those of Ridley Scott’s cult classic Blade Runner. Indeed, Robbie could almost be a prequel to it.
Blade Runner is set in a not-too-distant dystopian future (2019) where increasingly sophisticated androids have been developed by large and powerful corporations. Initially, robots were built to perform tasks too dangerous or too boring for humans to do. Later, second-generation robots were bio-engineered for space exploration in inhospitable environments. Now, third-generation ‘synthogenetic replicants’ made from skin and flesh cultures have been created.
These ‘Replicants’ are virtually indistinguishable from humans, designed to mimic them in every way except emotional response. However, the simulated humans develop intense emotions over time (and in so doing, develop their own agendas), so they are given a limited four-year lifespan. In a quest to extend this period, a small band of Replicants has escaped from an Off World Colony in a stolen shuttle and returned to Earth to confront their maker.
In light of this, our film of the week, Robbie, makes me wonder about NASA’s motivations in giving Robbie a personality. In Blade Runner, the development of emotional responses in the Replicants is seen as an undesirable side effect. Replicants with emotions are more difficult to control: they develop motivations and desires of their own, sometimes in conflict with those of the people they are built to serve.
But Robbie tells us about how he’s “…always been an emotional person,” who gets “very upset when… [human] friends would go away into space”. Perhaps investing machines with an emotional attachment to the people they serve teaches them to prioritise human welfare when working closely with them. It could also help human crew to accept the machine – even develop a bond with it – and work as a more efficient team.
The course notes ask: ‘If Robbie is capable of experiencing loneliness, happiness, faith and friendship, in what senses is he not human?’ This brings us back to Professor Fuller’s unresolved discussion, ‘Humanity 2.0: defining humanity’. Philosophers often compare the functioning of the brain with a computer’s ability to process information. We also know, from studies using magnetic resonance imaging and from observations of brain surgery patients, that certain personality traits are associated with particular regions of the brain. Perhaps, then, it is reasonable to deduce that Robbie is, indeed, human.
The course notes go on to ask: “If the humanistic principles of autonomy, rationality, self-awareness, responsibility, resilience and so on can be held by an artificial intelligence within a mechanical form, what does that say about the extent to which they rely on human cognition and the flesh of a human body to give ‘human’ meaning to the experience of the world?” It does seem rather inhumane to create a sentient being, teach it to love and then banish it to a life of isolation. I feel great empathy for Robbie, but at a visceral level, I can’t quite bring myself to share the label ‘humanity’ with a machine.
It is interesting, then, that I find it much more difficult to condemn Blade Runner’s Replicants to the scrap heap. Try watching the ‘Tears in the rain’ soliloquy, in which Replicant Roy Batty (played by Rutger Hauer) dies.
While my head might argue that humanity resides somewhere other than in flesh and blood, my heart, it seems, arrives at the answer through inductive reasoning: if it looks like a man, walks like a man, talks like a man… I don’t know what you think, but I’m sure Rick Deckard would agree.