Abstract

The robot protagonists in HBO’s Westworld open the door to several philosophical and ethical questions, perhaps the most complex being: should androids be granted legal protections similar to those of people? Westworld offers its own exploration of what it means to be a person and places emphasis on one’s ability to feel and understand pain. With scientists and corporations actively working toward a future that includes robots able to display emotion convincingly enough to pass for a person’s, what happens when androids pass the Turing test, feel empathy, gain consciousness, become sentient, or develop free will? The question becomes more complex given the possibility of computer error. What should happen if robots designed for companionship commit heinous crimes without remorse? Westworld poses such social and legal questions to its viewers and is thus ripe for classroom discussion. This essay explores the complex and contradictory implications of android hosts overcoming their dehumanization through an awakening to both experience and agency. With television and film holding a mirror up to reality, what can science fiction teach us that would help us prepare for such a possibility?
