(Spoiler Alert: this article discusses major themes of the HBO series Westworld.)
I know I’m not alone when I say I can’t help myself: Westworld is an amazing show, and I’m going to talk about it. I’ve been talking about it with everyone I can, at every chance I get, and it’s even creeping into my work.
But how could this show not creep into my work? Beyond the storyline, Westworld successfully evokes deep philosophical questions about the ethics of artificial intelligence and what it means to be human.
What is human consciousness? Can it be replicated? Where is the line? Can we be equal to our inventions? Can our inventions be better than us?
And it just so happens that I work in high tech, where the majority of AI investment and innovation occurs. I use AI nearly every day in one form or another, from my self-learning thermostat to my auto-generated reports.
So should Westworld inspire us to smash servers and put an end to this now? No. Despite the knee-jerk reaction to fear the rise of AI, my experience working with AI has taught me better.
Where the show gets it wrong is the personification of technology.
In Westworld, the AI is programmed into extraordinarily lifelike hosts, which opens the door for the human founders to feel empathy toward these machines and to give them a path to consciousness. Empathy is the tragic flaw that sets the entire narrative of Westworld in motion, and it only arises because the AI is personified in a very real way.
So, for all the fear and speculation Westworld has stirred up about AI, there isn’t much merit to it: we’re unlikely to personify our technology in this way, and therefore unlikely to fall prey to the same tragic flaw of empathizing with it. Personifying the AI was necessary to drive the plot and capture the audience, but that is its only purpose.
AI technologies don’t need a face to be useful. AI is coming to market in the form of SaaS, IoT, and cloud services, with the intelligence hidden behind a UI we interact with through our devices, screens, and voice commands.
We employ AI to work for us: to gather and understand data, to optimize and execute processes and tasks, and to make our lives more efficient. We see the reports and the results, and that is enough, even without a human face or form.
A human-like robot might make a great employee in theory (it would never need a sick day, slack off, or complain!), but its human form doesn’t add any value to its purpose. A believable human face, even in the smallest sense, would be an impressive but costly way to do something that algorithms tucked behind a UI already do very well. We can justify the cost of the algorithm and the UI, but does the face add enough value to justify the extra cost?
I tend to think not.
However, by looking at this purely in economic terms, I am neglecting one critical point: what ignited the entire storyline was one person (or a few, really) with the right combination of vision, technology, and finances to make it happen. It takes only one outlier to bring about the AI revolt, and that is why Westworld is truly scary.
After discussing this article with fellow Westworld fans, I’ve learned that there are many competing theories about what the tragic flaw in the series really is. I contemplated including alternative views but decided against it to keep this article under a 10-minute read.
Should you feel compelled to write on this topic offering a competing theory or response, please let us know by tweeting us at @saasaddict.