From rabbits with pocket watches to feminised motor-vehicles, we’ve been lending our traits and characteristics to non-human entities for the best part of human history. Generally speaking, personification and anthropomorphism are harmless. We’re used to seeing animals, vegetables, and minerals on their hind legs, adopting human voices and livelihoods, and we don’t bat an eyelid because we know that what we’re seeing is entirely removed from reality. Even in children’s media and literature – think Beatrix Potter, Watership Down, Cartoon Network’s Apple & Onion – we know that the singing falafel wrap we watch on TV is entirely fantastical, and thus irrelevant to the falafel wraps we experience in real life.
So, what happens when we start anthropomorphising technologies that are specially designed to operate as humanoid assistants, or even replace existing human jobs?
As far back as Mary Shelley’s Frankenstein and the birth of science fiction, we’ve pondered the responsibility that comes with our efforts to artificially produce or replicate human consciousness, now mindful of the pace at which we might be hurtling towards a technological singularity. Professor Stephen Hawking and Elon Musk have expressed fear for the future of this tech, the latter suggesting that AI – or the potential for a full, general artificial intelligence – may be our ‘biggest existential threat’.
Even before the 2011 arrival of Siri on the iPhone 4S, we’d been slowly growing accustomed to the presence of virtual assistants and chatbots in day-to-day life. We’re used to frequent encounters with non-human voices, whether from self-service checkout machines in the supermarket, or the enquiry-filtering process that precedes a phone call to the bank. For the most part, we hear these voices the same way we see the singing falafel wrap; we know they’re not ‘real’. But increasingly, efforts in technology attempt to narrow the gap between the real and the imitation, until an artificial product becomes ‘so good’ it’s almost indistinguishable from the real thing. It’s here, in this artifice, that the waters become a little murky.
Consider virtual assistants: there’s Siri, Alexa, Cortana, Watson… the list goes on, but the inclination towards human naming is clear. It’s hardly surprising – we name everything from boats to plants to guitars, and have done throughout history.
It’s not always about sentimentality, either. In a piece for The Atlantic, Adrienne LaFrance suggests that the naming of machines has less to do with affection, and more to do with asserting control and expressing trust. We feel the need to name machines out of a ‘desire to reorganize forces more powerful than we are so that they appear to be under human control. Whether or not they actually are.’ As we become increasingly reliant on technology, anthropomorphising it helps us to feel safer – driverless cars are a great example. When it comes to AI, however, the act of naming (and humanising) might work towards the very thing we’re afraid of: independent identity, thought, and intellect.
Merely giving something a name is anthropomorphism at its most gentle: a name presumes identity only on the surface. We know that not everything with a human name has human experiences. It’s when we attempt to further anthropomorphise man-made technologies – especially virtual assistants, whose functions are inseparable from existing human labour – that we’re looking at something more complicated. When we make something speak, sound, and react exactly as a human would, obsessing over the realism of its vocal inflections, we’re not just making an AI seem more trustworthy; we’re smudging the lines between man and machine.
Anthropomorphism is widely documented as a means through which we might – intentionally or unintentionally – satisfy a need for social connection. While we’re sceptical as to whether an anthropomorphised AI could ever be a replacement for human relationships, the introduction of voice-activated virtual assistants into family homes has raised some interesting questions regarding our relationship with artificial intelligences.
When we treat our virtual assistants like real people, we complicate our existing social relations. For children growing up in homes with an Amazon Echo, Alexa isn’t just a household name, but a fully responsive member of the family and part of everyday social interaction. With this comes a whole set of new challenges for operating in contemporary spaces. Debates on ‘tech etiquette’ start to expand beyond mobile phones at the dinner table. Should we be encouraging children to afford virtual assistants the same politeness we’d expect from each other?
Many parents agree that yes, they’d prefer their kids to say ‘please’ and ‘thank you’ when giving Alexa an order, to promote good manners across the board. My concern is that this feeds an unhealthy understanding of the relationship between humans and technology, and of the gratification we can receive from our interactions with either. However, in popular media we’re conditioned to empathise with the created ‘other’ time and time again: Ex Machina, Blade Runner, the droids of the Star Wars franchise and even (depending on your chosen alliances) the synths of the Fallout video game series. We’re encouraged to treat their displays of emotion as equal to those of a human, so what’s to stop us bringing this empathy into the real world?
In general (data-mining concerns aside), we’re fine with Siri and Alexa. We’re comfortable interacting with them, and we maintain an acceptable understanding of how these interactions differ from those with the people in our lives. Siri doesn’t creep us out in the same way that a full-on walking/talking humanoid might. But where does that leave something (someone?!) like Lil Miquela, Instagram’s favourite virtual influencer? How ethical is it to have a CGI ‘model’ on a platform overrun with unrealistic beauty standards?
Research suggests, perhaps counter-intuitively, that we can make robots seem less creepy by giving them more human traits. Voice assistants don’t make us uncomfortable, because they are explicitly distinct from real people: they don’t have a body. Humanoids, however, reside in that Uncanny Valley sweet spot where it’s simultaneously their similarities and dissimilarities to real people that give us the creeps. They’re too close for comfort – almost people, and yet still indisputably ‘other’. The problem with giving these artificial intelligences more human traits is that it doesn’t guarantee an exit from the Uncanny Valley. They remain ‘almost’ people right up to the moment that they stop being humanoids altogether and become living and breathing beings on par with humans (which, of course, invites a whole truckload of new anxieties). But for voice assistants, or other products of AI that we’re comfortable with, the addition of further human characteristics could push them out of our comfort zone and into the realm of the uncanny.
The difficulty is in finding moderation. Denying the realities of our current AI revolution is neither helpful nor preventative – rather, we need to be generous with our time and attention to ensure that these technologies develop in a way that is as beneficial as it is exciting. But at the same time, we need to stay as critical as we are curious.
The ethics of AI remains an entirely separate issue. While legislation struggles to keep up with technology, those working in AI development are left to navigate the tricky realms of technological advancement and ethical responsibility. In many ways, working at the forefront of AI is lifting the lid of Pandora’s box; if something comes out that we don’t like, there’s no guarantee we can ever fully shut it away again. Establishing clear boundaries and a mutually agreeable moral duty is imperative, which has led to the creation of AI ethics committees and other AI-related codes of conduct.
Although AI inherently lends itself to anthropomorphic projections, it’s important to keep in mind the potential effects on our perceptions of people, relationships, and technology. This matters even more now, as these technologies become more commonplace, and accessible to children who are only just starting to shape their views on the world around them.