AI Companions: Dystopian Fiction Becomes Reality
When Nobel Prize winner Kazuo Ishiguro’s book, Klara and the Sun, was published in March 2021, it was promoted as a science fiction novel. However, as we discussed in a previous Insight, the concept of an “Artificial Friend” was not as futuristic as it appeared, and we raised the legal and ethical issues posed by similar technology already in development. Fast forward only three years, and versions of the technology have become commercially available, while legal guardrails are no closer to being in place.
Several services, such as Snapchat AI, HereAfter AI, StoryFile, and Replika, now enable users to create avatars of loved ones and to “communicate” with them. For instance, users of Snapchat AI can interact with an avatar of a deceased spouse. Other services, such as ElevenLabs, allow individuals to build voice models from previously recorded audio; one reported example is an IT professional from Alabama who cloned his deceased father’s voice.
AI companionship services are also being developed for older adults, particularly those with dementia. For instance, ElliQ, a robotic companion designed for elderly individuals, aims to support seniors’ independence by engaging them in conversation, games, and educational activities. AI companions, in the form of chatbots, virtual assistants, or digital friends, are also being developed for vulnerable populations, such as people with depression, anxiety, and other mental health conditions, as well as for children and mainstream consumers worldwide. Familiar names like Sony have for years marketed robotic companion pets, which offer comfort without the added demands of caretaking.
Though these AI services may help combat social isolation and loneliness, they are not a replacement for human connection and can instead exacerbate those feelings and lead to emotional dependency. The use of AI may also blur the line between reality and artificially generated content, which may be false, biased, or designed to collect personal information. Mary-Frances O’Connor, a professor at the University of Arizona who studies grief, cautions that for individuals who turn to this technology when a loved one passes, “[our] brain has to understand that this person isn’t coming back,” and the technology could actually interfere with the grieving process.
As with all AI, responsible development must go hand in hand with cautious, informed use.