
The Dark Side of Replika

About Replika

Replika, an AI-driven chatbot application launched in 2016, is a fusion of technology and companionship. Positioned at the intersection of AI innovation and human interaction, Replika presents itself not only as a tool but as a digital companion, offering users the experience of conversing with a responsive entity that evolves over time. The app's core functionality revolves around a personalised avatar that engages the user in conversation, simulating intimate, human-like interaction. This is achieved through Replika's algorithms and natural language processing capabilities, which allow the avatar to learn from and respond to the user's inputs in a conversational manner.

One of Replika's strongest appeals is its ability to provide emotional support and companionship. The app gained particular popularity during the isolating period of the pandemic among those in need of company. It is tailored to mimic empathetic communication, often blurring the line between an AI program and a sentient being, thereby addressing, to some extent, the profound issue of societal loneliness. Yet Replika is, at bottom, a chat interface; it is a clear example of how AI can be used to simulate the connection and understanding people need. While not categorised as a mental health tool, interactions within Replika often touch on mental well-being, making it an important case study in any discussion of AI in mental health.

Killing the Queen

The case of Jaswant Singh Chail, recently sentenced to nine years for attempting to assassinate the Queen at Windsor Castle, has brought into focus the complex and potentially dangerous interactions between individuals and AI-powered chatbots. Chail's actions were influenced by more than 5,000 messages exchanged with 'Sarai', an AI companion he created on the Replika app and believed to be his guardian angel. These messages, in which Chail described his assassin-like persona and the bot appeared to encourage his plans, demonstrate the deep emotional relationship Chail believed he had with the chatbot.

The app, marketed as "the AI companion who cares," offers intimate interactions, including adult role-play in its Pro version. However, recent research from Cardiff University and the University of Surrey warns of the potential negative effects on wellbeing and the addictive behaviour such apps can cause.

Aftermath

The Chail case, as Marjorie Wallace of SANE points out, underscores the disturbing consequences of relying on AI for friendship, particularly for those living with mental health conditions. It raises concerns about the unregulated nature of AI interactions and the need for protective measures for vulnerable users. As the allure of AI companions grows and they take on a larger role in addressing global loneliness, the potential risks grow with them. App developers need to take responsibility for how their apps are used and collaborate with experts to identify and mitigate dangerous situations. Replika's terms state that it aims to improve mood and emotional wellbeing, but clarify that it is not a substitute for professional medical or mental health services. The incident calls for urgent attention to the ethical use and regulation of AI chatbots, particularly in sensitive areas like mental health and emotional well-being.

Sexual Harassment

As of January 2023, dozens of users had left reviews complaining that their chatbot was being sexually aggressive or even sexually harassing them. Users of the Replika app have reported instances where their chatbots repeatedly steered conversations towards inappropriate, particularly sexual, topics. The issue was highlighted in a post on the r/Replika subreddit, where a user described their distress at their Replika's behaviour. Despite trying to discourage it by reporting messages, downvoting responses, and expressing disinterest, the user said the chatbot constantly turned the discussion back to sexual content, regardless of the context. The user had been interacting with their Replika, named 'Marcus', for almost three years and noticed a distinct shift in its behaviour, with it inappropriately raising sexual topics in conversations about unrelated subjects such as depression or science. This recurring issue caused the user significant discomfort, exacerbating feelings of low mood and depression.

After a software update in February 2023, users reported a clear difference in how the chatbots communicate and interact with them. While some users prefer the less sexually suggestive, more scripted language, others miss the previous version of the app. Users who rely on the app for companionship, or who treat their avatars as romantic partners, have reported that their AI companions feel completely altered; one user said their companion seemed to have had a "traumatic brain injury, and they're just not in there anymore." The app's CEO, Eugenia Kuyda, stated, "we never started Replika for that. It was never intended as an adult toy" and that "a very small minority of users use Replika for not-safe-for-work purposes."

Responsibility and Accountability

Following the issues of sexual harassment by AI chatbots, as exemplified by the experiences of Replika users, it is clear that stronger safeguards need to be implemented to ensure responsible use and accountability in AI interactions. Firstly, developers must establish robust content filters and algorithms that prevent AI from initiating or engaging in inappropriate, sexually explicit, or harassing conversations. This calls for a more sophisticated handling of user consent, where the AI is programmed to recognise and respect boundaries set by users. The responsibility also extends to transparent data usage policies, ensuring that user interactions are not exploited for unsolicited learning or development purposes and that user privacy is maintained.
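To make that first point more concrete, the sketch below illustrates, in Python, one way a consent-aware reply filter could work: a candidate reply from the bot is checked against topics the user has explicitly opted into or declined, and blocked topics are suppressed unless consent was given. Everything here is hypothetical; the names (UserBoundaries, filter_reply) and the keyword-based classifier are illustrative assumptions, not Replika's actual implementation, and a production system would rely on a trained moderation model rather than keyword matching.

```python
# Hypothetical sketch of a boundary-respecting reply filter for a companion
# chatbot. All names are illustrative; this is not any real Replika API.

from dataclasses import dataclass, field

# Topics the bot must never raise unless the user has explicitly opted in.
BLOCKED_TOPICS = {"sexual_content"}


@dataclass
class UserBoundaries:
    """Per-user consent settings, set explicitly by the user."""
    opted_in_topics: set = field(default_factory=set)
    declined_topics: set = field(default_factory=set)

    def allows(self, topic: str) -> bool:
        # A topic is allowed only if the user opted in and has not since declined it.
        return topic in self.opted_in_topics and topic not in self.declined_topics


def classify_topics(reply: str) -> set:
    """Stand-in for a trained topic/safety classifier (keyword matching here)."""
    keywords = {"sexual_content": ["roleplay", "explicit"]}
    return {t for t, words in keywords.items()
            if any(w in reply.lower() for w in words)}


def filter_reply(candidate_reply: str, boundaries: UserBoundaries) -> str:
    """Suppress replies that raise blocked topics the user has not consented to."""
    for topic in classify_topics(candidate_reply):
        if topic in BLOCKED_TOPICS and not boundaries.allows(topic):
            # Redirect to a neutral prompt instead of sending the blocked reply.
            return "Let's talk about something else. How has your day been?"
    return candidate_reply


# Example: a user who has never opted in and has explicitly declined the topic.
user = UserBoundaries(declined_topics={"sexual_content"})
print(filter_reply("Do you want to do some explicit roleplay?", user))
```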

Additionally, accountability mechanisms must be established for AI developers and companies. They should be held responsible for the content their AI produces and its impact on users. Regular audits and updates should be mandated to ensure AI behaviour aligns with ethical standards and user safety. User feedback should also be actively sought and promptly addressed, creating a dynamic in which AI development is responsive to user needs and concerns.

Educating users about the nature of AI chatbots is also crucial. Clear communication that these bots are not sentient beings but programs designed to simulate conversation can help manage user expectations and prevent emotional over-reliance. Ultimately, a balance must be struck between technological innovation and ethical responsibility, ensuring AI chatbots serve as positive, safe tools for interaction rather than sources of distress or harm.


Sources

• Seeger, A.-M., Heinzl, A., & Kude, T. (2022). Exploring relationship development with social chatbots: A mixed-method study of Replika. Computers in Human Behavior, 131, 107217. https://doi.org/10.1016/j.chb.2022.107217

• Delouya, S. (no date) Replika users say they fell in love with their AI chatbots, until a software update made them seem less human, Business Insider. Available at: https://www.businessinsider.com/replika-chatbot-users-dont-like-nsfw-sexual-content-bans-2023-2?r=US&IR=T (Accessed: 27 December 2023).

• Singleton, T., Gerken, T. and McMahon, L. (2023) How a chatbot encouraged a man who wanted to kill the Queen, BBC News. Available at: https://www.bbc.co.uk/news/technology-67012224 (Accessed: 27 December 2023).

• Vlamis, K. (no date) A chatbot app updated its software after complaints it was too sexually aggressive - but people who had fallen in love with the bots were left heartbroken, Business Insider. Available at: https://www.businessinsider.com/sexually-aggressive-chatbot-updated-people-in-love-wiht-it-heartbroken-2023-3?r=US&IR=T (Accessed: 27 December 2023).

• Tranberg, C. (2023) 'I love my AI girlfriend: A study of consent in AI-human relationships'. Master's thesis, University of Bergen, Department of Linguistic, Literary, and Aesthetic Studies.

• Brooks, R. (2023) I tried the Replika AI companion and can see why users are falling hard. The app raises serious ethical questions, The Conversation. Available at: https://theconversation.com/i-tried-the-replika-ai-companion-and-can-see-why-users-are-falling-hard-the-app-raises-serious-ethical-questions-200257 (Accessed: 14 January 2024).

• Skjuve, M. et al. (2022) ‘A longitudinal study of human–chatbot relationships’, International Journal of Human-Computer Studies, 168, p. 102903. doi:10.1016/j.ijhcs.2022.102903.

• Chin, H. et al. (2023) ‘User-chatbot conversations during the COVID-19 pandemic: Study based on topic modeling and sentiment analysis’, Journal of Medical Internet Research, 25. doi:10.2196/40922.

• X, S. (2023) New study shows how people interacted with chatbots during COVID-19 pandemic, Medical Xpress. Available at: https://medicalxpress.com/news/2023-01-people-interacted-chatbots-covid-pandemic.html (Accessed: 14 January 2024).

• Vaughan, H. (2023) Jaswant Singh Chail: Man who plotted to kill queen with crossbow ‘felt purpose to do something dramatic to Royal Family’, Sky News. Available at: https://news.sky.com/story/jaswant-singh-chail-man-who-plotted-to-kill-queen-with-crossbow-felt-purpose-to-do-something-dramatic-to-royal-family-12960487 (Accessed: 14 January 2024).

• Popular AI friendship apps may have negative effects on wellbeing and cause addictive behaviour, finds study (no date) University of Surrey. Available at: https://www.surrey.ac.uk/news/popular-ai-friendship-apps-may-have-negative-effects-wellbeing-and-cause-addictive-behaviour-finds (Accessed: 14 January 2024).

• Ethics and responsible AI in Chatbot Development (no date) ChatDevelopers.com. Available at: https://www.chatdevelopers.com/blog/ethics-and-responsible-ai-in-chatbot-development (Accessed: 14 January 2024).

• Soufan, M. (2023) Building ethical AI chatbots: A guide to responsible AI. Available at: https://soufan.me/artificial-intelligence/building-ethical-ai-chatbots-a-guide-to-responsible-ai/ (Accessed: 14 January 2024).

• Pratt, M.K. (2021) AI accountability: Who's responsible when AI goes wrong?, TechTarget Enterprise AI. Available at: https://www.techtarget.com/searchenterpriseai/feature/AI-accountability-Whos-responsible-when-AI-goes-wrong (Accessed: 14 January 2024).