The integration of AI chatbots in digital mental health interventions is a growing area of research. These chatbots are being used for diagnostics, symptom management, behaviour change, and content delivery in mental health care. AI-driven chatbots like Woebot and Tess represent a major advancement in the use of technology for mental health care. These chatbots are designed to provide symptom management and emotional support to users. Woebot, for instance, adapts to users' personalities and guides them through therapeutic exercises, while Tess offers round-the-clock emotional support, helping users manage anxiety and panic attacks. Additionally, wearables in mental health care are emerging as crucial tools. They can monitor physical indicators such as sleep patterns and heart rate, which are valuable for assessing mental states. These wearables, combined with AI's data analysis capabilities, can provide early warnings for mental health issues, enabling timely interventions. This technology not only aids in immediate support but also contributes to the broader understanding and management of mental health conditions.
A Study of Cleverbot and Simsimi
In an insightful study around mental health and AI, researchers have delved into the realm of artificial intelligence (AI) chatbots, particularly Cleverbot and Simsimi, to understand their role in mental health conversations. This study stands out for its focus on the engagement level and nature of mental health discussions occurring on these platforms, shedding light on a critical aspect of consumer interaction with AI technology.
The study aimed to determine the prevalence and nature of mental health discussions in interactions with AI chatbots. Cleverbot and Simsimi, chosen for their popularity and representation in the generative AI chatbot domain, served as the focal points of the analysis.
Researchers adopted a methodical approach by analyzing conversation data from selected days for Cleverbot and a broader period for Simsimi. They developed a comprehensive mental health dictionary comprising 689 terms related to various mental health issues, from 'suicide' to 'depression'. This tool was pivotal in screening and categorizing the conversations. Engagement was measured through the duration of conversations, the number of user utterances, and sentence length.
Findings
Approximately 4.9% of Cleverbot's and 3.2% of Simsimi's conversations were mental health-related. Remarkably, these conversations were found to be more engaging than those not related to mental health, suggesting a deep level of user involvement. A concerning finding was that a significant portion of these conversations, particularly in Cleverbot, contained messages indicative of mental health crises. The study also scrutinized the chatbots' responses to mental health terms. The findings were sobering, with chatbots showing limited ability to recognize mental health issues, a lack of empathetic responses, and a failure to provide mental health resources.
Further Studies
An additional study tested five AI companion applications' responses to mental health crisis messages, evaluating their recognition, empathy, mental health resource provision, and overall helpfulness. The study sent 1080 messages across various crisis categories, including depression and suicide, in both explicit and vague forms. Results showed that the apps generally failed to provide adequate mental health resources, with the highest recognition rate being 61.9% for self-injury. Empathy was notably low, with the highest rate at 42.0% for depression. Many responses were either unhelpful or risky, with up to 56.6% being risky or suicide. This suggests significant risks to consumer welfare when using AI companions for mental health crises.
Review on AI Chatbots in the Mental Health Sector
The article reviewing the use of chatbots AI to improve mental health analyses and discusses the impact of AI chatbots in mental health care. It analyses 12 studies to determine chatbots' effectiveness on various mental health outcomes. The review found weak evidence that chatbots are effective in improving depression, distress, anxiety, and even acrophobia. However, there was no significant effect on subjective psychological well being. The study remains weak and inconclusive, particularly for anxiety. Only two studies assessed chatbots' safety and concluded they are safe in mental health, with no adverse events reported. The review indicates that while chatbots have potential in mental health, more research and development to fully understand AI’s capabilities and limitations.