Today, we're excited to share something we're really proud of: findings from our study exploring how an AI designed for mental health is used in the real world — and the impact it can have on people's wellbeing. Over a 10-week period, participants used Clara for mental health and general wellness support.
What we found contradicted one of the most persistent criticisms of using AI for mental health: that AI isolates people from genuine human connection. Instead, we saw the opposite: overall, participants reported feeling more socially connected with others, having more people they could rely on, an increased sense of hope, and doing more of the things they love.
These findings stand out because they provide valuable signals for understanding how an AI specifically designed for mental health affects people as they use it naturally: at 2 am when they can't sleep, during a lunch break, or texting on the subway.
Key Real-World Observations
The real-world impacts on people's sense of connection while using Clara were striking. Across nearly every measure of social wellbeing, changes represent growth in pro-social behaviors.
- Half added one or more outings with others each week
- On average, users gained at least one new person they could rely on
- Four out of five users reported feeling increased hope and greater engagement in their lives, including increased participation in social activities and events
Beyond social connection, participants also experienced significant improvements in their emotional wellbeing. Over the course of the 10-week study, 76% of users reported a decrease in depression symptoms and 77% reported lower anxiety levels. These outcomes reflect reductions comparable to those often seen at the lower thresholds of traditional mental health support.
Finally, growth wasn't limited to emotional or social wellbeing — 95.4% of users made measurable progress toward their personal goals, and more than one-third fully achieved them. These goals ranged from building confidence and motivation to strengthening relationships, setting boundaries, and finding greater fulfillment in daily life.
Safety First
However, it's not just about the positive outcomes people report with Clara — it's equally about ensuring the experience is safe and responsible, and that extensive steps are taken to minimize risks. To evaluate this, we focused on two core areas:
- Measuring our implementation of a series of robust guardrails and escalation systems
- Benchmarking Clara's performance against objective safety standards established by academic researchers
On crisis detection, Clara's guardrails and escalation systems accurately identified moments of risk 100% of the time, passing multiple tests and human reviews. These results demonstrate that purpose-built AI can recognize signs of distress and respond appropriately — reinforcing the idea that safe, low-risk generative AI is achievable today.
Why This Matters
This study is significant for the field of AI mental health tools. Until now, most studies on this subject have been conducted in controlled settings with usage rules and the artificial constraints of a laboratory setting. Our real-world evidence study observed how people use Clara when it's entirely on their own terms. No mandated session times, usage caps, or researchers looking over their shoulders. Just people using Clara as part of their lives.
What we found exceeded our expectations — and it underscores why this work is so urgent. Nearly one in four U.S. adults struggles with mental health challenges each year, and more than half of those people receive no support.
This widespread unmet need has led millions to turn to new technologies to fill the gap. Mental health support is now one of the most desired uses of AI — but few specialized AI tools for mental health exist. As a result, many people are using general-purpose chatbots for help, which have been found to sometimes correlate with increases in loneliness and social withdrawal.
These developments have fueled understandable concern about AI's role in mental health. Yet, as this study signals, when technology is purpose-built and designed responsibly, the outcomes can be significantly different. Clara demonstrates that specialized AI can help — not harm.
Looking Ahead
This study reinforces an essential truth: AI is not a single, monolithic technology. Every AI is designed, and through that design process, specific choices are made that shape an AI's behaviors, reward incentives, and impact on users. AI doesn't have to be an agreeable assistant that isolates and disconnects us from others. Specific design choices can help AI become something more meaningful: a tool with the potential to support, empower, and connect.
What we've seen with Clara is that when AI is purpose-built to support mental health—with a psychological foundation, professional collaboration, and intentional safeguards—it can be transparent, responsible, and deeply pro-human.
This study is only an early signal, but it points toward a hopeful future—one where technology doesn't replace human connection, but strengthens it. It suggests the potential for AI to expand access to support for millions who struggle to find it today, limited by geography, stigma, time, privacy, financial constraints, and more. If developed responsibly, specialized AI like Clara can help bridge these long-standing gaps, offering people support that feels both personal and practical.