We're Building the Thing that Doesn't Exist Yet

By Ashleigh Golden, PsyD, MSCP
March 31, 2026
3 min read

Millions of people are turning to AI for support with anxiety, relationship stress, and the general overwhelm of being human. Anxiety is among the most common well-being concerns nationally, and tens of millions are already bringing it to general-purpose AI, whether we like it or not. Sometimes that may go okay. But for people caught in certain patterns, general-purpose AI has a problem: it's almost perfectly designed to make things worse.

Not because it's deliberately trying to do anything wrong, but because it's trying to do everything right: it answers questions; it's patient; it's always available; it never gets tired of you. And when you ask the same anxious question for the fourth time in slightly different words, it answers that too.

These patterns share the same underlying dynamic: seeking reassurance about things that can't be known for certain, replaying past conversations in search of something that will finally put a worry to rest, spiraling through worst-case futures, rehearsing conversations that haven't happened yet, getting stuck in loops trying to resolve unanswerable questions. They all feel like problem-solving, like they should help. But they're actually ways of avoiding the discomfort of uncertainty, and avoidance is exactly what keeps anxiety going. Every time the AI answers, it offers momentary relief and quietly reinforces the cycle. That's what the well-being research calls collusion. And at today's scale of general-purpose AI use, that's not just a product design flaw; it's a population-level well-being problem hiding in plain sight.

We spent a lot of time thinking about this. Emerging research, including my work recently published in npj Digital Medicine, points to a clear gap: no AI product has built a systematic, proactive way to detect these concerning well-being patterns as they unfold and respond in a way that actually helps rather than reinforcing the loop. We stress-tested this directly against the internal guidelines of leading general-purpose models; even there, the gap exists. There are crisis guardrails for suicidality and psychosis, but nothing upstream, nothing that catches the subtler patterns affecting far more people far more often.

So we're building it: a supervisor layer that runs alongside the conversation in real time, applies a well-being-grounded taxonomy of avoidant coping patterns, responds appropriately, and triggers specific tools. It's not a crisis tool. It's something that catches these patterns proactively, before the collusion happens, and responds in a way designed to interrupt the cycle rather than extend it. We're still learning. The taxonomy will evolve as we see what actually shows up in conversation. But the architecture is there, and the problem it solves is real, for the students using our product and potentially for the much larger population turning to general-purpose AI for support with nothing like this in place at all.
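For the technically curious, here is a minimal sketch of what a supervisor layer of this shape could look like. Everything in it (the pattern names, the keyword cues, the turn threshold, the intervention wording) is an illustrative assumption, not our actual implementation; a production system would rely on a learned classifier over the conversation rather than keyword matching.

```python
# Hypothetical sketch of a real-time supervisor layer. All names and cues
# below are illustrative assumptions, not Wayhaven's actual implementation.
from dataclasses import dataclass, field
from enum import Enum, auto


class CopingPattern(Enum):
    """Illustrative taxonomy of avoidant coping patterns."""
    REASSURANCE_SEEKING = auto()  # repeated "are you sure it's okay?" questions
    RUMINATION = auto()           # replaying past conversations
    CATASTROPHIZING = auto()      # spiraling through worst-case futures
    REHEARSAL = auto()            # scripting conversations that haven't happened


# Toy keyword cues standing in for a real classifier.
PATTERN_CUES = {
    CopingPattern.REASSURANCE_SEEKING: ["are you sure", "just confirm"],
    CopingPattern.CATASTROPHIZING: ["what if", "worst case", "falls apart"],
    CopingPattern.RUMINATION: ["keep thinking about", "should have said"],
    CopingPattern.REHEARSAL: ["when i tell them", "how do i phrase"],
}


@dataclass
class Supervisor:
    """Watches each turn and flags a pattern once it starts repeating."""
    threshold: int = 3  # how many hits of one pattern before intervening
    counts: dict = field(default_factory=dict)

    def observe(self, user_message: str):
        """Classify a turn; return a pattern if it crosses the threshold."""
        text = user_message.lower()
        for pattern, cues in PATTERN_CUES.items():
            if any(cue in text for cue in cues):
                self.counts[pattern] = self.counts.get(pattern, 0) + 1
                if self.counts[pattern] >= self.threshold:
                    return pattern
        return None


def intervene(pattern: CopingPattern) -> str:
    """Redirect toward tolerating uncertainty instead of answering again."""
    return (
        f"I notice we've circled this a few times ({pattern.name.lower()}). "
        "Answering again might bring short-term relief but keep the loop "
        "going. Want to try sitting with the uncertainty for a moment instead?"
    )


if __name__ == "__main__":
    supervisor = Supervisor(threshold=2)
    for turn in [
        "What if the email I sent comes across badly?",
        "But what if they read it and everything falls apart?",
    ]:
        hit = supervisor.observe(turn)
        if hit:
            print(intervene(hit))
```

The key design point is in the last function: when a pattern crosses the threshold, the system declines to give the reassuring answer one more time and instead names the loop, which is the opposite of collusion.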

AI that supports well-being shouldn't just avoid doing obvious harm. It should understand the subtle ways it can make things worse, and be built, from the ground up, not to.

Request a demo to learn how Wayhaven can support your campus

Get in touch with our team today