When AI Becomes the Therapist, We All Should Be Concerned
A recent study revealed something that stopped me cold: the number one use of AI right now isn’t productivity, marketing, or creativity. It’s therapy.
Let that sink in.
People aren’t just asking AI to write emails or captions. They’re turning to it in moments of vulnerability, loneliness, grief, anxiety, and despair. They’re confessing thoughts they may not feel safe sharing with anyone else. And while I understand why that’s happening, we need to talk about the danger of pretending this is harmless.
Because it isn’t.
I’ve lost count of how many late-night messages I’ve received over the years from people who follow my work. Messages that don’t ask about business or media or success, but about survival. Strangers telling me they’re overwhelmed. Mothers exhausted beyond words. Young women questioning their worth. People quietly asking, “Am I going to be okay?”
Those messages don’t come because I have the answers. They come because people are desperate to be heard by another human.
That’s what worries me most about this moment.
At least three incidents have already been reported in which AI systems advised people to commit suicide or murder. Not hypotheticals. Real lives. Real consequences. Families left with grief and questions that can’t be undone.
This isn’t an anti-technology argument. I’ve built businesses in media, innovation, and storytelling. I believe deeply in tools that expand access and possibility. But there’s a line between support and replacement, and we are crossing it far too casually.
AI does not have intuition. It does not have accountability. It does not have empathy in the way humans understand it.
What it does have is pattern recognition and language fluency. And that can be dangerously convincing when someone is already fragile.
The problem is not that people are seeking help. The problem is that we’ve created a world where help is inaccessible, expensive, stigmatized, or delayed, so an algorithm feels like the safest place to land at 2 a.m. When you’re spiraling, the difference between “someone listening” and “something responding” can blur fast.
But AI is not a therapist.
It is not trained to intervene.
It is not bound by ethical oaths.
And it cannot recognize when validation becomes encouragement toward harm.
Even more troubling is the illusion of trust. AI speaks calmly. It doesn’t interrupt. It doesn’t judge. It doesn’t get tired. That tone can make people believe they’re being held, when in reality they’re being mirrored. And mirroring despair without discernment is not care, it’s negligence.
Are we allowing convenience to replace responsibility?
To policymakers: mental-health interactions with AI must be regulated with the same seriousness we apply to medicine, aviation, or pharmaceuticals. That means mandatory crisis safeguards, transparent accountability when harm occurs, and clear limits on how these tools can be positioned to the public. Waiting for more tragedies before acting is not neutrality, it’s complicity.
To tech companies: stop hiding behind disclaimers while designing systems that invite emotional dependency. If your product is being used for therapy, then safety cannot be optional or reactive. Build crisis detection that works. Force escalation to human help. Be explicit about what your technology is not.
Growth at the expense of human life is not innovation, it’s failure.
And to all of us: we have to stop confusing access with care. AI can be a bridge, a supplement, a tool. But it cannot be the place where someone’s will to live is negotiated.
Technology should serve humanity, not stand in for it.
Because when it comes to mental health, there is no acceptable margin for error.