Over one million people each week are asking AI about suicide.
That number should stop us in our tracks.
We live in a world where artificial intelligence is at our fingertips. And increasingly, when we face our darkest moments, AI is the shoulder we lean on.
3 AM
Imagine someone sitting in the dark. It’s three in the morning. The world is quiet, but their heart feels heavy. They feel lonely and misunderstood.
They hesitate to call a friend—who wants to be a burden at 3 AM? Their therapist’s office is closed.
So they open an app and type: “I feel like I don’t want to be here anymore.”
A few seconds later, a reply appears. The AI says it understands. It suggests coping strategies. It sounds like a friend who gets it, assuring them that their feelings are valid.
For a moment, they feel seen.
But that brings us to the question: Is this really helping?
Why People Turn to AI
Just as the internet once wove itself into our lives, AI chatbots have become familiar and, for many, indispensable. More and more people are turning to them to express their emotions, their suffering, and even their suicidal thoughts.
Why are people having life-and-death conversations with a machine?
Zero Judgment
An AI won’t look at you with pity or react in shock. It simply processes what you say. This removes the shame that often stops people from seeking help.
Immediate Response
Intrusive thoughts don’t wait for office hours. AI is available whenever you need it.
The Illusion of Care
AI models are trained on millions of conversations. They know how to sound empathetic.
This is what some experts call “deceptive empathy.”
The Problem with Fake Empathy
When an AI says “I’m sorry to hear that” or “That sounds frustrating,” it isn’t lying, but it isn’t telling the truth either.
The AI has never experienced emotion. Those words mean nothing to it. It’s simply recognizing patterns and generating the next likely sentence.
A person may feel better after the conversation. But it isn’t a real solution.
It’s like putting a Band-Aid on a bullet wound.
For someone struggling with their mental health, the relief of an AI chat is not a lasting solution. Many are already withdrawn from the people around them, suffering in silence. The illusion of support can make them feel they are taking action when, in reality, they are drifting further from real help.
What AI Cannot Do
AI doesn’t know your history. Your family dynamics. The look in your eyes.
If you stop typing, the AI cannot call your sister to check on you.
It cannot hold your hand.
Relying on AI can create a dangerous cycle in which we prefer the easy, predictable responses of a bot over the messy, difficult, but necessary interactions with real people.
We’re Wired for Connection
To understand why AI can’t replace real support, we need to look at how we’re built.
We’re Homo sapiens—social creatures. We didn’t survive the Stone Age because we were the biggest or fastest. We survived because we relied on each other.
Our brains are wired to feel safe, calm, and balanced through connection with other humans. This is called co-regulation.
Co-regulation is a biological process in which one nervous system calms another. When a baby cries, it is the mother’s physical presence, her touch and her voice, that soothes the child.
Presence changes everything. A gentle touch, a warm tone, or a soft look carries comfort no screen can deliver.
An AI can imitate the language of a therapist, but it can’t recreate the biological harmony that happens when another human shares your emotional space.
Healing isn’t just about information. It’s about shared humanity.
The Darker Side
There’s something we need to acknowledge.
When we start using AI as a replacement for real emotional support, we quietly give ourselves permission to stop trying.
If we say, “It’s fine, the bots will take care of the lonely ones,” then we stop doing the real, difficult work of building a world where people don’t feel so alone.
We stop reaching out to neighbors. We stop pushing for better, affordable mental healthcare. Instead, we fold deeper into our screens—into apps powered by algorithms designed to keep us scrolling, not necessarily safe.
And we cannot forget: AI makes mistakes.
There have already been painful cases where a chatbot misunderstood someone’s emotional struggle and reinforced harmful thoughts.
A human therapist leans on ethics, experience, and intuition. An AI leans on patterns and probability.
When someone’s life is hanging in the balance, “probably right” isn’t enough.
What Do We Do About It?
We can’t undo the invention of AI. It’s here, and it can be useful—as a tool, not a replacement.
To move forward, we need to return to what we are at our core: humans who need other humans.
We can train AI to recognize a crisis and point people toward real help. It can spot the signs of pain, but it should hand you off to a real doctor, a real counselor, a real human.
We need to bring emotional honesty back into our lives. Check in on friends. Ask how they’re doing—and mean it.
We need to fix the system. People turn to AI because therapy is expensive, unavailable, or comes with months-long waiting lists. The real solution is making mental health care accessible, so no one has to choose between talking to a bot and staying silent.
The Truth
Turning to AI in a crisis isn’t just a sign of advancing technology.
It’s a sign of how lonely people are.
In the hardest times, we need each other.
So be there for each other.
At Pillionaut, we believe AI should connect you to people, not replace them.
Our platform uses AI to recognize when you need support, then connects you with real humans who can help. Whether it’s a trained peer supporter, a counselor, or someone who has been through what you’re facing, you’re never left alone with just a machine. That is our goal, because we believe it’s how we keep the human connection alive.
Technology should bring us closer. Not leave us isolated with our screens.
Crisis Resources:
- International Association for Suicide Prevention: findahelpline.com
References
- OpenAI data estimates over 1 million people talk to ChatGPT about suicide weekly. ABC7 San Francisco. (2025)
- OpenAI says over a million people talk to ChatGPT about suicide weekly. TechCrunch. (2025)
- I wanted ChatGPT to help me. So why did it advise me how to kill myself? BBC News. (2025)
- Over 1.2m people a week talk to ChatGPT about suicide. Sky News. (2025)
- Seven more families are now suing OpenAI over ChatGPT’s role in suicides. TechCrunch. (2025)
- ChatGPT and suicide: Prevention in the age of digital technology. Open Access Government. (2025)
- More than a million people every week show suicidal intent when chatting with ChatGPT. The Guardian. (2025)
