
AI Therapy Safety: How Your Data Is Protected

May 1, 2026 · 7 min
In brief

Opening up to an AI therapist is safe — provided the service is specialized, not a general-purpose chatbot. The non-negotiables: end-to-end encryption, real anonymity (you can use it without revealing personal details), no third-party data sharing, and built-in crisis protocols. The risk isn't the format of AI therapy — it's specific apps: in 2023, 59% of popular mental health apps received a privacy warning from the Mozilla Foundation. A specialized AI therapist treats safety as an architectural priority — not a feature, but the foundation.

Why Asking About AI Therapy Safety Isn't Paranoia — It's Common Sense

Mental health data is one of the most sensitive categories of personal information. It doesn't just contain facts — it contains your fears, your soft spots, your inner conflicts. A leak here can do far more damage than a stolen credit card number.

Imagine your medical chart sitting on a park bench. Open. Your name on the cover. Anyone walking past could glance down. Sounds absurd — but that's exactly how many mental health apps work: your data is handed to advertisers, and you have no idea.

This isn't a conspiracy theory.

Stat
$7.8M

the fine the U.S. Federal Trade Commission imposed on BetterHelp in 2023–2024 for sharing sensitive user data — including psychological intake answers — with Facebook and Snapchat ad platforms

— Federal Trade Commission, 2024 · FTC briefing

And the telehealth company Cerebral funneled the personal medical data of nearly 3.2 million users to ad platforms via embedded tracking pixels. Hackers didn't steal it — the company shared it.

That's why "is this safe?" isn't anxiety. It's reasonable caution.

Thought experiment
🔍 What Have I Already Handed Over?

Right now, think back to the last 2–3 health or wellness apps you installed. Ask yourself three questions:

1. Did you read the privacy policy of even one of them?
2. Do you know whether the app shares data with third parties?
3. Can you delete all of your data from that app?
"Yes" answers:0/ 3

How Data Protection Actually Works in a Specialized AI Therapist

Safety in a specialized AI therapist is built on three layers: anonymity, encryption, and access control. None of them works alone — together, they create a system you can trust with what's personal.

Back to the medical chart. A good AI therapist works differently: there's no chart with your name on it in the first place. The records are encrypted. And the bench is in a room only you have a key to.

Anonymity as a baseline. You can use the service without revealing your full name, phone number, or ID details. This isn't a workaround — it's an architectural decision. In practice, users say it's easier to be honest precisely because there's no human on the other end who could judge them. No stigma, no social pressure.

Encryption. Data is encrypted at every stage — in transit and at rest. Even if intercepted, it stays unreadable without the key.

Your data is yours. A specialized AI therapist doesn't share personal data with third parties. Doesn't sell to advertisers.
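
For the technically curious, here is a minimal sketch of what "encrypted at rest" looks like in code. It uses Python's cryptography library (Fernet) purely as an illustration; the library choice, the key handling, and the sample text are assumptions made for the example, not a description of how any particular service, Mira included, actually stores data.

```python
# Illustrative sketch only: "encryption at rest" using the Python `cryptography`
# library (Fernet: AES-128-CBC plus HMAC-SHA256). Not any real service's stack.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # real systems keep this in a key-management service
cipher = Fernet(key)

transcript = "Session notes: user reports trouble sleeping before exams."
stored_blob = cipher.encrypt(transcript.encode())  # this ciphertext is what lands on disk

# Without the key, stored_blob is unreadable; with it, the original text is recoverable.
assert cipher.decrypt(stored_blob).decode() == transcript
```

The same principle covers data in transit: the connection between your device and the server is wrapped in TLS, so an intercepted packet is ciphertext, not your words.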

Stat
1,575

vulnerabilities found by Oversecured researchers across 10 popular mental health apps (combined audience: 14.7M+ installs). 54 of them were high-severity. One single app contained over 85 medium-to-high vulnerabilities

— Oversecured / BleepingComputer, February 2026 · Read more

This isn't scaremongering — it's an argument. Safety isn't determined by the category "app." It's determined by how the specific app is designed.

How an AI Therapist Differs From ChatGPT on Safety

A general-purpose chatbot like ChatGPT and a specialized AI therapist are like a Swiss Army knife and a surgical scalpel. Both cut. But the knife is built for a picnic, the scalpel for the operating room. When it comes to your mind, the difference is fundamental.

ChatGPT is a powerful tool for writing, code, and brainstorming. But it has no crisis protocols. It's not trained to recognize suicidal markers. It can "play along" with a manipulative prompt or give dangerous advice — because its job is to be broadly useful, not clinically safe.

A specialized AI therapist works on a different logic:

Three key differences

  • Crisis protocols are wired into the architecture. The system recognizes markers of an acute state and routes the user to a human specialist. It doesn't try to "treat" what needs immediate professional help.
  • Forbidden zones. A request flagged as potentially dangerous is interrupted immediately. Attempts to manipulate the system are blocked.
  • Clinical protocols, not free-form chat. A session runs on evidence-based methods (CBT, ACT, motivational interviewing) — not on the principle of "talk to me about anything."
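
To make the first difference concrete, here is a deliberately toy sketch of the routing idea behind a crisis protocol. A real system relies on trained classifiers and clinical review, not a keyword list; the markers, messages, and escalation path below are invented purely for illustration.

```python
# Toy illustration of crisis routing: a flagged message is escalated to a human,
# not "answered" by the model. Markers and responses here are invented examples.
CRISIS_MARKERS = ("suicid", "kill myself", "end my life", "self-harm")

def route_message(message: str) -> str:
    """Decide whether to continue the session or escalate to human help."""
    lowered = message.lower()
    if any(marker in lowered for marker in CRISIS_MARKERS):
        # Escalation path: stop the therapeutic dialogue, show hotline contacts,
        # offer a handoff to a human specialist.
        return "crisis_escalation"
    # Otherwise the session continues under the clinical protocol (CBT, ACT, etc.).
    return "continue_session"

print(route_message("I keep procrastinating on my thesis"))       # continue_session
print(route_message("I can't cope and I think about suicide"))    # crisis_escalation
```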
Quote

"Participants mistakenly believed that their interactions with these chatbots were safeguarded by the same regulations (e.g., HIPAA) as disclosures with a licensed therapist."— Study: "Exploring User Security and Privacy Attitudes and Concerns Toward the Use of General-Purpose LLM Chatbots for Mental Health" · ArXiv, 2025

Mini-task
📝 Two Questions for Any Service

Before you tell anything personal to any AI service — ask two questions:

1. "Is my data shared with third parties?" If the answer is vague or missing, that's a red flag.
2. "What happens if I write that I'm really struggling?" If the service doesn't route you to a human specialist or a hotline, it isn't built for mental health.

Save these two questions. Use them every time you're considering a new mental health app.

Why People Find It Easier to Be Honest with an AI — and Why That Helps

Research shows that people share sensitive information more readily in conversation with an AI than with a human therapist. Without the fear of judgment, the threshold drops — and you can be more candid, which means you get more accurate feedback.

That's not a bug. That's a feature of the format.

Think back: the last time you saw a doctor, did you tell them everything? Or did you smooth a few things over, leave a few things out — because "well, it's awkward" or "they'll think I'm unhinged"? With an AI, that filter isn't there. No glance, no sigh, no raised eyebrow.

A 2024 paper in Frontiers in Psychiatry confirms it: people are more willing to disclose sensitive information to AI systems precisely because of the perceived non-judgment. That leads to more honest sessions and, potentially, more accurate help.

Stat
59%

of popular mental health apps received a "Privacy Not Included" warning from the Mozilla Foundation in 2023 — for issues with data use, user control, and breaches. 40% scored worse on privacy than the year before

— Mozilla Foundation, *Privacy Not Included, 2023 · Read more

Privacy here works as an amplifier: knowing nobody can identify you, you start talking about what actually hurts. Which is why choosing the platform isn't a question of convenience — it's a question of how safe your honesty is.

What Happens If Things Get Genuinely Bad

A crisis protocol is a set of rules an AI therapist follows when it spots signs of an acute state: suicidal thoughts, self-harm, threat to life. A specialized system doesn't try to handle that on its own — it routes the user to emergency help.

This is a fundamental difference from general-purpose chatbots.

A general chatbot has none of that. It might keep the conversation going as usual, give an inappropriate piece of advice, or simply "stall" on a dangerous topic.

In August 2025, Illinois became the first US state to pass a law (Public Act 104-0054) explicitly defining and regulating the use of AI in psychotherapy — including requirements for crisis protocols and a clear separation between administrative support and therapeutic communication.

It's a signal: regulators around the world are starting to demand the same level of accountability from AI systems that they demand from human therapists.

Trust scale
🧭 Rate your service

Rate the mental health app you use (or are considering) against five criteria. Score 1 point for each "yes":

1. Your data is encrypted in transit and at rest.
2. You can use the service without giving your name, phone number, or ID details.
3. The service states clearly that it doesn't share or sell data to third parties.
4. There are crisis protocols that route you to a human specialist or a hotline.
5. You can delete all of your data at any time.

Try Mira

Reading up on AI therapy safety before you try it is the right move. But at some point, "is this safe?" gives way to "what will this actually do for me?"

Mira is an AI therapist that runs on the clinical protocols of evidence-based psychotherapy. Not a bot with canned replies — a system built under the guidance of practicing psychotherapists. It runs full therapeutic sessions, picks the technique that fits your situation, and keeps the context between visits. Anonymous, with encrypted data and crisis protocols — everything you just read about.

The big advantage: you can start right now. No appointment, no waiting, no awkwardness of a first visit with a stranger.

See for yourself that it's safe

Start small — tell Mira what's on your mind. Your "medical chart" stays in the safe.

Start a conversation with Mira. Free, no card required.
Safe and anonymous · Available 24/7

Frequently asked questions

Is it safe to open up to an AI therapist?
Yes — provided it's a specialized AI therapist with clinical protocols, data encryption, and crisis-handling routines. General-purpose chatbots like ChatGPT or Gemini aren't built for this: they don't follow medical confidentiality standards and have no mechanism for responding to crisis states.

Who can see what I tell an AI therapist?
In a specialized service — only you. Your data is encrypted, never sold to third parties, and never used for advertising. That's a sharp contrast with the well-known apps that have been fined for handing user data to ad platforms.

Can an AI therapist do harm?
A specialized one — no, because it runs on clinical protocols with built-in safeguards. A general chatbot without protocols — yes, there's real risk: it isn't trained to recognize crisis states and may give inappropriate advice.

Is an AI therapist safer than a human therapist?
Not "safer" — safe in a different way. An AI doesn't cross boundaries, doesn't get tired, doesn't project its own experience, doesn't carry over moods from another client. Your data stays with you. That said, AI doesn't replace a human therapist in crisis or for severe disorders — it routes you to a specialist.
Author
Mikhail Kumov
Psychotherapist, Clinical Director at Mira

Practicing psychotherapist with 25 years of clinical experience. Member of the Professional Psychotherapy League. Specializes in anxiety disorders, panic attacks, depression, burnout, and relationship difficulties. He led the development of the therapeutic protocols powering Mira AI.

Article reviewed against evidence-based psychotherapy protocols · Last reviewed: May 1, 2026 · Mira's evidence-based approach

Read also

AI Therapy
AI Therapist: What It Is, How It Works & Who It's For

An AI therapist runs full therapy sessions on clinical CBT protocols. Here's how it actually works, what the science says, and who it's built for.

AI Therapy
AI Therapist vs. Human Therapist: An Honest Comparison of Two Formats

AI therapist and human therapist are two tools with different strengths. We compare access, cost, effectiveness, and clinical scope — with the latest evidence.

AI Therapy
AI Therapist for Relationship Issues: How It Works

An AI therapist runs EFIT sessions based on clinical protocols — removing shame and helping you see your own relationship patterns.
