November 12, 2025 - ai, mental health, prompt engineering, cognitive reframing
Half of Americans with mental illness have not gotten help.1 Not from the therapist down the street or the therapist online, not from the therapist behind the paywall, not from the therapist after hours.
For the first time, millions of these people are talking: about their feelings, about their experiences. They are talking with a machine that turns out to be surprisingly effective at helping them work through their thoughts. These people were silent before. Now they have something that responds.
There is loud backlash against AI as a therapist. Yes, serious harms are possible. But the backlash misses the point: these conversations are happening in a void that humans never filled.2
The question is not whether bots are good therapists. They’re not therapists at all. The question is: given that people need more help than humans can make available, what are the best and worst ways to use a bot?
This is not a guide to which aspects a bot is good or bad at, or why. Nor is it a guide to being self-aware enough to use one to the fullest. (Please let me know if such a guide would be useful for you.) What follows is a set of prompt patterns. Start with this general instruction:
I need you to act as a cognitive mirror, not a supportive friend. Your job is to reflect contradictions, surface patterns I’m not seeing, and ask for concrete specifics when I make vague claims about myself. Don’t validate negative self-statements—interrogate them. Don’t solve my problems—help me see them more clearly.
This system-level instruction primes the bot for analytical reflection rather than empathetic validation, which is the core mechanism that makes these patterns effective. It shifts the bot from “supportive listener” mode to “cognitive interviewer” mode.
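If you talk to the bot through an API rather than a chat window, “system-level” maps onto the system message role, which shapes every later turn. A minimal sketch, assuming the openai Python package (v1-style client) and an OpenAI-compatible endpoint; the model name is a placeholder, not something this post prescribes:

```python
# Minimal sketch: put the cognitive-mirror framing in the system message so it
# applies to every turn, and keep your own reflections as user messages.
# Assumes the `openai` package and an OpenAI-compatible endpoint; the model
# name below is a placeholder.
from openai import OpenAI

client = OpenAI()

COGNITIVE_MIRROR = (
    "Act as a cognitive mirror, not a supportive friend. Reflect contradictions, "
    "surface patterns I'm not seeing, and ask for concrete specifics when I make "
    "vague claims about myself. Don't validate negative self-statements; "
    "interrogate them. Don't solve my problems; help me see them more clearly."
)

messages = [
    {"role": "system", "content": COGNITIVE_MIRROR},
    {"role": "user", "content": "I'm not good at finishing projects. I never follow through."},
]

response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(response.choices[0].message.content)
```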
Now proceed:
The point of talking to the bot is new perspectives, not new answers per se.
For intermediate-level work on the same issue, you can drop the general instruction and test which of the specific instructions below are helpful.
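One way to test which specific instruction helps is to send the same statement with each of the four add-ons described below and read the replies side by side. A sketch under the same assumptions as above (openai package, OpenAI-compatible endpoint, placeholder model name and example statement):

```python
# Sketch of comparing the specific instructions: same statement, different add-on.
# Assumes the `openai` package and an OpenAI-compatible endpoint; the model name
# and the example statement are placeholders.
from openai import OpenAI

client = OpenAI()

STATEMENT = "I've tried everything to fix my sleep and nothing works."

ADD_ONS = {
    "inversion": (
        "Assume my beliefs about this situation are false. What would be different "
        "about me right now? What evidence would exist if this weren't true?"
    ),
    "contradictions": (
        "Find where I've contradicted myself in what I've told you. Don't reconcile it. "
        "Just list: I said [A], but I also said [B]. Ask which one is actually true."
    ),
    "bad_advice": (
        "Give me detailed bad advice for this. Make it sound reasonable. "
        "Then tell me why it would fail. Then tell me the opposite."
    ),
    "specifics": (
        "Don't validate that. Ask me for specifics, such as when and how. "
        "Who am I comparing myself to? Do I have counterexamples?"
    ),
}

for name, add_on in ADD_ONS.items():
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": f"{STATEMENT}\n\n{add_on}"}],
    )
    print(f"--- {name} ---")
    print(reply.choices[0].message.content)
```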
If you have said anything that looks like “I’m not good at X,” “I can’t do Y,” or “I’m the kind of person who Z,” see what the opposite looks like. The point is to notice how it feels and what might be true, not to assume the opposite is true.
Add:
Assume my beliefs about this situation are false. What would be different
about me right now? What evidence would exist if this weren't true?
Bot response: Generates reality where claim is false. Concrete specifics.
You verify: Can you find counterexamples in your actual history? Where does the claim partially fail?
Effective for: Perfectionism, incompetence narratives, unworthiness beliefs, creative block, “not a writer/artist/leader” statements
If you are tired of “hearing” your own thoughts, or feel like you keep saying the same thing, perhaps you’re oversimplifying. Add the following to catch inconsistencies in your perspective.
Add:
As relevant, find where I've contradicted myself in what I've told you.
Don't reconcile it. Just list: I said [A], but I also said [B].
Ask which one is actually true.
Bot response: Names logical inconsistencies in your narrative.
You verify: Read the list. Recognize what you’re actually uncertain about vs. stating as fact.
Effective for: Identity confusion, conflicting values, unstated priorities, hidden contradictions
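This pattern only works if the bot can see everything you have said so far. In a chat window that happens automatically; over an API it means keeping the running message list and appending each reply. A multi-turn sketch, same assumptions as above (openai package, placeholder model name, made-up example turns):

```python
# Sketch of a multi-turn loop: keep one message list and append every turn,
# so the contradiction pattern has your earlier statements to work with.
# Assumes the `openai` package and an OpenAI-compatible endpoint; the model
# name and the example statements are placeholders.
from openai import OpenAI

client = OpenAI()

messages = [
    {"role": "system", "content": "Act as a cognitive mirror, not a supportive friend."},
]

def say(text: str) -> str:
    """Add one user turn, get the bot's reply, and keep both in the history."""
    messages.append({"role": "user", "content": text})
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    content = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": content})
    return content

say("I want a quieter life. Less ambition, more rest.")
say("I'm furious that my coworker got the promotion instead of me.")
print(say(
    "Find where I've contradicted myself in what I've told you. "
    "Don't reconcile it. Just list: I said [A], but I also said [B]. "
    "Ask which one is actually true."
))
```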
If what you typed sounds like “I’ve tried everything,” “Nothing works,” or “I’m trapped,” consider the opposite.
Add:
As relevant, give me detailed bad advice for this. Make it sound reasonable.
Then tell me why it would fail. Then tell me the opposite.
Bot response: Generates harmful path → consequences → correct direction via inversion.
You verify: After the exercise, can you name one action you could take?
Effective for: Risk aversion, decision paralysis, learned helplessness, analysis paralysis
If you have made any negative statement about yourself, such as “I’m not creative,” “I’m lazy,” “I’m anxious,” or “Nobody likes me,” use this pattern to uncover what is actually true.
Add:
As relevant, don't validate that. Ask me for specifics, such as when and how.
Who am I comparing myself to? Do I have counterexamples?
Bot response: Refuses abstraction. Demands concrete evidence.
You verify: Can you actually provide evidence? Or does it fall apart under specificity?
Effective for: Negative self-talk, totalizing claims, overgeneralized failures, identity traps
These patterns work because they interrupt automatic thought processes and force externalization. The bot’s value isn’t wisdom—it’s structured reflection. Use it when you need distance from your own narrative. Escalate to humans when you need connection, accountability, or professional intervention.
1. Walker, E. R., Cummings, J. R., Hockenberry, J. M., & Druss, B. G. (2015). Insurance status, use of mental health services, and unmet need for mental health care in the United States. Psychiatric Services, 66(6), 578–584. Retrieved from https://doi.org/10.1176/appi.ps.201400248
2. Nadarzynski, T., et al. (2023). An Overview of Chatbot-Based Mobile Mental Health Apps. Retrieved from https://mhealth.jmir.org/2023/1/e44838. Analysis of 6,245 user reviews found 44% used chatbots exclusively and expressed intention to replace professional support due to convenience.
3. Heinz, M. V., et al. (2025). Randomized Trial of a Generative AI Chatbot for Mental Health Treatment. NEJM AI, 2(4), AIoa2400802. Retrieved from https://ai.nejm.org/doi/full/10.1056/AIoa2400802. First RCT of a generative AI therapy chatbot (N=210) showed 51% depression reduction, 31% anxiety reduction, and 19% eating disorder symptom reduction over 4 weeks.
4. Inkster, B., et al. (2023). To chat or bot to chat: Ethical issues with using chatbots in mental health. Digital Health, 9, 20552076231183542. Retrieved from https://pmc.ncbi.nlm.nih.gov/articles/PMC10291862/. Chatbots lack clinical empathy and cannot pick up the subtle emotional nuances and non-verbal cues essential for detecting abuse patterns.
5. Mathew, S., et al. (2025). The Efficacy of Conversational AI in Rectifying Theory-of-Mind and Autonomy Biases. JMIR Mental Health. Retrieved from https://mental.jmir.org/2025/1/e64396. Identifies an overdependence risk where users rely on the chatbot for emotional support rather than independently facing challenges; notes excessive use may worsen certain conditions.
6. Khawaja, Z., & Bélisle-Pipon, J.-C. (2023). Your robot therapist is not your therapist: understanding the role of AI-powered mental health chatbots. Frontiers in Digital Health, 5, 1278186. Retrieved from https://www.frontiersin.org/journals/digital-health/articles/10.3389/fdgth.2023.1278186/full. Describes therapeutic misconception pathways: inaccurate marketing, digital therapeutic alliance formation, inadequate design leading to biases, and limiting autonomy.