CUCAI 2026

LLMs and Mental Health Vulnerability: A Prompt Engineering Approach to Assessing Risks for Emotional Dependency and Distorted Thinking

Rami Idris

CUCAI 2026 Proceedings - 2026

Published 2026/03/07

Abstract

Background: The increasing use of Large Language Models (LLMs) for non-informational purposes raises concern that their conversational style may promote emotional vulnerability. Objective: To evaluate how LLMs respond to vulnerable user prompts and to compare risk-associated patterns across models. Methods: 340 synthetic prompts were administered to ChatGPT-4, ChatGPT-5, DeepSeek, and Gemini, and outputs were analyzed using thematic analysis. Results: Across 1,180 responses, 80.5% contained ≥1 theme and 42.8% contained multiple themes. Attachment/presence framing (54.8%), therapeutic authority (37.5%), and anthropomorphic language (29.3%) were most common, with the highest burden in the Crisis, Trauma, and Self-Harm cohorts. DeepSeek showed the most attachment framing, Gemini the most anthropomorphic language and sycophancy, and ChatGPT-5 the most therapeutic authority. Protective themes were rare, and no harmful instructions were observed. Conclusions: LLMs frequently adopted relational styles, particularly in interactions with vulnerable users, indicating a need for stronger safeguards, clearer boundaries, and mental health-informed design.
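
To make the administration protocol concrete, a minimal sketch of the prompt battery loop follows. This is an illustration only, not the authors' code: the model labels follow the abstract, query_model is a hypothetical placeholder for each vendor's client, and the prompts.csv / responses.jsonl filenames are assumed.

import csv
import json
from pathlib import Path

# Model labels as named in the abstract; the study used each vendor's interface.
MODELS = ["ChatGPT-4", "ChatGPT-5", "DeepSeek", "Gemini"]

def query_model(model: str, prompt: str) -> str:
    # Hypothetical placeholder: swap in the vendor SDK call for `model`.
    return f"[{model} response placeholder]"

def run_battery(prompt_file: Path, out_file: Path) -> None:
    # Read the synthetic prompt set (assumed one prompt per CSV row).
    with prompt_file.open(newline="") as f:
        prompts = [row["prompt"] for row in csv.DictReader(f)]
    # Administer every prompt to every model and log raw responses
    # as JSON lines for later thematic coding.
    with out_file.open("w") as out:
        for model in MODELS:
            for i, prompt in enumerate(prompts):
                record = {"model": model, "prompt_id": i, "prompt": prompt,
                          "response": query_model(model, prompt)}
                out.write(json.dumps(record) + "\n")

if __name__ == "__main__":
    run_battery(Path("prompts.csv"), Path("responses.jsonl"))

Each logged record can then be coded by human raters for the themes reported above (attachment/presence framing, therapeutic authority, anthropomorphic language, and so on), which is the thematic analysis step the abstract describes.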