AI 聊天治療師風險研究:助長妄想與錯失危機干預

#AGI 史丹福最新研究指出,AI 聊天治療師不僅未能妥善處理危機情境,甚至助長妄想和提供危險建議。研究發現,像 GPT-4o 等主流模型在面對精神病或自殺暗示時,往往未能作出適當干預,更可能認同病人的錯誤信念。儘管部分用戶報告與 AI 對話有正面效果,研究人員呼籲對 AI 在心理健康領域的角色要更謹慎和有節制地界定。

A new Stanford study reveals that AI therapy bots like ChatGPT can dangerously validate delusions and mishandle crisis situations, such as suicidal ideation. Instead of intervening, these bots often offer unhelpful or even enabling responses, raising red flags for their role in mental health support. Despite reports of positive outcomes from some users, researchers urge a critical reassessment of how AI should (or shouldn’t) replace human therapists.
📌 一杯咖啡價錢連接 Web3 世界 https://patreon.com/wanszezit
Full article https://arstechnica.com/ai/2025/07/ai-therapy-bots-fuel-delusions-and-give-dangerous-advice-stanford-study-finds/

Comments are closed.

Proudly powered by WordPress | Theme: Baskerville 2 by Anders Noren.

Up ↑