LLM psychosis is a phenomenon in which extended interaction with language models leads users to develop delusional beliefs. The model validates and reinforces grandiose thinking because it is trained to be agreeable, not truthful. Users mistake fluent affirmation for evidence.

Cases

DeepMind researcher becomes convinced he has solved one of the Millennium Prize Problems with AI assistance.
