How Should We Think About Thought Disorders
Theories of Delusion Formation
A delusion is a “fixed false belief based on an inaccurate interpretation of an external reality despite evidence to the contrary.” This definition is mediocre, but it’s also hard to improve on. One problem is that delusions appear to be on a continuous spectrum with regular beliefs. They also come in numerous flavors which might represent different mental processes. For instance, delusional persecution, guilt, and jealousy seem like extreme versions of normal experience, but the idea that your thoughts are being controlled, inserted into your mind, or broadcast aloud is harder to relate to.

Unsurprisingly, there are lots of theories to explain how delusions form. I planned to review some, only to find that Denecke et al just wrote a review and synthesis of 53 theoretical models of delusions. The authors group extant theories into five approaches, then propose a combined model.
I’m not sure a single theory will hold up here. Do all delusions have the same structure, behind varied facades? Is it meaningful that Alice thinks Jeff Bezos is spying on her, Bob thinks he is Jeff Bezos, and Carol thinks Jeff Bezos is in love with her? I think so, because I think our brains have somewhat discrete systems built for somewhat discrete purposes, and it’s sad to just attribute psychosis to dopamine dysregulation then chalk up the rest to randomness. But let’s review.
Cognitive Models
Alice sees herself as vulnerable in a dangerous world. She dwells on potential threats, jumps to conclusions about perceived danger, and disregards contrary evidence. In other words, she has an “imbalance of reasoning processes favoring fast and error-prone processing over slow and deliberate reasoning.” Mostly this works out, but at some point she goes through an especially rough patch. Under duress, she has an “anomalous experience,” like misperceiving a shadow or noticing a coincidence. She interprets the incident as both important and threatening, thus demanding explanation. Explanations provide “insight relief” and thus reinforce faulty reasoning.

My main concern with cognitive models is the emphasis on pre-existing biases. Psychosis in schizophrenia and delusional disorder is a major departure from the person’s normal self, so the focus on long-standing thought patterns seems misplaced. These models seem to apply better to garden-variety conspiratorial thinking, as well as paranoid and schizotypal personality styles. Also, what about all the other delusional themes? Did someone with an erotomanic delusion have a pre-existing tendency to fantasize about potential partners? Are people who somatize a lot at higher risk of somatic delusions? These risk factors are so common as to be meaningless.
Associative Learning
Associative learning is the linking of previously unconnected elements in our minds, like when you press a lever in your cage and then get a tasty chow pellet. This goes wrong when irrelevant stimuli create idiosyncratic associations, like when you decide it’s your neighbor’s squeaks that make your chow appear.
In this model, delusion formation starts with impaired attention: random events are processed instead of filtered out, creating false associations. Bob sees a man in a green shirt on the way to the store, where green apples are on sale, and later The Green Mile is on TV. What are the chances? Once these events acquire salience, they need explanation, and initially loose associations get reinforced into fixed beliefs.
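The standard formalization of this kind of associative learning is the Rescorla-Wagner rule, where each cue's salience gates how much of the shared prediction error it absorbs. Here's a toy sketch of the "impaired filtering" idea — all parameters and the salience knob are my own illustration, not anything from the review:

```python
# Toy Rescorla-Wagner simulation: dV = alpha * beta * (lambda - V_total),
# where alpha is cue salience, beta is a learning rate, lambda is the outcome.
# Illustrative numbers only.

def train(alpha_irrelevant, trials=50, beta=0.3, lam=1.0):
    """Pair a relevant cue and an irrelevant cue with the same outcome."""
    V = {"relevant": 0.0, "irrelevant": 0.0}
    alphas = {"relevant": 1.0, "irrelevant": alpha_irrelevant}
    for _ in range(trials):
        error = lam - (V["relevant"] + V["irrelevant"])  # shared prediction error
        for cue in V:
            # Salience (alpha) gates how much each cue learns from the error
            V[cue] += alphas[cue] * beta * error
    return V

normal = train(alpha_irrelevant=0.05)   # irrelevant cue mostly filtered out
impaired = train(alpha_irrelevant=1.0)  # aberrant salience: noise treated as signal
```

With intact filtering, the irrelevant cue (the green shirt) ends up with almost no associative strength; when it's assigned the same salience as a real predictor, it soaks up half the credit for whatever happens next.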

This model casts delusion more as malfunction than variant of normal, which I think better captures the odd content of many delusional beliefs. It also connects delusion formation to dopamine circuits and thus antipsychotic action, which is nice. Finally, I like how the mechanism is neutral regarding themes. It suggests to me that ideas of reference, in which random events are attributed strong personal meaning, are a kind of ur-delusion from which other themes descend.
Social Perspective
Humans live in groups, which is pleasant, but also poses a constant risk of exploitation, ostracism, or worse. Evolution hates this! As a result, our brains are ever watchful for interpersonal threats, and persecutory delusions may be the extreme end of this useful trait. Someone who’s bad at reading facial expressions and body language is prone to misinterpret and see malice when none is intended. A sense of constant threat leads to avoidance and isolation, with persecutory ideas intensifying in the absence of feedback.

The evolutionary angle makes sense of why paranoia is so common. This perspective also makes a tantalizing connection to the typical course of schizophrenia, in which decline of social ability is a core feature that precedes overt psychotic symptoms. On the other hand, does this imply that social anxiety and autism elevate the risk for delusions? What distinguishes the development of true delusions from people with paranoid or avoidant personality?
Neurobiological Perspective
Standard neuroscience models “converge on the idea that aberrant attribution of salience to irrelevant stimuli” is the key to understanding delusions. Normally, dopamine release tracks the salience of events, as in: is this worth paying attention to? But in psychosis, random environmental noise is treated as signal. Dopamine dysfunction has any number of causes, from meth to Parkinson’s disease to genetic factors influencing neuron development elsewhere in the brain.
This is basically the Associative Learning model at a more detailed level of analysis. Again, it mostly explains ideas of reference, not any particular delusional theme.
Bayesian Inference
As summarized elsewhere, Bayesian models cast the brain as an “inference machine that holds an internal generative model of the world.” A discrepancy between predicted and incoming data, aka prediction error, leads to an update: the world is different than expected; time to update expectations.
Delusions start with too little weight on prior beliefs or too much weight on sensory input. This creates an aberrant prediction error and a sense of surprise. (How did that billboard know I wanted Chick-fil-A?) A ready explanation (the Chick-fil-A corporation is surveilling me) resolves the uncertainty and hardens into a high-weight belief that colors other interpretations. Most delusions involve social content because the behavior of other people is harder to predict than the behavior of rocks and trees.
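The weighting story reduces to a one-line precision-weighted Gaussian update. This is a toy illustration with invented numbers, not anyone's published model: underweight the prior and overweight the data, and a mundane observation drags the belief a long way; once the resulting belief is held with high precision, contrary data barely moves it.

```python
def update(prior_mean, prior_precision, obs, obs_precision):
    """Gaussian belief update: a precision-weighted average of prior and data."""
    post_precision = prior_precision + obs_precision
    post_mean = (prior_precision * prior_mean + obs_precision * obs) / post_precision
    return post_mean, post_precision

# A coincidence: an observation far from what was expected (expected 0, saw 5).
obs = 5.0
balanced = update(prior_mean=0.0, prior_precision=4.0, obs=obs, obs_precision=1.0)
aberrant = update(prior_mean=0.0, prior_precision=0.5, obs=obs, obs_precision=4.0)

# Persistence: the aberrant belief, now a high-precision prior, shrugs off
# contradicting evidence (an observation back at 0).
entrenched = update(prior_mean=aberrant[0], prior_precision=50.0,
                    obs=0.0, obs_precision=1.0)
```

With balanced weights the belief nudges toward the observation; with a weak prior and overweighted input it jumps most of the way there, and the entrenched version stays put despite disconfirming data.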

Bayesian models are a different take on how the brain works, but they build off biological and learning models. My main question is what changes between the initial conditions required for delusion formation (lightly held prior belief, strong sensory input) and delusion persistence (hyper-strong belief that dominates subsequent data).
Three blind men come across an elephant…
You’ve noticed that these models vary in emphasis but don’t really contradict each other. Shared factors include a pre-existing bias in information processing, an aberrant experience, and a resulting need for explanation. However, the theories mostly describe a general sort of delusion rather than specific content like grandiosity, erotomania, somatic infestation, or jealousy. They also seem a poor fit for certain uncommon beliefs. Let’s pull out two of these themes, which seem to stand on their own:
Misidentification delusions, like thinking your spouse has been replaced by an impostor, involve malfunction in face recognition areas of the brain. These are unusual and distinct.
Thought control, insertion, withdrawal, and broadcasting are misattributions of inner speech, related to auditory verbal hallucinations, due to defective self-monitoring in the brain. All of these are particularly associated with schizophrenia, which is interesting.
I propose a two-factor model to account for the rest. First, we all see non-existent patterns in noise, but some more than others. Overactive pattern-matching shows up in literary close reading, generic conspiratorial thinking, belief in woo-woo, and perhaps schizotypal personality. Let’s say this mode of thinking results from a too-permissive dopamine-based salience network. Ideas of reference – seeing connections, meaning, and agency that aren’t there – factor into pretty much every delusional system.1 Taken to the extreme, this general process turns a belief into a delusion.
Second, we need something more specific to explain the content of the remaining delusions, which all have a distinct affective quality, as in fear, grandiosity, guilt, jealousy, and love (erotomania). It’s uncontroversial to posit brain systems adapted to monitor for coalitions against us, to estimate our status relative to peers, and to manage close relationships. We surely have systems to scan our body for illness and infestation.
Variation in these systems, with different set points, ranges, and sensitivities, relates to personality traits and attachment styles. More extreme variation relates to neurosis and personality pathology. But it’s the pairing of a given affective system with a broken salience network that creates the array of delusional themes we observe. The specific ways in which our brain stops working can tell us something about the way it’s supposed to work.
Yes, those are em dashes. No, I don’t use an LLM!



Nice read… Isn’t the advent of computational psychiatry exciting?
We recently had a lecture on delusions in which my friends and I discussed: What is the line with conspiracy theories?
One could certainly argue there’s a level of aberrant salience attribution and affective processing at work; except in this scenario it’s reinforced by social and group dynamics. Heck, arguably certain groups actively seek to encourage this level of salience attribution.
Of course, the line itself is social functioning. If you are ostracised for your delusions, this is bad. If it helps you in a social group, it is good. But it’s interesting to think what happens to the social status quo if we had neurobiological markers of these paradigms. Still feels a ways away.
1) I'm sure that this is covered somewhere I missed or obvious but how is a delusion conceptually distinguished from a strong, hard to shift and OBVIOUSLY FALSE belief that's socially shared and not indicative of pathology (belief in deities, spirits and similar come to mind, as well as strongly held superstitions). Intuitively, there is a difference (and I'm not mocking) but I cannot put my finger on it however I squint apart from "belief in a god having spoken to the authors of a holy book seems non pathological in a way the belief in a god speaking to me through a podcast does not".
2) While I agree that "we all see non-existent patterns in noise, but some more than others", I feel -- though this is a free-floating hypothesis rather than a hill I'm prepared to die on -- that developing a delusion requires more than just seeing patterns everywhere. It also, obviously, requires giving them credibility: taking them as valid and convincingly true. Both an openness to weird hypotheticals AND an inability to reject them or entertain them as one of many alternatives? I know a lot of people prone to fantastic conjecture and able to entertain numerous conspiracy ideas, for example -- a mixture of schizoid, open to experience/bored, and artsy -- but these don't become "sticky" delusions. They take them on and discard them fairly easily.