Channel: psychiatry – Slate Star Codex

Treating The Prodrome


A prodrome is an early stage of a condition that might have different symptoms than the full-blown version. In psychiatry, the prodrome of schizophrenia is the few-months-to-few-years period when a person is just starting to develop schizophrenia and is acting a little bit strange while still having some insight into their condition.

There’s a big push to treat schizophrenia prodrome as a critical period for intervention. Multiple studies have suggested that even though schizophrenia itself is a permanent condition which can be controlled but never cured, treating the prodrome aggressively enough can prevent full schizophrenia from ever developing at all. Advocates of this view compare it to detecting early-stage cancers, or getting prompt treatment for a developing stroke, or any of the million other examples from medicine of how you can get much better results by catching a disease very early before it has time to do damage.

These models conceptualize psychosis as “toxic” – not just unpleasant in and of itself, but damaging the brain while it’s happening. They focus on a statistic called Duration of Untreated Psychosis. The longer the DUP, the more chance psychosis has had to damage the patient before the fire gets put out and further damage is prevented. Under this model it’s vitally important to put people who seem to be getting a little bit schizophrenic on medications as soon as possible.

There has been a lot of work on this theory, but not a lot of light has been shed. Observational studies testing whether duration of untreated psychosis correlates with poor outcome mostly find it does a little bit, but there’s a lot of potential confounding – maybe lower-class uneducated people take longer to see a psychiatrist, or maybe people who are especially psychotic are especially bad at recognizing they are psychotic. The relevant studies try their hardest to control for these factors, but remember that this is harder than you think. The randomized controlled trials of what happens if you intervene earlier in psychosis tend to do very badly and rarely show any benefit, but randomly intervening earlier in psychosis is hard, especially if you also need an ethics board’s permission to keep a control group of other people who you are not going to intervene early on. Overall I could go either way on this.

Previously I was leaning toward “probably not relevant”, just because it’s too convenient. There is a lot of debate about how aggressively to treat schizophrenia, with mainstream psychiatry (and their friends the pharma companies) coming down on the side of “more aggressively”, while other people point out that antipsychotics have lots of side effects and their long-term effects (both how well they work long-term, and what negative effects they have long-term) are poorly understood. These people tend to come up with kind of wild theories about how long-term antipsychotics hypersensitize you and make you worse. I don’t currently find these very credible, but I’m also skeptical of things that are too convenient to the mainstream narrative, like “unless you treat every case of schizophrenia right away you are exposing patients to toxicity, and every second you fail to give the drugs makes them irreversibly worse forever!” And I know a bunch of people whose level of psychosis hovers at “mild” and has continued to do so for decades without the lack of treatment making it much worse.

After learning more about the biology of schizophrenia, I’ve become more willing to credit the DUP model. I can’t give great sources for this, because I’ve lost some of them, but this Friston paper, this Fletcher & Frith paper, and Surfing Uncertainty all kind of point to the same model of why untreated schizophrenia might get worse with time.

In their system, schizophrenia starts with aberrant prediction errors; the brain becomes incorrectly surprised by some sense-data. Maybe a fly buzzes by, and all of a sudden the brain shouts “WHOA! I WASN’T EXPECTING THAT! THAT CHANGES EVERYTHING!” Your brain shifts its resources to coming up with a theory of the world that explains why that fly buzzing by is so important – or perhaps which maximizes its ability to explain that particular fly at the cost of everything else.

Talk to early-stage schizophrenics, and their narrative sounds a lot like this. They’ll say something like “A fly buzzed by, and I knew somehow it was very significant. It must be a sign from God. Maybe that I should fly away from my current life.” Then you’ll tell them that’s dumb, and they’ll blink and say “Yeah, I guess it is kind of dumb, now that you mention it” and continue living a somewhat normal life.

Or they’ll say “I was wondering if I should go to the store, and then a Nike ad came on that said JUST DO IT. I knew that was somehow significant to my situation, so I figured Nike must be reading my mind and sending me messages to the TV.” Then you’ll remind them that that can’t happen, and even though it seemed so interesting that Nike sent the ad at that exact moment, they’ll back down.

But even sane people change their beliefs more in response to more evidence. If a friend stepped on my foot, I would think nothing of it. If she did it twice, I might be a little concerned. If she did it fifty times, I would have to reevaluate my belief that she was my friend. Each piece of evidence chips away at my comfortable normal belief that people don’t deliberately step on my feet – and eventually, I shift.
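The foot-stepping example can be put in explicitly Bayesian terms. Below is a minimal sketch (the probabilities are invented for illustration) of how repeated evidence erodes even a very confident prior via Bayes' rule:

```python
# Sequential Bayesian updating: each observation multiplies the odds on the
# hypothesis by a likelihood ratio, so repeated evidence compounds.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior probability of the hypothesis after one observation."""
    numerator = p_evidence_if_true * prior
    return numerator / (numerator + p_evidence_if_false * (1 - prior))

# Hypothesis: "she is my friend". Assume a friend steps on my foot by
# accident 1% of the time; a non-friend does it deliberately 30% of the time.
belief = 0.99
for _ in range(50):
    belief = bayes_update(belief, 0.01, 0.30)

print(belief)  # after fifty foot-steps, the friendly prior has collapsed
```

One foot-step barely dents the prior; fifty of them drive it to essentially zero. The schizophrenic's inferential machinery, on this model, is running exactly this computation on a stream of spurious "significant" events.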

The same process happens as schizophrenia continues. One fly buzzing by with cosmic significance can perhaps be dismissed. But suppose the next day, a raindrop lands on your head, and there’s another aberrant prediction error burst. Was the raindrop a sign from God? The evidence against is that this is still dumb; the evidence for is that you had both the fly and the raindrop, so your theory that God is sending you signs starts looking a little stronger. I’m not talking about this on the conscious level, where the obvious conclusion is “guess I have schizophrenia”. I’m talking about the pre-conscious inferential machinery, which does its own mechanical thing and tells the conscious mind what to think.

As schizophrenics encounter more and more strange things, they (rationally) alter their high-level beliefs further and further. They start believing that God often sends signs to people. They start believing that the TV often talks especially to them. They start believing that there is a conspiracy. The more aberrant events they’re forced to explain, the more they abandon their sane views about the world (which are doing a terrible job of predicting all the strange things happening to them) and adopt psychotic ones.

But since their new worldview (God often sends signs) gives a high prior on various events being signs from God, they’ll be more willing to interpret even minor coincidences as signs, and so end up in a nasty feedback loop. From the Frith and Fletcher paper:

Ultimately, someone with schizophrenia will need to develop a set of beliefs that must account for a great deal of strange and sometimes contradictory data. Very commonly they come to believe that they are being persecuted: delusions of persecution are one of the most striking and common of the positive symptoms of schizophrenia, and the cause of a great deal of suffering. If one imagines trying to make some sense of a world that has become strange and inconsistent, pregnant with sinister meaning and messages, the sensible conclusion might well be that one is being deliberately deceived. This belief might also require certain other changes in the patient’s view of the world. They may have to abandon a succession of models and even whole classes of models.

A few paragraphs later, they expand their theory to the negative symptoms of schizophrenia. That is: advanced-stage schizophrenics tend to end up in a depressed-like state where they rarely do anything or care about anything. The authors say:

Further, although we have deliberately ignored negative symptoms, it is interesting to consider whether this model might have relevance for this extremely incapacitating feature of schizophrenia. We speculate that this deficit could indeed be ultimately responsible for the amotivational, asocial, akinetic state that is characteristic of negative symptoms. After all, a world in which sensory data are noisy and unreliable might lead to a state in which decisions are difficult and actions seem fruitless. We can only speculate on whether the same fundamental deficit could account for both positive and negative features of schizophrenia but, if it could, we suggest that it would be more profound in the case of negative features, and this increased severity might be invoked to account for the strange motor disturbances (collectively known as catatonia) that can be such a striking feature of the negative syndrome.

I think what they are saying is that, as the world becomes even more random and confusing, the brain very slowly adjusts its highest level parameters. It concludes, on a level much deeper than consciousness, that the world does not make sense, that it’s not really useful to act because it’s impossible to predict the consequences of actions, and that it’s not worth drawing on prior knowledge because anything could happen at any time. It gets a sort of learned helplessness about cognition, where since it never works it’s not even worth trying. The onslaught of random evidence slowly twists the highest-level beliefs into whatever form best explains random evidence (usually: that there’s a conspiracy to do random things), and twists the fundamental parameters into a form where they expect evidence to be mostly random and aren’t going to really care about it one way or the other.

Antipsychotics treat the positive symptoms of schizophrenia – the hallucinations and delusions – pretty well. But they don’t treat the negative symptoms much at all (except, of course, clozapine). Plausibly, their antidopaminergic effect prevents the spikes of aberrant prediction error, so that the onslaught of weird coincidences stops and things only seem about as relevant as they really are.

But if your brain has already spent years twisting itself into a shape determined by random coincidences, antipsychotics aren’t going to do anything for that. It’s not even obvious that a few years of evidence working normally will twist it back; if your brain has adopted the hyperprior of “evidence never works, stop trying to respond to it”, it’s hard to see how evidence could convince it otherwise.

This theory fits the “duration of untreated psychosis” model very well. The longer you’re psychotic, with weird prediction errors popping up everywhere, the more thoroughly your brain is going to shift from its normal mode of evidence-processing to whatever mode of evidence-processing best suits receiving lots of random data. If you start antipsychotics as soon as the prediction errors start, you’ll have a few weird thoughts about how a buzzing fly might have been a sign from God, but then the weirdness will stop and you’ll end up okay. If you start antipsychotics after ten years of this kind of stuff, your brain will already have concluded that the world only makes sense in the context of a magic-wielding conspiracy plus also normal logic doesn’t work, and the sudden cessation of new weirdness won’t change that.

The Frith and Fletcher paper also tipped me off to this excellent first-person account by former-schizophrenic-turned-psychologist Peter Chadwick:

At this time, a powerful idea of reference also overcame me from a television episode of Colombo and impulsively I decided to write letters to friends and colleagues about “this terrible persecution.” It was a deadly mistake. After a few replies of the “we’ve not heard anything” variety, my subsequent (increasingly overwrought) letters, all of them long, were not answered. But nothing stimulates paranoia better than no feedback, and once you have conceived a delusion, something is bound to happen to confirm it. When phrases from the radio echoed phrases I had used in those very letters, it was “obvious” that the communications had been passed on to radio and then television personnel with the intent of influencing and mocking me. After all betrayal was what I was used to, why should not it be carrying on now? It seemed sensible. So much for my bonding with society. It was totally gone. I was alone and now trusted no one (if indeed my capacity to trust people [particularly after school] had ever been very high).

The unfortunate tirade of coincidences that shifted my mentality from sane to totally insane has been described more fully in a previous offering. From a meaningless life, a relationship with the world was reconstructed by me that was spectacularly meaningful and portentous even if it was horrific. Two typical days from this episode I have recalled as best I could and also published previously. The whole experience was so bizarre it is as if imprinted in my psyche in what could be called “floodlit memory” fashion. Out of the coincidences picked up on, on radio and television, coupled with overheard snatches of conversation in the street, it was “clear” to me that the media torment, orchestrated as inferred at the time by what I came to call “The Organization,” had one simple message: “Change or die!” Tellingly my mother (by then deceased) had had a fairly similar attitude. It even crossed my (increasingly loosely associated) mind that she had had some hand in all this from beyond the grave […]

As my delusional system expanded and elaborated, it was as if I was not “thinking the delusion,” the delusion was “thinking me!” I was totally enslaved by the belief system. Almost anything at all happening around me seemed at least “relevant” and became, as Piaget would say, “assimilated” to it. Another way of putting things was that confirmation bias was massively amplified, everything confirmed and fitted the delusion, nothing discredited it. Indeed, the very capacity to notice and think of refutatory data and ideas was completely gone. Confirmation bias was as if “galloping,” and I could not stop it.

As coincidences jogged and jolted me in this passive, vehicular state into the “realization” that my death was imminent, it was time to listen out for how the suicide act should be committed. “He has to do it by bus then?!” a man coincidentally shouted to another man in the office where I had taken an accounts job (in fact about a delivery but “of course” I knew that was just a cover story). “Yes!” came back the reply. This was indeed how my life was to end because the remark was made as if in reply to the very thoughts I was having at that moment. Obviously, The Organization knew my very thoughts.

Two days later, I threw myself under the wheels of a double decker, London bus on “New King’s Road” in Fulham, West London, to where I had just moved. In trying to explain “why all this was happening” my delusional system had taken a religious turn. The religious element, that all this torment was willed not only by my mother and transvestophobic scandal-mongerers but by God Himself for my “perverted Satanic ways,” was realized in the personal symbolism of this suicide. New King’s Road obviously was “the road of the New King” (Jesus), and my suicide would thrust “the old king” (Satan) out of me and Jesus would return to the world to rule. I then would be cast into Outer Darkness fighting Satan all the way. The monumental, world-saving grandiosity of this lamentable act was a far cry from my totally irrelevant, penniless, and peripheral existence in Hackney a few months before. In my own bizarre way, I obviously had moved up in the world. Now, I was not an outcast from it. I was saving the world in a very lofty manner. Medical authorities at Charing Cross Hospital in London where I was taken by ambulance, initially, of course, to orthopedics, fairly quickly recognized my psychotic state. Antipsychotic drugs were injected by a nurse on doctors’ advice, and eventually, I made a full physical and mental recovery.

Chadwick never got too far along; he had all the weird coincidences, he was starting to get beliefs that explained them, but he never got to a point where he shifted his fundamental concepts or beliefs about logic in an irreversible way. As far as I know he’s been on antipsychotics consistently since then, and has escaped with no worse consequences than becoming a psychology professor. I am not sure whether things would have gone worse for him without the medications, but I think it’s a possibility we have to consider.


Anxiety Sampler Kits


The best thing about personalized medicine is that it’s obviously right. The worst thing is we mostly have no idea how to do it. We know that different people respond to different treatments. But outside a few special cases like cancer, we don’t know how to predict which treatment will work for which person. Some psychiatric researchers claim they can do this at a high level; I think they’re wrong. For most treatments and most conditions, there’s no way to figure out whether a given sometimes-effective treatment will work on a given individual besides trying it and seeing.

This suggests that some chronic conditions might do best with a model centered around a controlled process of guess-and-check. When it’s safe and possible, we should be maximizing throughput – finding out how to test as many medications as we can in the short time before we exhaust our patients’ patience, and how to best assess the effects of each. The process of treating each individual should mirror the process of medicine in general, balancing the need to run controlled trials and gather more evidence with the need to move quickly.

I don’t know how seriously to take this idea, but I would like to try it.

Some friends and I made thirty of these Anxiety Sampler Kits, containing six common supplements with some level of scientific and anecdotal evidence for treating anxiety (thanks to Patreon donors for helping fund this). The 21 boxes include three nonconsecutive boxes of each supplement, plus three boxes of placebos. They’re randomly arranged and designed so that you can’t tell which ones are which – I even put some of the supplements into different colored capsules, so you can’t even be sure that two capsules that look different aren’t the same thing.

Each box contains enough supplement for one dose, and all supplements are supposed to work within an hour or so. Whenever you feel anxious, you try the first non-empty box remaining. Afterwards, you rate how you felt on the attached log (not pictured). When you’ve finished all twenty-one boxes, you fill out a form (link is on the attached paperwork) and figure out whether there was any supplement you consistently rated higher than the others, or whether any of them were better than placebo. If your three highest ratings all went to boxes which turned out to contain the same supplement, and it did much better than placebo, then you have a strong argument that this is the best anti-anxiety supplement for you.
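The scoring step described above amounts to averaging your ratings for each blinded supplement and comparing against placebo. Here is a hypothetical sketch of that computation; the log entries and ratings are invented, and in practice the labels only become known after all the boxes are used:

```python
# Per-person analysis: average the 1-10 ratings logged for each supplement,
# then see whether any supplement clearly beats placebo.

from collections import defaultdict
from statistics import mean

# (revealed box label, rating) pairs from a hypothetical completed log.
log = [
    ("placebo", 3), ("magnesium", 5), ("l-theanine", 8),
    ("placebo", 4), ("l-theanine", 7), ("GABA", 4),
    ("l-theanine", 9), ("placebo", 3), ("magnesium", 4),
]

ratings = defaultdict(list)
for supplement, rating in log:
    ratings[supplement].append(rating)

averages = {s: mean(r) for s, r in ratings.items()}
best = max(averages, key=averages.get)
print(best, averages[best], averages["placebo"])
```

With three ratings per supplement the comparison is noisy, which is why a consistent gap over placebo across all three boxes is the thing to look for rather than a single high rating.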

(this setup isn’t quite as irresponsible as it sounds. The six supplements I’m using are all considered very safe. I’m not concealing which six supplements are in it – it’s magnesium, 5-HTP, GABA, Zembrin, lemon balm, and l-theanine – so you can check if you have allergies to any of them. And there’s a spoilers page available if you have a bad reaction and need to tell your doctor what caused it)

Also on the form is a link to send me your data, which I’m asking you to do as a condition for using the kits. I’ll add everything up and this will double as an n = 30 placebo-controlled trial of six different supplements. I don’t think n = 30 is enough to impress anybody, but it might be enough to get some informal hunches about what works and be able to give people better advice. And if the experiment goes well, I can always make more kits.

If you live in the Bay Area, have enough anxiety that you expect to use a sample at least two days a week, and are okay with self-experimentation, these kits might be for you. Starting tonight I’m leaving a box full of them at the Rationality & Effective Altruism Community Hub, on the ground floor of 3045 Shattuck, Berkeley. REACH is usually open (or contains people who will open it if you knock) at all reasonable hours, and the caretaker there is aware that people might be coming in to get these kits. If you notice the box is out of kits, please comment here telling me so and I’ll add an update so people don’t waste their time. [EDIT: All out of kits, sorry! Once I have gotten results I might make a new batch.]

Remember that by taking a kit, you’re saying you expect to have anxiety that you’d be willing to experiment on at least twice a week (it’s okay if it doesn’t work out this way exactly) and you’re committing to – if you’re able to finish the test – sending me a form with your results. People who are pregnant or nursing, who have relevant preexisting medical conditions, or who are already taking potentially-interacting medications should talk to their doctor before trying these kits. I will not give you medical advice about whether these kits are safe for your specific situation, so please don’t ask. If you would be comfortable taking a random supplement you got off the shelf at Whole Foods, you should feel comfortable with everything in here.

I might take this idea further, but I’m going to wait until the first set of results come in. If you are interested in taking this idea further, send me an email and let me know your thoughts.

The Chamber of Guf


[I briefly had a different piece up tonight discussing a conference, but the organizers asked me to hold off on writing about it until they’ve put up their own synopsis. It will be back up eventually; please accept this post instead for now.]

In Jewish legend, the Chamber of Guf is a pit where all the proto-souls hang out whispering and murmuring. Whenever a child is born, an angel reaches into the chamber, scoops up a soul, and brings it into the world.

In the syncretist mindset where every legend has to be a metaphor for the human mind, I map the Chamber of Guf to all the thoughts that exist below the level of consciousness, fighting for attention.

We already know something like this happens for behaviors. From Guyenet’s The Hungry Brain:

How does the lamprey decide what to do? Within the lamprey basal ganglia lies a key structure called the striatum, which is the portion of the basal ganglia that receives most of the incoming signals from other parts of the brain. The striatum receives “bids” from other brain regions, each of which represents a specific action. A little piece of the lamprey’s brain is whispering “mate” to the striatum, while another piece is shouting “flee the predator” and so on. It would be a very bad idea for these movements to occur simultaneously – because a lamprey can’t do all of them at the same time – so to prevent simultaneous activation of many different movements, all these regions are held in check by powerful inhibitory connections from the basal ganglia. This means that the basal ganglia keep all behaviors in “off” mode by default. Only once a specific action’s bid has been selected do the basal ganglia turn off this inhibitory control, allowing the behavior to occur. You can think of the basal ganglia as a bouncer that chooses which behavior gets access to the muscles and turns away the rest. This fulfills the first key property of a selector: it must be able to pick one option and allow it access to the muscles.

So in the process of deciding what behavior to do, the (lamprey) brain subconsciously considers many different plausible behaviors, all of which compete to be enacted. I don’t know how this extends to humans, but it would make sense that maybe only the top few candidate behaviors even make it to consciousness, with the rest getting rejected without conscious consideration.
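Guyenet's "bouncer" can be sketched as a toy winner-take-all selector: every candidate behavior submits a bid, everything is inhibited by default, and only the single highest bid is released. The behaviors and bid values here are invented for illustration:

```python
# Winner-take-all selection: default state is "all inhibited"; the basal
# ganglia release inhibition only on the behavior with the strongest bid.

def select_behavior(bids):
    """Return a gate dict: True for the winning behavior, False (inhibited) for the rest."""
    winner = max(bids, key=bids.get)
    return {behavior: behavior == winner for behavior in bids}

bids = {"mate": 0.4, "flee the predator": 0.9, "feed": 0.6}
gate = select_behavior(bids)
print(gate)  # only "flee the predator" gets access to the muscles
```

The key property is that exactly one behavior is disinhibited at a time, which is what prevents the lamprey from trying to mate and flee simultaneously.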

The particular qualities of a behavior that help it reach consciousness and implementation vary depending on mental state. Guyenet goes on to talk about how in dopamine-depleted states, only the simplest and most boring behaviors make it out of the Guf; with enough dopamine blockade, a person will sit motionless in their room for lack of any better ideas. In high dopamine states like mania or methamphetamine use, it’s much easier for behaviors to make successful “bids”, and so you tend to do bizarre things that would never have seemed like good ideas otherwise.

This is how I experience thoughts too. When I’ve had a lot of coffee, I have more interesting thoughts than usual. New ideas and clever wordplay come easily to me. I don’t think it makes sense to say that coffee makes me smarter; that breaks Algernon’s Law. More likely I always have some of those thoughts in the Guf, but the relevant angel considers them too weird to be worth scooping out and bringing into the world. This is probably for the best; manic people report “racing thoughts”, a state where the angels build a giant conveyor belt from the Guf to consciousness and give you every single possible thought no matter how irrelevant. It doesn’t sound fun at all.

I find this metaphor especially useful when thinking about Gay OCD.

Gay OCD, and its close cousins Pedophilic OCD and Incest OCD, are varieties of obsessive-compulsive disorder where the patient can’t stop worrying that they’re gay (or a pedophile, or want to have sex with family members). In these more tolerant times, it’s tempting to say “whatever, you’re gay, that’s fine, get over it”. But a careful history will reveal that they aren’t; most Gay OCD patients do not experience same-sex attraction, and they’re often in fulfilling relationships with members of the opposite sex. They have no good reason to think they’re gay – they just constantly worry that they are.

I studied under a professor who was an expert in these conditions. Her theory centered around the question of why angels would select some thoughts from the Guf over others to lift into consciousness. Variables like truth-value, relevance, and interestingness play important roles. But the exact balance depends on our mood. Anxiety is a global prior in favor of extracting fear-related thoughts from the Guf. Presumably everybody’s brain dedicates a neuron or two to thoughts like “a robber could break into my house right now and shoot me”. But most people’s Selecting Angels don’t find them worth bringing into the light of consciousness. Anxiety changes the angel’s orders: have a bias towards selecting thoughts that involve fearful situations and how to prepare for them. A person with an anxiety disorder, or a recent adrenaline injection, or whatever, will absolutely start thinking about robbers, even if they consciously know it’s an irrelevant concern.

In a few unlucky people with a lot of anxiety, the angel decides that a thought provoking any strong emotion is sufficient reason to raise the thought to consciousness. Now the Gay OCD trap is sprung. One day the angel randomly scoops up the thought “I am gay” and hands it to the patient’s consciousness. The patient notices the thought “I am gay”, and falsely interprets it as evidence that they’re actually gay, causing fear and disgust and self-doubt. The angel notices this thought produced a lot of emotion and occupied consciousness for a long time – a success! That was such a good choice of thought! It must have been so relevant! It decides to stick with this strategy of using the “I am gay” thought from now on. If that ever fails to excite, it moves on to a whole host of similar thoughts that still have some punch, like “Was I just sexually attracted to that same-sex person over there?” and the like.
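The trap described above is a reinforcement loop: the angel samples thoughts in proportion to some weight, and any thought that provokes a strong reaction gets its weight boosted, making it more likely to be scooped again. A speculative sketch, with all thoughts and numbers invented:

```python
# Salience feedback loop: emotionally provocative thoughts get reinforced
# each time they're selected, so they come to dominate the sampling.

import random

random.seed(0)
weights = {"groceries": 1.0, "robber": 1.0, "am I gay?": 1.0}
provokes_strong_emotion = {"groceries": False, "robber": True, "am I gay?": True}

def scoop(weights):
    """The angel samples one thought, with probability proportional to its weight."""
    thoughts = list(weights)
    return random.choices(thoughts, [weights[t] for t in thoughts])[0]

for _ in range(200):
    thought = scoop(weights)
    if provokes_strong_emotion[thought]:
        weights[thought] *= 1.05  # "that was such a good choice of thought!"

print(max(weights, key=weights.get))
```

The neutral thought never gets boosted, so after a few hundred scoops the provocative thoughts crowd it out — the multiplicative update is what makes the obsession self-sustaining rather than self-correcting.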

I practice in San Francisco, and I rarely see Gay OCD these days. Being gay just isn’t scary enough any more. I still see some Pedophilic OCD and Incest OCD, as well as less common but obviously similar syndromes like Murderer OCD and Infanticide OCD. I’ve also started noticing a spike in Racism OCD; the patient has a stray racist thought, they react with sudden terror and self-loathing, their angel gets all excited, and then they can’t stop thinking about whether they might be a racist. There’s a paper to be written here about OCD patients as social weathervanes.

All of these can be treated with the same medications that treat normal OCD. But there’s an additional important step of explaining exactly this theory to the patient, so that they know that not only are they not gay/a pedophile/racist, but it’s actually their strong commitment to being against homosexuality/pedophilia/racism which is making them have these thoughts. This makes the thoughts provoke less strong emotion and can itself help reduce the frequency of obsessions. Even if it doesn’t do that, it’s at least comforting for most people.

This is not an official theory by an official professor, but I wonder how much of a role this same process plays in normal self-defeating thoughts. The person who can’t stop thinking “I’m fat and ugly” or “I’m an imposter who’s terrible at my career” even in the face of contradictory evidence. These thoughts seem calculated to disturb the same way Gay OCD is. They’re not as dramatic, and they rarely reach quite the same level of obsession, but the underlying process seems the same.

If you want to see the Guf directly, advanced meditators seem to be able to do this. They often report that after successfully quieting their conscious thoughts, they become gradually aware of a swamp of unquiet proto-thoughts lurking underneath. They usually describe it as really weird, which is a remarkably good match to the theory’s predictions.

Cognitive Enhancers: Mechanisms And Tradeoffs


[Epistemic status: so, so, so speculative. I do not necessarily endorse taking any of the substances mentioned in this post.]

There’s been recent interest in “smart drugs” said to enhance learning and memory. For example, from the Washington Post:

When aficionados talk about nootropics, they usually refer to substances that have supposedly few side effects and low toxicity. Most often they mean piracetam, which Giurgea first synthesized in 1964 and which is approved for therapeutic use in dozens of countries for use in adults and the elderly. Not so in the United States, however, where officially it can be sold only for research purposes. Piracetam is well studied and is credited by its users with boosting their memory, sharpening their focus, heightening their immune system, even bettering their personalities.

Along with piracetam, a few other substances have been credited with these kinds of benefits, including some old friends:

“To my knowledge, nicotine is the most reliable cognitive enhancer that we currently have, bizarrely,” said Jennifer Rusted, professor of experimental psychology at Sussex University in Britain when we spoke. “The cognitive-enhancing effects of nicotine in a normal population are more robust than you get with any other agent. With Provigil, for instance, the evidence for cognitive benefits is nowhere near as strong as it is for nicotine.”

But why should there be smart drugs? Popular metaphors speak of drugs fitting into receptors like “a key into a lock” to “flip a switch”. But why should there be a locked switch in the brain to shift from THINK WORSE to THINK BETTER? Why not just always stay on the THINK BETTER side? Wouldn’t we expect some kind of tradeoff?

Piracetam and nicotine have something in common: both activate the brain’s acetylcholine system. So do three of the most successful Alzheimers drugs: donepezil, rivastigmine, and galantamine. What is acetylcholine and why does activating it improve memory and cognition?

Acetylcholine is many things to many people. If you’re a doctor, you might use neostigmine, an acetylcholinesterase inhibitor, to treat the muscle disease myasthenia gravis. If you’re a terrorist, you might use sarin nerve gas, a more dramatic acetylcholinesterase inhibitor, to bomb subways. If you’re an Amazonian tribesman, you might use curare, an acetylcholine receptor antagonist, on your blowdarts. If you’re a plastic surgeon, you might use Botox, an acetylcholine release preventer, to clear up wrinkles. If you’re a spider, you might use latrotoxin, another acetylcholine release preventer, to kill your victims – and then be killed in turn by neonicotinoid insecticides, which are acetylcholine agonists. Truly this molecule has something for everybody – though gruesomely killing things remains its comparative advantage.

But to a computational neuroscientist, acetylcholine is:

…a neuromodulator [that] encodes changes in the precision of (certainty about) prediction errors in sensory cortical hierarchies. Each level of a processing hierarchy sends predictions to the level below, which reciprocate bottom-up signals. These signals are prediction errors that report discrepancies between top-down predictions and representations at each level. This recurrent message passing continues until prediction errors are minimised throughout the hierarchy. The ensuing Bayes optimal perception rests on optimising precision at each level of the hierarchy that is commensurate with the environmental statistics they represent. Put simply, to infer the causes of sensory input, the brain has to recognise when sensory information is noisy or uncertain and down-weight it suitably in relation to top-down predictions.

…is it too late to opt for the gruesome death? It is? Fine. God help us, let’s try to understand Friston again.

In the predictive coding model, perception (maybe also everything else?) is a balance between top-down processes that determine what you should be expecting to see, and bottom-up processes that determine what you’re actually seeing. This is faster than just determining what you’re actually seeing without reference to top-down processes, because sensation is noisy and if you don’t have some boxes to categorize things in then it takes forever to figure out what’s actually going on. In this model, acetylcholine is a neuromodulator that indicates increased sensory precision – ie a bias towards expecting sensation to be signal rather than noise – ie a bias towards trusting bottom-up evidence rather than top-down expectations.
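To make the precision idea concrete, here is a toy sketch of my own (not from the paper): in a Gaussian predictive-coding setup, the perceptual estimate is a precision-weighted average of the top-down prior and the bottom-up observation. Raising sensory precision, the hypothesized acetylcholine effect, shifts the estimate toward the incoming data. All the numbers here are made up for illustration.

```python
# Toy precision-weighted perception: posterior = weighted average of the
# top-down prior and the bottom-up observation, weighted by their precisions.

def posterior_mean(prior_mean, prior_precision, obs, obs_precision):
    """Bayesian update for a Gaussian prior with a Gaussian observation."""
    total = prior_precision + obs_precision
    return (prior_precision * prior_mean + obs_precision * obs) / total

prior, obs = 0.0, 10.0  # what you expected vs. the surprising sensation

low = posterior_mean(prior, prior_precision=4.0, obs=obs, obs_precision=1.0)
high = posterior_mean(prior, prior_precision=4.0, obs=obs, obs_precision=16.0)

print(low)   # 2.0 -- low sensory precision: the estimate mostly keeps the prior
print(high)  # 8.0 -- high sensory precision: the estimate mostly trusts the data
```

The same surprising input moves the high-precision brain four times as far, which is the shape of the galantamine result below.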

In the study linked above, Friston and collaborators connect their experimental subjects to EEG monitors and ask them to listen to music. The “music” is the same note repeated again and again at regular intervals in a perfectly predictable way. Then at some random point, it unexpectedly shifts to a different note. The point is to get their top-down systems confidently predicting a certain stimulus (the original note repeated again for the umpteenth time) and then surprise them with a different stimulus, and measure the EEG readings to see how their brain reacts. Then they do this again and again to see how the subjects eventually learn. Half the subjects have just taken galantamine, a drug that increases acetylcholine levels; the other half get placebo.

I don’t understand a lot of the figures in this paper, but I think I understand this one. It’s saying that on the first surprising note, placebo subjects’ brains got a bit more electrical activity [than on the predictable notes], but galantamine subjects’ brains got much more electrical activity. This fits the prediction of the theory. The placebo subjects have low sensory precision – they’re in their usual state of ambivalence about whether sensation is signal or noise. Hearing an unexpected stimulus is a little bit surprising, but not completely surprising – it might just be a mistake, or it might just not matter. The galantamine subjects’ brains are on alert to expect sensation to be very accurate and very important. When they hear the surprising note, their brains are very surprised and immediately reevaluate the whole paradigm.

One might expect that the very high activity on the first discordant note would be matched with lower activity on subsequent notes; the brain has now fully internalized the new prediction (ie is expecting the new note) and can’t be surprised by it anymore. As best I can tell, this study doesn’t really show that. A very similar study by some of the same researchers does. In this one, subjects on either galantamine or a placebo have to look at a dot as quickly as possible after it appears. There are some arrows that might or might not point in the direction where the dot will appear; over the course of the experiment, the accuracy of these arrows changes. The researchers measured how quickly, when the meaning of the arrows changed, the subjects shifted from the old paradigm to the new paradigm. Galantamine enhanced the speed of this change a little, though it was all very noisy. Lower-weight subjects had a more dramatic change, suggesting an effective dose-dependent response (ie the more you weigh, the less effect a constant-weight dose of galantamine will have on your body). They conclude:

This interpretation of cholinergic action in the brain is also in accord with the assumption of previous theoretical notions posing that ACh controls the speed of the memory update (i.e., the learning rate)

“Learning rate” is a technical term often used in machine learning, and I got a friend who is studying the field to explain it to me (all mistakes here are mine, not hers). Suppose that you have a neural net trying to classify cats vs. dogs. It’s already pretty well-trained, but it still makes some mistakes. Maybe it’s never seen a Chihuahua before and doesn’t know dogs can get that small, so it thinks “cat”. A good neural network will learn from that mistake, but the amount it learns will depend on a parameter called learning rate:

If learning rate is 0, it will learn nothing. The weights won’t change, and the next time it sees a Chihuahua it will make the exact same mistake.

If learning rate is very high, it will overfit. It will change everything to maximize the likelihood of getting that one picture of a Chihuahua right the next time, even if this requires erasing everything it has learned before, or dropping all “common sense” notions of dog and cat. It is now a “that one picture of a Chihuahua vs. everything else” classifier.

If learning rate is a little on the low side, the model will be very slow to learn, though it will eventually converge on a good understanding of its topic.

If learning rate is a little on the high side, the model will learn very quickly, but “jump around” between different understandings heavily weighted toward what best fits the last case it has worked on.

On many problems, it’s a good idea to start with a high learning rate in order to get a basic idea what’s going on first, then gradually lower it so you can make smaller jumps through the area near the right answer without overshooting.
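The learning-rate behavior described above can be sketched in a few lines. This is my own minimal illustration, not from the post: a single weight tracking a target with the standard SGD-style update `w += lr * (x - w)`. A rate of zero never learns; a rate of one throws away everything to fit the most recent example; a small rate converges gradually.

```python
# Minimal learning-rate demo: estimate a value from a stream of samples.

def run(samples, lr, w=0.0):
    for x in samples:
        w += lr * (x - w)  # move toward the current example by fraction lr
    return w

data = [5.0, 5.0, 5.0, 5.0, 5.0]

print(run(data, lr=0.0))            # 0.0 -- learns nothing, same mistake forever
print(run(data, lr=1.0))            # 5.0 -- fully overwritten by the last sample
print(round(run(data, lr=0.3), 3))  # partway to 5 -- slow but stable convergence
```

The lr=1.0 case is the "that one picture of a Chihuahua vs. everything else" classifier: whatever it saw last is now the whole model.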

Learning rates are sort of like sensory precision and bottom-up vs. top-down weights, in that a high learning rate says to discount prior probabilities and weight the evidence of the current case more strongly. A higher learning rate would be appropriate in a signal-rich environment, and a lower rate appropriate in a noise-rich environment.

If acetylcholine helps set the learning rate in the brain, would it make sense that cholinergic substances are cognitive enhancers / “study drugs”?

You would need to cross a sort of metaphorical divide between a very mechanical and simple sense of “learning” and the kind of learning you do where you study for your final exam on US History. What would it mean to be telling your brain that your US History textbook is “a signal-rich environment” or that it should be weighting its bottom-up evidence of what the textbook says higher than its top-down priors?

Going way beyond the research into total speculation, we could imagine the brain having some high-level intuitive holistic sense of US history. Each new piece of data you receive could either be accepted as a relevant change to that, or rejected as “noise” in the sense of not worth updating upon. If you hear that the Battle of Cedar Creek took place on October 19, 1864 and was a significant event in the Shenandoah Valley theater of the Civil War, then – if you’re like most people – it will have no impact at all on anything beyond (maybe, if you’re lucky) being able to parrot back that exact statement. If you learn that the battle took place in 2011 and was part of a Finnish invasion of the US, that changes a lot and is pretty surprising and would radically alter your holistic intuitive sense of what history is like.

Thinking of it this way, I can imagine these study drugs helping the exact date of the Battle of Cedar Creek seem a little bit more like signal, and so have it make a little bit more of a dent in your understanding of history. I’m still not sure how significant this is, because the exact date of the battle isn’t surprising to me in any way, and I don’t know what I would update based on hearing it. But then again, these drugs have really subtle effects, so maybe not being able to give a great account of how they work is natural.

And what about the tradeoff? Is there one?

One possibility is no. The idea of signal-rich vs. signal-poor environments is useful if you’re trying to distinguish whether a certain pattern of blotches is a camouflaged tiger. Maybe it’s not so useful for studying US History. Thinking of Civil War factoids as anything less than maximally-signal-bearing might just be a mismatch of evolution to the modern environment, the same way as liking sweets more than vegetables.

Another possibility is that if you take study drugs in order to learn the date of the Battle of Cedar Creek, you are subtly altering your holistic intuitive knowledge of American history in a disruptive way. You’re shifting everything a little bit more towards a paradigm where the Battle of Cedar Creek was unusually important. Maybe the people who took piracetam to help them study ten years ago are the same people who go around now talking about how the Civil War explains every part of modern American politics, and the 2016 election was just the Confederacy getting revenge on the Union, and how the latest budget bill is just a replay of the Missouri Compromise.

And another possibility is that you’re learning things in a rote, robotic way. You can faithfully recite that the Battle of Cedar Creek took place on October 19, 1864, but you’re less good at getting it to hang together with anything else into a coherent picture of what the Civil War was really like. I’m not sure if this makes sense in the context of the learning rate metaphor we’re using, but it fits the anecdotal reports of some of the people who use Adderall – which has some cholinergic effects in addition to its more traditional catecholaminergic ones.

Or it might be weirder than this. Remember the aberrant salience model of psychosis, and schizophrenic Peter Chadwick talking about how one day he saw the street “New King Road” and decided that it meant Armageddon was approaching, since Jesus was the new king coming to replace the old powers of the earth? Is it too much of a stretch to say this is what happens when your learning rate is much too high, kind of like the neural network that changes everything to explain one photo of a Chihuahua? Is this why nicotine has weird effects on schizophrenia? Maybe higher learning rates can promote psychotic thinking – not necessarily dramatically, just make things get a little weird.

Having ventured this far into Speculation-Land, let’s retreat a little. Noradrenergic and Cholinergic Modulation of Belief Updating does some more studies and fails to find any evidence that scopolamine, a cholinergic drug, alters learning rate (but why would they use scopolamine, which acts on muscarinic acetylcholine receptors, when every drug suspected to improve memory acts on nicotinic ones?). Also, nicotine seems to help schizophrenia, not worsen it, which is the opposite of the last point above. Also, everything above about acetylcholine sounds kind of like my impression of where dopamine fits in this model, especially in terms of it standing for the precision of incoming data. This suggests I don’t understand the model well enough for everything not to just blend together to me. All that my usual sources will tell me is that the acetylcholine system modulates the dopamine system.

(remember, neuroscience madlibs is “________ modulates __________”, and no matter what words you use the sentence is always true.)

SSRIs: An Update


Four years ago I examined the claim that SSRIs are little better than placebo. Since then, some of my thinking on this question has changed.

First, we got Cipriani et al’s meta-analysis of anti-depressants. It avoids some of the pitfalls of Kirsch and comes to about the same conclusion. This knocks down a few of the lines of argument in my part 4 about how the effect size might look more like 0.5 than 0.3. The effect size is probably about 0.3.

Second, I’ve seen enough to realize that the anomalously low effect size of SSRIs in studies should be viewed not as an SSRI-specific phenomenon, but as part of a general trend towards much lower-than-expected effect sizes for every psychiatric medication (every medication full stop?). I wrote about this in my post on melatonin:

The consensus stresses that melatonin is a very weak hypnotic. The Buscemi meta-analysis cites this as their reason for declaring negative results despite a statistically significant effect – the supplement only made people get to sleep about ten minutes faster. “Ten minutes” sounds pretty pathetic, but we need to think of this in context. Even the strongest sleep medications, like Ambien, only show up in studies as getting you to sleep ten or twenty minutes faster; this NYT article says that “viewed as a group, [newer sleeping pills like Ambien, Lunesta, and Sonata] reduced the average time to go to sleep 12.8 minutes compared with fake pills, and increased total sleep time 11.4 minutes.” I don’t know of any statistically-principled comparison between melatonin and Ambien, but the difference is hardly (pun not intended) day and night. Rather than say “melatonin is crap”, I would argue that all sleeping pills have measurable effects that vastly underperform their subjective effects.

Or take benzodiazepines, a class of anxiety drugs including things like Xanax, Ativan, and Klonopin. Everyone knows these are effective (at least at first, before patients develop tolerance or become addicted). The studies find them to have about equal efficacy as SSRIs. You could almost convince me that SSRIs don’t have a detectable effect in the real world; you will never convince me that benzos don’t. Even morphine for pain gets an effect size of 0.4, little better than SSRI’s 0.3 and not enough to meet anyone’s criteria for “clinically significant”. Leucht 2012 provides similarly grim statistics for everything else.

I don’t know whether this means that we should conclude “nothing works” or “we need to reconsider how we think about effect sizes”.

All this leads to the third thing I’ve been thinking about. Given that the effect size really is about 0.3, how do we square the scientific evidence (that SSRIs “work” but do so little that no normal person could possibly detect them) with the clinical evidence (that psychiatrists and patients often find SSRIs sometimes save lives and often make depression substantially better?)

The traditional way to do this is to say that psychiatrists and patients are wrong. Given all the possible biases involved, they misattribute placebo effects to the drugs, or credit some cases that would have remitted anyway to the beneficial effect of SSRIs, or disproportionately remember the times the drugs work over the times they don’t. While “people are biased” is always an option, this doesn’t fit the magnitude of the clinical evidence that I (and most other psychiatrists) observe. There are patients who will regularly get better on an antidepressant, get worse when they stop it, get better when they go back on it, get worse when they stop it again, et cetera. This raises some questions of its own, like why patients keep stopping antidepressants that they clearly need in order to function, but makes bias less likely. Overall the clinical evidence that these drugs work is so strong that I will grasp at pretty much any straw in order to save my sanity and confirm that this is actually a real effect.

Every clinician knows that different people respond to antidepressants differently or not at all. Some patients will have an obvious and dramatic response to the first antidepressant they try. Other patients will have no response to the first antidepressant, but after trying five different things you’ll find one that works really well. Still other patients will apparently never respond to anything.

Overall, only about 30 – 50% of the time when I start a patient on a particular antidepressant do we end up deciding this is definitely the right medication for them and they should definitely stay on it. This fits national and global statistics. According to a Korean study, the median amount of time a patient stays on their antidepressant prescription is three months. A Japanese study finds only 44% of patients continued their antidepressants the recommended six months; an American study finds 31%.

Suppose that one-third of patients have some gene that makes them respond to Prozac with an effect size of 1.0 (very large and impressive), and nobody else responds. In a randomized controlled trial of Prozac, the average effect size will show up as 0.33 (one-third of patients get effect size of 1, two-thirds get effect size of 0). This matches the studies. In the clinic, one-third of patients will be obvious Prozac responders, and their psychiatrist will keep them on Prozac and be very impressed with it as an antidepressant and sing the praises of SSRIs. Two-thirds of patients will get no benefit, and their doctors will write them off as non-responders and try something else. Maybe the something else will work, and then the doctors will sing the praises of that SSRI, or maybe they’ll just say it’s “treatment-resistant depression” and so doesn’t count.
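The arithmetic of this mixture story is worth making explicit (these are the post's hypothetical numbers, not real trial data): a large effect confined to a minority of responders averages out to the modest effect size the RCTs report.

```python
# One-third of patients respond with effect size 1.0; two-thirds don't
# respond at all. The trial reports only the average across everyone.

responder_fraction = 1 / 3
responder_effect = 1.0      # large, clinically obvious for responders
nonresponder_effect = 0.0   # nothing for everyone else

average_effect = (responder_fraction * responder_effect
                  + (1 - responder_fraction) * nonresponder_effect)

print(round(average_effect, 2))  # 0.33 -- the "weak" effect size the RCT sees
```

So a study-level effect size of about 0.3 is exactly what you would expect if the drug worked dramatically for a third of patients and not at all for the rest.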

In other words, doctors’ observation “SSRIs work very well” is an existence statement “there are some patients for whom SSRIs work very well” – and not a universal observation “SSRIs will always work well for all patients”. Nobody has ever claimed the latter so it’s not surprising that it doesn’t match the studies.

I linked Gueorguieva and Krystal on the original post; they are saying some kind of much more statistically sophisticated version of this. But I can’t find any other literature on this possibility, which is surprising, because if it were true it should be pretty obvious, and if it were false it should still be worth somebody’s time to debunk.

If this were true, it would strengthen the case for the throughput-based model I talk about in Recommendations vs. Guidelines and Anxiety Sampler Kits. Instead of worrying only about a medicine’s effect size and side effects, we should worry about whether it is a cheap experiment or an expensive experiment. Imagine a drug that instantly cures 5% of people’s depression, but causes terrible nausea in the other 95%. The traditional model would reject this drug, since its effect size in studies is low and it has severe side effects. On the throughput model, give this drug to everybody, 5% of people will be instantly cured, 95% of people will suffer nausea for a day before realizing it doesn’t work for them, and then the 5% will keep taking it and the other 95% can do something else. This is obviously a huge exaggeration, but I think the principle holds. If there’s enough variability, the benefit-to-side-effect ratio of SSRIs is interesting only insofar as it tells us where in our guideline to put them. After that, what matters is the benefit-to-side-effect ratio for each individual patient.

I don’t hear this talked about much and I don’t know if this is consistent with the studies that have been done.

Fourth, even though SSRIs are branded “antidepressants”, they have an equal right to be called anti-anxiety medications. There’s some evidence that they may work better for this indication than for depression, although it’s hard to tell. I think Irving Kirsch himself makes this claim: he analyzed the efficacy of SSRIs for everything and found a “relatively large effect size” of 0.7 for anxiety (though the study was limited to children). Depression and anxiety are highly comorbid and half of people with a depressive disorder also have an anxiety disorder; there are reasons to think that at some deep level they may be aspects of the same condition. If SSRIs effectively treated anxiety, this might make depressed people feel better in a way that doesn’t necessarily show up on formal depression tests, but which they would express to their psychiatrist as “I feel better”. Or, psychiatrists might have a vague positive glow around SSRIs if it successfully treats their anxiety patients (who may be the same people as their depression patients) and not be very good at separating that positive glow into “depression efficacy” and “anxiety efficacy”. Then they might believe they’ve had good experiences with using SSRIs for depression.

I don’t know if this is true and some other studies find that results for anxiety are almost as abysmal as for depression.

Ketamine: An Update


In 2016, I wrote Ketamine Research In A New Light, which discussed the emerging consensus that, contra existing theory, ketamine’s rapid-acting antidepressant effects had nothing to do with NMDA at all. I discussed some experiments which suggested they might actually be due to a related receptor, AMPA.

The latest development is Attenuation of Antidepressant Effects of Ketamine by Opioid Receptor Antagonism, which finds that the opioid-blocker naltrexone prevents ketamine’s antidepressant effects. Naltrexone does not prevent dissociation or any of the other weird hallucinatory effects of ketamine, which are probably genuinely NMDA-related. This suggests it’s just a coincidence that NMDA antagonism and some secondary antidepressant effect exist in the same drug. If you can prevent an effect from working by blocking the opiate system, a natural assumption is that the effect works on the opiate system, and the authors suggest this is probably true.

(unexpected national news tie-in: Kavanaugh accuser Christine Blasey Ford is one of the authors of this paper)

In retrospect, there were warnings. The other study to have found an exciting rapid-acting antidepressant effect for an ordinary drug was Ultra-Low-Dose Buprenorphine As A Time-Limited Treatment For Severe Suicidal Ideation. It finds that buprenorphine (the active ingredient in suboxone), an opiate painkiller also used in treating addictions to other opiates, can quickly relieve the distress of acutely suicidal patients.

This didn’t make as big a splash as the ketamine results, for two reasons. First, everyone knows opiates feel good, and so maybe this got interpreted as just a natural extension of that truth (the Scientific American article on the discovery focused on an analogy where “mental pain” was the same as “physical pain” and so could be treated with painkillers). Second, we’re currently fighting a War On Opiates, and discovering new reasons to prescribe them seems kind of like giving aid and comfort to the enemy.

Ketamine is interesting because nobody can just reduce its mode of action to “opiates feel good”. Although it was long known to have some weak opiate effects, it doesn’t feel good; all the dissociation and hallucinations and stuff make sure of that. Whatever is going on is probably something more complicated.

The psychiatric establishment’s response, as published in the prestigious American Journal of Psychiatry, is basically “well, f@#k”. Here we were, excited about NMDA (or AMPA) giving us a whole new insight into the mechanisms of depression and the opportunity for a whole new class of treatment – and instead it looks like maybe it’s just pointing to The Forbidden Drugs That Nobody Is Supposed To Prescribe. The article concludes that ketamine should not be abandoned, but ketamine clinics under anaesthesiologists should be discouraged in favor of care monitored by psychiatrists. I will try not to be so cynical as to view this as the establishment seizing the opportunity for a power grab.

What happens now? A lot of this depends on addiction. One way we could go would be to say that although ketamine might have some opiate effects, it’s not addictive to the same degree as morphine, and it doesn’t seem to turn users into drug fiends, so we should stop worrying and press forward. We could even focus research on finding other opiates in a sweet spot where they’re still strong enough to fight depression but not strong enough to get people addicted. Maybe very-low-dose-buprenorphine is already in this sweet spot, I don’t know.

But all of this is going to be shaped by history. Remember that heroin was originally invented (and pushed) as a less-addictive, safer opiate that would solve the opiate crisis. Medicine has a really bad habit of seizing on hopes that we have found a less addictive version of an addictive thing, and only admitting error once half the country is addicted to it. And there are all sorts of weird edge cases – does ketamine cross-sensitize people to other opiates? Does it increase some sort of domain-general addiction-having-center in the brain? I know substance abuse doctors who believe all of this stuff.

Also, should we start thinking opiates have some sort of deep connection to depression? “Depression is related to the stuff that has the strongest effect on human happiness of any molecule class known” seems…actually pretty plausible now that I think about it. I don’t know how much work has been done on this before. I hope to see more.

Book Review: Evolutionary Psychopathology


I.

Evolutionary psychology is famous for having lots of stories that make sense but are hard to test. Psychiatry is famous for having mountains of experimental data but no idea what’s going on. Maybe if you added them together, they might make one healthy scientific field? Enter Evolutionary Psychopathology: A Unified Approach by psychology professor Marco del Giudice. It starts by presenting the theory of “life history strategies”. Then it uses the theory – along with a toolbox of evolutionary and genetic ideas – to shed new light on psychiatric conditions.

Some organisms have lots of low-effort offspring. Others have a few high-effort offspring. This was the basis of the old r/k selection theory. Although the details of that theory have come under challenge, the basic insight remains. A fish will lay 10,000 eggs, then go off and do something else. 9,990 will get eaten by sharks, but that still leaves enough for there to be plenty of fish in the sea. But an elephant will spend two years pregnant, three years nursing, and ten years doing at least some level of parenting, all to produce a single big, well-socialized, and high-prospect-of-life-success calf. These are two different ways of doing reproduction. In keeping with the usual evolutionary practice, del Giudice calls the fish strategy “fast” and the elephant strategy “slow”.

To oversimplify: fast strategies (think “live fast, die young”) are well-adapted for unpredictable dangerous environments. Each organism has a pretty good chance of randomly dying in some unavoidable way before adulthood; the species survives by sheer numbers. Fast organisms should grow up as quickly as possible in order to maximize the chance of reaching reproductive age before they unpredictably die. They should mate with anybody around, to maximize the chance of mating before they unpredictably die. They should ignore their offspring, since they expect most offspring to unpredictably die, and since they have too many to take care of anyway. They should be willing to take risks, since the downside (death without reproducing) is already their default expectation, and the upside (becoming one of the few individuals to give birth to the 10,000 offspring of the next generation) is high.

Slow strategies are well-adapted for safer environments, or predictable complex environments whose intricacies can be mastered with enough time and effort. Slow strategy animals may take a long time to grow up, since they need to achieve mastery before leaving their parents. They might be very picky maters, since they have all the time in the world to choose, will only have a few children each, and need to make sure each of those children has the best genes possible. They should work hard to raise their offspring, since each individual child represents a substantial part of the prospects of their genetic line. They should avoid risks, since the downside (death without reproducing) would be catastrophically worse than default, and the upside (giving birth to a few offspring of the next generation) is what they should expect anyway.

Del Giudice asks: what if life history strategies differ not just across species, but across individuals of the same species? What if this theory applied within the human population?

In line with animal research on pace-of-life syndromes, human research has shown that impulsivity, risk-taking, and sensation seeking, are systematically associated with fast life history traits such as early intercourse, early childbearing in females, unrestricted sociosexuality, larger numbers of sexual partners, reduced long-term mating orientation, and increased mortality. Future discounting and heightened mating competition reduce the benefits of reciprocal long-term relationships; in motivational terms, affiliation and reciprocity are downregulated, whereas status seeking and aggression are upregulated. The resulting behavioral pattern is marked by exploitative and socially antagonistic tendencies; these tendencies may be expressed in different forms in males and females, for example through physical versus relational aggression (Belsky et al 1991; Borowsky et al 2009; Brezina et al 2009; Chen & Vazsonyi 2011; Copping et al 2013a, 2013b, 2014a; Curry et al 2008; Dunkel & Decker 2010 […]

And:

Disgust sensitivity is another dimension of individual differences with links to the fast-slow continuum. To begin, high disgust sensitivity is broadly associated with measures of risk aversion. Moral and sexual disgust correlate with higher agreeableness, conscientiousness, and honesty-humility; and sexual disgust specifically predicts restricted sociosexuality (Al-Shawaf et al 2015; Sparks et al 2018; Tybur et al 2009, 2015; Tybur & de Vries 2013). These findings suggest that the disgust system is implicated in the regulation of life-history-related behaviors. In particular, sexual and moral disgust show the most consistent pattern of correlations with other indicators of slow strategies.

Romantic attachment styles have wide ranging influences on sexuality, mating, and couple stability, but their relations with life history strategies are somewhat complex. Secure attachment styles are consistently associated with slow life history traits (eg Chisholm 1999b; Chisholm et al 2005; Del Giudice 2009a). Avoidance predicts unrestricted sociosexuality, reduced long-term orientation, and low commitment to partners (Brennan & Shaver 1995; Jackson & Kirkpatrick 2007; Templehof & Allen 2008). Given the central role of pair bonding in long-term parental investment, avoidant attachment – which, on average, is higher in men – can be generally interpreted as a mediator of reduced parenting effort. However, some inconsistent findings indicate that avoidance may capture multiple functional mechanisms. High levels of romantic avoidance are found both in people with very early sexual debut and in those who delay intercourse (Gentzler & Kearns, 2004); this suggests that, at least for some people, avoidant attachment may actually reflect a partial downregulation of the mating system, consistent with slower life history strategies.

And:

At a higher level of abstraction, the behavioral correlates of life history strategies can be framed within the five-factor model of personality. Among the Big Five, agreeableness and conscientiousness show the most consistent pattern of associations with slow traits such as restricted sociosexuality, long-term mating orientation, couple stability, secure attachment to parents in infancy and romantic partners in adulthood, reduced sex drive, low impulsivity, and risk aversion across domains (eg Baams et al 2004; Banai & Pavela 2015; Bourage et al 2007; DeYoung 2001; Holtzman & Strube 2013; Jonasen et al 2013 […] Some researchers working in a life history perspective have argued that the general factor of personality should be regarded as the core personality correlate of slow strategies.

Del Giudice suggests that these traits, and predisposition to fast vs. slow life history in general, are caused by a gene × environment interaction. The genetic predisposition is straightforward enough. The environmental aspect is more interesting.

There has been some research on the thrifty phenotype hypothesis: if you’re undernourished while in the womb, you’ll be at higher risk of obesity later on. Some mumble “epigenetic” mumble “mechanism” looks around, says “We seem to be in a low-food environment, better design the brain and body to gorge on food when it’s available and store lots of it as fat”, then somehow modulates the relevant genes to make it happen.

Del Giudice seems to imply that a similar epigenetic mechanism “looks around” at the world during the first few years of life to try to figure out if you’re living in the sort of unpredictable dangerous environment that needs a fast strategy, or the sort of safe, masterable environment that needs a slow strategy. Depending on your genetic predisposition and the observable features of the environment, this mechanism “makes a decision” to “lock” you into a faster or slower strategy, setting your personality traits more toward one side or the other.

He further subdivides fast vs. slow life history into four different “life history strategies”.

The antagonistic/exploitative strategy is a fast strategy that focuses on getting ahead by defecting against other people. Because it expects a short and noisy life without the kind of predictable iterated games that build reciprocity, it throws all this away and focuses on getting ahead quick. A person who has been optimized for an antagonistic/exploitative strategy will be charming, good at some superficial social tasks, and have no sense of ethics – ie the perfect con man. Antagonistic/exploitative people will have opportunities to reproduce through outright rape, through promising partners commitment and then not providing it, through status in criminal communities, or through things in the general category of hiring prostitutes when both parties are too drunk to use birth control. These people do not have to be criminals; they can also be the most cutthroat businessmen, lawyers, and politicians. Jumping ahead to the psychiatry connection, the extreme version of this strategy is probably antisocial personality disorder.

The creative/seductive strategy is a fast strategy that focuses on getting ahead through sexual selection, ie optimizing for being really sexy. Because it expects a short and noisy life, it focuses on raw sex appeal (which peaks in the late teens and early twenties) as opposed to things like social status or ability to care for children (which peak later in maturity). A person who has been optimized for a creative/seductive strategy will be attractive, artistic, flirtatious, and popular – eg the typical rock star or starlet. They will also have traits that support these skills, which for complicated reasons include being very emotional. Creative/seductive people will have opportunities to reproduce through making other people infatuated with them; if they are lucky, they can seduce a high-status high-resource person who can help take care of the children. The most extreme version of this strategy is probably borderline personality disorder.

The prosocial/caregiving strategy is a slow strategy that focuses on being a responsible pillar of the community who everybody likes. Because it expects a slow and stable life, it focuses on lasting relationships and cultivating a good reputation that will serve it well in iterated games. A person who has been optimized for a prosocial/caregiving strategy will be dependable, friendly, honest, and conformist – eg the typical model citizen. Prosocial/caregiving people will have opportunities to reproduce by marrying their high school sweetheart, living in a suburban tract house, and having 2.4 children who go to state college. The most extreme version of this strategy is probably being a normie.

The skilled/provisioning strategy is a slow strategy that focuses on being good at specific useful tasks. Because it expects a slow and stable life, it focuses on gaining abilities that may take years to bear any fruit. A person who is optimized for a skilled/provisioning strategy will be intelligent, good at learning, and a little bit obsessive. They may not invest as much in friendliness or seductiveness; once they succeed at their chosen path, they will get social status through being indispensable for the continued functioning of the community, and they will have opportunities to reproduce because of their high status and obvious ability to provide for the children. The most extreme version of this strategy is probably high-functioning autism.

This division into life strategies is a seductive idea. I mean, literally, it’s a seductive idea, ie in terms of memetic evolution, we may worry it is optimized for a seductive/creative strategy for reproduction, rather than the boring autistic “is actually true” strategy. The following is not a figure from Del Giudice’s book, but maybe it should be:

There’s a lot of debate these days about how we should treat research that fits our existing beliefs too closely. I remember Richard Dawkins (or maybe some other atheist) once argued we should be suspicious of religion because it was too normal. When you really look at the world, you get all kinds of crazy stuff like quantum physics and time dilation, but when you just pretend to look at the world, you get things like a loving Father, good vs. evil, and ritual purification – very human things, things a schoolchild could understand. Atheists and believers have since had many debates over whether religion is too ordinary or sufficiently strange, but I haven’t heard either side deny the fundamental insight that science should do something other than flatter our existing categories for making sense of the world.

On the other hand, the first thermometer no doubt recorded that it was colder in winter than in summer. And if someone had criticized physicists, saying “You claim to have a new ‘objective’ way of looking at temperature, but really all you’re doing is justifying your old prejudices that the year is divided into nice clear human-observable parts, and summer is hot and winter is cold” – then that person would be a moron.

This kind of thing keeps coming up, from Klein vs. Harris on the science of race to Jussim on stereotype accuracy. I certainly can’t resolve it here, so I want to just acknowledge the difficulty and move on. If it helps, I don’t think Del Giudice wants to argue these are objectively the only four possible life strategies and that they are perfect Platonic categories, just that these are a good way to think of some of the different ways that organisms (including humans) can pursue their goal of reproduction.

II.

Psychiatry is hard to analyze from an evolutionary perspective. From an evolutionary perspective, it shouldn’t even exist. Most psychiatric disorders are at least somewhat genetic, and most psychiatric disorders decrease reproductive fitness. Biologists have equations that can calculate how likely it is that maladaptive genes can stay in the population for certain amounts of time, and these equations say, all else being equal, that psychiatric disorders should not be possible. Apparently all else isn’t equal, but people have had a lot of trouble figuring out exactly what that means. A good example of this kind of thing is Greg Cochran’s theory that homosexuality must be caused by some kind of infection; he does not see another way it could remain a human behavior without being selected into oblivion.

Del Giudice does the best he can within this framework. He tries to sort psychiatric conditions into a few categories based on possible evolutionary mechanisms.

First, there are conditions that are plausibly good evolutionary strategies, and people just don’t like them. For example, nymphomania is unfortunate from a personal and societal perspective, but one can imagine the evolutionary logic checks out.

Second, there are conditions which might be adaptive in some situations, but don’t work now. For example, antisocial traits might be well-suited to environments with minimal law enforcement and poor reputational mechanisms for keeping people in check; now they will just land you in jail.

Third, there are conditions which are extreme levels of traits which it’s good to have a little of. For example, a little anxiety is certainly useful to prevent people from poking lions with sticks just to see what will happen. Imagine (as a really silly toy model) that two genes A and B determine anxiety, and the optimal anxiety level is 10. Alice has gene A = 8 and gene B = 2. Bob has gene A = 2 and gene B = 8. Both of them are happy well-adjusted individuals who engage in the locally optimal level of lion-poking. But if they reproduce, their child may inherit gene A = 8 and gene B = 8 for a total of 16, much more anxious than is optimal. This child might get diagnosed with an anxiety disorder, but it’s just a natural consequence of having genes for various levels of anxiety floating around in the population.
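The toy model above is easy to make concrete. Here is a minimal sketch in Python (the two-gene additive setup, the gene values, and the optimum of 10 are the post's own hypothetical numbers, not anything from the book): it enumerates the four equally likely gene combinations a child of Alice and Bob could inherit.

```python
OPTIMAL_ANXIETY = 10  # the locally optimal level of lion-poking caution

# Each parent's anxiety is the simple sum of two additive gene effects.
alice = {"A": 8, "B": 2}  # total 10: well-adjusted
bob = {"A": 2, "B": 8}    # total 10: well-adjusted

# A child inherits one value for each gene from either parent,
# giving four equally likely combinations.
children = [
    {"A": a, "B": b}
    for a in (alice["A"], bob["A"])
    for b in (alice["B"], bob["B"])
]

for child in children:
    total = child["A"] + child["B"]
    label = "optimal" if total == OPTIMAL_ANXIETY else "off-target"
    print(child, "-> anxiety", total, f"({label})")
```

Half the possible children land exactly on the optimum (10), but one in four inherits 8 + 8 = 16 (the over-anxious child from the example) and one in four inherits 2 + 2 = 4 (an under-anxious lion-poker) – even though both parents were perfectly adjusted.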

Fourth, there are conditions which are the failure modes of traits which it’s good to have a little of. For example, psychiatrists have long categorized certain common traits into “schizotypy”, a cluster of characteristics more common in the relatives of schizophrenics and in people at risk of developing schizophrenia themselves. These traits are not psychotic in and of themselves and do not decrease fitness, nor is schizophrenia necessarily just the far end of this distribution. But schizotypal traits are one necessary ingredient of getting schizophrenia; schizophrenia is some kind of failure mode only possible with enough schizotypy. If schizotypal traits do some other good thing, they can stick around in the population, and this will look a lot like “schizophrenia is genetic”.

How can we determine which of these categories any given psychiatric disorder falls into?

One way is through what is called taxometrics – the study of to what degree mental disorders are just the far end of a normal distribution of traits. Some disorders are clearly this way; for example, if you quantify and graph everybody’s anxiety levels, they will form a bell curve, and the people diagnosed with anxiety disorders will just be the ones on the far right tail. Are any disorders not this way? This is a hard question, though schizophrenia is a promising candidate.
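The dimensional picture described above can be illustrated with a quick simulation (all numbers here are made up for illustration; real taxometric analyses use more sophisticated procedures): draw trait scores from a single normal distribution and "diagnose" everyone past a cutoff. The diagnosed group is just the right tail of one continuum, not a separate cluster.

```python
import random
import statistics

random.seed(0)

# Hypothetical trait scores: one normal population, no distinct "disease" cluster.
scores = [random.gauss(100, 15) for _ in range(100_000)]

# "Diagnosis" is simply crossing a cutoff on the same continuum
# (here, roughly the 98th percentile).
cutoff = statistics.quantiles(scores, n=100)[97]
diagnosed = [s for s in scores if s > cutoff]

print(f"population mean = {statistics.mean(scores):.1f}")
print(f"diagnostic cutoff = {cutoff:.1f}")
print(f"diagnosed: {len(diagnosed)} ({100 * len(diagnosed) / len(scores):.1f}%)")
```

A truly taxonic disorder would instead show up as a mixture of two distributions – a separate bump in the data rather than a smooth tail – which is roughly what taxometric methods try to detect.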

Another way is through measuring the correlation of disorders with mutational load. Some people end up with more mutations (and so a generically less functional genome) than others. The most common cause of this is being the child of an older father, since that gives mutations more time to accumulate in sperm cells. Other people seem to have higher mutational load for other, unclear reasons, which can be measured through facial asymmetry and the presence of minor physical abnormalities (like weirdly-shaped ears). If a particular psychiatric disorder is more common in people with increased mutational load, that suggests it isn’t just a functional adaptation but some kind of failure mode of something or other. Schizophrenia and low-functioning autism are both linked to higher mutational load.

Another way is by trying to figure out what aspect of evolutionary strategy matches the occurrence of the disorder. Developmental psychologists talk about various life stages, each of which brings new challenges. For example, adrenarche (age 6-8) marks “the transition from early to middle childhood”, when “behavioral plasticity and heightened social learning go hand in hand with the expression of new genetic influences on psychological traits such as aggression, prosocial behavior, and cognitive skills” and children receive social feedback “about their attractiveness and competitive ability”. More obviously, puberty marks the expression of still other genetic influences and the time at which young people start seriously thinking about sex. So if various evolutionary adaptations to deal with mating suddenly become active around puberty, and some mental disorder always starts at puberty, that provides some evidence that the mental disorder might be related to an evolutionary adaptation for dealing with mating. Or, since a staple of evo psych is that men and women pursue different reproductive strategies, if some psychiatric disease is twice as common in women (eg depression) or five times as common in men (eg autism), then that suggests it’s correlated with some strategy or trait that one sex uses more than the other.

This is where Del Giudice ties in the life history framework. If some psychiatric disease is more common in people who otherwise seem to be pursuing some life strategy, then maybe it’s related to that strategy. Either it’s another name for that strategy, or it’s another name for an extreme version of that strategy, or it’s a failure mode of that strategy, or it’s associated with some trait or adaptation which that strategy uses more than others do. By determining the association of disorders with certain life strategies, we can figure out what adaptive trait they’re piggybacking on, and from there we can reverse engineer them and try to figure out what went wrong.

This is a much more well-thought-out and orderly way of thinking about psychiatric disease than anything I’ve ever seen anyone else try. How does it work?

Unclear. Psychiatric disorders really resist being put into this framework. For example, some psychiatric disorders have a u-shaped curve regarding childhood quality – they are more common both in people with unusually deprived childhoods and people with unusually good childhoods. Many anorexics are remarkably high-functioning, so much so that even the average clinical psychiatrist takes note, but others are kind of a mess. Autism is classically associated with very low IQ and with bodily asymmetries that indicate high mutational load, but a lot of autistics have higher-than-normal IQ and minimal bodily asymmetry. Schizophrenia often starts in a very specific window between ages 18 and 25, which sounds promising for a developmental link, but a few cases will start at age 5, or age 50, or pretty much whenever. Everything is like this. What is a rational, order-loving evolutionary psychologist supposed to do?

Del Giudice bites the bullet and says that most of our diagnostic categories conflate different conditions. The unusually high-functioning anorexics have a different disease than the unusually low-functioning ones. Low IQ autism with bodily asymmetries has a different evolutionary explanation than high IQ autism without. In some cases, he is able to marshal a lot of evidence for distinct clinical entities. For example, most cases of OCD start in adulthood, but one-third begin in early childhood instead. These early OCD cases are much more likely to be male, more likely to have high conscientiousness, more likely to co-occur with autistic traits, and have a different set of obsessions focusing on symmetry, order, and religion (my own OCD started in very early childhood and I feel called out by this description). Del Giudice says these are two different conditions, one of which is associated with pathogen defense and one of which is associated with a slow life strategy.

Deep down, psychiatrists know that we have not really subdivided the space of mental disorders very well. Every year a new study comes out purporting to have discovered the three types of depression, or the four types of depression, or the five types of depression, or some other number of types of depression that some scientist thinks she has discovered. Often these are given explanatory power, like “number three is the one that doesn’t respond to SSRIs”, or “1 and 2 are biological; 3, 4, and 5 are caused by life events”. All of these seem equally plausible, so given that they all say different things I tend to ignore all of them. So when del Giudice puts depression under his spotlight and finds it subdivides into many different subdisorders, this is entirely fair. Maybe we should be concerned if he didn’t find that.

But part of me is still concerned. If evo psych correctly predicted the characteristics of the psychiatric disorders we observe, then we would count that as theoretical confirmation. Instead, it only works after you replace the psychiatric disorders we observe with another, more subtle set right on the threshold of observation. The more you’re allowed to diverge from our usual understanding, the more chance you have to fudge your results; the more different disorders you can divide things into, the more options you have for overfitting. Del Giudice’s new schema may well be accurate; it just makes it hard to check his accuracy.

On the other hand, reality has a surprising amount of detail. Every previous attempt to make sense of psychopathology has failed. But psychopathology has to make sense. So it must make sense in some complicated way. If you see what looks like a totally random squiggle on a piece of paper, then probably the equation that describes it really is going to have a lot of variables, and you shouldn’t criticize a many-variable equation as “overfitting”. There is a part of me that thinks this book is a beautiful example of what solving a complicated field would look like. You take all the complications, and you explain them by layering a bunch of different simple and reasonable things on top of one another. The psychiatry parts of Evolutionary Psychopathology: A Unified Approach do this. I don’t know if it’s all just epicycles, but it’s a heck of a good try.

I would encourage anyone with an interest in mental health and a tolerance for dense journal-style writing to read the psychiatry parts of this book. Whether or not the hypotheses are right, in the process of defending them it calls in such a wide array of evidence, from so many weird studies that nobody else would have any reason to think about, that it serves as a fantastic survey of the field from an unusual perspective. If you’ve ever wanted to know how many depressed people are reproducing (surprisingly many! about 90 – 100% as many as non-depressed people!) or what the IQ of ADHD people is (0.6 standard deviations below average; the people most of you see are probably from a high-functioning subtype) or how schizophrenia varies with latitude (triples as you move from the equator to the poles, but after adjusting for this darker-skinned people seem to have more, suggesting a possible connection with Vitamin D), this is the book for you.

III.

I want to discuss some political and social implications of this work. These are my speculations only; del Giudice is not to blame.

We believe that an abusive or deprived childhood can negatively affect people’s life chances. So far, we’ve cashed this out entirely in terms of brain damage. Children’s developing brains “can’t deal with the trauma” and so become “broken” in ways that make them a less functional adult. Life history theory offers a different explanation. Nothing is “broken”. Deprived children have just looked around, seen what the world is like, and rewired themselves accordingly on some deep epigenetic level.

I was reading this at the same time as the studies on preschool, and I couldn’t help noticing how well they fit together. The preschool studies were surprising because we expected them to improve children’s intelligence. Instead, they improved everything else. Why? This would make sense if the safe environment of preschool wasn’t “fixing” their “broken” brains, but pushing them to follow a slower life strategy. Stay in school. Don’t commit crimes. Don’t have kids while you’re still a teenager. This is exactly what we expect a push towards slow life strategies to do.

Life strategies even predict the “fade-out/fade-in” nature of the effects; the theory specifies that although aspects of life strategy may be set early on, they only “activate” at the appropriate developmental period. From page 93: “The social feedback that children receive in this phase [middle childhood]…may feed into the regulation of puberty timing and shape behavioral strategies in adolescence.”

Society has done a lot to try to help disadvantaged children. A lot of research has been gloomy about the downstream effects; none of it raised anybody’s IQ, there are still lots of poor people around, income inequality continues to increase. But maybe we’re just looking in the wrong place.

On a related note: a lot of intelligent, responsible, basically decent young men complain of romantic failure. Although the media has tried hard to make this look like some kind of horrifying desire to rape everybody because they believe they are entitled to whatever and whoever they want, the basic complaint is more prosaic: “I try to be a nice guy who contributes to society and respects others; how come I’m a miserable 25-year-old virgin, whereas every bully and jerk and frat bro I know is able to get a semi-infinite supply of sex partners whom they seduce, abuse, and dump?” This complaint isn’t imaginary; studies have shown that criminals are more likely to have lost their virginity earlier, that boys with more aggressive and dishonest behaviors have earlier age of first sexual intercourse, and that women find men with dark triad traits more attractive. I used to work in a psychiatric hospital that served primarily adolescents with a history of violence or legal issues; most of them had had multiple sexual encounters by age fifteen; only half of MIT students in their late teens and early 20s have had sex at all.

Del Giudice’s work offers a framework by which to understand these statistics. Most MIT students are probably pursuing slow life strategies; most violent adolescents in psych hospitals are probably pursuing fast ones. Fast strategies activate a suite of traits designed for having sex earlier; slow life strategies activate a suite of traits designed for preventing early sex. There’s a certain logical leap here where you have to explain how, if an individual is trying very hard to have teenage sex, his mumble epigenetic mumble mechanism can somehow prevent this. But millions of very vocal people’s lived experiences argue that it can. The good news for these people is that they are adapted for a life strategy which in the past has consistently resulted in reproduction at some point. Maybe when they graduate with a prestigious MIT degree, they will get enough money and status to attract a high-quality slow-strategy mate, who can bear high-quality slow-strategy kids who produce many surviving grandchildren. I don’t know. This hasn’t happened to me yet. Maybe I should have gone to MIT.

Finally, the people who like to say that various things “serve as a justification for oppression” are going to have a field day with this one. Although del Giudice is too scientific to assign any moral weight to his life history strategies, it’s not that hard to import it.


Life strategies run the risk of reifying some of our negative judgments. If criminals are pursuing a hard-coded antagonistic-exploitative strategy, that doesn’t look good for rehabilitation. Likewise, if some people are pursuing creative-seductive strategies, that provides new force to the warning to avoid promiscuous floozies and stick to your own social class. In the extreme version of this, you could imagine a populism that claims to be fighting for the decent middle-class slow-strategy segment of the population against an antagonistic/exploitative underclass. The creative/seductive people are on thin ice – maybe they should start producing art that looks like something.

(it doesn’t help that this theory is distantly related to an earlier theory proposed by Canadian psychologist John Rushton, who added that black people are racially predisposed to fast strategies and Asians to slow strategies, with white people somewhere in the middle. Del Giudice mentions Rushton just enough that nobody can accuse him of deliberately covering up his existence, then hastily moves on.)

But aside from the psychological compellingness, this doesn’t make a lot of sense. We already know that antagonistic and exploitative people exist in the world. All that life history theory does is exactly what progressives want to do: provide an explanation that links these qualities to childhood deprivation, or to dangerous environments where they may be the only rational choice. Sure, you would have to handwave away the genetic aspect, but you’re going to have to be handwaving away some genetics to make this kind of thing work no matter what, and life history theory makes this easier rather than harder. It also provides some testable hypotheses about what aspects of childhood deprivation we might want to target, and what kind of effects we might expect such interventions to have.

Apart from all this, I find life history strategy theory sort of reassuring. Until now, atheists have been denied the comfort of knowing God has a plan for them. Sure, they could know that evolution had a plan for them, but that plan was just “watch dispassionately to see whether they live or die, then adjust gene frequencies in the next generation accordingly”. In life history strategy theory, evolution – or at least your mumble epigenetic mumble mechanism – actually has a plan for you. Now we can be evangelical atheists who have a personal relationship with evolution. It’s pretty neat.

And I come at this from the perspective of someone who has failed at many things despite trying very hard, and also succeeded at others without even trying. This has been a pretty formative experience for me, and it’s seductive to be able to think of all of it as part of a plan. Literally seductive, in the sense of memetic evolution. Like that Hogwarts chart.

Read this book at your own risk; its theories will start creeping into everything you think.

Del Giudice On The Self-Starvation Cycle

[Content note: eating disorders]

Anorexia has a cultural component. I’m usually reluctant to assume anything is cultural – every mediocre social scientist’s first instinct is always to come up with a cultural explanation which is simple, seductive, flattering to all our existing prejudices, and wrong. But after seeing enough ballerinas and cheerleaders who became anorexic after pressure to lose weight for the big competition, even I have to throw up my hands and admit anorexia has a cultural component.

But nobody ever tells you the sequel. That ballerina who’s losing weight for the big competition at age 16? At age 26, she’s long since quit ballet, worried it would exacerbate her anorexia. She’s been in therapy for ten years; for eight of them she’s admitted she has a problem, that her anorexia is destroying her life. Her romantic partners – the ones she was trying to get thin to impress – have long since left her because she looks skeletal and weird. She understands this and would do anything to cure her anorexia and be a normal weight again. But she finds she isn’t hungry. She hasn’t eaten in two days and she isn’t hungry. In fact, the thought of food sickens her. She goes to increasingly expert therapists and dieticians, asking them to help her eat more. They recommend all the usual indulgences: ice cream, french fries, cookies. She tries all of them and finds them inexplicably disgusting. Sometimes with a prodigious effort of will she will manage to finish one cookie, and congratulate herself, but the next day she finds the task of eating dessert as daunting as ever. Finally, after many years of hard work, she is scraping the bottom end of normal weight by keeping to a diet so regimented it would make a Prussian general blush.

And nobody ever tells you about all the people who weren’t ballerinas. The young man who stops eating because it gives him a thrill of virtue and superiority to be able to demonstrate such willpower. The young woman who stops eating in order to show her family how much their neglect hurts her. If they pursue their lack of appetite far enough, they end up the same way as the ballerina – admitting they have a problem, admitting they need to eat more, hiring all sorts of doctors and dieticians to find them a way to eat more, but discovering themselves incapable of doing so.

And this is why I can’t subscribe to a purely cultural narrative of anorexia. How does “ballerinas are told they should be thin in order to be pretty” explain so many former ballerinas who want to gain weight but can’t? And how does it explain the weird, almost neurological stuff like how anorexic people will mis-estimate their ability to fit through doors?

All of this makes much more sense in a biological context; it’s as if the same system that is broken in obese people who cannot lose weight no matter how hard they try, is broken in anorexics who cannot gain weight no matter how hard they try. There are plenty of biological models for what this might mean. But then the question becomes: how do we reconcile the obviously cultural part where it disproportionately happens to ballerinas, to the probably biological part where the hypothalamus changes its weight set point?

I’m grateful to Professor del Giudice and Evolutionary Psychopathology for presenting the only reasonable discussion of this I have heard, which I quote here basically in its entirety:

The self-starvation cycle arises in predisposed individuals following an initial phase of food restriction and weight loss. Food restriction may be initially prompted by a variety of motives, from weight concerns and a desire for thinness to health-related or religious ideas (eg spiritual purity, ascetic self-denial). In fact, the cycle may even be started by involuntary weight loss due to physical illness. While fasting and exercise are initially aversive, they gradually become rewarding – even addictive – as the starvation response kicks in. At the same time, restricting behaviors that used to be deliberate become increasingly automatic, habitual, and difficult to interrupt (Dwyer et al, 2001; Guarda et al, 2015; Lock & Kirz, 2013; McGuire & Troisi 1998). The self-starvation cycle plays a crucial role in the onset of anorexia.

Increased physical activity is a key component of the starvation response in many animal species; in general, its function is to prompt exploration and extend the foraging range when food is scarce. This response is so ingrained that animals subjected to food restriction in conditions that allow physical activity often starve themselves to death through strenuous exercise (Fessler, 2002; Guarda et al, 2015; Scheurink et al, 2010). In humans, pride is a powerful additional reward of self-starvation – achieving extraordinary levels of thinness and self-control makes many anorexic patients feel special and superior (Allan & Goss, 2012). The starvation response also brings about some psychological changes that further contribute to reinforce the cycle. In particular, starvation dramatically interferes with executive flexibility/shifting, and patterns of behavior become increasingly rigid and inflexible. The balance between local and global processing is also shifted toward local details. This may contribute to common body image distortions in anorexia, as when patients focus obsessively on a specific body part (eg the neck or hips) but perceive themselves as globally overweight (Pender et al, 2014; Westwood et al, 2016).

The self-starvation cycle has been documented across time and cultures, including non-Western ones. In modern Western societies, concerns with fat and thinness are the main reason for weight loss and probably explain the moderate rise of AN incidence in the second half of the 20th century. However, cases of self-starvation with spiritual and religious motivations have been common in Europe at least since the Middle Ages (and include several Catholic saints, most famously St. Catherine of Siena). In some Asian cultures, digestive discomfort is often cited as the initial reason for restricting food intake, but the resulting syndrome has essentially the same symptoms as anorexia in Western countries (Bell, 1984; Brumberg, 1989; Culbert et al, 2015; Keel & Klump, 2003). The DSM-5 criteria for anorexia include fear of gaining weight as a diagnostic requirement; for this reason, most historical and non-Western cases would not be diagnosed as AN within the current system. However, the present emphasis on thinness is likely a contingent sociohistorical fact and does not seem to represent a necessary feature of the disorder (Keel & Klump, 2003).

My anorexic patients sometimes complain of being forced into this mold. They’ll try to go to therapy for their inability to eat a reasonable amount of food, and their therapist will want to spend the whole time talking about their body image issues. When they complain they don’t really have body image issues, they’ll get accused of repressing it. Eventually they’ll just say “Yeah, whatever, I secretly wanted to be a ballerina” in order to make the therapist shut up and get to the part where maybe treatment happens.

The clear weak part of this theory is the explanation of the “self-starvation cycle”. Aside from a point about animals sometimes having increased activity to go explore for food, it all seems kind of tenuous.

And how come most people who starve never get anorexia? How come sailors who ran out of food halfway across the Pacific, barely made it to some tropical island, and gorged themselves on coconuts didn’t end up anorexic? Donner Party members? Concentration camp survivors? Is there something special about voluntary starvation? Some kind of messed-up learning process?

I am interpreting the point to be something along the lines of “Suppose for some people with some unknown pre-existing vulnerability, starving themselves voluntarily now flips some biological switch which makes them starve themselves involuntarily later”.

Framed like this, it sounds more like a description of anorexia than a theory about it (though see here for an attempt to flesh this out). But it’s a description which captures part of the disease that a lot of other models don’t, and which brings some things into clearer relief, and I am grateful to have it.


Diametrical Model Of Autism And Schizophrenia


One interesting thing I took from Evolutionary Psychopathology was a better understanding of the diametrical theory of the social brain.

There’s been a lot of discussion over whether schizophrenia is somehow the “opposite” of autism. Many of the genes that increase risk of autism decrease risk of schizophrenia, and vice versa. Autists have a smaller-than-normal corpus callosum; schizophrenics have a larger-than-normal one. Schizophrenics smoke so often that some researchers believe they have some kind of nicotine deficiency; autists have unusually low smoking rates. Schizophrenics are more susceptible to the rubber hand illusion and have weaker self-other boundaries in general; autists seem less susceptible and have stronger self-other boundaries. Autists can be pathologically rational but tend to be uncreative; schizophrenics can be pathologically creative but tend to be irrational. The list goes on.

I’ve previously been skeptical of this kind of thinking because there are many things that autists and schizophrenics have in common, many autistics who seem a bit schizophrenic, many schizophrenics who seem a bit autistic, and many risk factors shared by both conditions. But Del Giudice, building on work by Badcock and Crespi, presents the “diametrical model”: schizophrenia and autism are the failure modes of opposing sides of a spectrum from high-functioning schizotypy to high-functioning autism, ie from overly mentalistic cognition to overly mechanistic cognition.

Schizotypy is a combination of traits that psychologists have discovered often go together. It’s classified as a personality disorder in the DSM. But don’t get too caught up on that term – it’s a disorder in the same sense as narcissistic or antisocial tendencies, and like those conditions, some schizotypals do very well for themselves. Classic schizotypal traits include tendency toward superstition, disorganized communication, and nonconformity (if it sounds kind of like “schizophrenia lite”, that’s not really a coincidence).

Typically schizotypals are supposed to be paranoid and reclusive, the same as schizophrenics. But the diametrical model tries to downplay this in favor of noting that some schizotypals are unusually charismatic and socially successful. I am not exactly sure where they’re getting this from, but I cannot deny knowing several extremely charismatic people with a lot of schizotypal traits. Sometimes these people end up as “cult leaders” – not necessarily literally, but occupying that same niche of strange people who others are drawn toward for their unusually confident and otherworldly nature. Some of the people I know in this category have schizophrenic first-degree relatives, meaning they’re probably pretty loaded with schizotypal genes.

Schizotypals, according to the theory, have overly mentalistic cognition. Their brains are hard-wired for thinking in ways that help them understand minds and social interactions. When this succeeds, it looks like an almost magical understanding into what other people are secretly thinking, what their agendas are, and how to manipulate them. When it fails, it fails as animism and anthropomorphism: “I wonder what the universe is trying to tell me by making it rain today”. Or it fails as paranoia through oversensitivity to social cues: “I just saw him twitch his eye muscle slightly, which can sometimes mean he’s not interested in what I’m saying, and in the local status game that could mean that he doesn’t think I’m important enough, and that implies he might think he’s better than me and I’m expendable…”

Autism, then, would be the opposite of this. It’s overly mechanistic cognition, thinking in terms of straightforward logic and the rules of the physical world. Autistic people don’t make the mistake of thinking the universe is secretly trying to tell them something. On the other hand, after several times trying to invite a slightly autistic woman I had a crush on to things, telling her how much I liked her, petting her hair, etc, she still hadn’t figured out I was trying to date her until I said explicitly “I AM TRYING TO DATE YOU”. So not believing that you are secretly being told things has both upsides and downsides.

Autistic people are sometimes accused of looking for a set of rules that will help them understand people, or the secret cheat code that will make people give them what they want. I imagine an autistic person asking something like “What is the alternative?” This is the kind of thought process that usually works on stuff: figure out the rules that govern something, find a way to exploit them, and boom, you’ve landed a rocket on the moon. How are they supposed to know that human interaction is a bizarre set of layered partial-information games that you’re supposed to solve by looking at someone’s eye muscle twitches and concluding they’re going to steamroll over you to get a promotion at work?

Is this true? There’s…not great evidence for it. I’ve never seen any studies. There’s certainly a stereotype that brilliant engineers are not necessarily the most socially graceful people. But I know a lot of people who combine excellent technical skills with excellent social skills, and other people who are failures in both areas. So probably the best that can be said about this theory is that it would be a really neat way to explain the patterns of similarities and differences between schizophrenia and autism.

In this theory, both high-functioning autism (being good at mechanistic cognition) and high-functioning schizotypy (being good at mentalistic cognition) may be good things to have. But the higher your mutational load is – the less healthy your brain, and the fewer resources it has to bring to the problem – the less well it is able to control these powerful abilities. A schizotypal brain that cannot keep its mentalistic cognition yoked to reality dissolves into schizophrenia, completely losing the boundary between Self and Other into a giant animistic universe of universal significance and undifferentiated Mind. An autistic brain that cannot handle the weight of its mechanistic cognition becomes unable to do even the most basic mental tasks like identify and cope with its own emotions. And because in practice we’re talking about shifts in the complicated computational parameters that determine our thoughts and personalities, rather than the thoughts and personalities directly, both of these conditions have a host of related sensory and cognitive symptoms that aren’t quite directly related.

So here the reason why autism and schizophrenia seem both opposite and similar to each other is because they’re opposite (in the sense of being at two ends of a spectrum), and similar (in the sense that the same failure mode of high mutational load and low “mental resources” will cause both).

If you’re thinking “it sounds like someone should do a principal components analysis on this”, then Science has your back (paper, popular article). They find that:

Consistent with previous research, autistic features were positively associated with several schizotypal features, with the most overlap occurring between interpersonal schizotypy and autistic social and communication phenotypes. The first component of a principal components analysis (PCA) of subscale scores reflected these positive correlations, and suggested the presence of an axis (PC1) representing general social interest and aptitude. By contrast, the second principal component (PC2) exhibited a pattern of positive and negative loadings indicative of an axis from autism to positive schizotypy, such that positive schizotypal features loaded in the opposite direction to core autistic features.
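The quoted two-axis structure can be illustrated on synthetic data. The sketch below is purely hypothetical (the latent factors, loadings, and subscale names are my assumptions, not the study's data): it simulates four subscale scores from a shared "social aptitude" factor plus a diametrical autism-vs-positive-schizotypy factor, then recovers both axes with a covariance-based PCA.

```python
import numpy as np

# Hypothetical generative model (NOT the paper's data): two latent factors
# drive four subscales, plus independent noise.
rng = np.random.default_rng(0)
n = 500
g = rng.normal(scale=2.0, size=n)   # general social interest/aptitude (should emerge as PC1)
d = rng.normal(size=n)              # diametrical autism-vs-schizotypy axis (should emerge as PC2)
noise = lambda: rng.normal(scale=0.5, size=n)

scores = np.column_stack([
    -g + d + noise(),          # autistic social phenotype
    -g + d + noise(),          # autistic communication phenotype
    -g + 0.2 * d + noise(),    # interpersonal schizotypy
    -0.3 * g - d + noise(),    # positive schizotypy
])

# PCA via eigendecomposition of the covariance matrix
cov = np.cov(scores, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)     # eigh returns ascending order
order = np.argsort(eigvals)[::-1]
loadings = eigvecs[:, order]               # column k holds the loadings of PC(k+1)

pc1, pc2 = loadings[:, 0], loadings[:, 1]
# PC1: autistic and schizotypal subscales load with the same sign (shared social axis)
same_sign_pc1 = np.sign(pc1[0]) == np.sign(pc1[2])
# PC2: positive schizotypy loads opposite to the core autistic subscales (diametrical axis)
opposite_pc2 = np.sign(pc2[0]) != np.sign(pc2[3])
print(same_sign_pc1, opposite_pc2)
```

Because eigenvector signs are arbitrary, only the relative signs of the loadings within each component are meaningful, which is exactly what the quoted passage is describing.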

In keeping with this theory, studies find that first-degree relatives of autists have higher mechanistic cognition, and first-degree relatives of schizophrenics have higher mentalistic cognition and schizotypy. Autists’ relatives tend to have higher spatial compared to verbal intelligence, versus schizophrenics’ relatives, who tend to have higher verbal compared to spatial intelligence. High-functioning schizotypals and high-functioning autists have normal (or high) IQs, no unusual number of fetal or early childhood traumas, and the usual amount of bodily symmetry; low-functioning autists and schizophrenics have low IQs, an increased history of fetal and early childhood traumas, and increased bodily asymmetry indicative of mutational load.

If men have much more autism than women, shouldn’t women have much more schizophrenia than men? You’d think so, but actually men have more. But men have greater variability in general, which means they’re probably more likely to satisfy the high mutational load criterion. So maybe we should instead predict that women should have higher levels of high-functioning schizotypy. (Studies show women do have more “positive schizotypy”, the sort being discussed here, but lower “negative schizotypy”, a sort linked to the negative symptoms of schizophrenia.)

Something that bothered me while I was writing this: famous mathematician John Nash was schizophrenic. Isn’t that kind of weird if schizophrenia is about an imbalance in favor of verbal/personal and against logical/mathematical thinking?

There are exceptions to everything, and we probably shouldn’t make too much of one case. But I find it striking that Nash’s work was in game theory: essentially a formalization of social thinking, and centered around the sort of paranoid social thinking of figuring out what to do about how other people might be out to get you. This is probably just a coincidence, but it’s pretty funny.

Psychiat-List Now Up


Lots of people have asked me to recommend them a psychiatrist or therapist. I’ve done a terrible job responding: it’s a conflict of interest to recommend my own group, and I don’t know many people outside of it.

So now I’ve put together a list (by which I mostly mean blatantly copied a similar list made by fellow community member Anisha M) of mental health professionals whom members of the rationalist community have had good experiences with. So far it’s short and mostly limited to the Bay Area. You can find it at the “Psychiat-List” button on the top of the blog, or at this link.

My hope is to crowd-source additional recommendations to expand the list to more providers and cities. Please let me know, either on this post or on the comments to the list itself, if you have any extra recommendations to add – especially if you’re in a city likely to have many other SSC readers. Please also let me know if you’ve had any positive or negative experiences with people already on the list, so I can change their status accordingly.

Survey Results On SSRIs


SSRIs are the most widely used class of psychiatric medications, helpful for depression, anxiety, OCD, panic, PTSD, anger, and certain personality disorders (Why should the same drug treat all these things? Great question!) They’ve been pretty thoroughly studied, but there’s still a lot we don’t understand about them.

The SSC Survey is less rigorous than most existing studies, but its many questions and very high sample size provide a different tool to investigate some of these issues. I asked fifteen questions about SSRIs on the most recent survey and received answers from 2,090 people who had been on SSRIs. The sample included people on all six major SSRIs, but there were too few people on fluvoxamine (15) to have reliable results, so it was not included in most comparisons. Here’s what we found:

1. Do SSRIs work?

People seem to think so:

Made me feel much worse: 6%
Made me feel slightly worse: 7.4%
No net change in how I felt: 23.7%
Made me feel slightly better: 41.4%
Made me feel much better: 21.4%

Of course, these statistics include the placebo effect and so cannot be taken entirely at face value.
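One back-of-the-envelope way to summarize these five categories is to collapse them onto the survey's own -2 ("much worse") to +2 ("much better") scale; this is my rough summary calculation, not a statistic from the survey itself, and it still includes any placebo response.

```python
# Survey response shares mapped onto the -2..+2 scale used elsewhere in the survey
responses = {-2: 0.060, -1: 0.074, 0: 0.237, 1: 0.414, 2: 0.214}

mean_effect = sum(score * share for score, share in responses.items())
print(round(mean_effect, 2))  # → 0.65: the average answer lands between
                              # "no net change" and "slightly better"
```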

2. Do some SSRIs work better than others?

I asked people to rate their experience with the medication, on a scale from 1 to 10. Here were the results:

Lexapro (356): 5.7
Zoloft (470): 5.6
Prozac (339): 5.5
Celexa (233): 5.4
Paxil (126): 4.6

Paxil differed significantly from the others; the others did not differ significantly among themselves. In a second question where participants were just asked to rate their SSRIs from -2 (“made me feel much worse”) to +2 (“made me feel much better”), the ranking was preserved, and Lexapro also separated from Celexa.

This ranking correlates at r = 0.98 (!?!) with my previous study of this taken from drugs.com ratings.
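For the record, a correlation like this is just an ordinary Pearson r across the five drugs' mean ratings. A minimal sketch of the calculation follows; the second rating column is an illustrative placeholder I made up, not the actual drugs.com numbers.

```python
import numpy as np

# Survey means from the list above
survey = {"Lexapro": 5.7, "Zoloft": 5.6, "Prozac": 5.5, "Celexa": 5.4, "Paxil": 4.6}
# Placeholder second rating set with the same ordering -- NOT the real drugs.com data
other = {"Lexapro": 7.8, "Zoloft": 7.6, "Prozac": 7.5, "Celexa": 7.3, "Paxil": 6.4}

drugs = list(survey)
x = np.array([survey[d] for d in drugs])
y = np.array([other[d] for d in drugs])
r = np.corrcoef(x, y)[0, 1]   # Pearson correlation across the five drugs
print(round(r, 2))
```

With only five data points, an r this high is fragile; a single drug that stands apart from the pack (here Paxil) can dominate the estimate.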

I don’t generally hear that Paxil is less effective than other SSRIs, but I have heard that it causes worse side effects. The survey question (probably wrongly) encouraged people to rate side effects as “negative efficacy”. My guess is that the difference here is mostly driven by side effects.

3. Do SSRIs work better for anxiety than for depression?

I’ve heard a few people mention this, and it makes sense as one reason why they remain so popular among patients and doctors while rarely producing large effects on specialized depression tests.

On the same scale as above:

Anxiety (391): 5.9
OCD (24): 5.8
Depression (1203): 5.3
Panic (26): 5.5
Anger (26): 5.2

There is a pretty strong effect in favor of anxiety over depression. There were not enough OCD, panic, or anger patients to get a clear picture of where those fell in relation to the other two. As far as I know this is the first study to back this claim up. But since I didn’t directly ask about dose, we can’t rule out that doctors give higher doses for anxiety and higher doses work better.

4. How many people experience side effects on SSRIs?

70% of people taking the drugs had at least one of the side effects on the list below:

(on this list, mild is exclusive of severe. So for example if 10% of people had mild side effects and 5% of people had severe side effects, a total of 15% of people had at least mild side effects)

SEXUAL DIFFICULTIES:
Severe: 11%
Mild: 41%
None: 48%

EMOTIONAL BLUNTING:
Severe: 8%
Mild: 31%
None: 61%

FATIGUE:
Severe: 6%
Mild: 18%
None: 76%

COGNITIVE DIFFICULTIES:
Severe: 3%
Mild: 16%
None: 81%

MADE DEPRESSION WORSE:
Severe: 2%
Mild: 5%
None: 93%
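A quick consistency check on the 70% figure, using only the "None" rates listed above: if the five side effects struck independently, the share of people with at least one would come out higher than observed. This is my own back-of-the-envelope calculation, not something from the survey writeup.

```python
# Share of respondents reporting "None" for each side effect (from the lists above)
none_rates = {
    "sexual difficulties": 0.48,
    "emotional blunting": 0.61,
    "fatigue": 0.76,
    "cognitive difficulties": 0.81,
    "worsened depression": 0.93,
}

# Under independence, P(no side effects at all) is the product of the "None" rates
p_none_all = 1.0
for p in none_rates.values():
    p_none_all *= p
p_any = 1 - p_none_all
print(round(p_any, 2))  # → 0.83 predicted under independence, vs. 70% observed
```

The independence prediction (~83%) exceeds the observed 70%, hinting that side effects cluster in the same respondents rather than being spread evenly across people.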

The more recently someone took the SSRI, the more side effects they were likely to have. While I can imagine innocent explanations for this, the most likely is recall bias: after a while, people forgot about some side effects. The real numbers are probably a little higher than this.

Most people’s side effects went away quickly after stopping the SSRI, but 15% of people who stopped the medication more than five years ago said their side effects never went away. Although post-SSRI sexual dysfunction is sort of known to the psychiatric consensus, this is a shockingly high number, which doesn’t seem consistent with how little you hear about it; I’m not sure what to think. The survey wasn’t really designed to ask which side effects these were, but just eyeballing the individual entries it looks like mostly sexual, with a small amount of emotional blunting thrown in. But these are just the two most common side effects, so it doesn’t necessarily mean these two are more persistent than others.

5. Do some SSRIs produce worse side effects than others?

I think the psychiatric consensus on this question is that Paxil has worse side effects than the others, which are all equal.

This survey failed to directly replicate that. Four of the five side effects elicited (sexual difficulties, fatigue, emotional blunting, cognitive problems) were the same across all drugs. The only one that differed was worsened depression, which was slightly less common on Zoloft. This was technically significant but given the number of tests I would not put too much stock in this.

I forgot to ask about a few important side effects, including weight gain. I suspect that Paxil scored worst on the “overall” category because it produced worse side effects in the categories I forgot to ask about, or because people remembered it had side effects but couldn’t remember exactly what they were. Overall this survey doesn’t really make me doubt the consensus that it is probably worst.

6. How many people have trouble discontinuing SSRIs?

Good question! This has been a topic of interminable debate in the medical community, with some saying these problems are very common and others saying they are very rare. This survey found:

59% don’t remember having any issues at all
22% remember having a few minimal issues but not really thinking about them
14% remember having moderate issues that caused significant distress but were not disabling
5% remember having severe issues that seriously impacted quality of life

7. What factors made SSRI discontinuation easier or harder?

Hard to tell.

Everyone believes that a more gradual taper makes things easier, but the survey quite clearly found that people who reported longer tapers had worse problems. I’m pretty sure this is because if their doctor expected them to have problems (or they started tapering and did have problems), they were put on a longer taper. But this kind of thing makes it hard to make any real recommendations. However, people who came off their medication accidentally because they ran out did have by far the worst time, suggesting that cold-turkey discontinuation really isn’t the way to go.

The longer a person had been on SSRIs, the harder their taper was likely to be. However, I’ve heard some people give overly dire warnings like “If you’re on these drugs for more than five years, don’t try coming off”. These were not justified. Even among people who were on the medication over five years, 49% tapered with “no issues”, and only 15% reported severe issues.

People with anxiety and OCD reported more difficult discontinuation than people with depression. This could be either because these conditions require higher doses, or because people with anxiety and OCD are more likely to notice and worry about minor symptoms.

Psychiatric consensus says that Paxil is the hardest SSRI to get off, and Prozac is the easiest. This survey confirmed that result. On a scale from 0 – 10, where 0 is the easiest discontinuation and 10 is the most difficult:

Paxil: 2.8
Lexapro: 2.7
Zoloft: 2.2
Celexa: 2.1
Prozac: 1.5

Prozac separated from Zoloft, Lexapro, and Paxil, but not from Celexa. The average person had only a 59% chance of having no discontinuation symptoms; the average person on Prozac had a 71% chance.

In any case, almost everybody’s taper was successful eventually. Only 0.5% of people said they gave up and stayed on the SSRI because they found discontinuation too difficult.

Summary

This survey had few surprises.

When giving someone an SSRI, I already debate between Lexapro and Prozac. Lexapro is usually the most effective (by a tiny hair), but Prozac is the least likely to cause discontinuation syndrome. Although a natural strategy might be to taper Lexapro (or something else) very slowly in order to mimic Prozac’s long half-life, studies show this doesn’t work (why not?), and this survey confirms it. There is no obvious right answer between these two as first-choice SSRI. This study does confirm my prejudice that giving Paxil is an obvious wrong answer and you should never do it outside specific rare circumstances.

The main surprise was the high number of people who claim their SSRI side effects never went away. Although this is a known very rare possibility, the survey suggested it was much less rare. One can imagine innocent ways this could happen: for example, someone goes on SSRIs for ten years, comes off, and is surprised to find their sex drive is lower than it was when they were a teenager ten years ago. This probably requires more careful and rigorous study than can be done in a silly online survey – and so will probably never happen.

Ketamine: Now By Prescription


Last week the FDA approved esketamine for treatment-resistant depression.

Let’s review how the pharmaceutical industry works: a company discovers and patents a potentially exciting new drug. They spend tens of millions of dollars proving safety and efficacy to the FDA. The FDA rewards them with a 10ish year monopoly on the drug, during which they can charge whatever ridiculous price they want. This isn’t a great system, but at least we get new medicines sometimes.

Occasionally people discover that an existing chemical treats an illness, without the chemical having been discovered and patented by a pharmaceutical company. In this case, whoever spends tens of millions of dollars proving it works to the FDA may not get a monopoly on the drug and the right to sell it for ridiculous prices. So nobody spends tens of millions of dollars proving it works to the FDA, and so it risks never getting approved.

The usual solution is for some pharma company to make some tiny irrelevant change to the existing chemical, and patent this new chemical as an “exciting discovery” they just made. Everyone goes along with the ruse, the company spends tens of millions of dollars pushing it through FDA trials, it gets approved, and they charge ridiculous prices for ten years. I wouldn’t quite call this “the system works”, but again, at least we get new medicines.

Twenty years ago, people noticed that ketamine treated depression. Alas, ketamine already existed – it’s an anaesthetic and a popular recreational drug – so pharma companies couldn’t patent it and fund FDA trials, so it couldn’t get approved by the FDA for depression. A few renegade doctors started setting up ketamine clinics, where they used the existing approval of ketamine for anaesthesia as an excuse to give it to depressed people. But because this indication was not FDA-approved, insurance companies didn’t have to cover it. This created a really embarrassing situation for the medical system: everyone secretly knows ketamine is one of the most effective antidepressants, but officially it’s not an antidepressant at all, and mainstream providers won’t give it to you.

The pharmaceutical industry has lobbyists in Heaven. Does this surprise you? Of course they do. A Power bribed here, a Principality flattered there, and eventually their petitions reach the ears of God Himself. This is the only possible explanation for stereochemistry, a quirk of nature where many organic chemicals come in “left-handed” and “right-handed” versions. The details don’t matter, beyond that if you have a chemical that you can’t patent, you can take the left-handed (or right-handed) version, and legally pretend that now it is a different chemical which you can patent. And so we got “esketamine”.

Am I saying that esketamine is just a sinister ploy by pharma to patent and make money off ketamine? Yup. In fact “esketamine” is just a cutesy way of writing the chemical name s-ketamine, which literally stands for “sinister ketamine” (sinister is the Latin word for “left-handed”; the modern use derives from the old superstition that left-handers were evil). The sinister ploy to patent sinister ketamine worked, and the latest news says it will cost between $590 and $885 per dose.

(regular old ketamine still costs about $10 per dose, less if you buy it from a heavily-tattooed man on your local street corner)

I’ve said it before: I don’t blame the pharma companies for this. Big Government, in its infinite wisdom, has decided that drugs should have to undergo tens of millions of dollars worth of FDA trials before they get approved. No government agencies or altruistic billionaires have stepped up to fund these trials themselves, so they won’t happen unless some pharma company does it. And pharma companies aren’t going to do it unless they can make their money back. And it’s not like they’re overcharging; their return on investment in R&D may already be less than zero. This is a crappy system – but again, it’s one that occasionally gets us new medicines. So it’s hard to complain.

But in this case, there are two additional issues that make it even worse than the usual serving of crappiness.

First, esketamine might not work.

Johnson & Johnson, the pharma company sponsoring its FDA application, did four official efficacy studies. You can find the summary starting on page 17 of this document. Two of the trials were technically negative, although analysts have noticed nontechnical ways they look encouraging. Two of the trials were technically positive, but one of them was a withdrawal trial that was not really designed to prove efficacy.

The FDA usually demands two positive studies before they approve a drug, and doesn’t usually count withdrawal trials. This time around, in a minor deviation from their usual rules, they decided to count the positive withdrawal trial as one of the two required positives, and approve esketamine. I suspect this was a political move based on how embarrassing it was to have everyone know ketamine was a good antidepressant, but not have it officially FDA-approved.

But if ketamine is such a good antidepressant, how come it couldn’t pass the normal bar for approval? Like, people keep saying that ketamine is a real antidepressant, that works perfectly, and changes everything, unlike those bad old SSRIs which are basically just placebo. But esketamine’s results are at least as bad as any SSRI’s. If you look at Table 9 in the FDA report, ketamine did notably worse than most of the other antidepressants the FDA has approved recently – including vortioxetine, an SSRI-like medication.

One possibility is that ketamine was studied for treatment-resistant depression, so it was only given to the toughest cases. But Table 9 shows olanzapine + fluoxetine doing significantly better than esketamine even for treatment-resistant depression.

Another possibility is that clinical trials are just really tough on antidepressants for some reason. I’ve mentioned this before in the context of SSRIs. Patients love them. Doctors love them. Clinical trials say they barely have any effect. Well, now patients love ketamine. Doctors love ketamine. And now there’s a clinical trial showing barely any effect. This isn’t really a solution to esketamine’s misery, but at least it has company.

Another possibility is that everyone made a huge mistake in using left-handed ketamine, and it’s right-handed ketamine that holds the magic. Most previous research was done on a racemic mixture (an equal mix of left-handed and right-handed molecules), and at least one study suggests it was the right-handed ketamine that was driving the results. Pharma decided to pursue left-handed ketamine because it was known to have a stronger effect on NMDA receptors, but – surprise! – ketamine probably doesn’t work through NMDA after all. So there’s a chance that this is just the wrong kind of ketamine – though usually I expect big pharma to be smarter than that, and I would be surprised if this turned out to be it. I don’t know if anybody has a right-handed ketamine patent yet.

And another possibility is that it’s the wrong route of administration. Almost all previous studies on ketamine have examined it given IV. The FDA approved esketamine as a nasal spray – which is a lot more convenient for patients, but again, not a lot of studies showing it works. At least some studies seem to show that it doesn’t. Again, usually I expect big pharma not to screw up the delivery method, but who knows?

Second in our litany of disappointments, esketamine is going to be maximally inconvenient to get.

The big problem with regular ketamine, other than not being FDA-approved, was that you had to get it IV. That meant going to a ketamine clinic that had nurses and anesthesiologists for IV access, then sitting there for a couple of hours hallucinating while they infused it into you. This was a huge drawback compared to eg Prozac, where you can just bring home a pill bottle and take one pill per day in the comfort of your own bathroom. It’s also expensive – clinics, nurses, and anesthesiologists don’t come cheap.

The great appeal of a ketamine nasal spray was that it was going to prevent all that. Sure, it might not work. Sure, it would be overpriced. But at least it would be convenient!

The FDA, in its approval for esketamine, specified that it could only be delivered at specialty clinics by doctors who are specially trained in ketamine administration, that patients will have to sit at the clinic for at least two hours, and realistically there will have to be a bunch of nurses on site. My boss has already said our (nice, well-funded) clinic isn’t going to be able to jump through the necessary hoops; most other outpatient psychiatric clinics will probably say the same.

This removes most of the advantages of having it be intranasal, so why are they doing this? They give two reasons. First, they want to make sure no patient can ever bring ketamine home, because they might get addicted to it. Okay, I agree addiction is bad. But patients bring prescriptions of OxyContin and Xanax home every day. Come on, FDA. We already have a system for drugs you’re worried someone will get addicted to, it’s called the Controlled Substances Act. Ketamine is less addictive than lots of chemicals that are less stringently regulated than it is. This just seems stupid and mean-spirited.

The other reason the drugs have to be given in a specially monitored clinic is because ketamine can have side effects, including hallucinations and dissociative sensations. I agree these are bad, and I urge patients only to take hallucinogens/dissociatives in an appropriate setting, such as a rave. Like, yeah, ketamine can be seriously creepy, but now patients are going to have to drive to some overpriced ketamine clinic a couple of times a week and sit there for two hours per dose just because you think they’re too frail to handle a dissociative drug at home?

I wanted to finally be able to prescribe ketamine to my patients who needed it. Instead, I’m going to have to recommend they find a ketamine clinic near them (some of my patients live hours from civilization), drive to it several times a week (some of my patients don’t have cars) and pay through the nose, all so that some guy with a postgraduate degree in Watching People Dissociate can do crossword puzzles while they sit and feel kind of weird in a waiting room. And then those same patients will go home and use Ecstasy. Thanks a lot, FDA.

And the cherry on the crap sundae is that this sets a precedent. If the FDA approves psilocybin for depression (and it’s currently in Phase 2 trials, so watch this space!) you can bet you’re going to have to go to a special psilocybin clinic if you want to get it. Psychedelic medicine is potentially the future of psychiatry, and there’s every indication that it will be as inconvenient and red-tape-filled a future as possible. If you thought it was tough getting your Adderall prescription refilled every month, just wait.

So far, I am continuing to recommend that my patients who want ketamine seek intravenous racemic ketamine at an existing ketamine clinic, since this has a stronger evidence base. Once insurance starts covering esketamine, I may change my mind if money becomes an issue. But I’m annoyed that it’s come to this.

Translating Predictive Coding Into Perceptual Control

Wired has a good article by Shaun Raviv about Karl Friston, the neuroscientist whose work I’ve puzzled over here before. Raviv writes:

Friston’s free energy principle says that all life…is driven by the same universal imperative…to act in ways that reduce the gulf between your expectations and your sensory inputs. Or, in Fristonian terms, it is to minimize free energy.

Put this way, it’s clearly just perceptual control theory. Powers describes the same insight like this:

[Action] is the difference between some condition of the situation as the subject sees it, and what we might call a reference condition, as he understands it.

I’d previously noticed that these theories had some weird similarities. But I want to go further and say they’re fundamentally the same paradigm. I don’t want to deny that the two theories have developed differently, and I especially don’t want to deny that free energy/predictive coding has done great work building in a lot of Bayesian math that perceptual control theory can’t match. But the foundations are the same.

Why is this of more than historical interest? Because some people (often including me) find free energy/predictive coding very difficult to understand, but find perceptual control theory intuitive. If these are basically the same, then someone who wants to understand free energy can learn perceptual control theory, plus a glossary mapping the concepts of one onto the other, and save themselves the grief of trying to learn free energy/predictive coding by reading Friston directly.

So here is my glossary:

FE/PC: prediction, expectation
PCT: set point, reference level

And…

FE/PC: prediction error, free energy
PCT: deviation from set point

So for example, suppose it’s freezing cold out, and this makes you unhappy, and so you try to go inside to get warm. FE/PC would describe this as “You naturally predict that you will be a comfortable temperature, so the cold registers as strong prediction error, so in order to minimize prediction error you go inside and get warm.” PCT would say “Your temperature set point is fixed at ‘comfortable’, the cold marks a wide deviation from your temperature set point, so in order to get closer to your set point, you go inside”.

The PCT version makes more sense to me here because the phrase “you naturally predict that you will be a comfortable temperature” doesn’t match any reasonable meaning of “predict”. If I go outside in Antarctica, I am definitely predicting I will be uncomfortably cold. FE/PC obviously means to distinguish between a sort of unconscious neural-level “prediction” and a conscious rational one, but these kinds of vocabulary choices are why it’s so hard to understand. PCT uses the much more intuitive term “set point” and makes the whole situation clearer.

FE/PC: surprise
PCT: deviation from set point

FE/PC says that “the fundamental drive behind all behavior is to minimize surprise”. This leads to questions like “What if I feel like one of my drives is hunger?” and answers like “Well, you must be predicting you would eat 2000 calories per day, so when you don’t eat that much, you’re surprised, and in order to avoid that surprise, you feel like you should eat.”

PCT frames the same issue as “You have a set point saying how many calories you should eat each day. Right now it’s set at 2000. If you don’t eat all day, you’re below your calorie set point, that registers as bad, and so you try to eat in order to minimize that deviation.”

And suppose we give you olanzapine, a drug known for making people ravenously hungry. The FE/PCist would say “Olanzapine has made you predict you will eat more, which makes you even more surprised that you haven’t eaten”. The PCTist would say “Olanzapine has raised your calorie set point, which means not eating is an even bigger deviation.”

Again, they’re the same system, but the PCT vocabulary sounds sensible whereas the FE/PC vocabulary is confusing.

FE/PC: Active inference
PCT: Behavior as control of perception

FE/PC talks about active inference, where “the stimulus does not determine the response, the response determines the stimulus” and “We sample the world to ensure our predictions become a self-fulfilling prophecy.” If this doesn’t make a lot of sense to you, you should read this tutorial, in order to recalibrate your ideas of how little sense things can make.

PCT talks about behavior being the control of perception. For example, suppose you are standing on the sidewalk, facing the road parallel to the sidewalk, watching a car zoom down that road. At first, the car is directly in front of you. As the car keeps zooming, you turn your head slightly right in order to keep your eyes on the car, then further to the right as the car gets even further away. Your actions are an attempt to “control perception”, ie keep your picture fixed at “there is a car right in the middle of my visual field”.

Or to give another example, when you’re driving down the highway, you want to maintain some distance between yourself and the car in front of you (the set point/reference interval, let’s say 50 feet). You don’t have objective yardstick-style access to this distance, but you have your perception of what it is. Whenever the distance becomes less than 50 feet, you slow down; whenever it becomes more than 50 feet, you speed up. So behavior (how hard you’re pressing the gas pedal) is an attempt to control perception (how far away from the other car you are).
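The driving example above is just a proportional control loop, and can be sketched in a few lines of code. This is my own illustration of the general shape of the idea, not anything from Powers; all the numbers and gains are made up:

```python
# A minimal perceptual control loop: behavior (a speed adjustment) exists
# only to keep a perceived quantity (following distance) at its reference.
# All numbers here are invented for illustration.

def control_step(perceived_distance, reference=50.0, gain=0.1):
    """Return a speed change that reduces deviation from the reference."""
    error = perceived_distance - reference  # deviation from set point
    return gain * error  # too far behind: speed up; too close: slow down

distance = 80.0  # start 80 feet behind the lead car
for _ in range(100):
    distance -= control_step(distance)  # acting shrinks the perceived error

# distance settles near the 50-foot reference
```

Note that the loop never needs objective access to the true distance; it only ever acts on the perception, which is the whole point of “behavior as control of perception”.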

FE/PC: The dark room problem
PCT: [isn’t confused enough to ever even have to think about this situation]

The “dark room problem” is a paradox in free energy/predictive coding formulations: if you’re trying to minimize surprise / maximize the accuracy of your predictions, why not just lie motionless in a dark room forever? After all, you’ll never notice anything surprising there, and as long as you predict “it will be dark and quiet”, your predictions will always come true. The main proposed solution is to claim you have some built-in predictions (of eg light, social interaction, activity levels), and the dark room will violate those.

PCT never runs into this situation. You have set points for things like social interaction, activity levels, food, sex, etc, that are greater than zero. In the process of pursuing them, you have to get out of bed and leave your room. There is no advantage to lying motionless in a dark room forever.

If the PCT formulation has all these advantages, how come everyone uses the FE/PC formulation instead?

I think this is because FE/PC grew out of an account of world-modeling: how do we interpret and cluster sensations? How do we form or discard beliefs about the world? How do we decide what to pay attention to? Here, words like “prediction”, “expectation”, and “surprise” make perfect sense. Once this whole paradigm and vocabulary was discovered, scientists realized that it also explained movement, motivation, and desire. They carried the same terminology and approach over to that field, even though now the vocabulary was actively misleading.

Powers was trying to explain movement, motivation, and desire, and came up with vocabulary that worked great for that. He does get into world-modeling, learning, and belief a little bit, but I was less able to understand what he was doing there, and so can’t confirm whether it’s the same as FE/PC or not. Whether or not he did it himself, it should be possible to construct a PCT look at world-modeling. But it would probably be as ugly and cumbersome as the FE/PC account of motivation.

I think the right move is probably to keep all the FE/PC terminology that we already have, but teach the PCT terminology along with it as a learning aid so people don’t get confused.

Short Book Reviews April 2019

I. Method of Levels

Timothy Carey’s Method Of Levels teaches a form of psychotherapy based on perceptual control theory.

The Crackpot List is specific to physics. But if someone were to create one for psychiatry, Method of Levels would score a perfect 100%. It somehow manages to do okay on the physics one despite not discussing any physics.

The Method of Levels is the correct solution to every psychological problem, from mild depression to psychosis. Therapists may be tempted to use something other than the Method of Levels, but they must overcome this temptation and just use the Method of Levels on everybody. Every other therapy is about dismissing patients as “just crazy”, but the Method of Levels tries to truly understand the patient. Every other therapy is about the therapist trying to change the patient, but the Method of Levels is about the patient trying to change themselves. The author occasionally just lapses into straight-up daydreams about elderly psychologists sitting on the porch, beating themselves up that they were once so stupid as to believe in psychology other than the Method of Levels.

This book isn’t just bad, it’s dangerous. One vignette discusses a patient whose symptoms clearly indicate the start of a manic episode. The author recommends that instead of stigmatizing this person with a diagnosis of bipolar or pumping them full of toxic drugs, you should use the Method of Levels on them. This is a good way to end up with a dead patient.

I like perceptual control theory. I share the author’s hope that it could one day be a theory of everything for the brain. But even if it is, you can’t use theories of everything to do clinical medicine. Darwin discovered a theory of everything for biology, but you can’t reason from evolutionary first principles to how to treat a bacterial infection. You should treat the bacterial infection with antibiotics. This will be in accordance with evolutionary principles, and there will even be some cool evolutionary tie-ins (fungi evolved penicillin as a defense against bacteria). But you didn’t discover penicillin by reasoning from evolutionary first principles. If you tried reasoning from evolutionary first principles, you might end up trying to make the bacteria mutate into a less dangerous strain during the middle of an osteomyelitis case or something. Just use actually existing clinical medicine and figure out the evolutionary justification for it later.

Or maybe a better metaphor is germ theory, a theory of everything specifically targeted to treatable diseases. But fifty years elapsed between Pasteur and penicillin, penicillin alone didn’t treat every germ, we still have some germs we can’t treat, and lots of things like cancer turned out not to be germs at all. You can’t jump straight from a theory of everything – even a good, correct theory of everything – to “now we have solved all problems and here’s the one technique for everything.”

On the other hand, most existing psychotherapy is placebo-ish, and first principles can sometimes be a useful guide. So as long as we are careful to dismiss the part where we throw out all existing medicine, and dismiss the part where we use this for patients having a manic episode, we can very tentatively look at the Method of Levels and what it suggests for patients having garden-variety psychological conflict.

Perceptual control theory says that minds primarily control perceptions. This is true on very low levels, like the hypothalamus controlling (its sensors’ perception of) temperature to 98.6 F. Theoretically it may be true on very high levels, like trying to control (perceived) social status or risk. If two control systems are accidentally trying to control the same variable at different levels, then both of them expend all their energy fighting each other and can’t control anything else. For example, if your house has one thermostat (with associated AC and heater) trying to keep the temperature at 65, and another thermostat (with its own associated AC and heater) trying to keep the temperature at 75, then one thermostat will keep the heat on all the time, the other will keep the AC on all the time, and the temperature will end up at 70 with a gigantic electrical bill.
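The dueling-thermostat deadlock is easy to simulate. A toy sketch of my own, with invented dynamics and numbers, just to show the shape of the failure:

```python
# Two control systems fighting over one shared variable. One "thermostat"
# controls toward 65, the other toward 75; each pushes proportionally to
# its own error. Dynamics and numbers are invented for illustration.

temp = 70.0
effort_a = effort_b = 0.0  # cumulative work done by each unit

for _ in range(1000):
    push_a = 0.5 * (65 - temp)  # first unit: pull temperature toward 65
    push_b = 0.5 * (75 - temp)  # second unit: pull temperature toward 75
    effort_a += abs(push_a)
    effort_b += abs(push_b)
    temp += push_a + push_b

# The pushes exactly cancel: temp never moves off 70, while both units'
# cumulative efforts grow without bound (the gigantic electrical bill).
```

Neither controller ever achieves its reference, and all the control effort that could be spent elsewhere is burned fighting the other system, which is exactly the signature MoL attributes to intrapsychic conflict.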

In the same way, MoL understands intrapsychic conflict as competing control systems. Suppose a gay man is living in a conservative household that stigmatizes homosexuality. He’s trying to control the amount of sex/romance he has at some level that keeps his libido happy. He’s also trying to control his community standing at some level that keeps his sociometer happy. These are conflicting goals; the more he pursues a relationship, the less the community will like him, and vice versa. He will probably feel conflicted inside and not know what to do.

PCT believes the brain has a natural reorganizing process that keeps control systems running smoothly. Powers’ description of this sounds a lot like how we think of learning in neural nets; the brain randomly changes neural weights in a specific control system, with changes that lower the control system’s error getting reinforced, until the system is running smoothly again. If there’s intrapsychic conflict, this reorganization process must not be working.
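As a toy illustration of that reorganization idea (my own sketch of the "random change, keep what reduces error" scheme, not Powers’ actual algorithm):

```python
import random

# Toy version of Powers-style "reorganization": randomly perturb a control
# parameter and keep the change only when the system's error shrinks.
# The error function and numbers are stand-ins of my own, not Powers'.

random.seed(0)  # deterministic for the example

def system_error(gain, best_gain=0.8):
    """Stand-in for how badly the control system is performing."""
    return abs(gain - best_gain)

gain = 0.1  # start far from the well-tuned value
for _ in range(2000):
    candidate = gain + random.uniform(-0.05, 0.05)  # random tweak
    if system_error(candidate) < system_error(gain):
        gain = candidate  # the tweak helped, so it is reinforced

# gain drifts toward the value that minimizes error (0.8 here)
```

This is just random hill-climbing, which is indeed close to how we think about learning in neural nets: undirected variation plus reinforcement of whatever happens to reduce error.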

MoL says the goal of therapy is to activate this reorganization process. The most likely reason it’s failing is that the patient is trying to reorganize the specific control systems that are in conflict, whereas what really needs to be reorganized is the higher-level control system that controls both of them. For example, our hypothetical gay patient shouldn’t be trying to reorganize his sex drive or his need for community belonging. He should be trying to reorganize some higher-level system that determines both of them, maybe his desire for a high quality of life. The “quality of life” control system determines the set point values for both the “sex drive” and the “need for community belonging” control systems, so if it could give them some value where they don’t conflict, the patient’s problem would be solved.

Reorganization is guided by awareness, so the therapist needs to move the patient’s awareness from the control systems that are experiencing the conflict, up to the higher-level control system that’s secretly producing the conflict. MoL’s suggestion is to talk about the conflict with the patient, and especially about the patient’s experience of talking about the conflict. So if the patient starts telling you about how he doesn’t know how to balance his homosexuality and his desire to fit in, you can prompt him to continue with questions like:

“What comes to mind when you think about not fitting in with your community?”
“What is your experience of wishing you could be more open about your sexuality like?”
“How does talking about this make you feel?”

Eventually the patient may have what the book calls an “up-a-level-event”, where instead of talking they look like they’re kind of lost in thought, or they close their eyes for a moment, or laugh nervously for no reason. At this point they’re becoming dimly aware of the higher level that’s guiding their lower-level conversations. The therapist should pounce on this and ask questions like:

“I see you closed your eyes for a moment just then. Is there something in particular you were thinking about?”
“You looked lost in thought for a second – why was that?”
“Can you tell me more about what made you laugh just then?”

The patient might then say something like “I was just thinking about how weird it was that I care so much about what my parents think about me when I don’t even respect them”, and then the therapist should keep going on this new topic. Now the patient’s awareness is on this higher level, and so the high-level control system can reorganize itself. Maybe eventually the reorganization that works is to give the pursue-your-sexuality system a higher set point, and the care-about-community system a lower set point, which looks like the patient deciding that he should not worry so much about what community members think about him.

You might have noticed from the first set of questions that this sounds a lot like what therapists do already. Carey does suggest that insofar as current therapies work, it’s because they’re already doing MoL-ish things. He suggests that his book offers more of an account of why they work, and a way to focus on the useful things instead of the chaff.

In particular, a lot of MoL – asking patients how they feel, trying to bring their awareness from the past content to the present process, worrying a lot about small gestures – sounds like psychodynamic therapy, at least the watered-down version of it most people today use. But it’s a lot more comprehensible than most attempts to teach psychodynamics, which never seem to hang together or have concrete suggestions for what you should do at any given time. If all this book ends up giving me is a way to do psychodynamics a little bit more cohesively, I’ll consider it worth my time.

I’m not sure how well its theoretical backing holds up. I always considered the very high-level claims to be the weakest part of perceptual control theory, and this book not only relies on them but goes several steps beyond them. The idea of the reorganizing process is an interesting one. But right now it’s got about as much empirical testing as, well, Freud. Still, some of the ideas discussed here seem lucid in a way Freud’s never were, so I’ll have to think about them more.

I wouldn’t recommend this book to anyone else right now: the first few chapters are so embarrassing, and so bad, that I wouldn’t trust people to discount them enough even if I warned them how important it was to do so.

II. How To Read Lacan

Why did I read How To Read Lacan by Slavoj Zizek?

I could answer this question on many levels. For example, the theological level: maybe I committed some sin in a past life. Maybe I was predestined to unhappiness. Maybe, having given me free will, God is no longer able to save me from my own bad choices.

On a more practical level: I’m trying to learn more about leftism, I’m trying to learn more about continental philosophy, and I’m trying to learn more about psychoanalysis. I figured I might as well get it all out of the way at once.

I was expecting this to be incomprehensible, but I was pleasantly surprised how good a writer Zizek was. He explains everything clearly, in down-to-earth prose interspersed with mildly funny Slovenian jokes that illustrate his points.

(Lacan himself is completely incomprehensible, to the point where he might as well be speaking Martian, but this book wisely avoided quoting Lacan except where absolutely necessary.)

Despite being very readable, this book never really came together. Each chapter consisted of a Lacan quote, followed by Zizek’s interpretations and thoughts. The thoughts were always things like “Sometimes the act of communication itself can communicate something” or “We are never truly engaged with another person, even during sex”. These are always kind of reasonable, Zizek always does a good job proving them and relating them to mildly funny Slovenian jokes, and I came away agreeing with all of them. But I don’t feel like I understand how any of them cohere together into an object called “Lacanianism”, and none of them really seemed like a very surprising revelation, which is one reason this doesn’t get a full book review.

My main takeaway from this is that I should forget Lacan and try to read Zizek directly. Does anyone have recommendations for good starting points?

III. The Steerswoman

The Steerswoman is popular in the rationalist community, and now I see why. The titular organization of steerswomen are a rationalist sect devoted to understanding the world around them. They especially like geography – going to the borders of the known world and filling in the edges of the map – but also just seek knowledge in general. Anyone can ask a steerswoman any question, and the steerswoman must answer. But everyone has to answer any question asked of them by a steerswoman, or else the organization blacklists them and no steerswoman will answer their questions ever again.

The steerswomen live in a not-very-fleshed-out medieval fantasy world surrounding an inland sea. Although there are standard fantasy governments like dukes and chieftains, real power is held by wizards. No one knows anything about them, not even how many of them there are, where they come from, or how they do their magic. The book centers on the inevitable conflict between the nosy steerswomen and the mysterious wizards, and particularly on one steerswoman and her Barbarian™ traveling companion who stumble across a wizardly secret.

This book is from the 80s and had a very 80s feel to it. Compared to more modern fantasy, it’s shorter and feels more bare-bones. There are no two hundred different characters to keep track of, no romantic subplots, no lavish description of random political things that happen in minor towns. Just a woman and her Barbarian™ friend going on a basic standard-issue quest, with the whole thing starting and finishing in less time than it would take George RR Martin to describe the minor clan that controls an out-of-the-way fortress.

Some people called this book feminist, but I found it refreshingly apolitical. Most (though not all) of the steerswomen are women, but the book got a relatively boring explanation out of the way quickly and didn’t come back to it. Most of the characters’ genders were not too important to their personality, and the book did not obsess over gender issues. There is a part in a utopian society where one of the men teases one of the women about how much he wants to have sex with her, and the woman laughs it off, and the man keeps teasing, and this is clearly meant to signal how the society is utopian and everyone is very open and friendly with each other. The 80s were simpler times.

I’ve only read the first book of this long series. Overall I found it fun, but didn’t feel like it spoke deeply to anything within me. The book’s rationalists were discussed shallowly enough that it feels like decent cheerleading for rationality, but nothing you can’t find somewhere else. Although the steerswomen’s question-answering gimmick was cute, I spent more time worrying about the holes in it (can’t you just get someone else to ask steerswomen your questions? how can a worldwide organization in a medieval society keep an effective blacklist? really the world-building here was not that good) than feeling like the real world needed something similar (after all, we have Google). I’ll probably try to read the next few books in the series and update this if it gets better.

5-HTTLPR: A Pointed Review

In 1996, some researchers discovered that depressed people often had an unusual version of the serotonin transporter gene 5-HTTLPR. The study became a psychiatric sensation, getting thousands of citations and sparking dozens of replication attempts (page 3 here lists 46).

Soon scientists moved beyond replicating the finding to trying to elucidate the mechanism. Seven studies (see here for list) found that 5-HTTLPR affected activation of the amygdala, a part of the brain involved in processing negative stimuli. In one especially interesting study, it was found to bias how the amygdala processed ambiguous facial expression; in another, it modulated how the emotional systems of the amygdala connected to the attentional systems of the anterior cingulate cortex. In addition, 5-HTTLPR was found to directly affect the reactivity of the HPA axis, the stress processing circuit leading from the adrenal glands to the brain.

As interest increased, studies began pointing to 5-HTTLPR in other psychiatric conditions as well. One study found a role in seasonal affective disorder, another in insomnia. A meta-analysis of twelve studies found a role (p = 0.001) in PTSD. A meta-analysis of twenty-three studies found a role (p = 0.000016) in anxiety-related personality traits. Even psychosis and Alzheimer’s disease, not traditionally considered serotonergic conditions, were affected. But my favorite study along these lines has to be 5-HTTLPR Polymorphism Is Associated With Nostalgia-Proneness.

Some people in bad life situations become depressed, and others seem unaffected; researchers began to suspect that genes like 5-HTTLPR might be involved not just in causing depression directly, but in modulating how we respond to life events. A meta-analysis looked at 54 studies of the interaction and found “strong evidence that 5-HTTLPR moderates the relationship between stress and depression, with the s allele associated with an increased risk of developing depression under stress (P = .00002)”. This relationship was then independently re-confirmed for every conceivable population and form of stress. Depressed children undergoing childhood adversity. Depressed children with depressed mothers. Depressed youth. Depressed adolescent girls undergoing peer victimization. They all developed different amounts of depression based on their 5-HTTLPR genotype. The mainstream media caught on and dubbed 5-HTTLPR and a few similar variants “orchid genes”, because orchids are sensitive to stress but will bloom beautifully under the right conditions. Stories about “orchid genes” made it into The Atlantic, Wired, and The New York Times.

If finding an interaction between two things is exciting, finding an interaction between even more things must be even better! Enter studies on how the interaction between 5-HTTLPR and stress in depressed youth itself interacted with MAO-A levels and gender. What about parenting style? Evidence That 5-HTTLPR x Positive Parenting Is Associated With Positive Affect “For Better And Worse”. What about decision-making? Gender Moderates The Association Between 5-HTTLPR And Decision-Making Under Uncertainty, But Not Under Risk. What about single motherhood? The influence of family structure, the TPH2 G-703T and the 5-HTTLPR serotonergic genes upon affective problems in children aged 10–14 years. What if we just throw all the interesting genes together and see what happens? Three-Way Interaction Effect Of 5-HTTLPR, BDNF Val66Met, And Childhood Adversity On Depression.

If 5-HTTLPR plays such an important role in depression, might it also have relevance for antidepressant treatment? A few studies of specific antidepressants started suggesting the answer was yes – see eg 5-HTTLPR Predicts Non-Remission In Major Depression Patients Treated With Citalopram and Influence Of 5-HTTLPR On The Antidepressant Response To Fluvoxamine In Japanese Depressed Patients. A meta-analysis of 15 studies found that 5-HTTLPR genotype really did affect SSRI efficacy (p = 0.0001). Does this mean psychiatrists should be testing for 5-HTTLPR before treating patients? A cost-effectiveness analysis says it does. There’s only one problem.

ALL.

OF.

THIS.

IS.

LIES.

Or at least this is the conclusion I draw from Border et al’s No Support For Historical Candidate Gene Or Candidate Gene-by-Interaction Hypotheses For Major Depression Across Multiple Large Samples, in last month’s American Journal Of Psychiatry.

They don’t ignore the evidence I mention above. In fact, they count just how much evidence there is, and find 450 studies on 5-HTTLPR before theirs, most of which were positive. But they point out that this doesn’t make sense given our modern understanding of genetics. Outside a few cases like cystic fibrosis, most important traits are massively polygenic or even omnigenic; no one gene should be able to have measurable effects. So maybe this deserves a second look.

While psychiatrists have been playing around with samples of a few hundred people (the initial study “discovering” 5-HTTLPR used n = 1024), geneticists have been building up the infrastructure to analyze samples of hundreds of thousands of people using standardized techniques. Border et al focus this infrastructure on 5-HTTLPR and its fellow depression genes, scanning a sample of 600,000+ people and using techniques twenty years more advanced than most of the studies above had access to. They claim to be able to simultaneously test almost every hypothesis ever made about 5-HTTLPR, including “main effects of polymorphisms and genes, interaction effects on both the additive and multiplicative scales and, in G×E analyses, considering multiple indices of environmental exposure (e.g., traumatic events in childhood or adulthood)”. What they find is…nothing. Neither 5-HTTLPR nor any of seventeen other comparable “depression genes” had any effect on depression.

I love this paper because it is ruthless. The authors know exactly what they are doing, and they are clearly enjoying every second of it. They explain that given what we now know about polygenicity, the highest-effect-size genes require samples of about 34,000 people to detect, and so any study with fewer than 34,000 people that says anything about specific genes is almost definitely a false positive; they go on to show that the median sample size for previous studies in this area was 345. They show off the power of their methodology by demonstrating that negative life events cause depression at p = 0.000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001, because it’s pretty easy to get a low p-value in a sample of 600,000 people if an effect is real. In contrast, the gene-interaction effect of 5-HTTLPR has a p-value of .919, and the main effect from the gene itself doesn’t even consistently point in the right direction. Using what they call “exceedingly liberal significance thresholds” which are 10,000 times easier to meet than the usual standards in genetics, they are unable to find any effect. This isn’t a research paper. This is a massacre.
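Their power argument can be reproduced on the back of an envelope. Here is a rough sketch using the standard normal approximation; the effect size r = 0.034 is my own illustrative assumption (roughly what a sample of ~34,000 can reliably detect), and I use an ordinary p < 0.05 threshold, which is generous to the small studies:

```python
import math

# Rough power calculation in the spirit of Border et al's argument.
# Assumption: a true effect of correlation size r = 0.034 (illustrative),
# tested two-sided at p < 0.05.

def normal_cdf(x):
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def power(n, r=0.034, alpha_z=1.96):
    """Approximate power to detect a correlation of size r at sample size n."""
    expected_z = r * math.sqrt(n)  # expected test statistic
    return 1 - normal_cdf(alpha_z - expected_z)

print(round(power(345), 2))     # the median candidate-gene study: ~0.09
print(round(power(34_000), 2))  # a genetics-scale sample: ~1.0
```

At the median candidate-gene sample size, a real effect of this magnitude would be detected less than one time in ten, so a literature full of positive results at that sample size is telling you more about publication practices than about genes.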

Let me back off a second and try to be as fair as possible to the psychiatric research community.

First, over the past fifteen years, many people within the psychiatric community have been sounding warnings about 5-HTTLPR. The first study showing failure to replicate came out in 2005. A meta-analysis by Risch et al from 2009 found no effect and prompted commentary saying that 5-HTTLPR was an embarrassment to the field. After 2010, even the positive meta-analyses (of which there were many) became guarded, saying only that they seemed to detect an effect but weren’t sure it was real. This meta-analysis on depression says there is “a small but statistically significant effect” but that “we caution it is possible the effect has an artifactual basis”. This meta-analysis of 5-HTTLPR amygdala studies says there is a link, but that “most studies to date are nevertheless lacking in statistical power”.

Counter: there were also a lot of meta-analyses that found the opposite. The Slate article on the “orchid gene” came out after Risch’s work, mentioned it, but then quoted a scientist calling it “bullshit”. I don’t think the warnings did anything more than convince people that this was a controversial field with lots of evidence on both sides. For that matter, I don’t know if this new paper will do anything more than convince people of that. Maybe I trust geneticists saying “no, listen to me, it’s definitely like this” more than the average psychiatrist does. Maybe we’re still far from hearing the last of 5-HTTLPR and its friends.

Second, this paper doesn’t directly prove that every single study on 5-HTTLPR was wrong. It doesn’t prove that it doesn’t cause depression in children with depressed mothers in particular. It doesn’t prove that it doesn’t cause insomnia, or PTSD, or nostalgia-proneness. It doesn’t prove that it doesn’t affect amygdala function.

Counter: it doesn’t directly prove this, but it makes it unlikely. The authors of this paper are geneticists who are politely trying to explain how genetics works to psychiatrists. They are arguing that in most cases, single genes won’t have large effects on complex traits, and no study that investigates specific genes with a sample size under five digits will provide useful information. They do an analysis of depression to demonstrate that they know what they’re talking about, but the points they are making apply to insomnia, nostalgia, and everything else. So all the studies above are questionable.

Third, most of these studies were done between 2000 – 2010, when we understood less about genetics. Surely you can’t blame people for trying?

Counter: The problem isn’t that people studied this. The problem is that the studies came out positive when they shouldn’t have. This was a perfectly fine thing to study before we understood genetics well, but the whole point of studying is that, once you have done 450 studies on something, you should end up with more knowledge than you started with. In this case we ended up with less.

(if you’re wondering how you can do 450 studies on something and get it wrong, you may be new here – read eg here and here)

Also, the studies keep coming. Association Between 5-HTTLPR Polymorphism, Suicide Attempts, And Comorbidity In Mexican Adolescents With Major Depressive Disorder is from this January. Examining The Effect Of 5-HTTLPR On Depressive Symptoms In Postmenopausal Women 1 Year After Initial Breast Cancer Treatment is from this February. Association Of DRD2, 5-HTTLPR, And 5-HTTVNTR With PTSD In Tibetan Adolescents was published after the Border et al paper! Come on!

Having presented the case for taking it easy, I also want to present the opposite case: the one for being as concerned as possible.

First, what bothers me isn’t just that people said 5-HTTLPR mattered and it didn’t. It’s that we built whole imaginary edifices, whole castles in the air on top of this idea of 5-HTTLPR mattering. We “figured out” how 5-HTTLPR exerted its effects, what parts of the brain it was active in, what sorts of things it interacted with, how its effects were enhanced or suppressed by the effects of other imaginary depression genes. This isn’t just an explorer coming back from the Orient and claiming there are unicorns there. It’s the explorer describing the life cycle of unicorns, what unicorns eat, all the different subspecies of unicorn, which cuts of unicorn meat are tastiest, and a blow-by-blow account of a wrestling match between unicorns and Bigfoot.

This is why I start worrying when people talk about how maybe the replication crisis is overblown because sometimes experiments will go differently in different contexts. The problem isn’t just that sometimes an effect exists in a cold room but not in a hot room. The problem is more like “you can get an entire field with hundreds of studies analyzing the behavior of something that doesn’t exist”. There is no amount of context-sensitivity that can help this.

Second, most studies about 5-HTTLPR served to reinforce all of our earlier preconceptions. Start with the elephant in the room: 5-HTTLPR is a serotonin transporter gene. SSRIs act on the serotonin transporter. If 5-HTTLPR played an important role in depression, we were right to focus on serotonin and extra-right to prescribe SSRIs; in fact, you could think of SSRIs as directly countering a genetic deficiency in depressed people. I don’t have any evidence that the pharmaceutical industry funded 5-HTTLPR studies or pushed 5-HTTLPR. As far as I can tell, they just created a general buzz of excitement around the serotonin transporter, scientists looked there, and then – since crappy science will find whatever it’s looking for – it was appropriately discovered that yes, changes in the serotonin transporter gene caused depression.

But this was just the worst example of a general tendency. Lots of people were already investigating the role of the HPA axis in depression – so lo and behold, it was discovered that 5-HTTLPR affected the HPA axis. Other groups were already investigating the role of BDNF in depression – so lo and behold, it was discovered that 5-HTTLPR affected BDNF. Lots of people already thought bad parenting caused depression – so lo and behold, it was discovered that 5-HTTLPR modulated the effects of bad parenting. Once 5-HTTLPR became a buzzword, everyone who thought anything at all went off and did a study showing that 5-HTTLPR played a role in whatever they had been studying before.

From the outside, this looked like people confirming they had been on the right track. If you previously doubted that bad parenting played a role in depression, now you could open up a journal and discover that the gene for depression interacts with bad parenting! If you’d previously doubted there was a role for the amygdala, you could open up a journal and find that the gene for depression affects amygdala function. Everything people wanted to believe anyway got a new veneer of scientific credibility, and it was all fake.

Third, antidepressant pharmacogenomic testing!

This is the thing where your psychiatrist orders a genetic test that tells her which antidepressant is right for you. Everyone keeps talking these up as the future of psychiatry, saying how it’s so cool how now we have true personalized medicine, how it’s an outrage that insurance companies won’t cover them, etc, etc, etc. The tests have made their way into hospitals, into psychiatry residency programs, and various high-priced concierge medical systems. A company that makes them recently sold for $410 million, and the industry as a whole may be valued in the billions of dollars; the tests themselves cost as much as $2000 per person, most of which depressed patients have to pay out of pocket. I keep trying to tell people these tests don’t work, but this hasn’t affected their popularity.

A lot of these tests rely on 5-HTTLPR. GeneSight, one of the most popular, uses seven genes. One is SLC6A4, the gene containing 5-HTTLPR as a subregion. Another is HTR2A, which Border et al debunked in the same study as 5-HTTLPR. The other five are liver enzymes. I am not an expert on the liver and I can’t say for sure that you can’t use a few genes to test liver enzymes’ metabolism of antidepressants. But people who are experts in the liver tell me you can’t. And given that GeneSight has already used two genes that we know don’t work, why should we trust that they did any better a job with the liver than they did with the brain?

Remember, GeneSight and their competitors refuse to release the proprietary algorithms they use to make predictions. They refuse to let any independent researchers study whether their technique works. They dismiss all the independent scientists saying that their claims are impossible by arguing that they’re light-years ahead of mainstream science and can do things that nobody else can. If you believed them before, you should stop believing them now. They are not light-years ahead of mainstream science. They took some genes that mainstream science had made a fuss over and claimed they could use them to predict depression. Now we know they were wrong about those. What are the chances they’re right about the others?

Yes, GeneSight has ten or twenty studies proving that their methods work. Those were all done by scientists working for GeneSight. Remember, if you have bad science you can prove whatever you want. What does GeneSight want? Is it possible they want their product to work and make them $410 million? This sounds like the kind of thing that companies sometimes want, I dunno.

I’m really worried I don’t see anyone updating on this. From where I’m sitting, the Border et al study passed unremarked upon. Maybe I’m not plugged in to the right discussion networks, I don’t know.

But I think we should take a second to remember that yes, this is really bad. That this is a rare case where methodological improvements allowed a conclusive test of a popular hypothesis, and it failed badly. How many other cases like this are there, where there’s no geneticist with a 600,000 person sample size to check if it’s true or not? How many of our scientific edifices are built on air? How many useless products are out there under the guise of good science? We still don’t know.


Is There A Case For Skepticism Of Psychedelic Therapy?


There’s been an explosion of interest in the use of psychedelics in psychiatry. Like everyone else, I hope this works out. But recent discussion has been so overwhelmingly positive that it’s worth reviewing whether there’s a case for skepticism. I think it would look something like this:

1. Psychedelics have mostly been investigated in small studies run by true believers. These are the conditions that produce a field made of unreplicable results, like the effects of 5-HTTLPR. Some of the most exciting psychedelic findings have already failed to replicate; for example, a study two years ago found that psilocybin did not permanently increase the Openness personality trait. This was one of the most exciting studies and had shaped a lot of my thinking around the issue. Now it’s gone.

2. Some of the most impressive stories involve psychedelic-assisted psychotherapy, where people who talk with a therapist, while on a substance, obtain true insight and get real closure. But every psychotherapy has amazing success stories floating out there. Back when psychoanalysis was new, the whole world was full of people telling their amazing success stories about how Dr. Freud helped them obtain true insight and get real closure. I think of psychotherapy as a domain where people can get as many amazing success stories as they want whether or not they’re really doing anything right, for unclear reasons.

3. Ketamine is the best comparison for psychedelics. Like psychedelics, it’s often used as a recreational drug, and produces profound experiences. Like psychedelics, it got hyped as an exciting new innovation that was going to revolutionize everything in psychiatry (in this case, depression treatment). But it’s been in pretty common (albeit non-formulary) use for five years now, and nothing has been revolutionized; my (very anecdotal) impression is that most patients who seek ketamine treatment find it only about as helpful as anything else. The gold-standard FDA studies are abysmal, worse than most other antidepressant medications. I’m sure ketamine works great for some people, just as SSRIs, therapy, and diet/exercise work well for some people. But at least so far it hasn’t been revolutionary.

4. Another good comparison is NSI-189. Again, a totally revolutionary new drug with a totally revolutionary new mechanism, with so many anecdotes of amazing success that depressed people started getting it on the black market before the FDA trials were even underway. People were posting testimonials that NSI-189 changed their life and that it was going to destroy the market for every other antidepressant. When the FDA trials finally finished, it was discovered to be ineffective. Seriously, the graveyards are littered with revolutionary new treatments for treatment-resistant depression that have great success in anecdotes and preliminary studies.

5. Between 10% and 50% of Americans have tried psychedelics. If psychedelics did something shocking, we would already know about it. I occasionally hear stories like “I did LSD and my depression went away”, but I also occasionally hear stories like “I did LSD and then my depression got worse”, so whatever. I know plenty of people who use heroic amounts of LSD all the time, and are still nervous wrecks. It’s possible there’s some set and setting that will improve this, but see part 7 below.

(one exception to this might be microdosing, which is a pretty new idea and might work differently from regular trips.)

6. In my model of psychedelics, they artificially stimulate your insight system the same way heroin artificially stimulates your happiness system. This leads to all those stories where people feel like they discovered the secret of the universe, but when they recover their faculties, they find it was only some inane triviality. This sounds very likely to produce people who think their psychedelic experience has changed everything and solved all their problems, which means we should discount these impressions as evidence that psychedelics really do change everything and solve all your problems. Granted, feeling like you truly understand the universe may itself help with depression, but I worry this is not a very lasting effect. See my posts on PIHKal and Universal Love, Said The Cactus Person.

7. Even if all of the above are wrong and psychedelics work very well, the FDA could kill them with a thousand paper cuts. Again, look at ketamine: the new FDA approval ensures people will be getting the slightly different esketamine, through a weird route of administration, while paying $600 a pop, in specialized clinics that will probably be hard to find. Given the price and inconvenience, insurance companies will probably restrict it to the most treatment-resistant patients, and it probably won’t help them (treatment-resistant patients tend to stay that way). Given the panic around psychedelics, I expect it to be similarly difficult to get them even if they are legal and technically FDA-approved. Depressed people will never be able to walk into a psychiatrist’s office and get LSD. They’ll walk into a psychiatrist’s office, try Prozac for three months, try Wellbutrin for three months, argue with their insurance for a while, eventually get permission to drive to a city an hour away that has a government-licensed LSD clinic, and get some weird form of LSD that might or might not work, using a procedure optimized to minimize hallucinations. I don’t know what the optimal set and setting for LSD is, but if it’s anything other than “the inside of a government-licensed LSD clinic, having a government-licensed LSD therapist ask you standard questions”, you won’t get it.

I hope I am wrong about this, I really do. And I think there’s a good chance that I might be. I really want psychedelic research to succeed and I support it wholeheartedly. But there’s been so much hype around so many things before that I want to avoid getting burned again, so I’ll stay skeptical for now.

The APA Meeting: A Photo-Essay


The first thing you notice at the American Psychiatric Association meeting is its size. By conservative estimates, a quarter of the psychiatrists in the United States are packed into a single giant San Francisco convention center, more than 15,000 people.

Being in a crowd of 15,000 psychiatrists is a weird experience. You realize that all psychiatrists look alike in an indefinable way. The men all look balding, yet dignified. The women all look maternal, yet stylish. Sometimes you will see a knot of foreign-looking people huddled together, their nametags announcing them as the delegation from the Nigerian Psychiatric Association or the Nepalese Psychiatric Association or somewhere else very far away. But however exotic, something about them remains ineffably psychiatrist.


The second thing you notice at the American Psychiatric Association meeting is that the staircase is shaming you for not knowing enough about Vraylar®.

Seems kind of weird. Maybe I’ll just take the escalator…

…no, the escalator is advertising Latuda®, the “number one branded atypical antipsychotic”. Aaaaaah! Maybe I should just sit down for a second and figure out what to do next…

AAAAH, CAN’T SIT DOWN, VRAYLAR® HAS GOTTEN TO THE BENCHES TOO! Surely there’s a non-Vraylar bench somewhere in this 15,000 person convention center!

…whatever, close enough.

You know how drug companies pay six or seven figures for thirty-second television ads just on the off chance that someone with the relevant condition might be watching? You know how they employ drug reps to flatter, cajole, and even seduce doctors who might prescribe their drug? Well, it turns out that having 15,000 psychiatrists in one building sparks a drug company feeding frenzy that makes piranhas look sedate by comparison. Every flat surface is covered in drug advertisements. And after the flat surfaces are gone, the curved surfaces, and after the curved surfaces, giant rings hanging from the ceiling.

The ads overflow from the convention itself to the city outside. For about two blocks in any direction, normal ads and billboards have been replaced with psychiatry-themed ones, until they finally peter out and segue into the usual startup advertisements around Market Street.


There’s a popular narrative that drug companies have stolen the soul of psychiatry. That they’ve reduced everything to chemical imbalances. The people who talk about this usually go on to argue that the true causes of mental illness are capitalism and racism. Have doctors forgotten that the real solution isn’t a pill, but structural change that challenges the systems of exploitation and domination that create suffering in the first place?

No. Nobody has forgotten that. Because the third thing you notice at the American Psychiatric Association meeting is that everyone is very, very woke.

Here are some of the most relevant presentations listed in my Guidebook:

Saturday, May 18
Climate Psychiatry 101: What Every Psychiatrist Should Know
Women's Health In The US: Disruption And Exclusion In The Time Of Trump
Gender Bias In Academic Psychiatry In The Era Of the #MeToo Movement
Revitalizing Psychiatry – And Our World – With A Social Lens
Hip-Hop: Cultural Touchstone, Social Commentary, Therapeutic Expression, And Poetic Intervention
Lost Boys Of Sudan: Immigration As An Escape Route For Survival
Treating Muslim Patients After The Travel Ban: Best Practices In Using The APA Muslim Mental Health Toolkit
Making The Invisible Visible: Using Art To Explore Bias And Hierarchy In Medicine
Navigating Racism: Addressing The Pervasive Role Of Racial Bias In Mental Health
Sunday, May 20
Addressing Microaggressions Toward Sexual And Gender Minorities: Caring For LGBTQ+ Patients And Providers
Latino Undocumented Children And Families: Crisis At The Border And Beyond
Racism And Psychiatry: Growing A Diverse Psychiatric Workforce And Developing Structurally Competent Psychiatric Providers
Sex, Drugs, And Culturally Responsive Treatment: Addressing Substance Use Disorders In The Context Of Sexual And Gender Diversity
Grabbing The Third Rail: Race And Racism In Clinical Documentation
Racism And The War On Terror: Implications For Mental Health Providers In The United States
The Multiple Faces Of Deportation: Being A Solution To The Challenges Faced By Asylum Seekers, Mixed Status Families, And Dreamers
What Should The APA Do About Climate Change?
Intersectionality 2.0: How The Film Moonlight Can Teach Us About Inclusion And Therapeutic Alliance In Minority LGBTQ Populations
Transgender Care: How Psychiatrists Can Decrease Barriers And Provide Gender-Affirming Care
Gun Violence Is A Serious Public Health Problem Among America's Adolescents And Emerging Adults: What Should Psychiatrists Know And Do About It?
Working Clinically With Eco-Anxiety In The Age Of Climate Change: What Do We Know And What Can We Do?
Are There Structural Determinants Of African-American Child Mental Health? Child Welfare – A System Psychiatrists Should Scrutinize
Monday, May 21
Community Activism Narratives In Organized Medicine: Homosexuality, Mental Health, Social Justice, and the American Psychiatric Association
Disrupting The Status Quo: Addressing Racism In Medical Education And Residency Training
Ecological Grief, Eco-Anxiety, And Transformational Resilience: A Public health Perspective On Addressing Mental Health Impacts Of Climate Change
Immigration Status As A Social Determinant Of Mental Health: What Can Psychiatrists Do To Support Patients And Communities? A Call To Action
Psychiatry In The City Of Quartz: Notes On The Clinical Ethnography Of Severe Mental Illness And Social Inequality
Racism And Psychiatry: Understanding Context And Developing Policies For Undoing Structural Racism
Trauma Inflicted To Immigrant Children And Parents Through Policy Of Forced Family Separation
Deportation And Detention: Addressing The Psychosocial Impact On Migrant Children And Families
How Private Insurance Fails Those With Mental Illness: The Case For Single-Payer Health Care
Imams In Mental Health: Caring For Themselves While Caring For Others
Misogynist Ideology And Involuntary Celibacy: Prescription For Violence?
Advocacy: A Hallmark Of Psychiatrists Serving Minorities
Inequity By Structural Design: Psychiatrists' Responsibility To Be Informed Advocates For Systemic Education And Criminal Justice Reform
Treating Black Children And Families: What Are We Overlooking?
Blindspotting: An Exploration Of Implicit Bias, Race-Based Trauma, And Empathy
But I'm Not Racist: Racism, Implicit Bias, And The Practice Of Psychiatry
No Blacks, Fats, or Femmes: Stereotyping In The Gay Community And Issues Of Racism, Body Image, And Masculinity
Silence Is Not Always Golden: Interrupting Offensive Remarks And Microaggressions
Black Minds Matter: The Impact Of #BlackLivesMatter On Psychiatry

…you get the idea, please don’t make me keep writing these.

Were there really more than twice as many sessions on global warming as on obsessive compulsive disorder? Three times as many on immigration as on ADHD? As best I can count, yes. I don’t want to exaggerate this. There was still a lot of really meaty scientific discussion if you sought it out. But overall the balance was pretty striking.

I’m reminded of the idea of woke capital, the weird alliance between very rich businesses and progressive signaling. If you want to model the APA, you could do worse than a giant firehose that takes in pharmaceutical company money at one end, and shoots lectures about social justice out the other.


The fourth thing you notice at the American Psychiatric Association meeting is the Scientologists protesting outside.

They don’t tell you they’re Scientologists. But their truck has a link to CCHR.org on it, and Wikipedia confirms them as a Scientology front group. Scientology has a long-standing feud with psychiatry, with the psychiatrists alleging that Scientology is a malicious cult, and the Scientologists alleging that psychiatry is an evil pseudoscience that denies the truth of dianetics. And that psychiatrists helped inspire Hitler. And that 9/11 was masterminded by Osama bin Laden’s psychiatrist. And that psychiatrists are plotting to institute a one-world government. And that psychiatrists are malevolent aliens from a planet called Farsec. Really they have a lot of allegations.

This particular truck is especially sad, because they’re reinforcing the myths about electroconvulsive therapy. ECT is a very effective treatment for depression. It is essentially always consensual – although most other psychiatric treatments can be administered involuntarily if someone is judged too out-of-touch with reality to make decisions, ECT has a special status as a treatment which can only be given with patient permission. It’s always performed under anaesthesia and muscle relaxants, so patients are not conscious during the procedure and not spasming. And it can be a life-changing option for treatment-resistant depression. See this Scientific American article for more.


The fifth thing you notice at the American Psychiatric Association meeting is that the CIA has set up a booth.

I was pretty curious about what the CIA wanted from psychiatrists (did they lose the original MKULTRA data? do they need to gather more?), but I was too shy to ask their representative directly. I did take one of their flyers, but it turned out to just be about how woke they were:


The sixth thing you notice at the American Psychiatric Association meeting is that Vraylar® has built an entire miniature city. The buildings are plastered with pamphlets on Vraylar®. Billboards advertising Vraylar® hang over the streets and bridges. Giant Vraylar balloons hover serenely over everything, looking down with contempt and sorrow upon the non-Vraylar®-prescribing world below.

Occupying pride of place in city center, some sort of Important Vraylar Scientist is constructing the Transamerica Pyramid out of playing cards.

I dunno, if I were working in an area where the research supporting a treatment has a tendency to collapse suddenly and spectacularly, I might want to avoid building an association in people’s minds between my medication and a house of cards. But the ways of Vraylar® are inscrutable to mortal men.


The seventh thing you notice at the American Psychiatric Association meeting is that many of the new drugs are ridiculous.

It’s hard to blame pharmaceutical companies for this. The return on investment for pharma R&D is rapidly shrinking – drug discovery is too expensive to consistently make money anymore.

Rather than give up and die, pharma is going all in on newer, me-too-er me-too drugs. The current business plan looks kind of like this:

1. Take a popular older drug

2. Re-invent it, either with a minor change to the delivery mechanism, or by finding a similar molecule that works the same way

3. Call this a new drug, advertise the hell out of it, and sell it for 10x – 100x the price of the older drug

4. Profit!

Consider Lucemyra®:

It’s an alpha-2a receptor agonist used to treat acute opiate withdrawal. Alpha-2a receptor agonists are a fine choice for acute opiate withdrawal, but we already have one that works great: clonidine. Clonidine costs $4.84 per month. Lucemyra® costs $1,974.78. Is there any difference at all between the two medications? Some studies suggest maybe lofexidine can cause less hypotension, but realistically we throw random doses of clonidine at ADHD kids all the time, so it’s not like clonidine-induced hypotension is some kind of giant menace which will destroy us all.

I asked the Lucemyra® representative why I might prescribe Lucemyra® instead of clonidine for opiate withdrawal. She said it was because Lucemyra® is FDA-approved for this indication, and clonidine isn’t. This is the same old story as Rozerem® vs. melatonin, Lovaza® vs. fish oil, and Spravato® vs. ketamine. As long as doctors continue to outsource their thinking to the FDA approval process, in a way even the FDA itself doesn’t endorse, pharma companies will be able to inflate the prices of basic medications by a thousand times just by playing games with the bureaucracy.

But also:

Jornay® is a new form of methylphenidate, ie Ritalin. The usual comparison: a month of Ritalin costs $25.19, a month of Jornay® costs $387.48. What’s the difference? You can take Jornay® at night. Why is this interesting? The Jornay® representatives say that maybe people want to have Ritalin in their system as soon as they wake up, rather than having to wait the half-hour or so it usually takes for it to start having an effect. I have to admit, from a scientific perspective Jornay® is kind of cool; I expect the pharmacologists who designed it had a lot of fun. But the oppressed people of the world haven’t exactly been crying out for Dark Ritalin. Nobody has been saying “Help us, pharmaceutical industry, merely having Ritalin®, Concerta®, Metadate®, Focalin®, Daytrana®, Quillivant®, Quillichew®, Aptensio®, Biphentin®, Equasym®, Medikinet®, and Rubifen® just isn’t enough for us! We need more forms of Ritalin, stat!”

My favorite was Subvenite®, which is just lamotrigine in a conveniently-packaged box that tells you how much to take each day. The same amount of normal lamotrigine would cost about $12; it’s hard for me to figure out exactly how much Subvenite® costs, but this site suggests $540. To be fair, lamotrigine is a really inconvenient drug whose dosing schedule often leaves patients confused. To be less fair, seriously, $540 for some better instructions? Get a life.
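Tabulating the markups from the list prices quoted above makes the pattern hard to miss (a quick sketch; the monthly generic prices are the approximate figures from this post, not authoritative pricing data):

```python
# (comparison, brand price per month, older-drug price per month),
# using the list prices quoted in the text above
drugs = [
    ("Lucemyra vs. clonidine",   1974.78,  4.84),
    ("Jornay vs. Ritalin",        387.48, 25.19),
    ("Subvenite vs. lamotrigine", 540.00, 12.00),
]

# Markup of each re-invented drug over the older drug it repackages
markups = {name: round(brand / generic) for name, brand, generic in drugs}

for name, markup in markups.items():
    print(f"{name}: about {markup}x the price")
```

So "10x – 100x" is, if anything, conservative: the opiate-withdrawal example lands at roughly four hundred times the price of the older drug.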

How do all these people keep doing it? What’s their business plan? Here’s a hint:

This is the brochure for Lucemyra®, the opiate withdrawal medication that costs $1,974.78. No patient is paying $1,974.78 for it. Patients are paying $25. And doctors sure aren’t paying $1,974.78. The way all these companies are getting away with it is because in Healthcaristan SSR, nobody ever pays for their own medication.

To a first approximation, doctors make purchasing decisions, but insurances cough up the money. Insurances have a few weapons to prevent doctors from buying arbitrarily expensive drugs, but they tend to back off in the face of magic words like “I believe this is medically necessary” or “This is the one the FDA approved”. So to fill in the missing pieces of the pharma strategy mentioned above:

1. Take a popular older drug

2. Re-invent it, either with a minor change to the delivery mechanism, or by finding a similar molecule that works the same way

3. Call this a new drug, advertise the hell out of it, and sell it for 10x – 100x the price of the older drug

4. Advertise it to patients (who don’t have to pay for it) and doctors (who definitely don’t have to pay for it), neither of whom care at all what price you’re setting.

5. Make sure doctors know the magic words they need to use to force insurance companies to pay for it.

6. Profit!

This has become so lucrative that pharma companies barely have to do any real research and development at all these days. The only genuinely exciting new drugs at the conference were Ingrezza® and Austedo®, both of which treat tardive dyskinesia – a side effect you get from having been on too many other psychiatric drugs. This is probably a metaphor for something.


The eighth thing you notice at the American Psychiatric Association meeting is that there’s a presentation called “Yer A Psychiatrist, Harry!”: Learning Psychiatric Concepts Through The Fictional Worlds Of Game Of Thrones And Harry Potter. I didn’t go. I realize I have failed you, my readers, but if I had to listen to ninety minutes of that, all the Vraylar® in the world would not be enough to maintain my sanity.


The ninth thing you notice at the American Psychiatric Association meeting is that, after winning last place in a head-to-head comparison of various antipsychotics, doing worse than drugs that cost less than 1% as much…

…Fanapt® (iloperidone) has pivoted to a marketing strategy of bribing doctors with free ice cream:


The tenth thing you notice at the American Psychiatric Association meeting is that all of this has happened before.

This is the 175th anniversary of the APA. It’s been a pretty crazy century-and-three-quarters, no pun intended. Like, seriously, take a look at this guy:

Back when you could still lose your medical license for being gay, he went to the APA meeting in a mask and gave a presentation arguing for gay rights, and the APA de-listed homosexuality as a psychiatric disorder the following year. How amazing is that?

The APA highlighted a bunch of people like this, heroes and trailblazers all. But for every great hero celebrated on posters, there is an embarrassment buried somewhere deep in an archive. My favorite of these is the APA Presidential Address from 1918, the very tail end of WWI. The head of the Association, a very distinguished psychiatrist named Dr. Anglin, gets up in front of the very same conference I attended this week (the 1918 version was held in Chicago) and declares that the greatest problem facing psychiatry is…the dastardly Hun:

The maxim that medical science knows no national boundaries has been rudely shaken by the war. The Fatherland has been preparing for isolation from the medical world without its confines. Just as, years ago, the Kaiser laid his ban on French words in table menus, so, as early as 1914, German scientists embarked on a campaign against all words which had been borrowed from an enemy country. A purely German medical nomenclature was the end in view. The rest of the world need not grieve much if they show their puerile hate in this way. It will only help to stop the tendency to Pan-Germanism in medicine which has for some years past been gaining headway.

The Germans excel all other nations in their genius for advertising themselves. They have proved true the French proverb that one is given the standing he claims. On a slender basis of achievement they have contrived to impress themselves as the most scientific nation. Never was there greater imposture. They display the same cleverness in foisting on a gullible world their scientific achievements as their shoddy commercial wares. The two are of much the same value, made for show rather than endurance — in short, made in Germany […]

In the earliest months of the war it was pointed out that there are tendencies in the evolution of medicine as a pure science as it is developed in Germany which are contributing to the increase of charlatanism of which we should be warned. A medical school has two duties — one to medical science, the other to the public. The latter function is the greater, for out of every graduating class 90 per cent. are practitioners and less than 10 per cent. are scientists. The conditions in Germany are reversed. There, there were ninety physicians dawdling with science to every ten in practice. Of these 90, fully 75 per cent were wasting their time. In Germany the scientific side is over-done, and they have little to show for it all, while the human side is neglected. Even in their new institutions, splendid as they are in a material sense, it is easily seen that the improved conditions are not for the comfort of the patients.

Out of this war some modicum of good may come if it leads to a revision of the exaggerated estimate that has prevailed in English-speaking countries of the achievements of the Germans in science. We had apparently forgotten the race that had given the world Newton, Faraday, Stephenson, Lister, Hunter, Jenner, Fulton, Morse, Bell, Edison, and others of equal worth. German scientists wait till a Pasteur has made the great discovery, on which it is easy for her trained men to work. She shirks getting for herself a child through the gates of sacrifice and pain; but steals a babe, and as it grows bigger under her care, boasts herself as more than equal to the mother who bore it. Realising her mental sterility, drunk with self-adoration, she makes insane war on the nations who still have the power of creative thought.

But it is especially in the realm of mental science that the reputation of the Germans is most exalted and is least deserved. For every philosopher of the first rank that Germany has produced, the English can show at least three. And in psychiatry, while we have classical writings in the English tongue, and men of our own gifted with clinical insight, we need seek no foreign guides, and can afford to let the abounding nonsense of Teutonic origin perish from neglect of cultivation.

The Germans are shelling Paris from their Gothas and their new gun. Murdering innocents, to create a panic in the heart of France! With what effect? The French army cries the louder, “They shall not pass”; Paris glows with pride to be sharing the soldiers’ dangers, and increases its output of war material; and the American army sees why it is in France, and is filled with righteous hatred. Panic nowhere. Vengeance everywhere. What does the Hun know of psychology? His most stupid, thick-witted performance was his brutal defiance of the United States with its wealth, resources, and energy. That revealed a mental condition both grotesque and pitiable.

After the war a centre of medical activity will be found on this side the Atlantic, and those who have watched the progress medical science has made in the United States will have no misgivings as to your qualifications for leadership. If we learn to know ourselves, great good will come out of this war.

Anglin does not deny that some may find it inappropriate to discuss politics at a psychiatry conference, but notes that:

If in these introductory remarks I have not been able to detach myself from the world’s most serious business at the present time, perhaps on reflection they may not have gone very far afield from the subject which binds us together in an association. If there is to be a change in the conditions under which we live this must have its effect on the minds of men ; whether for good or ill, I will not stop to speculate. We are intensely concerned with environment. This war itself is entangled with it,

England’s greatness, her devotion to honour, truth, and fidelity, is due to the environment in which her children are trained and grow to manhood. The ivy-grown wall, the vine-clad hills and the rose-covered bowers constitute the birth-place of English character.

Gerard tells us the cause of the war is the uncongenial environment in which the German youth is cradled and reared. The leaden skies for which Prussia is noted, its bleak Baltic winds, the continuous cold, dreary rains, the low-lying land, and the absence of flowers have tended to harden the spirit and rob it of its virtue, produce a sullen and morose character, curdling the milk of human kindness.

He does raise one warning, one problem that risks sabotaging even countries as congenial-climate-having as ourselves and our allies:

The quack medicine vendor is busier than ever. Money is plenty and he wants some of it. He uses mental suggestion and interests us. He is a specialist in distortion who probes into the ordinary sensations of healthy people and perverts them into symptoms. Every billboard, newspaper, fence-rail, barn and rock thrusts out a suggestion of sickness as never before. The only vulnerable point to attack the vicious traffic is the advertising. If governments forbid that as they should, the next generation will be healthier and richer.

From Dr. Anglin’s address, I gather three things.

First, the billboards we shall always have with us. It’s easy to imagine this is a modern problem, but apparently the generation that confronted the Kaiser was confronting annoying psychiatric advertising too. The Kaiser is gone; the annoying psychiatric advertising has proven a tougher foe.

Second, psychiatry has always been the slave of the latest political fad. It is just scientific enough to be worth capturing, but not scientific enough to resist capture. The menace du jour will always be a threat to our mental health; the salient alternative to “just forcing pills down people’s throat” will always be pursuing the social agenda of whoever is in power; you will always be able to find psychiatrists to back you up on this.

But third, science advances anyway. Psychiatry is light-years ahead of where it was a hundred years ago. Since Dr. Anglin’s 1918 address, we’ve discovered psychotherapy and psychopharmacology; come up with deinstitutionalization and destigmatization; and put rights in place to protect psychiatric patients and to protect the general public from being unnecessarily psychiatrized. We’ve even invented Vraylar®.

On my way out of the conference, I encountered this ad:

I don’t think it was even related to the psychiatry conference. I think it was for a nearby art museum. But it struck me. It struck me because it’s the sort of picture psychiatry wants to have of itself, a combination of hard neuroscience and basic human goodness. It struck me because as written, it’s obviously bogus (which Brodmann area is responsible for empathy again? How bright does it have to light up before you start feeling empathic?) in much the same way psychiatry can be obviously bogus (how much Vraylar® does it take before you can “take back control of your life” or “feel better than well”?), but is sort of an exaggerated and slightly-too-literal version of something that could potentially not be bogus. It struck me because, after making fun of it, I had to admit to myself that the thing it was pointing at was good and important and probably exactly what an art museum should be trying to do. And a psychiatrist, for that matter.

Do People Like Their Mental Health Care?


Along with more specific questions, I asked people who took the SSC survey to rate their experience with the mental health system on a 1 – 10 scale.

About 5,000 people answered. On average, they rated their experience with psychotherapy a 5.7, and their experience with medication also 5.7.

This is more optimistic than a lot of the horror stories you hear would suggest. A lot of the horror stories involve inpatient commitment (which did get a dismal 4.4/10 approval rating) so I checked what percent of people engaging with the system ended up inpatient. Of people who had seen either a psychiatrist or therapist, only 7% had ever been involuntarily committed to a psychiatric hospital. Note that this data can’t tease out causation, so this doesn’t mean 7% of people who saw an outpatient professional were later committed – it might just mean that lots of people got committed to the hospital by police, then saw a professional later.

Going into more detail about what people did or didn’t like (note truncated y-axis):

I asked people what kind of therapy they did. People liked all schools of therapy about equally, except that they liked “eclectic” therapy that wasn’t part of any specific school a little less than any particular school. Every school, including eclectic, still scored above the overall 5.7 average, because the people who couldn’t answer this question at all – who weren’t even sure what kind of therapy they were doing – rated their therapy lower than any school, eclectic included.

People really liked doing therapy from a book. They liked doing therapy with an in-person therapist a little less, and they liked online therapy apps least of all. This doesn’t match published literature, and this would be a good time to remember that all of these results are horribly biased and none of them can prove causation. For example, the sort of motivated go-getter who would go out and get a therapy book and read it themselves might be systematically different from somebody who gets therapy through an app or in a clinic.

This graph shows how people liked medication (blue) and therapy (red) based on what their mental health issue was (note truncated y-axis). Some groups – people with eating disorders, people with borderline personalities – were just generally hard to please. Alcoholics were much happier with their therapy than with pharmaceutical treatment (though the sample size was only about 50 per group). People with bipolar and ADHD were happier with medication than therapy.

This is a little different. The last graph averaged everyone’s opinion of medication and everyone’s opinion of therapy. This one just includes the people who have tried both, who might have a better standard for comparison. The higher the bar above the red line, the more they preferred the medication; the lower the bar below the red line, the more they preferred the therapy.

Alcoholics and borderlines prefer the therapy. Autistic people strongly prefer the medication, which is weird because there’s no good medication for autism. This could be them hating the social interaction involved in therapy. Or it could be a condemnation of therapies like applied behavior analysis, which can become a sort of confrontational attempt to force them to conform, with potential punishment for failure. The ADHD preference for medication is less surprising; stimulants always get a high approval rating.

Remember, none of these numbers measure whether treatment works – just whether patients are happy. And they’re all vulnerable to selection effects and a host of other biases. Take them as exploratory only. I welcome people trying to replicate or expand on these results. All of the data used in this post are freely available and can be downloaded here.

Know Your Gabapentinoids


The gabapentinoids are a class of drugs vaguely resembling the neurotransmitter GABA. Although they were developed to imitate GABA’s action, later research discovered they acted on a different target, the A2D subunit of calcium channels. Two gabapentinoids are approved by the FDA: gabapentin (Neurontin®) and pregabalin (Lyrica®).

Gabapentin has been generic since 2004. It’s commonly used for seizures, nerve pain, alcoholism, drug addiction, itching, restless legs, sleep disorders, and anxiety. It has an unusually wide dose range: guidelines suggest using anywhere between 100 mg and 3600 mg daily. Most doctors (including me) use it at the low end, where it’s pretty subtle (read: doesn’t usually work). At the high end, it can cause sedation, confusion, dependence, and addiction. I haven’t had much luck finding patients a dose that works well but doesn’t have these side effects, which is why I don’t use gabapentin much.

Pregabalin officially went generic last month, but isn’t available yet in generic form, so you’ll have to pay Pfizer $500 a month. On the face of things, pregabalin seems like another Big Pharma ploy to extend patents. The gabapentin patent was running out, so Pfizer synthesized a related molecule that did the same thing, hyped it up as the hot new thing, and charged 50x what gabapentin cost. This kind of thing is endemic in health care and should always be the default hypothesis. And a lot of scientists have analyzed pregabalin and said it’s definitely just doing the same thing gabapentin is.

But some of my anxiety patients swear by pregabalin. They call it a miracle drug. They can’t stop talking about how great it is. I can’t use it too often, because of the price, but I’m really excited about the upcoming generic version coming out so I can use it more often.

Still, I have to wonder – why am I sitting around waiting when I could just give people gabapentin? Confirmed pharmacodynamically-identical, generic, and cheap? The answer is, gabapentin doesn’t seem to work that well. I’ve never had patients with more than minimal anxiety happy on gabapentin alone. Am I imagining a difference between these two supposedly-similar medications? I don’t know. Although studies confirm pregabalin is great for anxiety, nobody has done the studies on gabapentin that would let me compare it. For now, the apparent difference between pregabalin and gabapentin is one of the great mysteries of life, one of the things that makes me doubt my own sanity.

One possibility is that we’re getting the doses wrong. UpToDate recommends treating anxiety disorders with gabapentin using a starting dose of 300 mg twice a day = 600 mg daily. But it recommends 100 mg of pregabalin three times a day = 300 mg daily. This dosing table suggests 1 mg pregabalin = 5 mg gabapentin, so 300 mg of pregabalin = 1500 mg gabapentin! So we’re starting gabapentin patients on less than half as much medicine as we start pregabalin patients on. If this forms a reference point in the doctor’s mind, then maybe what we think of as a “high dose” of gabapentin is the same as what we think of as a “low dose” of pregabalin. Maybe all our gabapentin doses are just too low.
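The dose arithmetic above is simple enough to sketch in a few lines. This is an illustration only: the 1:5 conversion factor comes from the dosing table linked above, and the function and constant names are mine, not anything standard.

```python
GABAPENTIN_MG_PER_PREGABALIN_MG = 5  # rough equivalence from the dosing table


def pregabalin_to_gabapentin_mg(pregabalin_mg):
    """Convert a pregabalin dose to its nominal gabapentin equivalent."""
    return pregabalin_mg * GABAPENTIN_MG_PER_PREGABALIN_MG


gabapentin_start = 300 * 2   # UpToDate: 300 mg twice daily = 600 mg/day
pregabalin_start = 100 * 3   # UpToDate: 100 mg three times daily = 300 mg/day

equivalent = pregabalin_to_gabapentin_mg(pregabalin_start)
print(equivalent)                     # 1500 (mg/day gabapentin-equivalent)
print(gabapentin_start / equivalent)  # 0.4 -- less than half the starting dose
```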

I usually avoid higher gabapentin doses because I feel like they have more side effects than low pregabalin doses. Is this just an illusion? Is it my bias? If a patient reports feeling dizzy on high-dose gabapentin, do I say “Yeah, you’re on a really high dose, I’m not surprised you feel that way, let’s back off?” And then if they feel the same thing on low-dose pregabalin, might I say “It’s a low dose, you’re just getting used to the medication, give it a few more weeks”? Might my biases even be affecting how patients report their own experiences?

Or might there be some obscure pharmacologic mechanism? This paper tries to compare the pharmacology of the two drugs. They say the body can easily absorb pregabalin, but has a limited ability to absorb gabapentin – the more gabapentin there is, the lower a percent gets absorbed:

With regard to the fraction of the dose absorbed, the lowest gabapentin dose studied (100 mg every 8 hours) is associated with absolute bioavailability of approximately 80%. This value was shown to decrease with increasing dose to an average of 27% absolute bioavailability for a 1600 mg dose every 8 hours. In contrast, oral bioavailability of pregabalin averaged 90% across the full dose range of 75 to 900 mg/day studied.

This doesn’t match the dosing table linked above, which suggests a 1:5 constant ratio between gabapentin and pregabalin dose. It also doesn’t really match the paper’s Figure 3, which shows a linear effect of gabapentin up to 1800 mg for nerve pain. It does match the paper’s figure 4, which shows little to no effect of gabapentin past 600 mg for seizures. I don’t really know what’s going on here. It would make some sense if the bottleneck were plasma -> CSF absorption, but that’s not what the paper’s saying. In any case, if the gabapentin/pregabalin relationship followed the same pattern for anxiety as for seizures, it would be impossible to ever get a dose of gabapentin as high as the starting dose for pregabalin, which would explain perceptions of pregabalin’s superiority. Try to increase gabapentin dose, and you just have extra gabapentin sitting around in the GI tract causing trouble. I don’t know if this is at all the right way to be thinking about this.
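To make the saturable-absorption idea concrete, here is a back-of-the-envelope sketch – mine, not the paper’s. It fits a simple Michaelis-Menten-style curve, absorbed = vmax·dose/(km + dose), exactly through the two data points quoted above (80% absorbed at 100 mg per dose, 27% at 1600 mg per dose), and then asks what fraction of other doses would be absorbed:

```python
def fit_saturable(d1, f1, d2, f2):
    """Fit absorbed(d) = vmax*d/(km+d) exactly through two (dose, fraction) points."""
    a1, a2 = d1 * f1, d2 * f2                       # absorbed amounts: 80 mg, 432 mg
    km = d1 * d2 * (a2 - a1) / (a1 * d2 - a2 * d1)  # solve the two equations for km
    vmax = a1 * (km + d1) / d1                      # then back out vmax
    return vmax, km


vmax, km = fit_saturable(100, 0.80, 1600, 0.27)     # vmax ~ 611 mg, km ~ 664 mg


def absorbed(dose_mg):
    """Milligrams absorbed from a single oral dose, under this toy model."""
    return vmax * dose_mg / (km + dose_mg)


for dose in (100, 300, 600, 1600, 3200):
    print(dose, round(absorbed(dose)), round(absorbed(dose) / dose, 2))
```

Under this toy fit, absorption per dose caps out around 611 mg, and a 600 mg dose is already absorbing only about half of itself – which would rhyme with the seizure dose-response plateau around 600 mg per dose, though a two-point curve fit proves nothing.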

One more difference: gabapentin is not a controlled substance, but pregabalin is Schedule V, the designation the government uses for things that are technically addictive but that it’s not going to worry about too much. Why the difference? The government’s documentation of their decision doesn’t say. It could be total chance: both substances are right on the border, and a different bureaucrat got assigned to each case. But the decision doesn’t seem totally off-base to me. Although it’s theoretically possible to get addicted to gabapentin if you use a really high dose and try really hard, you’d have to be pretty desperate even by drug addict standards. I’ve seen a little more pregabalin addiction, though I agree with the FDA that it’s still pretty unusual (some people in the comments disagree). One likely culprit is the absorption rate: pregabalin gets absorbed in an hour or so, gabapentin takes three or four. Faster-acting substances are always more addictive; they peak higher and sooner, and it’s easier for the brain to associate stimulus (taking the drug) with response (feeling good). Could this also explain some of the efficacy difference? I don’t know.

Phenibut is not FDA-approved; it’s a common medication in Russia which gets sold as a supplement/nootropic/recreational drug in the US. The FDA occasionally asks people to stop selling it, but they’ve never gotten serious, and it’s still easily available on the open Internet.

Phenibut has the kind of approval ratings usually associated with North Korean dictators who kill anyone who disapproves of them – including the highest median rating on my nootropics survey. It’s phenomenal for social anxiety – not in the SSRI way of making you a little calmer, but more in the “getting just the right amount of drunk” way that turns you into a different, bolder, and more fun-loving person. Aside from this, it can give a hard-to-describe sense of tranquility and well-being.

(it also makes you feel like you’re wearing a hat even when you aren’t. I swear this is a real side effect.)

Needless to say, it’s potentially addictive and can seriously ruin your life. Conventional wisdom in the phenibut user community is that you can use 500 mg once every week (or maybe every two weeks) safely. Anything beyond that and you develop rapid tolerance. Increase the dose to fight the tolerance, and you start feeling worse on the days you don’t take it, using it more and more to compensate for the rebound, and eventually getting a withdrawal syndrome closely related to the delirium tremens that sometimes kills recovering alcoholics.

(does this mean that responsible phenibut use is a free way to have one great day per two weeks? depends how good your willpower is, I guess. see also this graph from this source)

The discovery of ketamine’s efficacy for depression was a mixed blessing. Ketamine is such a difficult medication to use – dangerous side effects, intolerable hallucinations, IV delivery – that it could never be a panacea, whatever its potential. But the discovery sparked a hunt for other ketamine-like chemicals that shared its efficacy but not its downsides. It also started a race to figure out how ketamine worked, with the hope that this would provide the key to what depression really was, deep down. Phenibut should inspire the same kind of interest. It’s too dangerous to use regularly, but it’s great enough that we should be looking into what the heck is going on.

Early research into phenibut focused on GABA, the main inhibitory neurotransmitter. The brain has two kinds of GABA receptors, GABA-A and GABA-B. Alcohol, Xanax, Valium, Ambien, barbiturates, and the other classic sedatives all hit GABA-A. There aren’t that many chemicals that hit GABA-B, and the few that are out there tend to be kind of weird – one of them fell to Earth on a meteorite. But phenibut is a GABA-B agonist. This sounds like a neat solution to the mystery: a drug with unique anti-anxiety properties affects a unique inhibitory receptor. But another GABA-B agonist, baclofen, has minimal anti-anxiety effects. It is mostly just a boring muscle relaxant (there was some excitement over a possibility that it might cure alcoholism, but the latest studies say no). So probably GABA-B on its own doesn’t explain phenibut.

This led researchers to propose that phenibut might work as a gabapentinoid. It has the defining GABA backbone, and it has activity at the A2D calcium channel subunit. But its gabapentinoid activity is much weaker than gabapentin itself, so why should its effects be stronger?

Baclofen outdoes phenibut as a GABA-B agonist, and gabapentin outdoes phenibut as a gabapentinoid, but phenibut works better than either. This is the other big gabapentinoid mystery that keeps me awake at night.

Might it be a synergistic effect between the two different actions? If this were true, we would expect taking gabapentin and baclofen together to have phenibut-like effects. But these drugs are sometimes used for the same kinds of neuromuscular conditions and nobody has ever noticed anything out of the ordinary. I would love to see this studied but I don’t expect much.

Phenibut has two enantiomers, r-phenibut and s-phenibut. Both are decent gabapentinoids, but only r-phenibut has GABA-B activity. If both worked equally well, that would suggest phenibut worked on A2D; if r-phenibut worked better, that would implicate GABA. The best source I can find is this study, which says that only r-phenibut has effects on rats. Do the hokey tests they run rats through exactly correspond to treating anxiety in humans? Unclear, but this pushes me more in the direction of thinking GABA-B is an important part of phenibut’s effects. So does a passing resemblance between phenibut and GHB, an unusual drug that works on GABA-B among other things.

Overall I think phenibut is probably more GABA-B agonist than gabapentinoid, but I can’t explain why it’s so different from baclofen.

One fringe possibility: it isn’t. I’ve said that these two drugs are used for different indications, by different populations, and get different results. But the map isn’t the territory, and the way humans use and think about drugs doesn’t always reflect chemical reality. Everyone knew the second generation antipsychotics were totally different from the first generation ones, until we learned that they weren’t really, and the different effects we saw were a combination of using them differently plus having different expectations. And placebo alcohol can still get people pretty drunk. The only study I’ve found directly comparing phenibut to baclofen finds they work for similar indications, at least in rats (see bottom of page 476). And I can find a few comments on Reddit backing this up from experience.

My odds are against this theory – I think there’s probably some real difference between these drugs that we don’t understand. But constant vigilance never hurts.

[EDIT: commenter dtsund points out that baclofen has some issues with blood-brain barrier permeability; see here for more. Although some of it gets through, it could build up in the plasma much faster than in the brain, giving it disproportionately peripheral effects]

Attempted Replication: Does Beef Jerky Cause Manic Episodes?


Last year, a study came out showing that beef jerky and other cured meats could trigger mania in bipolar disorder (paper, popular article). It was a pretty big deal, getting coverage in the national press and affecting the advice psychiatrists (including me) gave their patients.

The study was pretty simple: psychiatrists at a mental hospital in Baltimore asked new patients if they had ever eaten any of a variety of foods. After getting a few hundred responses, they compared answers to controls and across diagnostic categories. The only hit that came up was that people in the hospital for bipolar mania were more likely to have said they ate dry cured meat like beef jerky (odds ratio 3.49). This survived various statistical comparisons and made some biological sense.

The methodology was a little bit weird, because they only asked if they’d ever had the food, not if they’d eaten a lot of it just before becoming sick. If you had beef jerky once when you were fourteen, and ended up in the psych hospital when you were fifty-five, that counted. Either they were hoping that “ever had beef jerky at all” was a good proxy for “eats a lot of beef jerky right now”, or that past consumption produced lasting changes in gut bacteria. In any case, they found a strong effect even after adjusting for confounders and doing the necessary Bonferroni corrections, so it’s hard to argue with success.

Since the study was so simple, and already starting to guide psychiatric practice, I decided to replicate it with the 2019 Slate Star Codex survey.

In a longer section on psychiatric issues, I asked participants “Have you ever been hospitalized for bipolar mania?”. They could answer “Yes, many times”, “Yes, once”, or “No”. 3040 people answered the question, of whom 26 had been hospitalized once, 13 many times, and 3001 not at all.

I also asked participants “How often do you eat beef jerky, meat sticks, or other similar nitrate-cured meats?”. They could answer “Never”, “Less than once a year”, “A few times a year”, “A few times a month”, “A few times a week”, or “Daily or almost daily”. 5,334 participants had eaten these at least once, 2,363 participants had never eaten them.

(for the rest of this post, I’ll use “beef jerky” as shorthand for this longer and more complicated question)

Power calculation: the original study found an odds ratio of about 3.5; because the percent of my sample who had been hospitalized for mania was so low, OR ≈ RR; I decided to test for an odds ratio of 3. About 1.2% of non-jerky-eaters had been hospitalized for mania, so I used this site to calculate the necessary sample size with Group 1 as 1.2%, Group 2 as 3.6% (= 1.2 × 3), an enrollment ratio of 0.46 (the ratio of the 921 jerky-never-eaters to the 2015 jerky-eaters), alpha of 0.05, and power of 80%. It recommended a total sample of 1375, well below the 2974 people I had who answered both questions.
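For the curious, this kind of power calculation can be reproduced approximately with the standard two-proportion normal-approximation formula. Different online calculators apply different corrections (e.g. continuity corrections), so the plain formula below lands in the same ballpark as, but not exactly on, the 1375 quoted above; treat it as a sketch of the method, not a replica of the calculator.

```python
import math


def two_proportion_sample_size(p1, p2, ratio, z_alpha=1.959964, z_beta=0.841621):
    """Sample sizes needed to detect p1 vs p2 with unequal group sizes.

    ratio = n1/n2 (the enrollment ratio).  Plain normal-approximation
    formula, no continuity correction.  Default z values correspond to
    two-sided alpha = 0.05 and power = 0.80.
    """
    n2 = (z_alpha + z_beta) ** 2 * (p1 * (1 - p1) / ratio + p2 * (1 - p2)) \
        / (p1 - p2) ** 2
    n1 = ratio * n2
    return math.ceil(n1), math.ceil(n2)


n1, n2 = two_proportion_sample_size(p1=0.012, p2=0.036, ratio=0.46)
print(n1, n2, n1 + n2)  # roughly 380 non-eaters + 825 eaters, ~1200 total
```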

Of 932 jerky non-eaters, 11 were hospitalized for mania, or 1.2%. Of 2042 jerky-eaters, 27 were hospitalized for mania, or 1.3%. Odds ratio was 1.12, chi-square statistic was 0.102, p = 0.75. The 95% confidence interval was (.55, 2.23). So there was no significant difference in mania hospitalizations between jerky-eaters and non-eaters.
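These statistics all follow from the 2×2 table, and the calculation is short enough to sketch. Note the confidence interval here uses the standard log-odds-ratio normal approximation, which puts the upper bound a hair above the 2.23 quoted above – presumably a slightly different CI method was used there.

```python
import math

# 2x2 table from the survey:
a, b = 27, 2042 - 27   # jerky eaters: hospitalized for mania, not hospitalized
c, d = 11, 932 - 11    # non-eaters:   hospitalized for mania, not hospitalized

odds_ratio = (a * d) / (b * c)                 # ~1.12

# Pearson chi-square for a 2x2 table, no continuity correction.
n = a + b + c + d
chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
p_value = math.erfc(math.sqrt(chi2 / 2))       # chi-square survival fn, 1 df

# 95% CI via the normal approximation on log(OR).
se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
ci_lo = math.exp(math.log(odds_ratio) - 1.96 * se)
ci_hi = math.exp(math.log(odds_ratio) + 1.96 * se)

print(round(odds_ratio, 2), round(chi2, 3), round(p_value, 2))  # 1.12 0.102 0.75
print(round(ci_lo, 2), round(ci_hi, 2))
```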

I also tried to do the opposite comparison, seeing if there was a difference in beef jerky consumption between people with a history of hospitalization for mania and people without such a history. I recoded the “beef jerky” variable to a very rough estimate of how many times per year people ate jerky (“never” = 0, “daily” = 400, etc). The rough estimate wasn’t very principled, but I came up with my unprincipled system before looking at any results. People who had never been hospitalized for mania ate beef jerky an average of 16 times per year; people who had been hospitalized ate it an average of 8 times per year. This is the opposite direction from the one predicted by the original study, and was not significant.

I tried looking at people who had a bipolar diagnosis (which requires at least one episode of mania or hypomania) rather than just people who had been hospitalized for bipolar mania. This gave me four times the sample size of bipolar cases, but there was still no effect. 63% of cases (vs. 69% of controls) had ever eaten jerky, and cases on average ate jerky 15 times a year (compared to 20 times for controls). Neither of these findings was significant.

Why were my survey results so different from the original paper?

My data had some serious limitations. First, I was relying on self-report about mania hospitalization, which is less reliable than catching manic patients in the hospital. Second, I had a much smaller sample size of manic patients (though a larger sample size of controls). Third, I had a different population (SSC readers are probably more homogenous in terms of class, but less homogenous in terms of nationality) than the original study, and did not adjust for confounders.

There were also some strengths to this dataset. I had a finer-grained measure of beef jerky consumption than the original study. I had a larger control group. I was able to be more towards the confirmatory side of confirmatory/exploratory analysis.

Despite the limitations, there was a pretty striking lack of effect for jerky consumption. This is despite the dataset being sufficiently well-powered to confirm other effects that are classically known to exist (for example, people hospitalized for mania had higher self-rated childhood trauma than controls, p < 0.001). This is an important finding and should be easy to test by anyone with access to psychiatric patients or who is surveying a large population. I urge other people (hint to psychiatry residents reading this blog who have to do a research project) to look into this further. I welcome people trying to replicate or expand on these results. All of the data used in this post are freely available and can be downloaded here.
