Research

Headache-Specific and All-Condition Reviews Undercut Broad Therapeutic Claims for CST

An explainer focused on how review-level evidence narrows or weakens broad marketing claims. A headache-specific 2023 review found statistically significant but clinically unimportant pain change with very low-certainty evidence, while broader 2024 reviews concluded CST showed no meaningful benefits across assessed conditions.

2026-03-21

Two kinds of recent reviews have shaped discussion of CST evidence: a 2023 review focused on headache, and a pair of 2024 reviews that looked across many conditions at once. Both complicate simple positive claims about CST, in different ways and for different reasons.

Understanding what the reviews found, and what their conclusions do and don't mean for individuals, takes a bit of effort. The story is more nuanced than either "CST works for headaches" or "CST doesn't work," and the nuance matters if you're trying to decide.

This article walks through each review, what it found, and how to read it honestly without losing sight of what individual patients actually report.

The 2023 headache review

The 2023 systematic review and meta-analysis on CST for headache is notable because headache is one of the more common reasons people try CST. The review pooled results from multiple RCTs and found a statistically significant reduction in headache pain intensity in patients receiving CST compared with control groups.

That sounds positive, and in a narrow technical sense it is: the effect crossed statistical significance. But the reviewers also concluded that the average effect was clinically unimportant. The change in pain scores, while detectable, was small enough that they judged it unlikely to translate into a meaningful difference in day-to-day life for most people.

They also rated the certainty of evidence as very low. They had concerns about blinding, sample sizes, outcome measurement, and study conduct. The combination — statistical significance, clinically unimportant effect size, very low certainty — produces a finding that's genuinely hard to interpret.

What 'clinically unimportant' means

Researchers use the idea of minimal clinically important difference (MCID) to separate effects that are statistically detectable from effects that would actually matter to a patient. An effect can be real and measurable but still fall below the threshold most patients would notice as meaningful improvement.

When the 2023 review called the average effect clinically unimportant, it was saying that across all the included studies, the average improvement fell below that threshold. That's a statement about the average across a research sample, not about any particular patient.
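The significance-versus-importance distinction can be made concrete with a small numerical sketch. All the numbers below are hypothetical and chosen only for illustration (they are not taken from the 2023 review): with a large enough sample, even a small average pain reduction on a 0–10 scale clears the statistical bar while falling well short of a commonly assumed MCID of around 2 points.

```python
import math

# Hypothetical numbers on a 0-10 pain scale, for illustration only.
mean_reduction = 0.6   # assumed average pain-score change vs control
sd = 2.0               # assumed standard deviation of that change
n = 400                # assumed pooled sample size
mcid = 2.0             # assumed minimal clinically important difference

se = sd / math.sqrt(n)                     # standard error of the mean
t = mean_reduction / se                    # one-sample t statistic
statistically_significant = abs(t) > 1.96  # ~5% two-sided cutoff at large n
clinically_important = mean_reduction >= mcid

print(f"t = {t:.1f}, statistically significant: {statistically_significant}")
print(f"effect {mean_reduction} vs MCID {mcid}, "
      f"clinically important: {clinically_important}")
```

With these made-up inputs, the effect is comfortably significant (t = 6.0) yet only 0.6 points, far below the assumed 2-point MCID: real, detectable, and still too small for most patients to notice.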

Some people in those trials almost certainly experienced relief that felt meaningful to them. Others may have seen little change. Research averages smooth over individual variation. Knowing the average effect was small in research terms is important context, but it doesn't tell you with certainty that CST won't help your headaches — especially given how low the certainty of the evidence is overall.

The 2024 all-condition reviews

The two 2024 reviews — Ceballos-Laita (15 RCTs across multiple conditions) and Amendolara (24 RCTs, 1,613 participants) — both set out to look across a broader range of conditions than any single-condition review could. Both reached the conclusion that there were no significant benefits for any condition studied.

These are notable negative findings. Amendolara, as the largest CST meta-analysis to date, carries considerable weight. And when two independent reviews reach similar conclusions, that consistency deserves to be taken seriously.

At the same time, the all-condition pooling approach has a known limitation. If CST genuinely helps with some specific conditions but not others, combining everything dilutes or erases the positive signals. This isn't an excuse for the negative findings. It's a structural reality of how broad meta-analyses work, and it's why individual condition-specific RCTs still matter alongside the broad reviews.
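The dilution effect of pooling can also be sketched numerically. The figures below are invented for illustration, and sample-size weighting is used as a simplified stand-in for the inverse-variance weighting a real meta-analysis would apply: a genuine effect confined to one condition shrinks sharply once it is averaged with null results from other conditions.

```python
# Hypothetical standardized effect sizes (e.g. SMD) for four conditions.
# Only condition_A has a real effect; the rest are null.
effects = {"condition_A": 0.50, "condition_B": 0.00,
           "condition_C": 0.00, "condition_D": 0.00}
# Equal sample sizes, used as simplified pooling weights.
weights = {"condition_A": 100, "condition_B": 100,
           "condition_C": 100, "condition_D": 100}

pooled = sum(effects[c] * weights[c] for c in effects) / sum(weights.values())
print(f"pooled effect across all conditions: {pooled:.3f}")
```

In this toy setup, a 0.50 effect in one condition becomes a pooled 0.125 across four, which is why a null all-condition result is compatible in principle with a condition-specific benefit, even though it is also compatible with no benefit anywhere.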

Holding both findings together

The honest picture is that the evidence for CST is weak at the level of broad meta-analysis and mixed or modest at best in condition-specific reviews. That's a real limitation of the current evidence base, and it's worth acknowledging rather than minimising.

It also doesn't mean that individual trials showing positive results are fabricated or worthless. It doesn't mean people who experience relief from CST are imagining things. Research averages and individual experience are different things, and a field with a small, variable evidence base isn't one where confident negative conclusions are fully warranted either.

If you're considering CST for headache: the picture is mixed and the evidence uncertain. Individual trials have found some positive signals. The reviewers who synthesised them found the signals modest and uncertain. That's the honest state of play, and worth knowing as you decide.

The headache and all-condition reviews are part of a complex evidence picture. Neither closes the question of whether CST might help individuals with headache. They do tell you the evidence isn't strong, and that going in with realistic expectations is sensible.