If you've been reading about craniosacral therapy (CST) research, you may have noticed something confusing. Individual trials sometimes report meaningful positive results. Broader reviews of the same evidence conclude that CST doesn't work. The contradiction can feel disorienting, especially if you've experienced benefit yourself or are trying to make an informed decision about whether to try it.
The short answer is that both can be true at once. A handful of individual trials really have found positive results for specific conditions. Broader meta-analyses that pool results across many conditions really have found no significant effect overall. Understanding why means looking at how the research is designed and what happens when you combine studies that may have very little in common.
This isn't a story about one side being right. It's a story about how research works — and why specific findings still matter even when they vanish inside a larger statistical pool.
What individual trials have found
Several well-designed RCTs have reported positive results for CST in specific conditions. A 2017 trial in migraine patients found meaningful reductions in headache frequency and intensity. Studies in infant colic have shown improvements in crying duration and parental stress. Trials in neck pain and fibromyalgia have found pain reductions and improvements in quality of life.
These aren't all minor or poorly conducted studies. Several have reasonably good methodology, adequate sample sizes, and sham controls. Within those specific populations, the positive signals were real. Reading the individual reports, the conclusion looks fairly clear: CST may help with this particular condition.
The picture gets more complicated when you step back to the broader review literature.
Why meta-analyses dilute condition-specific effects
Meta-analyses combine results from multiple studies into a single overall estimate. This is statistically powerful, but it has a real catch: you can only meaningfully combine studies that are measuring similar things in similar populations. When you mix migraine studies with studies of autism, low back pain, ADHD, and surgical recovery, you're asking a single analysis to absorb an enormous amount of variation.
The 2024 reviews that found no significant effects — including the Ceballos-Laita review (15 RCTs) and the Amendolara review (24 RCTs across 1,613 participants) — covered a wide range of conditions. When CST works well for some conditions and shows no effect for others, combining everything tends to produce a result near zero. The positive signals in migraine or colic don't disappear because CST stopped working. They get averaged out against conditions where the evidence is weaker or absent.
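The averaging-out effect is easy to see with a toy calculation. The sketch below uses invented effect sizes (not data from the cited reviews) and a simple fixed-effect pool, weighting each trial by the inverse of its variance, which is the standard approach in meta-analysis. A clear effect in three hypothetical migraine trials shrinks toward zero once those trials are pooled with nine hypothetical null trials in other conditions.

```python
def pooled_effect(effects, variances):
    """Inverse-variance weighted mean of per-trial effect sizes
    (a basic fixed-effect meta-analytic pool)."""
    weights = [1 / v for v in variances]
    return sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Three hypothetical migraine trials with a real effect (d around 0.5)...
migraine = [0.55, 0.45, 0.50]
# ...and nine hypothetical trials in other conditions showing nothing.
other = [0.05, -0.10, 0.00, 0.10, -0.05, 0.00, 0.05, -0.02, 0.02]

effects = migraine + other
variances = [0.04] * len(effects)  # equal precision assumed, for simplicity

print(round(pooled_effect(migraine, [0.04] * 3), 2))  # 0.5  (condition-specific)
print(round(pooled_effect(effects, variances), 2))    # 0.13 (all-condition pool)
```

The migraine-only pool preserves the effect; the all-condition pool reports something close to zero, even though nothing about the migraine trials changed.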
This is a known issue in systematic review methodology, not a flaw unique to CST research. It's one reason that condition-specific reviews are often more informative than all-condition pooled analyses.
Study quality and how it shapes pooled results
Not all trials in a meta-analysis are equally well-conducted. Studies with small samples, weaker blinding, or methodological limitations produce noisier, less reliable results. When lower-quality studies are included alongside higher-quality ones, the overall effect estimate can be dragged toward zero — not because the therapy doesn't work, but because some of the studies can't reliably detect an effect either way.
The 2024 reviews were broad by design, which means they included studies at a range of quality levels. More inclusive reviews are valuable, but the trade-off is that more noise enters the analysis. A positive result from a well-designed trial in a specific population can get statistically outweighed by several smaller, weaker trials that found nothing.
Reviewers account for this with quality ratings and sensitivity analyses, but even with those adjustments, pooling heterogeneous studies remains a real analytical challenge in CST research.
Publication bias and what it means here
Publication bias — the tendency for positive results to be published while null results stay in file drawers — is a problem across all medical research. In a small field like CST, where the total number of trials is still limited, this dynamic can meaningfully skew what's available for reviewers to analyse.
If positive trials are more likely to make it to publication, the published record overrepresents success. Meta-analyses try to correct for this with statistical tests, but those tests aren't perfect, especially with small numbers of studies. That's one more reason the picture is genuinely complex rather than cleanly settled in either direction.
What this means in practice
The individual positive signals in migraine, infant colic, neck pain, and fibromyalgia are real published findings from real trials. They deserve to be taken seriously on their own terms, not dismissed because a broad pooled analysis didn't replicate them at the aggregate level. The research is still developing, and condition-specific trials with strong methodology remain the most useful guide to where CST may genuinely help.
If a specific condition brought you to this page, the individual trial findings in that area are worth reading in full. The broader reviews tell you something important about the state of the field overall, but they're not the last word on whether CST might be worth exploring for your particular situation.