It's easy to see the appeal of a headline like 'RCT shows craniosacral therapy reduces migraine frequency.' A randomised controlled trial is the gold standard of clinical evidence. If an RCT found a positive effect, doesn't that settle it?
Not on its own. This isn't a criticism of CST specifically. It's how research works in general, and understanding the logic makes the evidence picture for any therapy clearer. A single positive trial is meaningful, but it's one data point; strong conclusions need an accumulating body of evidence behind them.
Why a well-run trial can still be underpowered
Statistical power is a trial's ability to detect a real effect if one exists. A well-designed RCT with careful randomisation and a convincing sham control can still be underpowered if it doesn't have enough participants. With small sample sizes, chance plays a bigger role. A trial with 50 participants might find a positive effect partly because the groups happened to differ in some unmeasured way, not because the treatment was responsible.
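If you want to see what that looks like in practice, here is a minimal Python sketch that simulates two-arm trials and counts how often a standard t-test detects the effect. The assumed effect size (Cohen's d = 0.3) and every other number here are illustrative choices, not figures from the CST literature.

```python
# Minimal power simulation: how often does a two-arm trial of a given
# size detect a modest true effect? All numbers are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def power_estimate(n_per_arm, effect_size=0.3, n_sims=5_000, alpha=0.05):
    """Fraction of simulated trials whose t-test reaches p < alpha."""
    hits = 0
    for _ in range(n_sims):
        control = rng.normal(0.0, 1.0, n_per_arm)           # no effect
        treated = rng.normal(effect_size, 1.0, n_per_arm)   # real effect
        _, p = stats.ttest_ind(treated, control)
        if p < alpha:
            hits += 1
    return hits / n_sims

for n in (25, 50, 100, 200):
    print(f"n per arm = {n:3d}: estimated power ~ {power_estimate(n):.2f}")
```

Under these assumptions, trials with 50 participants per arm detect the real effect only around a third of the time, well short of the conventional 80% power target.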
Most CST trials are relatively small. That's not a failing of the researchers. It reflects the resources available for manual therapy research and the practical difficulty of running large clinical trials outside pharmaceutical funding. But it does mean a positive result from a small, well-conducted trial should be treated as preliminary. It raises a hypothesis worth investigating further, rather than confirming the treatment works.
That's worth holding onto when you encounter reports of positive CST trials. A 2022 RCT finding CST reduced migraine frequency is interesting and worth taking seriously. It isn't, by itself, enough to say CST is an established treatment for migraine.
Publication bias and the file drawer problem
Publication bias is a well-documented phenomenon in research. Studies with positive or interesting results are more likely to be published than studies with null or negative results, so the published literature in any field tends to overrepresent positive findings. Studies that found no effect, or found CST performed no better than the control condition, are more likely to sit unpublished in researchers' files.
This matters when you're trying to assess whether the overall pattern of results supports a conclusion. If ten trials were run and five found positive effects, those five might all get published. The other five, which found nothing, might not. Looking only at published research, you'd see a 100% positive rate, which would be deeply misleading.
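The file drawer effect is easy to simulate. The toy Python sketch below assumes a true effect of exactly zero, runs a thousand hypothetical trials, and "publishes" only the ones that come out positive and statistically significant. Everything here is invented for intuition.

```python
# Toy file-drawer simulation: true effect is zero, but only positive,
# significant results get "published". Hypothetical numbers throughout.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
published, all_effects = [], []

for _ in range(1_000):                    # 1,000 hypothetical trials
    control = rng.normal(0.0, 1.0, 50)    # true treatment effect is zero
    treated = rng.normal(0.0, 1.0, 50)
    effect = treated.mean() - control.mean()
    _, p = stats.ttest_ind(treated, control)
    all_effects.append(effect)
    if p < 0.05 and effect > 0:           # only positive, significant
        published.append(effect)          # results reach the journals

print(f"mean effect, all trials:      {np.mean(all_effects):+.3f}")
print(f"mean effect, 'published' set: {np.mean(published):+.3f}")
print(f"share of trials published:    {len(published) / 1000:.1%}")
```

Even though no treatment effect exists, the "published" subset shows a substantial average effect, because it consists entirely of trials that got lucky in the same direction.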
The field has tools to detect and account for publication bias, including funnel plots and statistical tests for asymmetry in meta-analyses. When systematic reviews note concerns about publication bias in the CST literature, that's what they're talking about. It doesn't mean all positive CST results are wrong. It does add to the uncertainty.
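For readers who want the mechanics, Egger's regression test, one standard asymmetry check, is short enough to sketch. It regresses each study's standardised effect on its precision; an intercept clearly different from zero suggests that smaller, noisier studies are reporting systematically larger effects. The numbers below are invented to show the pattern, not drawn from any real review.

```python
# Sketch of Egger's regression test for funnel-plot asymmetry,
# using invented per-study effect estimates and standard errors.
import numpy as np
from scipy import stats

effects = np.array([0.45, 0.38, 0.30, 0.12, 0.05])  # hypothetical effects
ses     = np.array([0.25, 0.20, 0.15, 0.10, 0.08])  # their standard errors

# Regress standardised effect (effect / SE) on precision (1 / SE).
# A clearly non-zero intercept is the classic asymmetry signal.
res = stats.linregress(1.0 / ses, effects / ses)
t_stat = res.intercept / res.intercept_stderr
p_val = 2 * stats.t.sf(abs(t_stat), df=len(effects) - 2)
print(f"Egger intercept = {res.intercept:.2f}, p = {p_val:.3f}")
```

In this fabricated dataset the least precise studies report the largest effects, so the intercept comes out large and positive, exactly the pattern reviewers flag as possible publication bias.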
How meta-analyses change the picture
A meta-analysis pools results from multiple trials to get a larger, more stable estimate of an effect. When a meta-analysis includes a positive trial alongside other trials that found no significant effect, the positive trial's contribution gets weighted against the others. Depending on the quality and sample size of each, a single positive result can easily be outweighed by a collection of null results.
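The weighting is plain arithmetic. A fixed-effect inverse-variance pool, sketched below with invented numbers, shows how a single positive estimate can be diluted by several more precise null results.

```python
# Fixed-effect inverse-variance pooling, the core arithmetic of a
# meta-analysis. One hypothetical positive trial sits alongside four
# more precise null results; all numbers are invented.
import numpy as np

effects = np.array([0.50, 0.05, -0.02, 0.08, 0.01])  # study estimates
ses     = np.array([0.22, 0.10, 0.12, 0.09, 0.11])   # standard errors

weights = 1.0 / ses**2                  # precise studies count for more
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))
print(f"pooled effect = {pooled:+.3f} (SE {pooled_se:.3f})")
# The positive trial (0.50) carries only about 5% of the total weight,
# so the pooled estimate lands near the null results, not the outlier.
```

Real meta-analyses usually use random-effects models and more careful variance handling, but the core intuition, that precision buys weight, is the same.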
This is why a positive single trial can sit alongside a sceptical systematic review. The meta-analysis isn't ignoring the positive trial. It's putting it in context with everything else that's been studied. More recent reviews of CST, covering larger study pools than earlier analyses, have sometimes reached more cautious conclusions precisely because they include a wider range of condition-specific trials, some of which showed little benefit.
None of this means CST doesn't work. It means the evidence base is still developing, and the picture is genuinely mixed. Treatments that have gone on to become well-established sometimes spent years in this zone of promising-but-not-yet-settled evidence. The honest position is to stay curious, take individual trials seriously as signals worth following up, and resist the pull of certainty in either direction.
The scientific process is iterative and slow, and no single study, however well-designed, carries the weight of a settled answer. That's a feature of how research works, not a flaw in CST trials specifically. The accumulating literature is worth watching, and the individual condition-specific results worth reading carefully.