In 2024, Ceballos-Laita and colleagues published a systematic review in the journal Healthcare with a straightforward title and an equally direct conclusion. Across 15 randomised controlled trials covering both musculoskeletal and non-musculoskeletal conditions, the review found that craniosacral therapy produced no benefits in any of the conditions assessed.
This is a significant finding, and it deserves honest engagement rather than dismissal. The authors conducted a systematic search, applied pre-specified inclusion criteria, assessed each study's methodological quality, and reached a conclusion that sits at the more critical end of the CST evidence spectrum.
At the same time, reading this review carefully — rather than just noting its conclusion — reveals important nuances about what it can and can't tell us. What the review found, how it was done, and what its own limitations are all matter for understanding what to make of it.
What the review covered
The review searched multiple databases for randomised controlled trials measuring the effect of craniosacral therapy on any health condition. Fifteen RCTs met the inclusion criteria. The conditions studied included musculoskeletal pain, headache, neck pain, and various other presentations. Each study was assessed for methodological quality using standard risk-of-bias tools.
The finding that 14 of the 15 included studies had a high risk of bias is striking. High risk of bias doesn't mean the studies were fraudulent or carelessly done. It's a technical assessment based on criteria like blinding (whether participants and assessors knew who got which treatment), allocation concealment (whether the randomisation was properly protected), and whether the analysis was pre-specified. CST trials struggle particularly with blinding: it's hard to deliver sham touch in a way participants can't distinguish from the real thing, and practitioners obviously know what they're delivering.
The review was limited to published, English-language RCTs, which means any positive effects found in smaller studies published in other languages or in formats other than RCTs wouldn't appear. It also means the heterogeneity across the included studies — different conditions, populations, CST protocols, outcome measures — makes it hard to draw firm conclusions about any specific use of CST.
What 'high risk of bias' actually means
When a systematic review says studies have high risk of bias, it's making a technical point about internal validity. A high-bias study isn't necessarily reporting false results. It's reporting results we can't be fully confident in, because the design leaves alternative explanations open.
For CST research specifically, the blinding problem is real and not easily solved. The gold standard in drug trials is a double-blind design where neither participants nor practitioners know who got the active treatment. With hands-on therapy, the practitioner always knows what they're doing. Participants often have a sense of which group they're in, particularly if they've had the therapy before. This isn't a failing of individual researchers. It's a structural feature of how hands-on therapies can and can't be studied.
What this means in practice is that the evidence base for CST, as assessed by standard RCT quality criteria, is genuinely weak. That's an honest assessment. It doesn't mean CST has been proven not to work — absence of proof isn't proof of absence — but it does mean anyone making strong clinical claims about its effectiveness should be able to engage with these methodological realities.
How to think about this review
If you're already finding benefit from CST, this review doesn't invalidate your experience. Population-level analyses of averaged outcomes across heterogeneous groups answer a different question from what happens between a specific practitioner and a specific client. The two kinds of knowing aren't in competition.
If you're considering trying CST and want to weigh the evidence, this review is the honest landscape: the RCT evidence base is thin, and the existing trials have significant methodological weaknesses. CST is generally safe and many people find it helpful, but the trial evidence for specific conditions isn't strong. That's a reasonable summary to take into your first session.
What the review doesn't address — and can't, given its methodology — is the quality of the experience itself, the value of a therapeutic relationship built on careful attention, or the possibility that some of what CST does works through pathways that standard pain and function scales don't capture well. These aren't get-out clauses. They're real limitations of what RCTs measure.
The 2024 Ceballos-Laita review is the most current systematic assessment of CST's clinical evidence base, and its conclusion is clear. Taking it seriously doesn't require abandoning CST. But it does ask for honesty about what the evidence currently shows and what it doesn't yet tell us.