Reading clinical trials is a skill, and most of us were never taught it. When a study reports a 'significant improvement' in people receiving craniosacral therapy, that sounds encouraging. But the meaning of the finding depends on something most coverage skips: whether the improvement was the trial's primary outcome or a secondary one.
This matters a lot, and understanding it doesn't take a statistics degree. It does take knowing how trials are designed and why the rules exist.
CST research offers some clear examples of this in action. Working through them helps you read the evidence honestly, without dismissing real findings or overstating them.
Primary and secondary outcomes
When researchers design a trial, they register it in advance and specify one main thing they're measuring. That's the primary outcome — the question the trial is built to answer. Everything else they measure is secondary: extra data collected to fill in the picture.
The reason for pre-specifying is statistical. If you measure twenty things in a study, probability alone says one or two will look significant by chance, even if the therapy does nothing. Committing in advance to one primary measure cuts the risk of treating a chance finding as a real one.
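To make that concrete, here's a rough back-of-the-envelope sketch, not drawn from any of the trials discussed here: it simulates a therapy with no real effect, measured on twenty independent outcomes at the usual 0.05 threshold. The outcome count, the threshold, and the independence assumption are all illustrative choices, not facts about any particular study.

```python
import random

# Illustrative only: a hypothetical trial of a therapy with no real effect,
# where researchers measure many outcomes. With a 0.05 significance
# threshold, each outcome has a 5% chance of looking "significant"
# purely by chance (assuming the outcomes are independent).

N_OUTCOMES = 20      # outcomes measured in one hypothetical trial (assumed)
ALPHA = 0.05         # conventional significance threshold
N_SIMULATIONS = 10_000

sims_with_false_positive = 0
total_false_positives = 0

for _ in range(N_SIMULATIONS):
    # Under "no real effect", each outcome is "significant" with probability ALPHA.
    hits = sum(1 for _ in range(N_OUTCOMES) if random.random() < ALPHA)
    total_false_positives += hits
    if hits > 0:
        sims_with_false_positive += 1

print(f"Average chance findings per simulated trial: "
      f"{total_false_positives / N_SIMULATIONS:.2f}")
print(f"Simulated trials with at least one chance 'significant' result: "
      f"{sims_with_false_positive / N_SIMULATIONS:.0%}")
```

Run it and you'll see roughly one chance "significant" result per simulated trial, and around two thirds of trials showing at least one, even though the therapy does nothing. That is the problem pre-specifying a single primary outcome is designed to contain.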
When a trial meets its primary outcome, the result carries strong statistical weight. When it misses the primary but finds positive results in secondary measures, those secondary findings still matter — but they need careful reading.
The 2016 low back pain trial
A 2016 randomised controlled trial of CST for low back pain shows how this plays out. The trial was reasonably well designed, with a sham control and an adequate sample. Researchers measured several outcomes, with the Roland-Morris Disability Questionnaire (RMDQ) as the primary endpoint — a validated measure of how much low back pain affects daily function.
The primary result: p=0.060. That just misses the conventional threshold of p<0.05. The trial's main question — does CST improve functional disability more than sham? — wasn't answered with statistical confidence.
The trial also measured pain intensity as a secondary outcome, and there the picture was more positive. Pain scores were meaningfully lower in the CST group. That's a genuine result. But because it comes from a secondary outcome in a trial that missed its primary endpoint, it has to be interpreted cautiously.
Reading secondary findings fairly
The right response to a secondary finding like that one isn't to dismiss it. Researchers include secondary outcomes because they're worth measuring, and patterns across secondary outcomes in multiple trials can point to where larger confirmatory studies should go.
The right response is also not to treat it as strong evidence that CST works for low back pain. The honest framing is this: there's a signal worth investigating, but we don't yet have definitive evidence from a well-powered trial with pain as the pre-specified primary endpoint.
This matters because health coverage often strips out the primary/secondary distinction entirely. A headline saying 'Study finds CST reduces back pain' isn't lying, but it leaves out the context that makes the finding interpretable.
Why this is useful, not discouraging
Knowing this doesn't make you a sceptic about everything. It gives you a better sense of when to take a finding seriously.
A trial that meets its primary outcome — where the main pre-specified measure showed a significant, clinically meaningful change — is a stronger signal than one where the primary missed but a secondary was positive. Both are worth knowing, but they don't carry equal weight.
For CST, this framework explains why the evidence is genuinely mixed rather than clearly positive or negative. Some conditions have strong primary-outcome results. Others have encouraging secondary findings that point toward but don't yet confirm a benefit. Knowing the difference helps you have a grounded conversation with a practitioner about what the research does and doesn't support for your situation.
CST research is still developing, and the primary/secondary distinction is one reason to read past the headlines. The findings that hold up are worth knowing — and the ones that need more research are worth asking about too.