You don't need a research background to get something useful from a clinical study. With a few basic concepts in hand, most CST trials become easier to evaluate — and you'll be better placed to tell a genuinely promising result from one that needs heavy caveats.
This guide covers ideas that come up again and again in CST research. The goal isn't to make you a methodologist. It's to give you enough context that when you see a headline saying 'study proves CST helps with X' or 'review finds no evidence for CST,' you have a sense of what that means. The answer is almost always more interesting, and more uncertain, than the headline suggests.
RCTs and systematic reviews
A randomised controlled trial (RCT) is the standard way to test whether a treatment works. Participants are randomly assigned to a treatment group or a control group, so any difference between the groups is more likely to come from the treatment than from something else. The 'controlled' part means there's a comparison group, ideally either a convincing sham treatment or standard care. RCTs are treated as the gold standard because randomisation tends to balance other influences on the outcome across the groups, including ones nobody thought to measure.
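To see why randomisation does so much of the work, a toy simulation helps. The sketch below (plain Python, with invented 'baseline pain' numbers, not data from any real trial) randomly splits 100 simulated participants and checks whether a trait the trial never stratified on ends up balanced anyway.

```python
import random
import statistics

rng = random.Random(0)

# 100 simulated participants, each with a trait the trial never
# measures: say, baseline pain on a 0-10 scale.
baseline = [rng.uniform(0, 10) for _ in range(100)]

# Simple randomisation: shuffle the participants, split down the middle.
order = list(range(100))
rng.shuffle(order)
treatment = [baseline[i] for i in order[:50]]
control = [baseline[i] for i in order[50:]]

print(round(statistics.mean(treatment), 2),
      round(statistics.mean(control), 2))
# The two group means land close together: randomisation tends to
# balance traits across arms, including ones nobody measured.
```

Real trials use more elaborate schemes (blocked or stratified randomisation) to guarantee balanced arm sizes, but the principle is the same.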
A systematic review gathers multiple RCTs (and sometimes other studies) using explicit search criteria and assesses them together. With a bigger combined picture, you can draw more confident conclusions than any single trial allows. A meta-analysis is the statistical step some reviews add: combining the trials' results numerically into a single pooled estimate. Both are valuable, but only as good as the studies inside them. If the underlying trials are small or poorly designed, pooling doesn't fix that; it just scales the problem.
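As a toy illustration of that numerical step, here is a minimal fixed-effect meta-analysis in Python. Each hypothetical trial's effect estimate is weighted by the inverse of its variance, so more precise trials count for more. The numbers are invented for illustration; real reviews also test for heterogeneity and often use random-effects models instead.

```python
import math

def pooled_effect(effects, ses):
    """Fixed-effect meta-analysis: weight each trial's effect
    estimate by the inverse of its variance, then average.
    A sketch, not a substitute for proper review software."""
    weights = [1 / se**2 for se in ses]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    return pooled, pooled_se

# Three hypothetical trials: effect (pain reduction) and standard error.
effect, se = pooled_effect([0.8, 0.3, 0.5], [0.40, 0.25, 0.30])
lo, hi = effect - 1.96 * se, effect + 1.96 * se
print(f"pooled effect {effect:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
# Note: if all three inputs are biased, the pooled estimate is
# biased too, just with a narrower confidence interval.
```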
When you see a claim about CST, the type of evidence matters. A single positive RCT beats an anecdote but loses to a well-conducted systematic review. And a review that finds little evidence isn't the same as evidence that the therapy doesn't work — sometimes the existing trials just weren't good enough to show anything clearly.
Risk of bias and the sham problem
Risk of bias is shorthand for ways a study's design might skew its results. Common sources: participants knowing which group they're in (which can shape how they report symptoms), practitioners knowing who's getting the real treatment, or researchers with a stake in the outcome doing the analysis. Good trials limit bias through blinding and pre-registering outcomes.
In bodywork research, building a convincing sham is one of the hardest problems. For a drug trial, you can give a sugar pill that looks identical to the medication. For CST, you have to create a sham that feels plausible to the participant but lacks the active ingredient — whatever that is. No sham has been universally accepted as good enough. Some trials use a 'light touch sham' where a therapist places hands without the intentional quality of a real session. Whether that's enough to blind participants is debated. The limitation is real, but it applies to almost all manual therapy research — it doesn't single out CST.
Sample size and clinical significance
Sample size is how many participants are in a trial. Small samples cause two problems. They may be too small to detect a real but modest effect, so a genuine benefit can look like nothing. And when a small trial does turn up a statistically significant result, that result is more likely to be a fluke or an exaggerated estimate of the true effect, because at low numbers only unusually large chance differences clear the significance bar. Most CST trials run between 40 and 150 participants. That's enough for interesting preliminary data. It's usually not enough to anchor a definitive clinical recommendation.
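Both problems are easy to see in simulation. The sketch below (hypothetical numbers, assuming a modest true effect of 0.5 standard deviations) runs thousands of simulated two-arm trials at different sizes and reports how often they reach p < 0.05, and how big the effect looks in the trials that do.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def simulate(n_per_arm, true_effect=0.5, sims=5_000):
    """Simulate trials of a treatment with a modest true effect.
    Returns the share reaching p < 0.05 (the power) and the mean
    observed effect among only the 'significant' trials."""
    hits, hit_effects = 0, []
    for _ in range(sims):
        treat = rng.normal(true_effect, 1, n_per_arm)
        ctrl = rng.normal(0, 1, n_per_arm)
        if stats.ttest_ind(treat, ctrl).pvalue < 0.05:
            hits += 1
            hit_effects.append(treat.mean() - ctrl.mean())
    return hits / sims, np.mean(hit_effects)

for n in (20, 75, 400):
    power, observed = simulate(n)
    print(f"n={n:>3} per arm: power {power:.0%}, "
          f"mean 'significant' effect {observed:.2f} (true 0.50)")
```

At 20 per arm, most runs miss the real effect entirely, and the runs that do hit significance overstate it. At 400 per arm, both problems largely disappear.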
Statistical significance is the conventional threshold researchers use to decide whether a result is unlikely to be chance. It's usually expressed as a p-value: p < 0.05 means that if there were no real effect, a difference at least this large would show up less than 5% of the time. But with a large enough sample, statistical significance can be reached even when the actual difference is tiny. That's where clinical significance comes in: is the difference large enough to actually matter to a patient? A trial might find that CST reduced pain by 0.4 points on a 10-point scale with p = 0.03. Statistically significant, but probably not clinically meaningful.
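To make the gap between the two concrete, here is a toy example (invented numbers, not any real CST trial): a large simulated trial in which the true pain reduction really is only 0.4 points on a 10-point scale.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Hypothetical large trial: true pain reduction of 0.4 points on a
# 10-point scale (SD about 2.0), 800 participants per arm.
treat = rng.normal(loc=5.0 - 0.4, scale=2.0, size=800)
ctrl = rng.normal(loc=5.0, scale=2.0, size=800)

result = stats.ttest_ind(treat, ctrl)
diff = ctrl.mean() - treat.mean()
print(f"difference {diff:.2f} points, p = {result.pvalue:.4f}")
# With arms this large, a 0.4-point difference is easily
# 'significant', yet it sits well below the 1-2 points often
# quoted as a minimal clinically important difference for pain.
```

The p-value answers 'is this difference likely to be chance?'; it says nothing about whether a patient would notice it.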
When reading CST trials, look for both. Effect sizes, confidence intervals, and the researchers' own discussion of clinical meaningfulness tell you more than a p-value alone.
Reading research critically isn't about looking for reasons to dismiss it. It's about understanding what a study can and can't tell you. CST research, read carefully rather than at face value in either direction, has genuine signals worth attention — particularly for specific conditions where better trials have been done.