Exercise has become a popular treatment for many musculoskeletal issues over the past few years, but surprisingly little is actually understood, even after a mountain of research, about how best to use it in clinical practice, or perhaps even whether it's worth using at all.
In this blog, we will look at the question of "does exercise 'work' for pain?", as 'works' is a phrase that gets bandied about a lot without much clarity. It's similar to "the research shows": often the deeper you dive into the research, the LESS clear it becomes, especially around pain.
I will write a few more posts on the "everything works/nothing works" perspective, how exercise might work, and how specific we may need to be.
Well, Does It Work?
Well, it depends on what you mean by 'work'. Which condition? What are you comparing it against? Do you mean pain? Do you mean disability? Did it have an effect on physical function or some biomotor variable? These are all quite different questions that often get lumped into the catch-all term 'works'.
We also need to think about WHY we might conclude that it 'works' or does not. Is it because I have read widely in this area? Is it because whoever I follow on Twitter tells me it does or doesn't, or am I simply following my biases? A little bit of epistemology, perhaps.
Like ANY intervention, exercise should be thoroughly scrutinized, and the basic reality is that we have to be prepared for exercise not to work for everybody. It is not a magic bullet or panacea, and a lot of what we do is really a bunch of informed trial and error, but we will come back to that in another post. We have to remember we are dealing with HUMANS, who tend to be wonderfully variable in their responses, as most biological organisms are.
We now discuss pain as being a complex, multifactorial experience (blah blah blah), so why do we expect one thing to come along and solve it all for everybody? For some it will be revelatory, for some it will do very little, and for others it may even flare them up, so we need a bit of perspective. That said, as a standalone treatment I think there is a lot to like here, especially with the benefits for our health and well-being.
Just Hurry Up And Tell Me…..
The whole idea of "it works" could stem from how we have traditionally looked at the research. To show a difference between two interventions or 'usual care', it has been common to use a significance level of p = 0.05 to indicate that something 'works'. Roughly speaking, the p-value tells us how likely we would be to see a difference at least as big as the one observed if there were actually no real difference between the groups, and this is then used to reject or accept the hypothesis of a study. Generally, something like "treatment A WORKS better than treatment B", or something along those lines. Is exercise better than manual therapy? Is it better than usual care? You get the picture. I am no statistician or researcher, only a humble clinician, so bear with me here.
So we might say exercise is better than usual care or whatever else, but the real question should be HOW MUCH better: the actual magnitude or size of the difference. The p-value is a statistical tool, not a measure of the average size of the effect. Something can be statistically significant without really making a difference to our patients, and this is where the minimal clinically important difference (MCID) comes in.
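To make this concrete, here is a toy simulation (a sketch, not real trial data: the group means, spread, and sample size are all invented for illustration). With a large enough trial, a difference of only 0.3 points on a 0-10 pain scale, far below a 1-2 point MCID, will still come out as 'statistically significant':

```python
import random
import statistics
from statistics import NormalDist

random.seed(42)

# Hypothetical trial: exercise vs usual care, pain on a 0-10 scale.
# The true average difference is only 0.3 points.
n = 2000
exercise = [random.gauss(4.7, 1.5) for _ in range(n)]
usual = [random.gauss(5.0, 1.5) for _ in range(n)]

diff = statistics.mean(usual) - statistics.mean(exercise)

# Large-sample z test on the difference in means
se = (statistics.variance(exercise) / n + statistics.variance(usual) / n) ** 0.5
z = diff / se
p = 2 * (1 - NormalDist().cdf(abs(z)))

print(f"mean difference: {diff:.2f} points, p = {p:.5f}")
# 'Significant' (p < 0.05), yet the difference is well below a 1-2 point MCID.
```

The point isn't the exact numbers; it's that the p-value answers "is there probably some difference?" while the MCID asks "is the difference big enough for the patient to care?".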
A clinically meaningful change for pain has been discussed as being somewhere between 1 and 2 points on an 11-point VAS/NPRS, depending on what it is being tested against, such as 'usual care' or another specific intervention. Other thresholds of clinical significance have pointed to a 20% or 30% change from baseline, and this makes sense: a 2-point change from a baseline of 4 is proportionally far larger than a 2-point change from a baseline of 8, for example.
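The baseline-dependence is just arithmetic, but it's worth seeing side by side (a minimal sketch; the function name and scores are mine, not from any outcome-measure standard):

```python
def percent_change(baseline, follow_up):
    """Improvement from baseline as a percentage (0-10 pain scale)."""
    return 100 * (baseline - follow_up) / baseline

# The same absolute 2-point drop is a very different relative improvement:
print(percent_change(4, 2))  # 50.0 -> half the pain gone
print(percent_change(8, 6))  # 25.0 -> a quarter of the pain gone
```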
We have to be aware that cut-off values such as p = 0.05 are also a bit arbitrary. If we critique the significance of p = 0.05 then we probably have to do the same for the MCID too. The real value of any effect may only be assessable through subjective evaluation by the person experiencing it, against their expectations of what that change should be.
We may also have to consider how we view the 'mean effect', as this may not actually reflect THE effect that MY patient gets (for a whole load of potential reasons). The mean represents the average response and is sensitive to those who respond very strongly as well as those who respond weakly or even negatively. In trials with small sample sizes, as much exercise and pain-related research is, these more extreme values can significantly shift the mean.
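A quick made-up example of how one extreme responder drags the mean in a small sample (the pain-change numbers here are entirely invented to illustrate the point):

```python
import statistics

# Hypothetical pain reductions (points on a 0-10 scale) for 8 patients.
# Seven respond modestly; one is an extreme responder.
changes = [0.5, 1.0, 0.5, 1.5, 0.0, 1.0, 0.5, 6.0]

print(statistics.mean(changes))    # 1.375 -> pulled up by the one outlier
print(statistics.median(changes))  # 0.75  -> closer to the typical patient
```

So a reported 'mean improvement' of ~1.4 points here doesn't describe anybody in the group particularly well.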
We should also take into account the standard deviation around the mean response, a measure of the variation within the group of participants being studied. This could mean (get it...) that the variation in response when applying the treatment in the clinic could be pretty wide too. A confidence interval (CI) is another measure of the uncertainty/variability around the potential treatment effect in the wider population. The CI reflects the inherent variability/error in the process of sampling, taking into account the size of and variation within the sample.
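Putting the SD and CI together on the same invented small-trial data (a sketch only, using the usual normal-approximation 95% CI, which is itself a bit rough at this sample size):

```python
import statistics

# Hypothetical pain reductions for one small trial arm (n = 10)
changes = [2.0, 0.5, 3.0, 1.0, -0.5, 2.5, 1.5, 0.0, 2.0, 1.0]

n = len(changes)
mean = statistics.mean(changes)
sd = statistics.stdev(changes)       # spread of INDIVIDUAL responses
se = sd / n ** 0.5                   # uncertainty in the MEAN itself
ci = (mean - 1.96 * se, mean + 1.96 * se)  # approximate 95% CI

print(f"mean {mean:.2f}, SD {sd:.2f}, 95% CI {ci[0]:.2f} to {ci[1]:.2f}")
```

Note the two different stories: the SD tells you individual patients ranged from slightly worse to much better, while the CI only tells you how precisely we have pinned down the average.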
The last question here is: does exercise research always reflect clinical practice? Personally, I tinker with the type of exercise, intensity, frequency, volume etc. to 'optimise' for the person, whether that's in relation to their response or their ability to achieve the program. If I am not getting the desired response then I feel quite at home playing with the variables. Is this right or wrong? I have no idea, but the standardized programs used to study exercise often don't do this.
You Really Didn’t Answer The Question…
So what was the point of all this? Well, we can start to see that "it works" is a pretty nebulous term really. It's the classic clinical conundrum of applying the world of research to our patients and how we should expect them to respond. Predicting the future is always tough, and the worlds of research and clinical practice are definitely not games of certainty.
We have to consider the actual size of what 'works' and how likely my patient is to actually respond in this way; I see it as a bit of a "probability wrapped up in a probability". This often makes clinicians feel uncomfortable, as we tend to like certainties, and sometimes research can be portrayed as more certain than it really is, IMO. But we really have to look at the trials: who they are studying, how many people, what exercise and dosage, and what the spread of responses looks like, amongst other things, to even get close to answering the question.
Next time we might actually answer the question ; )