Placebo effect in RCTs: A ‘theory of active research participation’
A forthcoming paper in Social Science & Medicine studies how participation in a randomized controlled trial shapes both recruitment and reported outcomes. From the abstract:
The results indicate that trial recruitment and retention depend on a set of convictions forged largely as a result of contextual factors peripheral to the intervention, including the friendliness and helpfulness of research centre staff and status of the administering practitioner. These convictions also influence the reporting of the study outcomes, particularly if participants experience uncertainties when choosing an appropriate response. The findings suggest that participants in clinical trials are actively involved in shaping the research process, rather than passive recipients of treatment. Thus the outcomes of trials, notably those involving contact interventions, should be regarded not as matters of fact, but as products of complex environmental, social, interpretive and biological processes.
Though the qualitative study was embedded in a clinical trial on acupuncture, I couldn’t help but consider the same problem as it applies to randomized controlled trials (RCTs) used to test policy interventions. The acupuncture study shares a key characteristic with policy intervention RCTs: neither is double-blind, so neither can control for placebo effects. The forthcoming article is a great introduction to RCTs, and discusses double-blinding:
Provided the test and control interventions are comparable and attrition distributed evenly across groups, randomisation eliminates selection bias and controls for placebo effects. ‘Double-blind’ means that neither the patient, nor the attending practitioner, nor the researchers who analyse the data know which intervention the patient has received (Miller and Stewart, 2010). Blinding of the practitioner is only possible if the test and control treatments are indistinguishable at the point of administration, as in most drug trials.
Because I have yet to come across a policy intervention RCT that is double-blind, I suspect such trials could be plagued by similar influences that are attributable not to the treatment, but to the study itself.
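To make the worry concrete, here is a minimal toy simulation of the bias. It assumes a simple (hypothetical) model: participants who know they received the treatment get an expectancy boost on top of any true effect, while blinding balances that expectancy across arms. The parameter values (`true_effect`, `placebo_boost`) are illustrative assumptions, not estimates from any real trial.

```python
import random

random.seed(0)

def simulate_trial(n, true_effect, placebo_boost, blinded):
    """Return the estimated effect (treatment mean - control mean).

    Toy model: treated participants always get true_effect + placebo_boost.
    If the trial is blinded, controls also receive the expectancy boost,
    so it cancels out of the difference; if unblinded, it does not.
    """
    treat = [random.gauss(true_effect + placebo_boost, 1.0) for _ in range(n)]
    if blinded:
        control = [random.gauss(placebo_boost, 1.0) for _ in range(n)]
    else:
        control = [random.gauss(0.0, 1.0) for _ in range(n)]
    return sum(treat) / n - sum(control) / n

n = 100_000
unblinded = simulate_trial(n, true_effect=0.2, placebo_boost=0.5, blinded=False)
blinded = simulate_trial(n, true_effect=0.2, placebo_boost=0.5, blinded=True)
print(f"unblinded estimate: {unblinded:.2f}")  # inflated by the expectancy boost
print(f"blinded estimate:   {blinded:.2f}")    # close to the true effect alone
```

The point of the sketch is only that an unblinded comparison recovers the true effect plus the expectancy differential, while blinding (where feasible) cancels it; in policy RCTs, where blinding is typically impossible, the two are confounded by design.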
I’ve been trying to think about how we might test for this… but it seems kind of silly to design an RCT to test the effects of an RCT, especially when we should be shorting RCTs altogether.