Less research is needed
Guest blogger Trish Greenhalgh suggests it's time for less research and more thinking.
The most over-used and under-analyzed statement in the academic vocabulary is surely “more research is needed”. These four words, occasionally justified when they appear as the last sentence in a Master’s dissertation, are as often to be found as the coda for a mega-trial that consumed the lion’s share of a national research budget, or that of a Cochrane review which began with dozens or even hundreds of primary studies and progressively excluded most of them on the grounds that they were “methodologically flawed”. Yet however large the trial or however comprehensive the review, the answer always seems to lie just around the next empirical corner.
With due respect to all those who have used “more research is needed” to sum up months or years of their own work on a topic, this ultimate academic cliché is usually an indicator that serious scholarly thinking on the topic has ceased. It is almost never the only logical conclusion that can be drawn from a set of negative, ambiguous, incomplete or contradictory data.
Recall the classic cartoon sketch from your childhood. Kitty-cat, who seeks to trap little bird Tweety Pie, tries to fly through the air. After a pregnant mid-air pause reflecting the cartoon laws of physics, he falls to the ground and lies with eyes askew and stars circling round his silly head, to the evident amusement of his prey. But next frame, we see Kitty-cat launching himself into the air from an even greater height. “More attempts at flight are needed”, he implicitly concludes.
On my first day in (laboratory) research, I was told that if there is a genuine and important phenomenon to be detected, it will become evident after taking no more than six readings from the instrument. If after ten readings, my supervisor warned, your data have not reached statistical significance, you should [a] ask a different question; [b] design a radically different study; or [c] change the assumptions on which your hypothesis was based.
In health services research, we often seem to take the opposite view. We hold our assumptions to be self-evident. We consider our methodological hierarchy and quality criteria unassailable. And we define the research priorities of tomorrow by extrapolating uncritically from those of yesteryear. Furthermore, this intellectual rigidity is formalized and ossified by research networks, funding bodies, publishers and the increasingly technocratic system of academic peer review.
Here is a quote from a typical genome-wide association study:
“Genome-wide association (GWA) studies on coronary artery disease (CAD) have been very successful, identifying a total of 32 susceptibility loci so far. Although these loci have provided valuable insights into the etiology of CAD, their cumulative effect explains surprisingly little of the total CAD heritability.” [1]
The authors conclude that not only is more research needed into the genomic loci putatively linked to coronary artery disease, but that – precisely because the model they developed was so weak – further sets of variables (“genetic, epigenetic, transcriptomic, proteomic, metabolic and intermediate outcome variables”) should be added to it. By adding in more and more sets of variables, the authors suggest, we will progressively and substantially reduce the uncertainty about the multiple and complex gene-environment interactions that lead to coronary artery disease.
If the Kitty-cat analogy seems inappropriate to illustrate the flaws in this line of reasoning, let me offer another parallel. We predict tomorrow’s weather, more or less accurately, by measuring dynamic trends in today’s air temperature, wind speed, humidity, barometric pressure and a host of other meteorological variables. But when we try to predict what the weather will be next month, the accuracy of our prediction falls to little better than random. Perhaps we should spend huge sums of money on a more sophisticated weather-prediction model, incorporating the tides on the seas of Mars and the flutter of butterflies’ wings? Of course we shouldn’t. Not only would such a hyper-inclusive model fail to improve the accuracy of our predictions; there are good statistical and operational reasons why it could well make them less accurate.
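That statistical point is the familiar problem of over-fitting. Here is a minimal sketch in Python (not from the original post; the sample sizes, variable counts and names are illustrative assumptions, not anything drawn from the cited studies) showing how piling irrelevant predictors into a model fitted on a finite sample tends to make its out-of-sample predictions worse, not better:

```python
# Illustrative sketch of over-fitting: the outcome depends on a handful of
# "true" predictors, yet we fit ordinary least squares with ever more
# irrelevant variables added and watch the held-out error deteriorate.
import numpy as np

rng = np.random.default_rng(0)

n_train, n_test = 100, 1000
n_true = 5                           # predictors that genuinely drive the outcome
true_beta = rng.normal(size=n_true)  # their (hidden) effects

def simulate(n, n_noise):
    """Outcome depends only on the first n_true columns; the rest are noise."""
    X = rng.normal(size=(n, n_true + n_noise))
    y = X[:, :n_true] @ true_beta + rng.normal(size=n)
    return X, y

for n_noise in (0, 20, 80):
    X_tr, y_tr = simulate(n_train, n_noise)
    X_te, y_te = simulate(n_test, n_noise)
    beta_hat, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)  # ordinary least squares
    test_mse = np.mean((y_te - X_te @ beta_hat) ** 2)
    print(f"{n_noise:3d} irrelevant predictors -> held-out MSE {test_mse:.2f}")
```

The held-out error grows as irrelevant predictors are added, even though every extra variable “explains” a little more of the training data: the modeling equivalent of Kitty-cat climbing higher before the next leap.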
Whereas in the past, any observer could tell that an experiment had not ‘worked’, the knowledge generated by today’s multi-variable mega-studies remains opaque until months or years of analysis have rendered the findings – apparently at least – accessible and meaningful. This kind of research typically requires input from many vested interests: industry, policymakers, academic groupings and patient interest groups, all of whom have different reasons to invest hope in the outcome of the study. As Nik Brown has argued, debates around such complex and expensive research seem increasingly to be framed not by régimes of truth (what people know or claim to know) but by ‘régimes of hope’ (speculative predictions about what the world will be like once the desired knowledge is finally obtained). Lack of hard evidence to support the original hypothesis gets reframed as evidence that investment efforts need to be redoubled.[2] And so, instead of concluding that less research is needed, we collude with other interest groups to argue that tomorrow’s research investments should be pitched into precisely the same patch of long grass as yesterday’s.
Here are some intellectual fallacies based on the more-research-is-needed assumption (I am sure readers will use the comments box to add more examples).
- Despite dozens of randomized controlled trials of self-efficacy training (the ‘expert patient’ intervention) in chronic illness, most people (especially those with low socio-economic status and/or low health literacy) still do not self-manage their condition effectively. Therefore we need more randomized trials of self-efficacy training.
- Despite conflicting interpretations (based largely on the value attached to benefits versus those attached to harms) of the numerous large, population-wide breast cancer screening studies undertaken to date, we need more large, population-wide breast cancer screening studies.
- Despite the almost complete absence of ‘complex interventions’ for which a clinically as well as statistically significant effect size has been demonstrated and which have proved both transferable and affordable in the real world, the randomized controlled trial of the ‘complex intervention’ (as defined, for example, by the UK Medical Research Council [3]) should remain the gold standard when researching complex psychological, social and organizational influences on health outcomes.
- Despite consistent and repeated evidence that electronic patient record systems can be expensive, resource-hungry, failure-prone and unfit for purpose, we need more studies to ‘prove’ what we know to be the case: that replacing paper with technology will inevitably save money, improve health outcomes, assure safety and empower staff and patients.
Last year, Rodger Kessler and Russ Glasgow published a paper arguing for a ten-year moratorium on randomized controlled trials on the grounds that it was time to think smarter about the kind of research we need and the kind of study designs that are appropriate for different kinds of question.[4] I think we need to extend this moratorium substantially. For every paper that concludes “more research is needed”, funding for related studies should immediately cease until researchers can answer a question modeled on this one: “Why should we continue to fund Kitty-cat’s attempts at flight?”
This blog was informed by contributions to my Twitter page @trishgreenhalgh.
Trish Greenhalgh is Professor of Primary Health Care at Barts and the London School of Medicine and Dentistry, London, UK, and also a general practitioner in north London.
[1] Prins BP, Lagou V, Asselbergs FW, Snieder H, & Fu J (2012). Genetics of coronary artery disease: genome-wide association studies and beyond. Atherosclerosis. PMID: 22698794
[2] Brown N (2007). Shifting tenses: reconnecting regimes of truth and hope. Configurations. DOI: 10.1353/con.2007.0019
[3] Craig P, Dieppe P, Macintyre S, Michie S, Nazareth I, Petticrew M, & Medical Research Council Guidance (2008). Developing and evaluating complex interventions: the new Medical Research Council guidance. BMJ (Clinical research ed.), 337. PMID: 18824488
[4] Kessler R, & Glasgow RE (2011). A proposal to speed translation of healthcare research into practice: dramatic change is needed. American Journal of Preventive Medicine, 40(6), 637-44. PMID: 21565657