Journal Club: The tricky business of research on medicine quality
Catching up on reading about medicine quality, today I ploughed through a paper by Sarah Hodges and Emma Garnett about evidence gaps in research around fake drugs. It’s got a lovely title: “The ghost in the data: Evidence gaps and the problem of fake drugs in global health research”, and it’s a fascinating read, on many levels.
The paper claims that:
- Most academic and grey literature relating to fake drugs underlines that the evidence base is very poor;
- After recognising the poor evidence, most of those same papers nonetheless go on to assert that fake drugs are a major problem;
- Most people who do academic research on fake drugs spend very little time thinking about sources of evidence other than the academic literature and the media (for example data held by industry), let alone about why they can’t get hold of those data. (“In particular, we note that global health scholarship is not, in the main, asking questions about the conditions that foreclose access to data.”)
I agree wholeheartedly with all of these points, which form the core of the paper. I think it is really important to understand how “facts” are created and circulated, and how the construction of facts (by whom? to answer what need or serve what interest? at what point in time? when what other things are happening in the world?) shapes the global health landscape. So important, indeed, that I once wrote a whole book about it (Chapters 1, 6 and 9 of The Wisdom of Whores are especially relevant to this discussion), as well as academic papers on the subject, like this one, co-authored by MedsWatch’s Maarten Kok.
But I’m also fascinated by the way the authors, in this paper, play every single trick that they criticise others for. Some of these they acknowledge, others not. To see what they are doing, you might have to brush up on social-science-speak (SSS), a language that the authors didn’t see fit to tone down for a global health audience. A prize example of SSS:
“We trace how the material discursive work of scholarly practice constitute wider rhetorical patterns and are part of a rich discursive ecology that structures research of the contemporary global circulation of pharmaceuticals…. Based on a review of published findings we apply the methodological tools of ‘close reading’ and ‘reading against the grain’ that characterise much research on what has come to be seen as the ‘politics of knowledge’”
Tentative translation: “The way academics think about their research influences their findings about the medicine trade…. Like other academics who think about research, we read a bunch of papers critically.”
They also actually bothered to follow up references to see how robustly they supported the assertions in the paper that quoted them – a vital and underused strategy in gauging the bollox-factor in peer reviewed articles. Their paper provides an elegant illustration of why it’s important to do that.
Contention: researchers imply that, because there is antimicrobial resistance, there must be fake drugs. Supporting quote:
‘Scientific theory and common sense thus both suggest an inevitable link with [fake drugs and] antimicrobial resistance’ (Pisani, 2015, p. 12).
Actual quote, from Pisani, 2015, p 12:
The laboratory analyses described below confirm that falsified and substandard medicines — including those with sub-therapeutic levels of API and/or formulations that inhibit dissolution and restrict bioavailability — are common in countries with weak regulatory systems. Many of those countries also have high prevalence of infectious diseases. Scientific theory and common sense thus both suggest an inevitable link with antimicrobial resistance.
When others do it, Hodges and Garnett describe that trick as “refashioning data” — the selective re-interpretation of facts that can happen, almost by implication, when you take something out of its original context and mash it up with other information. They recognise that the unclear use of definitions is a problem in research around medicine quality. Then, conveniently supporting their own narrative, they take a reference to medicines that actually do contribute to antimicrobial resistance (i.e. antimicrobials that don’t deliver enough medicine to kill all pathogens, thus allowing the resistant strains to survive and multiply), and refashion it into a claim about “[fake drugs]”. In fact, most fake drugs contain no active ingredient or the wrong active ingredient; they are thus unlikely to contain the sub-therapeutic levels of API, or the formulations that inhibit dissolution and restrict bioavailability, that directly drive antimicrobial resistance. Using information about fake medicines instead of information about substandard medicines to help explain antimicrobial resistance is like using the scores of football matches instead of data on footballers’ salaries to help explain inflation in the sports industry. I have to agree with Hodges and Garnett: refashioned data can be badly misleading.
Another great example of using the tricksters’ tricks: the paper correctly criticises researchers’ tendency to point to limited or extreme cases (what they call “small cases”) to make a more generalisable point. They make the case against small cases with a single small case:
“rather than reflect on the problem-definition or methodology of research into fakes, the small-case was often framed as evidence in the absence of better evidence. Indeed, one major briefing report referred to a survey of medicines on sale at a large bazaar in New Delhi which found that only 7.5% were genuine but where this percentage came from was unclear (Stevens & Haja Mydin, 2013).”
Snippiness aside, I think this paper raises important issues. It’s certainly true that the small community of people who try to better understand the potential effects of poor quality medicines are making a lot of claims about the potential threats posed by fake drugs, without sufficient evidence. (The same is true for substandard medicines, which in my opinion are potentially even more of a threat in many low and middle income settings, but then we’re part of the small community making those claims.) It’s true, too, that the evidence may be limited because the problem really isn’t that big after all, so there’s nothing much to provide evidence of. But I would not be as dismissive as Hodges and Garnett are of the conclusions I and others draw from the evidence gaps. (“Instead of treating ‘evidence gaps’ as evidence of a lack of evidence, gaps themselves also became evidence of the need to generate more evidence.”)
For clarity, I’m reading this as “Instead of treating ‘evidence gaps’ as evidence of a lack of a major problem…” If that reading is correct, two responses. Firstly, some academic groups, including that led by Paul Newton at Oxford, are developing lot quality assurance methods for post-market surveillance, which work precisely on the principle of demonstrating that there is no problem (or at least not one big enough to hyperventilate about). Secondly, we are required to draw on academic evidence when writing academic papers if we want to avoid sniffy comments such as “demonstrating the presence of fakes involved the inclusion of undocumented sources, particularly with media reports and various journalistic accounts of fakes”. However, lack of academic evidence does not necessarily translate into a lack of evidence to inform our thinking, only into a lack of evidence that can be verified through close, against-the-grain reading of the published academic literature. Some of us have access to other evidence bases, ones that we can’t publish in the academic literature, but which are certainly robust enough to signal a significant problem. These data encourage us to do more of the type of research that we will be able to publish.
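For readers unfamiliar with lot quality assurance sampling, the decision logic can be sketched in a few lines of Python. The sample size and failure threshold below are invented for illustration, not drawn from Newton’s group’s actual protocols:

```python
from math import comb

def lqas_classify(failures: int, d: int) -> str:
    """Classify a lot: 'acceptable' if observed failures <= d, else 'unacceptable'."""
    return "acceptable" if failures <= d else "unacceptable"

def prob_accept(n: int, d: int, p: float) -> float:
    """Probability of accepting a lot when the true prevalence of
    poor-quality medicines is p (binomial tail, at most d failures in n)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(d + 1))

# Illustrative design: test 25 samples, accept the lot if at most 1 fails.
n, d = 25, 1
# With a true prevalence of 2%, the lot is very likely to pass...
print(round(prob_accept(n, d, 0.02), 3))  # → 0.911
# ...but at 20% prevalence it is very likely to be flagged.
print(round(prob_accept(n, d, 0.20), 3))  # → 0.027
```

The point of the design is exactly the one made above: a small, cheap sample can demonstrate with high confidence that a market does *not* have a big problem, rather than only accumulating scary anecdotes that it might.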
Why don’t we just publish what we have and be done with it? The answer lies, perhaps, in the discipline of public health research, which is (and in our opinion should be) more wrapped up in the vicissitudes of politics and action than in epistemology. Yes, we aim to conduct rigorous research. But we also want the results of our research to be used, where warranted, to improve people’s lives. That means working with the potential end users of research, including policy makers, regulators and sometimes even (sharp intake of breath) industry, to identify which questions are most urgent and most potentially actionable, and to generate politically robust evidence to inform that action. It’s not smart to blow the trust of the very people who could use our research, just for the sake of another academic paper for the CV. So we can’t publish all the data we have, but we can use them to guide the design of research that we can publish.
As the authors of the Ghost paper point out, testing medicines is fiendishly expensive. Though they claim that “fake drugs have become a widely accepted concern in global public health research”, research funders invest remarkably little in the well-designed studies that would produce the sort of knowledge needed to put questions around fake and substandard medicines to bed. That’s certainly one reason that those of us who do know of data indicating there’s a real problem tend towards hyperbole in describing the likely impact of poor quality meds: we want research funders to share our sense of urgency, so that they’ll support the generation of evidence that is not wrapped up in non-disclosure agreements. Ugly, but true.
The MedsWatch family is planning a couple of field studies using new methods (one tries to develop measures of patient exposure to medicines, another uses market data to predict which products are at highest risk of being substandard or fake), and we’re trying to find cash for a couple more. If that rigorous research discovers that all is well in the medicine markets of the low and middle income countries where we work, we will say so loudly and clearly, and move on to researching something with more potential impact. And we shall certainly reference the Hodges and Garnett paper as a prescient harbinger of a world less worrisome than that suggested by undocumented sources (including those that pop up across the Medicine Quality Globe).
This post is a trial balloon for a MedsWatch Journal Club: we’d like to invite other academic or public health groups working on topics related to medicine quality, access, supply chains etc. to take the lead, on a rotating basis, in picking a paper and scrutinising it every now and then. If you’re up for it, please contact us at email@example.com.