Common Scientific And Statistical Errors In Obesity (And Other) Research
Thursday, March 31, 2016

The quality of scientific evidence can only ever be as good as the research methodology and data analyses that go into creating it.
While poor research methodology, flawed statistical analyses, and overstated findings are by no means particular to obesity research, the wide public interest in the topic of obesity (its causes, prevention, and treatment) means that flawed outcomes from flawed studies reach a much larger audience of individuals with a keen interest in this topic.
Thus, flawed research that feeds widely held misconceptions about obesity can directly lead to poor public policy and ineffective interventions, with perhaps a much broader impact than in other fields of health research.
It is therefore admirable that the latest issue of OBESITY features three articles on the quality of research in this field, highlighting some of the most common and pervasive methodological shortcomings of much of this work.
For example, a paper by Brandon George and colleagues lists the ten most common errors and problems in the statistical analysis, design, interpretation, and reporting of obesity research.
These include, in no particular order, issues related to
1) misinterpretation of statistical significance,
2) inappropriate testing against baseline values,
3) excessive and undisclosed multiple testing and “P-value hacking” (see the simulation after this list),
4) mishandling of clustering in cluster randomized trials,
5) misconceptions about nonparametric tests,
6) mishandling of missing data,
7) miscalculation of effect sizes,
8) ignoring regression to the mean (see the second sketch below),
9) ignoring confirmation bias, and
10) insufficient statistical reporting.
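To make the multiple-testing problem (item 3) concrete, here is a minimal simulation sketch. The parameters (twenty outcomes per study, thirty subjects per group) are made up for illustration and are not from the George et al. paper: when twenty independent outcomes are each tested at the conventional 0.05 threshold, roughly two thirds of studies will turn up at least one “significant” result even though no real effect exists anywhere.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_experiments = 2_000    # simulated studies
n_tests = 20             # independent outcomes tested per study
n_subjects = 30          # subjects per group
alpha = 0.05

studies_with_hit = 0
for _ in range(n_experiments):
    for _ in range(n_tests):
        # Both groups come from the SAME distribution: any "effect" is noise.
        a = rng.normal(0.0, 1.0, n_subjects)
        b = rng.normal(0.0, 1.0, n_subjects)
        if stats.ttest_ind(a, b).pvalue < alpha:
            studies_with_hit += 1
            break

print(f"Studies with at least one 'significant' result: "
      f"{studies_with_hit / n_experiments:.0%}")
# Expected roughly 1 - 0.95**20, i.e. about 64%, despite zero real effects.
```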
The authors go on to explain each of these errors, citing specific examples from the literature.
Most importantly, they also discuss ways to identify such errors and (even better) minimise or avoid them.
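One of those errors, regression to the mean (item 8, which is closely tied to item 2’s inappropriate testing against baseline), lends itself to a second short sketch. Again, the numbers here are assumed purely for illustration: subjects enrolled because their baseline values are extreme will, on average, show “improvement” at follow-up even with no intervention at all.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
true_weight = rng.normal(90.0, 15.0, n)           # hypothetical "true" weights (kg)
baseline = true_weight + rng.normal(0.0, 5.0, n)  # baseline measurement with noise
followup = true_weight + rng.normal(0.0, 5.0, n)  # follow-up with noise, NO treatment

# Enroll only the heaviest 10% at baseline, as a weight-loss trial might.
enrolled = baseline >= np.percentile(baseline, 90)

print(f"Enrolled mean at baseline:  {baseline[enrolled].mean():.1f} kg")
print(f"Enrolled mean at follow-up: {followup[enrolled].mean():.1f} kg")
# The follow-up mean is noticeably lower although nothing was done:
# comparing against baseline in a sample selected for extreme values
# mistakes regression to the mean for a treatment effect.
```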
As most of these problems relate to the statistical handling of data, the authors passionately argue for the inclusion, or at least the consultation, of statisticians in both the research and reporting stages of the scientific process, in the hope of producing higher-quality, more valid, and more reproducible results.
@DrSharma
Edmonton, AB
Friday, April 1, 2016
In the US, one also needs to consider funder bias. In the 1980s, Reagan saw to it that the NIH budget was cut significantly in favor of “privatization.” I don’t think the motives are necessarily nefarious. Here’s what I think happens. A significant funder (say, the Robert Wood Johnson Foundation) is of the opinion that “eat less move more” is a viable solution to obesity. They don’t go to a research think tank and say, “We want you to produce experiments that will support these particular findings.” They don’t blatantly push their agenda onto unwilling researchers. What they do is find a think tank that is already coming to these conclusions, send an RFP to that group eliciting a grant proposal, and dump more funding on that think tank to keep up the good work.
Now, here’s where it gets sticky. The Robert Wood Johnson Foundation is associated with (it was founded by the original namesake of) Johnson and Johnson, at one time (and maybe still) the largest producer of lap bands, as well as other bariatric products. It behooves Johnson and Johnson to have a society that continues to think that “eat less move more” is a viable solution to obesity, because every time it fails, people will look to alternatives. Presumably there is a “wall” between the foundation and the corporation, but I am skeptical.