
Overstatement of Findings in Nutrition and Obesity Research?

Having survived as an academic over the past 25 years, I am all too aware of the “publish-or-perish” pressures on those of us in academia.

While we often criticize those working for industry for lack of independence and “biased” interpretation of their research findings, we tend to ignore the fact that those of us in publicly funded research have as large a (if not greater) self-interest in presenting our findings in the best possible light – even if that may occasionally involve an overly enthusiastic interpretation of our own findings or “massaging” them till they appear to fit our own favourite hypotheses.

A new analysis of this issue by Menachemi and colleagues from the University of Alabama at Birmingham is now published in the American Journal of Preventive Medicine.

The researchers examined almost 1,000 papers on nutrition or obesity published in 2001 and 2011 in leading specialty, medical, and public health journals to estimate the extent to which authors overstate the results of their study in the published abstract.

They were particularly interested in overreaching statements, which included (1) reporting an associative relationship as causal; (2) making policy recommendations based on observational data that show associations only (i.e., not cause and effect); and (3) generalizing to a population not represented by the study sample.

Not surprisingly, they found that almost 10% of studies (8.9% to be exact) had overreaching statements with this being more common in papers published in 2011 than back in 2001.

Interestingly enough, unfunded studies were two and a half times more likely to have an overstatement of results, whereas studies with a greater number of coauthors tended to have fewer such statements.

While these findings are disconcerting, I do not believe that they are in any way specific to studies in obesity and nutrition – my guess is that a similar rate of overly enthusiastic interpretations will be found in most scientific fields.

I would also not put the blame solely on authors, who, in a highly competitive environment (and with passionately held beliefs), may be tempted to “oversell” their findings.

Rather, I would put the blame squarely on the peer-review system and journal editors, whose job it is to identify such statements and insist that they be toned down to match the actual findings.

Unfortunately, as most journals are for-profit, there is also considerable pressure on journals to oversell the findings published in their pages in the hope that these will be picked up by the media and thus ultimately lead to greater visibility, more citations, and a higher impact factor.

All of this can negatively affect “public trust” when it comes to scientific publishing – something that should concern all of us, especially those dependent on public funding for their work.

Edmonton, AB

Menachemi N, Tajeu G, Sen B, Ferdinand AO, Singleton C, Utley J, Affuso O, & Allison DB (2013). Overstatement of results in the nutrition and obesity peer-reviewed literature. American Journal of Preventive Medicine, 45(5), 615-621. PMID: 24139775




  1. And this problem is made worse by a few orders of magnitude by the lay press! They confuse association studies with causation.

  2. As a freshman in college I was first introduced to a delightfully subversive little book by Darrell Huff with a catchy title, “How to Lie with Statistics”. I doubt that there was anything else I learned that year that could compare to the impact this tiny booklet made on my education. Later, as a sophomore, I was fortunate enough to land a research assistant position in the linguistic psychology department of my university. Once again, the introduction to the full “publish or perish” culture did more to permanently dispel my awe of white coats and instill a genuine skepticism in me for published work than anything else ever could. Nothing about Menachemi’s research is the least bit surprising; unfortunate – yes, surprising – no. The only thing I question is the 10% statistic; it seems a bit too optimistic to me… perhaps he too needs to read “How to Lie with Statistics”. On an optimistic note, according to Menachemi, this trend is accelerating. Excellent, soon the findings will be so outlandish and garish that the lay public will trust and believe scientists just about as much as we currently trust and believe our politicians. We live in interesting times.

  3. This is actually exactly as mentioned in our book (Eating Healthy and Dying Obese): “Science creates and feeds the confusion… and the media blow it up”…

  4. EXCELLENT post!

    Yes, journals definitely hold some blame. We used part of my inheritance from my father to fund some veterinary studies by two prominent groups. One small study looking into a possible different measure of endocrinological changes with insulinoma did not pan out. Not panning out was not a bad thing, because it told of a change not found in that species. Though not as useful as if the hypothesis had held water on testing, it actually told something more about insulinoma in that species; yet to date journals have not printed it, because of a journal bias against printing negative results.

    One article which may be of interest on your topic here: “The Power of Negative Thinking”, Couzin-Frankel, Science, 4 Oct 2013, Vol 342, pages 68-69.

    (On a different topic in a different volume of Science is another article which may interest Dr. Sharma specifically since it is on the development of the midgut and intestinal villi:
    Villification: How the Gut Gets Its Villi, Shyer et al, 11 Oct 2013, Vol 342, pages 212-218)

    When I worked for an anatomy department one of the best things done was when hypotheses were considered, and then again before submission of an article, the professors in related fields and some of the students would get together and purposely rip apart the concepts or papers to find weaknesses. That allowed for greater honesty in presentation, less risk of later embarrassment, the inclusion of more useful refs, education, etc. This sort of process may be why the papers with more authors wound up with better accuracy, and may also be why some of the hard sciences have better accuracy ratings for studies since many such studies are so large and costly that they have no choice but to have many authors, so many people spotting the problems, errors, omissions, and weaknesses.

    There are two further hurdles that can cause difficulties between the research and the public interpretation (besides widespread innumeracy in the general audience of the press). The first is PR departments. In many companies and universities the PR departments have very different goals and very much less understanding of the topic at hand but do not question the researchers for clarification. The result is that a spin can be put on press releases (which the researchers are too often NOT permitted to check for accuracy before release). A large number of times when researchers have been embarrassed they did NOT say what the press releases said they had asserted, or said something similar, or the main points were missed or exaggerated, or… This can also be worsened if the authors are not clear writers. (That is NOT a problem I would expect with Dr. Sharma since I find his writing typically to be crystal clear.)

    Then the press runs with it. While there are a few press sources with a history of care and understanding, such as the NPR science-reporting tradition, far more science assignments go to people who too often have no idea what they are reading, and further error is introduced into what the public hears on the news.

    Sometimes the press will try but not have any idea whom to ask. For example, you might recall several years ago that a southern nation announced that someone was trying to smuggle a “dinosaur fossil” to the U.S., when the mandible was clearly in Proboscidea (elephants and relations) and, from the apparent light weight, was either a cast, stained bone, or a sub-fossil (a possibility for that location). What went wrong? The police and press confused archeology with paleontology and mammalogy. The archeologist for some unknown reason did not tell them which specialty they should be asking and instead, with no relevant background, tried to figure it out – wildly inaccurately, as it turned out. It might serve all press releases best if, in the key terms section, they included the names of the related specialties, to lead the press to the right human resources.

    • Thanks Sukie for your thoughtful comment. I interact a lot with the media, and there are many who do try to report science as accurately as possible – however, they are also in the business of selling, so sexy headlines trump actual facts. I don’t think that there is any real solution to this except to take all media reports on science with a grain of salt 🙂
