Why Disclosure Statements Are Meaningless

On a regular basis, whether it is for papers I write, conferences I speak at, or committees I sit on, I am deluged by “Conflict of Interest Declaration Forms” that seemingly require more personal information than my annual tax return.

The goal of all of this, apparently, is to provide “transparency”, so that the respective audience can judge the objectivity (or lack thereof) of my work based on whether or not I have a perceived “conflict of interest”.

In my humble opinion, this is an entirely irrelevant and useless exercise, which does nothing to actually ensure objectivity in how my work or actions are interpreted or perceived.

Yesterday’s post was meant to illustrate how non-declared “conflicts of interest” may be as relevant (if not more so) to a real or perceived conflict than whether or not I have consulted for a company or received research funding from industry (or, for that matter, any other interest group – by definition an “interest group” is interested in the outcome of where its money goes – no group that I am aware of is giving away free handouts).

Consider the issue of peer review. Although hardly perfect, the whole purpose of the time-honoured peer-review process is to allow knowledgeable peers to evaluate the scientific merit of a paper. It is their job to fairly evaluate the paper to the best of their ability: Is the topic important? Is the hypothesis relevant? Is the methodology valid? Are the proper statistical tests applied? Are the full data presented? Are the findings interpreted cautiously (and not overstated)? Are limitations acknowledged?

These are the questions that count – in fact, they are the only questions that count. Who funded the study, or what the personal relationships of the authors were to the funding source, is entirely irrelevant – all funders pursue goals, whether commercial, political, or ideological – who cares?

If the paper meets the scientific standards required by the journal (and we assume here that higher impact journals have higher standards and do a more thorough job of vetting all of the relevant aspects of a paper), the funding source should be irrelevant – if the study is well conducted, the findings should stand on their own merit. If anyone does not believe the data or findings or interpretations, they are welcome to disagree – but their criticism should be based on scientific arguments – not just by pointing fingers at the funding source (or ad hominem attacks on the authors).

If any serious doubts do arise about any of the above questions (e.g. methodology, analysis, interpretation, etc.), it is up to the reviewers and editors to either request clarification or to reject (or even retract) the paper. After all, that is what the whole notion of peer-review is about.

So what about the argument that industry funded studies are more likely to report positive findings than other research and should therefore be taken with a grain of salt?

I can think of several possible explanations including the simple fact that no industry that wants to stay in business is likely to fund a trial where there is not at least a fighting chance of having a favourable result.

I am therefore not at all surprised that industry often goes to great lengths to perform due diligence regarding which trials to fund (often more so than some peer-review committees I have sat on) in the hope of a “positive” outcome. Studies that don’t stand a fair chance of producing positive findings are not where industry is likely to (or can be reasonably expected to) put its money. This, however, is not the same as saying that the data or the study (or the investigators) are somehow manipulated to produce positive results – that would be outright scientific fraud.

So rather than wondering why industry funded studies so often tend to be favourable, I am in fact surprised every time this “biased” funding by industry leads to results that are far from favourable (or even damaging) to the sponsor (e.g. I happened to be one of the PIs of a 10,000-patient study of an anti-obesity drug, which showed this drug to modestly increase the risk of non-fatal cardiovascular events, a finding that led to the drug being taken off all markets worldwide – hardly a result that the sponsor, who footed the cost of almost $200 million for the trial, wanted to see).

Every researcher I know would like to see their study confirm their favourite hypothesis (or rather reject the null hypothesis) – the funding source has nothing to do with this – the rewards of a positive finding are evident: high-impact publications, peer recognition, media interest, promotion, tenure, and funding for yet more studies. I have yet to meet a “successful” researcher who has built a career on a track record of “negative” studies.

But peer-review is not the only mechanism that provides checks and balances. Clinical trials have to be registered, study protocols have to be vetted by ethics committees, good clinical practice guidelines need to be followed, sites are monitored (including random and targeted checks by regulators), primary data sources have to be archived, raw data may have to be made available to the reviewers (or even the public), data monitoring boards must ensure participant safety, the list of checks and balances (at least in clinical trials) goes on and on.

None of this will provide 100% protection against fraud or criminal intent – but nor will a disclosure of the funding source or a statement as to what shares my grandkids happen to own in their education funds.

The only consequence that I see resulting from “disclosures gone wild” is the undermining of public trust in the scientific process. Thus, no matter how relevant, precise, accurate, arm’s-length or important the findings, simply seeing a statement of industry funding on a paper is often automatically interpreted as tainting the study.

Oddly enough, the same folks who would criticize an industry funded study showing a positive result for a given product, would often have no problem citing that same study if it happened to show an outcome more in line with their own views and thinking on the matter.

So, rather than obsessing about who is funding what, let us allow the science to speak for itself. Let us make sure we respect the peer-review process and ensure that all the other checks and balances are in place.

If we do not trust the scientific process, the addition of a disclosure statement will hardly make us trust it more.

Edmonton, AB