
by Craig M. Klugman, PhD

This week I gave a lecture at a university in Texas on ways to teach research ethics. A question from the audience led to a conversation about the stresses and pressures that lead researchers to engage in ethically questionable activities. I mentioned that among these pressures are the need to maintain and increase funding and the need to get published. The pressure shows up in the fact that we are more likely to publish positive results than negative ones. As I said at the talk, “After all, journals are in the business of selling subscriptions. And who is going to buy a journal that says ‘Special Issue: We Found Nothing’?” It makes sense to think that a company is less likely to fund a researcher who finds its products lack efficacy. And a researcher who finds nothing is not likely to be first in line for new grants.

Published biomedical research reports present new uses for old drugs, new drugs that help treat disease, and insights into biological processes that may lead to new understandings of human health and new treatments. If the research enterprise is being innovative and looking in new directions, there should be a lot of research that does not prove its hypothesis. One would expect most experiments to fail. Correspondingly, most publications and presentations should report negative results.

Negative results are just as important as positive ones. Knowing what does not work can save another research group the time and money of trying to prove what has already been disproven. But that insight requires having access to the knowledge of what did not work. A 2011 paper showed that over a 17-year period, the percentage of negative results being published decreased dramatically. The authors proposed that “research is becoming less pioneering and/or that the objectivity with which results are produced and published is decreasing.” Another possibility is that hypothesis generation and research methods have become so good that the rates of success have increased. And yet another explanation is that negative results are simply less likely to be sent out into the world.

Add to this conversation a new report that half of clinical trials in the U.S. are never published, and when they are published, the articles contain less complete information than is reported to the ClinicalTrials.gov database. The report also shows that adverse events and side effects are less likely to be fully reported in the articles.

The pressures to have positive findings are very real: renewed funding from agencies and corporations, sales of journals, and an emphasis on the quantity of articles (instead of quality) in researcher promotion and tenure. In a real sense, there is a conflict of interest between researchers' need to find funding, seek promotion and tenure, and build a reputation, and the necessity of doing objective research. The system is inherently biased, and we need to admit this and take steps to manage the bias. One radical suggestion would be to blind all research funding so that no one knew from whom the money came or to which researchers it went. Another suggestion is to overhaul the entire system of publication and distribution of knowledge to remove the profit element. Imagine if profit were removed from the journal business. Instead of worrying about sales and subscriptions, a journal could print good research even when it reports failure. Open access journals are growing in popularity in part for this reason. A third suggestion is to find other mechanisms for funding research that take the competitive market element out of the system.

A defender of the current system might say that these suggestions remove the market element, and thus the competition that ensures only the best researchers and most promising research are funded. I believe instead that the system rewards the researcher with the best network, the one who has learned to play it safe to ensure funding at the expense of innovation. What we have is a system that funds researchers who know how to play the game: keep crunching the numbers until there is a positive result, do only safe experiments where you more or less know the outcome before you begin, and if you get results that someone holding the purse strings won't like, bury the information. This system threatens the lives of patients who receive treatments based on biased research, and it threatens the reputation of science as an objective arbiter of knowledge and illuminator of how the world works. My challenge to researchers is this: let's see more failure, which means more risk, more negative results, and thus more innovation.
