Baltimore, MD (Scicasts) — A new study suggests that the peer-review mechanism the National Institutes of Health (NIH) uses to select the projects that will produce the most-cited science is no better than distributing the money at random.

Researchers say that the expensive and time-consuming peer-review process is not necessarily funding the best science, and that awarding grants by lottery could actually result in equally good, if not better, results.

The report, by Dr. Ferric Fang of the University of Washington; Anthony Bowen, MS, of the Albert Einstein College of Medicine; and Dr. Arturo Casadevall of the Johns Hopkins Bloomberg School of Public Health, was published online today in the journal eLife.

"The NIH claims that they are funding the best grants by the best scientists. While these data would argue that the NIH is funding a lot of very good science, they are also leaving a lot of very good science on the table," says Dr. Casadevall, Professor and Chair of the W. Harry Feinstone Department of Molecular Microbiology and Immunology at the Bloomberg School.

"The government can't afford to fund every good grant proposal, but the problems with the current system make it worse than awarding grants through a lottery."

The majority of research grant proposals received by the NIH are ultimately rejected. To decide which proposals to fund, the NIH relies on expert panels whose members score each application; funding decisions are then made on the basis of these scores and the amount of available funds. In recent years, the NIH has funded only those proposals ranked around the top 10 percent. The annual research budget for the NIH was $30.1 billion in 2015.

"We are not criticizing the peer reviewers. We are simply showing that there are limits to the ability of peer review to predict future productivity based on grant applications," notes Dr. Fang, a professor of laboratory medicine and microbiology at the University of Washington.

"This suggests that some of the resources and effort spent on ranking applications might be better spent elsewhere. While the average productivity of grants with better scores was somewhat higher, the differences were extremely small, raising questions as to whether the effort is worthwhile."

For their study, the researchers reanalyzed data on the 102,740 research project grants funded by the NIH from 1980 through 2008. The data set had been collected by researchers who published a paper in the journal Science in 2015; their analysis suggested that peer review did in fact work -- that the highest-ranked research projects funded by the NIH earned the most citations.

The researchers in this case chose to measure the success of a research grant by counting how many papers resulting from the work were published in scientific journals, and then tracking how many times those papers were cited in subsequent research papers.
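The productivity measure described above can be sketched in a few lines. This is an illustrative sketch only; the grant IDs and citation counts below are hypothetical, not data from the study.

```python
# Hypothetical data: each grant maps to the citation counts of the
# papers that resulted from it.
grants = {
    "R01-A": [12, 5, 30],   # three papers with these citation counts
    "R01-B": [8],           # one paper
    "R01-C": [],            # a grant that produced no papers
}

def productivity(citation_counts):
    """Papers and total citations attributable to one grant."""
    return {"papers": len(citation_counts),
            "citations": sum(citation_counts)}

for grant_id, counts in grants.items():
    print(grant_id, productivity(counts))
```

Ranking grants by such totals, rather than by their original review scores, is essentially how the study compared peer-review rankings against later productivity.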

The original researchers looked at all of the grants the NIH funded in those years; in many of them, a significantly larger number of grants was funded than today. The percentage of grants funded in recent years has been at historic lows because of cutbacks resulting from budget sequestration and the October 2013 government shutdown.

For the new study, Dr. Casadevall and his colleagues decided to look only at the top 20 percent of grants awarded, and found very little difference between the top-ranked projects and those ranked in the 20th percentile when it came to which would go on to be the most-cited research.

What the peer review process can do, they determined, is discriminate between very good science and very bad science -- that is, those in the top 20 percent versus those below the 50th percentile.

However, peer review isn't cheap. The annual budget of the NIH Center for Scientific Review is $110 million, and individual NIH institutes and centers also spend heavily on peer review. That money could go toward more grants, the researchers say.

The costs are not only financial, since writing and reviewing grants is extremely time-consuming and diverts scientists' efforts away from doing science itself.

The process also allows for substantial subjectivity: the objection of a single committee member can effectively kill a grant proposal, whether or not that objection is legitimate.

"When people's opinions count a lot, we may be doing worse than choosing at random," Dr. Casadevall says. "A negative word at the table can often swing the debate. And this is how we allocate research funding in this country."

To solve this, the authors suggest that the top proposals first be chosen by peer review, and that those proposals then be entered into a lottery, with grants awarded at random.
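The two-stage process the authors propose can be sketched as follows. This is a minimal illustration, not the authors' implementation: the review scores, pool fraction, and number of awards are assumptions chosen for the example.

```python
import random

def award_by_modified_lottery(scored_proposals, pool_fraction=0.2,
                              n_awards=3, seed=None):
    """scored_proposals: list of (proposal_id, review_score) pairs.

    Stage 1: peer review screens proposals into a qualifying pool
    (here, the top fraction by score).
    Stage 2: awards are drawn from that pool at random.
    """
    ranked = sorted(scored_proposals, key=lambda p: p[1], reverse=True)
    pool_size = max(n_awards, int(len(ranked) * pool_fraction))
    pool = ranked[:pool_size]              # stage 1: peer-review screen
    rng = random.Random(seed)
    winners = rng.sample(pool, n_awards)   # stage 2: lottery among the pool
    return [pid for pid, _ in winners]

# Twenty hypothetical proposals with illustrative review scores.
proposals = [(f"P{i:02d}", score) for i, score in
             enumerate([91, 88, 95, 72, 60, 85, 99, 77, 90, 83,
                        67, 94, 58, 81, 89, 70, 93, 64, 86, 75])]
print(award_by_modified_lottery(proposals, seed=42))
```

The key design point is that peer review still filters out weak proposals (which the study found it does reliably), while the lottery removes the fine-grained ranking among strong ones that the data suggest is not predictive.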

Lotteries were used as part of the military draft during the Vietnam War and are used today to fill magnet schools that have many qualified applicants and to award permanent residency visas. College student and low-income housing is often allocated by lottery. Dr. Casadevall says New Zealand has started using a lottery to award some of its scientific grants.

Dr. Casadevall adds: "We're hoping people will look at this data and say, 'Can we do better? Can we create a fairer system that gives society the best science it can afford?'"

Publication: Ferric C. Fang, Anthony Bowen & Arturo Casadevall, "NIH peer review percentile scores are poorly predictive of grant productivity," eLife (16 February 2016).