Science advances by building on the work of the past. But what happens when you check whether that work can actually be trusted?
via The LA Times:
In today’s world, brimful as it is with opinion and falsehoods masquerading as facts, you’d think the one place you can depend on for verifiable facts is science.
You’d be wrong. Many billions of dollars’ worth of wrong.
A few years ago, scientists at the Thousand Oaks biotech firm Amgen set out to double-check the results of 53 landmark papers in their fields of cancer research and blood biology.
The idea was to make sure that research on which Amgen was spending millions of development dollars still held up. They figured that a few of the studies would fail the test — that the original results couldn’t be reproduced because the findings were especially novel or described fresh therapeutic approaches.
But what they found was startling: Of the 53 landmark papers, only six could be proved valid.
“Even knowing the limitations of preclinical research,” observed C. Glenn Begley, then Amgen’s head of global cancer research, “this was a shocking result.”
Unfortunately, it wasn’t unique. A group at Bayer HealthCare in Germany similarly found that only 25% of published papers on which it was basing R&D projects could be validated, suggesting that projects in which the firm had sunk huge resources should be abandoned. Whole fields of research, including some in which patients were already participating in clinical trials, are based on science that hasn’t been, and possibly can’t be, validated.
“The thing that should scare people is that so many of these important published studies turn out to be wrong when they’re investigated further,” says Michael Eisen, a biologist at UC Berkeley and the Howard Hughes Medical Institute. The Economist recently estimated spending on biomedical R&D in industrialized countries at $59 billion a year. That’s how much could be at risk from faulty fundamental research.
Eisen says the more important flaw in the publication model is that the drive to land a paper in a top journal — Nature and Science lead the list — encourages researchers to hype their results, especially in the life sciences. Peer review, in which a paper is checked out by eminent scientists before publication, isn’t a safeguard. Eisen says the unpaid reviewers seldom have the time or inclination to examine a study enough to unearth errors or flaws.
“The journals want the papers that make the sexiest claims,” he says. “And scientists believe that the way you succeed is having splashy papers in Science or Nature — it’s not bad for them if a paper turns out to be wrong, if it’s gotten a lot of attention.”
Eisen is a pioneer in open-access scientific publishing, which aims to overturn the traditional model in which leading journals pay nothing for papers often based on publicly funded research, then charge enormous subscription fees to universities and researchers to read them.
But concern about what is emerging as a crisis in science extends beyond the open-access movement. It’s reached the National Institutes of Health, which last week launched a project to remake its researchers’ approach to publication. Its new PubMed Commons system allows qualified scientists to post ongoing comments about published papers. The goal is to wean scientists from the idea that a cursory, one-time peer review is enough to validate a research study, and substitute a process of continuing scrutiny, so that poor research can be identified quickly and good research can be picked out of the crowd and find a wider audience.
PubMed Commons is an effort to counteract the “perverse incentives” in scientific research and publishing, says David J. Lipman, director of NIH’s National Center for Biotechnology Information, which is sponsoring the venture.
The Commons is currently in its pilot phase, during which only registered users among the cadre of researchers whose work appears in PubMed — NCBI’s clearinghouse for citations from biomedical journals and online sources — can post comments and read them. Once the full system is launched, possibly within weeks, commenters still will have to be members of that select group, but the comments will be public.
Science and Nature both acknowledge that peer review is imperfect. Science’s executive editor, Monica Bradford, told me by email that her journal, which is published by the American Assn. for the Advancement of Science, understands that for papers based on large volumes of statistical data — where cherry-picking or flawed interpretation can contribute to erroneous conclusions — “increased vigilance is required.”…
Read more at The LA Times.