One thing that becomes clear if you pay close attention to science is that for all the rigor, discipline, and hard work that’s required to make big discoveries, something completely unscientific also plays an important role: serendipity.
Despite efforts to place funding bets wisely and support projects likely to have the most impact outside the laboratory, progress often comes where people least expect it. That’s one of the marvelous things about the whole enterprise — people carefully design experiments based on their expectations, and yet surprises occur.
In fact, some scientists argue the whole reason to do experiments is to be astonished; if we could predict how everything would come out, it would be far easier to cure disease or predict human behavior. It’s when scientists find something unanticipated that they get excited: they discover hormones that trigger unexpected behaviors in cells, or gather evidence that defies decades of expectations about the fate of the universe.
Even so, a growing field is focused on using science to understand science itself. Can researchers develop more powerful tools to distinguish high-impact findings from those that are quickly forgotten? Is there a better way to identify which scientists or research areas should receive limited funding?
In a paper published in the journal Science on Thursday, a team from Northeastern University took a shot at creating a new tool for predicting whether a paper will be a major breakthrough. Many of the current ways to measure a scientist’s contributions turn out to be imprecise at best, they argue. For example, the researchers showed that publishing in a top journal, where journals are rated by a measure called “impact factor” based on their track records, can be irrelevant when trying to assess the future importance of any particular paper. The number of times a work is mentioned in other scientists’ papers can also fail to illuminate its importance, since those mentions add up over time, meaning scientists who have simply been in the field longer may appear more influential.
In the new work, Albert-László Barabási, a physicist who works in the emerging field of analyzing networks, found that with four to five years of data on how a paper has been received by the scientific community, it is possible to make a fairly good prediction of its long-term influence. He used a barometer called “fitness,” a quantitative measure of how the community responds to a new piece of science.
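The article does not spell out how “fitness” is actually computed. As a rough illustration of the general idea, citation models of this flavor fit a paper’s early citation history to a saturating growth curve and read off a single fitness parameter; papers with higher fitness are predicted to accumulate more citations in the long run. The sketch below is an illustrative toy, not the paper’s actual method: the lognormal aging curve, the parameters `mu`, `sigma`, and `m`, and the grid-search fit are all assumptions made for the example.

```python
import math

def predicted_citations(t, lam, mu, sigma, m=30.0):
    """Cumulative citations expected at time t (years after publication).

    A toy saturating-growth model: citations accumulate along a
    lognormal "aging" curve, scaled by a fitness parameter lam.
    m, mu, and sigma are illustrative constants, not values from the paper.
    """
    # Standard normal CDF of (ln t - mu) / sigma, via the error function.
    phi = 0.5 * (1.0 + math.erf((math.log(t) - mu) / (sigma * math.sqrt(2.0))))
    return m * (math.exp(lam * phi) - 1.0)

def ultimate_impact(lam, m=30.0):
    """Long-run citation count: as t grows, the aging CDF approaches 1,
    so cumulative citations saturate at m * (e^lam - 1)."""
    return m * (math.exp(lam) - 1.0)

def fit_fitness(observed, mu=1.0, sigma=1.0):
    """Estimate lam from a handful of (year, cumulative citations) pairs
    by least squares over a coarse grid -- the "four to five years of
    data" idea from the article, in miniature."""
    grid = [i / 100.0 for i in range(1, 501)]
    return min(grid, key=lambda lam: sum(
        (predicted_citations(t, lam, mu, sigma) - c) ** 2
        for t, c in observed))
```

Under these assumptions, feeding the fitter five years of citation counts yields a fitness estimate, and `ultimate_impact` converts that single number into a long-run prediction, which is the appeal of such a measure over raw citation tallies that simply grow with time.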
“‘Can you predict success?’ is the motivating factor,” Barabási said.
Barabási said his team is considering making the tool into something that could be easily used on any published paper, providing a more accurate method than traditional measures that already play a role in how scientists are hired and rewarded with grants.
In an accompanying editorial, James Evans, a sociologist from the University of Chicago, praised the research but also warned of its limits. A tool that identifies the research that will be important in the long term could, he argues, actually lead to self-fulfilling prophecies: research that becomes important in part because of the prediction.