Race to the bottom: How competition to publish first can hurt scientific quality

Key Takeaways

  • Scientific research is a critical piece of R&D. Understanding what motivates scientists has important economic implications.

  • A primary motivator in science is the credit associated with publishing first. But the race for a scoop leads to lower-quality work, which may threaten true scientific progress.

  • Government subsidies of collaborative research can improve research quality by blunting the incentive to cut corners.

  • Several scientific journals have recently instituted “scoop protection policies” aimed at curbing unhealthy competition.


Credit for new ideas is critical for research scientists. It is the currency of scientific careers that builds reputations and leads to funding, promotions, and prizes (Stephan, 1996). And where does this credit come from? As described in 1957 by the sociologist Robert K. Merton, credit in science comes, at least in part, from disclosing one’s findings first. Merton and others have described this as the “priority rule” in science.

Given the importance of priority, it is not surprising that scientists compete, sometimes fiercely, to publish their findings first. There are many well-known historical examples of priority races and disputes. Isaac Newton and Gottfried Leibniz fought bitterly over which one of them deserved credit for inventing calculus. Charles Darwin was distraught upon receiving a manuscript from Alfred Wallace, which bore an uncanny resemblance to Darwin’s (yet unpublished) On the Origin of Species (Darwin, 1887). More recently, Robert Gallo and Luc Montagnier engaged in an intense public conflict over who first discovered HIV. The dispute was so acrimonious (and the research topic so important) that two national governments had to step in to broker a peace (Altman, 1987).

While priority disputes are nothing new, there is a growing concern that science is becoming more competitive and that this competition may hurt scientific progress. As grants become increasingly selective, scientists must spend more time writing proposals, leading to “crippling demands” that subtract time from thinking, reading, and conducting research (Alberts et al., 2014). Other researchers (Walsh and Hong, 2003; Anderson et al., 2007) are concerned that increasing competition has stifled collaboration and promoted a culture of secrecy. Some have speculated that competition, and in particular the competition to publish first, has caused researchers to cut corners and produce lower-quality science (Fang and Casadevall, 2015; Vale and Hyman, 2016).

This final concern, that competition in science leads to lower-quality work, is the topic of our research (Hill and Stein, 2021a) discussed in this policy brief. And while the issue may not immediately appear to be in the realm of economics, it’s important to note that scientific research is a critical input into the applied research and development that drives economic growth (Nelson, 1959; Arrow, 1962). Therefore, understanding the incentives that scientists face has important economic implications.

Structural biology and the protein data bank

Our research focuses on the field of science known as structural biology. Researchers in this field use experimental and computational techniques to build models of the three-dimensional structure of proteins. Understanding how proteins perform their functions inside cells is one of the key themes of molecular biology. Moreover, these 3D models are important for understanding diseases and developing drugs and vaccines.

Structural biologists report these 3D models in a centralized database known as the Protein Data Bank (PDB). Established in 1971, the PDB is a worldwide repository for protein structures. Today, it contains upward of 150,000 structures and continues to grow at a rate of about 10 percent a year. The PDB serves as our primary data source.

A unique feature of structural biology is that the 3D models can be scored in terms of their quality. In other words, we have an objective way of distinguishing between high-quality and low-quality structures. These scores are computed by algorithms and recorded in the PDB. Moreover, the PDB records key dates for each protein that allow us to see how long scientists spend working on each structure.
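
To make these measures concrete, the sketch below shows how one might construct a maturation period and a rough quality proxy from PDB-style records. It is only an illustration: the file name and columns (collection_date, deposit_date, resolution, r_free) are hypothetical placeholders, and the validation scores actually recorded in the PDB are computed by dedicated algorithms rather than this simple average.

```python
# Minimal sketch: build a "maturation period" and a crude quality proxy from
# PDB-style metadata. File and column names are hypothetical placeholders;
# the PDB's validation scores are computed by dedicated algorithms, not this way.
import pandas as pd

structures = pd.read_csv(
    "pdb_metadata.csv",  # hypothetical extract of PDB deposition records
    parse_dates=["collection_date", "deposit_date"],
)

# Maturation: days between collecting experimental data and depositing the model.
structures["maturation_days"] = (
    structures["deposit_date"] - structures["collection_date"]
).dt.days

# Crude quality proxy: lower resolution (in angstroms) and lower R-free are better,
# so standardize each, flip the sign, and average.
for col in ["resolution", "r_free"]:
    structures[col + "_z"] = -(structures[col] - structures[col].mean()) / structures[col].std()
structures["quality_proxy"] = structures[["resolution_z", "r_free_z"]].mean(axis=1)

print(structures[["maturation_days", "quality_proxy"]].describe())
```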

How does competition impact the quality of science?

Our analysis starts by noting that not all proteins are equally important. For example, an unsolved protein structure may be relevant for drug development; a successful structure determination that provides a 3D map of such a protein would be published in a top journal and highly cited. We call this a “high-potential” protein or project. In our data, we can use characteristics of the protein structures to identify “high-potential” and “low-potential” structures.

Since high-potential projects yield more citations, more scientists ought to be interested in working on these projects. Figure 1 shows that this is indeed the case. Proteins that we code as being high-potential (i.e., structures that fall on the right side of the graph) are deposited more frequently in the PDB, suggesting that multiple research teams are working on them simultaneously. In other words, these high-potential proteins are more competitive.

Figure 1. The Effect of Project Potential on Competition

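As a rough illustration of the tabulation behind Figure 1, one could count how many structures of the same protein are deposited and compare that count across levels of potential. The sketch below assumes hypothetical columns (pdb_id, protein_family, potential); the paper’s own measures are constructed differently.

```python
# Sketch: how many deposits does each protein attract, by decile of estimated
# potential? Column names are hypothetical placeholders.
import pandas as pd

structures = pd.read_csv("pdb_metadata.csv")  # hypothetical extract

# Competition proxy: number of structures deposited for the same protein family.
per_protein = structures.groupby("protein_family").agg(
    n_deposits=("pdb_id", "count"),
    potential=("potential", "mean"),
)

# Average competition within deciles of potential (Figure 1 plots a version of this).
per_protein["potential_decile"] = pd.qcut(
    per_protein["potential"], 10, labels=False, duplicates="drop"
)
print(per_protein.groupby("potential_decile")["n_deposits"].mean())
```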

If these high-potential proteins are more competitive, then these should be the projects that scientists rush the most. Figure 2 confirms this intuition. High-potential structures have shorter “maturation periods,” i.e., they are completed more quickly as scientists race to claim priority.

Figure 2. The Effect of Project Potential on Project Maturation


Do these shorter maturation periods translate to lower-quality science? Figure 3 suggests that the answer is yes. The negative slope in Figure 3 implies that the same high-potential structures that were the most rushed also have the lowest quality scores. In other words, because more important projects are more competitive, they are executed more poorly.

Figure 3. The Effect of Project Potential on Project Quality

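A simple way to summarize the patterns in Figures 2 and 3 is to regress maturation and quality on project potential. The sketch below assumes a table that already contains the hypothetical measures from the earlier snippets; it is an illustration of the pattern, not the paper’s actual specification, which includes additional controls.

```python
# Sketch: regress maturation and quality on project potential. Negative
# coefficients on `potential` mirror Figures 2 and 3. Column names are
# hypothetical placeholders; this is not the paper's specification.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical table with maturation_days, quality_proxy, and potential columns.
structures = pd.read_csv("pdb_analysis.csv")

maturation_fit = smf.ols("maturation_days ~ potential", data=structures).fit()
quality_fit = smf.ols("quality_proxy ~ potential", data=structures).fit()

print(maturation_fit.params["potential"])  # expected negative: rushed projects
print(quality_fit.params["potential"])     # expected negative: lower quality
```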

More collaborative science: Structural genomics consortia

Credit is particularly important for researchers on the tenure track, who must “publish or perish.” However, not all the researchers in our sample are seeking tenure. In particular, about 20 percent of the protein structures in our sample are deposited by researchers working in federally funded structural genomics (SG) groups.

Since the early 2000s, members of these groups have focused their efforts on solving and depositing protein structures in the PDB. Inspired by the success of the Human Genome Project, SG groups have a different mission than university labs. They focus on solving a wide array of protein structures, in an effort to achieve full coverage of the human “proteome,” the catalog of all human proteins (Grabowski et al., 2016).

Importantly for our purposes, SG groups are less focused on winning priority races than their university counterparts. Indeed, the vast majority of structures solved by structural genomics groups are never published, suggesting that researchers in these groups are focused on data dissemination rather than priority.

For example, the Structural Genomics Consortium (an SG center based in Canada and the United Kingdom) describes its primary aim as “to advance science and [be] less influenced by personal, institutional or commercial gain.” Therefore, we view structures deposited by SG groups as a set of structures that were published by scientists who were not subject to the usual level of competition for priority.

Figure 4 below replicates Figure 3, but separates the structures deposited by university researchers (in blue) from those deposited by SG researchers (in orange). As we saw in Figure 3, there is a negative relationship between a project’s potential and its quality among the university researchers. However, this relationship is weaker (i.e., the slope is flatter) for the structural genomics researchers. In other words, when scientists are less concerned with priority, they don’t feel the same need to rush when working on important (and competitive) projects. Therefore, they are able to execute these high-potential projects with higher quality.

Figure 4. The Effect of Project Potential on Project Quality: University Researchers versus Structural Genomics Researchers

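The comparison in Figure 4 can be formalized with an interaction term that lets the potential-quality slope differ for SG depositors. The sketch below again uses hypothetical column names, including an is_sg indicator; it illustrates the idea rather than reproducing the paper’s specification.

```python
# Sketch: does the potential-quality slope flatten for structural genomics (SG)
# groups? Column names (quality_proxy, potential, is_sg) are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

structures = pd.read_csv("pdb_analysis.csv")  # hypothetical analysis table

# `potential * is_sg` expands to potential + is_sg + potential:is_sg.
# A positive coefficient on potential:is_sg would indicate that SG structures
# pay a smaller quality penalty on high-potential (competitive) projects.
fit = smf.ols("quality_proxy ~ potential * is_sg", data=structures).fit()
print(fit.summary().tables[1])
```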

Policy implications

Our results suggest one way in which heightened competition may be bad for science: It can induce researchers to rush in their pursuit to publish first, which can degrade the quality of the scientific work they produce.

With this consequence in mind, how might the scientific community reduce this kind of competition?

One idea is to have the government subsidize more centralized and collaborative scientific efforts. The Protein Structure Initiative (which led to the formation of structural genomics consortia) is one such example. The Human Genome Project is another. In both instances, governments subsidized the initiatives with the understanding that researchers would collaborate and share their results 鈥 both with each other and with the public. Moreover, credit in these efforts was shared more equally among the entire group, rather than bestowed on a single researcher or lab.

Another way to blunt the competition to publish first is to reduce the premium for priority. As shown in Hill and Stein (2021b), researchers perceive a large difference in the credit allocated to a team that publishes a finding first versus the team that publishes second. Policies that reduced this (real or perceived) credit gap might minimize the pressure to publish first, leading scientists to rush less.

Scientific journals appear to agree with this assessment. Several journals have recently instituted “scoop protection policies.” These policies represent a commitment by journals to publish work that has recently been scooped, ensuring that these projects still have an avenue for publication. In other words, they help increase the credit and visibility that the scooped paper receives. The journals’ reasoning for these policies is very much in line with the logic we have outlined above. For example, in 2017 the journal eLife wrote:

“We all know graduate students, postdocs and faculty members who have been devastated when a project that they have been working on for years is ‘scooped’ by another laboratory, especially when they did not know that the other group had been working on a similar project. And many of us know researchers who have rushed a study into publication before doing all the necessary controls because they were afraid of being scooped. Of course, healthy competition can be good for science, but the pressure to be first is often deleterious, not only to the way the science is conducted and the data are analyzed, but also for the messages it sends to our young scientists. Being first should never take priority over doing it right or the search for the truth. For these reasons, the editors at eLife have always taken the position that we should evaluate a paper, to the extent we can, on its own merits, and that we should not penalize a manuscript we are reviewing if a paper on a similar topic was published a few weeks or months earlier” (Marder, 2017).

Ultimately, competition shapes science in myriad ways. Some impacts may be positive. For example, competition may motivate researchers to work harder and faster and encourage timely publication and disclosure of results. However, the goal of this brief is to highlight one dimension along which competition may harm science: By putting a premium on being first, it encourages researchers to rush at the expense of scientific quality. We should be wary of this potential downside of competition when designing incentives for research scientists. Competition and the priority rule have been the norm in science for centuries, but increased collaboration may be instrumental in pushing science forward in the future.

References

  • Alberts, Bruce, Marc W. Kirschner, Shirley Tilghman, and Harold Varmus, “Rescuing US Biomedical Research from its Systemic Flaws,” Proceedings of the National Academy of Sciences, 2014, 111 (16), 5773–5777.
  • Altman, Lawrence K., “U.S. and France End Rift on AIDS,” The New York Times, 1987.
  • Anderson, Melissa S., Emily A. Ronning, Raymond De Vries, and Brian C. Martinson, “The Perverse Effects of Competition on Scientists’ Work and Relationships,” Science and Engineering Ethics, 2007, 13, 437–461.
  • Arrow, Kenneth J., “Economic Welfare and the Allocation of Resources for Invention,” in The Rate and Direction of Inventive Activity: Economic and Social Factors, Princeton University Press, 1962.
  • Darwin, Charles, The Life and Letters of Charles Darwin, Including an Autobiographical Chapter, Vol. 1, John Murray, 1887.
  • Fang, Ferric C., and Arturo Casadevall, “Competitive Science: Is Competition Ruining Science?” Infection and Immunity, 2015, 83 (4452).
  • Grabowski, Marek, Ewa Niedzialkowska, Matthew D. Zimmerman, and Wladek Minor, “The Impact of Structural Genomics: The First Quindecennial,” Journal of Structural and Functional Genomics, 2016, 17 (1), 1–16.
  • Hill, Ryan, and Carolyn Stein, “Race to the Bottom: Competition and Quality in Science,” Working Paper, 2021a.
  • Hill, Ryan, and Carolyn Stein, “Scooped! Estimating Rewards for Priority in Science,” Working Paper, 2021b.
  • Marder, Eve, “Scientific Publishing: Beyond Scoops to Best Practices,” eLife, 2017, 6.
  • Merton, Robert K., “Priorities in Scientific Discovery: A Chapter in the Sociology of Science,” American Sociological Review, December 1957, 22 (6), 635–659.
  • Nelson, Richard R., “The Simple Economics of Basic Scientific Research,” Journal of Political Economy, June 1959, 67 (3), 297–306.
  • Stephan, Paula E., “The Economics of Science,” Journal of Economic Literature, 1996, 34 (3), 1199–1235.
  • Vale, Ronald D., and Anthony A. Hyman, “Priority of Discovery in the Life Sciences,” eLife, 2016, 5.
  • Walsh, John P., and Wei Hong, “Secrecy is Increasing in Step with Competition,” Nature, 2003, 422 (6934), 801.
Author(s)
Carolyn Stein
Ryan Hill
Publication Date
December 2021