The Outdated Reliance On Animal Data
Publishing scientific results in peer-reviewed journals is a key part of the research process, but it is not without biases. Publication bias is a journal’s preference to publish a study based on the direction or strength of its results. “Animal-reliance bias” is a newly named counterpart: a preference for studies that use animal-based experimental methods, even when those methods are unnecessary. Peer reviewers and editors can request, or even require, that researchers add conventional animal data to “validate” studies originally produced using human biology-based approaches.
Today, more researchers are questioning why animal data is still considered the gold standard by peer reviewers and publications. Animal-free methods have advanced over the past decade, overcoming many of the limitations of animal-based approaches; compared with animal data, nonanimal methods model human disease more reliably and translate into treatments more predictably. Beyond its ethical problems, animal-reliance bias also creates practical ones: it can squander resources, delay publication, and hinder scientific progress.
In this paper, a team of researchers laid the groundwork for studying animal-reliance bias by attempting to confirm its existence in the scientific community, identify its causes, and develop strategies to prevent it. To do so, they surveyed 68 academics around the world about their experiences with animal-reliance bias in the peer review process. While the study has a small sample size and drew on a “convenience sample” of respondents within the authors’ existing networks, the results nevertheless shed interesting light on the phenomenon.
Most respondents worked in research or academic institutions in the U.S. and South Korea, primarily in medicine and clinical research (28%) or molecular and cellular biology (21%). Although 44% of respondents reported never having used animal experiments in their careers, 31% said they rarely, sometimes, or often used animal experiments solely to preempt peer reviewer requests.
Additionally, 46% of respondents (31 people) had been asked at least once by a reviewer to add animal experimental data to a study that used no animal methods. Of these respondents, only three thought the request was justified, 14 felt it was sometimes justified, and 11 did not think it was justified.
While the data is limited and should be interpreted with caution, the results suggest that animal-reliance bias is present in the peer review process. The authors argue that such bias stems from emotional rather than objective reasoning, consistent with previous research showing that reviewers’ judgments of a study often rest on their prior beliefs. In this study, respondents likewise indicated in open-ended survey questions that reviewers requested animal data out of habit, a personal preference for animal-based methods, or unawareness of the benefits of animal-free methods.
Potential solutions to address animal-reliance bias include peer review training and accreditation. The review process could also be made more transparent by disclosing the names of the reviewers assigned to a paper, any requests to add animal data, and conflicts of interest. This approach is called “open review,” and evidence suggests that open review processes are perceived as higher quality and as providing more valuable feedback for researchers and scientists.
While publication bias is nothing new, bias toward animal research methods remains deeply ingrained in the scientific community. Eliminating this type of bias would improve science communication and transparency, and would help develop research methods that do not rely on animals and are better suited for clinical use in humans.
