Scientists need to take a different approach to early-stage cancer research
Preclinical studies, the work scientists do before testing on people, don't get as much attention as clinical trials, but they are the first step toward possible treatments and cures. It is therefore important that preclinical results be reliable. When they are wrong, researchers waste resources chasing false leads. Worse, false results can send flawed treatments into human trials.
Last December, the Center for Open Science (COS) published the disturbing results of a reproducibility study. The Reproducibility Project: Cancer Biology took eight years and cost US$1.5 million. Working with the research marketplace Science Exchange, independent scientists found that 50 experiments drawn from 23 leading published papers had no better chance of replicating their original results than a coin flip.
Praise and controversy marked the project from the start. Nature hailed replication studies like this one as "best practice" in science. But Science noted that reactions from some scientists whose research was selected ranged from "irritation to anxiety to anger," and that this hampered the replication effort. None of the original experiments was described in enough detail for outside scientists to repeat it, a third of the original authors declined to cooperate, and some were outright hostile.
COS CEO Brian Nosek said the findings "raise challenges to the validity of preclinical research in cancer biology." Acknowledging that biomedical research is not always as rigorous or transparent as it should be, the US National Institutes of Health (NIH), the world's largest funder of biomedical research, has announced plans to improve on both counts.
I have been teaching and writing about good scientific practice in psychology and biomedicine for over 30 years. I have reviewed countless grant applications and journal manuscripts, and I have to say: these findings do not surprise me.

Incentives to advance careers at the expense of scientific credibility undermine two pillars of sound science: transparency and unbiased rigor. Often, proposed preclinical studies (and, surprisingly, published, peer-reviewed ones) do not follow the scientific method. And scientists often don't share their publicly funded data, even when the journal that published the work requires it.
Controlling for bias
Many preclinical studies lack rudimentary controls for bias, which are taught in the social sciences but rarely in biomedical disciplines such as medicine, cell biology, biochemistry, and physiology. Bias control is a key part of the scientific method because it lets scientists separate experimental signal from procedural noise.
Confirmation bias, the tendency to see what you expect to see, is usually countered by a procedure called "blinding." Consider the double-blind design of clinical trials, in which neither the patients nor the research team knows who is receiving the placebo and who is receiving the drug. In preclinical studies, keeping experimenters unaware of which sample is which minimizes the risk that they will, even unconsciously, alter their behavior to favor their hypothesis.
Seemingly insignificant differences, such as whether a sample was processed in the morning or the afternoon, or whether an animal was caged in the top or bottom row, can affect results. That is less far-fetched than it sounds: small changes in the microenvironment, such as light exposure or ventilation, can alter physiological responses.

If all the animals receiving a drug are housed in one row and all the animals not receiving it in another, any difference between the two groups may be due to the drug, the location, or an interaction of the two. You couldn't tell which was the real cause, and neither could the scientists.
Randomizing sample selection and processing order minimizes such procedural bias, makes results easier to interpret, and increases the chance that they will reproduce.
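As a concrete illustration, here is a minimal Python sketch of how a study might combine randomization and blinding. The animal IDs, group sizes, and coded labels are hypothetical, not taken from any study discussed here.

```python
import random

random.seed(42)  # fixed seed so the illustration is repeatable

# Hypothetical subjects: 20 mice to be split into two groups of 10.
animals = [f"mouse_{i:02d}" for i in range(1, 21)]

# Randomize group assignment instead of using convenience order.
random.shuffle(animals)
treatment, control = set(animals[:10]), set(animals[10:])

# Blinding: give every animal a neutral code so the experimenter
# handling the samples cannot tell which group it belongs to.
codes = {animal: f"S{i:02d}" for i, animal in enumerate(animals, 1)}

# The key linking codes to groups is held by a third party and
# opened only after all measurements have been recorded.
key = {codes[a]: ("treatment" if a in treatment else "control")
       for a in animals}

# Randomize processing order too, so time-of-day effects do not
# line up with treatment group or cage position.
processing_order = random.sample(sorted(codes.values()), len(codes))
print(processing_order)
```

The point is not the code itself but the discipline it encodes: group assignment, labeling, and processing order are all decided by chance, not by the experimenter's convenience.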
Many of the replication experiments were blinded and randomized, but it is not known whether the same was true of the original experiments. All that is known is that of the 15 original animal experiments, only one mentioned randomization and none mentioned blinding. It would not be surprising if many of the original studies used neither.
Training and statistics
By one estimate, more than half of the million biomedical articles published each year rest on biased study designs, wasting as much as 85% of the roughly US$100 billion spent annually on (mostly preclinical) research, or about US$85 billion a year.
In a widely circulated paper, former industry scientist Glenn Begley reported that only six of 53 landmark academic studies, about 11%, could be replicated. He listed six sound research practices, including blinding. The six studies that replicated followed all six practices; the 47 that did not replicate followed few or, in some cases, none.

Another way to distort results is to misuse statistics. As with blinding and randomization, the lack of transparency makes it impossible to tell which of the original studies in the Reproducibility Project misused statistics. But the practice is, again, common.
A glossary of terms catalogs poor data-analysis practices that can produce statistically significant (but false) results: HARKing (Hypothesizing After the Results are Known), p-hacking (repeating statistical tests until a desired result appears), and following a chain of data-dependent analysis decisions, known as the "garden of forking paths," until a publishable result emerges.
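To see why p-hacking inflates false positives, consider a minimal simulation, a sketch assuming normally distributed measurements with illustrative batch sizes and thresholds, not drawn from any study mentioned here. Both groups are sampled from the same distribution, so every "significant" result is false, yet testing after each new batch of data and stopping at the first p < 0.05 produces far more than the nominal 5% of false positives.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def optional_stopping(batch=10, max_n=100, alpha=0.05):
    """One p-hacked 'experiment': add data in batches and run a
    t-test after each batch, stopping as soon as p < alpha."""
    a, b = [], []
    while len(a) < max_n:
        # Both groups come from the same distribution: no real effect.
        a.extend(rng.normal(0.0, 1.0, batch))
        b.extend(rng.normal(0.0, 1.0, batch))
        if stats.ttest_ind(a, b).pvalue < alpha:
            return True  # declared "significant": a false positive
    return False

runs = 2000
false_positives = sum(optional_stopping() for _ in range(runs))
# A single pre-planned test would be "significant" in ~5% of runs;
# optional stopping pushes the false-positive rate far higher.
print(f"False-positive rate: {false_positives / runs:.1%}")
```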
These practices are common in biomedical research. Decades of advocacy by methodologists, and an unprecedented statement from the American Statistical Association calling for changes in data-analysis practice, have gone largely unheeded.
A serious but fixable problem

People hostile to science should not gloat over these findings. The achievements of preclinical science are real and impressive; years of preclinical research, for example, led to the mRNA vaccines against Covid-19. And most scientists do their best within a system that rewards fast, dramatic results over reliable but slower ones.
Science, for all its strengths and weaknesses, is a human creation. Practices that produce sound science should be rewarded, and those that don't should be discouraged, without stifling innovation.
Changing incentives and enforcing standards are the most effective ways to improve scientific practice: they ensure that scientists who value transparency and rigor over speed and flash can succeed. Such reforms have been tried before, with limited success. This time could be different. The Reproducibility Project: Cancer Biology, and the NIH policy changes it is prompting, could be just the push needed to make them work.