Every year in the United States alone, close to a million people undergo a medical procedure involving a balloon angioplasty, a technology that helps restore blood flow to the heart and lungs by widening arteries and blood vessels that have constricted due to either congenital defects or cholesterol buildup.
The way it works is tremendously clever: A specialized catheter is inserted into an artery until it encounters the blockage. Once it does, a tiny balloon surrounding the catheter inflates, widening the passage. A stent is put in place to keep the artery open after the balloon is deflated, the catheter is removed, and blood flow is restored.
The technology is lifesaving. One of the most famous medical cases in its history involved a three-year-old girl named Charlotte Evert, whose cardiovascular system had plagued her since birth. The angioplasty treatment allowed the young girl to avoid a risky heart-lung transplant, a procedure rarely attempted on a child that young, and enabled her to live a normal life.
The balloon angioplasty was developed in the 1970s by the Swiss physician Andreas Grüntzig, who tested it on both cadavers and on dogs. It’s one of many commonplace medical technologies and treatments that were first tested on animals before successfully being used on humans.
Animal testing has never been uncontroversial or morally simple. Its supporters argue that the benefits of conducting these tests are clear. Other individuals and organizations disagree, claiming the tests cause untold suffering to countless animals while producing questionable medical gains.
Almost every facet of this discussion has its detractors and its supporters, its reasonable voices and its fringe opinions. All of them are, in some way, trying to address the following questions: If we did away with animal testing—sometimes referred to more euphemistically as animal research—would we be able to accept a world in which more humans suffered or even died as a result? Is it even really necessary? Does it actually help? Aren’t there other ways to make medical progress?
There are no definitive, easy answers to those questions, and some don’t even accept their premises. What is clear, however, is that the issue of animal testing is a negotiation that all individuals in society would do well to be a part of, one that continues to evolve alongside time, technology, and (hopefully) wisdom.
Why animal testing is necessary and beneficial
A significant number of medical institutions, medical professionals, and private research groups argue for the necessity of animal testing. Stanford Medicine is one such proponent, asserting that using animals in particular types of biomedical research benefits both humans and animals by enabling the discovery of the “causes, diagnoses, and treatment” of diseases, thereby helping to ease and eliminate suffering in the world on a large scale.
"Mammals are essential to researchers because they are the closest to us in evolutionary terms."
And they have a point. Just about every human alive today has benefited from advances in medicine made possible by animal testing. Before he died in 2019, Kurt J. Isselbacher, the former director of Massachusetts General Hospital, noted that many modern medical wonders, from the radioactive iodine used to scan patients' thyroid glands to the anticoagulants used to prevent blood clots, had their origin in animal testing.
Even the polio vaccine was born out of tests on monkeys, and that’s a treatment that the Centers for Disease Control and Prevention estimates to have saved 500,000 lives and prevented 10 million cases of paralysis since 1988 alone. More recently, Pfizer and Moderna tested their COVID-19 vaccines on mice and macaques.
Stanford Medicine also makes a compelling case for treating those animals ethically and humanely, noting that scientific study needs to be a reliable process. It is a well-known feature of the scientific method that results must be replicable to be valid at all. If researchers treat animals poorly, the data those animals produce will be neither sound nor trustworthy. This is encouraging because it implies that even those who don’t care about animals in any meaningful, ethical way would still have an incentive to treat them humanely.
Another argument for the relevance of animal testing states that some animals are so similar to humans in their genetic and physiological makeup that results from testing them translate directly to our own species.
As the National Academy of Sciences writes in its book on the subject, Science, Medicine, and Animals, “Some animals have biological similarities to humans that make them particularly good models for specific diseases [...] In particular, mammals are essential to researchers because they are the closest to us in evolutionary terms.”
It’s hard to refute this point. We share over 98 percent of our DNA with mice, one of the most commonly used lab animals on the planet, and many other species are indeed vulnerable to the same kinds of ailments that we are. Nothing that we currently know of, claim research groups like the Flemish Institute for Biotechnology (VIB), can truly substitute for a full-body system. Many diseases, they explain, “are a complex interaction between various components, cells and tissues, in a three-dimensional structure.”
The shorter natural life spans of these animals also mean researchers can observe how drug treatments manifest over a full lifetime, or even across multiple generations. This offers a window into how the effects of a particular treatment may ripple out to others in the actual context of a biological and social setting.
One practical reason to use animal models, according to the National Human Genome Research Institute, is that the kinds of tests scientists want to do are just not legally allowed with human subjects, and a stand-in is needed.
This raises a question that no one who advocates for animal testing seems to be able to answer: If researchers aren’t allowed to conduct most of the kinds of experiments they’d like to on human subjects, why, then, does society find it permissible to do so on animal subjects?
Why animal testing is needless and harmful
This constitutes one of the foundational cases against animal testing: the amount of suffering and the sheer number of animals involved mean that any benefits to animals or humans fall well short of justifying the means used to produce them. There is evidence to support this claim.
"We have cured mice of cancer for decades. It simply didn’t work in humans."
One of the more curious facts about the animal testing debate lies in the animals we actually test on and the self-awareness or consciousness levels we ascribe to them. Figures from the U.S. Humane Society and others show that rodents like mice and rats are by far the most commonly used in tests, usually followed by flies and fish, birds, rabbits, farm animals, significantly fewer cats and dogs, and a much smaller number of non-human primates like monkeys and chimpanzees.
The striking thing about this scale is that the more likely an animal is to have a self-awareness similar to ours, the fewer of those animals you see used in tests, revealing the implicit notion that the empathetic line gets drawn definitively at “human-like.” However, given that the list of animals demonstrated to possess self-awareness has grown to include the great apes, dolphins, elephants, some species of birds, and now potentially even some fish species, is this kind of testing any more justifiable than it would be on humans?
Beyond that question is the possibility that at least some animal testing either isn’t capable of producing applicable results for human biology or is a bad model for the concept altogether. In the late 1990s, the former director of the National Cancer Institute, Dr. Richard Klausner, famously remarked that “The history of cancer research has been a history of curing cancer in the mouse. We have cured mice of cancer for decades. It simply didn’t work in humans.”
More recent work confirms this, revealing that the “average rate of successful translation from animal models to clinical cancer trials is less than 8%,” according to a 2014 article published in the American Journal of Translational Research. Similarly, a 2013 study published in the Proceedings of the National Academy of Sciences showed that mice are poor models for inflammatory diseases in humans, of which cancer is often the end result.
Reducing harm to animals in the lab
The “Three R’s” of animal testing are known to anyone actively involved in it. They represent the principles of “replacement, reduction, and refinement.”
If a study can be conducted without the use of animals, then it’s imperative to replace them with cell models, lab-grown tissues, or something else. The reduction principle states that, if animals are indeed deemed necessary to a study, then the absolute minimum number must be used. Refinement entails that researchers make any testing carried out on animals as painless and brief as possible and that such techniques continue to be refined over time to further this aim.
"We simply do not need the numbers of animals that were once required for our experiments."
At the very least, it seems feasible that a dramatic reduction in the number of animals used in testing is possible in the here and now.
In 2019, the UK-based Sanger Institute, a genetics laboratory that helped sequence the human genome, announced it would no longer operate its animal facility, a department that bred generations of rats, mice, and zebrafish specifically for testing purposes.
Explaining the decision to The Guardian, Jeremy Farrar, director of the trust that oversees the institute, said, “New laboratory techniques have recently been developed which mean we simply do not need the numbers of animals that were once required for our experiments. We still need animals for our research, but not as many as in the past.”
However, others argue the entire premise is faulty, and that animal testing actually does far more harm than good. In a 2015 paper published in the journal Cambridge Quarterly of Healthcare Ethics, Dr. Aysha Akhtar, a neurologist and fellow at the Oxford Centre for Animal Ethics, argued that we’re looking for medical answers in the wrong place.
“It is possible [...] that animal research is more costly and harmful, on the whole, than it is beneficial to human health,” she writes, going on to say, “It would be better to direct resources away from animal experimentation and into developing more accurate, human-based technologies.”
Technological advances are indeed allowing scientists to reduce the need to study animals in the lab. Stem cells, lab-grown cell cultures, and complex three-dimensional cell tissue models have all come a long way in recent years.
As the National Institute of Environmental Health Sciences, a publicly funded research group, explains, “Computer programs with advanced systems based on large chemical databases can predict a chemical's toxicity, reducing the need for animal testing in some situations.” Given enough time and the right incentives, animal testing could one day disappear altogether.
No species can meet us at the table to discuss the merits and harms of conducting animal testing. This makes the issue somewhat unique as a moral and ethical quandary—half its participants are effectively mute.
"People value individual interest—often against the interest of the group."
Philosophy has a significant role to play here, as one of the main moral drivers of animal testing is the idea that the sacrifice of a few is justified when it brings about good for the many. That kind of altruism is appealing to us in many ways, and both history and pop culture are full of examples of this kind of behavior, something we generally label as heroic. But the other side of that coin is arguably just as noble, and it’s worth looking at the value of self-preservation.
In a fascinating 2018 study published in the Journal of Cognition and Culture, researchers asked participants from nine different countries moral questions about personal sacrifice and the sacrifice of others for increased group welfare.
Despite wide variation in the cultures involved in the research, the results were surprisingly uniform. “Across all cultures,” the authors observed, “we found that people value individual interest—often against the interest of the group—when they grant people the right not to sacrifice their welfare in helping others, and when they take into account harms to individuals rather than just maximizing the number of lives saved.”
In other words, humans widely acknowledge that it’s not just about numbers and saving the maximum number of lives. The right of a person not to sacrifice their well-being to help others is as fundamental as any other. It’s entirely legitimate to wonder whether humans should defend this right in animals as well.