Why "It Makes Sense" Is Not Science
From George Washington's death to modern medicine, history shows that plausible ideas often kill, and only evidence can save us.
It would seem to make sense that, because masks provide a barrier to at least some COVID-19 virus particles, masking should prevent at least some COVID-19 infections. From this line of reasoning, it would follow that masks should be recommended for the prevention of COVID-19. But if George Washington’s ghost could speak to us today, it would object. From the wisps of time, the ghost would appear out of the haze and cry out:
“Do you not know that exactly the same reasoning was responsible for my death? And do you not know that you, who now mandate these masks, have killed me?”
On December 12, 1799, a freezing day of heavy snowfall, George Washington spent five hours inspecting his large plantation, as he had done countless times before. The next day, he complained of a sore throat and hoarseness. But he rode out again in heavy snow: there were trees he needed to mark for cutting. That night, he woke suddenly. He could barely speak or breathe. He told his wife Martha that he was feeling very sick. He deteriorated quickly.
The treatment that Washington’s physicians used to combat his illness was called bloodletting: an incision is made, and blood is let out of the wound. Washington’s doctors bled him five times over the course of a day, taking more than half a liter each time and twice taking more than a liter. That day, after his physicians had drained more than half his blood volume [1], more than 3.75 liters, he died [2].
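To see why "more than half his blood volume" is plausible, here is a rough back-of-the-envelope check. The per-kilogram figure is a standard textbook estimate and the body weight is my own assumption for illustration; neither number comes from the cited sources:

$$
V_{\text{blood}} \approx 70\ \tfrac{\text{mL}}{\text{kg}} \times 100\ \text{kg} \approx 7\ \text{L},
\qquad
\frac{3.75\ \text{L removed}}{7\ \text{L total}} \approx 54\%.
$$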
The bloodletting itself may have contributed significantly to Washington’s death, and may even have caused it outright. As one physician, commenting on the case in 2004, wrote:
“The extraction of more than half of his blood volume within a short period of time inevitably led to preterminal anemia, hypovolemia, and hypotension. The fact that General Washington stopped struggling and appeared physically calm shortly before his death may have been due to profound hypotension and shock.”
Yet Washington’s treatment had a pedigree in Western medicine stretching back more than 2,000 years. The Greek physician Hippocrates approved of it. Five hundred years later, the Roman physician Galen popularized it, and after Galen it was a staple of Western medicine. Many brilliant physicians and eminent authorities promoted bloodletting, including William Osler, one of the most renowned physicians of his era, in his famous 1892 textbook of medicine [3]:
“During the last decades we have certainly bled too little. Pneumonia is one of the diseases in which a timely venesection [bleeding] may save life. To be of service it should be done early. In a full-blooded, healthy man with a high fever and bounding pulse the abstraction of from twenty to thirty ounces of blood is in every way beneficial.”
And Osler’s endorsement of the practice remained in his textbook until as late as 1923 [4]. In the early 20th century, Boston physicians who refused to bleed patients with pneumonia were still being judged negligent by the authorities [4].
Surprising as it may seem to readers today, the practice of bloodletting rested on a solid conceptual basis. Consider: during acute infectious illness, the blood contains a significant fraction of whatever is responsible for the disease. Would it not be reasonable, then, to remove some of the blood to make the illness better? After all, by removing blood, one removes some of the very stuff that is causing the disease.
Without evidence, it is not easy to make a good argument against this.
But nobody really knew whether this treatment worked, one way or the other, until it was put to the test in 1816 by a young medical student named Alexander Lesassier Hamilton [5]. Hamilton’s finding: bloodletting caused ten times more deaths in people with acute infectious illness than doing nothing. A plausible hypothesis, proven wrong and deadly. Many of the physicians who practiced bloodletting were talented and intelligent, and almost all had good intentions. Yet millions throughout history met their demise at the hands of this staple of Western medicine.
Consider a more modern example [6]. Studies in the 1970s consistently showed that people who had had a recent heart attack and who had more abnormal heart rhythms were at higher risk of subsequently dying of cardiac causes. Would it not be reasonable, then, to normalize these abnormal heart rhythms in order to prevent those deaths? From the 1970s through the early 1990s, millions of people with abnormal heart rhythms were prescribed drugs called antiarrhythmics for exactly this purpose.
Antiarrhythmics worked brilliantly, at least on their own terms: they could reduce abnormal heart rhythms in a way that many cardiologists found miraculous. But from their very inception, there was controversy within the medical community and among regulators about whether these medicines should be made widely available without data showing that reducing these rhythms actually reduced deaths. Those who advocated for the drugs won the day, and the drugs became the most widely prescribed medicines for heart attack survivors.
The consequences were catastrophic. The first rigorous large-scale randomized controlled trials that tested whether these drugs were beneficial for heart attack survivors were published in 1991 [7] and 1992 [8]. These trials showed that, compared to patients receiving a sugar pill, patients receiving the drug had a nearly three-fold higher risk of dying. In other words, for every patient death that would normally happen, two additional deaths were caused by the drug. While these “antiarrhythmic” drugs did reduce abnormal rhythms under some circumstances, they actually caused arrhythmias under other conditions: they had proarrhythmic effects. Experts estimate that hundreds of thousands of patients died prematurely due to the widespread use of antiarrhythmics [9].
The cases of bloodletting and antiarrhythmics are not exceptional. They are the norm. In even more recent history, one need only look at the opioid epidemic. But one can also take a systematic approach and read the historical literature on drug development. One study examined roughly 41,000 drugs tested in clinical trials between 2000 and 2015 [10]: 86% of all promising drugs failed to show benefit or showed excessive harm when tested; among cancer drugs, the figure was 97%. That makes for success rates of 14% and 3%, respectively.
In another study, 101 promising basic science discoveries published in the most prestigious journals were followed for 24 years, to see if such promising findings predicted similar promise in actual medical practice [11]. Of these discoveries, only 19 led to a positive clinical trial result, while only 5 became licensed for clinical use, and only 1 became widely used. That is, fewer than 1% of promising findings in the world’s top journals actually translated into clinical care.
Historically speaking, a plausible new treatment is almost certain to fail in actual medical practice. As UCSF professor of medicine Vinay Prasad has put it, “The pretest probability that anything works in biomedicine is ~0%.” [12] Although absence of evidence is not evidence of absence, for interventions in biomedicine, these two are remarkably close. For practical purposes, they are nearly indistinguishable.
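One way to make Prasad's point concrete is in rough Bayesian terms. The sketch below reuses the success rates from the drug-approval study above as a prior; treating them that way is my own simplification, not a calculation from the cited sources:

$$
P(\text{works} \mid \text{no trial evidence}) \approx P(\text{works}) \approx 0.14 \ \text{(all drugs)}, \quad 0.03 \ \text{(cancer drugs)}.
$$

With a prior that low, having no trial evidence leaves a new therapy roughly where it started: probably ineffective. That is why, in practice, absence of evidence behaves almost like evidence of absence here.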
The above considerations are why evidence-based medicine is the dominant paradigm within medicine today. Evidence-based medicine demands rigorous evidence, not mere plausibility, before recommending any given therapy. This is why, prior to COVID-19, virtually no respected scientific organization recommended routine mask-wearing during pandemics.
But while bloodletting might have killed George Washington, surely masks could not cause harm, right? As is so often the case in medicine, the reality is much more complicated than that.
(To be continued in part 2.)
[1] https://thepermanentejournal.org/doi/pdf/10.7812/TPP/04.953?download=true
[2] https://constitutioncenter.org/blog/the-mysterious-death-of-george-washington
[3] https://web.archive.org/web/20070102144950/http://www.jameslindlibrary.org/essays/fair_tests/why-fair-tests-are-needed.html
[4] https://web.archive.org/web/20120313034620/http://www.library.ucla.edu/specialcollections/biomedicallibrary/12193.cfm
[5] https://jameslindlibrary.org/articles/alexander-lesassier-hamiltons-1816-report-of-a-controlled-trial-of-bloodletting/
[6] https://archive.org/details/deadlymedicinewh00moor/mode/1up?view=theater&q=50%2C000
[7] https://nejm.org/doi/full/10.1056/nejm199103213241201
[8] https://nejm.org/doi/full/10.1056/NEJM199207233270403
[9] https://archive.org/details/deadlymedicinewh00moor/page/287/mode/1up?view=theater&q=50%2C000
[10] https://academic.oup.com/biostatistics/article/20/2/273/4817524
[11] https://annalsofoncology.org/article/S0923-7534(20)42987-4/fulltext
[12] https://twitter.com/VPrasadMDMPH/status/1622025088609701888