Are Evidence-Based Medicine and Public Health Incompatible?

Yves here. IM Doc, whose father was a public health official, has been highly critical of evidence-based medicine. I have seen some doctors and medical writers defer to it to such a degree that it seems talismanic, a way for physicians and officials to turn their brains off and defer to authority. Specifically, evidence-based medicine studies often have very narrow application (even before getting to Big Pharma cherry-picking study parameters and statistical analysis), and insisting on evidence-based medicine (which costs money) as the only basis for treatment vitiates doctors' use of clinical data and public health officials' responses to emerging phenomena. I hope I have not oversimplified IM Doc's views with this recap.

By Michael Schulson, a contributing editor for Undark. His work has also been published by Aeon, NPR, Pacific Standard, Scientific American, Slate, and Wired, among other publications. Originally published at Undark

It’s a familiar pandemic story: In September 2020, Angela McLean and John Edmunds found themselves sitting in the same Zoom meeting, listening to a discussion they didn’t like.

At some point during the meeting, McLean — professor of mathematical biology at the University of Oxford, dame commander of the Order of the British Empire, fellow of the Royal Society of London, and then-chief scientific adviser to the United Kingdom’s Ministry of Defense — sent Edmunds a message on WhatsApp.

“Who is this fuckwitt?” she asked.

The message was evidently referring to Carl Heneghan, director of the Center for Evidence-Based Medicine at Oxford. He was on Zoom that day, along with McLean and Edmunds and two other experts, to advise the British prime minister on the Covid-19 pandemic.

Their disagreement — recently made public as part of a British government inquiry into the Covid-19 response — is one small chapter in a long-running clash between two schools of thought within the world of health care.

McLean and Edmunds are experts in infectious disease modeling; they build elaborate simulations of pandemics, which they use to predict how infections will spread and how best to slow them down. Often, during the Covid-19 pandemic, such models were used alongside other forms of evidence to urge more restrictions to slow the spread of the disease. Heneghan, meanwhile, is a prominent figure in the world of evidence-based medicine, or EBM. The movement aims to help doctors draw on the best available evidence when making decisions and advising patients. Over the past 30 years, EBM has transformed the practice of medicine worldwide.

Whether it can transform the practice of public health — which focuses not on individuals, but on keeping the broader community healthy — is a thornier question. During the Covid-19 pandemic, Heneghan and several other prominent EBM thinkers became influential critics of pandemic policies. The specifics of their critiques vary, but certain themes emerge: Again and again, they have argued that public health leaders like McLean relied too heavily on error-prone models, biased evidence, and simple intuition when making consequential decisions. At the same time, they say, those public health experts did little to gather rigorous data to back up their claims about the usefulness of interventions like mask mandates and school closures.

In response, some public health experts and physicians — including other prominent people in the EBM movement — have argued that researchers like Heneghan and his allies have overstepped their bounds, making unreasonable demands on public health that can be used to further political agendas.

Versions of this debate pop up regularly — over masks, school closures, and even in areas outside of pandemic policy, such as treatment for children experiencing gender dysphoria.

Taken alone, each can be understood as its own little flashpoint. But there’s a bigger debate underway — one that has profound implications for the future of public health practice. At stake are deep divisions over how scientific evidence should be used to make decisions, said Jonathan Fuller, a physician and a philosopher of medicine at the University of Pittsburgh. “Until we have a better handle on them,” he said, “I think these particular flashpoints are going to keep coming up.”


A foundational tale of the discipline of public health is a story about the triumph of sharp observation. During an 1854 cholera outbreak in London, the physician John Snow mapped the location of cholera deaths in the city. He noticed they clustered around a specific water pump and reasoned — correctly — that the disease spread via contaminated water. It was a lifesaving insight.

In contrast, the foundational tales of EBM are stories of how fallible human observation and reasoning can be. Since at least the 1970s, for example, some physicians observed that women who took estrogen and other hormones after menopause seemed to have lower rates of heart disease. Hormone pills were given to millions of women — at least until rigorous experiments suggested that some women getting the hormone treatments were actually more likely to suffer a range of harms, including heart attacks and breast cancer. (Today, the situation appears more nuanced: Evidence suggests the treatments are safe and even beneficial for some women, but may carry unforeseen risks for others.)

A 1992 paper by a group of physicians spelled out the solution to what they saw as a misguided medical approach — and helped launch the EBM movement. The paper presented a scenario: A junior medical resident sees a patient who has suffered a seizure for the first time in his life. There’s no obvious cause. The patient wants to know the odds that he will have another seizure.

One approach, the group writes, would be for the resident to ask a couple senior doctors to provide her with an educated guess, based on their experiences. Such an estimate is necessarily imprecise. “The patient leaves in a state of vague trepidation,” the group wrote. They labeled this approach “The Way of the Past.”

In “The Way of the Future,” the young doctor skips a consult with her superiors and instead goes to a computer. She performs a search in a database of medical papers, and finds 25 different published research papers on seizure recurrence. She scans through them all, and identifies one that actually collects and crunches data about how patients fare after seizures, providing a more precise estimate of how likely the man is to have another episode. The doctor conveys that risk to the patient, who “leaves with a clear idea of his likely prognosis.”

The goal, the EBM pioneers argued, was to bring the best available evidence more squarely into the practice of medicine.

Perhaps predictably, older physicians were not always elated about a movement devoted to de-emphasizing intuition and “unsystematic clinical experience,” in the words of the 1992 paper. It probably didn’t help that David Sackett, the physician widely described as the father of EBM, seemed to relish challenging authority. “He was a nice guy, but he was in your face,” recalled Jeffrey Aronson, a physician who worked with Sackett in the 1990s, at Oxford. “He didn’t hold back. If he thought you were on the wrong track, he would tell you in no uncertain terms.”

Younger doctors at Oxford, Aronson recalled, seemed to enjoy “something of the iconoclasm that he brought.” But, Aronson added, Sackett “rather raised hackles among the senior members” of the division. (Sackett died in 2015, at age 80.)

Despite that pushback, the movement was quickly influential. But bringing the best evidence into medical decision-making is challenging. There are more than 1.5 million papers published each year in biomedicine and the life sciences, by one recent estimate. Just a single search in a popular database can yield thousands of different studies and analyses. (The figures were less extreme, but still overwhelming, in the 1990s.) How should physicians sift through it all?

To address that problem, EBM practitioners created methods for evaluating the quality of scientific studies. In essence, they were coming up with ways to grade evidence.

Say a researcher wants to know whether estrogen hormones prevent heart disease in postmenopausal women. In one kind of study, researchers might explain how estrogen seems to affect the heart, and why it’s plausible that taking hormones would prevent heart disease. EBM approaches would rank that evidence as relatively low quality. Better, according to EBM: an observational study, in which researchers effectively survey women who already use hormones, and those who don’t, to see who has higher rates of heart disease. But even then, there are potential sources of bias: Maybe the kind of people who take estrogen hormones are also the kind of people who exercise more, and it’s actually the CrossFit sessions and the yoga classes — not the estrogen — helping their hearts.

The gold standard for research on clinical interventions, according to EBM, is the randomized controlled trial. In an RCT of hormone therapy, for example, some women would be randomly assigned to get hormone therapy, some would take a placebo, and then researchers would track their heart health. As a general rule, EBM practitioners have pushed for medical decisions to be grounded in high-quality RCTs whenever possible, rather than in other forms of evidence, even though good RCTs can be slow and costly.
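To make the contrast concrete, here is a toy simulation; everything in it is invented for illustration (the risk levels, the 70/30 uptake split, the assumption that the hormones have no true effect), and none of it comes from the article or any real trial. It shows how a lurking confounder such as exercise can make an observational comparison look protective while a randomized comparison does not.

import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical setup: exercise lowers heart-disease risk; the hormones have NO true effect.
exercise = rng.binomial(1, 0.5, n)
risk = 0.10 - 0.04 * exercise                      # true risk depends only on exercise

# Observational study: women who exercise are also more likely to take hormones (confounding).
takes_hormones = rng.binomial(1, np.where(exercise == 1, 0.7, 0.3))
disease = rng.binomial(1, risk)
obs_diff = disease[takes_hormones == 1].mean() - disease[takes_hormones == 0].mean()

# RCT: hormone assignment is random, so it is independent of exercise.
assigned = rng.binomial(1, 0.5, n)
disease_rct = rng.binomial(1, risk)                # same true risks; hormones still do nothing
rct_diff = disease_rct[assigned == 1].mean() - disease_rct[assigned == 0].mean()

print(f"Observational risk difference: {obs_diff:+.4f}  (looks protective, but it is confounding)")
print(f"Randomized risk difference:    {rct_diff:+.4f}  (close to the true effect of zero)")

In this contrived setup the observational difference comes out negative (apparently protective) while the randomized difference hovers near zero, which is the kind of bias the EBM evidence hierarchy is meant to guard against.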

Once doctors have decided how to identify high-quality research, they need some rigorous way of finding all those studies, amid the millions of published papers available. In the 1980s and ’90s, EBM practitioners helped develop something called a systematic review. They would comb through thousands of papers and then use a transparent method to rank all the evidence and synthesize it into a clear, simple conclusion. (It’s basically an exhaustive evidence inventory.) That way, a physician with a question doesn’t have to do all the searching themselves: They can simply refer to the systematic review.
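The quantitative step of such a synthesis is often a meta-analysis. Below is a minimal sketch of the standard fixed-effect, inverse-variance method; the study estimates and standard errors are made up for illustration and are not taken from any Cochrane review.

import math

# Hypothetical study results: (effect estimate, standard error), e.g. log risk ratios.
studies = [(-0.20, 0.10), (-0.05, 0.08), (-0.30, 0.15), (0.02, 0.12)]

# Fixed-effect inverse-variance pooling: weight each study by 1 / SE^2.
weights = [1 / se**2 for _, se in studies]
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

low, high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"Pooled effect: {pooled:.3f} (95% CI {low:.3f} to {high:.3f})")

Larger, more precise studies get more weight, which is one reason a well-done review can give a clearer answer than any single paper.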

The most influential group producing such reviews is Cochrane, founded by EBM pioneers in the 1990s. Cochrane has now published thousands of reviews, which are widely used to inform medical decisions.

Along the way, the EBM movement also developed a certain culture. “We tend to be skeptics,” said Gordon Guyatt, who coined the term “evidence-based medicine” in the early 1990s, as a physician-researcher at McMaster University in Canada. “It somehow attracts people with a skeptical bent.”

Often, EBM-oriented researchers have directed that skepticism toward medical interventions that, they feel, are based on thin evidence. Indeed, over the years, EBM researchers have challenged certain kinds of cancer screenings, the use of specific drugs for certain treatments, and certain post-operative practices, successfully revising their role in medicine.

“One of the things that evidence-based medicine is kind of pushing back against is an unbridled interventionism,” said Fuller, the Pittsburgh medical philosopher, who has written about the history of the movement. “If the standards of evidence are too low, you’re going to have lots of interventions, because there’s a lot of interested, invested parties that want to sell you things.” Raising the standards of evidence — as EBM aimed to do — will necessarily challenge some of those treatments, he added, and one result is “becoming less interventionist than you would have been before.”


Today, experts debate whether that skepticism toward interventions went too far during the Covid-19 pandemic.

When Covid-19 began spreading worldwide in early 2020, public health leaders were faced with a set of difficult choices: They had very little information about the new virus. At the same time, there was tremendous pressure to act quickly to slow down the spread. Authorities used what evidence was available to make decisions. Computer models suggested that measures like school closure might slow down the spread of Covid-19; schools closed. Some laboratory and clinical evidence suggested masks would slow down the spread of Covid-19; mask mandates were soon in place.

Among researchers, it was no secret that the evidence behind many of these measures was less-than-ideal. Computer simulations are prone to error and fraught with uncertainty. While there was good reason to believe masks could probably slow the spread of the virus, there was inconclusive evidence that masking policies would actually blunt the impact of a pandemic pathogen.

Communities of scientists soon fractured over how, exactly, to deal with all that uncertainty. Already, in the spring of 2020, some evidence-based medicine figures were expressing concerns that public health authorities were acting too aggressively on weak evidence, including models. “The fixation with modelling distracted from an evidence-based interpretation of the data,” Heneghan, the Oxford professor and target of McLean’s WhatsApp epithet, later wrote.

“Evidence is lacking for the most aggressive measures,” John Ioannidis, a health researcher and evidence-based medicine luminary at Stanford University, wrote in March 2020, in an editorial for a scientific journal. Research on previous respiratory disease outbreaks, he continued, found scant evidence to support practices like social distancing. “Most evidence on protective measures come from nonrandomized studies prone to bias,” he wrote.

Some people in the world of public health fired back: That kind of EBM perspective, they argued, was simply unrealistic in a moment of crisis. “Ioannidis doing his schtick about standards of evidence is not helpful,” Yale epidemiologist Gregg Gonsalves wrote on Twitter (now X) that March. “We all want better data,” Gonsalves wrote. “But if you don’t have it. Do you sit and wait for it in a pandemic?”

For people familiar with the history of EBM, the fault lines could sound familiar. Once again, a skeptical group of doctors was challenging medical authority, saying that current practices were based on thin evidence, and warning that the reflex toward intervention might run amok.

Here, though, the challengers weren’t taking on just their fellow clinicians. They were taking on a whole different field: that of public health, and specifically the discipline of public health epidemiology.

In an influential essay for the Boston Review, published in May 2020, Fuller, the Pittsburgh philosopher, laid out what he characterized as a clash of worldviews — a battle between two “distinct traditions in health care” that were also “competing philosophies of scientific knowledge.”

One of these, he wrote, was that of public health epidemiology. The discipline tried to track and respond to emerging outbreaks by using a whole range of tools, including models and observational studies suggesting that a certain intervention could plausibly have a benefit. This camp, Fuller wrote, “is methodologically liberal and pragmatic.”

On the flip side was the discipline of clinical epidemiology, which, he noted, is closely tied to EBM. That world, he wrote, “tends to champion evidence and quality of data” above all. Its adherents are also usually more conservative about interventions.

It is possible for a single person to draw from both traditions in making decisions. But a clash between those schools of thought, Fuller suggested, was one way to understand some of the emerging flashpoints over the pandemic. At the time, Fuller argued that a synthesis of the two approaches might help bolster the pandemic response. He described an approach that would combine the act-now pragmatism of the public health world with some of the skeptical rigor of the EBM mindset.

That synthesis, he reflected in a recent interview with Undark, did not materialize. “I hoped that both of these sides would embrace some of the virtues of the other,” he said.

But it didn’t turn out that way. “If anything,” he said, “I think these two different camps just became more entrenched.”


Recently, some prominent figures in the EBM world have been reflecting on the response to Covid-19. Often, they’ve expressed sympathy for public leaders tasked with stopping a fast-moving pandemic, while also outlining a critique that amounts, in effect, to this: Institutional public health has an evidence problem.

“I empathize with the folks in public health, because their evidence is often low, or very low, quality. And you need to make decisions,” said Guyatt, the McMaster University physician, during a recent Zoom conversation with Undark. But, he argued, public health authorities weren’t transparent about those limitations. “One of the terrible mistakes I think they made,” he said, “was not acknowledging the low quality of the evidence.”

Instead, he and other EBM thinkers have argued, public health authorities overstated the certainty of the evidence behind their decisions. Others have said officials then failed to do the research necessary to actually back up those claims.

“When we actually test them, the majority of things turn out not to work the way we think they should, just because of the complexity of reality,” said Paul Glasziou, a prominent EBM researcher who directs the Institute for Evidence Based Healthcare at Bond University in Australia. Glasziou praised public health leaders for making difficult decisions under pressure. But, as the pandemic wore on, he said, it seemed that public health authorities did too little to try to undertake RCTs and other studies to confirm that measures were working — and to adjust course if not. “It’s not just trials, it was research in general,” he said.

Vinay Prasad, a University of California, San Francisco oncologist — and a vocal critic of U.S. public health institutions — was blunter in a blog post published last year. “The issue is not acting without data. We all forgive the initial events of March 2020,” he wrote. “The issue is NOT EVEN TRYING TO GENERAT[E] DATA IN THREE YEARS WHILE YOU TALK AS IF THE SCIENCE IS SETTLED.” Public health leaders, Prasad has argued, should have done more to run RCTs to test whether interventions like mask mandates actually work to slow the spread of Covid-19.

Not everyone in the EBM world is sympathetic to those kinds of arguments. Among them is Trish Greenhalgh, a physician and medical researcher at Oxford, and the author of a popular EBM textbook. She has argued that her colleagues have set unreasonable standards of evidence for public health interventions. She also questions whether RCTs are actually effective tools for studying whether some public health interventions work.

“These methods and tools were designed primarily to answer simple, focused questions in a stable context where yesterday’s research can be mapped more or less unproblematically onto today’s clinical and policy questions,” Greenhalgh and four colleagues wrote in a 2022 paper. “They have significant limitations when extended to complex questions about a novel pathogen causing chaos across multiple sectors in a fast-changing global context.”

In a conversation with Undark in early 2023, Greenhalgh characterized some of her EBM colleagues as becoming dogmatic about RCTs, to the point where they were overlooking other useful forms of evidence. “It’s not everyone in the EBM movement,” she said. “It is the very narrow evangelistic group that have, I think, risen to prominence during the pandemic, and are claiming the EBM kitemark as their own.”

Critics of this segment of the EBM movement have also argued that this intervention-averse, RCT-focused approach has been yoked to political agendas that are increasingly skeptical of certain public health and medical practices.

“I think the structure of public health was stronger in terms of the science than we gave it credit for,” said David Gorski, a physician and an editor of the Science-Based Medicine blog, which has often been critical of EBM. “And it was actively undermined, not necessarily by EBM fundamentalists, but just by ideologues who were not happy with contact tracing, masking, vaccine mandates, public distancing, business closures, et cetera.”

Where some see reasoned medical caution, Gorski and some others describe a kind of weaponized doubt. By constantly demanding higher standards of evidence or not-actually-feasible RCTs, the thinking goes, evidence-based principles can be used to undermine policies at will.

That dynamic, Gorski said, has become potent in the world of medicine for young people experiencing gender dysphoria. There is, for example, a lack of RCTs studying the mental health outcomes of certain interventions in these patients.

That absence of RCT-oriented evidence has led some EBM leaders to caution against current practices in gender care. “Gender dysphoria treatment largely means an unregulated live experiment on children,” Heneghan told The Times in 2019. Guyatt, more recently, has raised concerns about low-quality evidence in the field. Some critics of current care standards have embraced the EBM label: One of the principal organizations questioning common treatments for minors experiencing gender dysphoria is called the Society for Evidence-Based Gender Medicine.

By the standards of an EBM evidence-rating system called GRADE, “almost all of these recommendations are made on the basis of low quality or low certainty evidence,” said Quinnehtukqut McLamore, a psychologist at the University of Missouri and a close observer of the relationship between EBM and gender medicine. “This sounds bad,” they added, “until you realize that the GRADE guidelines, according to evidence-based medicine, are extremely risk averse. They are very, very strict.”

Many experts say that other, robust forms of evidence show these interventions can help — and that RCTs are an inappropriate tool for studying some of these questions. “RCTs are ill-suited to studying the effects of gender-affirming interventions on the psychological well-being and quality of life of transgender adolescents,” wrote the authors of one 2023 paper, published in the International Journal of Transgender Health. Among other obstacles, the researchers write, patients strongly want the interventions, and will know if they’re not receiving them as part of a study.

At some point, McLamore argued, the drumbeat of concerns about low-certainty evidence shifts from a constructive call for scientific rigor to a kind of politicized obstructionism — one that makes it impossible to act at all.


In 2017, just a few months after finishing a nearly eight-year stint as director of the Centers for Disease Control and Prevention, Tom Frieden published an article in The New England Journal of Medicine on the use of evidence in public health. RCTs, he wrote, were not always the best form of evidence to answer vital questions in public health, such as whether taxes can help curb tobacco use. Public health practitioners, he argued, should lean on multiple forms of evidence in making decisions, and avoid fixating on RCTs.

“The goal must be actionable data — data that are sufficient for clinical and public health action that have been derived openly and objectively and that enable us to say, ‘Here’s what we recommend and why,’” he wrote.

In a recent conversation with Undark, Frieden reflected on the fissures between public health epidemiology and EBM. Part of the problem, he argued, was that the EBM movement had taken tools that work well when treating individual patients in the clinic, and tried to apply them in places they don’t belong. A doctor may want an RCT if they’re planning to give a patient a certain drug. But demanding that level of evidence for public health interventions isn’t always feasible, he argued. And the pull toward RCTs can leave people relying on a few bad trials, rather than on higher-quality observational studies, of the kind that are common in public health.

Frieden also suggested that, in their skepticism toward interventions, EBM practitioners were at odds with the basic imperatives of public health. In medicine, physicians are trained, above all, to do no harm. In situations of uncertainty, they may default toward inaction. “My father was a wonderful physician and a wonderful cardiologist,” Frieden recalled. “And he was virtually a Christian Scientist when it came to medication.” Unless there was firm evidence, his experience had shown him, giving a drug or some other intervention could cause harm. The situation looks different for practitioners of public health. There, the principle is different: It’s not do no harm, Frieden said, but something more like “above all, avoid a preventable death.”

“That’s a very different ethos,” Frieden added.

Frieden acknowledged that public health decision-making at times relies on imperfect data — something he said could have been more clearly communicated to the public during Covid-19. And at some level, he said public health sometimes requires intuition; it is an art, not just a science. “People who have worked in public health, we have to make decisions in real-time often. And using modeling can be helpful,” he said. “But often, it is kind of an intuitive feel of the data. And I know how unsatisfying that would be for evidence-based medicine people.”

Indeed, Guyatt was not impressed with that reasoning. “Baloney,” he said. “Absolute baloney.”

“Instead of doing that, recognize that it’s low or very low quality, recognize your uncertainty,” Guyatt said. “And then instead of pretending there’s a feel of what is right, make your values and preferences explicit.”


Efforts toward synthesis are underway. “I think Cochrane wants to expand its remit more into public health,” said Lisa Bero, a researcher at the University of Colorado and a longtime member of Cochrane’s leadership.

The move has precedent. In the past 15 years, EBM principles have helped transform another branch of public health, that of environmental health.

Tracey Woodruff, now an environmental health researcher at the University of California, San Francisco, saw the need for those kinds of changes during her time at the U.S. Environmental Protection Agency in the 1990s and 2000s. Woodruff was skeptical of the way the agency often tackled questions of how, for example, a certain pollutant may affect health. Their methods seemed inconsistent and not always rigorous, she recalled; researchers would gather a batch of studies and make judgment calls about which to focus on in making decisions, rather than having a transparent way of marshaling and organizing data. As a result, she said, evidence was “not evaluated in a consistent fashion.”

Woodruff was part of a push, starting in the 2000s, to bring some of the tools of EBM into environmental health. This specifically meant doing more systematic reviews, in order to have a transparent, consistent way of evaluating evidence.

The work could be uncomfortable for people in the EBM world, who were accustomed to working strictly with RCTs. In environmental health, such trials are often impossible. “You’re not going to do a randomized controlled trial of the effects of PFOA on pregnant women. It’s just not going to happen,” said Bero. To answer public health questions, the Cochrane folks had to get used to applying their methods to observational studies and other forms of evidence.

Meanwhile, for people in the world of environmental health, there could be discomfort with the EBM approach — motivated in part, Woodruff said, by concerns that the techniques would somehow downplay or replace expert knowledge. But, she said, the process is akin to expert decision-making, with some added benefits: “It’s a structured approach that you put your judgments together in a way that’s the same — and more consistent.”

Today, those kinds of systematic reviews are more common in environmental health, including as standard practice at federal agencies such as the EPA and in the National Toxicology Program.

Whether similar steps will be taken across public health more broadly is, so far, unclear.

When the CDC and other public health leaders, for example, have offered justifications for their support of masking during the pandemic, those documents often resemble a partial list of studies that support mask use rather than a systematic, transparent breakdown of all the available evidence.

On the flip side, a Cochrane review that raised questions about mask efficacy relied exclusively on RCTs that many critics said were simply bad studies, or not well equipped to answer questions about mask use during the Covid-19 pandemic.

Bero, the Cochrane editor, has spent years thinking about how to bridge the worlds of public health and EBM. It’s possible, she said, for Cochrane to maintain its standards for rigor and transparency, while becoming more open to other forms of evidence besides RCTs, and more flexible when tackling complicated questions like those presented by disease outbreaks. “I see us moving towards broader public health questions,” she said. And along the way, she added, “we will inevitably be keeping on this trajectory of diversifying the evidence.”

Comments

  1. bdy

    “When we actually test them, the majority of things turn out not to work the way we think they should, (tests can’t satisfy pathological skepticism), just because of the complexity of reality,”

    Fixed.

    1. redleg

      RCTs and models have a common problem: Both require oversimplification in order to work in an open system. While all models are wrong, some are incredibly useful. Focusing on data quality over any other factor is, IMHO, a cult activity.

      One observation: EBM and other “merchants of doubt” purveyors focus on the individual, whereas public health and environmental research focus on systems and communities. This conflict epitomizes the present-day political divide on damn near everything. The “I’ve got mine” side is slowly winning a Pyrrhic victory.

  2. panurge

    Evidence-based-medicine is increasingly sounding like rules-based-order.
    It could be a wonderful tool to help, support and assist the discipline. Instead, it is used to further different (malicious) agendas, for example to create new streams of revenue.
    The implementation is mostly a mapping of acceptable/allowed practices. Off the top of my head, in no particular order:

    1 – It narrows the options of available treatments (Why? ‘Tis a mystery*)
    2 – It replaces critical thinking with pattern recognition (why hire expensive humans when cheap AIs can do the trick?)
    3 – It encourages defensive medicine (patient died ‘cuz crappy protocol? don’t blame me!)

    (*) At the very beginning of COVID in 2020 physicians were allowed to do their job and to throw EXISTING drugs at the wall to see what could stick. This practice was tolerated only while the usual suspects were concocting the vaccines. As soon as they were ready, not only was the practice discouraged and actively fought, but at the same time MSM started hammering the message that nothing would ever work except the wundervaccines.

    1. clarky90

      The incantation….. “Evidence Based Medicine” is a Medieval, Eastern European Magik Spell. It summons dark forces.

      This is a wonderful example of “widdershins”…… (walking counterclockwise around a Church)

      Long ago, people would walk in a clockwise direction around a Church. It was believed that the sky and the universe rotated around a center point from the right, and that human beings should be in harmony with the Universe(!)…….

      Widdershins (walking counterclockwise around the Church) is the opposite. It is a Magik Spell.

      A recent example of “Widdershins”…

      The EcoHealth Alliance of Ukraine, which helped create COVID-19

      Often, if one turns the Oxford Dictionary meaning, of a current “Title”, upside down and then, inside out, one begins to decipher the “pig latin code” of the New Word salad…..

      Thus, “evidence based medicine” can mean whatever big money needs it to mean….. (inside out, upside down)

      I learned how to speak and understand “Pig Latin” when I was a little kid. Other little kids taught me.

      Hey fellow kids! Now we know how to understand the Neo-Pig Latin of our leaders!

      1. clarky90

        For fun, I made up a little quiz, for anyone who wants to practice their code-breaking proficiency.

        (1) U.S. military forces deployed to Afghanistan to combat terrorism on October 7, 2001, in what was designated “Operation Enduring Freedom.” (Neo-Pig-Latin code)

        What does Enduring Freedom actually mean…?
        (write your answer here) ………………………………………………..

        (2) Operation Enduring Sentinel began on October 1, 2021, “as the new U.S. mission to counter terrorist threats emanating from Afghanistan.” (Neo-Pig-Latin code)

        What does “Enduring Sentinel” actually mean…?
        (write your answer here) ………………………………………………..

        (3) Operation Prosperity Guardian is a United States-led military operation to respond to Houthi-led attacks on shipping in the Red Sea. (Neo-Pig-Latin code)

        What does Prosperity Guardian actually mean…?
        (write your answer here) ………………………………………………..

        (4) Operation Inherent Resolve and other U.S. Government Activities Related to Iraq and Syria……….

        “Inherent Resolve” means?

  3. PlutoniumKun

    A very fair overview – maybe too fair to the EBM movement. One thing that was very clear during Covid was that many of the loudest voices simply don’t understand risk, specifically long-tailed risk. Sometimes a little statistical knowledge can be a dangerous thing – far too often we heard people claiming to be on the side of science, but horribly misapplying standardised techniques far beyond where they had any real applicability. It’s a form of Dunning-Kruger (as originally described) writ large.

    Too many supposed experts opining outside their area of expertise, and being apparently entirely unaware of the limitations of the epistemological tools at their disposal. It would be of just casual interest to philosophers of science, if it wasn’t for all the millions of people who died.

  4. Terry Flynn

    Thank you Yves. For most of my 25 years in health services research I worked directly or indirectly with academics who were heavily influenced by EBM (Bristol UK and then Sydney). In the early years (around turn of the century) people like Professor George Davey-Smith effortlessly used EBM but also newer and more “heterodox” methods (Mendelian randomisation etc) that sought to get away from the stereotypical population models incorporating appeals to the central limit theorem etc so that they would “use the right tool for the job”.

    Those were some interesting times to work in an EBM adjacent field (health econometrics/medical statistics). Matthias Egger and others in my department did a huge amount to help the average physician spot dodgy/inappropriate data – e.g. the “inverted funnel” to spot when all the statistically non-significant trials are “missing” for a treatment.
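    For readers unfamiliar with the idea, here is a hedged sketch of that kind of funnel-plot asymmetry check, in the spirit of Egger’s regression test; the trials, the effect size, and the publication rule below are all simulated for illustration, not taken from any real analysis.

    import numpy as np

    rng = np.random.default_rng(1)

    # Simulate 40 hypothetical trials of a treatment with a true (log-odds) effect of -0.2.
    se = rng.uniform(0.05, 0.5, 40)                # small trials have large standard errors
    effect = rng.normal(-0.2, se)

    # Publication bias: small, non-significant trials tend to go unpublished.
    published = (np.abs(effect / se) > 1.96) | (se < 0.15)
    effect, se = effect[published], se[published]

    # Egger-style test: regress the standardized effect on precision;
    # an intercept far from zero signals funnel-plot asymmetry.
    z, precision = effect / se, 1 / se
    intercept, slope = np.polynomial.polynomial.polyfit(precision, z, 1)
    print(f"Egger intercept: {intercept:.2f} (values far from 0 suggest missing small trials)")

    Plotting effect size against standard error for the published trials would show the lopsided, half-empty funnel the comment describes.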

    However, early in my PhD I recognised that my “classical statistical” approach to the analysis of more complex (public health applicable since randomisation was by cluster) RCT was on its last legs. Thus I moved laterally in my post-doc; the Empirical Bayesians were rapidly colonising health econometrics and the parts of medical statistics in which I did my PhD. Senior people like Davey-Smith seemed to me to be fully aware of the risks in using the new “toys” (facilitated by much better statistical programs). On the other hand, the juniors often worried me: they had a bunch of new toys and to me at least, seemed messianic in their zeal to use these. The phrase “a solution in search of a problem” crossed my mind way back in 2001.

    Fast forward to 2010 when I was getting established in Australia and had been thoroughly immersed in a field (academic marketing) that flatly rejected practically the whole paradigm underpinning medical statistics. Instead, it took as read that the population of interest will be multi-modal, exhibit non-linearities in functional form of the stats, and be incredibly vulnerable to context and other effects, making “past evidence” at best “a useful starting point”. I interacted with lots of people from backgrounds in engineering and similar disciplines who routinely assumed that there would be non-linearities, misleading but attractive “hills” rather than “the Mount Everest” in the likelihood function and “tipping points” that we now hear of concerning climate change. I got a healthy and humbling education in what I didn’t know.

    I’m very glad my education was “multi-pronged” with an emphasis on getting “reasonable certainty about the general area of the solution, knowing the combination of interventions that is best”, rather than “huge, likely spurious, degrees of certainty, based on evidence that is highly selective”. I presented to the board of NICE in London in 2009 and was worried back then about the stranglehold EBM was exerting upon health econometrics and medical statistics. Increasingly skewed incentives in academia have only made the fetishisation of EBM worse as the “path to success” is just like in mainstream economics – construct some silly model that really only makes sense in la-la-land but if it convinces 4 other bigwigs across the world you’ll get the big prize.

  5. elpedo346@gmail.com

    I am laughing because the article uses ‘evidence’ to make arguments and come to conclusions.

    Then rejects evidence for medical decisions.

    Making decisions devoid of evidence is the height of uncritical thinking.

  6. flora

    Thanks for this post. I’ve learned a couple new things over the past 3 years.
    A lot of scientific, biomedical, peer-reviewed papers published in prestigious journals have been retracted due to bad data (and sometimes fraudulent data). GIGO.
    Over the past 10+ years the health of the public in the West has gotten worse. It could be a coincidence that public health is getting worse at the same time that the EBM approach is gaining strength in the medical world. Life expectancies are falling, and they started falling years before C19 hit. See the Case/Deaton study.

    1. CA

      “Over the past 10+ years the health of the public in the West has gotten worse.”

      “No”; public health has not been worsening across the West generally, but it has been worsening in the United States:

      https://fred.stlouisfed.org/graph/?g=1erjq

      January 15, 2018

      Life Expectancy at Birth for United States, United Kingdom, France, Germany and Italy, 2007-2021

      https://fred.stlouisfed.org/graph/?g=1erk0

      January 30, 2018

      Infant Mortality Rate for United States, United Kingdom, France, Germany and Italy, 2007-2021

  7. flora

    One question I have, and it’s not poke at anyone: Has Public Health been taken over by private philanthropic foundations?

    1. Terry Flynn

      I don’t have an answer that goes beyond what NC has reported a lot on in recent years concerning the dissolution/”revamping” of public health organisations in countries like the US and UK. However, FWIW, I know from various people that a number of “support services” (for drugs and alcohol dependence etc) – which are key in countering the epidemic of “diseases of despair” that are currently exacerbating COVID mortality – are increasingly being administered by non-public sector organisations in the UK.

      These are (as far as I can tell from quick perusal) generally not-for-profits/charities. However, I have been told that certain targets (such as “percentage of clients who have been got back into F/T employment and remained there for 6+ months”) are a quid pro quo for government help. This encourages gaming of the system, just like privatised national monopolies. The “easiest” cases get dealt with and the rest are left for a diminished state to deal with. This emphasis on “getting people back into employment” should surprise nobody who has kept up with NC pieces on how COVID has and has not been dealt with.

  8. Skk

    Thanks for a great article. Based on my experience in a career in math modeling and later in data science, if the either/or choice is between evidence-based propositions and an intuitive feel for the data, I’ll take evidence-based any day.

    1. Arkady Bogdanov

      The point, at least as I understand it in the article, and based upon my own observations in the last few years, is that the “evidence” utilized by so-called Evidence Based Medicine, tends to be bullshit concocted to serve private interests (profits) rather than public interests (safety and improved health outcomes). EBM is just one more instance of Orwellian authoritarian dishonesty meant to silence public debate and any utilization of alternatives that likely provide better outcomes while also being less profitable/less expensive for the population. Maybe it was evidence driven back in the stone ages of its original conception, but it is just another perverted concept that serves the interests of a corrupt capitalist system now.

  9. New_Okie

    I think what we saw in the pandemic was that an EBM-adjacent position was used by people on both ends of the libertarian-authoritarian axis.

    So you had people pushing to prevent patients from getting ivermectin because there “was not enough high quality evidence of efficacy”, yet the same people would push for vaccine mandates, mask mandates, school closures, and lockdowns based on what they admitted was often limited evidence.

    The more authoritarian public health officials basically said “we have to force everyone to do what I think is right, because it is my best guess at how we save lives”. And the more libertarian ones said “we can’t force people to do anything until we are 100% sure, which we will never be”. And lost in this argument, most of the time, was an acknowledgement of what price would be paid if you were wrong.

    For example:

    What if lockdowns slowed the spread a tiny bit but also shuttered many small businesses and trapped people inside homes with mold? And what about missed medical procedures and checkups?

    What if the vaccines turn out to be dangerous and less effective than initially thought? Or what if repeated vaccines teach the body to ignore the virus rather than kill it? What if some subgroups are at a very high risk of having an adverse reaction (see the ME Association’s survey of ME/CFS patients, in which 13% reported a severe and unresolved reaction to the Covid vaccines)?

    What if masks actually do work? Then what is the cost of not having mask mandates vs the arguably mild benefits of allowing people to not wear masks?

    As I mentioned before, what if Ivermectin, or even Hydroxychloroquine, actually did work?

    And of course, what is the overall cost to the government’s and scientists’ legitimacy when people are forced to do something that, in retrospect, turns out to have done more harm than good?

    These are the kind of questions public health officials should have been asking. At the very least, as Vinay Prasad wrote, they should have been gathering data with which to evaluate the success or failure of their policies. Instead we got the public health equivalent of intersectarian religious debates, where everyone quotes scripture and no one is open to changing their mind.

    So I do not blame evidence-based medicine for all of this. The devil can cite scripture for his purpose. EBM is a good tool when used by a balanced mind, and a dangerous weapon in the hands of an ideologue. I think we must simply get better at identifying which is the case. And I think the first thing to look for is whether someone acknowledges what would happen if they were wrong.

    1. Terry Flynn

      If I read you correctly, you’re emphasising the “less immediate”/”downstream”/”difficult to quantify” effects of interventions. And that is not wrong. The health economics community spent about 10 years from the late 1990s (in Europe anyway) trying, without much luck, to get to grips with this, using an “individual level” focus. If an intervention to reduce risk of miscarriage in pregnancy is to be evaluated on cost-effectiveness grounds (for public funding), just how far do we “cast the net”?

      Do we somehow estimate the costs and benefits of what a viable child would cause as opposed to the case when it miscarries? We quickly get into a minefield where the only generally accepted answer is “we have no idea”. We must change paradigm.

      EBM, by its nature, has a very “individual level” focus. This sidesteps discussion of the “fuzzy difficult” societal level implications, following on from “people who would have lived” etc. Rightly or wrongly, I started thinking that public health was akin to macroeconomics (where we don’t claim to model individuals but concentrate on certain societal quantities) and EBM was microeconomics (something that can shed light on important discrete issues).

      I hope the above is *ahem* like positive economics. I, however, firmly hold to normative economics – what *should* be. That is much more in tune with theories of macroeconomics and public health. Though maybe I read too much Asimov and the “4th law of robotics” as a teenager :-)

  10. KidDoc

    Thank you for this.

    Evidence based medicine, historically, included nuanced evaluation of the methods used in research, consideration of factors not studied, and working with the patient to figure out the best approach. Art and medicine. Thoughtful consideration of available studies, experience, situational knowledge and judgement, is key to both medical care and public health.

    One-size-fits-all criteria (EBM), as graded and encouraged by the “evidence-based specialists”, are not the same thing. They likely encourage overconfidence in higher-rated studies. Inflexible rules, subject to interpretation and stretching, are a set-up for manipulation – reality does not fit rigid formats.

    EBM typically arranges real-world information into sub-groups, suitable for statistical analysis. Complicating factors are selectively eliminated (excluded from review) or adjustments made for correlated factors (smokers, socioeconomic group, etcetera) via educated guesstimate. Both leave room for study bias, and even more room for flawed conclusions. Selection of subjects also introduces bias risk.

    EBM is very expensive, so in most cases “long term” means 1 month to 2 years. This may be convenient (for researchers), but quite problematic. Coupled with funder influence on the design, selection of consultants (and their pay via private hidden contracts) and publication of results, it can be disastrous (Vioxx, anyone?).

    Longer-term and unexpected (hence unevaluated) problems are missed or found later. For example, DDT was widely used, then prohibited decades later when environmental damage was confirmed. Now we are finding that the level of the DDT metabolite DDE in the urine of pregnant women today is closely associated with obesity in their children 5–10 years later. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9512275/ Pesticides and adolescent mental health? https://neurosciencenews.com/herbicide-teen-cognition-24938/

    1. redleg

      Inflexible rules, subject to interpretation and stretching, are a set-up for manipulation

      More like set up for invoicing.

  11. Bruce Elrick

    Good science, like good journalism, starts when you learn how badly your own brain works, then take the laborious steps to compensate for all its failures, and then realize that you have to keep up with that labour on an ongoing basis.

    That being said, I think it is very useful to take advantage of the intuition of people who have demonstrated that they are good at science to fill in the gaps that we have not yet been able to afford to fill with proper science. These people hopefully express their (hopefully well-informed) opinions with qualifiers.

    The tricky bit is that identifying people who are good at not fooling themselves is itself a laborious task.

    Unfortunately, all this means that dishonest people can game the system. Sigh.

    1. Thistlebreath

      I knew that degree in English would pay off some day. Insights for free! For example, A. Pope’s cautionary verse about reaching conclusions too soon:

      A little learning is a dang’rous thing;
      Drink deep, or taste not the Pierian spring:
      There shallow draughts intoxicate the brain,
      And drinking largely sobers us again.
      Fir’d at first sight with what the Muse imparts,
      In fearless youth we tempt the heights of arts,
      While from the bounded level of our mind,
      Short views we take, nor see the lengths behind,
      But more advanc’d, behold with strange surprise
      New, distant scenes of endless science rise!

  12. Terry Flynn

    Thanks.

    The tricky bit is that identifying people who are good at not fooling themselves is itself a laborious task.

    I realised that a 5 year section of my career actually helped identify those of us who were good at not fooling themselves. 2001-2006 I was employed by the UK Medical Research Council, the primary non-clinical health research funding body in the UK. My unit was nested within the medical faculty of Bristol University. Although the process to continue funding was a horrendous time-consuming mess, it only happened every 5 years and in the meantime you got to do more or less what you considered important.

    During that period I only published 4 papers – all just sections from my pre 2001 PhD. I produced zilch from my MRC “ongoing” work. My first paper from my “job” only got published in 2007, but was in process and clearly influential in the 5th year of the cycle. That was enough to contribute to continuation of our project funding. This was effectively the “last gasp” of the “old funding model” where you didn’t insist on 6-12 month results, in the knowledge that a certain proportion of open-ended funded projects would pay off BIG in the long term.

    Just to show I know the “downsides”, our funding came from New Labour “funny money” (aka Public Private Partnerships). Once 2008 hit, we were toast, and I emigrated. Now that the rewards are much more short term, your statement about gaming the system holds true even more. *sigh*

    EDIT – Meant to reply to Bruce Elrick

  13. juliania

    Please correct me if I’m wrong but I had the feeling reading IMDoc’s input here that he was helpfully giving examples of ‘evidence based medicine’ using his own experiences with patients. Quoting the article:

    “… “One of the things that evidence-based medicine is kind of pushing back against is an unbridled interventionism,” said Fuller, the Pittsburgh medical philosopher, who has written about the history of the movement. “If the standards of evidence are too low, you’re going to have lots of interventions, because there’s a lot of interested, invested parties that want to sell you things.” Raising the standards of evidence — as EBM aimed to do — will necessarily challenge some of those treatments, he added, and one result is “becoming less interventionist than you would have been before.”

    Using the standard ‘First, do no harm,’ evidence would seem to imply observing patients and whether or not something helped or harmed their difficulty. I’m not getting that emphasis from this article. I don’t think relying on statistics and graphs to form policy has been helpful during this crisis. IMDoc was. I extend to him my thanks.

    1. IM Doc

      This is a very simplified discussion of how I feel about the entire situation.

      In theory, “evidence-based” medicine should be how every one of us practices medicine. But we must realize that is strictly the realm of the “science” part of medicine. The other half, and the far more important part, is the “art” of medicine. The bedside manner. The learned ability to deal with and have success with all of the various personality disorders, character traits, and learned behavior that make each individual patient completely unique. This is a learned behavior – and you just have to trust me – dealing with people is much, much more difficult than dealing with numbers. It is very difficult, and even in the best of medical teaching environments only a minority ever achieve excellence.

      As such, evidence-based medicine should be founded upon the vigorous learning of statistical methods, what they each mean, how they can be used and abused, and how to relate them to each individual patient. This was absolutely done in my day before “EBM” became a thing. Now, I doubt 1 in 20 medical students could even rationally discuss papers – I know, I work with them all the time. They have been indoctrinated into the “EBM cult” – they have their Holy Bible known as “Up-To-Date”, conveniently funded by Big Pharma – and that is, by its advertising and reputation, 100% EBM all the time. So now, instead of going to the source material, most med students, and now a generation later, most physicians just go straight to Up To Date – no thinking required. When you say “evidence-based” around current medical students or housestaff, that is largely what they have condensed it to – a huge multinational corporation’s distillation of current medical research.

      And when things are published in journals – they are peer-reviewed – therefore “flawless”. So anything in the big journals is promoted as God’s Holy Word. No matter how poor they may be. Remember the editors of the NEJM constantly referring to the COVID vaccines as “triumphs” when the studies first came out in 2020. Not to mention that now that we have AI (at least there is one good use for it), all kinds of lying, plagiarism, dishonesty, and fraud are coming out about big articles in big journals from our biggest universities……but those fraudulent studies have for years been “evidence”. When your foundation of evidence is made of sand, your house is not going to be very stable. And we have just seen the tip of the iceberg here – I fear there is far more to come.

      Another concerning aspect of this whole movement has been the attitude that “the RCT is the only thing that is reliable” and that “anecdotal evidence, case studies, and discussion among on-the-ground colleagues” are all basically crap. I will tell you for sure – in the past four years, anecdotal evidence from my own eyes and daily discussions with colleagues have been what has kept me sane. We have seen repeatedly how RCTs about all kinds of subjects have been manipulated and defrauded.

      Furthermore, in that regard, this RCT worship is just another way for Big Pharma to manipulate the entire profession, and they have now mastered it. “Never mind your own eyes and the patients before you – we have RCTS!!!!!” As far as evidence and discussion – just read the RCT, plebe – you can look at the actual raw data 75 years from now.

      This is a total mess. “Evidence-based” medicine sounds like a wonderful thing and what we should all be doing. And the older ones among us actually are. But what is now known as “evidence-based” medicine is a complete distraction and distortion from what it actually sounds like. The words “evidence-based” are basically dissembling propaganda tools – to confuse and comfort the listener and distract them from what is actually going on. Not unlike the CLEAN AIR ACT or the PATRIOT ACT.

      1. CA

        IM Doc, these discussions of yours are simply superb; necessary and superb.

        I think this article involving what should have been critically important work done at Columbia, Harvard and Stanford is related:

        https://www.nytimes.com/2024/02/15/science/columbia-cancer-surgeon-sam-yoon-flawed-data.html

        February 15, 2024

        A Columbia Surgeon’s Study Was Pulled. He Kept Publishing Flawed Data.
        The quiet withdrawal of a 2021 cancer study by Dr. Sam Yoon highlights scientific publishers’ lack of transparency around data problems.
        By Benjamin Mueller

      2. Terry Flynn

        Art vs science – Thank you! Yatchew & Griliches from the mid-1980s showed that any limited dependent variable model (logit/probit) – i.e. live/die or respond/don’t – has a fundamental problem. The likelihood function is a single equation with two unknowns – the mean (on some latent scale of perhaps “liveability”) and the variance (how often a given intervention will give an effect). You cannot separate the two. Elementary maths teaches that you can’t solve for two variables with one equation.
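        To put that identification problem in symbols (a standard latent-variable probit write-up, added here as an illustration rather than quoted from the paper the comment cites):

        $$ y^{*} = x'\beta + \varepsilon, \qquad \varepsilon \sim N(0, \sigma^{2}), \qquad y = \mathbf{1}[\, y^{*} > 0 \,], $$
        $$ \Pr(y = 1 \mid x) = \Phi\!\left( \frac{x'\beta}{\sigma} \right). $$

        A single binary-outcome dataset identifies only the ratio β/σ, so the latent mean and variance cannot be separated without extra information (a second dataset, or a normalization such as σ = 1).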

        EBM goes all in on science. But if you don’t have a secondary set of data (2nd equation) then you are (even if you don’t realise it) setting one of the key variables to be a constant across everyone to get the EBM solution.
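
        To make the identification point concrete, here is a minimal sketch in Python on simulated data (my own illustration, assuming NumPy and SciPy are available; it is not taken from the comment or the papers cited). It shows that a probit likelihood depends only on the ratio of the latent effect to the latent scale, so scaling both by the same factor changes nothing, and a standard fit silently fixes the scale at 1 for everyone.

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(0)
        x = rng.normal(size=1_000)                       # a single covariate, e.g. dose
        beta, sigma = 1.0, 2.0                           # illustrative latent effect and latent noise scale
        latent = beta * x + rng.normal(scale=sigma, size=x.size)
        y = (latent > 0).astype(int)                     # observed binary outcome: live/die, respond/don't

        def log_likelihood(b, s):
            # Probit log-likelihood for slope b and latent standard deviation s.
            p = norm.cdf(b * x / s)
            return np.sum(np.where(y == 1, np.log(p), np.log(1 - p)))

        # Scaling both unknowns by the same factor leaves the likelihood unchanged,
        # so binary outcomes alone pin down only the ratio b/s.
        print(log_likelihood(1.0, 2.0))
        print(log_likelihood(0.5, 1.0))                  # identical value

        A second, independent source of data acts as the second equation that would let the two unknowns be separated.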

        Evidence from a multitude of other disciplines shows this is mad. Though I strongly dislike putting the Economics “Nobel” on a pedestal, McFadden deserved a prize because he knew he needed a 2nd dataset to solve his problem of “how to design the BART in California before it was built”.

        IMDoc and others routinely use evidence and experience to help people. I used knowledge and experience of previous studies to give realistic predictions – my stuff is not on that level, not life-saving like IMDoc’s, but the principle is the same.

      3. Susan the other

        Very nice stuff. And if anyone isn’t sure, please go to today’s Links, first one – “Our Shared Genetic Memories…”, at AEON, by David Waltner-Toews. As captivating as Merlin Sheldrake. And his father.

      4. britzklieg

        Hear, hear! I’ve often felt vindicated for my contrarian views during the covid scourge by reading your comments and could not be more grateful for the courage you’ve shown in expressing them here at NC. Thank you!!! I hope your patients know how fortunate they are to have you and the hands-on experience which informs your professional decision making. As a professional singer who has both an advanced degree and has taught at the graduate level I can say, unequivocally, that the most important things I know about my art (and well beyond the technique one practices in the studio through repetition) were all learned on-stage, in front of an audience.

      5. Rick

        Your voice for balance in medicine is needed. I remember reading about the neurologist Oliver Sacks’ disputes with his colleagues about the value of individual cases as well as large studies, since especially in his field there were often unusual and unrepeatable events that illuminated aspects of the way the brain works. Not sure if it’s relevant, but I appreciated my primary care doctor’s recommendation to postpone an MRI for an orthopedic issue, because MRI results can provide “a target for surgeons”.

  14. Tedder

    Interesting report on a medical theory controversy; however, this seemed to have caused major collisions in public health regarding COVID19. While I agree that public health principals should have been more transparent, there is a major consideration not discussed, ie, the ‘elephant in the room.’
    When COVID hit Wuhan, Chinese public health had no idea what was happening, just that COVID19 was SARS-like and many deaths were possible. Acting on uncertainty, China locked-down Wuhan (as that is called for at this stage of an epidemic). Noteworthy, however, is that the people of Wuhan were well supported with the necessities of life and all of their rental or mortgage obligations were canceled—not paused, but eliminated altogether for the duration. In time, Chinese public health figured out necessary interventions and as a result, China experienced relatively few deaths.
    In the US, any kind of lockdown or quarantine faced opposition from business interests and from people, because in a debt-ridden society based on rent-seeking it is not possible to stop for an instant. Compound interest accumulates exponentially. And debt-that-must-be-paid or rent-that-must-be-paid extends from large corporations to hot dog stands to mansions and to hovels. No one can stop. Business must go on.
    This is the wrinkle that demands fealty from public health and is the main reason the US COVID19 response was so incompetent and caused so many deaths. The data is clear.

    1. CA

      Interesting report on a medical theory controversy; however, this seemed to have caused major collisions in public health regarding COVID19. While I agree that public health principals should have been more transparent, there is a major consideration not discussed, ie, the ‘elephant in the room.’

      When COVID hit Wuhan, Chinese public health had no idea what was happening, just that COVID19 was SARS-like and many deaths were possible. Acting on uncertainty, China locked-down Wuhan (as that is called for at this stage of an epidemic). Noteworthy, however, is that the people of Wuhan were well supported with the necessities of life and all of their rental or mortgage obligations were canceled—not paused, but eliminated altogether for the duration. In time, Chinese public health figured out necessary interventions and as a result, China experienced relatively few deaths…

      [ Fine summary. ]

    2. Wim

      Keeping most things open was not the problem with corona in the US. Sweden did quite well with very few restrictions.

      It was soon clear that your risk of getting sick, and how severe the illness was, increased when you were exposed to lots of virus. It was also clear that this was just like the flu: you couldn’t stop it. And it was not dangerous for the great majority.

      So the obvious solution was to protect the weak and to reduce – rather than zero out – the exposure. So let children – who seldom get sick – go to school. Be liberal in giving people days off when they seem ill. And have special protections for people who are or feel vulnerable – especially the old.

      1. Yves Smith Post author

        I think you’ve read too much self-serving minimization from your government and piling-on libertarians.

        See

        How did Sweden Fail the Pandemic? International Journal of Health Services

        Did Sweden beat the pandemic by refusing to lock down? No, its record is disastrous Los Angeles Times

        Scathing evaluation of Sweden’s COVID response reveals ‘failures’ to control the virus ABC

        Sweden’s Deadly COVID Failure The Tyee

        Sweden Has Become the World’s Cautionary Tale New York Times

  15. JustTheFacts

    I was amazed and deeply disappointed by the attitude of “Scientists” during COVID. Scientists turned into arrogant authoritarians laying down their views, rather than humble scholars trying to discover the truth as best they could.

    Science is about admitting what you don’t know. Falsification exists to demonstrate that something you thought was so, just ain’t so. To quote Mark Twain: “It ain’t what you know that gets you into trouble. It’s what you know for sure that just ain’t so.”

    “We all want better data,” Gonsalves wrote. “But if you don’t have it. Do you sit and wait for it in a pandemic?”

    If you don’t know something, you shut up and research it. You don’t pontificate arrogantly about things in the name of “the Science”. And you work with your colleagues who might have found something that works better than what you want to believe will work, because reality is more complicated than your (or anyone’s) puny little brain can fathom, even for fields studying simple systems like the hard Sciences, let alone for fields which deal with messy complex systems like biology.

    In a sense, I find this debate amazing: use all the available knowledge, including your experience and that of others. Don’t just limit yourself to one form of knowledge. Don’t just read the conclusions, read the methods. Be aware that a significant fraction of papers are rubbish, produced to satisfy University Administrators who believe in “publish or perish”. What might be a good idea at one point (a short 3 week lockdown) isn’t always a good idea (an eternal lockdown), just as driving right is a good idea when the road’s going right but is bad when the road is going left, so don’t get wedded to your ideas.

    Medicine should “do no harm” whereas Public Health advocates “above all, avoid a preventable death.”

    By this logic, Public Health would take a healthy person, cut him up for parts to give them to 10 people each of whose lives will be extended by a new liver, heart, whatever. This way Public Health can save ten people at the cost of only one, a benefit of 10/11 ≈ 91%, just as Kazuo Ishiguro portrayed in his novel “Never Let Me Go”. Clearly medicine, with its admonition to do no harm, uses a far more ethical framework than Public Health.

    1. Anonymous 2

      Well, Public Health does not advocate killing people in order to distribute their organs to others, so something has clearly gone wrong with your argument/analysis.

    2. Bryan

      Medicine should “do no harm” whereas Public Health advocates “above all, avoid a preventable death.”

      By this logic, Public Health would take a healthy person, cut him up for parts to give them to 10 people each of whose lives will be extended by a new liver, heart, whatever.

      No it wouldn’t, because that’s not the ONLY logic by which public health should operate. You can’t avoid preventable deaths by outright murdering others, for instance. You must avoid preventable deaths within the confines of basic protections almost everyone would accept. Having said that, some rights CAN be infringed on and SHOULD be infringed on in service of the public health ethos. The precautionary principle, which isn’t “science” but certainly is sound social policy, demands it. So long as you accept that the goal of society is not to build out as much liberty for individuals as possible with no infringement of it, you can live with the overextension of that principle in moments like Jan/Feb 2020.

      I share the anger at what was said in service of “the science.” But that language is used by deceptive public health officials because of the standing that “science” has (or had) in the culture. By relying on that deceptive coinage, they reduced the currency of science, for which they shouldn’t be forgiven. But blaming their perfidy on “public health” is like blaming socialism for the Bolsheviks.

      In a sense, I find this debate amazing: use all the available knowledge, including your experience and that of others. Don’t just limit yourself to one form of knowledge.

      Everyone understands this, but the issues are 1) what counts as “knowledge”; and 2) “how do you weigh them, or adjudicate between competing and incommensurable strategies/forms of knowledge”? Those are always political or ideological questions, not scientific ones. When they’re presented as scientific, the standing of science suffers.

      Public health is a matter of setting priorities for the optimization of a population or society, and then following them. Medicine treats individuals, whose priorities are often much different. Both are essential.

      1. JustTheFacts

        Yes, indeed, it’s a balance. I was just responding to what was said in the article above. My point is that if the principle “avoid a preventable death” is taken too far, problems arise. However, I believe this balance was lost because it strayed far too far from the “do no harm”/”people have bodily autonomy” principles. My evidence is simple: what happened.

        Precautionary principle: stop travel, wear masks except when harmful (e.g. to child development), lock down for 3 or 4 weeks until you notice that no other country is doing it, therefore it’s pointless, install 222nm light bulbs in all public spaces and increase their ventilation, tell people to exercise/see each other outdoors, get sun, eat vitamin D, vigorously research whether hydroxychloroquine works for COVID since it worked for SARS and MERS and was recommended for a new coronavirus outbreak, vigorously research anything else that seems to work, and recommend it if it is known to be safe with no downside (e.g. vitamin D). Learn from your mistakes: move PPE production back to your country, bring basic medicine production back to the country and subsidize keeping such factories open if necessary, paying for it with national defense funds if necessary.

        Not precautionary principle: do minimal testing of a brand new technology and trust the companies’ representatives, force it on the public when you know it does not stop transmission while pretending it does, do not address known issues mentioned in Moderna’s own patent about lipid nanoparticles and plasmid DNA, pay hospitals more to diagnose COVID, pump air too forcefully into lungs of dying patients, prevent people from seeing their dying parents (UK), send sick people to old people’s homes (US), lie about what you know, create COVID detainment camps (New South Wales/China), fire people looking for alternative ways of treating the disease, have a one-size-fits-all methodology (come back to the hospital when you’re blue).

        Until I see public health behaving in a sensible manner, I shall at best judge them to be incompetent and dangerous. Just because it says “Public Health” on the tin it does not mean that its behavior is consistent with that label. Also, it’s important to realize that many of the targets of public health interventions, such as standard diseases, had already lost much of their killing power before the public health interventions were even introduced, because of hygiene and good food. If you think you’re reducing death rates from 1 in 100,000 to 1 in 10,000,000, you better be really certain you haven’t introduced any hard to measure side-effect causing excess mortality in the population that didn’t need your intervention. For instance, I have yet to see a proper accounting for the increased excess mortality after COVID from other causes.

        That’s not to say that it’s not good to have a public health system that functions correctly. Eradicating polio is great. But it only works when public health convinces people: the problems they are facing arise in the countries where they’ve done a bad job of convincing people. Bodily autonomy and habeas corpus must still prevail, to prevent abuses. To conclude, if the thing currently occupying the public-health-shaped hole is not providing public health, we should be aware of it and act accordingly.

        With regard to using all available knowledge, I disagree with assuming a one-size fits all approach to be dictated from the top is best. Good doctors use both their experience and others’ discoveries — they act as true scientists, and are mercilessly honest to themselves as to what they know. Many doctors don’t — they act as technicians, applying what they have learned at school but not constantly revising their understanding. They may be skilled in routine matters, but they cope poorly with new situations. There’s no getting around the fact that people have different levels of skill. I advocate measuring people on their records until the cream rises to the top in each situation. Once those who have demonstrated success have something to say, then it is rational to listen to them. I certainly don’t advocate listening to the people who have obtained positions of power, because obtaining positions of power is a very different skill set, and we seem to have forgotten that.

  16. Carla

    “There are more than 1.5 million papers published each year in the biomedicine and life sciences, by one recent estimate.”

    And how many of those papers are crap sponsored by drug makers, medical device manufacturers and/or insurers? I’m not saying that ALL papers sponsored by those players are necessarily crap, but given the almighty profit motive, we know that at least parts of some of them definitely are. In addition to wading through and finding studies relevant to the case at hand, how many practicing physicians can sift through, evaluate, and separate the wheat from the chaff, or have the time to? Impossible in real life.

    1. Terry Flynn

      Not impossible. The “missing half” of the inverted funnel plot of treatment effects that I refer to above remains a big giant red flag indicating publication bias.

      If you want the detail, just google “egger funnel plot” or (probably due to google search issues) some similar search terms. If you know what to look for in summaries of trials then the shenanigans of pharma stand out like a sore thumb.
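
      As a concrete illustration of what to look for, here is a minimal Python sketch on simulated trials (my own illustration, assuming NumPy and matplotlib are available; it is not a real meta-analysis). It draws the funnel plot described above: each trial’s estimated effect against its standard error. When only results clearing a significance threshold get published, one half of the funnel visibly goes missing.

      import numpy as np
      import matplotlib.pyplot as plt

      rng = np.random.default_rng(42)
      true_effect = 0.2                                  # the same underlying effect in every simulated trial
      se = rng.uniform(0.05, 0.5, size=300)              # small trials have large standard errors
      effect = rng.normal(loc=true_effect, scale=se)     # each trial's estimated effect

      published = (effect / se) > 1.64                   # crude selection: only "significant" results appear

      fig, (ax_all, ax_pub) = plt.subplots(1, 2, sharex=True, sharey=True, figsize=(8, 4))
      ax_all.scatter(effect, se, s=8)
      ax_all.set_title("all trials: symmetric funnel")
      ax_pub.scatter(effect[published], se[published], s=8)
      ax_pub.set_title("published only: missing half")
      for ax in (ax_all, ax_pub):
          ax.set_xlabel("estimated treatment effect")
      ax_all.set_ylabel("standard error")
      ax_all.invert_yaxis()                              # convention: most precise trials at the top
      plt.tight_layout()
      plt.show()

      Egger’s test formalizes the same check as a regression of the standardized effects against precision, but the asymmetry is usually visible to the eye first.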

    2. Polar Socialist

      Cynically, I’d say if in the conclusion part of the article it “warrants further studies in the matter”, it’s written by a real research team fighting for funding and if it says “treatment X was proven to be efficient” it comes from a marketing budget.

  17. PlutoniumKun

    Many thanks for the contributions here – the article is good, but the comments below – especially from Terry and IMDoc, are incredibly enlightening to those of us outside this field. As always, NC rocks.

    1. Terry Flynn

      Thanks for the kind words. These days I “stay within my lane” when commenting, as I have seen increasingly knowledgeable anecdata from the commentariat which makes my comments from years ago seem cringeworthy! I’m aware some of my stuff is a bit esoteric (especially the statistical methods), but it’s nice when I see certain comments go through without moderation or even a time-out.

      As you say, NC rocks.

  18. ilsm

    Denying informed consent was wrong!

    Science is hard, turning science into effective and safe outcomes is hard.

    The consumer risk on vaccines and other interventions was too high!

    Narrative posed as science.

    No one is looking

  19. Bryan

    IM Doc wrote:

    In theory, “evidence-based” medicine should be how every one of us practice medicine. But we must realize that is strictly the realm of the “science” part of medicine. The other half, and the far more important part, is the “art” of medicine. The bedside manner. The learned ability to deal with and have success with all of the various personality disorders, character traits, and learned behavior that make each individual patient completely unique. This is a learned behavior – and you just have to trust me – dealing with people is much much more difficult than dealing with numbers. It is very difficult and even in the best of medical teaching environments only a minority ever achieve excellence.

    Underlying this contest in the area of medical intervention is a broader project affecting many technical fields: the effort to redefine and codify expertise as something that can be delivered algorithmically, and therefore automatically. The goal is to make expertise devoid of “art” or judgment – essentially, to make human practitioners extraneous. Not everyone pushing EBM has this motivation of course, but they further this project through strict adherence to it.

    Reminder that the infamous Rosenhan experiment recently turned 50 years old. It’s been criticized up and down, but it was powerful enough in its condemnation of the messiness and (yes) harmfulness of psychiatric diagnosis that it accelerated the movement toward an EBM world.

  20. Charles Peterson

    When money is involved, it depends on which side the money is on.

    EBM as it exists now is appropriate for evaluating clinical treatments, because otherwise interventions (which make money) rule over non-interventions. There’s every incentive to ignore the possibility of false positives, or that interventions shorten lives through other risks.

    In Public Health, the money may be on the opposite side of the table. There’s every incentive to send kids back to school, workers back to work, etc., before it’s desirable. Public health needs EBM that works in the opposite direction, examining false negatives, etc.

    It doesn’t surprise me that EBM became a headline thing during COVID, where it served the money, and not with regard to clinical tests and treatments, where it continues to be pretty much ignored until (and often after) the evidence is overwhelming.

  21. petergrfstrm

    Fake science produces greater profits than information leading to better health.
    Influential people in Britain realised that already in the 1700s, and they felt it was a great idea to mass-vaccinate people instead of allowing people to wait until they became sick before they contacted the doctor.
    Too few customers that way.
    If you allow yourself to be informed about what has transpired regarding vaccinations since then, you will not be disappointed if you are looking for more examples of fake science.
    And the best informed have known it in every case, but this didn’t help.
    Large sums of money have been invested.
    It MUST generate profits.
    And it isn’t just the owners and shareholders. It is the jobs of all those who produce equipment and all those who have some task to do related to the fake science of virology. Those at the top of the food chain protect those jobs by disallowing sound judgement with respect to our health.
    The West is very, very sick. But to render it sound and sane without first allowing an economic collapse and a depression is hard for typical politicians.
    So they opt for war instead.

  22. brian wilder

    There are some things here that I cannot quite parse and maybe someone could translate for me.

    When the talk is of EBM being based on analysis of randomized controlled trials, how much of this — all of it or part of it — is premised on applying the toolkit of linear regression analysis to observations selected from convenient population samples?

    The assumptions that you have to make to do linear regression analysis on statistics and the amount of data you throw away distilling observations into statistics — I shudder at the combination. But, maybe I am misunderstanding entirely what’s going on here.

  23. Wim

    During my psychology studies I was told that new therapies often initially seem to work very well in studies, and later on they turn out to be just as good as everything else. The secret: the participating therapists in the initial studies were pioneers who had consciously chosen this therapy and were enthusiastic about it. The participating therapists in later studies were just random employees who had the method imposed on them by their employer.

    This brings me to the subject of placebo. Just like the personal contact mentioned in the article, it is important.

    My impression during covid was that many politicians on both sides seemed to adhere to the same principle that some dubious doctors use: if it hurts it shows at least that you have done something significant. That applied both to those who imposed lockdowns and those who ordered that no precautions were allowed.

    My preference is to stress freedom. Doctors have had an education and should be trusted unless they do something deliberately wrong. And when alternative medicines are not seriously harmful, civilians should be free to use them. I found the way HCQ was forbidden a violation of the principles of human freedom.
