John Ioannidis and the Carl Sagan effect in science communication about COVID-19

When the COVID-19 pandemic hit, the world suddenly treated epidemiologists and biostatisticians like rock stars.
Some scientists became household names almost overnight, showing up on TV, podcasts, and social media feeds more
often than many celebrities. In the middle of that whirlwind, one name kept coming up in both admiration and exasperation:
John Ioannidis.

Ioannidis is famous for pointing out how often “breakthrough” studies don’t hold up under scrutiny. During COVID-19,
though, he didn’t just study bad science; he was accused of doing some of it. That strange reversal sets the stage
for a fascinating clash over who gets to speak for science in a crisis – and whether we still punish scientists for
being too visible, a phenomenon often called the “Carl Sagan effect.”

Science-Based Medicine took this clash head-on by critiquing Ioannidis’s bibliometric analysis of high-profile COVID-19
media experts and arguing that the study quietly smuggled in old prejudices about public engagement. The result is
a deeper conversation about what good science communication should look like when the stakes are literally life and death.

Who is John Ioannidis, and why is everyone arguing about him?

From hero of evidence-based medicine to pandemic lightning rod

Before COVID-19, John Ioannidis built a reputation as a kind of scientific myth-buster. His classic work on publication
bias and reproducibility helped popularize the idea that many published findings are false or exaggerated. Clinicians,
statisticians, and science skeptics loved him for pushing medicine to be more rigorous – and for being willing to say,
out loud, that the research emperor often has no clothes.

During the early pandemic, however, Ioannidis moved from critiquing other people’s shaky data to making some bold
statements of his own. He questioned how deadly SARS-CoV-2 really was, suggested infection fatality rates might be
substantially lower than feared, and criticized harsh lockdowns and restrictions as overreactions. Some of his
estimates and interpretations were widely criticized by other researchers as overly optimistic and methodologically weak.

The irony was hard to miss: the guy famous for explaining how easily researchers fool themselves was suddenly accused
of doing just that – and doing it in a way that aligned with political narratives eager to downplay the pandemic.
That tension is part of what makes his later work on COVID-19 media experts so controversial. It’s not just about
numbers; it’s about the story those numbers tell – and who gets to tell it.

What is the Carl Sagan effect?

Carl Sagan, the original “too popular” scientist

To understand why Science-Based Medicine sees a problem in Ioannidis’s paper, we need to step back a few decades.
In the 1970s and 1980s, astronomer Carl Sagan brought the cosmos into people’s living rooms with his book and TV
series Cosmos. He inspired millions, helped shape the public’s understanding of the universe, and became one
of the most recognizable scientists on the planet.

You’d think that would earn him unlimited academic high-fives. Instead, Sagan’s fame reportedly made some peers
uncomfortable. When he was nominated for the U.S. National Academy of Sciences, the story goes that his public
visibility worked against him. Even though his publication record and citation impact stacked up well against
existing members, many colleagues seemed to see him more as a TV personality than a “serious” scientist.

That dynamic – the idea that scientists who spend too much time talking to the public are probably weaker researchers –
has been nicknamed the “Carl Sagan effect.” It captures a lingering suspicion that media-friendly scientists must be
trading real scholarship for ratings and retweets.

Has the Sagan effect faded away?

In recent years, several studies and essays have asked whether academia has finally outgrown this snobbery.
Analyses of “celebrity” scientists suggest that many of them actually have publication and citation records at
least as strong as those of their less visible peers. Some institutions now actively reward outreach, recognizing
that talking to the public isn’t a distraction from science – it’s part of its job in a democratic society.

Yet the Sagan effect hasn’t vanished. Many scientists still fear that appearing on television or becoming active on
social media will be held against them in promotions, grants, or peer respect. And that’s where Ioannidis’s COVID-19
media paper – and the Science-Based Medicine critique – come back in.

Ioannidis’s paper on COVID-19 media experts

What the BMJ Open study tried to do

Ioannidis and colleagues set out to measure the scientific “heft” of media-visible COVID-19 experts in several
countries. They identified commentators who appeared frequently in news coverage and then calculated their citation
metrics and COVID-specific publication counts. In plain language, the study basically asked:

  • Who are the people the media keeps calling “experts” about COVID-19?
  • How much have they actually published in the scientific literature?
  • How often are those publications cited by other researchers?

Unsurprisingly, the authors concluded that many of these media experts were not among the most highly cited
researchers worldwide, and that relatively few had extensive COVID-specific publication records. On paper, that can
look like a mic drop: “See? The talking heads aren’t our best scientists.”
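The citation metric at the heart of this kind of bibliometric ranking is usually the h-index. As a rough sketch of how blunt that instrument is, here is a minimal computation with hypothetical citation counts (the author names and numbers are invented for illustration, not drawn from the study):

```python
def h_index(citations):
    """Return the h-index: the largest h such that the author has
    at least h papers cited at least h times each."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank  # this paper still clears the threshold
        else:
            break
    return h

# Hypothetical profiles: a high-volume researcher vs. a front-line
# clinician who publishes less but may communicate risk far better.
prolific_researcher = [120, 95, 60, 44, 30, 12, 9, 3]
frontline_clinician = [15, 12, 8, 5, 2]

print(h_index(prolific_researcher))   # 7
print(h_index(frontline_clinician))   # 4
```

The point of the sketch is what the number leaves out: nothing in this calculation distinguishes clear risk communication from jargon, or honest uncertainty from overconfidence, which is exactly the gap the Science-Based Medicine critique highlights.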

But as Science-Based Medicine argues, this framing quietly shifts from an interesting descriptive exercise into a
value judgment – one that echoes the Carl Sagan effect. When you insist that media-visible experts should also sit
near the top of citation leaderboards, you’re not just describing reality; you’re implying that public communication
without elite bibliometric status is inherently suspect.

Why Science-Based Medicine calls it a “terrible analysis”

The Science-Based Medicine critique doesn’t just gripe about tone; it goes after the logic of the study itself.
Several problems stand out:

  • Confusing different kinds of expertise. Some COVID commentators were front-line clinicians,
    public health officials, or risk communication specialists, not necessarily top-cited virologists. Being good at
    explaining risk to the public is not the same thing as maximizing citation counts.
  • Assuming that bibliometrics equal “true” expertise. Citation metrics can be useful, but they’re
    extremely blunt instruments. They favor older, larger fields and high-volume authors, and they say nothing about
    someone’s ability to communicate clearly, contextualize uncertainty, or maintain public trust.
  • Ignoring practical constraints. The most technically senior scientists often have enormous
    responsibilities, limited time, and little media training. Expecting every press quote to come from the single
    most highly cited person in a subfield is like expecting every weather forecast to be delivered personally by a
    Nobel laureate in physics.
  • Dropping the analysis into a loaded history. Because Ioannidis himself had become controversial
    over other COVID analyses, a paper that implicitly questions the legitimacy of media-facing experts looks less like
    neutral curiosity and more like a shot fired in an ongoing war over who “got COVID right.”

Put together, these flaws make it easy to read the study as a modern Sagan-effect argument: if you’re on TV a lot,
you probably aren’t a top-tier scientist – and the media should go find someone with a thicker CV instead.

The Carl Sagan effect in a pandemic: what’s really at stake?

The myth that popularity and quality can’t coexist

At the heart of the Carl Sagan effect is a seductive myth: that serious scientists stay away from the spotlight,
while those who seek fame must be cutting corners. COVID-19 exploded that myth in both directions.

On one hand, we saw excellent communicators who were also highly respected scientists: people who could explain
reproduction numbers, vaccine trials, and variant waves before breakfast and still publish solid research by lunch.
On the other hand, we saw some very accomplished researchers give interviews or write op-eds that, frankly, aged
badly – underestimating the virus, overselling quick fixes, or framing public health measures through a narrow
ideological lens.

Popularity and rigor are not opposites. A scientist can be both widely quoted and deeply careful – or obscure and
deeply wrong. Using media visibility as a red flag is lazy. Using citation counts as a purity test is just as bad.

The “infodemic” and the limits of bibliometrics

During COVID-19, the public didn’t just face a virus; it faced an infodemic – a raging storm of information,
misinformation, and disinformation. People weren’t asking, “Who has the highest h-index?” They were asking:

  • Should I send my kids to school?
  • Is this vaccine safe for my parents?
  • Can I visit my grandparents without putting them at risk?

Answering those questions required more than deep technical knowledge. It required empathy, clear language, an
understanding of risk perception, and a willingness to say “we don’t know yet” without sounding clueless. None of
that shows up in citation databases.

That doesn’t mean scientific credentials don’t matter; they absolutely do. But in a crisis, credentials are the
starting point, not the finish line. A public-facing expert needs three things:

  1. Relevant expertise (or at least familiarity with the evidence and methods).
  2. Honesty about uncertainty and limitations.
  3. Communication skills that translate jargon into decisions people can actually make.

A study that treats only the first component as “real” expertise risks missing what the public actually needs.

Science communication, humility, and the COVID-19 roller coaster

When being right isn’t enough

One of the toughest lessons of the pandemic is that being technically right, on average, over time, isn’t enough.
You also have to be:

  • Timely – information delivered after the decision point is just a nicely formatted “too late.”
  • Actionable – explanations that don’t translate into “So what should I do?” aren’t helpful.
  • Trustworthy – if people don’t believe you, the content barely matters.

Ioannidis’s career illustrates both the power and the danger of being a high-profile critic. His pre-pandemic work
improved medicine by exposing weaknesses in research culture. But when his early COVID estimates and arguments were
perceived as downplaying the threat, his public credibility became part of the story. The question stopped being
“Are his calculations correct?” and became “Is he listening to the full range of evidence, or just the parts that
fit his prior beliefs?”

That pivot from “pure methods guru” to “contested voice in a polarized debate” is exactly why the Science-Based
Medicine article uses the Carl Sagan effect as a lens. It’s not just about Ioannidis’s paper on media experts;
it’s about the broader culture of how we judge scientists for stepping into the public arena.

Lessons for scientists, journalists, and the public

For scientists: outreach is part of the job now

If the pandemic proved anything, it’s that staying silent doesn’t protect science from misuse. When trustworthy voices
are absent, less trustworthy ones rush in to fill the gap. The old attitude that outreach is “beneath” serious scholars
doesn’t just harm careers; it harms the public.

The Carl Sagan effect will persist as long as departments treat media work as a hobby rather than a contribution.
That means institutions need to:

  • Recognize and reward high-quality public engagement in promotion criteria.
  • Provide media training and support instead of leaving scientists to improvise on live TV.
  • Protect publicly engaged scientists from harassment and bad-faith attacks as part of their workplace safety.

For journalists: citations are not the only filter

Journalists covering science have a tricky job: they need sources who are credible, understandable, and available.
Citation counts can be a useful background check, but they’re not a magic sorting hat. A good rule of thumb is:

  • Start with people who work directly in the relevant area or adjacent fields.
  • Look at their publication history, but also at how they talk about uncertainty and limitations.
  • Be wary of sources whose confidence stays sky-high while the data keep changing.

It’s perfectly fine to interview communicators and policy experts who aren’t superstar bench scientists – as long as
you’re clear about who they are and what their expertise actually covers.

For the rest of us: beware both celebrity worship and credential worship

As news consumers, we’re often tempted to latch onto “our” favorite expert and treat them like a pandemic spirit animal.
That’s understandable – it’s exhausting to navigate a constantly shifting evidence base. But two shortcuts are especially
risky:

  • Celebrity worship: trusting someone mainly because they go viral or have the best one-liners.
  • Credential worship: assuming someone must be right because their CV is 20 pages long.

Better questions to ask include:

  • Do they explain how confident they are – and why?
  • Do they update their views when new evidence comes in?
  • Do they acknowledge the strengths and weaknesses of opposing arguments?

The pandemic reminded us that critical thinking is a team sport. No single researcher, however famous, can substitute
for a robust scientific community, accountable institutions, and an informed public.

Experience-based reflections and practical takeaways

Beyond the statistics and the academic debates, the story of John Ioannidis and the Carl Sagan effect lands where most
people actually live: in the messy middle between expert advice and everyday decisions. To make this more concrete,
imagine three overlapping “experience zones” most of us encountered during the pandemic.

Zone 1: The nightly news shuffle. For many people, COVID-19 information came through a small rotation
of TV doctors and science commentators. Some were superb at translating complex topics like vaccine trials and
ventilation into plain language. Others seemed to bounce between overconfidence and hedging. The difference often
wasn’t about citation counts; it was about preparation, respect for the audience, and willingness to say “Here’s what
we know right now – and here’s what could change.”

Zone 2: The group chat laboratory. Friends and family threads turned into miniature peer-review
forums, complete with screenshots, forwarded links, and occasionally all-caps arguments. Here, “experts” were often
the one or two people with some science or medical background – or at least enough curiosity to read beyond the
headline. These people became de facto science communicators, even if they never appeared on TV. Their success (or
failure) at calming panic, correcting misinformation, or admitting uncertainty mirrored the same skills required of
professional communicators.

Zone 3: Inside institutions. Hospitals, universities, and public health departments had their own
micro-dramas about who should speak publicly. Sometimes the most media-ready person wasn't the one with the most
publications, but rather the one who could show up for daily briefings, survive live Q&A, and coordinate with
local leaders. The “Sagan effect” showed up here as well, in subtle eye-rolls about “PR doctors,” even when those
colleagues were doing exhausting, high-stakes communication work that directly shaped community behavior.

In all of these zones, one theme emerges: communication is its own form of expertise. It doesn’t replace deep knowledge,
but it amplifies or muffles that knowledge depending on how it’s used. When Ioannidis’s paper treats media visibility
mostly as an opportunity to compare citation counts, it misses the lived reality of these experience zones. People
weren’t asking, “Is this person in the top 2 percent of all scientists?” They were asking, “Can I trust this person
to help me make a sane decision this week?”

A more constructive approach would treat science communication as a multi-layered collaboration: highly cited experts
contributing depth; skilled communicators translating and contextualizing; local leaders integrating guidance into
policy; and the public developing the literacy to ask better questions. Instead of punishing scientists for stepping
into the public square, we should be building teams that blend these talents and giving them the training, support,
and feedback they need.

The pandemic won’t be the last contested, emotionally charged scientific crisis. Climate change, AI safety, future
pandemics, and emerging health threats will all need a mix of rigorous evidence and honest, human communication.
If we cling to outdated attitudes that treat visibility as a moral failing, we’ll keep repeating the same pattern:
a handful of people doing the communication heavy lifting while others snipe from the sidelines – and the public
pays the price.

Conclusion: beyond the Sagan effect

The Science-Based Medicine critique of John Ioannidis’s COVID-19 media study isn’t just a feud between skeptics.
It’s a case study in how old academic prejudices about popularity can quietly warp our judgments about expertise,
especially under pressure.

The Carl Sagan effect is a reminder that we don’t always reward the scientists who take the risk of speaking clearly
to the public. But it doesn’t have to stay that way. If we want better science communication in the next crisis,
we need to stop treating visibility and rigor as competing values and start building systems that demand – and support – both.