
During the first months of COVID-19, physicians around the world were testing combinations of existing, inexpensive medications on their patients. Some of those strategies showed early promise. Yet most people never heard about them — not because the science was settled against them, but because of where and when the research happened, who funded it, and how information platforms decided what you’d see.
In the video above, Joe Rogan interviews Dr. Robert W. Malone about exactly how those filters work.1 Malone described large-scale psychological influence — often called mass formation — as a state that emerges when people experience prolonged uncertainty, social isolation and fear, conditions widely documented during the COVID-19 period when lockdowns and social disruption affected billions of individuals worldwide.
At its core, mass formation is what happens when widespread anxiety and isolation drive people to latch onto a shared story — not because the evidence is strongest, but because believing it reduces fear and restores a sense of belonging. In practical terms, consider someone who was initially skeptical about a particular treatment. Over the following weeks, every colleague, news anchor, and social media post repeated the same conclusion.
No new data changed that person’s mind — but the sheer repetition made questioning it feel socially risky, even irrational. That’s mass formation at work. Your perception of scientific evidence, treatment options and risk becomes increasingly filtered through social reinforcement rather than independent evaluation.
This filtering works, in part, because of a basic feature of how your brain manages effort. Psychologists call it cognitive load — the mental energy required to process information. Your brain treats a familiar claim like a well-worn path: it takes less effort to walk down it than to cut a new trail through the brush. When the same narrative reaches you from multiple directions, accepting it becomes the path of least resistance, while evaluating alternatives demands significantly more effort.
That imbalance explains why repetition doesn’t just spread ideas — it makes them feel more true. That dynamic set the stage for the conversation’s other major threads: how regulatory structures and funding patterns determined which early treatment strategies advanced during COVID-19, and how algorithm-driven platforms influenced which medical viewpoints reached the public. Understanding these forces clarifies why consensus forms rapidly around some ideas while others stall.
A Revealing Look at Research Barriers and Narrative Influence
One of the most striking parts of the conversation involved Malone’s account of trying to study combinations of already-approved medications — including famotidine, celecoxib, and ivermectin — as early COVID treatment strategies.2
The question was straightforward: if these drugs already had established safety records, why couldn’t they move rapidly into clinical trials when used in new combinations? After all, repurposed drugs typically reach patients faster and at far lower cost than entirely new pharmaceutical development. The answer, as Malone described it, had less to do with science and more to do with structure.
• Approval requirements reshaped which therapies reached trials — Early trial proposals were rejected until researchers could produce specific laboratory antiviral data for ivermectin, even though the broader protocol involved multiple licensed medications.
The result? Ivermectin was removed from the proposed study just so the trial could move forward. In other words, administrative criteria — not clinical reasoning — determined which treatment strategies advanced and which ones the public never heard about.
• Research delays determined which treatments gained attention — While trial approvals stalled, public health policy moved quickly, opening a widening gap between early therapeutic ideas and the formal evidence needed to support them. This matters because when research starts late, the narrative gravitates toward whichever evidence appears first, not necessarily toward the strongest of the ideas explored at the outset. What you heard about reflected timing as much as scientific breadth.
• Combination strategies struggled inside single-drug frameworks — This is a key point. Regulatory frameworks frequently evaluate each drug component separately, even when the entire therapeutic hypothesis depends on synergy — multiple drugs working together to produce a stronger effect than any single one alone. Imagine testing whether a key works by examining the key and the lock in separate rooms — you’d never discover they fit together.
That’s essentially what happened when combination protocols were forced through a system designed to assess one drug at a time. The mismatch explains why biologically plausible strategies sometimes vanished from headlines despite having a clear rationale behind them.
• Funding direction accelerated some pathways while slowing others — Once major trials and funding streams locked in on specific approaches — antivirals and injections chief among them — alternative strategies received less attention, fewer resources, and slower evidence accumulation.
This created a self-reinforcing cycle: heavily funded pathways generated more data, which reinforced their prominence in guidelines and media coverage. If you’ve ever wondered why consensus seems to develop unevenly across competing medical ideas, this is a large part of the reason.
How Uneven Standards and Information Systems Shaped What You Saw
Newly developed pharmaceuticals advanced through structured regulatory pipelines, while repurposed generics faced additional justification requirements despite their established safety histories. The playing field wasn’t level, and that imbalance influenced which therapies were widely studied, recommended or reimbursed. But the filtering didn’t stop at the research level. Even when underfunded studies did produce results, a second layer of filtering determined whether those results ever reached you.
Algorithmic platforms, institutional messaging and media incentives controlled the information pipeline — meaning a treatment could clear a scientific hurdle and still remain invisible to the public. Understanding these two layers together — what shaped which research moved forward, and what shaped which findings you actually saw — is essential to grasping why some treatments seemed to appear out of nowhere while others seemed not to exist at all.
• Institutional incentives shaped which hypotheses advanced publicly — Professional incentives, reputational risk and institutional alignment all influenced researcher behavior during crisis conditions. Scientists operated within systems that rewarded alignment with dominant frameworks and discouraged deviation — especially during periods of high uncertainty. The result was structural pressure that quietly filtered which hypotheses ever reached public attention.
• The medical perspectives you encountered weren’t selected for accuracy — They were selected by algorithms that prioritize engagement signals such as clicks, shares, and watch time, combined with advertising pressure and platform content rules that determined which viewpoints were amplified and which were suppressed. None of those engagement signals has anything to do with scientific accuracy.
A provocative headline that generates outrage ranks higher than a careful clinical discussion that generates none. The result was that what appeared first when you searched for health information reflected platform dynamics as much as — and sometimes more than — the underlying scientific evidence.
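The dynamic described above can be sketched in a few lines of code. This is an illustrative toy model only, not any real platform’s ranking system; the weights and numbers are invented to show one thing: when a score is built solely from engagement signals, accuracy never enters the calculation, so an outrage-driven headline outranks a careful discussion.

```python
# Toy engagement-ranking sketch. The scoring weights and sample numbers
# are hypothetical -- the point is that "accuracy" never affects the score.

def engagement_score(post):
    """Score a post purely on engagement signals; accuracy is ignored."""
    return post["clicks"] + 2 * post["shares"] + 0.1 * post["watch_seconds"]

posts = [
    {"title": "Careful clinical discussion", "clicks": 120, "shares": 5,
     "watch_seconds": 900, "accuracy": "high"},
    {"title": "Provocative outrage headline", "clicks": 4000, "shares": 800,
     "watch_seconds": 300, "accuracy": "low"},
]

# Rank exactly as a feed would: highest engagement first.
ranked = sorted(posts, key=engagement_score, reverse=True)
for post in ranked:
    print(post["title"], round(engagement_score(post), 1), post["accuracy"])
```

Running this sketch, the low-accuracy headline lands on top simply because every term in the score rewards attention, which is the imbalance the passage above describes.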
• Repeated messaging strengthened perceived certainty over time — When the same explanation is repeated across multiple institutions, perceived certainty increases — even when the underlying evidence remains incomplete.
As described earlier, familiar narratives require less mental effort, making them easier to accept than complex or competing interpretations. Recognizing this pattern — and catching yourself defaulting to the well-worn path — is one of the most practical things you can do to strengthen your own independent evaluation and make more confident, deliberate health decisions.
Take Back Control of How Health Information Shapes Your Decisions
When information overload drives confusion, the root problem isn’t knowledge — it’s filtering. You face competing claims, shifting narratives and uneven research visibility. That environment creates decision fatigue and weakens confidence. What follows isn’t a treatment protocol or dosage guide — it’s something more foundational.
These are the evaluation habits that help you cut through noise and judge any health claim on its merits, whether you’re assessing a new supplement, a repurposed medication, or a headline about the latest clinical trial. Restoring clarity starts by changing how you evaluate evidence, not by chasing every new claim. When you strengthen your personal information framework, your health decisions become steadier, faster, and more grounded.
1. Build a simple evidence hierarchy you trust — To reduce confusion, rank information sources before you read them. Place primary research, full interviews and original data above commentary, headlines and social media clips.
Free databases like PubMed.gov let you search for clinical trials and peer-reviewed studies directly — no subscription or medical degree required. Once this ranking becomes habit, you stop wasting mental energy on “should I trust this?” and start spending it on “what does this actually mean?”
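The evidence hierarchy in step 1 can be made concrete as a simple ranked lookup. The tiers and their ordering below are one hypothetical example of such a hierarchy, following the ranking suggested above (primary research and full interviews above commentary, headlines and social clips); the categories are not a fixed standard.

```python
# Hypothetical evidence tiers -- lower number means read it first.
EVIDENCE_TIERS = {
    "primary_research": 1,  # peer-reviewed studies, original data
    "full_interview": 2,    # complete source conversations
    "commentary": 3,        # opinion about the research
    "headline": 4,          # summaries optimized for attention
    "social_clip": 5,       # short excerpts stripped of context
}

def sort_by_evidence_tier(items):
    """Order information sources so higher-quality evidence comes first."""
    return sorted(items, key=lambda item: EVIDENCE_TIERS[item["kind"]])

reading_list = [
    {"kind": "social_clip", "title": "30-second excerpt"},
    {"kind": "primary_research", "title": "Randomized trial found on PubMed"},
    {"kind": "headline", "title": "News summary"},
]

for item in sort_by_evidence_tier(reading_list):
    print(item["kind"], "->", item["title"])
```

Once a ranking like this is habitual, the sorting happens automatically in your head: the trial abstract gets read before the news summary, and the social clip last, if at all.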
2. Track how timing shapes what you hear — Pay attention to when research begins, not just what conclusions appear. Early hypotheses often disappear when trials start late or receive limited visibility. Ivermectin offers a clear example. Physicians began exploring it as an early COVID treatment in the spring of 2020, but large-scale, well-funded clinical trials didn’t begin until much later.
In the gap between early clinical use and formal trial results, the public narrative had already moved on — and by the time data did emerge, many people had already formed firm opinions based on commentary rather than completed research. When you notice timing gaps like this, it becomes clear that absence of coverage doesn’t equal absence of investigation. This awareness protects you from assuming consensus too quickly.
3. Question who funded the research and what wasn’t studied — To strengthen your perspective, look beyond the findings themselves and ask who paid for the study, what alternatives were excluded and whether the research design favored a specific outcome. Funding shapes which questions get asked in the first place — and which ones don’t. When you make this a habit, you stop taking headlines at face value and start reading research with the context it deserves.
4. Limit algorithm influence with intentional information routines — Reduce narrative bias by choosing specific times and sources for research instead of relying on feeds. Saving original interviews, bookmarking primary materials and revisiting them later strengthens recall and reduces emotional decision-making. This turns information gathering into a repeatable skill rather than a passive experience.
5. Strengthen confidence through active comparison — Write down two or three competing explanations for any major health claim and compare their assumptions, evidence timing and incentives. This turns a feeling of “I don’t know what to believe” into an active investigation you control. When you practice it regularly, your ability to evaluate complex health debates improves, cognitive load drops and your decisions feel deliberate instead of pressured.
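Step 5 can be turned into a repeatable template. The sketch below shows one way to structure that written comparison; the example explanations and their entries are purely illustrative placeholders, not a verdict on any real medical debate.

```python
# Hypothetical comparison template for step 5. The entries are invented
# placeholders showing the structure, not actual findings.
claims = [
    {"explanation": "Explanation A",
     "assumptions": "early timing matters; drugs act in combination",
     "evidence_timing": "small studies first, large trials much later",
     "incentives": "low-cost generics, little sponsor funding"},
    {"explanation": "Explanation B",
     "assumptions": "single-drug trials capture the full effect",
     "evidence_timing": "large trials reported after opinions formed",
     "incentives": "well-funded pathways generate more data"},
]

def compare(claims, fields=("assumptions", "evidence_timing", "incentives")):
    """Print competing explanations side by side for deliberate review."""
    for claim in claims:
        print(claim["explanation"])
        for field in fields:
            print(f"  {field}: {claim[field]}")

compare(claims)
```

Whether on paper or in a file like this, forcing each competing explanation into the same three fields is what turns "I don’t know what to believe" into a comparison you can actually inspect.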
FAQs About How Information Systems Shape Health Decisions
Q: What is mass formation and why does it matter for health decisions?
A: Mass formation describes a psychological state where fear, uncertainty, and social isolation drive people toward shared narratives that provide emotional relief and belonging. This matters because it influences how scientific evidence, treatment options and risk information are interpreted, often shaping belief through repetition and social reinforcement rather than independent evaluation.
Q: Why were repurposed drug combinations harder to study during COVID-19?
A: According to Malone’s account, proposals to study combinations of already-approved medications faced regulatory requirements that altered trial design. Even when individual drugs had established safety records, additional data requirements and approval steps determined which therapies moved forward, influencing which treatment strategies became visible to the public.
Q: How did research timing affect which treatments people heard about?
A: When clinical trials start later, public narratives tend to form around the first available evidence rather than the full range of early therapeutic ideas. This timing gap means visibility often reflects when research was approved and funded — not simply which approaches existed.
Q: How do funding and institutional incentives shape medical consensus?
A: Funding direction determines which studies generate the most data, and those data influence guidelines, media coverage and professional alignment. Institutional incentives and reputational risk also affect which hypotheses researchers pursue publicly, contributing to uneven attention across competing medical ideas.
Q: What practical steps help you evaluate health information more independently?
A: Clear strategies include prioritizing primary sources, paying attention to research timelines, examining funding context, limiting algorithm-driven information exposure, and comparing multiple explanations for major claims. These habits reduce cognitive overload, strengthen confidence and support more deliberate health decisions.
Test Your Knowledge with Today’s Quiz!
Take today’s quiz to see how much you’ve learned from yesterday’s Mercola.com article.