Kolina Koltai’s written testimony for the U.S. House Select Subcommittee on the Coronavirus Crisis

Nov 17, 2021

On Wednesday, November 17, University of Washington Center for an Informed Public postdoctoral fellow Kolina Koltai offered witness testimony during a U.S. House Select Subcommittee on the Coronavirus Crisis hearing, “Combating Coronavirus Cons and the Monetization of Misinformation.” Koltai, who testified virtually, also offered written testimony expanding on her verbal remarks.


Written Testimony for the House Select Subcommittee on the Coronavirus Crisis
November 17, 2021
Statement of Kolina S. Koltai, PhD
Postdoctoral Fellow, Center for an Informed Public, University of Washington

Introduction
Good afternoon Chairman Clyburn, Ranking Member Scalise, and Members of the Subcommittee. Thank you for the opportunity to testify on the widespread proliferation of vaccine misinformation online. I want to acknowledge and thank the other witnesses here today for providing their testimony, as well as my colleagues at the University of Washington and other misinformation researchers throughout the country who focus on vaccine misinformation.

I am a postdoctoral fellow at the Center for an Informed Public at the University of Washington. I have spent the majority of my career as a researcher and an academic exploring vaccine hesitancy and vaccine misinformation. I study the ways vaccine-hesitant people use online spaces and social media platforms to find, spread, and assess content about vaccines. I received my doctorate from the School of Information at The University of Texas at Austin, where I focused my research on studying vaccine-hesitant spaces on Facebook (e.g., groups opposed to vaccination). My research at the Center for an Informed Public has focused on how misinformation around the COVID-19 vaccine has been promoted and spread across multiple platforms. I am immensely honored to provide my testimony today as a researcher who has been embedded in these online spaces since 2015.

Overview
The sustained COVID-19 pandemic has led to the proliferation of conflicting narratives and messages about vaccines as well as other public health topics, including wearing masks and the risks of the coronavirus. From fundamental questions about the safety and efficacy of the COVID-19 vaccine to more outlandish conspiracy theories, social media and online platforms have been struggling with the difficult and complex problem of how to mitigate the spread of misinformation on their sites while still allowing users the freedom to discuss and share information about the pandemic. My testimony today will broadly highlight the ways that vaccine misinformation continues to thrive online despite efforts from social media platforms. I will focus on three important takeaways today:

  1. Prominent superspreaders consistently disseminate vaccine misinformation online despite social media platform content moderation policies.
  2. Vaccine misinformation is not isolated to one platform, but rather is a cross-platform issue.
  3. There should be more action taken against those who are spreading misinformation for personal financial gain.

This committee is no stranger to the problem of vaccine hesitancy. In July, the committee held a hearing on “Building Trust And Battling Barriers: The Urgent Need To Overcome Vaccine Hesitancy” that discussed the need to minimize vaccine misinformation to reduce vaccine hesitancy. Vaccine hesitancy is a significant barrier to ending the pandemic. As of November 14, around 59% of Americans are fully vaccinated, still below the recommended inoculation threshold to reach herd immunity. And while there are many different reasons why some Americans are still hesitant to get vaccinated, we cannot ignore the role vaccine misinformation plays in vaccine hesitancy. To minimize confusion, I want to be clear about what I mean when I say “vaccine misinformation.”

Vaccine misinformation refers to incorrect or misleading information shared to influence public opinion or obscure the truth about vaccines. Vaccine misinformation can take many forms and sometimes contains kernels of truth. True and factual information about vaccines can also be de-contextualized or reframed in order to cement misinformation narratives and drive vaccine hesitancy. It is important to maintain the distinction between information that promotes vaccine hesitancy and vaccine misinformation.

For example, in April of this year, administration of the Johnson & Johnson vaccine was paused to investigate a potential risk of blood clots. This real news story can promote vaccine hesitancy but is not vaccine misinformation. However, the Johnson & Johnson pause was used in connection with vaccine misinformation online, which portrayed the risk of blood clots from the vaccine as very high. In reality, when the vaccine was paused, there were only 6 cases of blood clots out of the nearly 7 million Johnson & Johnson doses that had been administered. By May, out of 9 million doses, there were 28 cases of the rare blood clots. By comparison, some estimates state that close to 20% of COVID-19 patients in the ICU develop blood clots. The risk of blood clots is far greater from catching COVID-19 than from getting vaccinated. Vaccine misinformation takes the truth about the pause and uses it to promote misleading or false claims, such as that the COVID-19 vaccine is extremely dangerous or that cases of blood clots are being covered up.
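To put those figures in perspective, here is a rough back-of-the-envelope comparison using the numbers above (noting, as a caveat, that doses administered and ICU patients are not directly comparable populations):

  28 blood clot cases ÷ 9,000,000 doses ≈ 3 cases per million doses (roughly 0.0003%)
  20% of COVID-19 ICU patients with blood clots ≈ 200,000 cases per million ICU patients

Even read loosely, the difference is several orders of magnitude.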

Another example is the way that data from the CDC and FDA’s Vaccine Adverse Event Reporting System (VAERS) is decontextualized to promote vaccine hesitancy. VAERS is a system set up for anyone to report an ‘adverse event’ following a vaccine. It is designed to be a signaling system to detect when there are new, unusual, or rare side effects of a vaccine. VAERS was even used to help identify the potential blood clot risk with the Johnson & Johnson vaccine. However, data from this system can be misused. For example, since anyone can report any symptom to VAERS, there is an unknown number of false reports. These false reports can range from gaining superpowers to getting struck by lightning as side effects of vaccination. VAERS is not designed to determine whether any reported side effect is actually caused by a vaccine or by another factor. In addition, even minor, acceptable side effects, like redness or soreness at an injection site, can get reported to VAERS. And since anyone can access VAERS, some vaccine-opposed activists will take figures from VAERS to mislead the public into thinking that vaccines are more dangerous than they are. You can imagine an image with text that says “CDC data shows there have been over 5,000 ‘adverse events’ from the vaccine reported,” stripped of any explanation of where that data came from and what an “adverse event” is. The de-contextualization of information is a powerful tool in vaccine misinformation.

These are just two examples of the many ways vaccine misinformation takes shape online. Vaccine misinformation can be misleading or purely fictional, like the conspiracy theory that microchips are being implanted through vaccination.

Prominent superspreaders consistently disseminate vaccine misinformation online despite social media platform content moderation policies.

During the COVID-19 pandemic, the Center for Countering Digital Hate (CCDH) has released multiple reports coining the term “Disinformation Dozen” to refer to a small group of people responsible for a large proportion of vaccine misinformation online. In one of their reports, they claim that just 12 people were responsible for around two-thirds of the vaccine misinformation content online. The finding that a small group of people is responsible for an outsized portion of vaccine misinformation content online was further reiterated by documentation from Facebook whistleblower Frances Haugen, which indicated that Facebook page administrators in the US who repeatedly share misinformation are responsible for around 78% of misinformation views on Facebook. While Haugen’s claims were not specific to vaccine misinformation, we can see a pattern of a small group of superspreaders who make up a large portion of the vaccine misinformation space online. Further, as indicated by both CCDH and Haugen, very few sanctions are enacted on the accounts that consistently spread viral misinformation.

While many social media and online platforms have continuously updated their policies to address vaccine misinformation on their platforms during the pandemic, there is a gap between having a policy on misinformation and enforcing that policy. Drawing on research by myself and my colleagues, Dr. Rachel Moran and Ph.D. student Izzi Grasso, as well as the work of other researchers in this space, we know that vaccine-opposed influencers can be highly proficient at getting around platform content moderation policies. From tactics to avoid algorithmic detection to dog-whistling language, accounts that spread vaccine misinformation continue to go unchecked. For example, users may spell the word vaccine in non-traditional ways (e.g., va((ine) or even use alternative words or emojis (e.g., “maxine”) to avoid their content being flagged as vaccine-related. On Facebook specifically, users will post links to misinformation in the comment section instead of the main post to avoid an automated “fact-check” on the content. Our research also indicates that video and ephemeral content, like Instagram stories that disappear after 24 hours, often receive minimal fact-checking or moderation. These are just a few of the tactics used by vaccine misinformation superspreaders.

Relying on social media platforms’ policies to mitigate the spread of vaccine misinformation is insufficient without effective enforcement of those policies. Further, we as researchers are often limited in our ability to fully study the extent and spread of misinformation across multiple platforms. We are limited in the tools and resources needed to assess spread, especially with video and ephemeral content. We rely on tools that give us limited access to these platforms. Often, it is only the platforms themselves that know the extent of misinformation spread, as in the case of Haugen’s testimony. At a minimum, there needs to be a greater effort from social media platforms to limit the algorithmic spread and promotion of misinformation on their platforms, especially from prominent, influential accounts. As my colleague Renee DiResta at Stanford says, “You have a right to speech, but not a right to reach.” Social media platforms have a responsibility to the public health of the nation to prioritize this effort. As the research shows, once misinformation reaches people, it can be difficult to correct. It is like trying to put toothpaste back into the tube.

And while there is no perfect solution to undoing the damage of misinformation, we don’t have to combat misinformation if it never reaches people.

Recommendation: I recommend members of the committee continue to put pressure on social media and online platforms to minimize the amplification of vaccine misinformation from known repeat offenders.

Vaccine misinformation is not isolated to one platform, but rather is a cross-platform issue.

The prevalence of vaccine misinformation is not simply isolated to one platform. From popular social media platforms like YouTube, Instagram, and TikTok to other online platforms like Amazon, LinkTree, and GoFundMe, vaccine misinformation is an issue on multiple platforms and spreads from one site to the next. Content can originate on one platform and be easily spread across multiple platforms and spaces. For example, earlier this year, TikTok videos of people placing magnets on their arms, claiming that they were magnetic post-vaccination, quickly spread to Facebook, Twitter, YouTube, private messaging apps, and other platforms. Even if TikTok removes the original videos, the content can live on across subsequent platforms and continue to spread. This highlights the need to address vaccine misinformation across multiple platforms.

While addressing cross-platform spread is a complicated problem, platforms need to work together to mitigate the influence of prominent superspreaders. If one platform determines that a prominent user is continuously sharing viral vaccine misinformation and removes their account, other platforms should evaluate the potential removal of that user as well. Removing one problematic account on one platform does not address the influence of that individual.

For example, Robert F. Kennedy Jr.’s Instagram account was removed in February of this year for repeatedly sharing debunked claims about the COVID-19 vaccine. Kennedy, a member of CCDH’s Disinformation Dozen, has been a prominent figure who regularly shares vaccine misinformation, even prior to the coronavirus pandemic. In addition, the social media accounts of his nonprofit group, Children’s Health Defense, also share vaccine misinformation. However, despite his high-profile status and Facebook owning Instagram, only his personal Instagram account was removed. His accounts on Facebook and Twitter and his organization’s accounts are still up. It was only this September that Children’s Health Defense’s YouTube channel was removed as part of YouTube’s efforts to minimize the spread of vaccine misinformation.

But even beyond traditional social media platforms, vaccine misinformation is still an issue. On platforms like NextDoor, Amazon, LinkTree, GoFundMe, and many more, vaccine misinformation can proliferate in spaces we may not expect. For example, the work of some of my colleagues at the University of Washington highlights the way that Amazon’s algorithms promote vaccine misinformation. Dr. Tanu Mitra and Ph.D. student Prerna Juneja’s work shows that when searching for vaccine information on Amazon, the top-recommended products are often books that support vaccine refusal and contain vaccine misinformation. This is an issue for both childhood vaccines and the COVID-19 vaccines. While we can acknowledge that Amazon wants to provide an array of sources, we urge that resources promoting vaccine misinformation should not be algorithmically promoted to the top. All online platforms, especially those involved in e-commerce, need to evaluate how their sites contribute to both the spread of misinformation and the financial profitability of misinformation. For example, Dr. Joseph Mercola, another member of CCDH’s Disinformation Dozen, has been a long-standing advocate against vaccines, favoring alternative approaches to achieving immunity. His many books are not only easily accessible on Amazon but often are the top-recommended products when searching for vaccine information.

Recommendation: I recommend that members of the committee urge social media companies to coordinate their approach to reprioritizing or removing misinformation from superspreaders.

There should be more action taken against those who are spreading misinformation for personal financial gain.

Spreading vaccine misinformation can be a profitable endeavor. Through the promotion of vaccine misinformation, activists and influencers are able to profit by selling books, supplements, alternative treatments, and consultation services, along with collecting speaker fees and soliciting donations. There are many different actors in this space, from smaller influencers to the big names identified in the Disinformation Dozen. But what ties these users together is how they leverage vaccine skepticism and vaccine misinformation for profit.

This can happen in a variety of ways. As I already mentioned, Amazon is a platform that allows prominent figures to sell their products, as in the case of Mercola. An exposé by The Intercept, a non-profit news organization, in September showed how America’s Frontline Doctors (AFLD) were making millions off telemedicine consultations and drug sales. AFLD is a group of health care professionals who have consistently promoted vaccine hesitancy and vaccine misinformation during the pandemic. The Bollingers, also part of the Disinformation Dozen, have made millions of dollars from selling products and videos. They also have affiliate marketing relationships that allow other prominent vaccine-opposed figures to make money from promoting their content. These prominent organizations and figures all make thousands, if not millions, of dollars from the spread of vaccine misinformation. Even smaller accounts leverage the misinformation disseminated by larger accounts to sell their own products. This can range from selling fraudulent vaccination cards to promoting multi-level marketing products alongside vaccine misinformation.

And perhaps the most insidious component of vaccine misinformation is the way that some health care professionals have leveraged their credentials to not only spread vaccine misinformation but to directly profit from that spread. People like Dr. Joseph Mercola and Dr. Simone Gold use their credentials to create uncertainty about vaccines and diminish the public’s trust in healthcare professionals and nurses, while profiting from that uncertainty. And while not every healthcare professional who shares vaccine misinformation directly profits from it, they do degrade the trust the public has in vaccines as well as in the healthcare professionals who do promote the vaccine.

Recommendation: I recommend investigations and possible repercussions for those who consistently propagate viral vaccine misinformation, especially those who do so for personal financial gain.

I thank you for the opportunity to testify on this important issue and I look forward to answering your questions.
