Over the past several months, our team at the University of Washington Center for an Informed Public has been responding to inquiries about our research and UW associate professor Kate Starbird’s participation on an external advisory committee for the Cybersecurity and Infrastructure Security Agency (CISA). We previously addressed some of the misperceptions underlying these inquiries. This week, a reporter sent us questions based on those responses, and we have answered them. For full transparency, we are publishing Starbird’s responses here so that others can see them in full context.

What was the extent of the subcommittee’s communications and work with the following groups: Atlantic Council’s DFR Lab, Global Disinformation Index, Shorenstein Center, the Institute for Strategic Dialogue, First Draft and the Brennan Center?

None. As far as I know or can recall, we did not communicate with anyone from these groups in the course of our subcommittee activities.

At what point did Craig Newmark become involved with the subcommittee’s work, and what was the extent of his involvement?

Craig Newmark had no involvement with the work of the CISA subcommittee. As far as I know or can recall, we did not communicate with Craig Newmark about the work of the subcommittee.

What do you believe is CISA’s appropriate role in moderating MDM?

The subcommittee limited its scope almost exclusively to CISA’s role within the elections context, and my views here are likewise limited to CISA’s role around U.S. elections.

Our subcommittee recommendations are well-aligned with what I believe to be the role of CISA in countering mis- and disinformation — focused on education (about tactics and techniques of manipulation) and on supporting election officials in communicating factual information to counter harmful false claims and narratives that could disenfranchise voters (e.g., by confusing them about when or where to vote) or undermine trust in election materials, processes, or results.

Should CISA point to specific content for platforms to remove? Why or why not?

Our subcommittee did not discuss whether or not CISA should point to specific content for platforms to remove.

Personally, my opinion is that CISA and the U.S. government should not flag content for platforms to moderate — except in cases where the content is illegal (e.g., false information about when or where to vote that could disenfranchise voters), encourages illegal activity (e.g., voter fraud), encourages or threatens violence (e.g., against election officials) or is part of a foreign influence operation (e.g., Iranians impersonating Proud Boys in threatening letters to Democratic voters in 2020).

The subcommittee’s recommendations mention that CISA should “detect” MDM threats to “critical functions.” What does this involve? How would CISA rapidly respond to MDM if not proactively monitoring information channels?

We did not write that CISA should “detect MDM threats to critical functions.” This is a misleading edit of our recommendations.

Our subcommittee wrote the following: “In this work, CISA’s activities should be similar to the Agency’s actions to detect, warn about, and mitigate other threats to critical functions (e.g., cybersecurity threats).”

My understanding is that in their cybersecurity work, CISA collaborates with external partners (e.g., government offices, non-profit organizations, and companies) who proactively share information about cybersecurity threats (targeting those organizations) with CISA and each other. In this excerpt, we are recommending that CISA explore similar information-sharing pathways to create a shared awareness of harmful mis- and disinformation about elections.

The subcommittee repeatedly acknowledged in meetings that it shouldn’t be the arbiter of truth, and yet claims that CISA’s role should be to detect MDM. How is this contradiction reconciled?

Again, the subcommittee did not advocate that “CISA’s role should be to detect MDM.” As we described above, we recommended that CISA collaborate with external partners (e.g., local and state election officials) to gain a shared awareness of potential harmful mis- and disinformation about elections.

Elections in the United States are run by thousands of different jurisdictions, often with different materials, procedures, timelines, and other rules. Our subcommittee saw CISA’s role, primarily, as helping to direct people to information from these election officials — to help correct misunderstandings and counter intentional falsehoods.

You refer to criticism of CISA’s mission over malinformation as being in “bad faith.” Why?

Members of our subcommittee anticipated both good faith and bad faith criticism of our subcommittee’s work. We welcome good faith criticism, which will lead to better policies and outcomes. We were concerned, however, about “bad faith” criticism — i.e., criticism employed for strategic reasons to undermine the work of the subcommittee, score political points, and make it more difficult for society (broadly) to address misleading claims about elections.

We felt that the term “malinformation” was too broad and ill-defined to be useful for what we were recommending, and that it would be an easy focal point for bad faith criticism seeking to equate efforts to address informational threats with “censoring content that we don’t like.”

In terms of “socializing” the subcommittee’s work with partners and NGOs, what role would these outside partners play?

Members of the subcommittee explored the idea of talking to external experts about the subcommittee — to describe what we were working on, solicit their feedback (and good faith criticism), and get their advice on how to approach communicating about our work and mission. My understanding is that very few of these conversations took place. Personally, I had one short conversation with Jameel Jaffer about the subcommittee and its work.