By auditing algorithms, iSchool researchers work to better understand the misinformation we see and consume online

May 25, 2021

By Michael Grass

Even before the COVID-19 pandemic, there had been media reports about Amazon book search algorithms putting health- and vaccine-related misinformation at the top of reading lists. But as the pandemic has brought new attention to the risks of algorithm-amplified misinformation, just how bad is the problem? Is there any concrete, empirical evidence to prove or disprove the anecdotal stories, opinion pieces and other assertions that have appeared in the popular press in recent years? A paper recently presented by University of Washington Information School researchers at a leading academic conference shows the importance of auditing algorithms to better understand the ways they may or may not give greater traction to problematic online content like vaccination misinformation.

Earlier this month at the 2021 ACM CHI Virtual Conference on Human Factors in Computing Systems, iSchool PhD student Prerna Juneja presented findings from a recently published paper, “Auditing E-Commerce Platforms for Algorithmically Curated Vaccine Misinformation,” which was awarded a CHI Best Paper Honorable Mention.

The paper, written with iSchool assistant professor and Center for an Informed Public faculty member Tanu Mitra, details research that examined how vaccine misinformation has been amplified by algorithms used by Amazon and, through an auditing framework, found the e-commerce giant’s platform to be a “marketplace of multifaceted health misinformation,” as The Seattle Times wrote in a Jan. 28 article about their research. It’s a platform where searches surface books and other products that promote vaccine misinformation but also a place where “as customers engage with products espousing bogus science, Amazon’s recommendation algorithms point them to additional health misinformation.”

The research by Juneja and Mitra was conducted before the release of COVID-19 vaccines last year, but the paper was published earlier this year just as vaccine production and distribution were ramping up in the United States and elsewhere around the world.

In a research presentation video recorded for CHI 2021, Juneja said: “Our findings also suggest that traditional recommendation algorithms should not be blindly applied to all topics equally. There is an urgent need for Amazon to treat vaccine-related searches as searches of higher importance and ensure higher quality content” for those searches.

During the iSchool’s May 18 Spring Lecture, “Algorithmic Bias and Governance,” which featured presentations from Mitra and iSchool associate professor Chirag Shah, Mitra pointed out that on the day of the lecture, Amazon searches she did continued to turn up some books promoting vaccine and other health misinformation. 


Mitra also discussed another paper she worked on with Juneja and Virginia Tech student Eslam Hussein, which used an auditing framework to better understand how YouTube’s search algorithm and the algorithms behind its Top 5 and Up Next recommendations promote conspiracy theories about 9/11, chemtrails, the flat earth and the moon landing, as well as vaccine controversies.

In “Measuring Misinformation in Video Search Platforms: An Audit Study on YouTube,” published in Proceedings of the ACM on Human-Computer Interaction in May 2020, the researchers found that “demographics, such as, gender, age, and geolocation do not have a significant effect on amplifying misinformation in returned search results for users with brand new accounts.” They wrote: “On the other hand, once a user develops a watch history, these attributes do affect the extent of misinformation recommended to them.”

Additionally, their analysis revealed “a filter bubble effect, both in the Top 5 and Up-Next recommendations for all topics, except vaccine controversies; for these topics, watching videos that promote misinformation leads to more misinformative video recommendations,” they wrote.
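The kind of comparison such an audit makes can be illustrated with a toy score. The sketch below is purely hypothetical: the stance labels, the recommendation lists and the `misinfo_score` helper are invented for illustration and are not the paper’s actual annotations or metric. The idea it captures is the one described above: label each recommended video by its stance toward a conspiracy topic, then compare the average stance of recommendations served to a brand-new account against those served to a “sock puppet” account that has built a watch history of misinformation-promoting videos.

```python
def misinfo_score(stances):
    """Mean stance of a recommendation list, where each video is labeled
    -1 (debunking), 0 (neutral) or +1 (promoting). A score closer to +1
    means the list skews toward misinformation-promoting videos."""
    return sum(stances) / len(stances) if stances else 0.0

# Hypothetical Top 5 recommendations for a brand-new account
fresh_account = [0, -1, 0, 1, -1]

# Hypothetical Top 5 after the sock puppet watches several promoting videos
with_history = [1, 1, 0, 1, -1]

# A positive drift would indicate a filter-bubble effect: engaging with
# misinformation leads to more misinformative recommendations.
drift = misinfo_score(with_history) - misinfo_score(fresh_account)
```

Real audits are far more involved, controlling for demographics, geolocation and personalization across many accounts and topics, but the before-and-after comparison is the core of the filter-bubble measurement.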

Mitra said during the May 18 lecture that their research finding “implies that YouTube reacts to some types of misinformation” but not to all types of misinformative content on its platform.

Given the broad reach of algorithms in shaping what we see, interact with and consume online, and their associated risks and harms, how do tech companies, policymakers, researchers and others studying the dynamics and influence of algorithms set the path toward meaningful algorithmic governance?

Mitra discussed scenarios for what that governance could look like in the future. Part of the answer, she said, lies in laying the foundation for more algorithmic audits like the ones she’s pursued in her research.

External audits, conducted by academic researchers, government regulators or interested third parties, can identify risks in platforms or the algorithms they use: issues such as misinformation, bias, accessibility, fairness, representation, accountability and discrimination.

Mitra noted a significant challenge for researchers in this space: It’s hard to gain access to data used by platforms to train algorithms, which are often considered proprietary trade secrets. 

But the algorithms that tech platforms use will likely see more regulatory scrutiny in the years to come. 

During her lecture, Mitra discussed an April 19 Federal Trade Commission blog post that said that businesses using racially biased algorithms may be violating the FTC Act and other federal laws, a signal that the FTC under the Biden administration may flex its regulatory muscles more than it has in recent years.

Given that likelihood, Mitra said, more research and cross-sector collaboration is needed to better understand algorithms, related tech policy challenges and opportunities to mitigate harmful risks and behaviors. 

“It is in the interest of organizations to actually work with their users, collaborators, interested third-parties and academics to think about how they can chart this way forward … for algorithmic governance and control these sorts of harmful behaviors,” Mitra said.

