The Center for an Informed Public has awarded Innovation Fund grants to three project proposals, funding that will help support collaborative, multi-disciplinary and timely work intended to advance the CIP’s mission to resist strategic misinformation, promote an informed society and strengthen democratic discourse. The Innovation Fund is intended to seed promising new ideas and generate proofs-of-concept for further development. Award amounts average around $10,000 and project periods are 6-12 months.
The CIP’s Innovation Fund grants for 2025 wouldn’t be possible without the generous financial support of the John S. and James L. Knight Foundation and the University of Washington’s Technology & Social Change Group (TASCHA).
The three projects awarded funding are detailed below:
Politicization and polarization in entertainment media
Alexandros Efstratiou, CIP Postdoctoral Scholar, Information School
This project will study political polarization in non-political domains, namely the politicization of entertainment media, and how it may be used to convey political messages to non-political audiences. The project is motivated by two recent advances in the literature: (1) a growing body of evidence showing that the vast majority of people rarely engage with news or see politics on their feeds, and (2) findings demonstrating that certain entertainment titles (e.g., TV shows) have highly political audiences, to the extent that expressed preferences for these entertainment-associated entities can serve as proxies for inferring one's political identity. Specifically, we pose three research questions:
- (1) What is the prevalence of politicization in entertainment?
- (2) How has partisan polarization in entertainment changed over time?
- (3) Can we characterize certain entertainment-related entities as bridging or polarizing?
By addressing these questions, we hope to inform novel depolarization interventions and motivate regulatory incentives.
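To make the third research question concrete, here is a minimal illustrative sketch, not the project's actual methodology, of one way an entertainment entity could be scored as bridging or polarizing. It assumes hypothetical audience-lean scores in [-1, 1] (negative for left-leaning, positive for right-leaning); the titles, values and threshold are invented for illustration.

```python
"""Illustrative sketch (not the project's method): scoring entertainment
entities as 'bridging' or 'polarizing' from the inferred political
leanings of their audiences. All data here is hypothetical."""

from statistics import mean

# Hypothetical per-title audience leans in [-1, 1]; real data might come
# from surveys or inferred platform follower graphs.
audiences = {
    "Show A": [-0.8, -0.7, -0.9, -0.6, -0.8],  # heavily one-sided audience
    "Show B": [0.7, 0.8, 0.6, 0.9, 0.7],       # heavily one-sided audience
    "Show C": [-0.6, 0.5, -0.4, 0.6, 0.1],     # mixed audience
}

def partisan_skew(leans):
    """Mean lean: values far from 0 mean the audience clusters on one side."""
    return mean(leans)

def classify(leans, threshold=0.3):
    """Label an entity by the absolute skew of its audience."""
    return "polarizing" if abs(partisan_skew(leans)) > threshold else "bridging"

for title, leans in audiences.items():
    print(f"{title}: skew={partisan_skew(leans):+.2f} -> {classify(leans)}")
```

A real analysis would replace the invented leans with large-scale inferred audience data and likely use a more robust dispersion or bimodality measure than a single skew threshold.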
Social inclusion prompts for belief disengagement
Kristen Engel, PhD Candidate, Information School
Social media interactions provide pathways to engagement with problematic content that can radicalize user beliefs and behaviors, but they also provide pathways that support disengagement and recovery. While prior work considers the social, psychological, and linguistic dynamics contributing to engagement with online harms such as conspiracy theorizing, less is understood about the factors contributing to disengagement from such content. This project will study how supportive, socially inclusive language in comments on posts expressing doubt or dissonance about conspiracy theory beliefs facilitates continued and civil interactions. Retrieval-augmented generation (RAG) and natural language processing techniques will be used to support the identification of doubt or cognitive dissonance in beliefs. This work will investigate how we may leverage augmented large language models and peer-driven interventions to support audience-mediated disengagement from problematic content online. The project also aims to contribute insights into how prosocial user interactions may sustain healthier online conversations and recovery from false or misleading beliefs.
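As a rough illustration of the retrieval idea, and only an assumption about what such a pipeline might look like, the toy sketch below retrieves the labeled example posts most similar to a new comment and votes on whether it expresses doubt. The examples, labels and bag-of-words retriever are invented stand-ins; a production system would pair a real retriever with an LLM prompt.

```python
"""Toy sketch (an assumption, not the project's pipeline): retrieve labeled
example posts similar to a new comment, then use the labels of the nearest
examples to flag expressions of doubt in conspiracy beliefs."""

from collections import Counter
from math import sqrt

# Hypothetical labeled examples that a retriever would index.
EXAMPLES = [
    ("i'm starting to question whether any of this is true", "doubt"),
    ("not sure i believe this theory anymore", "doubt"),
    ("wake up, the evidence is everywhere", "committed"),
    ("they are hiding the truth from all of us", "committed"),
]

def bow(text):
    """Bag-of-words vector as a token-count dictionary."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two token-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def flag_doubt(comment, k=2):
    """Label a comment by majority vote over its k most similar examples."""
    ranked = sorted(EXAMPLES, key=lambda ex: cosine(bow(comment), bow(ex[0])), reverse=True)
    labels = [label for _, label in ranked[:k]]
    return max(set(labels), key=labels.count)

print(flag_doubt("honestly i'm not sure i believe it anymore"))  # likely 'doubt'
```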
The future of ‘Trust & Safety’: Examining the shift to AI in online safety efforts
Rachel Moran-Prestridge, Senior Research Scientist, Information School
Over recent decades, Trust and Safety (T&S) teams have become integral to communications and technology companies, responsible for addressing issues like online harassment, child sexual abuse material (CSAM) and the spread of problematic information. Despite their importance, T&S teams faced significant downsizing from 2022 to 2025, with mass layoffs at companies like X (formerly Twitter), Meta, Amazon and Alphabet. Previous research has explored this decline and noted growth in the use and development of AI-enabled tools to replace and/or support human actors in detecting, managing and reducing harmful and illegal content. However, research has also highlighted concerns over this shift to AI, including limitations on adaptability, the undermining of human labor and biased training data, prompting a need to further investigate the role of AI in shaping the future of online safety efforts. Accordingly, this project will:
- (1) Document the range of AI-enabled online safety tools being developed and/or used by T&S professionals.
- (2) Examine professional attitudes toward the broader potential of AI to mitigate current issues within the T&S field.
- (3) Identify concerns around the use of AI within online safety efforts.
This will be achieved through in-depth interviews with T&S professionals working with or developing AI tools for T&S work.