CIP in the News: June 2023

Jun 30, 2023

News coverage from June 2023 about the Center for an Informed Public and CIP-affiliated research and researchers.

  • National Public Radio (June 13): “AI-generated images are everywhere. Here’s how to spot them”
    In an NPR Life Kit segment, reporter Shannon Bond cited the “SIFT” method for fact-checking and contextualizing claims online, developed by CIP research scientist Mike Caulfield, as one way to help people spot AI-generated media.

***

  • The Washington Post (June 14): “Parts of Reddit are staying dark. Our search results may suffer for it.”
    CIP research scientist Sukrit Venkatagiri was interviewed by The Washington Post about Reddit’s unique qualities as Reddit communities went private in June, protesting the company’s plan to charge software developers for access to its data. Venkatagiri emphasized the value of Reddit offering “diverse opinions on topics that aren’t necessarily influenced by commercial interests.”

***

  • PBS NewsHour (June 20): “Biden meets with tech leaders to discuss future and regulation of artificial intelligence”
    In a PBS NewsHour interview about AI regulation, CIP co-founder Ryan Calo, a professor at the UW School of Law and Information School, said that “it shouldn’t really be left to the companies to make these kinds of decisions, because, left to their own devices, as we can see with past technologies like the Internet, they’re not going to fully address or mitigate the big range of harms, whether to the environment or job displacement, bias, privacy, misinformation.”

***

  • Undark (June 22): “AI creators want us to believe AI is an existential threat. Why?”
    CIP co-founder Ryan Calo, a professor at the UW School of Law and Information School, wrote an opinion piece in Undark magazine about how the public fixation on extinction from AI, a narrative promoted by industry insiders, distracts from AI’s immediate harms. Calo writes: “There is no obvious path between today’s machine learning models — which mimic human creativity by predicting the next word, sound, or pixel — and an AI that can form a hostile intent or circumvent our every effort to contain it.”

Other News