In a UW Public Lectures talk, Jevin West discusses the power, potential and pitfalls of generative AI technology

Nov 1, 2023

By Michael Grass
Center for an Informed Public
University of Washington

During “Generative Misinformation,” an October 27 talk at Town Hall Seattle sponsored by the University of Washington Graduate School’s Office of Public Lectures, iSchool associate professor and Center for an Informed Public co-founder Jevin West discussed how generative artificial intelligence is making it more challenging to make sense of the information we see online and understand what’s taking shape in the world around us. 

Opening his lecture, West said that when assessing the power, potential and pitfalls of the continued development of generative AI and its societal impacts, it’s important to keep four things in mind. 

First, West said: “Let’s be amazed” by AI technology, what it can do and the creativity it can spark. “This technology is magical. As a kid and even as an adult, I love magic shows. When I go to a magic show, I’m always excited to see a magic trick. Even if I know it’s a magic trick, it’s amazing, and I want to feel amazed and leave amazed. That’s how I feel about this technology.”

Second, amid all sorts of AI hype in media and popular culture, it’s important to be scared and not overlook the negative impacts of the technology, West said. 

“Let’s be scared” by AI’s real and potential harms, West said, including the use of AI as a tool to spread disinformation, uncertainty and confusion in our information spaces. He referenced comments his colleague Kate Starbird, the CIP’s director and an HCDE associate professor, made in a Wired article this summer about the capability to use AI technology to cheaply customize manipulative online content.

“The difference now is that you can target these videos at a degree that you couldn’t before and you can do that incredibly cheaply and at scale,” West said. 

Third, West said: “Let’s be critical.” 

It’s important to ask tough questions about the uses of generative AI and its impacts on information integrity. “That’s where I’m spending most of my time as an author and a researcher right now. I have students working on this. I’m writing papers and perspectives with my colleagues at the University of Washington where we’re being pretty critical about the technology, talking about the limitations and talking about the ways this technology can have a negative impact on society.” 

And fourth, it’s also important to “be practical” about generative AI, West said. “The boat sailed on this technology. It’s here to stay,” including in the classroom, where educators have to be practical and find ways to adapt, he said.

West, who uses more written tests in his classes than he once did, allows his iSchool students to use generative AI in their work, but only if they clearly disclose how they’ve used it. That way, he said, students have a better understanding of how the technology works, where it falls short, and where it could lead to problems.

“It’s an interesting time for both the students and faculty in this space,” he said. “But the most important thing that students miss right now — they’re becoming query engineers, not writers. And writing, as we all know, is about crystallizing your thoughts. It’s the process of writing that’s of value and not always the product. And that’s one of my biggest worries with students” using this technology. 

West reminded the in-person audience and viewers online that the large language models that AI chatbots and related technologies are built on are, first and foremost, distributions over word forms. “They are not models of knowledge,” he said. “I know they get anthropomorphized a lot. We give them these human-like characteristics. They do things that humans are uniquely poised to do and have the skill to do, but they are not models of knowledge.”

During his lecture, West shared what he discovered earlier this year when he used an AI chatbot, in this case the Meta-developed Galactica, to see how a large language model would define Brandolini’s Law. Also known as the bullshit asymmetry principle, Brandolini’s Law states that the amount of energy needed to refute bullshit is an order of magnitude bigger than the energy needed to produce it.

West detailed Galactica’s failure to accurately define the bullshit asymmetry principle, along with the other false information it provided about it, in an op-ed in The Seattle Times earlier this year.

Galactica’s answer: Brandolini’s Law is “a theory in economics [Not True] proposed by Gianni Brandolini [Not True], a professor at the University of Padua [Not True], which states that ‘the smaller economic unit, the greater its efficiency [Not True] …”

Or, as West succinctly put it: The chatbot “was bullshitting the bullshit principle.”

Michael Grass is the CIP’s assistant director for communications.

