Advertiser influence and platform accountability: The X dilemma

Dec 4, 2023

By Elissa Melendez

Content providers and advertisers have always had a complicated relationship. They rely on each other to keep viewers engaged and returning, but what happens when they fall out of sync? The recent sponsor exodus from the social media platform X (formerly Twitter) offers a compelling example.

As a legal fellow with the University of Washington’s Center for an Informed Public and a former advertising trade professional, I’m examining the relationship between advertisers and online toxicity. Advertisers are in a unique position to push for a more responsible and ethical online environment, addressing a gap where platforms have been slow to act. But it must be done with caution.

X’s struggle with advertiser support is well documented. Since Elon Musk’s 2022 takeover, which saw a sharp cutback in content moderation, his vision for a platform of free expression has clashed with the stark reality that the environment is far from safe for users. Reports from the Anti-Defamation League (ADL) and the Center for Countering Digital Hate (CCDH) revealed a surge in hate speech, harassment, and extremist content within the first three months of Musk’s tenure. These reports led to a “high risk” label from the largest advertising conglomerates, prompting a mass exodus of more than 600 major advertisers, including Coca-Cola, Unilever, Jeep, and Wells Fargo. In an effort to win advertisers and their revenue back, X launched a number of brand safety initiatives. Yet they have proven largely unsuccessful: harmful content and its visibility continue to rise.

As the Israel-Hamas conflict unfolds, X remains a hotspot for hate speech. According to a recent review by the CCDH, X was not removing posts that promoted bigotry and incited violence against Muslims, Palestinians, and Jews, in violation of its own community standards. In November, Musk himself signaled his agreement with an anti-Semitic conspiracy theory, prompting IBM to suspend its advertising through the remainder of the year. It also led more than 160 rabbis and activists to urge Apple, Disney, and Google to follow IBM’s lead. Over 200 advertisers have since followed suit, and some have said they do not plan to return to the platform at all. X faces an estimated $75 million revenue loss by the end of the year.

X’s reaction to the withdrawal and financial loss exposes the complexities ahead. After the ADL’s initial report, Musk threatened to sue the organization for depicting X as anti-Semitic, fueling the early advertiser withdrawal and revenue loss. He subsequently filed defamation suits against both the CCDH and Media Matters for America over their reports of increasing hate speech on X, the latter for its initial reporting on IBM advertising appearing alongside his anti-Semitic post. And now Musk has a new message for advertisers as they continue to leave: “Go f*** yourselves.” X CEO Linda Yaccarino made X’s position clear: “And here’s my perspective when it comes to advertising: X is standing at a unique and amazing intersection of Free Speech and Main Street — and the X community is powerful and is here to welcome you. To our partners who believe in our meaningful work — Thank You.”

This sequence of events exposes the power dynamic between advertisers and content platforms. Advertisers seek high engagement and views, often drawn by the unfortunate allure of negative content. At the same time, they strive to avoid the adverse associations that come with their ads appearing next to content misaligned with their values. Suspending advertising emerges as a powerful strategy for advocating a more responsible content moderation process while protecting their own brand safety, but it carries risks too. It’s crucial to maintain a balance, ensuring that advertisers don’t inadvertently contribute to economic censorship, in which they control narratives, or create a chilling effect on free speech.

The 2017-2019 “YouTube Adpocalypse” is among the notable digital media examples of major brands suspending advertising on YouTube over concerns about association with harmful content. That content included anti-Semitism, terrorist extremism, and what was described as a “soft-core pedophilia ring.” The latter issue arose from YouTube’s “up next” algorithms, which enabled pedophiles to network and share links through the comment sections of videos featuring children.

This led to changes in YouTube’s content policies, including increased restrictions on content eligible for advertising and modified eligibility requirements for YouTube’s Partner Program. The changes were controversial because the stricter policies negatively impacted some content creators; finding the right balance has been a work in progress and is still evolving today. Notably, YouTube was aware of these significant content moderation lapses before the advertisers’ withdrawals, yet it is the advertisers’ actions that are credited with sparking the necessary policy changes.

Managing content adjacency has been a consistent tool for advertisers to protect brand safety in both traditional and digital media. That influence and power should not be overlooked, and as we navigate the complexities of online platform content moderation, advertisers’ role in demanding accountability becomes even more crucial.

Elissa Melendez is a Legal Fellow at the University of Washington’s Center for an Informed Public and Master of Jurisprudence student at the UW School of Law.


PHOTO AT TOP: Elon Musk in 2022. Photo by Steve Jurvetson via Flickr / CC BY 2.0