With AI, instead of super intelligence, we’ve created super communicators, new study argues

May 16, 2025

Systems using artificial intelligence, including conversational chatbots, have become more human-like than was ever imagined, with communication skills that now surpass those of most people.

In “The benefits and dangers of anthropomorphic conversational agents,” a new article published in the Proceedings of the National Academy of Sciences, researchers at the University of Sydney in Australia and the University of Washington’s Center for an Informed Public show how these systems can become so compelling that users forget they are interacting with machines that possess no genuine humanness.

“The general public is not prepared for what is coming,” said Sandra Peter, an associate professor at the University of Sydney Business School. “We always expected AI to be highly rational but lack humanity. Instead, AI developers built the opposite.”

The authors point out that a wide range of studies, taken together, show that the large language models underpinning AI systems now outpace most humans in writing empathetically and persuasively. They excel at role-play and consistently pass the Turing test, fooling people into believing they are talking to a real human.

Anthropomorphism traditionally referred to humans ascribing human traits to machines. But conversational chatbots are different, according to Peter. They are anthropomorphic by nature, effectively indistinguishable in conversation because they mimic humanness so convincingly.

“This is a significant development,” said co-author Kai Riemer, a professor of information technology and organization at the University of Sydney. “For the first time, machines are anthropomorphic and convincing in human appearance. And that makes resisting them increasingly difficult.”

Peter, Riemer and co-author Jevin West, a University of Washington Center for an Informed Public co-founder, coined the term “anthropomorphic seduction” to describe the human-like qualities exhibited in the ways AI chatbots interact with humans despite the absence of any true human traits.

“We must not forget that these machines do not possess empathy or human understanding,” Riemer said. “They only appear like they do.”

Chatbots provide new opportunities for user interfaces that make complex information widely accessible. But because they have become more persuasive than humans, the authors warn, these conversational agents open the door to manipulation at scale: they allow for persuasion without moral inhibitions.

Providers of popular AI systems, like OpenAI, are increasingly working to make their creations more engaging by giving them personalities. West said that while this will make them more seductive, it could also make them far more “sticky,” encouraging users to spend increasing amounts of time with them and potentially to give up more data about themselves.

While AI companion apps might alleviate feelings of loneliness, systems that lean on anthropomorphic seduction have also been criticized for exploiting that loneliness, said West, a UW Information School professor.

The speed of technological advancement requires urgent awareness and regulation, said West. 

In the article, the authors suggest that policymakers consider implications across three areas: (1) risk level, (2) transparency and (3) mitigation, and point to the potential to design safety ratings for chatbots, similar to ratings used in the entertainment industry for films, television and gaming.

“Systems that embody anthropomorphic abilities at levels that match or exceed most humans require transparency and should come with appropriate labeling and disclosure,” the authors wrote in PNAS, noting that there will be “increasing commercial pressure to take full advantage of their ability to fine-tune LLMs for increased human likeness, creating highly effective anthropomorphic agents that exploit anthropomorphic qualities for economic gain, with potential unintended consequences in the long run.”


Image at top generated using ChatGPT.
