Some Reddit users just love to disagree, new AI-powered troll-spotting algorithm finds

In today's fractured online landscape, it is harder than ever to identify harmful actors such as trolls and misinformation spreaders.
Often, efforts to spot malicious accounts focus on analyzing what they say. However, our latest research suggests we should be paying more attention to what they do, and how they do it.
We have developed a way to identify potentially harmful online actors based solely on their behavioral patterns (the way they interact with others) rather than the content they share. We presented this work at the recent ACM Web Conference 2025, where it was awarded Best Paper.
Beyond looking at what people say
Traditional approaches to spotting problematic online behavior typically rely on two methods. One is to examine content (what people are saying). The other is to analyze the network structure (who follows whom).
These methods have limitations.
Harmful actors can evade content analysis: they may code their language carefully, or share misleading information without using obvious trigger words.
Network analysis falls short on platforms such as Reddit, where connections between users aren't explicit and communities are organized around topics rather than social relationships.
We wanted to find a way to identify harmful actors that couldn't be easily gamed. We realized we could by focusing on behavior: how people interact, rather than what they say.
Teaching AI to understand human behavior online
Our approach uses a technique called inverse reinforcement learning. This method is typically used to understand human decision-making in fields such as autonomous driving and game theory.
We adapted this technology to analyze how users behave on social media platforms.
The system works by observing a user's actions, such as creating new threads, posting comments and replying to others. From those actions it infers the underlying strategy or "policy" that drives their behavior.
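To make the idea concrete, here is a minimal sketch in Python. The action set, features and fitting procedure are simplified assumptions for illustration, not the authors' implementation: with no environment dynamics, maximum-entropy inverse reinforcement learning reduces to fitting a softmax "policy" over a user's observed action sequence.

```python
import numpy as np

# Toy action space: the things a user can do at each step on the platform.
ACTIONS = ["create_thread", "comment", "reply"]
A = len(ACTIONS)

def fit_policy(sequence, lr=0.5, steps=500):
    """Fit a softmax policy P(next action | previous action) by maximum likelihood.

    theta[s, a] plays the role of a reward for taking action a right after
    action s; the gradient is observed minus expected feature counts.
    """
    idx = [ACTIONS.index(a) for a in sequence]
    pairs = list(zip(idx[:-1], idx[1:]))
    theta = np.zeros((A, A))              # theta[prev_action, next_action]
    for _ in range(steps):
        grad = np.zeros_like(theta)
        for s, a in pairs:
            p = np.exp(theta[s] - theta[s].max())
            p /= p.sum()
            grad[s] -= p                  # expected feature counts under the model
            grad[s, a] += 1.0             # observed feature counts
        theta += lr * grad / len(pairs)
    return theta

# Example: a user who overwhelmingly replies to others rather than starting threads.
history = ["comment", "reply", "reply", "comment", "reply", "reply", "reply"]
theta = fit_policy(history)
probs = np.exp(theta - theta.max(axis=1, keepdims=True))
probs /= probs.sum(axis=1, keepdims=True)
print(np.round(probs, 2))                 # rows: previous action, columns: next action
```

The recovered matrix summarizes the user's strategy and can serve as a compact behavioral fingerprint.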
In our Reddit case study, we analyzed 5.9 million interactions over six years. We identified five distinct behavioral personas, including one particularly notable group: the "disagreers."
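To give a flavor of how such personas might emerge (a sketch under assumed inputs, not the paper's pipeline), once each user is summarized by a flattened policy vector, a standard clustering step groups users with similar strategies:

```python
import numpy as np
from sklearn.cluster import KMeans

# Assume each row summarizes one user as a flattened policy (for example the
# 3x3 matrix from the sketch above), so behaviorally similar users sit close
# together in this space. The random data here is a placeholder, not real users.
rng = np.random.default_rng(seed=0)
user_policies = rng.random((1000, 9))

# Five clusters, mirroring the five personas reported in the study.
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(user_policies)
personas = kmeans.labels_      # persona assignment for each user
print(np.bincount(personas))   # how many users fall into each persona
```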
Meet the 'disagreers'
Perhaps our most striking result was finding an entire class of Reddit users whose primary purpose seems to be to disagree with others. These users specifically seek out opportunities to post contradictory comments, especially in response to disagreement, and then move on without waiting for replies.
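As a purely illustrative heuristic (the field names and thresholds below are invented for this example; our study infers such signatures from learned policies, not hand-written rules), the "disagreer" pattern could be sketched like this:

```python
def looks_like_disagreer(comments, min_comments=20):
    """Flag users whose comments mostly contradict the parent comment and who
    rarely return to the thread afterwards.

    Each comment is a dict with two illustrative boolean fields:
    'contradicts_parent' and 'returned_to_thread'.
    """
    if len(comments) < min_comments:
        return False               # too little history to judge
    n = len(comments)
    contradict_rate = sum(c["contradicts_parent"] for c in comments) / n
    follow_up_rate = sum(c["returned_to_thread"] for c in comments) / n
    # "Hit-and-run" disagreement: high contradiction, little follow-up.
    return contradict_rate > 0.7 and follow_up_rate < 0.2
```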
The "disagreers" were most common in politically-focused subreddits (forums focused on particular topics) such as , , and . Interestingly, they were much less common in the now-banned pro-Trump forum despite its political focus.
This pattern reveals how behavioral analysis can uncover dynamics that content analysis might miss. In r/The_Donald, users tended to agree with each other while directing hostility toward outside targets. This dynamic may explain why traditional content moderation has struggled to address problems in such communities.
Soccer fans and gamers
Our research also revealed unexpected connections. Users discussing completely different topics sometimes displayed remarkably similar behavioral patterns.
We found striking similarities between users discussing soccer and users discussing e-sports.
This similarity emerges from the fundamental nature of both communities. Soccer and e-sports fans engage in parallel ways: they passionately support specific teams, follow matches with intense interest, participate in heated discussions about strategies and player performances, celebrate victories, and dissect defeats.
Both communities foster strong tribal identities. Users defend their favored teams while critiquing rivals.
Whether debating Premier League tactics or League of Legends champions, the underlying interaction patterns (the timing, sequence and emotional tone of responses) remain consistent across these topically distinct communities.
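One simple way to make "similar behavioral patterns" measurable (a sketch only; the study compares inferred policies, and the profile features below are invented for illustration) is to compare per-community behavior profiles with cosine similarity:

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine of the angle between two feature vectors (1.0 = identical direction)."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical per-community averages of behavioral features, for example
# [share of actions that are replies, replies posted per active hour,
#  share of replies that get a follow-up from the same user].
soccer_profile  = np.array([0.82, 3.1, 0.15])
esports_profile = np.array([0.79, 2.9, 0.17])
news_profile    = np.array([0.35, 0.8, 0.60])

print(cosine_similarity(soccer_profile, esports_profile))  # close to 1: very similar
print(cosine_similarity(soccer_profile, news_profile))     # noticeably lower
```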
This challenges conventional wisdom about online polarization. While echo chambers are often blamed for increasing division, our research suggests behavioral patterns can transcend topical boundaries. Users may be divided more by how they interact than by what they discuss.
Beyond troll detection
The implications of this research extend well beyond academic interest. Platform moderators could use behavioral patterns to identify potentially problematic users before they've posted large volumes of harmful content.
Unlike content moderation, behavioral analysis does not depend on understanding language. It is hard to evade, since changing one's behavioral patterns requires more effort than adjusting language.
The approach could also help design more effective strategies to counter misinformation. Rather than focusing solely on the content, we can design systems that encourage more constructive engagement patterns.
For social media users, this research offers a reminder that how we engage online, not just what we say, shapes our digital identity and influences others.
As online spaces continue to grapple with manipulation, harassment and polarization, approaches that consider behavioral patterns alongside content analysis may offer more effective solutions for fostering healthier online communities.
More information: Lanqin Yuan et al, "Behavioral Homophily in Social Media via Inverse Reinforcement Learning: A Reddit Case Study," Proceedings of the ACM Web Conference 2025 (2025).
Provided by The Conversation
This article is republished from The Conversation under a Creative Commons license. Read the original article.