The rise of artificial intelligence-generated content poses significant challenges for moderators on Reddit, the platform known for fostering human interaction. While some moderators acknowledge the potential benefits of AI, they express concern that it could undermine the community’s authenticity and quality. According to a study led by Travis Lloyd, a doctoral student in information science, moderators are apprehensive about AI content affecting the quality of posts, disrupting social dynamics, and complicating governance.
Lloyd’s research, titled “‘There Has To Be a Lot That We’re Missing’: Moderating AI-Generated Content on Reddit,” will be presented at the ACM Conference on Computer-Supported Cooperative Work and Social Computing (CSCW), held October 18-22, 2025, in Bergen, Norway. The paper has received an honorable mention for best paper. The study is co-authored by Mor Naaman, professor of information science at Cornell Tech, and Joseph Reagle, associate professor at Northeastern University.
Reddit has more than 110 million daily active users, with discussions ranging from politics to entertainment. Users can share various content types, including news links and videos, and engage through comments and votes. Each subreddit operates under its own guidelines, which shape how content is evaluated.
Lloyd’s research began in 2023, following the launch of ChatGPT, with the aim of understanding how AI tools would affect Reddit’s information ecosystem. “Detecting AI-generated content is challenging,” he stated, recognizing that moderators would face similar difficulties. The study drew on interviews with 15 moderators who oversee more than 100 subreddits, with memberships ranging from 10 members to more than 32 million.
Most moderators expressed skepticism toward AI-generated content. One moderator of the r/AskHistorians subreddit noted a positive use case, in which non-English speakers used AI to translate their insights into English. “They write their answer in German and then use ChatGPT to translate it,” they explained, highlighting that the intellectual contribution still comes from the user.
Conversely, another moderator from the subreddit r/WritingPrompts firmly stated, “Let’s be absolutely clear: you are not allowed to use AI in this subreddit; you will be banned.” The consensus among moderators indicated that content quality was their primary concern. One moderator remarked that while AI-generated content attempts to mimic the depth of traditional posts, it often contains “glaring errors in both style and content.”
Concerns about social dynamics also surfaced. Several moderators feared that AI might diminish meaningful interactions, leading to strained relationships and a violation of community values. The moderator of r/explainlikeimfive described AI content as “the most threatening concern,” emphasizing its disruptive nature and the difficulty in detection.
Naaman pointed out that the responsibility of maintaining Reddit’s human-centric ethos falls heavily on moderators, who are primarily volunteers. “It remains a huge question how they will achieve that goal,” he noted, stressing the need for support from Reddit and the broader research community to address these challenges.
Despite the hurdles, Lloyd remains optimistic. “This study showed us there is an appetite for human interaction,” he said. “As long as that desire exists, people will strive to create human-only spaces.” The research was supported in part by funding from the National Science Foundation.
