AI Models Get Brain Rot, Too


AI models may be a bit like humans, after all.

A new study from the University of Texas at Austin, Texas A&M, and Purdue University shows that large language models fed a diet of popular but low-quality social media content experience a kind of “brain rot” that may be familiar to anyone who has spent too long doomscrolling on X or TikTok.

“We live in an age where information grows faster than attention spans—and much of it is engineered to capture clicks, not convey truth or depth,” says Junyuan Hong, an incoming assistant professor at the National University of Singapore who worked on the study as a graduate student at UT Austin. “We wondered: What happens when AIs are trained on the same stuff?”

Hong and his colleagues fed different kinds of text to two open-source large language models during pretraining. They examined what happened when the models were given a mix of highly “engaging,” or widely shared, social media posts and posts containing sensational or hyped phrases like “wow,” “look,” or “today only.”

The researchers then used several benchmarks to gauge the impact of this “junk” social media diet on the two models: Meta’s Llama and Alibaba’s Qwen.
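The paper describes the junk data only at a high level, but the filtering idea can be sketched. Below is a minimal, hypothetical Python sketch of how such a filter might work, assuming posts carry a share count as an engagement proxy and flagging the sensational markers the researchers mention; the thresholds, field names, and helper functions are illustrative assumptions, not the study’s actual pipeline.

```python
# Hypothetical sketch of a "junk" data filter, loosely following the study's
# description: a post counts as junk if it is widely shared (highly engaging)
# or leans on sensational, clickbait-style phrasing. Thresholds and field
# names are illustrative assumptions, not the researchers' actual code.

from dataclasses import dataclass

SENSATIONAL_MARKERS = ("wow", "look", "today only")  # examples cited in the study


@dataclass
class Post:
    text: str
    shares: int  # engagement proxy; the study's real metric may differ


def is_junk(post: Post, share_threshold: int = 1000) -> bool:
    """Flag a post as 'junk' if it is widely shared or uses hype phrasing."""
    text = post.text.lower()
    sensational = any(marker in text for marker in SENSATIONAL_MARKERS)
    viral = post.shares >= share_threshold
    return sensational or viral


def build_pretraining_mix(posts: list[Post], junk_fraction: float) -> list[str]:
    """Assemble a pretraining corpus with a chosen fraction of junk text."""
    junk = [p.text for p in posts if is_junk(p)]
    clean = [p.text for p in posts if not is_junk(p)]
    n_junk = int(len(junk) * junk_fraction)
    return junk[:n_junk] + clean


if __name__ == "__main__":
    sample = [
        Post("WOW, look at this deal, today only!", shares=50_000),
        Post("A long-form explainer on transformer attention.", shares=120),
    ]
    print([is_junk(p) for p in sample])  # [True, False]
```

In the study's framing, varying the junk fraction in the pretraining mix is what lets the researchers measure how much low-quality data degrades downstream performance.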

The models fed junk text experienced a kind of AI brain rot, with cognitive decline that included weaker reasoning and degraded memory. They also became less ethically aligned and, by two measures, more psychopathic.

The results mirror research on human subjects, which shows that low-quality online content has a detrimental effect on people’s cognitive abilities. The phenomenon is so pervasive that “brain rot” was named Oxford’s word of the year in 2024.

The results are important for the AI industry, Hong says, because model-builders might assume that social media posts are a good source of training data for their models. “Training on viral or attention-grabbing content may look like scaling up data,” he says. “But it can quietly corrode reasoning, ethics, and long-context attention.”

The fact that LLMs suffer from brain rot seems especially worrying at a time when AI itself is increasingly generating social media content, much of it seemingly optimized for engagement. The researchers also found that models impaired by low-quality content could not easily be improved through retraining.

The findings also suggest that AI systems built around social platforms, such as Grok, might suffer from quality control issues if user-generated posts are used in training without an eye toward the integrity of the posts.

“As more AI-generated slop spreads across social media, it contaminates the very data future models will learn from,” Hong says. “Our findings show that once this kind of ‘brain rot’ sets in, later clean training can’t fully undo it.”


This is an edition of Will Knight’s AI Lab newsletter. Read previous newsletters here.


