Can Hiding Angry Posts Actually Make People Nicer?


According to Fast Company, a team of computer scientists has published research showing that reducing the visibility of polarizing content in social media feeds can measurably lower partisan animosity. The researchers, including the article’s author, developed an open-source web tool that allowed them to rerank the feeds of consenting participants on X, formerly Twitter, in real time, a capability previously reserved for the platforms themselves. They used a large language model to identify posts likely to polarize people, such as those advocating political violence. These posts weren’t removed but were simply ranked lower, requiring users to scroll further to see them, which reduced the number of such posts users encountered. The study, published in Science, found this reranking directly affected participants’ emotions and their views of people with opposing political views. The full methodology and findings are detailed in the Science paper.
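To make the mechanism concrete, here is a minimal sketch of that downranking idea. Everything in it is a stand-in: `polarization_score` is a hypothetical scorer playing the role of the study's LLM classifier, and the threshold is arbitrary. The key property it demonstrates is that flagged posts are never deleted, only stably moved below the rest of the feed.

```python
def rerank_feed(posts, polarization_score, threshold=0.8):
    """Stable partition of a feed: posts scoring above the threshold
    are not removed, only pushed below everything else, so a user
    must scroll further to reach them."""
    calm = [p for p in posts if polarization_score(p) <= threshold]
    heated = [p for p in posts if polarization_score(p) > threshold]
    return calm + heated

# Toy keyword scorer standing in for the LLM used in the study
# (hypothetical; the real system queried a large language model).
def toy_score(post):
    return 0.9 if "violence" in post.lower() else 0.1

feed = [
    "Cute cat compilation",
    "They deserve political violence",
    "Local election results",
]
print(rerank_feed(feed, toy_score))
# → ['Cute cat compilation', 'Local election results', 'They deserve political violence']
```

Because the partition is stable, the relative order within each group is preserved; nothing disappears from the feed, it just costs more scrolling to reach.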


The Algorithmic Lever

Here’s the thing we all intuitively know but rarely see proven: what you see changes how you feel. This research is a rare, controlled experiment that pulls back the curtain. Platforms have always claimed their algorithms are neutral tools for “engagement,” but this shows the ranking mechanism itself is a powerful dial for social temperature. And the team had to build their own tool to even run the test—a fascinating workaround in a walled-garden world. They basically created a proof of concept for a public-interest algorithm. So it’s not about censorship; it’s about friction. Making someone scroll through three cat videos before they find the post calling for their opponent’s imprisonment is a psychological speed bump. Our brains, it seems, appreciate the extra moment to cool down.

A Business Model Conundrum

Now, the billion-dollar question: why don’t platforms do this? We all know the answer. Anger and outrage are fantastic for keeping eyes glued to screens. That “engagement” drives the ad revenue machine. This research presents a direct conflict between a calmer public square and the platforms’ core business incentives. I think the most damning part is that the study didn’t need to remove a single post to see a positive effect. It highlights that the current “all-or-nothing” debate around content moderation—either it’s up or it’s banned—misses a massive middle ground. Platforms have the technical ability to implement this tomorrow. But will they? Probably not without serious regulatory pressure or a fundamental shift in how they measure success. It’s easier to sell ads against a heated argument than a calm discussion.

The Implementation Problem

But let’s play devil’s advocate. Who gets to define “polarizing”? The researchers used clear, extreme examples like calls for violence. That’s the easy, low-hanging fruit. In practice, drawing that line gets messy fast. Is a post criticizing a political policy “polarizing”? What about passionate advocacy? You’d need incredibly nuanced, and likely controversial, guidelines. And relying on an LLM to make those judgments introduces its own set of biases and errors. It’s a classic tech solution: elegantly simple in a controlled study, fiendishly complex in the messy real world. Still, the core finding is powerful. It suggests that even small, non-perfect adjustments to what we see first could have an outsized effect on our collective blood pressure. Isn’t that worth trying to figure out?
