But platforms can train their systems to recognize this “borderline content” and make engagement look like the graph below:
In this scenario, the more inflammatory a post is, the less distribution it gets. Posts describing police in hateful terms might stay up but be shown to fewer people. According to Zuckerberg, this strategy of reducing the “distribution and virality” of harmful content is the most effective way of dealing with it.
He’s right: The strategy works! Facebook has recently touted reductions in the amount of hate speech and graphic content that users see on its platform. How did it make these improvements? Not by changing its rules on hate speech. Not by hiring more human content moderators. Not by refining artificial-intelligence tools that seek out rule-breaking content to take down. The progress was “mainly due to changes we made to reduce problematic content in News Feed.” The company used dials, not on-off switches.
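To make the dial metaphor concrete, here is a minimal, purely hypothetical sketch in Python. None of the names, scores, or numbers come from Facebook; it simply assumes a classifier that rates how close a post sits to the policy line and then discounts the post's reach accordingly, demoting it rather than deleting it.

```python
# Illustrative sketch of "dials, not on-off switches."
# Assumes a hypothetical classifier score in [0, 1] for how close a post
# is to the policy line; these names and values are invented for illustration.

def distribution_multiplier(borderline_score: float, floor: float = 0.05) -> float:
    """Scale a post's reach down as it approaches the policy line.

    A score of 0.0 (clearly benign) keeps full distribution; a score of 1.0
    (right at the line) is demoted to `floor` -- reduced, not removed.
    """
    borderline_score = max(0.0, min(1.0, borderline_score))
    return 1.0 - (1.0 - floor) * borderline_score

def rank_feed(posts):
    """Re-rank a feed by engagement discounted by the demotion dial."""
    return sorted(
        posts,
        key=lambda p: p["engagement_score"] * distribution_multiplier(p["borderline_score"]),
        reverse=True,
    )

if __name__ == "__main__":
    feed = [
        {"id": "news_story",   "engagement_score": 0.6, "borderline_score": 0.10},
        {"id": "hateful_rant", "engagement_score": 0.9, "borderline_score": 0.95},
    ]
    for post in rank_feed(feed):
        # The inflammatory post sinks despite its higher raw engagement.
        print(post["id"])
```

The point of the sketch is only that demotion is a continuous function: the dial can be set anywhere between full distribution and near-invisibility, whereas a takedown is binary.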
Facebook’s critics accuse it of spreading hateful and violent content because such material increases users’ time on the site and therefore the company’s profits. That trope is probably overblown and too simplistic. Advertisers don’t like their ads running next to divisive content, and in the long term, users won’t keep coming back to a platform that makes them feel disgusted. Still, leaks from employees have detailed internal projects to tamp down divisive or harmful content that were killed for business reasons. And the top 10 most-engaged-with posts after the election contained many more mainstream press accounts than before the break-glass measures took effect. The list looked so different from the usual fare of right-wing viral content that Facebook released a blog post trying to explain it. (The company conceded that the temporary measures had played a role, but suggested they were not the primary driver of the change.)
Without any independent access to internal data, outsiders can’t know how much of a difference Facebook’s break-glass measures make, or where its dials usually sit. But Facebook has a reason for announcing these steps. (To the company’s credit, it at least announced measures in anticipation of the Chauvin verdict; other platforms seemed to just keep their heads down.) What the company hasn’t explained is why its anti-toxicity measures need to be exceptional at all. If there’s a reason that turning down the dials on likely hate speech and incitement to violence all the time is a bad idea, I don’t see it.
Facebook’s old internal motto “Move fast and break things” has become an albatross around its neck, symbolizing how the company prioritized growth and scale, leaving chaos behind it. But when confronted with inflammatory content, the platform should move faster and break more glass.
The Chauvin trial may be a unique event, but racial tension and violence are clearly not. Content on social media leading to offline harm is not confined to Minneapolis or the U.S.; it is a global problem. Toxic online content is not an aberration, but a permanent feature of the internet. Platforms shouldn’t wait until the house is burning down to do something about it.