
Facebook plans to change its algorithm to demote “borderline content” that promotes misinformation and hate speech on the platform

  • 3 min read
  • 23 Nov 2018


Last week, Facebook CEO Mark Zuckerberg published a “blueprint for content governance and enforcement” that describes updating the news feed algorithm to demote “borderline” (click-bait) content in order to curb the spread of misinformation, hate speech, and bullying on the platform.

Facebook has been embroiled in a series of controversies over user data and privacy on its platform. Just last week, the New York Times published a report on how Facebook follows a strategy of ‘delaying, denying, and deflecting’ blame for the controversies surrounding it. Given all these controversies, it goes without saying that Facebook is trying to bring their number down.

“One of the biggest issues social networks face is that, when left unchecked, people will engage disproportionately with more sensationalist and provocative content. At scale, it can undermine the quality of public discourse and lead to polarization. In our case, it can also degrade the quality of our services,” said Zuckerberg.

Here’s what the natural engagement pattern on Facebook looks like:

[Image: the natural engagement pattern, with engagement rising as content approaches the policy line]

According to Facebook’s research, no matter where the line is drawn for what kind of content is allowed, as a piece of content gets closer to that line people engage with it more on average, even though they report not liking the content.

Facebook calls this an “incentive problem” and has decided to penalize borderline content so that it gets less distribution and engagement. With the penalty applied, the engagement pattern is adjusted to look like this:

[Image: the adjusted engagement pattern, with distribution declining as content approaches the policy line]

In the graph above, distribution declines as content gets more sensational, and people are disincentivized from creating provocative content that sits as close to the line as possible. “We train AI systems to detect borderline content so we can distribute that content less,” adds Zuckerberg. Facebook’s process for adjusting the curve is similar to its process for identifying harmful content, but is now focused on identifying borderline content instead.
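Facebook hasn’t shared implementation details, but the mechanism Zuckerberg describes can be pictured as a ranking-time penalty driven by a classifier score. The sketch below is purely illustrative: the borderline_score input and the cubic penalty curve are assumptions for the example, not Facebook’s actual system.

```python
def demoted_rank_score(base_score: float, borderline_score: float) -> float:
    """Reduce a post's distribution as it approaches the policy line.

    base_score       -- the score the feed-ranking system would normally assign
    borderline_score -- hypothetical classifier output in [0, 1], where 1.0
                        means the content sits right at the policy line
    """
    # The penalty grows steeply near the line, inverting the natural pattern
    # in which content closest to the line earns the most engagement.
    penalty = borderline_score ** 3
    return base_score * (1.0 - penalty)


# Two posts with the same baseline score: the borderline one is demoted hard.
print(demoted_rank_score(100.0, 0.1))  # benign post     -> 99.9
print(demoted_rank_score(100.0, 0.9))  # borderline post -> ~27.1
```

Any monotonically increasing penalty would produce the downward-sloping curve in the adjusted graph; the cubic here simply makes the demotion sharpest right at the line.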

Moreover, Facebook’s research found that this natural pattern of borderline content getting more engagement applies not just to news but to all categories of content. For instance, photos close to the line on nudity, such as those with revealing clothing or sexually suggestive positions, had more engagement on average before the distribution curve was adjusted to discourage them. Facebook finds this issue especially important to address because, although social networks generally expose people to more diverse views, some groups and pages can still “fuel polarization.” Facebook has therefore decided to apply these distribution changes not just to feed ranking but to all of its recommendation systems that suggest things users should join.

An alternative to reducing distribution is moving the line that defines what kind of content is acceptable. However, Facebook believes this would not address the underlying incentive problem, which is the bigger issue at hand: since the engagement pattern exists no matter where the line is drawn, what needs to change is the incentive, not simply which content gets removed.
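To see why, consider a toy model (an illustration for this article, not Facebook’s) in which engagement rises with proximity to the line wherever that line is placed. Shifting the threshold only relocates the engagement peak; it does not remove the incentive to creep up to it.

```python
def natural_engagement(sensationalism: float, line: float) -> float:
    """Toy model: engagement peaks as content nears the policy line,
    wherever that line happens to be drawn."""
    return max(0.0, 1.0 - abs(line - sensationalism) / line)


# Moving the line from 0.6 to 0.8 just moves the engagement peak with it.
for line in (0.6, 0.8):
    peak = max((natural_engagement(s / 100, line), s / 100) for s in range(101))
    print(f"line={line}: engagement peaks at sensationalism={peak[1]}")
```

Changing the incentive, by contrast, means flattening the peak itself, which is what the distribution penalty sketched above does.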

“By fixing this incentive problem in our services, we believe it'll create a virtuous cycle: by reducing sensationalism of all forms, we'll create a healthier, less polarized discourse where more people feel safe participating,” said Zuckerberg.


Facebook’s outgoing Head of communications and policy takes blame for hiring PR firm ‘Definers’ and reveals more

Facebook AI researchers investigate how AI agents can develop their own conceptual shared language

Facebook shares update on last week’s takedowns of accounts involved in “inauthentic behavior”
