Facebook expands definition of terrorist organizations to limit extremism

Jason Henry/The New York Times
The Facebook campus in Menlo Park, Calif., April 9, 2018. Facebook on Sept. 17, 2019, announced a series of changes to limit hate speech and extremism on the social network, expanding its definition of terrorist organizations and planning to deploy artificial intelligence to better spot and block live videos of shooters.
By Davey Alba
The New York Times Company

Facebook on Tuesday announced a series of changes to limit hate speech and extremism on the social network, expanding its definition of terrorist organizations and planning to deploy artificial intelligence to better spot and block live videos of shooters.

The company is also expanding a program that redirects users searching for extremism to resources intended to help them leave hate groups behind.

The announcement came the day before a hearing on Capitol Hill on how Facebook, Google and Twitter handle violent content. Lawmakers are expected to ask executives how they are handling posts from extremists.

Facebook, the world’s largest social network, has been under intense pressure to limit the spread of hate messages, pictures and videos on its site. It has also faced harsh criticism for not detecting and removing the live video of an Australian man who killed 51 people in Christchurch, New Zealand.

In at least three mass shootings this year, including the one in Christchurch, the violent plans were announced in advance on 8chan, an online message board. Federal lawmakers questioned the owner of 8chan this month.

In its announcement post, Facebook said the Christchurch tragedy “strongly” influenced its updates. And the company said it had recently developed an industry plan with Microsoft, Twitter, Google and Amazon to address how technology is used to spread terrorist content.

Facebook has long touted its ability to catch terrorism-related content on its platform. In the last two years, the company said, it has detected and deleted 99% of extremist posts, about 26 million pieces of content, before users reported them.

But Facebook said it had mostly focused on identifying organizations such as separatist, Islamic militant and white supremacist groups. The company said it would now consider all people and organizations that proclaim or engage in violence leading to real-world harm.

Since March, Facebook has also been redirecting users who search for terms associated with white supremacy to resources like Life After Hate, an organization founded by former violent extremists that provides crisis intervention and outreach. In the wake of the Christchurch tragedy, Facebook is expanding that effort to Australia and Indonesia, where people will be redirected to the organizations EXIT Australia and ruangobrol.id.