Facebook removed more than 12 million pieces of terrorist content between April and September 2018, the company said in a blog post.
Facebook said terrorist content consists of posts that praise, represent or endorse ISIS, al-Qaeda and affiliated groups.
The takedowns are part of the company’s ongoing effort to rid its service of harmful and hateful content, including fake news, spam and propaganda.
“We measure how many pieces of content (such as posts, images, videos or comments) we took action on because they went against our standards for terrorist propaganda, specifically related to ISIS, al-Qaeda and their affiliates,” said Monika Bickert, Global Head of Policy Management, and Brian Fishman, Head of Counterterrorism Policy at Facebook.
The company took down 9.4 million pieces of terrorist content in the second quarter and another three million in the third. In the first quarter, it had removed just 1.9 million posts.
“Terrorists are always looking to circumvent our detection and we need to counter such attacks with improvements in technology, training, and process,” Facebook said in the blog post.
“These technologies improve and get better over time, but during their initial implementation such improvements may not function as quickly as they will at maturity.”
Most of the removed content was old. Facebook noted, however, that it also caught newly posted material: 2.2 million new terrorist posts in the second quarter and 2.3 million in the third, up from 1.2 million in the first quarter.
Facebook said it is focused on removing terrorist content before a wide audience can view it. The average time between a user reporting a terrorist post and Facebook removing it fell to 18 hours in the third quarter, from 22 hours in the second quarter and 43 hours in the first.
The company uses new machine learning technology to spot terrorist content, especially ISIS and al-Qaeda posts. The tool assigns each post a score indicating how likely it is to violate the company’s counterterrorism policies.
A team of reviewers then works through the flagged posts, with machine learning surfacing the highest-scoring ones so that reviewers handle the highest-priority content first.
While trained reviewers handle most removals, the technology can also act on its own: Facebook allows automatic removal when the system’s confidence is high enough that its decision is likely to be more accurate than a human reviewer’s.
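Facebook has not published any implementation details, but the triage workflow described above (score each post, prioritize the review queue, and auto-remove only at very high confidence) can be sketched in a few lines of Python. The sketch below is purely illustrative; the thresholds, names and sample scores are hypothetical placeholders, not Facebook’s actual system.

```python
import heapq
from dataclasses import dataclass, field

# Hypothetical thresholds; Facebook has not published its actual values.
AUTO_REMOVE_THRESHOLD = 0.99  # act automatically only at very high confidence
REVIEW_THRESHOLD = 0.50       # below this, take no action

@dataclass(order=True)
class ReviewItem:
    priority: float                      # negated score: highest scores pop first
    post_id: str = field(compare=False)  # excluded from ordering comparisons

def triage(post_id: str, score: float, review_queue: list) -> str:
    """Route one scored post: auto-remove, queue for human review, or ignore."""
    if score >= AUTO_REMOVE_THRESHOLD:
        return "removed automatically"
    if score >= REVIEW_THRESHOLD:
        # heapq is a min-heap, so negate the score to pop the highest first
        heapq.heappush(review_queue, ReviewItem(-score, post_id))
        return "queued for human review"
    return "no action"

queue: list = []
for pid, score in [("p1", 0.995), ("p2", 0.70), ("p3", 0.30), ("p4", 0.90)]:
    print(pid, "->", triage(pid, score, queue))

# Reviewers pull the highest-scoring posts first.
while queue:
    item = heapq.heappop(queue)
    print("review next:", item.post_id, "(score:", -item.priority, ")")
```

Negating the score lets Python’s min-heap behave as a max-heap, so reviewers always see the likeliest violations first, matching the prioritization Facebook describes.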
Facebook has stepped up its efforts after the European Union called on it, along with Twitter and Google, to remove terrorist content within an hour of being notified of its presence.
“While several platforms have been removing more illegal content than ever before – showing that self-regulation can work – we still need to react faster against terrorist propaganda and other illegal content which is a serious threat to our citizens’ security, safety and fundamental rights,” Andrus Ansip, the European Commission’s vice president for the digital single market, said in a statement in March this year.