
Credit: https://www.reuters.com/article/us-facebook-lawsuit/facebook-not-protecting-content-moderators-from-mental-trauma-lawsuit-idUSKCN1M423Q
Facebook Groups is reportedly testing Keyword Alerts, a new feature that will notify admins and moderators whenever a specific word, phrase, or name is used. If the feature eventually sees the light of day, it will serve a good purpose for admins and moderators of large groups.
This piece of information was tweeted by reliable reverse engineer Jane Manchun Wong on Monday. According to a screenshot posted by Wong, Facebook will alert group admins and moderators as soon as a member's post contains one of the keywords they have pre-added. Facebook, however, won't inform members when admins get alerted.
Though the screenshot did not reveal exactly how many keywords a group admin can add, the keywords are entered as a comma-separated list; a minimal sketch of how such matching could work follows the embedded tweet below.
Facebook Groups is testing "Keyword Alerts" for admins and moderators. They will receive an alert when a member uses a word, a phrase or a name.
This will be super useful for those who moderate large groups.
But is the feature ready for ᎳοᏒĐՏ lᎥᏦê ᎢʜɩႽ? pic.twitter.com/qKI9olC3XM
— Jane Manchun Wong (@wongmjane) December 3, 2018
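Facebook has not published how the matching actually works, but the screenshot implies a simple comma-separated keyword list checked against new posts. The Python sketch below is a minimal illustration under that assumption; the function names and the NFKC-normalization step are my own additions, not anything confirmed by Wong's screenshot. As her tweet hints, even normalization would only catch some obfuscations (fullwidth or "mathematical" stylized letters); cross-script lookalikes such as the Cherokee characters in her example would need something like Unicode's confusables table on top.

```python
import unicodedata

def parse_keywords(raw: str) -> list[str]:
    """Split an admin's comma-separated keyword list, as the screenshot suggests."""
    return [k.strip().lower() for k in raw.split(",") if k.strip()]

def normalize(text: str) -> str:
    """NFKC folds some stylized characters (e.g. fullwidth or 'mathematical'
    letters) back to plain ASCII. True cross-script lookalikes, like the
    Cherokee letters in Wong's tweet, would need a confusables mapping too."""
    return unicodedata.normalize("NFKC", text).lower()

def matching_keywords(post: str, keywords: list[str]) -> list[str]:
    """Return the pre-added keywords that appear in a member's post."""
    text = normalize(post)
    return [k for k in keywords if k in text]

# Hypothetical admin list and member post:
keywords = parse_keywords("spam, buy now, crypto giveaway")
print(matching_keywords("Huge 𝗰𝗿𝘆𝗽𝘁𝗼 𝗴𝗶𝘃𝗲𝗮𝘄𝗮𝘆 today!", keywords))  # ['crypto giveaway']
```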
In other related Facebook news, the social media behemoth is reportedly working on another feature aimed at curbing trolls and unwanted comments. Per Android Police, you may soon be able to block certain comments that contain specified words, phrases, or emoji from showing up on your timeline.
Citing Jane Wong, Android Police writes that the feature, which is still in development, would let you choose the words, phrases, and emoji you don't want to see in comments on your timeline. Those who post such comments will still see them on their own timelines, but you, and any other user filtering the same terms, won't. Hiding the comments only on the filtering user's timeline is a smart move, as it gives the people posting them no signal to work around.
While a lot still needs to be done to deal with trolls, this is a step in the right direction. The feature, according to Wong, is still in its developmental stage, and more changes and testing will be required before it is finally launched.
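The implementation details are not public, but the described behavior amounts to a per-viewer visibility check at render time: the author always sees their own comment, while a viewer whose filter list matches never does. The sketch below illustrates that logic under those assumptions; the function and parameter names are hypothetical.

```python
def comment_visible(comment_text: str, author_id: int,
                    viewer_id: int, viewer_filters: set[str]) -> bool:
    """Per-viewer filtering as Android Police describes it: the comment's
    author always sees their own comment and gets no hint it was hidden,
    while a viewer whose filter list matches never sees it."""
    if viewer_id == author_id:
        return True  # poster remains unaware the comment is hidden for others
    text = comment_text.lower()
    return not any(term in text for term in viewer_filters)

# Hypothetical usage: viewer 7 filters one word and one emoji.
filters = {"giveaway", "🙄"}
print(comment_visible("Free giveaway!!!", author_id=3, viewer_id=7, viewer_filters=filters))  # False
print(comment_visible("Free giveaway!!!", author_id=3, viewer_id=3, viewer_filters=filters))  # True
```

Keeping the check on the viewer's side rather than deleting the comment is what denies trolls the feedback loop they would otherwise use to probe and evade the filter.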
A couple of weeks ago, Facebook announced that it had removed more than 12 million pieces of terrorist content between April and September 2018. In a blog post last month, the social networking giant defined terrorist content as posts that praise, represent, or endorse terrorist groups such as ISIS, al-Qaeda, and their affiliates.
“We measure how many pieces of content (such as posts, images, videos or comments) we took action on because they went against our standards for terrorist propaganda, specifically related to ISIS, al-Qaeda and their affiliates,” said Monika Bickert, Global Head of Policy Management, and Brian Fishman, Head of Counterterrorism Policy at Facebook.
Facebook took down 1.9 million pieces of such content in the first quarter, 9.4 million in the second, and 3 million in the third; since April through September spans the second and third quarters, those two figures account for the more-than-12-million total.
“Terrorists are always looking to circumvent our detection and we need to counter such attacks with improvements in technology, training, and process,” Facebook said in the blog post.