Twitter expands rules against hateful conduct

Credit: https://variety.com/2018/digital/news/twitter-jack-dorsey-declines-compensation-2017-1202750870/

Twitter is always active when it comes to fighting trolls and hateful conduct on its platform. The microblogging giant is one of the most proactive companies in this area, and regardless of whether you think it has done enough, Twitter has been unrelenting in its fight against hateful conduct. To that end, the company has now expanded its rules against hateful conduct.

Rules, according to Twitter, continue to evolve, which is why the company regularly makes changes to keep its house in order. The rules exist to guide people and help keep everyone on the platform safe, which perhaps explains why they must be dynamic: they must reflect current trends rather than remain static.

The expansion covers what Twitter refers to as “dehumanizing language,” which the company claims reflects the “realities of the world we operate within.” The company said the latest change comes as a result of months of conversations and feedback from the public, external experts, and its internal teams. The new expanded rules, according to the microblogging company, will now include language that dehumanizes others on the basis of their religious beliefs.

Tweets such as the one in the screenshot below will now be removed once other users report them. Tweets in this category that were sent before the update will also be deleted by Twitter, but will not attract any sanction such as account suspension.

Credit: https://blog.twitter.com/official/en_us/topics/company/2019/hatefulconductupdate.html?utm_content=buffer16232&utm_medium=social&utm_source=twitter&utm_campaign=buffer

In considering the feedback and discussions it had with outside experts, Twitter said certain factors came into play including the following:

  1. How do we protect conversations people have within marginalized groups, including those using reclaimed terminology?
  2. How do we ensure that our range of enforcement actions take context fully into account, reflect the severity of violations, and are necessary and proportionate?
  3. How can – or should – we factor in considerations as to whether a given protected group has been historically marginalized and/or is currently being targeted into our evaluation of severity of harm?

A few weeks ago, Twitter released a simplified version of its rules to make them easier for everyone to understand. The company cut the document from 2,500 words to just 600, with each of its rules now fitting into 280 characters or fewer.

The rules have now been categorized into safety, privacy and authenticity—for simplicity. In addition, more details have now been added around other policies including election integrity, platform manipulation and spam.

Recently, the social media giant also added a new tool that expands users' ability to report misleading tweets, also known as fake news. The tool is arriving ahead of the forthcoming Lok Sabha elections in India and the EU elections. That said, Twitter expects to extend this latest addition to other places where elections will be held in the future, in order to reduce external or undue influence.



Author: Ola Ric

Ola Ric is a professional tech writer. He has written and provided tons of published articles for professionals and private individuals. He is also a social commentator and analyst, with relevant experience in the use of social media services.
