The U.S. Congress has called on representatives from Facebook, YouTube and Twitter to testify about extremist content on their social media platforms.
Tomorrow, January 17, representatives of the internet giants will appear before the Senate Commerce Committee for a hearing titled “Terrorism and Social Media: #IsBigTechDoingEnough.”
Committee Chairman Sen. John Thune said the hearing will focus on how social media platforms, particularly Facebook, YouTube and Twitter, are handling extremist propaganda, and on what they are doing to prevent the spread of those posts.
Recode reports that the Senate will also examine how the tech giants handle fake news, hate speech, racism, and other abusive content.
The three representatives who will testify before the committee:
- Monika Bickert, head of global policy management at Facebook
- Carlos Monje, director of public policy and philanthropy at Twitter
- Juniper Downs, head of global public policy and government relations at YouTube
The Lead Up to the Hearing
Governments and political organizations have long questioned how social networks deal with extremist content. That scrutiny has prompted many changes to the companies’ community guidelines over the past two years.
While the networks have been complying with these requests, the January 17 hearing will review whether the changes go far enough.
The U.S. Senate is not alone in seeking to curb extremist content on the internet. The EU is also pressuring social media firms over illegal content, calling on the networks to speed up the detection and removal of hate speech.
In 2016, the three social networks and Microsoft signed the EU Code of Conduct on hate speech and agreed to consolidate their databases of extremist content, with the goal of removing such posts more quickly across all platforms.
A Work in Progress
Last year, Facebook shared how it tackles extremist content using a combination of human reviewers and machine learning. Its AI finds duplicates of removed videos to stop other groups from re-sharing the content, while another algorithm crawls text posts for extremist keywords. Facebook also plans to grow its review staff to 20,000 this year, and CEO Mark Zuckerberg has committed to stopping abusive posts in 2018.
YouTube has made several changes since big brands began pulling their ads after finding them running alongside extremist videos and hate speech. In August last year, the company said its updated AI algorithms removed 75 percent of those videos before a single user could flag them. Videos and users that violate community guidelines now face penalties.
Twitter had shut down 230,000 accounts for extremist content as of August 2016, and removed another 377,000 accounts over the following six months. More recently, it began stripping blue verification badges after critics discovered that a known white supremacist had one.
Representatives from these social media giants rarely testify on Capitol Hill, though they did appear last year to answer questions about Russian interference in the last U.S. presidential election.
The recent hearings on hate speech and extremist content by different governments are a welcome development, and legislation is catching up with social media and other internet platforms. Some lawmakers have already submitted proposals to regulate U.S. political ads, and Germany now has a law in force requiring the removal of hate speech.