Controls on hate speech - SHINE

Reuters
Social media companies Facebook, Twitter and Google’s YouTube have accelerated removals of online hate speech in the face of a potential European Union crackdown.

The EU has gone as far as to threaten social media companies with new legislation unless they increase efforts to fight the proliferation of extremist content and hate speech on their platforms.

Microsoft, Twitter, Facebook and YouTube signed a code of conduct with the EU in May 2016, committing to review most complaints within 24 hours. Instagram and Google+ will also sign up to the code, the European Commission said.

The companies reviewed complaints within a day in 81 percent of cases during a six-week monitoring period toward the end of last year, up from 51 percent in May 2017, when the Commission last examined compliance with the code of conduct. On average, they removed 70 percent of the content flagged to them, up from 59.2 percent in May last year.

Facebook reviewed complaints in less than 24 hours in 89.3 percent of cases, YouTube in 62.7 percent and Twitter in 80.2 percent. Of the hate speech flagged to the companies, almost half was found on Facebook, the figures show, while 24 percent was on YouTube and 26 percent on Twitter.

The most common ground for hatred identified by the commission was ethnic origin, followed by anti-Muslim hatred and xenophobia, including expressions of hatred against migrants and refugees.

Pressure from several European governments has prompted social media companies to step up efforts to tackle extremist online content, including through the use of artificial intelligence. YouTube said it was training machine learning models to flag hateful content at scale.

The Commission is likely to issue a recommendation at the end of February on how companies should take down extremist content related to militant groups, an EU official said.
