Now YouTube to combat extremist content on platform
Author: AZIndia News Desk
New York, June 19 (AZINS) Three days after Facebook outlined Artificial Intelligence (AI)-powered measures to combat terrorism, YouTube has introduced four steps to confront extremist activity.
In an op-ed in the Financial Times, Kent Walker, Senior Vice-President and General Counsel of Google, wrote that YouTube has been working with governments and law enforcement agencies to identify and remove extremist content, and has invested in systems that help with that task.
He acknowledged that more has to be done across the industry, and quickly.
The first of the four steps is expanding the use of the company's automated systems to better identify terror-related videos, using machine learning to "train new content classifiers to help us more quickly identify and remove such content".
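For illustration only, a content classifier of this kind is typically trained on labelled examples and then used to score new uploads so that borderline material can be queued for human review. The minimal sketch below is a hypothetical example using scikit-learn on invented text snippets; it is not Google's actual system, which operates at far larger scale and draws on video and behavioural signals as well as text.

# Hypothetical, minimal sketch of a text-based content classifier.
# The snippets and labels are invented purely for demonstration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented training set: transcript/metadata snippets labelled
# 1 (flag for review) or 0 (leave alone).
texts = [
    "join our cause and take up arms against the unbelievers",
    "tutorial on baking sourdough bread at home",
    "propaganda video glorifying a recent terror attack",
    "highlights from last night's football match",
]
labels = [1, 0, 1, 0]

# TF-IDF features fed into a simple logistic-regression classifier.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(texts, labels)

# Score a new snippet; anything above a chosen threshold would be
# routed to human reviewers rather than removed automatically.
score = classifier.predict_proba(["video calling for violent attacks"])[0][1]
print(f"probability of needing review: {score:.2f}")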
The second step is expanding its pool of "Trusted Flagger" users -- a group of experts with special privileges to review flagged content that violates the site's community guidelines.
According to Walker, the company will almost double the size of the programme "by adding 50 expert NGOs to the 63 organisations who are already part of the programme".
This expansion would allow the company to draw on specialist groups to target specific types of content, such as self-harm and terrorism.
The third step is placing videos that contain inflammatory religious or supremacist content -- but do not violate community standards -- behind a warning.
As a fourth step, the company will do more on counter-radicalisation by building on its "Creators for Change" programme, redirecting users targeted by extremist groups such as Islamic State (IS) towards counter-extremist content.
Facebook said that it uses Artificial Intelligence to remove terror-related content.
The social media giant is currently focusing its efforts on terrorist content related to Islamic State (IS), Al-Qaeda and their affiliates.