About our Mission

The objective of the Global Internet Forum to Counter Terrorism is to substantially disrupt terrorists' ability to promote terrorism, disseminate violent extremist propaganda, and exploit or glorify real-world acts of violence using our platforms. We pursue this objective by:

  • Employing and leveraging technology;
  • Sharing knowledge, information and best practices; and
  • Conducting and funding research.

We are an industry-led initiative, but we also know that to achieve our goals, we need to collaborate with a wide range of NGOs, academic experts, and governments. We are working in close partnership with the UN Counter-Terrorism Executive Directorate (UN CTED), the ICT4Peace Foundation, and the Tech Against Terrorism initiative to share knowledge and expertise, and we hope to engage a broad group of stakeholders.

The GIFCT pledges to preserve and respect the fundamental human rights that terrorism seeks to undermine, including free expression, the role of journalism, and user privacy. To this end, we will also involve human rights experts and other civil society stakeholders in the GIFCT's work.

Since June 2017, when Facebook, Microsoft, Twitter, and YouTube announced the formation of the Global Internet Forum to Counter Terrorism to curb the spread of terrorist content online, members have taken a number of individual actions to meet this objective.

The companies have found that investing heavily in proprietary, cutting-edge technological solutions, such as photo and video matching and text-based machine-learning classification, is delivering results. Members have used these tools to remove content from the terrorist groups that pose the biggest threat globally.
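For illustration only, the sketch below shows the general idea behind text-based classification of this kind. It is not any member company's proprietary system: the library choices (generic scikit-learn components), the placeholder training texts and labels, and the review threshold are all assumptions made for demonstration.

# A minimal, illustrative sketch of text-based machine-learning classification.
# The member companies' actual systems are proprietary; this uses generic
# scikit-learn components and placeholder data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder training data: texts previously labelled by human reviewers.
# Label 1 = judged to violate the terrorism policy, 0 = does not (hypothetical).
train_texts = [
    "post promoting a designated terrorist organization",   # placeholder
    "news report analysing a recent attack",                # placeholder
    "recruitment message urging support for violence",      # placeholder
    "academic discussion of counter-terrorism policy",      # placeholder
]
train_labels = [1, 0, 1, 0]

# TF-IDF features feeding a logistic-regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

# Newly posted text can then be scored; high-scoring items would be routed
# to human reviewers rather than removed automatically.
score = model.predict_proba(["example of a newly posted text"])[0, 1]
if score > 0.9:  # threshold chosen arbitrarily for this sketch
    print("flag for priority human review")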

Results achieved from advances in machine learning so far include:

  • YouTube: 98% of the videos YouTube removes for violent extremism are flagged by machine-learning algorithms. Machine learning is helping YouTube's human reviewers remove nearly five times as many videos as they did previously.
  • Twitter: Between July 2017 and December 2017, a total of 274,460 Twitter accounts were permanently suspended for violations related to promotion of terrorism. Of those suspensions, 93% consisted of accounts flagged by internal, proprietary spam-fighting tools, while 74% of those accounts were suspended before their first tweet.
  • Facebook: 99% of the ISIS and Al Qaeda-related terror content removed from Facebook is detected before anyone in its community has flagged it, and in some cases before it goes live on the site. Once Facebook is aware of a piece of terror content, it removes 83% of subsequently uploaded copies within one hour of upload.

These actions are in addition to the shared industry hash database, which now contains more than 50,000 hashes. Through it, companies share "digital fingerprints" of known terrorist content, remove matching content, and in some cases block terrorist content before it is even posted.
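The sketch below illustrates the basic idea of checking an upload against a shared set of fingerprints. It is a deliberate simplification: the plain SHA-256 digest used here matches only byte-identical files, whereas the photo and video hashes shared through the database are designed to survive re-encoding and editing, and the database contents and function names shown are hypothetical.

# Illustrative sketch of matching uploads against a shared hash database
# of "digital fingerprints". Simplified: real systems use perceptual hashes.
import hashlib

# Hypothetical shared database: hex digests contributed by member companies.
shared_hash_database = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(content: bytes) -> str:
    """Return the digest used as this content's 'digital fingerprint'."""
    return hashlib.sha256(content).hexdigest()

def check_upload(content: bytes) -> bool:
    """Return True if the upload matches known terrorist content and should
    be blocked or removed before it reaches other users."""
    return fingerprint(content) in shared_hash_database

# Example: the digest above is the SHA-256 of the bytes b"test", so this
# upload would be flagged at posting time.
print(check_upload(b"test"))  # True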

We recognize that our work is far from done, but we are confident that we are heading in the right direction.