Fighting Terror with Tech: The Evolution of the Global Internet Forum to Counter Terrorism
The following is an Insight from GIFCT’s Executive Director, Naureen Chowdhury Fink, and GIFCT’s Membership and Programs Senior Director, Dr. Erin Saltman, capturing key highlights from their co-authored chapter “Fighting Terror with Tech: The Evolution of the Global Internet Forum to Counter Terrorism” in the latest edited volume from the Trust and Safety Foundation, Trust, Safety, and the Internet We Share: Multistakeholder Insights (Taylor & Francis, 2025), expected to be published in late 2025. Fill out this form to receive a copy of the volume once it’s published.
Their chapter presents phases of GIFCT’s evolution, from a multilateral tech initiative to an independent non-profit organization managing tools, incident response frameworks, and knowledge-exchange for an increasingly diverse range of platforms.
***
Introduction
This chapter serves as an illustrative case study from the trust and safety sector, highlighting the evolution of a unique industry-led structure aimed at preventing terrorists and violent extremists from exploiting digital platforms. The Global Internet Forum to Counter Terrorism (GIFCT), first convened in 2017, was born out of the need to create a space for cross-platform collaboration to counter the terrorist threats posed online by ISIS, largely through shared tooling. Efforts had to evolve further in reaction to the 2019 terrorist attacks in Christchurch, New Zealand, where the perpetrator livestreamed his attacks on two mosques, prompting the need for shared incident response protocols and an independent structure.
Now GIFCT faces a third evolutionary turn with the acceleration of emerging technologies and atypical threat actors, requiring nuanced knowledge exchange and signal sharing. These evolutionary phases have seen efforts grow and have changed how GIFCT is governed and functions: from a cross-platform initiative led and managed by four tech companies to an independent non-profit supporting cross-platform and cross-sector efforts with over 35 tech company members.
As this chapter highlights, knowledge exchange between industry, governments, law enforcement, and civil society is critical. As GIFCT enters a distinct third phase in its organizational journey, it will need to adapt to adversarial shifts and evolving industry tools. Still, the case study GIFCT offers may be useful for others developing cross-platform and cross-sector solutions to tackle online harms. Multistakeholderism is not just a “nice-to-have,” but a necessary component of understanding the threat, developing solutions, and deploying best practices at scale. To ensure effectiveness, GIFCT weaves multistakeholder approaches throughout its workstreams. Regional workshops bring together industry, international organizations, officials, and civil society experts to exchange knowledge. Webinars and virtual events allow for broad-based awareness, and the Global Network on Extremism and Technology (GNET), GIFCT’s academic arm, connects industry to global experts and perspectives and offers global insights for the research and practitioner communities.
In the mid-2010s, the nature of terrorist content online began to reflect broader cross-platform internet usage, with an unprecedented volume and scale of content developed to recruit, support, glorify, finance, and perpetrate terrorist acts. Thus, the need to bring together not only large but also diverse tech industry members has been reflected in the expansion of GIFCT. This chapter examines the context in which this took place and the key issues driving these developments, providing a retrospective on the emergence and evolution of GIFCT.
The Why Now and What For: GIFCT Foundations
Between 2014 and 2017, it became apparent to the world that a new form of Islamist extremist terrorism was on the rise in the so-called Islamic State, or ISIS. Members and sympathizers of ISIS were unabashedly open about embracing social media and wider online technologies to radicalize, recruit, and distribute propaganda (Saltman & Winter, 2014). Previous international terrorist organizations, such as Al-Qaeda, were shrouded in an ethos of secrecy and limited the distribution of propaganda to trusted networks online and offline. ISIS, by comparison, openly used Twitter, Facebook, Tumblr, Reddit, and other public forums, successfully recruiting men and women of diverse socioeconomic backgrounds, with an estimated 30,000 fighters from more than 85 countries as of December 2015, and developing targeted strategic communications to persuade recruits and incite violence (Benmelech & Klor, 2016).
As international concerns about terrorism heightened against the backdrop of a brutal civil war in Syria and lone-actor attacks across Europe, the European Commission launched the EU Internet Forum (EUIF) in December 2015 to address the misuse of the internet for terrorist purposes. The EUIF’s goals were to reduce the accessibility of terrorist content online and increase the volume of effective alternative narratives online (European Union Internet Forum, 2025). Social media companies were brought to the table by the EUIF, the United Nations Security Council’s Counter-Terrorism Committee Executive Directorate (UN CTED), and individual countries, which asked companies to discuss their policies and practices for effectively identifying and removing terrorist content. In the hallways and coffee breaks of these meetings, company representatives would talk to each other and compare notes on major online terrorist trends.
For tech platforms, it was apparent early on that no single platform alone could track the online lifecycle of terrorist recruitment, propaganda dissemination, and operationalization. For example, early outreach from an ISIS recruiter might start on a larger, open social media platform before the target was quickly moved to a smaller, less regulated platform where the conversation became more specific. This became even more apparent as larger, better-resourced platforms began to crack down on terrorist content at scale, leading to a jihadist information ecosystem that was (and is) a large and complex network connecting a vast array of platforms (Fisher et al., 2019). Adversarial shifts, coded language, and the exploitation of newer, smaller platforms were apparent. It was clear that companies needed a more structured approach to working with one another on countering terrorism and violent extremism.
In 2016, pressure was mounting on leading social media platforms through the EUIF, UN CTED, the G7, and Five Eyes governments to better scale efforts to counter terrorist content online as waves of violent ISIS propaganda became commonplace. Since the attacks of September 11, 2001, there had been widespread efforts through forums like the United Nations to foster international legal and policy frameworks to address terrorist threats in partnership with civil society and practitioners. However, the private sector had been underrepresented in these efforts. In parallel, there was no existing peer space where tech companies could meet to discuss trends, share knowledge, and explore cross-platform tooling and solutions. Thus, in December 2016, Facebook (now Meta), Microsoft, YouTube, and Twitter (now X) launched the infrastructure for a hash-sharing database to focus on known terrorist content (Verney, 2016), and, by June 2017, the Global Internet Forum to Counter Terrorism was established. GIFCT was thus formed as a voluntary consortium, a tech-led initiative to foster collaboration among tech companies, advance relevant research, and share knowledge (see: Meta Blog, 2019).
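For readers less familiar with hash sharing, the core idea is that member platforms exchange digital fingerprints (hashes) of known terrorist content rather than the content itself, so each platform can check new uploads against the shared signals. The sketch below is purely illustrative: it assumes a simple in-memory index keyed by cryptographic hashes, whereas the real database relies on perceptual hashes designed to survive re-encoding, and all function and variable names here are hypothetical.

```python
import hashlib

# Illustrative, in-memory stand-in for a shared hash database.
# Hypothetical names; real systems use perceptual (not cryptographic) hashes
# and never exchange the underlying media between platforms.
shared_hashes: dict[str, str] = {}  # hash -> label supplied by the contributing platform

def contribute_hash(media_bytes: bytes, label: str) -> str:
    """One member platform hashes known terrorist content and shares only the hash."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    shared_hashes[digest] = label
    return digest

def check_upload(media_bytes: bytes) -> str | None:
    """Another member platform checks a new upload against the shared hashes."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    return shared_hashes.get(digest)

if __name__ == "__main__":
    contribute_hash(b"<bytes of a known propaganda video>", "known terrorist content")
    # An identical re-upload on a second platform matches the shared hash.
    print(check_upload(b"<bytes of a known propaganda video>"))
```

The design point this illustrates is that only hashes and labels travel between companies, which lets platforms with very different policies and infrastructure cooperate without redistributing the harmful material itself.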
Within its first six months, GIFCT had hosted multistakeholder convenings in San Francisco, New York, Brussels, and Jakarta. These created important opportunities for policymakers, practitioners, and civil society organizations to better understand the approaches and solutions offered by tech platforms, and, vice versa, for the tech sector to deepen its understanding of terrorism and counterterrorism dynamics. By the end of 2017, GIFCT announced that it had added its first 40,000 hashes and seven new companies to the hash-sharing database, and had established an academic wing to connect industry to global experts who could publish on terrorist and violent extremist trends online (see: Meta Blog, 2019b). An increasing number of tech platforms were being drawn to the GIFCT initiative, diversifying the tech stack involved and speaking to industry needs on this topic. GIFCT was rapidly expanding beyond what a multilateral initiative managed by four companies could fully handle when a new form of terrorist threat hit platforms.
From Voluntary Consortium to Independent Non-Profit Organization
On March 15, 2019, a lone terrorist inspired by white supremacist ideology killed 51 worshippers at the Al Noor and Linwood mosques in Christchurch, New Zealand. The attacker livestreamed the murders on Facebook and imbued the video with a gamified aesthetic (Macklin, 2019). He also released a 74-page manifesto online to ideologically justify his targeting of the Muslim community. The virality of the attacker’s video and manifesto went above and beyond what most terrorism specialists had ever seen before. Facebook reported that it removed an estimated 1.5 million uploads of the attack video globally within the first 24 hours after the attack, blocking 1.2 million shares at the point of upload (Sonderby, 2019). YouTube reported that its systems saw videos of the attack being uploaded at a rate of one per second directly after the shooting (Dwoskin & Timberg, 2019). The Christchurch attacks represented a new form of viral online terrorist content, inspiring a new generation of white supremacist and accelerationist attackers.
This attack led GIFCT to activate its first “incident response” protocol, allowing for the labelling and hashing of perpetrator content associated with the attack. Previously, content within the hash-sharing database had to correlate with groups and individuals on the United Nations Security Council Consolidated List, sanctioned under Resolution 1267 for association with Al-Qaeda and ISIS, reflecting international consensus through the Security Council for over two decades (see: UN Security Council Consolidated List). The inclusion of hashed material from the Christchurch attacks represented a new development for GIFCT’s database, expanding it to content meeting certain behavioral, rather than list-based, criteria. The scale of the online spread of the attack material necessitated this rapid shift in approach. Within 48 hours of the attack, the hash-sharing database housed over 800 visually distinct hash variations of the attacker’s video and was able to share URLs and context on enforcement with GIFCT members (see: GIFCT Industry Cooperation).
The decision to transform GIFCT into an independent non-profit organization was closely shaped by the need to expand its scope to include global, cross-platform incident response work while welcoming a growing number of industry members. GIFCT also enhanced and solidified its membership criteria, ensuring companies met certain transparency, human rights, and policy frameworks before joining. As a diverse set of tech platforms looked to join GIFCT, an independent organization was best placed to work consistently with incoming companies, rather than having assessments carried out by peer tech companies (GIFCT Membership, 2024). As new technologies and tactics emerged, adversaries adapted to counterterrorism and online moderation efforts targeting terrorist and violent extremist content. This shift highlighted the increasing need to ensure a space for intra-industry and multistakeholder collaboration, and GIFCT continued to grow beyond the initial four members.
At the opening of the 2019 United Nations General Assembly session, then-Prime Minister Jacinda Ardern of New Zealand and President Emmanuel Macron of France held an event discussing GIFCT’s progress with the founding company leads, announcing that GIFCT would transform into an independent non-governmental organization (NGO) and underscoring its complementarity with the government-led Christchurch Call to Action. The Christchurch Call had been established as a government-led initiative in 2019 with the goal of eliminating terrorist and violent extremist content online, and it worked directly with GIFCT member companies to develop a 9-Point Action Plan that included both individual platform actions and collaborative actions (Microsoft Blog, 2019). All this took place against the background of an international push for collaboration to address a diversified terrorist threat that went beyond the Islamist extremist groups that had mobilized much of the global counterterrorism effort until then. The emphasis on the need to address extreme white supremacist and accelerationist violent online content therefore shaped the follow-up actions after the attack.
Furthermore, they announced that GIFCT would officially lead an industry-focused Incident Response Framework (IRF) to guide a collaborative response among GIFCT members to terrorist attacks, begin publishing an annual transparency report, publish a cross-platform Campaign Toolkit to help civil society organizations build online campaigns challenging hate-based extremist ideologies, and release algorithms for hashing image and video content to aid smaller companies joining the database (see: Next Steps for GIFCT). In this transition, the four founding member companies became the NGO’s Operating Board, and the Board representatives set up an Independent Advisory Committee (IAC) made up of government and non-governmental representatives to guide GIFCT and the Board on key issues. Thus, “GIFCT 2.0” emerged in 2020 as an independent 501(c)(3) non-profit organization.
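To give a sense of what releasing image-hashing algorithms to smaller companies involves, the sketch below shows a minimal perceptual “difference hash” in Python. It is an illustrative stand-in, not a reproduction of the algorithms GIFCT or its members actually released, and it assumes the Pillow imaging library is available. Similar images produce hashes with small Hamming distances, while visually distinct edits of an image or video produce new hashes that each need their own entry in a shared database, which is why the Christchurch video alone generated hundreds of distinct hash variations.

```python
from PIL import Image  # Pillow, assumed available for this illustration

def dhash(path: str, hash_size: int = 8) -> int:
    """Minimal perceptual 'difference hash' (illustrative only).
    Tolerant of resizing and re-encoding, unlike a cryptographic hash."""
    # Convert to grayscale and shrink to (hash_size + 1) x hash_size pixels.
    img = Image.open(path).convert("L").resize((hash_size + 1, hash_size))
    pixels = list(img.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            # Each bit records whether brightness increases left-to-right.
            bits = (bits << 1) | int(left > right)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits; small distances indicate visually similar images."""
    return bin(a ^ b).count("1")

# Hypothetical usage: flag an upload whose hash sits close to a shared hash.
# is_match = hamming_distance(dhash("upload.jpg"), known_hash) <= 10
```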
Visit the chapter’s SSRN page to continue reading.