Presenting a Human Rights-Based Approach to Preventing Terrorist and Violent Extremist Exploitation of the Internet
The Global Internet Forum to Counter Terrorism (GIFCT) today opens its annual Global Summit, convening its 17 member companies with partners and global stakeholders from government, civil society, and academia, and marking one year of operating as an independent entity. Over the last 12 months, inaugural Executive Director Nicholas Rasmussen and the team at GIFCT incorporated cutting-edge research and input from a range of experts and practitioners to set a strategy and direction for enhancing the collective capacity of dedicated technology companies to combat terrorist and violent extremist activity online.
Last week, we shared how we will work to bring our commitment to human rights to every aspect of our efforts, detailing the steps we’ve already taken and our plan to further pursue the recommendations in the recently published human rights impact assessment in the months and years to come. Starting today at the Global Summit, we present the progress we’ve achieved and what is next in our roadmap for classifying terrorist content online, convening global working groups on the challenges at the nexus of technology and terrorist and violent extremist activity, and fulfilling our mission with our commitments to human rights and transparency at the core of our work.
Transparency on our Progress
Transparency is one of the three organizational principles that guide GIFCT's work to prevent terrorist and violent extremist exploitation of digital platforms. We require our member companies to produce regular transparency reports and believe strongly in holding ourselves to the same standard. Today we are publishing the third annual GIFCT Transparency Report, detailing the latest updates against GIFCT's three strategic pillars: Prevent, Respond, and Learn.
- Prevent: We have grown our hash-sharing database to more than 320,000 visually distinct hashes and enabled our member companies to access information related to 49,000 URLs, all in an effort to prevent the spread of terrorist content online (a minimal sketch of how this kind of hash matching works appears after this list).
- Respond: Since April 2019, we have assessed more than 150 incidents to determine whether an offline violent attack had an online dimension, activating our Content Incident Protocol twice as a result.
- Learn: Since July 2020, we have launched the Member Resource Guide, convened more than 600 participants from tech, government, civil society, and academia through monthly e-learning sessions with our partner Tech Against Terrorism, and continued to fund cutting-edge research with our academic partner, the Global Network on Extremism and Technology, producing 198 insights from 245 authors based in 24 countries around the world.
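To make the Prevent pillar concrete, the sketch below illustrates how hash-based matching against a shared database works in general. It is not GIFCT's implementation: member companies use industry perceptual-hashing tools (such as the open-source PDQ algorithm for images), the shared database itself is not public, and the simple average hash, distance threshold, and KNOWN_HASHES set here are hypothetical stand-ins for illustration.

```python
# Minimal, illustrative sketch of matching an upload against a shared set of
# hashes of known terrorist content. Not GIFCT's actual system or schema.
from PIL import Image  # pip install Pillow

def average_hash(path: str) -> int:
    """Compute a 64-bit average hash: one bit per pixel of an 8x8 grayscale thumbnail."""
    pixels = list(Image.open(path).convert("L").resize((8, 8)).getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Count the bits where two hashes differ."""
    return bin(a ^ b).count("1")

# Hypothetical set of hashes contributed by member companies; the real
# database is shared privately among members.
KNOWN_HASHES = {0x81C3E7FF07E3C180, 0xF0F0F0F00F0F0F0F}

def matches_known_content(path: str, max_distance: int = 8) -> bool:
    """Flag an upload whose hash is within max_distance bits of any shared hash."""
    h = average_hash(path)
    return any(hamming(h, known) <= max_distance for known in KNOWN_HASHES)

# Example use at upload time (hypothetical integration point):
# if matches_known_content("upload.jpg"):
#     route_for_human_review()
```

In practice, a visually similar image produces a nearby hash, so a small Hamming-distance threshold lets a platform catch minor edits and re-uploads without storing or sharing the underlying content itself.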
In the coming year, we will continue to find ways to enhance the transparency of our work, setting a standard for what transparent efforts to combat terrorist and violent extremist activity can look like. As we share more, we also seek greater input from our global stakeholders on where we can further improve and enhance our impact.
Expanding the Taxonomy for Terrorist Content Online
At the beginning of 2021, we launched an effort to engage a wide range of experts on how GIFCT could best expand the taxonomy that determines what content qualifies for our hash-sharing database, beyond content tied to the United Nations Security Council's consolidated sanctions list and live-streamed footage from mass violent attacks that have activated our Content Incident Protocol. Our work must complement and mutually reinforce human rights and fundamental freedoms, starting with the content we recognize as terrorist content online. Expanding the taxonomy through this process means we can empower our members to combat a broader range of terrorist activity, address biases that exist in the larger counterterrorism field, and remain vigilant about impacts on human rights, from potential bias to over-censorship.
Today, we are publishing the first series of results of this effort, including the initial three ways we plan to expand our taxonomy, our findings on the feasibility and impact of expanding the hash-sharing database for our member companies, and recommendations on how to develop a common taxonomy for terrorist content from the academics, practitioners, and civil society leaders selected during our Request for Proposals announced in February.
Over the next few months, we will implement an initial expansion of the hash-sharing database’s taxonomy with three new hashed categories, prioritized based on feedback from global experts, our Independent Advisory Committee, and our member companies about how the threat of this content manifests online:
- Manifestos from terrorist and violent extremist attackers in PDF form;
- Terrorist publications that use specific branding and logos for the organization in PDF form; and
- URLs identified by Tech Against Terrorism as hosting specific terrorist content that is often shared and amplified on other platforms (a sketch of how these categories might be represented in a database entry follows this list).
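To illustrate how the expanded taxonomy might look in practice, the sketch below models a hash-sharing entry tagged with one of the categories above. The label names, the HashEntry structure, and its fields are hypothetical; GIFCT has not published a schema, only the content categories described in this post.

```python
# Hypothetical representation of taxonomy labels on hash-sharing entries.
from dataclasses import dataclass
from enum import Enum, auto

class ContentLabel(Enum):
    # Existing bases for inclusion in the hash-sharing database
    UN_SANCTIONED_ENTITY_CONTENT = auto()       # tied to the UN Security Council consolidated sanctions list
    CIP_ATTACKER_FOOTAGE = auto()               # footage from a Content Incident Protocol activation
    # Newly added categories described above
    ATTACKER_MANIFESTO_PDF = auto()             # attacker manifestos in PDF form
    BRANDED_TERRORIST_PUBLICATION_PDF = auto()  # publications carrying an organization's branding or logos
    TAT_IDENTIFIED_URL = auto()                 # URLs identified by Tech Against Terrorism

@dataclass(frozen=True)
class HashEntry:
    digest: str           # perceptual hash of the file, or a digest of the URL
    label: ContentLabel   # taxonomy category that justified inclusion
    contributor: str      # member or partner that contributed the entry

# Example entry (all values illustrative):
entry = HashEntry(
    digest="f0f0f0f00f0f0f0f",
    label=ContentLabel.ATTACKER_MANIFESTO_PDF,
    contributor="example-member",
)
```

Labeling each entry with the category that justified its inclusion is what allows member companies to apply their own policies to different kinds of content and supports transparency reporting on what the database contains.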
We will continue working to expand the reach and impact of the hash-sharing database in order to respond to terrorist activity online across the ideological spectrum while also working to bring greater transparency to this specific area of our work.
Bringing Stakeholders Together to Counter Terrorism and Violent Extremism Online
One year ago, we created six working groups for our technology company members and stakeholders from civil society, government, and academia to collaborate on specific challenges at the nexus of technology and terrorist and violent extremist activity. In that time we convened over 200 experts and practitioners from around the world, holding more than 55 meetings with representatives from 10 tech companies, 13 governments and international governing bodies, 26 civil society organizations, and 41 research and academic institutions. Together, they explored new technical solutions, refined crisis response protocols, studied legal frameworks addressing terrorist and violent extremist content, pursued innovations in positive interventions, and explored how to enhance transparency.
After a year of collaboration, we have published each working group's output to date, providing authoritative information on the current dynamics of each issue area and what is needed next to identify and deploy solutions. In the year ahead, working groups will focus their efforts on strengthening understanding of algorithmic and AI impacts, enhancing transparency and access to information, refining our collective capacity for responding to violent incidents, and growing the effectiveness of counterspeech initiatives that help disengage those vulnerable to violent ideologies. Lastly, it is worth noting that all of this working group activity and output was achieved in a virtual setting because of the COVID-19 pandemic, and we look forward to the time when working group participants can engage in person to supplement the work already under way virtually.
This Week and the Road Ahead
After one year operating as an independent organization, establishing our team of counterterrorism and technology experts, and growing our membership to 17 technology companies, we applaud all of our partners and stakeholders for the progress achieved but believe our work is just getting started. GIFCT's mission as an independent organization was set at the United Nations General Assembly in 2019, and we have established clear priorities and concrete objectives to guide our work in the months and years ahead. There is much that remains to be done.
We look forward to welcoming everyone to our Global Summit later today and are grateful to the experts and practitioners joining us to discuss important issues at the nexus of terrorism and violent extremism, technology, human rights, and public policy.