
Event Reports

Artificial Intelligence: Threats, Opportunities, and Policy Frameworks for Countering VNSAs

Policy Brief

The Policy Brief "Artificial Intelligence: Threats, Opportunities, and Policy Frameworks for Countering VNSAs" was developed by the Global Internet Forum to Counter Terrorism (GIFCT) in partnership with the Konrad-Adenauer-Stiftung (KAS). It examines how violent non-state actors, including terrorist and violent extremist groups, exploit artificial intelligence (AI) online, and how policy can best be framed to ensure safety by design.


Emerging technologies, and generative Artificial Intelligence (GenAI) in particular, are increasingly deployed for everyday use by individuals around the world. While these tools hold enormous potential, their emergence has heightened concerns about manipulation and exploitation by violent non-state actors (VNSAs), in particular terrorist and violent extremist groups.


In March 2024, the UN General Assembly adopted a landmark resolution on Artificial Intelligence, stressing the importance of harnessing the technology for sustainable development and the collective good while ensuring that no government can use AI to undermine peace and human rights. The eighth review of the Global Counter-Terrorism Strategy likewise called on Member States to counter the use of new technologies, including artificial intelligence, for terrorist purposes.


On 29 April 2025, the Global Internet Forum to Counter Terrorism (GIFCT), the Konrad-Adenauer-Stiftung (KAS) New York Office, and the United Nations Office of Counter-Terrorism (UNOCT) convened representatives of the UN, governments, the private sector, and civil society to examine the intersections of AI with violent non-state actors, including terrorists and violent extremists, and to explore policy responses.


The roundtable launched the Policy Brief developed by GIFCT in partnership with KAS, Artificial Intelligence: Threats, Opportunities, and Policy Frameworks for Countering VNSAs, which examines key concerns and offers recommendations for policymakers, practitioners, and industry on countering violent extremist content online.


The event surfaced several key concerns and policy options at the intersection of VNSAs and artificial intelligence, summarized below:

  • The misuse of AI by violent non-state actors (VNSAs) is no longer a theoretical risk; it has already materialized. As a result, debates about the design and regulation of AI tools are taking place while these technologies are already in widespread use. VNSAs exploit generative AI for propaganda, disinformation, recruitment, and radicalization. AI's cross-platform nature enables these actors to coordinate and operationalize attacks more effectively, while its widespread accessibility significantly lowers the barrier to entry.
  • AI tools also present opportunities for mitigation: the same features exploited by malicious actors can be used to strengthen security in response, for example through AI-powered moderation tools that detect logos or textual data.
  • Industry-wide standards must be set: the industry should work toward consistent definitions of VNSAs and standardized taxonomies, share best practices, and foster cross-sector partnerships.
  • The existing global regulatory environment for AI is highly fragmented. The UN should assert its norm-setting role by promoting international standards and capacities through a dual effort: strengthening the technological capabilities of Member States and law enforcement officials, while ensuring that AI tools and frameworks remain firmly grounded in human rights.
  • Member States must keep pace with rapid technological advancement: while some have taken steps forward, many still lack basic digital infrastructure. Cross-sector collaboration and stronger private-sector partnerships with governments are essential for addressing AI-related challenges.
