In this rapidly evolving digital era, South-East Asia is witnessing significant growth in digital technology, with social media engagement exceeding global averages. This surge is most pronounced among the region’s dynamic and digitally savvy young population. On the one hand, this trend means greater access to information and opportunities for development. On the other, it means an increased risk that potentially vulnerable youth will be exposed to illegal and harmful content online, including violent extremist narratives.
Indeed, violent extremist and terrorist groups have been increasingly exploiting the digital domain for propaganda, incitement, recruitment, and other operations. Recent research also indicates that these groups are experimenting with generative artificial intelligence (AI) services to enhance their propaganda efforts, for instance by exploring media spawning, automated multilingual translation, fully synthetic propaganda, variant recycling, personalized propaganda, and ways to subvert content moderation in order to produce, adapt, and disseminate content.
To respond effectively to the evolving risk of violent extremist groups using AI and other advanced technologies, criminal justice, law enforcement, and other relevant authorities need to ensure due consideration and protection of human rights and fundamental freedoms. Such efforts are in line with the recommendations of the Eighth Review of the United Nations Global Counter-Terrorism Strategy (A/RES/77/298), adopted by the General Assembly in 2023, which called upon Member States to consider additional measures to counter the use of new and emerging technologies, including but not limited to AI, for terrorist purposes. Furthermore, the Delhi Declaration on countering the use of new and emerging technologies for terrorist purposes, adopted by the United Nations Security Council Counter-Terrorism Committee in October 2022, emphasized the need to strike a balance between fostering innovation and preventing and countering the misuse of these technologies for terrorist purposes as their applications become more widespread.
In this context, UNODC partnered with the Government of Singapore to co-organize a regional workshop on responding to the misuse and exploitation of online spaces by violent extremist and terrorist groups, enhancing the knowledge and capacities of over 30 criminal justice, law enforcement, and other relevant government officials from Singapore, Indonesia, Malaysia, and the Philippines. Held in Singapore on 16-18 April 2024, the event provided a platform for participants, researchers, and private sector representatives to discuss challenges and to share experiences and good practices related to the ethical use of AI and advanced technologies in preventing and countering terrorist exploitation of online spaces. Discussions delved into the dual nature of AI, recognizing it as both a challenge and a tool in preventing online radicalization. Participants explored how these technologies affect efforts to prevent and counter violent extremism, including their role in detecting disinformation and moderating online content while safeguarding human rights and privacy. The workshop further emphasized the importance of having in place national policy and regulatory frameworks that comply with international human rights standards.

Recognizing the significance of multi-stakeholder collaboration and public-private partnerships in these efforts, the workshop was followed by a study visit to Meta Singapore, which enabled participants to learn more about Meta’s approach to addressing the threat of terrorist exploitation of online spaces and its cooperation with law enforcement agencies in the region.
The activity was funded by the Government of Japan and the Government of Singapore.