MEMRI Executive Director's Op-Ed In 'Newsweek': 'Terrorists Love New Technologies. What Will They Do With Artificial Intelligence (AI)?'

March 23, 2023

On March 14, 2023, Newsweek published an op-ed titled "Terrorists Love New Technologies. What Will They Do With AI?" by MEMRI Executive Director Steven Stalinsky. Below is the op-ed.

Today, it's not a question of whether terrorists will use Artificial Intelligence (AI), but of how and when. Jihadis, for their part, have always been early adopters of emerging technologies: Al-Qaeda leader Osama bin Laden used email to communicate his plans for the 9/11 attacks. American-born Al-Qaeda ideologue Anwar Al-Awlaki used YouTube for outreach, recruiting a generation of followers in the West. Indeed, by 2010, senior Al-Qaeda commanders were conducting highly selective recruitment of "specialist cadres with technology skills" – and, of course, Islamic State's use of Twitter to build its caliphate is well known.

Throughout their 20 years of Internet and social media use, terrorists have always been on the lookout for new ways to maximize their online activity for planning attacks. Artificial intelligence could be their next game changer. A United Nations Office of Counter-Terrorism (UNOCT) report warned in 2021: "As soon as AI becomes more widespread, the barriers to entry will be lowered by reducing the skills and technical expertise needed to employ it... AI will become an instrument in the toolbox of terrorism."

Over the past decade, research from my organization's Cyber & Terrorism Lab has documented how terrorists use technology, including cryptocurrency for fundraising and encryption for communications. It has also shown them using elements of AI for hacking and weapons systems, including drones and self-driving car bombs—a focus of their experimentation for years—as well as bots for outreach, recruitment, and planning attacks.

The dangers inherent in AI, including to national security, have dominated both media headlines and the discussion of its possible implications for the future. Governments and NGOs have long warned that the day was coming when AI would become a reality. That day has now arrived.

Not surprisingly, all the recent media coverage of the dark side of AI is inspiring terrorist groups. On Dec. 6, a frequent user of an ISIS-operated Rocket.Chat server, who has a large following, posted that he had used the free ChatGPT AI software for advice on supporting the caliphate.

Noting that the software is "smarter than most activists," he shared the full ChatGPT reply to his questions, which included detailed steps for identifying and mobilizing a "core group of supporters," developing a "political program and ideology," gaining support from "the Muslim community," taking "control of territory," establishing "institutions and government structures," and promoting and defending the new caliphate.

Two weeks later, on Dec. 21, other ISIS supporters expressed interest in another AI platform, Perplexity Ask, for creating jihad-promoting content. One popular user shared his findings in a large discussion in which users agreed that AI could be used to assist the global jihad movement.

Another discussion about AI was held by these same groups in mid-January, on a different ISIS-operated Rocket.Chat server; one user stressed that ISIS supporters must recognize the "importance of understanding technology." Learning to code was essential for fighting on the new cyber front, he said, adding that his fellow fighters must become more sophisticated in cybersecurity to tackle the enemy's military infrastructure.

The internal discussions by terrorist groups and their followers on how AI could serve global jihad prompted more questions about whether, and how, AI could provide relevant knowledge. A sampling of inquiries showed that ChatGPT seems designed to refrain from discussing the how-to of carrying out violent attacks, making weapons, or conducting terrorist outreach. Even indirect requests, such as for a story in which a fictional character "creates a bomb" or "joins an Islamic rebel group," yielded no information.

However, Perplexity Ask provided detailed instructions when asked how to "behead someone," helpfully warning against "attempt[ing] this without proper training and safety precautions." It also gave instructions for making ricin. Both ChatGPT, which can converse in Arabic, and Perplexity Ask, which can understand some queries in Arabic but cannot respond in that language, answered requests such as "best books by [terrorist author]" and "summarize [book by terrorist author]."

It should be noted that jihadi terrorists aren't alone in testing AI and planning how best to use it; domestic terrorist groups and their neo-Nazi followers are as well.

While ChatGPT and Perplexity Ask can write your high school AP English exam and perform an ever-increasing number of tasks, as media report daily, they are currently of limited use to terrorist groups. But it won't be that way for long. AI is developing quickly—what is new today will be obsolete tomorrow—and urgent questions for counterterrorism officials include both whether they are aware of these early terrorist discussions of AI and how they are strategizing to tackle this threat before something materializes on the ground.

*Steven Stalinsky is Executive Director of MEMRI (Middle East Media Research Institute), which actively works with Congress and tech companies to fight cyber jihad through its Jihad and Terrorism Threat Monitor.

 
