It has been known for some time that extremists are using artificial intelligence (AI) for nefarious purposes and that the technology can facilitate extremism – despite assurances to the contrary from many of those developing it. In March 2016, Microsoft released the AI chatbot Tay but shut it down within 24 hours after users manipulated it into tweeting pro-Hitler messages. Microsoft blamed "a coordinated attack by a subset of people exploit[ing] a vulnerability in Tay," adding: "Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack."
In online discussions, extremists around the world – some with programming experience – are looking at AI as a tool for spreading their message. They are also exploring the use of AI-generated voices to bypass voiceprint verification and hack into bank accounts, as well as using AI to write articles about guerrilla warfare. One leading extremist used ChatGPT to find out where "American critical infrastructure" is most vulnerable to attack; the answer was "the electrical grid." Another called for engineers with AI experience to contact him and discussed potential uses of ChatGPT. Yet another shared his exchange with ChatGPT about a hypothetical scenario involving "a 50 MT nuclear warhead in a city of 20,000,000" and commented on the possible use of AI in policing the U.S.-Mexico border.
Such online discussions are increasing by the day. The use of AI by terrorist groups and entities is a growing national security concern; NATO warns that AI is one of the "emerging and disruptive technologies" that "represent new threats from state and non-state actors, both militarily and to civilian society."
Since January 2023, there has been a major increase in online chatter about AI by leading extremists on platforms they favor. Many of the individuals listed in this report are tech-savvy, and have created their own software and platforms. Their use of AI should be taken seriously. As MEMRI Executive Director Dr. Steven Stalinsky wrote in Newsweek in March 2023, "The dangers inherent in AI, including to national security, have dominated both media headlines and the discussion on its possible implications for the future. Governments and NGOs have warned that the day was coming when AI would be a reality. That day has now arrived."
Recent events underline some of the ways that AI could be used by neo-Nazis and white supremacists. AI-enabled promotion of Nazism can be seen on the Hello History chat app, released in early January 2023 and downloaded over 10,000 times on Google Play alone. The app allows users to chat with simulated versions of 20,000 historical figures, including Adolf Hitler and Nazi propagandist Joseph Goebbels.
This report reviews online discussion of AI by neo-Nazis and white supremacists.
YOU MUST BE SUBSCRIBED TO THE MEMRI DOMESTIC TERRORISM THREAT MONITOR (DTTM) TO READ THE FULL REPORT. GOVERNMENT AND MEDIA CAN REQUEST A COPY BY WRITING TO DTTMSUBS@MEMRI.ORG WITH THE REPORT TITLE IN THE SUBJECT LINE. PLEASE INCLUDE FULL ORGANIZATIONAL DETAILS AND AN OFFICIAL EMAIL ADDRESS IN YOUR REQUEST. NOTE: WE ARE ABLE TO PROVIDE A COPY ONLY TO MEMBERS OF GOVERNMENT, LAW ENFORCEMENT, MEDIA, AND ACADEMIA, AND TO SUBSCRIBERS; IF YOU DO NOT MEET THESE CRITERIA PLEASE DO NOT REQUEST.