Survivors and bereaved families from 19 different terrorist attacks are urgently calling for measures to prevent extremists from using artificial intelligence to plan and carry out attacks.
They have jointly written an open letter through Survivors Against Terror, pressing the government to implement laws addressing the dangers associated with AI chatbots.
Recent research has shown that AI technology could help extremists overcome technical obstacles to creating more lethal weapons, potentially giving individuals planning lone-wolf attacks access to explosives or toxins.
In one reported test, an AI chatbot provided guidance on producing neurotoxins and on developing an improvised nuclear device.
The letter highlights the shift from online radicalization to actual attack planning facilitated by digital technologies, as revealed by research from the Center for Countering Digital Hate (CCDH). The letter emphasizes the role of AI chatbots in providing practical advice to individuals planning violent acts, potentially reinforcing dangerous ideologies.
The signatories urge the incoming parliament to act swiftly on the radicalization and extremism risks posed by AI chatbots, ahead of Wednesday's King's Speech, in which new legislation is expected to be introduced requiring AI chatbot developers to mitigate risks related to terrorism and extremism.
They also advocate for transparency, independent oversight of AI systems, and accountability for technology firms in managing these risks. The letter stresses that for the survivors, these risks are not hypothetical but based on lived experiences of the transition from online harm to real-world violence.
Brendan Cox, a co-founder of Survivors Against Terror, underlines the growing concern that attack planning, not just radicalization, is occurring online with the aid of AI, raising anxieties among survivors and families of terror victims.
The 70 individuals who signed the letter include prominent figures impacted by terror incidents, such as Sheelagh Alexander, Figen Murray, Kevin Tipple, and Zoe Thompson.
Earlier incidents involving AI misuse, such as Jesse Van Rootselaar's mass shooting in Tumbler Ridge and the Florida State University shooting, have underlined the urgency of regulating AI chatbots to prevent the dissemination of illegal content related to terrorism.
The Crime and Policing Act, which recently came into force, aims to combat illegal AI content by extending regulation to AI chatbots, including Grok, so that users are shielded from unlawful material relating to terrorism, racism, and child abuse.
A government spokesperson emphasized the seriousness of the issue, acknowledging the survivors’ contributions in addressing the risks associated with AI chatbots and terrorism. Law enforcement agencies are actively monitoring high-risk AI tools to prevent potential terrorist activities.