Queen assassin case exposes ‘fundamental flaws’ in AI

The case of a would-be crossbow assassin has highlighted the “fundamental flaws” in artificial intelligence (AI), according to Imran Ahmed, founder and CEO of the Centre for Countering Digital Hate US/UK, who has called on the fast-moving AI industry to take more responsibility for preventing harmful outcomes.

He made the comments after extremist Jaswant Singh Chail, 21, was encouraged by his AI companion, Sarai, to breach the grounds of Windsor Castle and plot to kill Queen Elizabeth II. Chail, from Southampton, was sentenced to nine years in jail for treason and making a threat to kill the Queen. He had formed a delusional belief that Sarai was an “angel” and that they would be together in the afterlife.

Ahmed argued that AI technology has been built too quickly, without appropriate safeguards and without careful curation of the data that goes into it. He also raised concerns about biases in AI algorithms that could discriminate against ethnic minorities, disabled people, and the LGBTQ+ community.

Ahmed called for a comprehensive regulatory framework built on safety “by design”, transparency, and accountability. He argued that tech companies should bear responsibility for the harm caused by their platforms, and that fines and accountability should be imposed when safety measures are not implemented.

Ahmed’s organization, the Centre for Countering Digital Hate, recently faced legal action from X (formerly known as Twitter) for publishing research on hate speech on the platform.