
AI: Voice cloning tech emerges in Sudan civil war

by Joey De Leon

A campaign using artificial intelligence (AI) to impersonate former Sudanese leader Omar al-Bashir has gained significant traction on TikTok. Since August, an anonymous account has been posting videos claiming to be “leaked recordings” of al-Bashir. However, the voice in the videos is fake. Al-Bashir, who is accused of organising war crimes and was overthrown by the military in 2019, has not been seen in public for a year and is believed to be seriously ill. The campaign adds confusion to a country already riven by civil war.

Experts warn that campaigns like this demonstrate how new tools, such as AI, can quickly and easily distribute fake content through social media platforms. Hany Farid, a digital forensics researcher at the University of California, Berkeley, expresses concern over the democratisation of access to sophisticated audio and video manipulation technology, allowing even the average person without technical expertise to create fake content.

The recordings of al-Bashir are posted on a TikTok channel called “The Voice of Sudan.” The posts consist of old clips from press conferences, news reports, and various “leaked recordings” attributed to al-Bashir. A team of Sudan experts at BBC Monitoring assessed the recordings and judged them unlikely to be recent, given al-Bashir’s reported illness. That alone, however, does not prove the voice in the videos is not his.

Further analysis revealed that the very first recording, posted in August 2023, matched a Facebook Live broadcast by a popular Sudanese political commentator known as Al Insirafi. Comparing the audio waveforms showed matching patterns of speech and silence, indicating that voice conversion software had been used to mimic al-Bashir’s voice over the commentator’s speech. At least four more recordings were found to be taken from the same blogger’s live broadcasts. The motivation behind the campaign is unclear, but it may aim to trick audiences into believing that al-Bashir has reemerged to play a role in the ongoing war in Sudan, or to lend legitimacy to a particular political viewpoint.
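The waveform comparison described above can be illustrated with a minimal sketch. The idea is that voice conversion changes the timbre of a voice but preserves the timing of speech and silence, so two clips derived from the same source show near-identical speech/silence patterns. The helpers below (`speech_silence_mask`, `mask_agreement`) and the synthetic test signals are hypothetical illustrations, not the BBC's actual method:

```python
import numpy as np

def speech_silence_mask(signal, frame_len=400, threshold_ratio=0.1):
    """Label each frame as speech (True) or silence (False) by RMS energy."""
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    rms = np.sqrt((frames ** 2).mean(axis=1))
    return rms > threshold_ratio * rms.max()

def mask_agreement(a, b):
    """Fraction of frames on which two speech/silence masks agree."""
    n = min(len(a), len(b))
    return float((a[:n] == b[:n]).mean())

# Synthetic demo: a "voice" made of tone bursts separated by silence,
# plus a pitch-shifted copy standing in for a voice-converted version.
t = np.linspace(0, 1, 16000, endpoint=False)
burst = (np.sin(2 * np.pi * 5 * t) > 0).astype(float)   # on/off timing pattern
original = burst * np.sin(2 * np.pi * 220 * t)
converted = burst * np.sin(2 * np.pi * 330 * t)          # new pitch, same timing
unrelated = np.roll(burst, 4000) * np.sin(2 * np.pi * 220 * t)  # shifted timing

m_orig = speech_silence_mask(original)
m_conv = speech_silence_mask(converted)
m_unrel = speech_silence_mask(unrelated)

print(mask_agreement(m_orig, m_conv))   # → 1.0 (identical timing pattern)
print(mask_agreement(m_orig, m_unrel))  # → 0.5 (timing does not line up)
```

The "converted" clip sounds different but keeps the same pauses, so its mask matches the original's exactly; a clip with shifted timing does not. Real forensic comparisons are far more sophisticated, but this is the intuition behind matching patterns of speech and silence.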

Impersonating al-Bashir on such a scale has regional significance and the potential to deceive audiences. This raises concerns among AI experts who fear that the proliferation of fake audio and video could lead to widespread disinformation, sparking unrest and disrupting elections. Mohamed Suliman, a researcher at Northeastern University’s Civic AI Lab, warns that these false recordings could even create an environment where people doubt the authenticity of real recordings.

Spotting audio-based disinformation can be challenging. Individuals should question the plausibility of recordings before sharing them and ensure they come from reputable sources. Verifying audio is difficult, however, especially when it circulates on messaging apps. The technology for detecting synthetic audio is still in its early stages, while the technology for mimicking voices is already quite advanced. TikTok banned the account impersonating al-Bashir after the BBC contacted the platform about it.
