British Politicians at Risk of Manipulative Audio Deepfakes Threatening Election Integrity

Audio deepfakes are cheaper to make than video ones and can be harder to debunk. There have already been several cases of deepfakes being used in attempts to interfere with elections around the world.

As AI deepfakes cause havoc in other elections, experts warn the UK's politicians should be prepared.

“Just tell me what you had for breakfast,” says Mike Narouei, of ControlAI, recording on his laptop. I spoke for around 15 seconds about my toast, coffee and journey to their offices.

Within seconds, I hear my own voice saying something entirely different.

In this case, I have written: “Deepfakes can be extremely realistic and have the ability to disrupt our politics and damage our trust in the democratic process.”

Image: Tamara Cohen's voice being turned into a deepfake

We have used free software; it hasn’t required any advanced technical skills, and the whole thing has taken next to no time at all.
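
To give a sense of how low the barrier is, below is a minimal sketch of the kind of voice cloning involved, assuming the free, open-source Coqui TTS library and its XTTS v2 model. This is not necessarily the software used in the demonstration above, and the file names are hypothetical.

# A minimal voice-cloning sketch, assuming the open-source Coqui TTS
# library and its XTTS v2 model; illustrative only, not necessarily
# the tool used in the demonstration above.
from TTS.api import TTS

# Load the multilingual XTTS v2 voice-cloning model (downloads on first use).
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# Clone a voice from roughly 15 seconds of reference audio and make it
# say arbitrary text. Both file names here are hypothetical.
tts.tts_to_file(
    text="Deepfakes can be extremely realistic and have the ability to "
         "disrupt our politics and damage our trust in the democratic process.",
    speaker_wav="reference_15_seconds.wav",  # short recording of the target voice
    language="en",
    file_path="cloned_output.wav",
)

A few lines like these, run on an ordinary laptop, are all that the process described above requires.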

This is an audio deepfake (video ones take more effort to produce). As well as being deployed by scammers of all kinds, there is deep concern about the technology's impact on elections in a year when some two billion people are going to the polls in the US, India and dozens of other countries, including the UK.

Sir Keir Starmer fell victim to one at last year’s Labour Party conference, purportedly of him swearing at staff. It was quickly dismissed as fake. The identity of those who made it has never been uncovered.

London Mayor Sadiq Khan was also targeted this year, when fake audio of him making inflammatory remarks about Remembrance Weekend and calling for pro-Palestine marches went viral at a tense time for communities. He said new laws were needed to stop such fakes.

Ciaran Martin, the former director of the UK’s National Cyber Security Centre, told Sky News that expensively made video fakes can be less effective and easier to debunk than audio.

“I’m particularly worried right now about audio because audio deepfakes are spectacularly easy to make, disturbingly easy,” he said. “And if they’re cleverly deployed, they can have an impact.”

The most damaging so far, in his view, was an audio deepfake of President Biden, sent to voters during the New Hampshire primary in January this year.

A “robocall” in the president’s voice told voters to stay home and “save” their votes for the presidential election in November. A political consultant later claimed responsibility and has been indicted and fined $6 million (£4.7 million).

Image: Ciaran Martin, former NCSC director

Mr Martin, now a professor at the Blavatnik School of Government at Oxford University, said: “It was a very credible imitation of his voice and anecdotal evidence suggests some people were tricked by that.

“Not least because it wasn’t an email they could forward to someone else to have a look at, or on TV where lots of people were watching. It was a call to their home, which they more or less had to judge alone.

“Targeted audio, in particular, is probably the biggest threat right now, and there’s no blanket solution; there’s no button there that you can just press and make this problem go away if you are prepared to pay for it or pass the right laws.

“What we need, and the US did this very well in 2020, is a series of responsible and well-informed eyes and ears throughout different parts of the electoral system to limit and mitigate the damage.”

He says there is a risk of hyping up the threat of deepfakes when they have not yet caused mass electoral damage.

A Russian-made fake broadcast of Ukrainian TV, he said, featuring a Ukrainian official taking responsibility for a terrorist attack in Moscow, was simply “not believed,” despite being expensively produced.

The UK government has passed the National Security Act, which creates new offences of foreign interference in the country's democratic processes.

The Online Safety Act requires tech companies to take such content down, and meetings are being regularly held with social media companies during the pre-election period.

Democracy campaigners are concerned that deepfakes could be used not just by hostile foreign actors or lone individuals who want to disrupt the process, but by political parties themselves.

Polly Curtis is chief executive of the thinktank Demos, which has called on the parties to agree to a set of guidelines for the use of AI.

Image: Polly Curtis, chief executive of Demos

She warned: "Foreign actors could create content, political parties might do the same, and ordinary people on the street could also stir the pot of truth and falsehoods."

"We want the parties to come together and agree how they are going to use these tools during the election: not to generate deceptive AI content, not to amplify it, and to label it when these tools are used.

"This technology is so new, and there are so many elections going on, that the potential for a big misinformation event in a campaign is significant. It could affect people's trust in the information they have."

Deepfakes have already been deployed in major elections.

In Slovakia's election last year, a fake audio recording surfaced hours before the polls closed, purportedly of one of the candidates claiming to have rigged the election. He was heavily defeated, and his pro-Russian opponent went on to win.

The UK government established a Joint Election Security Preparations Unit earlier this year. Whitehall officials now work with police and security agencies to respond to threats as they emerge.

A UK government spokesperson said: "Security is paramount. We are well prepared to ensure the integrity of the election, with robust systems in place to protect against potential interference.

"The National Security Act contains tools to tackle deepfake election threats, and social media platforms should also take proactive action against state-sponsored content seeking to interfere with elections."

Shadow security minister Dan Jarvis said: "Our democracy is strong. We cannot and will not allow any attempt to undermine the integrity of our elections.

"The rapid pace of AI technology means malign actors may use deepfakes and disinformation to undermine trust in our democratic system, so the government must always be one step ahead.

“Labour will be relentless in countering these threats.”

Read more:

Exploring the Tory National Service Scheme

The Potential Benefits of Artificial Intelligence for Democracy

List of MPs Retiring at General Election

Labour pledges to improve sick pay benefits
