Three reasons to take note of the sinister rise of deepfakes
Technology | Article | February 14, 2022
Deepfake videos that manipulate images of politicians and celebrities are often used for amusement, but they pose far greater risks. Here’s what you need to know.
A TikTok video of actor Tom Cruise on the golf course is fairly unremarkable, given the often outlandish world of social media. But it was followed by clips of the actor doing magic tricks, playing guitar and giving cookery demonstrations. This wasn’t a promotional stunt for a new movie.
In fact, Tom Cruise wasn’t involved at all. The videos were all deepfakes – performances by an impressionist that were altered by an artificial intelligence (AI) program trained on footage of the actor. The computer took the impersonator’s performance – with a voice and mannerisms similar to Cruise’s – and turned it into something totally convincing.
Similar videos have brought to life fake versions of former U.S. President Barack Obama and Meta (formerly Facebook) boss Mark Zuckerberg. Deepfakes can also be used to generate blackmail material to falsely incriminate victims, to create fake news or to produce disturbing hoaxes – often involving celebrities. And there is the potential to use deepfakes for fraud or other criminal activities.
Although videos are the most high-profile deepfakes, the technology isn’t restricted to video. Audio, photos, text and any other digital material can be manipulated using AI. And it isn’t just about manipulating existing material – AI has been used to scan masses of faces and then generate lifelike images of new faces. In many cases the intent is to be humorous; but we should take the risks very seriously indeed.
“Deepfakes can be highly disruptive and erode trust in digital technology,” warns Lisa Bechtold, Global Lead Data Governance and Oversight at Zurich Insurance Group (Zurich). “At Zurich, we are committed to inspire confidence in a digital society. This is why we have adopted an AI Assurance Framework to ensure we use AI ethically to honor the trust customers are placing in us.”
So, what are those three reasons to take note of the sinister rise of deepfakes? Read on:
1. They are getting harder to spot
The first deepfakes were not especially convincing. The faces would move in a slightly unnatural way or parts of the image would flicker. However, they are improving rapidly – and the people making them are finding ways to compensate for the weaknesses. The Tom Cruise videos, for example, would not have been as convincing had they not been based on an actor who already had a slight resemblance to Cruise and the skill to mimic the star’s mannerisms.
Other tricks include intentionally downgrading the quality of the deepfake, for example, making a low-resolution video so that any flaws are harder to spot. And people are easier to fool when they are not expecting a trick. For example, when the chief executive of a UK-based energy firm received a phone call from his boss one day in 2019, he didn’t consider for a moment that the caller was using AI to trick him. The deep-faked voice was convincing enough that the victim transferred EUR 220,000 (USD 243,000) to the fraudster. These fraudulent calls are commonly called ‘vishing’ – or voice phishing – and deepfake technology is enabling criminals to create even more convincing and persuasive calls.
2. They are getting easier to create
Most of the significant work on deepfakes has happened only in the past few years, but the technology has progressed rapidly. Producing a deepfake used to require a lot of time and computing power: the AI would be ‘trained’ on thousands of images or hours of video footage, which could take several days and high-end cloud-computing resources. Today, there are companies that will make deepfakes to order, and free online applications can create them, though these are usually limited to a selection of presets on which the system has already been trained.
Nevertheless, there is every reason to think that the ability to make high quality deepfakes will soon be available to anyone with a computer and an internet connection. “This means criminals will no longer need in-depth expertise to create fakes used in fraud,” Bechtold adds.
3. They erode trust in an insidious way
We are already at a point where any digital media we come across could be a deepfake. A politician’s speech, a phone call from work or even medical records can all be faked with relatively little effort. In 2018, researchers demonstrated the ability to add or remove signs of lung cancer on a patient’s 3D CT scan. The power of internet connectivity and social media exacerbates the problem: a fake political speech can spread to millions of people before it is debunked, and many of those people will never see, or believe, the debunking. The potential risk to our trust in commercial, professional and civic life is enormous.
There is a lot of work to do. Jonathan Davis, Lead Data Scientist at Zurich UK, says we need to undertake more detailed risk assessments of how deepfakes could affect certain parts of government and business. “Mitigating the risk of deepfakes remains a continuous challenge as technology is evolving at a rapid pace. In our insurance industry, for example, automated claims processing could be targeted using deep-faked evidence to support a bogus claim.”
Raising awareness and educating people about the potential risk of deepfakes is important. And we need more research into technology that can verify digital content and expose deepfakes. Governments also have a key role to play and should work together to create legal frameworks to deal with malicious deepfakes.
The EU’s draft Artificial Intelligence Act – which is expected to be adopted later this year – will help, says Zurich’s Bechtold. “It will impose specific transparency obligations on AI systems that generate or manipulate content – i.e., deepfakes. This should help protect people from the risks that misuse of this technology can trigger.”
Businesses are also getting involved. For instance, a coalition of technology companies led by Adobe has published the first version of a new standard designed to prevent digital tampering.
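The tamper-resistance idea behind such provenance standards can be sketched simply: the publisher attaches a cryptographic signature over a hash of the media file, and any later modification of the file invalidates that signature. The following is a minimal illustration in Python, using an HMAC with a shared key as a stand-in for the public-key signatures a real standard would use; the key and file contents here are purely hypothetical.

```python
import hashlib
import hmac

# Hypothetical publisher key. A real provenance standard would use
# public-key signatures, not a shared secret like this.
PUBLISHER_KEY = b"demo-secret-key"

def sign_media(data: bytes) -> str:
    """Return a hex signature over the SHA-256 hash of the media bytes."""
    digest = hashlib.sha256(data).digest()
    return hmac.new(PUBLISHER_KEY, digest, hashlib.sha256).hexdigest()

def verify_media(data: bytes, signature: str) -> bool:
    """Check that the media bytes still match the publisher's signature."""
    return hmac.compare_digest(sign_media(data), signature)

original = b"frame-data-of-genuine-video"
sig = sign_media(original)

print(verify_media(original, sig))                          # unmodified file
print(verify_media(b"frame-data-of-deepfaked-video", sig))  # tampered file
```

Verification succeeds only for the untouched bytes; even a one-byte edit to the media produces a different hash and the signature check fails, which is what makes tampering detectable.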
Without more work on these issues, the consequences could be dire indeed.