Deepfake Danger: How to protect your business

Deepfake technology is more affordable and accessible than ever. Once largely confined to the porn industry, deepfakes have now infiltrated other spaces, from politics to popular culture to fraud.

A recent example is the well-known indie band The Strokes appearing as though they are 20 years old again in their latest music video, “Bad Decisions.” In reality, the group of roughly 40-year-olds does not appear in the video at all; rather, each of their faces was skilfully created by Paul Shales using deepfake technology. By harnessing AI, the face and voice in existing footage can be manipulated to impersonate a real person (or persons), giving a convincing yet false illusion of them saying words that were never spoken.

Deepfakes are becoming immensely popular, with the number of them online doubling in less than a year, from 7,964 in December 2018 to more than 14,000 just nine months later. Shales, responsible for the de-ageing of The Strokes, can certainly testify to this growth with his meme page, The Fakening, where a single repost of one of his deepfake videos can gain him a staggering 10,000 followers.

The dark side of deepfakes

While virtually turning back the clock to erase a few wrinkles is fairly light-hearted, it becomes a serious problem when people’s faces are put on bodies without consent, or when sophisticated deepfakes are taken out of context and mistaken for real life. The U.S. Government Accountability Office has warned that “deepfakes could influence elections and erode trust.”

In a world where fake news makes headlines, the added sophistication of deepfakes makes it harder for the public (and organisations) to confirm a person’s true identity online and to distinguish what is authentic from what is not. With deepfakes of Donald Trump declaring war on North Korea, Hillary Clinton praising the Illuminati, Barack Obama delivering phoney speeches and former Italian Prime Minister Matteo Renzi insulting other Italian politicians, misinformation can spread on a catastrophic scale, skewing elections and politics.

How can deepfakes threaten a business?

Deepfakes don’t just pose a threat to our political systems. They are a very real problem for the digital economy and the evolution of digital identity.

While deepfake technology is becoming cheaper and easier to use, and is even available as an app for mobile devices, its repercussions for modern enterprises could be far-reaching. In 2019, a UK energy boss fell victim to a fraudster who used audio deepfake technology over the phone to mimic the voice of the victim’s boss. The impersonation succeeded, and the fraudster tricked the victim into transferring £200,000 to a Hungarian bank account.

Unfortunately, this is not the first case of deepfakes being used to trick individuals into sending money to fraudulent accounts. It highlights how important it is for organisations to remain vigilant against deceptive fraudsters and to proactively seek better ways to protect themselves as this type of fraud continues to transition from the light-hearted to the genuinely threatening.

In fact, over three-quarters (77%) of cyber security decision-makers are worried about the potential for deepfake technology to be used fraudulently, with online payments and personal banking services thought to be most at risk, yet barely a quarter (28%) have taken any action against it (source: iProov, January 2020).

Arming yourself against the new threat

It comes as no surprise that organisations are looking for new technologies to combat the most advanced threats. For example, many banks now ask for a government-issued ID and a corroborating selfie to verify a person’s digital identity when a new account is created online. Nevertheless, deepfake technology could be used to create a spoof video and dodge the selfie requirement.

More sophisticated identity verification methods are paramount to maximising security. Embedded certified liveness detection is key to detecting advanced spoofing attacks, including deepfakes, and to ensuring that the remote user is physically present during the verification process.

The majority of liveness detection solutions simply require the user to perform eye movements, nod their head or repeat words or numbers; however, these checks can be bypassed with simple deepfakes.

Unless the identity verification provider has certified liveness detection (validated by the National Institute of Standards and Technology), fraudsters could still trick the system. Certified liveness detection can quickly detect when a video, photo or even a mask is being used in lieu of an authentic selfie. Level 2 certification with iBeta quality assurance means an authentication solution can distinguish videos from real selfies and withstand a sophisticated deepfake attack. It is this certification that could be the difference between a secure ecosystem and one that is vulnerable to the imminent threats of tomorrow.
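To illustrate where that check sits in practice, below is a minimal sketch in Python. The function names (check_liveness, match_face_to_id) and the similarity threshold are hypothetical placeholders, not any vendor’s real API; the point is simply that liveness detection gates the selfie-to-ID face match, so a spoofed or deepfaked capture is rejected before it is ever compared with the document.

```python
# Sketch of an onboarding flow where liveness detection gates the face match.
# The provider calls are hypothetical stubs so the control flow runs end to end.

from dataclasses import dataclass


@dataclass
class LivenessResult:
    is_live: bool          # did the liveness check judge the user physically present?
    attack_type: str = ""  # e.g. "replayed video", "mask", "deepfake" when not live


def check_liveness(selfie_capture: bytes) -> LivenessResult:
    """Placeholder for a certified liveness detection (presentation attack detection) call."""
    return LivenessResult(is_live=True)  # stub: assume the capture passed


def match_face_to_id(selfie_capture: bytes, id_document: bytes) -> float:
    """Placeholder for a face-matching call; returns a similarity score between 0 and 1."""
    return 0.95  # stub: assume a strong match


def verify_new_account(selfie_capture: bytes, id_document: bytes) -> str:
    # Step 1: liveness first, so a spoofed or deepfaked selfie never reaches the match step.
    liveness = check_liveness(selfie_capture)
    if not liveness.is_live:
        return f"rejected: presentation attack detected ({liveness.attack_type})"

    # Step 2: only a live capture is compared against the government-issued ID photo.
    if match_face_to_id(selfie_capture, id_document) < 0.90:  # illustrative threshold
        return "rejected: selfie does not match ID"

    return "accepted: identity verified"


if __name__ == "__main__":
    print(verify_new_account(b"<selfie bytes>", b"<id document bytes>"))
```

Ordering the checks this way matters: if the face match runs first, a convincing deepfake of the legitimate account holder could pass it, which is exactly the gap certified liveness detection is meant to close.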

While deepfakes have infiltrated politics, social media and industries such as entertainment, they pose a legitimate threat to business ecosystems when used as a vehicle to bypass liveness checks during new account onboarding. It’s time to get serious and start acting. Rather than waiting and responding to the threat reactively, we must fight AI with AI: the kind of AI that powers modern identity verification and combats deepfakes.

By Labhesh Patel, Jumio’s CTO and Chief Scientist

