By Sanjay Gupta, Global VP at Mitek
Deepfake videos are becoming commonplace – so much so that you may have come across one without realising it. Today, over 15,000 deepfakes are reportedly lurking online.
While deepfakes are a relatively new phenomenon, their potential to morph into a bigger problem is worrying: left unchecked, their capacity to distort and manipulate reality could be limitless.
For now, these AI-driven videos mostly impersonate politicians, business magnates and A-listers. Just recently, the BBC called out a hyper-realistic deepfake of Boris Johnson and Jeremy Corbyn endorsing each other in the UK general election – a clear sign that high-quality deepfake videos are no longer as obvious as we once thought.
Deepfake technology emerged in the wake of the exponential growth of ‘fake news’, which already makes separating reality from fiction difficult. As ‘spoofs’ become harder to detect with the naked eye, we are likely to see more examples of deepfakes being used to fabricate false stories – supplying convincing photo, video or audio ‘evidence’ that someone said or did something they never did.
What’s even more astonishing is that only a couple of hundred images are required to produce a highly believable ‘synthetic’ video of someone. And it’s not just politicians and celebrities at risk from sophisticated manipulations using deepfake technology.
As deepfake technology matures, driven by AI, big data and advances in digital manipulation of image and sound, a much more concerning use of the phenomenon is emerging.
Deepfake technologies are being used to produce nearly flawless falsified digital identities and ID documents – and the range and quality of the technologies available to fraudsters is the underlying driver. Technology that was once considered to be available to a few industry mavericks – using it for the right reasons – has now gone ‘mainstream’.
Identity fraud is an age-old problem. Yet tech innovation has taken it to the next level. Banks and financial institutions already employ thousands of people to stamp out identity theft, but to fight the ever-growing threat of these new forms of fraud, businesses must look to technology to spot what the human eye can’t always see.
As the annual cost of punitive ‘Know Your Customer’ (KYC) and Anti-Money Laundering (AML4/5) non-compliance fines for a typical bank rises to an eye-watering €3.5 million, technology can and should be part of the solution.
By digitising customer identity verification alone, a typical bank could save €10m a year. The best of this technology verifies customers’ identity documents digitally – via an ID scan and a selfie – in near-real time. Technology that can verify identities – of customers, partners, employees, or suppliers – is not new. It has become a must-have, especially in regulation-heavy industries like banking.
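The selfie-to-document match described above typically reduces to comparing face embeddings produced by a recognition model. As a hedged illustration – with toy four-dimensional vectors standing in for real model output, and an arbitrary acceptance threshold – the core comparison might look like this:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def verify_identity(document_embedding, selfie_embedding, threshold=0.8):
    """Accept the match only if the two embeddings are close enough."""
    return cosine_similarity(document_embedding, selfie_embedding) >= threshold

# Toy embeddings standing in for a real face-recognition model's output.
doc = [0.12, 0.85, 0.31, 0.44]
selfie_genuine = [0.10, 0.83, 0.35, 0.42]   # close to the document photo
selfie_imposter = [0.90, 0.05, 0.70, 0.10]  # far from the document photo

print(verify_identity(doc, selfie_genuine))   # True
print(verify_identity(doc, selfie_imposter))  # False
```

In production the embeddings would come from a trained face-recognition network, and the threshold would be tuned against false-accept and false-reject rates rather than picked by hand.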
The good news is that the ability to identify deepfakes will only improve with time, as researchers are experimenting with AI that is trained to spot even the deepest of fakes, using facial recognition and, more recently, behavioural biometrics.
By creating a record of how a person types and talks, the websites they visit, and even how they hold their phone, researchers can create a unique digital ‘fingerprint’ to verify a user’s identity and prevent any unauthorised access to devices or documents. Using this technique, researchers are aiming to crack even the most flawless of deepfakes, by pitting one AI engine against another in the hope that the cost of ‘superior AI’ will be prohibitive for cybercriminals.
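As a rough sketch of that behavioural ‘fingerprint’ idea – the statistics, sample timings and tolerance below are invented purely for illustration, not any vendor’s actual method – a keystroke-rhythm check could be as simple as comparing summary statistics of key-hold times:

```python
def keystroke_profile(dwell_times):
    """Summarise a typing rhythm as (mean, std deviation) of key-hold times in ms."""
    mean = sum(dwell_times) / len(dwell_times)
    variance = sum((t - mean) ** 2 for t in dwell_times) / len(dwell_times)
    return (mean, variance ** 0.5)

def matches_profile(enrolled, session, tolerance=0.3):
    """Treat the session as the enrolled user if every statistic is within tolerance."""
    return all(abs(e - s) <= tolerance * e for e, s in zip(enrolled, session))

# Hypothetical key-hold timings, in milliseconds.
enrolled = keystroke_profile([105, 98, 110, 102, 95])
same_user = keystroke_profile([100, 104, 97, 108, 101])
intruder = keystroke_profile([160, 45, 200, 30, 180])

print(matches_profile(enrolled, same_user))  # True
print(matches_profile(enrolled, intruder))   # False
```

Real behavioural-biometric systems combine many more signals – touch pressure, device orientation, navigation patterns – and score them with trained models rather than fixed tolerances.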
Meanwhile, the tech industry is testing other safeguards. Apple has expanded Near Field Communication (NFC) support on its devices so they can unlock and read the data stored in the security chips of passports. Once unlocked, the chip reveals the biometric data and high-resolution photo of the document’s owner. The approach is now being used by the UK Home Office to let European citizens apply for Settled Status through both iPhone and Android apps.
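The chip data described above is protected by access keys derived from the passport’s machine-readable zone (MRZ), as specified in ICAO Doc 9303. One small, verifiable piece of that standard is the MRZ check digit, sketched here in Python:

```python
def mrz_check_digit(field):
    """ICAO 9303 check digit: weights 7,3,1 over digits, letters (A=10..Z=35), '<'=0."""
    weights = (7, 3, 1)
    total = 0
    for i, ch in enumerate(field):
        if ch.isdigit():
            value = int(ch)
        elif ch == "<":
            value = 0  # filler character counts as zero
        else:
            value = ord(ch) - ord("A") + 10  # A=10, B=11, ..., Z=35
        total += value * weights[i % 3]
    return total % 10

# Document number and date of birth from the ICAO 9303 specimen passport.
print(mrz_check_digit("L898902C3"))  # 6
print(mrz_check_digit("740812"))     # 2
```

A reading app recomputes these digits (and derives the chip-access keys from the same MRZ fields) before it can pull the photo and biometrics off the chip, which is why a blurry or tampered MRZ fails fast.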
For the deepfake antidote to work as intended, biometrics will need to keep pace with cutting-edge innovation. Biometric technologies will always be a preventative measure – the only real indicator of who you really are. Overall, the adoption of biometrics as an option for digital identity verification, or account and device sign-in, is a reliable measure of security.
New websites like thispersondoesnotexist.com, however, can generate incredibly life-like images, revealing how fraudsters can become very realistic actors within the digital ecosystem. This means that fake digital identities could impact nearly every organisation that operates a digital onboarding solution – which, in the age of the instant real-time experience, is any company that wants to win new digital-savvy customers.
As social media platforms and governments step up efforts to curb the use of deepfakes, many businesses are looking to technology to verify the identities of customers through AI and machine learning at the point of onboarding. This minimises potential threats even before new regulations arrive. These tech providers should also stay up to date on the most pressing risks and on the technologies and techniques making the rounds among malicious threat actors.
While regulation of deepfakes remains unclear, businesses cannot afford to lose ground. They have a duty to protect the biometric information they capture from customers and prospects, and now is the critical time to ensure that biometric data and consumer identities are not compromised and used to create fake identities.
Ultimately, the full ramifications of deepfakes are yet to emerge. But businesses can’t afford to take a back seat. Only education and action can help us stamp out the threat – and let us trust our eyes again.