Trump also shared two other deepfakes of high-profile people: one of US Vice-President Kamala Harris leading a communist rally at the Democratic National Convention; the other of him dancing with social media platform X (formerly Twitter) owner Elon Musk.
While we might laugh off deepfakes of A-list celebrities, tech tycoons and presidential candidates as entertainment fodder, worlds away from average New Zealanders, the rise of deepfakes is casting an ominous shadow as the technology collides with the criminal underbelly of the internet.
Just two days ago, a Taranaki grandmother lost $224,000 after being taken in by an AI deepfake video on Facebook of Prime Minister Christopher Luxon urging pensioners to supplement their income by investing in Bitcoin.
These kinds of attacks will soon become commonplace. Once the domain of highly technical hackers, the proliferation of easily accessible generative AI tools has lowered the barrier to entry for virtually anyone attempting to commit cyber fraud using deepfake images.
But realistic dupes of public figures aren’t the only form of deepfake bait, and vulnerable individuals aren’t the only targets.
We’re now seeing increasing examples of cybercriminals using these same tools against the corporate world, impersonating trusted people within organisations to scam and defraud unsuspecting victims, be it customers or colleagues.
In fact, scams and phishing attacks leveraging deepfakes are predicted to cost businesses more than US$40 billion by 2027.
Meanwhile, deepfake detection technology is scrambling to keep up with the rapid uptake of this developing technology.
As AI technology becomes faster and better at producing realistic outputs, it’s much harder to discern which content is real and which is not. And Kiwis are especially wary.
Recent research by the National Cyber Security Centre found participants from New Zealand had the lowest level of confidence (48%) of any country surveyed in their ability to identify AI-generated content. Only 47% said they trusted companies to implement AI responsibly.
DIY deepfakes, the next iteration of corporate cybercrime
Deepfakes are AI-generated synthetic media designed to mimic existing photos, videos and audio, making someone appear to look and sound as though they are doing something they’re not.
AI uses machine learning to create deepfakes by taking “inspiration” – so to speak – from the trove of online videos, images and other personal information available on the internet.
Cybercriminals, early adopters of any new technology that will aid them in their malicious endeavours, are using AI-generated deepfakes to create sophisticated misrepresentations of actual people, generated to dupe unwitting victims. And with plenty of local leaders active on social media or prominently featured in their businesses’ marketing collateral, there’s a treasure trove of content to train the AI on to produce startling representations of CEOs, CFOs and company directors.
One of the most convincing executions of deepfakes is when they rear their (very realistic) heads in video calls. A malicious actor may use a video conferencing platform like Zoom or Teams to call an employee, imitating a member of the executive team such as a chief financial officer and requesting a transaction or the transfer of money.
In fact, this exact scenario happened earlier this year to UK engineering firm Arup. An employee was invited to join a video call with several senior executive staff – all deepfakes – and instructed to transfer £20 million in funds to the scammers.
Back on our shores, in late 2023 the CFO at Zuru was targeted by a deepfake video version of his boss, Nick Mowbray, on a Teams call, attempting to dupe the CFO into transferring cash.
New Zealand workers, and workplaces, need not be sitting ducks in suits, waiting to fall victim to a deepfake cybercrime. There are plenty of ways employees and businesses can mitigate and protect themselves without needing to invest in expensive AI monitoring and detection products – because the secret to protecting yourself from deepfake cybercrimes lies with your people and the processes you wrap around them.
Five tips to avoid getting scammed
- Human-to-human checks: These are a highly effective and inexpensive AI detection method. For example, consider a policy requiring all financial transfer requests to trigger a mandatory out-of-band check – such as a call back to a known phone number – to confirm the request.
- Upskill your employees: Your people are your greatest risk, and your best line of defence. Ensure your cybersecurity training modules are regularly updated and consider interactive computer-based training that deals with AI deepfakes over video, phone and text.
- Instil good behaviour: Regularly raise awareness about the threat of AI deepfakes and remind employees to stop, slow down and think before actioning an unusual, urgent or emotion-eliciting request.
- Set boundaries: If you’re going to use monitoring tools, ensure the right safeguards are in place to prevent those same tools from compromising the privacy of your people, your customers, and your IT systems.
- Include deepfakes in your cybersecurity strategy: Make sure to regularly review your strategy to determine whether you are effectively detecting and mitigating deepfake threats before they can be used to scam your business.