Online exclusive
Peter Griffin’s consumer tech columns appear fortnightly on listener.co.nz
We all like to think we are savvy enough to avoid falling into the clutches of a scammer, but the numbers tell a different story.
About $200 million is lost to scams in New Zealand each year. The New Zealand Herald highlighted some of the biggest individual losses in April, when it examined the cases of 25 scam victims who had a total of $6.7m stolen from them. Only $700,000 was recovered.
The amounts stolen ranged from $40,000 to a massive $1.4m, and most victims were lured into fake investment schemes, duped into transferring large sums of money in the hope of a huge return on their “investment”.
It’s embarrassing to fall victim to a scam that starts with a text message or Facebook chat from a stranger. That’s why official figures for scam-related losses are just the tip of the iceberg.
The latest New Zealand Crime & Victims Survey (NZCVS) from the Ministry of Justice shows that 11% of New Zealanders were victims of fraud or cybercrime in 2023. When it comes to reporting crimes to police, fraud and cybercrime have the lowest reporting rate, at just 10%. That compares to 57% for vehicle offences, 42% for burglary and 33% for interpersonal violence.
Smooth-talking criminals mastermind these scams, but the initial approach is often via an email, text message, phone call or chat message. With the rise of conversational artificial intelligence (AI), scammers are programming chatbots to strike up initial conversations, hoping their robots are convincing enough to hook some victims. Then the scammers will usually step in to carry on the conversation and work their charm on the victim.
Be wary of group chat messages
Facebook, WhatsApp and Telegram group chats are increasingly being targeted by scammers. If you are a member of such a group, particularly one focused on investment advice, be very wary about responding to any direct messages offering financial advice or investment opportunities. It’s quite likely to be a chatbot’s opening gambit in a scam.
The real answer to the rise of AI-powered scams is for Big Tech platforms, banks and telcos to do more to keep their customers safe, coupled with education that shows people how to stay safe online. But that same AI chatbot technology, underpinned by large language models trained to serve up convincing chat conversations, is also being harnessed to battle the scammers.
So-called scambaiting isn’t a new thing. For years, enterprising YouTubers have been entertaining themselves and millions of viewers by turning the tables on scammers, wasting their time and, in many cases, succeeding in having scam operations shut down. But it’s like a game of Whack-a-Mole. As one scam syndicate is dismantled, three more start up. Can AI scale up such efforts?
In 2017, cyber safety group Netsafe, seeing the growing wave of online scams, developed Rescam, an automated system designed to respond to scam emails with convincing messages. The idea was to keep scammers engaged in a conversation, chewing up their time so they couldn’t go after real targets.
Netsafe last month relaunched Rescam using AI chatbot ChatGPT to come up with more realistic and responsive messages. If you receive a dodgy-looking message asking you for personal details or money, you can simply forward the email to me@rescam.org.
I did exactly that a couple of weeks back when I received an email supposedly from an online casino offering me a coupon to redeem a US$1500 prize. Rescam analysed the email and sent a convincing automated response on my behalf.
The scammer hasn’t taken the bait yet. When and if they do, Rescam will send me a transcript of the conversation. The Rescam chatbot adopts the persona of a potential victim, responding to the scammer’s prompts with plausible but ultimately meaningless information. This keeps the scammer engaged, believing they are on the verge of a successful con.
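In spirit, a scambaiting bot like this is a loop that feeds each incoming scammer message, along with a fixed “victim” persona prompt, to a language model and sends back the reply. Rescam’s actual implementation isn’t public, so the sketch below is a toy illustration only; canned keyword-matched replies stand in for a real LLM call, and all names are made up for the example.

```python
# Toy sketch of a persona-based scambaiting responder, in the spirit of
# the approach described above. A real system would send the scammer's
# message plus PERSONA to a language model; here, canned stalling replies
# stand in for the model. Everything here is illustrative, not Rescam's code.

PERSONA = (
    "You are a polite, slightly confused retiree who seems interested in "
    "the offer but never provides real personal or financial details."
)  # in a real system, this would be the model's system prompt

# Plausible-but-meaningless stalling replies, keyed by what the scammer asks for.
CANNED_REPLIES = {
    "bank": "Oh dear, I can never remember my account number. Is a cheque all right?",
    "fee": "A small fee sounds fair. How do I know my prize is safe in the meantime?",
    "urgent": "I'm in no rush, dear. Could you explain the whole thing once more?",
}
DEFAULT_REPLY = "That sounds wonderful! Could you tell me a little more first?"

def reply_to_scammer(message: str) -> str:
    """Pick a stalling, detail-free reply based on keywords in the message."""
    text = message.lower()
    for keyword, reply in CANNED_REPLIES.items():
        if keyword in text:
            return reply
    return DEFAULT_REPLY

if __name__ == "__main__":
    print(reply_to_scammer("Send your bank details to claim your prize"))
```

The point of the design is simply that every reply keeps the conversation alive while disclosing nothing, which is what chews up the scammer’s time.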
Rescam relies on email recipients proactively forwarding suspect emails, so the more critical mass it builds, the more effective it will be at thwarting the efforts of scammers. And the more example emails Rescam receives, the better its conversational abilities will become.
Uncovering the mule accounts
UK-based security firm Netcraft has employed similar methods to respond to scam emails and text messages. Its aim is not only to frustrate scammers by tying them up in fruitless conversations, but also to try to uncover the international financial infrastructure that allows the scammers to operate.
“We have extracted thousands of criminal money mule bank accounts across 73 countries and more than 600 financial institutions,” Netcraft explained in a blog post on the two-year research project it undertook using AI-powered chatbots to engage scammers.
“In one case, we have received 17 mule accounts from one conversation. The top four crypto wallet addresses Netcraft identified have received more than US$45 million (1000 BTC).”
Not surprisingly, scammers are particularly keen on schemes that involve victims transferring cryptocurrency, which can be sent to anonymous wallets without the scrutiny of a bank or financial regulators. Netcraft also found that scammers are willing to stay engaged with potential victims for a long time if they feel there’s a chance of a payout.
“When we see the whole scam play out, on average, criminals send more than 32 messages despite receiving only 15 replies. Standing out in the data is that criminals are eager to engage quickly and frequently and maintain these scams over an average of more than 47 days.”
Netcraft shares the information it gathers with law enforcement agencies, assisting efforts to shut down the bank accounts scammers rely on to receive payments from their victims. Because scammers often pose as representatives of legitimate companies, one of the main ways our banks could help would be to introduce “confirmation of payee” technology, which lets customers verify that they are sending money to the intended recipient.
After mounting pressure from the Banking Ombudsman, and facing heat over the large losses victims have suffered, the big banks confirmed in April that they will collectively roll out a confirmation of payee system. A similar move in the UK a few years ago dramatically reduced financial losses from scams.
According to Netcraft, the main types of scams that result in big financial losses include romance fraud, in which fraudsters strike up a fake romantic relationship with the victim; advance fee fraud, designed to trick victims into making modest upfront payments in return for the promise of larger payouts; and so-called pig-butchering scams.
“So-called because the criminals ‘fatten victims up’ and then take everything they can,” Netcraft explains. These scams require the greatest amount of effort from fraudsters, who can spend months attempting to gain the trust of victims before asking for money.
Kitboga, a Twitch streamer and YouTuber with 3.64 million followers, has made a business of scambaiting fraudsters, publishing his interactions with them as he poses as witless victims. In 2022, as conversational AI was on the rise, he used ChatGPT to create an AI bot using voice-to-text recognition and pre-recorded responses to trick scammers into divulging the bank accounts they were using.
Some of the resulting videos offer fascinating insights into the tactics of scammers, and are frequently hilarious as Kitboga poses as a tech-illiterate septuagenarian, luring the scammers into revealing more information.
Is it ethical to deceive the scammers? The use of AI chatbots to fight scammers is largely seen as a positive development by law enforcement and cybersecurity agencies, as long as they don’t engage innocent individuals or violate privacy norms.
But you can be sure the fraudsters are using exactly the same tools to make their scam efforts more convincing and effective. The ability of conversational AI systems to recreate a person’s voice from just a few snippets of audio has security analysts bracing for a wave of phone-based scam attempts.
As large language models become more sophisticated, so too will the conversations they can have on behalf of scammers. AI is only escalating the cybersecurity arms race, but it’s good to know at least that it’s a two-way street and innovative AI-powered efforts to stymie the scammers are gathering steam.