Online scamming is big business, worth at least an estimated $200 million a year in New Zealand. Tech solutions are only partially effective here, because they do little to counter the powerful emotional manipulation we’re all susceptible to.
Nevertheless, IT departments everywhere have been asked to grasp the nettle and come up with phishing and social engineering awareness campaigns. Many resort to “phishing training”, with booby-trapped emails from seemingly trusted sources such as management and IT departments, to teach users how to spot danger signs.
Miss the difference between an i and an l in a domain name, click a link in an email or open an attachment, and you have to sit through an unfunny phishing training video and lose valuable work time to the response questionnaires that follow.
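For the curious, the "i versus l" trick can even be spotted mechanically. Here's a minimal sketch, mine rather than any vendor's filter, that flags a handful of commonly confused character substitutions in a domain name; the lookup table and example domains are illustrative only.

```python
# A rough sketch of flagging lookalike characters in a domain name.
# The substitution table is illustrative, not exhaustive.
CONFUSABLES = {
    "1": "l",   # digit one posing as a lowercase L
    "0": "o",   # zero posing as the letter o
    "rn": "m",  # "r" plus "n" rendering like "m"
    "vv": "w",  # double "v" rendering like "w"
}

def lookalike_warnings(domain: str) -> list[str]:
    """Return human-readable warnings for suspicious substrings in a domain."""
    warnings = []
    for fake, real in CONFUSABLES.items():
        if fake in domain.lower():
            warnings.append(f"'{fake}' in {domain} could be read as '{real}'")
    return warnings

for d in ("meridianenergy.co.nz", "rneridianenergy.co.nz", "paypa1.com"):
    print(d, "->", lookalike_warnings(d) or ["no obvious lookalikes"])
```

Real mail filters go much further than this, but it shows why a human squinting at a subject line is at a disadvantage.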
That strategy doesn’t work very well. We’re still asked to click links and open emailed attachments as part of our jobs.
There’s some comedy gold here: users quickly learn not to trust messages from the IT department and delete them unopened, because they don’t want to risk a slap from management and another sitting of the phishing training video.
Attackers have a formidable arsenal, namely the internet, with its gigantic scale and reach, reinforced by social media networks that know their users well and algorithmically surface that data to anyone in order to create more connections. Think of it as a worldwide system that you participate in but have little or no control over.
It makes for a low barrier to entry for attackers, who have an almost limitless supply of targets, and detailed information on them, to cycle through. It’s also pretty much free for attackers to use, with hardly any provider or site attempting to verify users.
Anyone can be targeted. Recently, I mentioned on LinkedIn that I’m after contract work, as one does in 2023. I got positive approaches but also a response from a scammer purporting to be from Meridian Energy, asking if I was keen on a well-paid job.
The scammer got plenty of things right, backing up the unsolicited direct messages with a convincing LinkedIn profile written in good English, complete with two plausible-sounding degrees from the universities of Otago and Canterbury and the official logos that LinkedIn automatically inserts.
However, “Nick Julius” claimed to be located in Ahipara, which, together with the odd name and the lack of any mutual acquaintances, was a crack in the façade.
Sure enough: “Nick Julius” started going on about a bogus job at Flight Centre, not Meridian.
When I asked how Ahipara is at this time of the year, “Nick Julius” realised the game was up, sent an aggressive reply and quickly took down their entire profile.
By then, LinkedIn had picked up that the messages were a scam and put up a warning that they were likely harmful.
With the LinkedIn profile gone, there was no obvious way for me to report “Nick Julius”. Other targets are probably being worked on right now by the scammer, who will have fine-tuned their approach.
Thing is, past social engineering attempts on LinkedIn have been ridiculously obvious fakes.
This latest one, however, was polished and targeted, with real thought and effort put into it. It made me wonder how much more attention to detail would be needed to reel me in.
I suspect I’ll find out soon enough, as attackers pivot to artificial intelligence trained to look, feel and sound indistinguishable from humans, and which becomes better and more realistic with every attempt.
Don’t take my word for it. Standard information security doctrine assumes everyone will get scammed, hacked, phished or whatever at some point. That doesn’t make anyone a right boob, because the odds are stacked high in cyber criminals’ favour, which is at the heart of the issue here.