Those threats included allowing cybercriminals to quickly create more convincing email or social media phishing lures, making it harder for people to tell what was legitimate.
On top of that, ChatGPT could also generate code. Roundy said that while the chatbot made developers’ lives easier with its ability to write and translate source code, it could also help cybercriminals by making scams faster to create and more difficult to detect.
Cybercriminals could also use ChatGPT to create fake chatbots that impersonate humans or legitimate sources, like a bank or government entity, to manipulate victims into handing over personal information that could then be used to access sensitive accounts, steal money or commit fraud.
Roundy urged the public to avoid chatbots that did not appear on a company’s website or app, and to be careful about sharing any personal information with someone they were chatting to online.
He also warned against clicking on links that came via unsolicited phone calls, emails or messages.
Hervé Le Goff, Senior Threat Analyst at CERT NZ, told the Herald that while the agency had not received any reports directly, it had seen articles overseas discussing AI tools such as ChatGPT being used to write scripts for phishing emails.
“The ability for AI tools to emulate human interactions opens up channels for scammers to create far more realistic wording in their campaigns. When combined with AI image-generating tools, the scammers can create profiles in bulk and at speed.
“We used to be able to confidently say ‘look out for poor spelling and grammar’ as a way of determining if you were being scammed. Now scammers can use these AI-generated scripts in emails or messaging services and fool more people into handing over their credentials or financial details.
“Our recommendation for New Zealanders going online is to remain vigilant and be careful of clicking any links. These scams may be written in a more convincing manner, but they still require you to believe the scenario and act before thinking. For example, if they claim to be from your bank, check through official channels, such as a phone number or website, before reacting.”
Charlie Bell, executive vice president of security, compliance, identity and management at Microsoft, said in a blog on AI and cybersecurity that there had long been a perception that attackers had an insurmountable agility advantage.
“Adversaries with novel attack techniques typically enjoy a comfortable head-start before they are conclusively detected. Even those using age-old attacks, like weaponising credentials or third-party services, have enjoyed an agility advantage in a world where new platforms are always emerging.”
But he said the tables could be turned on that asymmetry.
“AI has the potential to swing the agility pendulum back in favor of defenders. AI empowers defenders to see, classify and contextualise much more information, much faster than even large teams of security professionals can collectively triage. AI’s radical capabilities and speed give defenders the ability to deny attackers their agility advantage.”
Bell said Microsoft had learned from past experience and built security into everything it did.
“We know the time to secure these systems is now, as they are being created. To that end, Microsoft has been investing in securing this next frontier. We have a dedicated group of multi-disciplinary experts actively looking into how AI systems can be attacked, as well as how attackers can leverage AI systems to carry out attacks.
“While there will always be bad actors pursuing malicious intentions, the bulk of data and activity that train AI models is positive and therefore the AI will be trained as such.”