Microsoft global exec on Judith Collins’ AI strategy, security turnaround

Microsoft executive Bret Arsenault – in Auckland for the opening of the tech giant’s $1 billion data centre at Westgate – heaped praise on Technology Minister Judith Collins’ AI strategy and explained how his firm has reinvented its approach to security following two high-profile breaches.
The second breach – disclosed by a Microsoft blog post and market filing in January this year – saw Russian intelligence gain access to the emails of some of Microsoft’s senior executives, beginning in late November 2023, in the “Midnight Blizzard” attack.
“Microsoft’s security culture was inadequate and requires an overhaul, particularly in light of the company’s centrality in the technology ecosystem and the level of trust customers place in the company to protect their data and operations,” the US Cyber Safety Review Board (CSRB) said in its report.
“Unfortunately, throughout this review, the board identified a series of operational and strategic decisions that collectively point to a corporate culture in Microsoft that deprioritised both enterprise security investments and rigorous risk management. These decisions resulted in significant costs and harm for Microsoft customers around the world.”
The CSRB and Microsoft both linked Storm-0558 to the Chinese government, which has denied any association with the hacker group.
Microsoft president Brad Smith told the House committee the company accepted the CSRB’s findings “without equivocation or hesitation”.
His company had already engineered a huge culture shift, he said.
Arsenault, who was in Auckland for the launch of Microsoft’s first NZ hyperscale data centre last week, is one of the key executives leading Microsoft’s Secure Future Initiative (SFI), launched this time last year in response to the Exchange breach and other incidents.
Arsenault was Microsoft’s chief information security officer from 2009 to late last year. He now serves as chief cybersecurity officer.
SFI’s core ethos is to be “secure by design, secure by default and secure in operation”, Arsenault said. “If a product doesn’t meet the security bar, it doesn’t ship.”
His firm had adopted all 16 recommendations from the CSRB report but had also gone further, he said.
While there is always pressure to ship first, Microsoft chief executive Satya Nadella said in a May blog post that sometimes feature upgrades would have to be delayed. “Security is a team sport, and accelerating SFI isn’t just job number one for our security teams – it’s everyone’s top priority and our customers’ greatest need. If you’re faced with the tradeoff between security and another priority, your answer is clear: Do security.”
Part of the Microsoft executive team’s compensation is now tied to meeting security milestones.
SFI is also a response to the increasing scale and severity of cybersecurity threats, particularly with the rise of artificial intelligence, Arsenault told the Herald.
“We’re not seeing novel threats [from AI],” he said. But just as businesses were using AI to be more efficient and effective, the bad guys were deploying artificial intelligence to sharpen their threats and generate a higher volume of attacks.
“In the last few years, our Ghost and Mstic teams have gone from tracking 300 adversaries to over 1500 adversaries,” Arsenault says.
The increase is partly down to better monitoring tools. But there are also simply more bad actors, he said.
“There’s some espionage and nation state activity, but financial motivation is driving a lot of it,” he says.
SFI has translated into changes and investments for Microsoft in Australia and New Zealand, Arsenault says, including a new threat-hunting team based in Canberra, Mstic, and a team operating on both sides of the Tasman, Ghost.
Mstic (pronounced “mystic”) stands for “Microsoft Threat Intelligence Centre”, a group that, among other things, works closely with intelligence agencies’ cyberthreat units.
Ghost stands for “global hunting, oversight and strategic triage”. The team is involved in proactive threat detection, assessment and disruption, Arsenault says.
NZ balance ‘right’
In the age of artificial intelligence, security is a broader effort than simply trying to repel or foil hackers through brute force.
Arsenault recently published a post called “More value, less risk: How to implement generative AI across the organisation securely and responsibly”, which looked at how to deal with five concerns among business leaders about AI: data security, hallucinations, threat actors, biases and legal and regulatory issues.
Mid-year, Technology Minister Judith Collins presented a paper to Cabinet that said, “We will take a light-touch, proportionate and risk-based approach to AI.”
Her generally hands-off strategy, compared to Australia, the EU and, to a degree, the US, got a mixed reception.
Noting the lack of funding for AI initiatives, as well as Collins’ regulation-as-a-last-resort approach, BusinessDesk commentator Peter Griffin asked whether her approach to AI was “light-touch or lightweight?”
Victoria University AI lecturer Andrew Lensen said Collins had, in his view, all but ruled out AI regulation, a prospect that scared him.
Arsenault’s take: “Governments have a responsibility to protect their constituents, but also to drive a thriving economy. And I think you have to get that balance right.”
Chris Keall is an Auckland-based member of the Herald’s business team. He joined the Herald in 2018 and is the technology editor and a senior business writer.