Opinion: On July 24, Technology Minister Judith Collins announced a gradualist approach to the regulation of artificial intelligence. “We will take a light-touch, proportionate and risk-based approach to AI,” the accompanying cabinet paper said. “We already have laws that provide some guardrails; further regulatory intervention should only be considered to unlock innovation or address acute risks.”
Rather than the straitjacket of regulation, Collins has asked the Ministry of Business, Innovation and Employment to create AI guidelines for the public and private sectors.
A proper regulatory approach would be hard to design. Continual disruptive change has been a characteristic of the digital paradigm, nowhere more so than in the field of AI.
However, Victoria University of Wellington senior lecturer Andrew Lensen wrote in the Herald that the government’s approach ignores issues such as data privacy, political polarisation and inequities in service delivery, which will escalate without legislative oversight.
The usual arguments for AI regulation are that it will protect human rights and dignity: AI systems should not intrude on privacy or undermine equality and democracy, and should not be used to manipulate, deceive or coerce human users. AI systems should deliver identifiable social and economic benefits and be transparent and trustworthy.
A 2023 report revealed that 72% of New Zealanders are very or extremely concerned about unregulated AI. A similar number fear it will be used maliciously or have unintended consequences. Lensen wonders why the government is not listening.
There is often a fear of downsides when a new technology appears. Science fiction writer Isaac Asimov characterised this technological fear as the “Frankenstein complex”. Adapting his term specifically for AI, I call it the “Terminator Effect”, after the movie franchise premised on thinking machines taking over the world.
The disadvantages of AI regulation are that it may stifle innovation and creativity by imposing rigid, uniform standards, rules and procedures ill-suited to the dynamic and diverse nature of AI.
Regulation may introduce red tape that slows down and complicates the development and deployment of AI. It may also create uncertainty and inconsistency, and be subject to interpretation and revision by different authorities and jurisdictions.
It may infringe on rights and freedoms and limit the choices and opportunities for individuals and organisations to use and benefit from AI. It may impose unreasonable responsibilities that are costly and burdensome for developers and users.
There are sound reasons for a light-handed regulatory approach that fosters innovation and creativity and allows more room for experimentation and adaptation to the changing needs of diverse users. Such an approach might also encourage co-operation and collaboration if it relies on trust and dialogue rather than coercion and enforcement.
It might enhance efficiency and effectiveness if it reduces administrative and operational costs. It might also increase certainty and consistency and be more responsive and adaptable to continual disruptive change.
A light-handed approach might in fact respect rights and freedoms, relying on self-regulation and accountability and empowering individuals and organisations to make responsible decisions about the use and impact of AI.
I oppose overactive government interference in areas where the risks are not clearly identified. A light-handed, hybrid and adaptive regulatory approach to AI may be appropriate and desirable. This would allow for the adjustment and alignment of the level and scope of regulation according to the specific characteristics and circumstances of AI.
David Harvey is a retired district court judge.