People in the start-up community know Andrew Chen as an Auckland University research fellow turned partner with Matū Group, a venture capital firm that invests in “deep-tech” businesses.
But after five years with Matū, Chen has been hired by the NZ Police, where he’ll now spend 80 per cent of his work week as chief advisor, technology assurance.
He brings a strong interest in tech ethics to the role.
The topic of Chen’s 2019 PhD thesis was: “The Computers Have a Thousand Eyes: Towards a Practical and Ethical Video Analytics System for Person Tracking”.
The theme resonates at a time of growing concern about the number of police surveillance cameras, the occasional misuse of automated number plate recognition, close police co-operation with private security camera operators, and privacy questions over Police experimentation with AI.
His appointment also comes at a time when our lawmakers have done nothing to address the new technology, and amid general concerns over bias in facial recognition systems, such as the recent incident that saw a Māori mum misidentified as a trespassed “thief” at a Foodstuffs supermarket trialling the technology, and over police officers using facial recognition tools without recommended safeguards in place.
The Herald asked Chen about his new role.
Herald: What led to the career hop from venture capitalist to a role with the NZ Police?
Chen: Since finishing my studies I’ve had two careers in parallel – one doing investment in early-stage deep tech start-ups, and one in technology ethics including all the work on digital contact tracing during Covid-19.
So, it’s less of a career hop and more a shift in the balance of my time – now I’m focusing most of my time on technology ethics at Police at 0.8 FTE [full-time equivalent], while maintaining my connections to venture capital and investment with the other 0.2 FTE.
Is it a new role?
The role has been around in Police since 2020 and I owe a lot to my predecessor Inspector Carla Gilmore for setting up the team [as manager of emergent technologies] and establishing good processes across the organisation.
Technology is moving rapidly and it’s important that Police have a good grasp of the human rights, legal, ethical, privacy, and other implications of using these technologies. I’ve worked with Police in the past as a contractor through the review of facial recognition technologies and as an independent member of their Expert Panel on Emerging Technologies.
What will you be doing day-to-day? Will you be assessing areas like number plate recognition and facial recognition?
Our Technology Assurance team receives proposals for any new use of technology – some of which are very basic, like a new way to provide information to frontline staff through their iPhones. We spend most of our time on more complex technologies like ANPR [automated number plate recognition] and ensuring they are being used in line with Police policy.
Our system of policing relies on public trust and confidence, so it’s important that these technologies and tools are used appropriately.
Our work is about keeping everyone safe – both Police and the public – by finding the right balance between protecting people’s right to privacy and effective policing which ensures offenders are held to account.
What role do you see AI playing in future law enforcement?
I think in the future there is an opportunity to support work behind the scenes. With new technology people tend to go straight to frontline operations and how tools might catch more offenders directly. I think uses that are perhaps less direct, but still just as important, have the potential to make a real difference to frontline delivery of services.
We haven’t begun any work programme on this, but the sort of things I personally think may have some merit are around how we collect and filter crime data, train new staff, and inform work to allocate resources.
What keeps you up at night when you think about new technologies and policing?
I think generative AI is creating new challenges for policing all over the world. These tools are so widely available and there are a million ways of using them that we haven’t imagined yet. We keep seeing new ways criminals are exploiting technology to harm innocent people and communities.
While generative AI is not currently approved for any Police use, I’m starting to think about whether we might be able to use these tools in low-risk ways to help Police do their job better and more efficiently.
With this I am also mindful of all the issues like reliability, bias, information security, and data sovereignty. Police and the public sector generally are rightfully held to a higher standard in ensuring that what we do is safe and appropriate.