Rethinking Digital Regulation in Aotearoa New Zealand

Aug 27, 2025 | Blogs

by Keenan Evans, Research Assistant @ ALTeR 


Technology products and services saturate everyday life – from social media algorithms to legal practice and even government operations – so the question is no longer whether to regulate, but how. While the growth of technology is an essential driver of future economies, the systemic harm borne by digital consumers is increasingly viewed and accepted as a symptom of this growth (Crofts, 2024). How can we advocate for a stronger legislative framework to empower the New Zealand regulatory landscape?

In Aotearoa, the government has taken a flexible approach to emerging technologies. For example, AI is regulated through the use of existing legislation as “guardrails”. While this strategy aims to create space for innovation, it instead creates ambiguity, leaving businesses without a clear regulatory framework and consumers vulnerable to poor practices.

In this post, we reflect on key areas where New Zealand’s policy can evolve to promote a more responsible and resilient digital environment, especially in the face of Big Tech’s influence.  

I. Artificial Intelligence: The Need for Clearer Boundaries

New Zealand’s reliance on flexible regulatory approaches leaves critical gaps in algorithm transparency and accountability that put both the public and democratic processes at risk. 

AI is central to economic growth and innovation, as evidenced by its massive growth in financial and technology markets. However, without standalone regulation, it can also pose real risks to consumers, businesses, other users and even non-users – particularly when it comes to the way algorithms curate content.

AI has a unique place in social media, where algorithms collect direct and indirect data from users and propagate content based on the AI’s prediction of what would be most relevant to the user. While users may enjoy the ‘curated’ feed, it raises questions about the transparency of filtering algorithms. For example, if a user believes that the algorithm simply filters content to what they want to see, would they assume that any political content it surfaces reflects their own agency rather than the platform’s choices?

There has been clear evidence of algorithms propagating content that maximises engagement rather than diverse and accurate content. A clear example of this was the hyperbolic conspiracy theories spread in the 2016 US presidential election and the Brexit Referendum, providing evidence that algorithms can skew political discourse and amplify harmful narratives (Kosilova et al., 2022). If a human were to make similar decisions resulting in serious consequences, courts can determine their intent and hold them legally accountable. Can the same be said for AI?

We need clearer legal boundaries around how algorithms operate, not to limit innovation but to ensure transparency and accountability. As AI systems become more complex and integrated with daily life, assigning responsibility and legal accountability becomes increasingly critical.

II. Data Protection: Beyond Principles Towards Enforcement 

New Zealand’s Privacy Act 2020 includes strong principles – like transparency, consent and extraterritorial oversight. However, accountability is difficult to incentivise when enforcement relies on compliance notices and reputational damage.

The Clearview AI trial with NZ Police highlights this tension between policy and enforcement. Clearview AI is a facial recognition technology (FRT) that scrapes images of people from public sources without consent. The trial was investigated by the Privacy Commissioner, with subsequent policy prohibiting live facial recognition. While the Privacy Commissioner has ordered images in the databases to be properly removed, the data collection by Clearview AI is unlikely to be penalised. This trial highlights how brief divergences from internal governance can result in a breach of the Privacy Act with minimal effective enforcement.

Additional insight can be found overseas in the DeepMind and Royal Free scandal in the United Kingdom. The scandal concerned the use of an algorithm to treat patients; however, the process involved transferring patients’ data without explicit consent or notice (Powles & Hodson, 2017). Most relevant to New Zealand is how this agreement was implemented: a check with a non-legal third party and an internal privacy assessment, analogous to the procedure employed by the NZ Police in implementing the Clearview AI trial. With evidence that internal frameworks do not incentivise accountability, the need for more stringent enforcement methods is clear.

To better protect citizens, the Office of the Privacy Commissioner should be empowered with greater enforcement tools. The Privacy Act 2020 provides a robust legislative foundation that would benefit from additional enforcement capabilities to ensure regulatory compliance. Additionally, aligning our approach with more comprehensive international models like the EU’s GDPR, where data protection is treated as a fundamental right, is worth considering.

III. International Inspiration

While the above-mentioned examples are important, we don’t have to start from scratch. Inspiration can be drawn from both the EU’s GDPR and Taiwan’s vTaiwan.

The GDPR is a more robust and agile regulatory framework because it legally recognises data protection as part of the fundamental human right to privacy. Through this codification, there is stronger legal justification for more stringent enforcement of data privacy regulation (Buttarelli, 2016). If New Zealand were to recognise data protection as more than a mere privacy interest, it would provide much-needed legislative authority for stronger enforcement. The EU’s GDPR demonstrates how consumer-oriented privacy laws offer citizens strong protections not only now but also in a rapidly changing future.

Taiwan’s vTaiwan is an online discussion forum that involves citizens in policymaking, helping those with opposing views to come to a consensus on proposed regulations. Although not binding on the Taiwanese government, it allows citizens to participate in a digital democracy and understand the regulation being passed. An example of its effectiveness was the regulation of Uber upon its initial entry into the Taiwanese market: through vTaiwan, citizens came to a consensus on rules that protected the local market while allowing foreign entry. This raises an interesting question: could New Zealand benefit from a hybrid model – combining legal strength with public input?

Conclusion 

Aotearoa has a strong legal foundation for digital regulation. But the scale and complexity of digital technologies demand more than just “guidelines”. 

To protect digital consumers today and in the future, we need to shift toward a regulatory framework that is proactive, principled, and responsive to the realities of the constantly growing digital landscape. 

Responsible regulation is not about slowing down progress – it’s about shaping it.  
