From a public broadcaster to a student newsroom to the union representing working journalists, New Zealand's media sector has spent recent months putting its AI positions on paper. The documents differ in scope and detail, but they converge on the same core principles: humans write the news, AI does not; transparency with audiences is non-negotiable; and generative AI applied to journalism raises risks that enthusiasm for the technology does not dissolve.
RNZ's AI Principles, published in August 2024, set the baseline for public media. The document is direct: "RNZ will generally not publish, broadcast or otherwise knowingly disseminate work created by generative AI." Permitted uses include transcription, translation, and research assistance, with any AI output treated as unverified source material that a human must check before publication. The principles invoke Te Tiriti alongside the RNZ Charter, require disclosure when AI has been used, and insist on senior editorial sign-off for any exceptions. The framing throughout is protective of audience trust, described as "taonga we must protect."
Te Waha Nui, the student-run newsroom at AUT, arrived at almost identical conclusions independently. Their policy, published in March 2026, opens with the same premise: "News is best created by humans." AI may assist with transcription, translation, and research. It may not write. Every story carries a disclosure. The policy closes with a pointed note: "AI was not used in the creation of this policy." For a student newsroom, the reasoning goes beyond editorial principle. Kyla Blennerhassett, Te Waha Nui's AI lead, put it plainly: if readers cannot tell whether AI wrote something, they will not trust anything. The students also argued that outsourcing the writing task undermines the skill development that journalism training is meant to provide.
E tū, whose members include New Zealand journalists, has taken a broader view. Their statement calls for Māori journalists to be fully involved in any AI development to protect te reo Māori and Treaty principles, noting that AI systems tend to reflect dominant perspectives and underrepresent Aotearoa's diversity. They are developing sector-wide guidelines and have called for government involvement, placing New Zealand's conversation within a wider international effort that includes the Paris Charter and work by Australian media alliances.
The convergence is itself notable: a public broadcaster, a university student newsroom, and a union have reached similar conclusions without coordinating. The consensus position: AI is a tool for workflow assistance, not a substitute for reportorial judgment, and any use requires disclosure and human oversight. Where they differ is in scope. RNZ and Te Waha Nui are drawing internal operational lines. E tū is pushing for sector standards and structural protections for workers, including recognition that commercial pressures will always pull toward cheaper production.
That commercial pressure is visible in global data. Research published last year found that roughly 9% of newly published articles were partially or fully AI-generated, with smaller and local outlets accounting for a disproportionate share. News deserts, the communities already underserved by collapsing local media economies, are getting more AI-generated content, not less. The policies coming out of RNZ and Te Waha Nui represent what well-resourced, principle-driven organisations can commit to. The harder question is what happens in the newsrooms without the resources or the mandate to hold that line.
For now, New Zealand's media sector has articulated a clear position: the story belongs to the journalist. Whether that position holds as financial pressures intensify is the question none of these documents can answer.