Written by Insights

Why Public Sector AI Projects Fail: They Are Designed for Committees, Not Citizens

In the public sector, artificial intelligence holds enormous promise: faster processing of citizen applications, more accurate risk assessments in social services, predictive maintenance for infrastructure, and efficient allocation of limited resources. Yet across governments worldwide—including in New Zealand—many AI initiatives stall, deliver underwhelming results, or quietly disappear after substantial investment.

The failure rate for AI projects is high in general, often cited at 70–85% or more across enterprise and public contexts by sources such as Gartner, RAND, and NTT Data, but the public sector faces amplified risks. The core issue is rarely the technology itself: not a bad algorithm, not insufficient computing power. More often, projects are undermined by structural realities: they are built for committees, risk-averse sign-off processes, and multi-stakeholder consensus rather than for the end user—the citizen.

This design-by-committee dynamic guarantees mediocrity in several predictable ways.

  1. Misaligned or Diluted Objectives from the Start

Public sector projects typically begin with broad, aspirational goals shaped in steering groups, workshops, and inter-agency consultations. By the time requirements are documented, the original problem (e.g., “reduce wait times for benefit applications”) has been watered down to accommodate every department’s priorities, legal constraints, and political sensitivities.

RAND Corporation research identifies misunderstanding or miscommunication of the core problem as a leading cause of AI failure. In government, this is exacerbated: what starts as a citizen-focused need becomes a lowest-common-denominator specification that satisfies internal stakeholders but fails to deliver meaningful impact for the public.

  2. Endless Layers of Governance and Approval

Committees multiply: project boards, risk registers, ethics panels, procurement teams, privacy impact assessments, and Cabinet-level oversight. Each layer adds scrutiny but rarely deep technical insight. Decisions drag on for months, pilots lose momentum, and the original urgency evaporates.

In New Zealand, historical public sector IT failures (such as large hospital information system abandonments costing millions) often trace back to similar governance overload—protracted procurement, shifting requirements, and fear of political embarrassment. AI projects inherit the same machinery: risk-averse cultures prioritise “no surprises” over bold experimentation, leading to safe-but-shallow solutions that never scale.

  3. Focus on Internal Compliance Over Citizen Outcomes

Public sector AI is frequently optimised for audit trails, explainability to auditors, and defensibility under Official Information Act (OIA) requests rather than for usability or speed for citizens. Interfaces become clunky as they accommodate every edge case raised in committee, and features get stripped out to minimise perceived risk.

The result? Systems that technically “work” but feel bureaucratic and impersonal—exactly what citizens already criticise about government services. As one OECD analysis notes, many government AI efforts remain trapped in pilots because implementation barriers (skills gaps, data silos, cultural resistance) aren’t addressed when the focus is inward.

  4. Data and Integration Challenges Amplified by Silos

Public data is notoriously fragmented—legacy systems, departmental silos, inconsistent formats. AI needs clean, integrated, high-quality data to thrive, but committee-driven projects often defer tough decisions on data sharing or modernisation. Instead, they settle for narrow pilots using whatever data is easiest to access.

Research consistently identifies poor data quality and availability as top failure drivers (e.g., MIT Sloan research reported in Forbes linking a 95% enterprise AI failure rate to data issues; Mathtech and OpenText analyses find similar patterns in the public sector). In government, inter-agency politics and privacy fears make integration even harder, dooming projects before they start.
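
To make the integration barrier concrete, here is a minimal Python sketch (using pandas) of the kind of pre-project data audit that committee-driven initiatives tend to defer. The agencies, column names, and records are invented for illustration; the point is how quickly mismatched keys, inconsistent formats, and duplicates surface once two silos are placed side by side.

```python
import pandas as pd

# Two toy extracts standing in for departmental silos. The agencies,
# column names, and records are invented for illustration only.
benefits = pd.DataFrame({
    "client_id": ["A1", "B2", "B2", "C3"],                  # duplicate key
    "application_date": ["2025-01-10", "2025-01-12", "2025-01-12", None],
    "status": ["approved", "pending", "pending", "approved"],
})
housing = pd.DataFrame({
    "ClientID": ["A1", "C3", "D4"],                         # different key name
    "applied": ["10/01/2025", "14/01/2025", "15/01/2025"],  # different date format
})

def audit(name: str, df: pd.DataFrame, key: str) -> None:
    """Print the basic defects that block cross-agency integration."""
    duplicate_keys = int(df[key].duplicated().sum())
    null_rates = df.isna().mean().round(2).to_dict()
    print(f"{name}: {len(df)} rows, {duplicate_keys} duplicate key(s), "
          f"null rates {null_rates}")

audit("benefits", benefits, key="client_id")
audit("housing", housing, key="ClientID")

# Joining the silos forces the deferred decisions: a shared key and a
# single date format must be agreed before any model sees this data.
merged = benefits.merge(housing.rename(columns={"ClientID": "client_id"}),
                        on="client_id", how="outer")
print(f"merged: {len(merged)} rows; records missing from either silo appear as NaN")
```

Even this toy audit shows why "use whatever data is easiest to access" produces narrow pilots: the merge only becomes meaningful after the hard cross-agency standardisation work is done.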

  5. Resistance to Iteration and Real User Feedback

Private-sector AI succeeds through rapid iteration: build, test with users, refine. Public sector projects, shaped by committee consensus, resist change once scoped. Citizen testing is often tokenistic or late-stage, and negative feedback triggers more reviews rather than quick fixes.

The outcome is systems designed for sign-off, not adoption—mediocre by design.

A Path Forward: Citizen-Centred Design in the Public Sector

Fixing this doesn’t require abandoning governance—it requires reorienting it.

  • Start with the citizen problem, not the committee wishlist. Use small, cross-functional teams empowered to define narrow, high-impact scopes.
  • Prioritise rapid pilots with real users early and often, treating feedback as essential rather than a risk.
  • Invest in foundational data modernisation and cross-agency standards before chasing shiny AI tools.
  • Build "minimum lovable products" that deliver visible value quickly, creating momentum and trust.
  • Shift governance from endless sign-off to lightweight, outcome-focused assurance.

New Zealand’s public sector has strengths—pragmatism, a relatively small scale for testing, and growing AI guidance from MBIE and the Government Chief Digital Office. But until projects are truly designed around citizens rather than committees, many will continue to underperform.

The cost isn’t just financial. It’s lost opportunities to make government services faster, fairer, and more responsive at a time when public trust is fragile.

#PublicSectorAI #GovernmentInnovation #AIGovernance #NewZealandPublicService #DigitalTransformation #MinistryOfInsights
