Responsible AI Policy

AI should improve decision quality without removing human accountability.

This Responsible AI Policy explains how Ministry of Insights uses AI-supported analysis, simulation and synthesis in decision intelligence work, while keeping human judgement, evidence discipline, privacy and accountability at the centre.

Plain-English summary

Responsible AI means AI use that is useful, limited, transparent and human-governed.

MOI uses AI to strengthen analysis, not to automate judgement. The goal is to help leaders see the decision system more clearly before they approve, fund, defend or implement a consequential choice.

This page is a policy draft intended for publication. It should be reviewed against your legal, contractual and client requirements before being adopted as a formal policy.

Human-led: humans remain accountable.

AI can assist analysis and simulation, but people retain responsibility for judgement, advice and final decisions.

Evidence-led: outputs must be traceable.

AI-supported findings should connect back to evidence, assumptions, source material and decision conditions.

Proportionate: use AI where it adds value.

AI should be used when it improves insight, speed, comparison or scenario testing, not for theatre.

Private: information should be protected.

Client, stakeholder and personal information should be handled under MOI’s privacy and confidentiality practices.

Policy detail

Ministry of Insights Responsible AI Policy

Last updated: May 2026
02. Role of AI in MOI work

AI may be used to support analysis, synthesis, comparison, scenario generation, risk identification, stakeholder mapping, evidence review and drafting. AI is not used as an autonomous decision-maker.

AI-supported outputs are treated as inputs to human judgement, not final conclusions.

03. Human accountability

MOI remains responsible for the professional judgement applied to its work. Clients remain responsible for their own organisational decisions, approvals and commitments.

  • AI does not approve recommendations.
  • AI does not replace leadership judgement.
  • AI does not remove the need for evidence, context or governance.
  • AI-supported analysis must be reviewed, interpreted and challenged by a human.
04. Evidence, traceability and uncertainty

MOI aims to separate evidence from assumption, source material from interpretation, and confidence from uncertainty. Where AI contributes to analysis, the relevant evidence, assumptions and limitations should remain visible enough to support decision-quality review.

Where information is incomplete, contested or uncertain, that uncertainty should be named rather than hidden behind confident language.

05. Privacy and confidentiality

MOI may work with sensitive organisational, operational, stakeholder or personal information. AI-supported tools must be used in a way that is proportionate to the task and aligned with confidentiality expectations.

Personal information and client-provided material should be handled in accordance with the Privacy Policy and any agreed engagement terms.

06. Data minimisation

MOI aims to use only the information reasonably needed for the analysis, simulation or decision-support task. Where possible, information should be reduced, summarised, anonymised or separated before AI-supported analysis is used.

07. Bias, context and limitations

AI systems can reflect bias, incomplete information, inaccurate patterns or overconfident reasoning. MOI treats AI output as something to be tested, not automatically trusted.

  • Outputs should be reviewed against the evidence base.
  • Local, organisational and stakeholder context should be considered.
  • Important decisions should not rely on a single AI-generated answer.
  • Alternative interpretations and risk scenarios should be considered where appropriate.
08. Use in scenario testing and simulation

MOI uses AI-supported simulation to explore possible decision pathways, stakeholder responses, implementation friction, operational constraints and second-order effects.

Simulation is not prediction. It is a structured way to examine plausible outcomes, test assumptions and identify conditions that may affect decision quality.

09. Transparency with clients

Where AI-supported analysis is material to an engagement, MOI aims to explain the role AI played in the work at an appropriate level. This may include describing how AI assisted with synthesis, comparison, scenario testing or drafting.

MOI does not present AI-generated output as independent proof. It is interpreted within the evidence and decision context.

10. Responsible use boundaries

MOI will not knowingly use AI-supported work to mislead, fabricate evidence, impersonate stakeholders, conceal material uncertainty or create a false impression of independent verification.

Where the evidence is weak, incomplete or uncertain, the output should say so.

11. Human review before delivery

AI-supported work should be subject to human review before being used in client-facing outputs. The review should consider accuracy, relevance, tone, evidence alignment, privacy, confidentiality, bias and decision usefulness.

12. Relationship to the wider Changeable ecosystem

MOI works alongside Changeable for AI implementation, process improvement and delivery, and Zero to AI for individual AI capability building. The same basic principle applies across the ecosystem: AI should support better human work, not disguise weak judgement.

13. Questions and concerns

For questions about responsible AI use, privacy, confidentiality or information handling, contact Ministry of Insights through the Contact page, email hello@changeable.co.nz, or call 0800 437 675.

Governance in practice

Responsible AI is a practice, not a badge.

The value of AI in MOI work is not speed alone. It is the ability to compare evidence, surface assumptions and test scenarios while keeping human accountability visible.

Frame

Define the decision question, evidence base, intended use and boundaries before using AI-supported analysis.

Test

Use AI to challenge assumptions, compare interpretations and explore plausible decision pathways.

Review

Human review remains required for accuracy, context, privacy, confidentiality and decision relevance.

Explain

Where AI materially contributes to analysis, explain its role and keep limitations visible.

Questions about responsible AI use?

Contact MOI before sharing highly sensitive information or commissioning work where AI use, privacy, confidentiality or stakeholder trust need to be carefully managed.

Contact Ministry of Insights for questions about responsible AI use.
Review the Privacy Policy for information handling and confidentiality principles.
Explore the MOI Lab system to understand how decision work is structured.
For practical AI implementation, see Changeable.