AI can assist analysis and simulation, but people retain responsibility for judgement, advice and final decisions.
AI should improve decision quality without removing human accountability.
This Responsible AI Policy explains how Ministry of Insights uses AI-supported analysis, simulation and synthesis in decision intelligence work, while keeping human judgement, evidence discipline, privacy and accountability at the centre.
Responsible AI means AI that is useful, limited, transparent and human-governed.
MOI uses AI to strengthen analysis, not to automate judgement. The goal is to help leaders see the decision system more clearly before they approve, fund, defend or implement a consequential choice.
AI-supported findings should connect back to evidence, assumptions, source material and decision conditions.
AI should be used when it improves insight, speed, comparison or scenario testing, not for theatre.
Client, stakeholder and personal information should be handled under MOI’s privacy and confidentiality practices.
Ministry of Insights Responsible AI Policy
Purpose of this policy
This policy explains how Ministry of Insights uses AI-supported tools in its decision intelligence, AI simulation, advisory and assurance work. It applies to work delivered through the MOI Lab system, including Civic Lab, Insights Lab, Engage Lab, Change Lab, Consult Lab and Decision Assurance Lab.
Role of AI in MOI work
AI may be used to support analysis, synthesis, comparison, scenario generation, risk identification, stakeholder mapping, evidence review and drafting. AI is not used as an autonomous decision-maker.
AI-supported outputs are treated as inputs to human judgement, not final conclusions.
Human accountability
MOI remains responsible for the professional judgement applied to its work. Clients remain responsible for their own organisational decisions, approvals and commitments.
- AI does not approve recommendations.
- AI does not replace leadership judgement.
- AI does not remove the need for evidence, context or governance.
- AI-supported analysis must be reviewed, interpreted and challenged by a human.
Evidence, traceability and uncertainty
MOI aims to separate evidence from assumption, source material from interpretation, and confidence from uncertainty. Where AI contributes to analysis, the relevant evidence, assumptions and limitations should remain visible enough to support decision-quality review.
Where information is incomplete, contested or uncertain, that uncertainty should be named rather than hidden behind confident language.
Privacy and confidentiality
MOI may work with sensitive organisational, operational, stakeholder or personal information. AI-supported tools must be used in a way that is proportionate to the task and aligned with confidentiality expectations.
Personal information and client-provided material should be handled in accordance with the Privacy Policy and any agreed engagement terms.
Data minimisation
MOI aims to use only the information reasonably needed for the analysis, simulation or decision-support task. Where possible, information should be reduced, summarised, anonymised or separated before AI-supported analysis is used.
Bias, context and limitations
AI systems can reflect bias, incomplete information, inaccurate patterns or overconfident reasoning. MOI treats AI output as something to be tested, not automatically trusted.
- Outputs should be reviewed against the evidence base.
- Local, organisational and stakeholder context should be considered.
- Important decisions should not rely on a single AI-generated answer.
- Alternative interpretations and risk scenarios should be considered where appropriate.
Use in scenario testing and simulation
MOI uses AI-supported simulation to explore possible decision pathways, stakeholder responses, implementation friction, operational constraints and second-order effects.
Simulation is not prediction. It is a structured way to examine plausible outcomes, test assumptions and identify conditions that may affect decision quality.
Transparency with clients
Where AI-supported analysis is material to an engagement, MOI aims to explain, at an appropriate level of detail, the role AI played in the work. This may include describing how AI assisted with synthesis, comparison, scenario testing or drafting.
MOI does not present AI-generated output as independent proof. It is interpreted within the evidence and decision context.
Responsible use boundaries
MOI will not knowingly use AI-supported work to mislead, fabricate evidence, impersonate stakeholders, conceal material uncertainty or create a false impression of independent verification.
Where the evidence is weak, incomplete or uncertain, the output should say so.
Human review before delivery
AI-supported work should be subject to human review before being used in client-facing outputs. The review should consider accuracy, relevance, tone, evidence alignment, privacy, confidentiality, bias and decision usefulness.
Relationship to the wider Changeable ecosystem
MOI works alongside Changeable for AI implementation, process improvement and delivery, and Zero to AI for individual AI capability building. The same basic principle applies across the ecosystem: AI should support better human work, not disguise weak judgement.
Questions and concerns
For questions about responsible AI use, privacy, confidentiality or information handling, contact Ministry of Insights through the Contact page, email hello@changeable.co.nz, or call 0800 437 675.
Responsible AI is a practice, not a badge.
The value of AI in MOI work is not speed alone. It is the ability to compare evidence, surface assumptions and test scenarios while keeping human accountability visible.
- Define the decision question, evidence base, intended use and boundaries before using AI-supported analysis.
- Use AI to challenge assumptions, compare interpretations and explore plausible decision pathways.
- Human review remains required for accuracy, context, privacy, confidentiality and decision relevance.
- Where AI materially contributes to analysis, explain its role and keep limitations visible.
Questions about responsible AI use?
Contact MOI before sharing highly sensitive information, or before commissioning work where AI use, privacy, confidentiality or stakeholder trust needs to be carefully managed.
Ministry of Insights
- hello@changeable.co.nz
- New Zealand
- Phone 0800 437 675
- Monday – Friday, 8:00 am – 5:00 pm
Copyright © 2026 Ministry of Insights™. Powered by Changeable