
AI is an emerging, fast-changing field. Being aware of the following risks will help your organisation use it responsibly.

  • Authenticity: AI-generated content can feel generic and may not reflect your organisation's voice, relationships, or community knowledge. Always review and personalise any AI output before using it.
  • Transparency: Some funders, partners, and communities have expectations - or policies - around AI use. Being open about how you use AI builds trust. A position statement can be a good way to do this.
  • Uneven capability across staff: Team members will have different levels of comfort and experience with AI tools. Without shared guidance, this can create inconsistencies and put pressure on individuals - another reason a clear policy is valuable.
  • Bias: AI tools are trained on large datasets that may reflect existing biases. Outputs should always be reviewed critically, particularly when they relate to communities that have historically been marginalised.
  • Environmental impact: AI tools consume significant energy and water. For example, the International Energy Agency reports that a request made through ChatGPT, an AI-based virtual assistant, consumes ten times the electricity of a Google search. AI's expansion is also driving higher water use, emissions, and e-waste, raising sustainability concerns. Factor this into decisions about how, and how often, you use these tools.
  • Data privacy: Some AI tools store or use the information you input. Avoid entering confidential, personal, or sensitive data unless you have verified the tool's privacy settings. Check your organisation's policy before using any new tool, and be aware of any legal implications around data privacy.
  • Cultural and data sovereignty: Consider how your use of AI aligns with Te Tiriti principles and with your obligations to Māori, Pacific, and ethnic communities, and to anyone whose data you hold.