
Navigating Our First AI Policy

Like many around the world, our team at Roots & Rivers has been navigating the remarkable potential and complex questions surrounding the use of generative AI in our work. On one hand, these tools offer real opportunities for creativity and efficiency. On the other, we are deeply mindful of the social and environmental risks, including the perpetuation of discrimination and societal bias, the erosion of privacy, unresolved questions about data ownership, and significant energy, water, and land use.


The first question for us, as for many others, was whether we should engage with AI at all. We soon realized that AI is here to stay, and that removing ourselves from the conversation forfeits our influence over how it is used. It became clear that AI is a tool, one that can be (and already has been) used for both good and harm. We want to be part of the conversation so that AI is used with intention and integrity to support our human-centred work.


Our journey to formalize an approach to this technology began with people. During a learning week, team member Austin Lui engaged in conversations with ecosystem members and peers to understand their approaches to AI and to inform our emergent AI policy. The resulting policy is our first step in charting a course: a living document that outlines our commitment to using AI ethically, strategically, and in service of our mission.



Acknowledging the Hard Questions

Before we could define our principles, we had to sit with the uncomfortable truths. A responsible AI policy must be grounded in an honest acknowledgment of its real-world impacts. We are actively grappling with:


  • Magnification of Bias: AI models are trained on vast internet datasets that carry a multitude of human biases. Without critical human oversight, these tools can easily perpetuate harmful stereotypes, misrepresent communities, and further entrench systemic inequities.


  • Data Privacy & Intellectual Property: The very nature of large language models raises complex questions about how data is sourced, used, and protected, and about who owns that data and the creative works derived from it.


  • Environmental Impact: The massive data centres that power these tools consume enormous amounts of energy and water. We recognize that every query contributes to this environmental footprint, and we have a responsibility to be mindful of our use.


These are not small issues, and they don’t have easy answers. Drafting our policy is our attempt to navigate them with care.


Our Principles in Response

In direct response to these challenges, we have anchored our policy in a set of core principles:


  • Human Connection Remains Central: Our primary commitment is to people. We will only use AI in ways that support, rather than replace, the empathy and relationships foundational to our work. The goal is to free up our capacity for deeper human connection.


  • Trust is Built Through Transparency: We are committed to being open about how we use AI. Our policy bakes in transparency and consent at every stage, ensuring our clients and community partners can make meaningful choices.


  • Bias and Harm are Actively Mitigated: We will not treat AI as neutral. We commit to critically reviewing AI-generated content through an equity lens, challenging its outputs, and taking active steps to prevent the perpetuation of harm.


An Invitation to Grapple With Us


We are in the very early days of this journey. Our policy will need to evolve as the technology and our understanding of it mature. We are approaching this work with humility and a deep desire to learn from others.


How is your organization navigating this new landscape? What have you learned? What are you still grappling with? We believe that by learning together, we can all help shape a future where technology serves to deepen our humanity, not diminish it.

