Responsible and Cautious AI Use Policy March 2026
Background
With the growing use of AI in the workplace, the organization seeks to use AI in a way that expands staff efficiency and capacity, while promoting transparency and trust among its stakeholders.
This policy defines how WaterISAC uses Artificial Intelligence (AI) responsibly, cautiously, and ethically in support of its mission. AI should enhance human work, increase efficiency, and improve impact while protecting people, communities, and trust.
WaterISAC supports the thoughtful use of artificial intelligence (AI) to enhance collaboration, streamline workflows, and improve service. Staff may use pre-approved AI tools for tasks such as summarizing and analyzing data, researching information, and generating ideas, provided these tools are used transparently and responsibly. Questions about pre-approved tools should be raised with a supervisor, the Managing Director, or the CEO.
All AI-generated content must be reviewed for accuracy, appropriateness, and compliance with copyright and other applicable laws. When using AI tools, WaterISAC staff, including contractors, are expected to adhere to ethical guidelines, which include:
- Avoiding plagiarism;
- Ensuring proper attribution;
- Respecting intellectual property rights;
- Safeguarding sensitive data of the organization and WaterISAC members;
- Avoiding biased or discriminatory outputs; and
- Applying critical thinking skills and professional expertise.
WaterISAC encourages staff to stay informed about AI developments and to raise questions or concerns as part of our commitment to continuous learning and innovation. Concerns should be directed to a supervisor, the Managing Director, or the CEO.
Scope
This policy applies to all staff, volunteers, contractors, and partners who use, manage, or make decisions informed by AI tools on behalf of the organization. It covers all AI-enabled technologies, including generative AI, automated decision-support, data-driven systems, and the AI note-taking technology used by our members. AI note-takers will be denied access to WaterISAC meetings and webinars, and members will be informed of this when they join.
Key Definitions
- Generative AI: AI systems that create new content—such as text, images, audio, video, or code—based on patterns learned from existing data. Generative AI outputs require human review, as they may be inaccurate or biased.
- Automated Decision-Support: AI or algorithmic systems that analyze information and provide recommendations, predictions, scores, or rankings to assist human decision-making. These systems do not make final decisions but inform them. Human judgment and oversight are always required, especially when decisions affect individuals or communities.
- Data-Driven Systems: Systems that use data analysis, statistical models, or machine learning to identify patterns, trends, or insights that support operations, planning, reporting, or evaluation. These systems rely on the quality and appropriateness of the data used and must be designed to avoid misuse, bias, or privacy risks.
Core Principles
- Human Responsibility: AI supports human judgment; it does not replace it. Humans remain fully responsible for decisions, actions, and outcomes influenced by AI. AI outputs must always be reviewed before use.
- Mission Alignment: AI may only be used in ways that align with WaterISAC’s mission, values, and commitment to serving its members and the community ethically and respectfully.
- Safety and Accuracy: AI outputs may be incomplete, biased, or incorrect. All AI-generated content must be checked for accuracy, appropriateness, and potential harm before being shared or acted upon.
- Fairness and Equity: AI must not be used in ways that discriminate against or disadvantage individuals or groups. Care must be taken to identify and reduce bias in data, outputs, and decisions.
- Transparency: The organization will be transparent about meaningful uses of AI, especially when AI affects communications, services, or decisions involving members, vendors, sponsors, partners, staff, or board members.
- Privacy and Dignity: Personal, confidential, or sensitive information about members, vendors, sponsors, partners, staff, or board members must not be entered into AI tools unless explicitly approved by the Managing Director or CEO and adequately protected. Data minimization must always be practiced.
Acceptable Use
AI may be used to:
- Assist with research, drafting, translation, summarization, and administrative tasks.
- Support program planning and operational efficiency with human oversight.
- Automate low-risk, repetitive tasks when safeguards are in place.
Unacceptable Use
AI must not be used to:
- Make final decisions that significantly affect members (e.g., eligibility, services, employment) without human review.
- Generate misleading, deceptive, or harmful content.
- Replace empathy, human judgment, or community engagement.
- Violate privacy, confidentiality, or intellectual property rights.
- Process, store, or analyze member-to-member communications or interactions. The use of AI note-taking tools during member meetings is prohibited; meeting participants are expected to rely on traditional methods of notetaking to ensure the confidentiality and security of the discussion. AI note-takers will be denied access to WaterISAC meetings and webinars, and members will be informed when they join.
- Include personal, confidential, or sensitive information about members, vendors, sponsors, partners, staff, or board members unless explicitly approved and appropriately protected.
AI Segregation and Data Boundaries
- AI tools must be kept logically and operationally segregated from core member systems, databases, and internal communications platforms.
- Member-to-member communications (including messages, forums, support groups, or peer interactions) must not be entered into AI systems.
- Sensitive information must never be used to train, prompt, or refine AI tools.
Only approved, low-risk data may be used with AI, and access must be limited to authorized users. Requests for such usage must be approved by the Managing Director or CEO and are only granted to specific staff for the specific uses outlined in the request.
Exceptions to the Policy
In certain circumstances, the use of AI tools may be permitted with prior approval from the Managing Director or CEO. Such exceptions will be evaluated on a case-by-case basis, considering factors such as the nature of the request, the sensitivity of the information discussed, accessibility-related issues, and the potential benefits of using AI tools.
Communication of the Policy
The policy will be communicated to all employees, contractors, volunteers, and stakeholders through official channels.
Oversight and Accountability
- Each AI use must have a responsible staff member or team.
- High-risk or novel AI uses require Managing Director or CEO approval.
- Concerns, errors, or misuse must be reported promptly to a supervisor, the Managing Director, or the CEO.
Review
This policy will be reviewed periodically to reflect evolving technology, legal requirements, and best practices in responsible AI use. Any amendments to the policy will be communicated to all relevant parties in a timely manner.
Conclusion
This policy on the use of AI tools is a measure aimed at protecting sensitive information, maintaining the integrity of records, and fostering a secure communication environment. By adhering to this policy, WaterISAC aims to foster a culture of trust and reliability in which privacy and accuracy are preserved.

