Internal AI Usage Policy

Applies to: Employees and contractors
Effective date: 4th May 2026

This policy may be updated in the future, in which case the date of the most recent update will also be shown here.


1. Purpose

This policy sets out how artificial intelligence (AI) tools and systems may be used responsibly on behalf of the Rust Foundation by Rust Foundation employees and individuals directly contracted by the Rust Foundation. It does not apply to individuals contributing to the Rust Project (which has its own governance structure), to projects in the Rust Innovation Lab, or to the broader ecosystem. Other projects and organisations are, however, welcome to adopt and adapt this policy for their own use, should they so choose.

Our goal is to encourage innovation and productivity while protecting our users, contributors, community, and the integrity of our open source projects. We want to enable employees and contractors to understand what is, and is not, acceptable, and enable the wider community to understand and have trust in how we operate. 

This policy reflects our commitment to:

  • Open source principles
  • Transparency and accountability
  • User trust and safety
  • Legal and ethical responsibility

2. Scope

This policy applies to:

  • All staff and contractors
  • Anyone acting on behalf of the Rust Foundation in a professional capacity (e.g., interns and board members)
  • All AI tools, including but not limited to:
    • Generative AI (e.g., code, text, images)
    • AI-assisted development tools (e.g., Claude Code)
    • Machine learning models used internally or released publicly

3. Guiding Principles

Our use of AI must align with the following principles:

  1. Openness: AI use should not undermine open source licensing or contributor rights.
  2. Transparency: Where AI materially contributes to outputs, this should be disclosed clearly and concisely*.
  3. Human Responsibility: Humans remain accountable for decisions, code, and outputs.
  4. Safety and Quality: AI-generated outputs must meet our internal standards for high-quality, secure, and reliable work.
  5. Respect for Users and Contributors: AI must not be used in ways that mislead or disadvantage others, both inside the Rust Foundation and in the wider community.
  6. Respect in Collaboration: Staff and Contractors must respect the AI usage policies of the communities they contribute to.
  7. Respect for Staff and Contractor Working Preferences: use of AI is not mandated, and individuals are free to make their own decisions on which, if any, AI tools to use, unless Rust Foundation management has indicated that a particular tool should not be used.

*Materially, in this case, refers to instances where the AI fundamentally generated, structured, and/or substantively influenced the output, beyond purely editorial changes. See Section 8 for further details.

4. Acceptable Uses of AI

AI tools may be used to:

  • Assist with writing or refactoring code
  • Support documentation, comments, or examples
  • Help with debugging, testing ideas, or code review preparation
  • Summarise issues, pull requests, or discussions
  • Improve accessibility (e.g. clearer documentation, translations where appropriate)
  • Conduct desk-based research
  • Support the drafting of documentation, including for communications, operations, and outreach purposes
  • Support minute-taking and record keeping where appropriate and when the consent of all participating parties has been confirmed

All such uses are subject to:

  • Human review
  • Compliance with licensing
  • Compliance with Rust Foundation Security and Data Protection Policies
  • Any policies that govern external collaborations
  • The standards outlined in this policy

Where this policy conflicts with those of external collaborators, agreement on AI usage should be confirmed with all parties prior to the effective start date of the collaboration.

5. Prohibited Uses of AI

AI tools must not be used to:

  • Introduce code that violates open source licenses or copyright
  • Submit AI-generated code or content without review or understanding
  • Misrepresent AI-generated work as solely human-authored where disclosure is required
  • Process personal data without explicit permission
  • Generate malicious code, exploits, or intentionally insecure functionality, with the exception of relevant role-related tasks (e.g., for security purposes)
  • Make automated decisions that affect users or contributors without human oversight

6. Licensing and Intellectual Property

  • All AI‑assisted contributions must comply with the relevant existing open source licenses (e.g., Apache 2.0 and MIT).
  • Staff and Contractors are responsible for ensuring they have permission to submit AI‑generated content.
  • AI tools that impose restrictive terms (e.g., ownership claims or training rights over outputs) must not be used for Rust Foundation contributions unless approved by Rust Foundation management.

Where uncertainty exists, staff and contractors should seek guidance from their line manager before use.

7. Security and Privacy

  • Do not input sensitive, confidential, or personal data into AI tools.
  • Secrets, credentials, private keys, and internal security information must never be shared with external AI systems.
  • AI‑generated code must undergo at least the same security review as human‑written code.

8. Transparency and Disclosure

  • Significant AI involvement in code, documentation, or design decisions should be disclosed in commit messages, pull requests, or documentation where appropriate; an illustrative example follows this list. Individual departments may determine their own processes for disclosure.
  • For user‑facing features that rely on AI, documentation must clearly explain:
    • That AI is being used
    • Its intended purpose and limitations
    • Any relevant risks or constraints
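
As a purely illustrative sketch, not a mandated format, a disclosure in a commit message or pull request description might read:

    Refactor report-generation script

    The first draft of this change was generated with an AI coding assistant
    and was then reviewed, tested, and revised by the author.

Exact wording, placement, and any structured trailer used for such disclosures are left to each department's own process, as noted above.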

9. Agentic AI Use

AI agents are AI tools that take actions on behalf of the user (browsing the web, executing code, managing files, sending emails, etc.), and they introduce different risks from those of generative AI tools alone.

When using agentic AI tools, Rust Foundation employees and contractors must:

  • Grant agents access only to the tools required for the task at hand (see the illustrative sketch after this list).
  • Refrain from granting broad access to software systems that contain sensitive information or credentials.
  • Retain explicit human approval for irreversible actions.
  • Treat third-party input on agentic tasks as untrusted. Agents that read emails, browse the internet, or process external documents can be manipulated by malicious content from any of those sources.
  • Report unintended agentic behaviour to their line managers.
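
As an illustration of these points, a least-privilege setup for an agentic coding session might look roughly like the following. This is a hypothetical sketch only: the configuration format, rule names, and available capabilities differ between tools, so consult the documentation for the specific agent in use.

    Allowed without prompting:   reading and editing files inside the project checkout;
                                 running the project's build, test, and lint commands
    Denied:                      reading credential stores, environment files, or data
                                 outside the project; sending email or posting to
                                 external services
    Requires explicit approval:  pushing commits, deleting files, network access, and
                                 any other action that is hard to reverse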

Agentic AI tools should not be used when working on critical infrastructure unless this has been discussed with, and approved by, a senior manager.

All agentic tools must go through the same approval process as any other AI tool and are subject to all other sections of this policy.

10. Governance and Review

  • This policy will be reviewed periodically as AI technologies, regulations, and community expectations evolve.
  • While the Rust Foundation will generally accommodate individual preferences in the use of specific AI tools, it reserves the right to restrict or withdraw approval for specific AI tools.
  • Concerns or violations should be reported to the employee’s line manager.

11. Enforcement

Failure to comply with this policy may result in:

  • Removal of access or privileges
  • Performance management processes
  • In the most serious cases, termination of employment

12. Policy Values Statement

AI is a tool, not a substitute for responsibility. At the Rust Foundation, we believe AI should amplify human collaboration, not replace it, and should be used only to strengthen our shared open source ecosystem.