Your engineers are using Copilot. Your sales team is in ChatGPT. Someone in marketing signed up for an AI writing tool last week and nobody asked whether customer data could go into it. If that sounds familiar, you are not alone. Most companies are using AI at scale and governing it not at all.
An AI governance policy is the document that changes that. It defines how your company uses, evaluates, and controls AI tools and systems: who can use what, with which data, under whose oversight, and what happens when something goes wrong. It is not a ban on AI. It is the framework that lets you use it without the liability.
The frameworks that require or directly depend on it:
- EU AI Act: mandatory risk management and human oversight for high-risk AI deployers, obligations phasing in from 2025
- ISO 42001: the dedicated AI management system standard; your AI governance policy is its core documentary foundation
- GDPR: automated decision-making (Art. 22), data processor agreements (Art. 28), and DPIAs (Art. 35) all require documented AI governance
- HIPAA: where AI tools process protected health information, your security standards apply to them too
By the end of this guide, you will know exactly what goes in an AI governance policy, how to build one your team will actually follow, and where it fits in your compliance obligations.
Here is what I will cover:
- What an AI governance policy is and how it differs from adjacent concepts
- Why the urgency has shifted in 2025 (EU AI Act, shadow AI, auditor scrutiny)
- Who needs one and at what level of detail
- What to include, with a complete free template you can adapt immediately
- How to build and roll it out step by step
- How it maps to the EU AI Act, ISO 42001, and GDPR
- The specific failure modes that cause these policies to stop working
What Is an AI Governance Policy?
Someone on your team signs up for an AI writing tool, pastes a customer email thread into it to get help drafting a reply, and nobody has set any rules about that. Was it fine? Was it a data breach? Was the vendor processing that data under a valid agreement? Without a policy, you have no way to answer those questions.
An AI governance policy is the formal document that defines how your organisation procures, deploys, monitors, and controls AI systems. It covers which tools are allowed, with which data, by whom, under what conditions, and with what oversight requirements. It is not a blanket ban. It is a framework for using AI responsibly and being able to demonstrate that use to regulators, customers, and auditors.
AI Governance and Policy: How the Terms Are Used
A few terms you will see used interchangeably:
- AI policy vs AI governance policy: same thing, different names. Use whichever is clearer for your audience.
- AI governance vs AI ethics: ethics is the values your company holds about AI. Governance is the operational controls that put those values into practice. You need both, but they are not the same document.
- AI governance policy vs AI strategy: your strategy is the roadmap for what AI will help you achieve. Your governance policy is the rules of the road for how you use it safely.
AI Policy and Governance: Where It Sits in Your Compliance Stack
An AI governance policy does not stand alone. It connects to:
- Your data classification policy, which defines what data can and cannot go into AI tools
- Your vendor risk management policy, because AI tool vendors are third-party data processors
- Your incident response policy, because an AI-related data breach is still a data breach
Under ISO 42001, the AI governance policy is a core component of your AI management system. Under GDPR, it operationalises your obligations around automated decision-making and processor agreements. Under the EU AI Act, it is the documentary starting point for demonstrating compliance.
When you get to implementation, you will also want your risk management policy and vendor risk management policy to reference this document explicitly.
Who Owns an AI Governance Policy?
Typically the CISO, CTO, or DPO, depending on how your team is structured. For small companies, one named AI owner is enough. What matters is that one person is responsible for keeping the policy current, managing the approved tool list, and reviewing incidents. Diffuse ownership means the policy goes stale.
Who Does This Policy Apply To?
Everyone: full-time employees, contractors, freelancers, and third parties who use AI on the company’s behalf or with the company’s data. If someone has access to your systems and access to an AI tool, this policy applies to them.
What Counts as AI Under This Policy?
This is where most policies have a gap. When people think “AI tools,” they picture ChatGPT, Copilot, and whatever got announced last week. But AI is already embedded in tools your team uses every day: the writing assistant in your project management tool, the smart features in your CRM, the autocomplete in your IDE, the AI-powered filters in your HR platform.
A policy that only covers standalone AI products will miss most of your actual AI exposure. Your definition needs to capture AI-powered features inside other software, not just dedicated AI applications.
Why AI Safety Policy and Governance Can’t Wait Until After Your First Incident
Most governance conversations happen after something goes wrong. A contractor pasted a client’s PII into ChatGPT to summarise meeting notes. A sales rep put a deal memo into an LLM to draft a follow-up email. By the time anyone found out, the data had already been processed by a vendor with no data processing agreement in place.
The uncomfortable truth is that this is not an edge case. It is what happens every day in companies that have AI tools everywhere and governance nowhere.
Three forces have changed the stakes in 2025:
Shadow AI is already inside your company. Employees are using multiple AI tools without IT’s knowledge. Without a policy and an approved tool list, you do not have a map of where your data is going. You are not governing AI; you are hoping nothing goes wrong.
The EU AI Act has teeth now. Starting in 2025, obligations under the Act are phasing in. Companies deploying high-risk AI in the EU face mandatory governance requirements. Not best-practice guidance. Legal obligations with enforceable penalties.
Auditors are asking. ISO 42001 audits, GDPR audits, and increasingly enterprise vendor questionnaires now include questions about AI tool inventory and governance. Companies without a documented policy are being flagged.
The risk is not theoretical. A data breach caused by an employee feeding personal health information into an unapproved LLM can simultaneously trigger HIPAA penalties, GDPR fines, and a 72-hour breach notification obligation. That is not a compliance problem. That is an existential one.
AI Governance Policy 2025: What Has Changed
2024 was the year companies started using AI everywhere. 2025 is the year regulators started holding them accountable.
The EU AI Act’s General Purpose AI model obligations came into effect in August 2025. High-risk AI system obligations phase in from August 2026. ISO 42001, published in 2023, is now appearing in enterprise vendor questionnaires and customer due diligence requests.
The practical implication: if you are deploying AI, or if your team is using AI tools with company or customer data, the regulatory clock is running. A policy written today is not early. It is already slightly late.
Do You Need a Formal AI Governance Policy?
The honest answer for most people reading this is yes. But the right level of formality depends on what you are doing with AI and what your obligations are.
You definitely need one if:
- You are subject to the EU AI Act: deploying AI that affects EU residents, especially in high-risk categories such as recruitment, credit, medical, education, law enforcement, or critical infrastructure
- You process personal data with AI tools: GDPR requires a lawful basis and documented controls for any AI-assisted processing of personal data
- You are pursuing ISO 42001: a documented AI governance policy is a foundational requirement of the standard
- You are pursuing HIPAA compliance and any AI tools could touch protected health information
- Your team uses AI tools that process customer data or proprietary code
- You sell to enterprise buyers who ask about AI governance in vendor security assessments
You can start lighter if:
- You are very early stage, fewer than five people, no compliance obligations, and no customer data in any AI tool
Even then: a one-page acceptable use statement is better than nothing. “What data can go into which tools” is a question you need to have answered in writing before an incident forces you to answer it in retrospect.
Building Your AI Governance Policy Framework: What to Cover
Most policies fail for one of two reasons. Either they are too vague (“use AI responsibly”) to be enforceable, or they are so rigid that the team ignores them and uses AI tools anyway. The goal is a policy specific enough to mean something and practical enough that people can follow it.
Here is what every AI governance policy needs to cover:
| Policy section | What to include |
|---|---|
| Purpose | Why this policy exists: the risks it addresses, the regulatory obligations it supports |
| Scope | Which AI systems are covered, which users, which business activities. Be explicit about embedded AI features, not just standalone tools. |
| Definitions | AI system, high-risk AI, approved AI tool, personal data in the AI context, agentic AI |
| Roles and responsibilities | AI Policy Owner, IT/Security, Legal/DPO, Department Heads, all employees |
| Approved AI tools | How tools get reviewed and approved; where the approved list lives; how to request a new tool |
| Prohibited uses | Specific banned uses. “Do not input customer PII into tools without a DPA” is enforceable. “Use AI responsibly” is not. |
| Data handling | What data can and cannot go into AI tools, mapped to your data classification policy |
| Transparency and disclosure | When AI-generated output must be disclosed externally or to affected individuals |
| Human oversight | Which decisions require a human in the loop before an AI output is acted upon |
| AI risk assessment | When a formal risk assessment is required before deploying a new AI system |
| Incident reporting | What counts as an AI incident, how to report it, who investigates, timelines |
| Training | Who completes AI governance training, how often, what it covers |
| Enforcement | Specific consequences for specific violations, not just generic disciplinary language |
| Review cadence | Annual minimum, plus triggers: new class of AI tool, regulatory change, incident |
A coverage gap most policies have: most AI governance policies were written for generative AI tools where a human prompts and reviews output before anything happens. Agentic AI, AI that takes real-world actions autonomously (sending emails, running code, calling APIs), is a different risk profile. I will cover how to handle it in a dedicated section below.
Free AI Governance Policy Template
This is a complete, usable policy. Fill in the bracketed fields and adapt the prohibited uses and oversight sections to your company’s actual AI usage.
AI Governance Policy Examples: What These Policies Look Like in Practice
Two patterns you will see most often:
The approved-tool-list model is the most common for startups and mid-size companies. The policy centres on a maintained list of approved AI tools. Employees can use anything on the list; anything not on the list needs to go through a review before use. Simple to explain, simple to enforce, practical to maintain.
The principles-based model is more common in larger organisations. The policy sets out principles and evaluation criteria instead of a specific list. Employees assess tools against those criteria. This gives more flexibility but is much harder to enforce and audit.
For most companies, the approved-tool-list model is the right starting point. You can layer in principles-based guidance as you mature.
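One reason the approved-tool-list model is easy to enforce is that it reduces to a lookup. A minimal sketch of the list as a machine-readable registry, where each tool carries the highest data classification it is approved for; the tool names and classification levels here are illustrative, not recommendations:

```python
# Hypothetical approved-tool registry: tool -> highest data
# classification it may process. Names and levels are examples only.
APPROVED_TOOLS = {
    "chatgpt-enterprise": "internal",
    "github-copilot": "confidential",
    "grammarly-business": "internal",
}

# Data classification levels, least to most sensitive.
LEVELS = ["public", "internal", "confidential", "personal-data"]

def is_use_allowed(tool: str, data_level: str) -> bool:
    """Allow use only if the tool is approved and the data's
    classification does not exceed the tool's approved ceiling."""
    if tool not in APPROVED_TOOLS:
        return False  # unapproved tool: must go through review first
    ceiling = APPROVED_TOOLS[tool]
    return LEVELS.index(data_level) <= LEVELS.index(ceiling)

print(is_use_allowed("github-copilot", "internal"))        # True
print(is_use_allowed("chatgpt-enterprise", "personal-data"))  # False
print(is_use_allowed("random-new-tool", "public"))         # False
```

The useful property is the default: anything not on the list is denied, which is exactly the rule the policy states in prose.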
Sample AI Governance Policy
[Company Name] AI Governance Policy
Policy Owner: [CISO / CTO / DPO]
Approved by: [CEO / Board]
Effective Date: [Date]
Last Reviewed: [Date]
Next Review: [Date — recommend annually, or after any material regulatory change or AI-related incident]
Version: 1.0
1. Purpose
This policy defines how [Company Name] governs the use, procurement, and oversight of artificial intelligence tools and systems. Its purpose is to ensure AI is used effectively, safely, and in compliance with applicable laws and standards, including the EU AI Act, GDPR, and ISO 42001.
2. Scope
This policy applies to all employees, contractors, and third parties who use, procure, build, or deploy AI systems on behalf of [Company Name]. It covers all AI tools, including generative AI (large language models, image and code generators), automated decision-making systems, and AI-powered features embedded in third-party software.
| AI system type | In scope |
|---|---|
| Standalone generative AI tools (ChatGPT, Claude, Gemini, Copilot) | Yes |
| AI features embedded in existing SaaS tools (Notion AI, Grammarly, Salesforce Einstein) | Yes |
| Internally built or custom-deployed AI models | Yes |
| Agentic AI tools that take autonomous real-world actions | Yes |
| Rules-based automation with no machine learning component | No |
3. Definitions
| Term | Definition |
|---|---|
| AI system | Any tool, feature, or service using machine learning, natural language processing, generative AI, or automated decision-making |
| Approved AI tool | An AI system formally reviewed and added to the company’s approved AI tool list |
| High-risk AI | An AI system used in decisions affecting health, safety, employment, finance, or legal rights, as defined in the EU AI Act Annex III |
| Personal data | Any information that identifies or can identify a natural person, as defined under GDPR Art. 4 |
| Agentic AI | An AI system that takes actions autonomously on behalf of a user (browsing, sending emails, calling APIs, committing code) without requiring human approval for each action |
4. Roles and Responsibilities
| Role | Responsibility |
|---|---|
| AI Policy Owner ([Name/Title]) | Maintains the policy; manages the approved tool list; reviews AI incidents; commissions risk assessments |
| IT / Security | Technical review of new AI tools; data handling and security assessment; vendor contract review |
| Legal / DPO | Privacy and regulatory review; GDPR DPIA for high-risk AI; EU AI Act compliance classification |
| Department Heads | Ensure team compliance; submit new AI tool requests for review; report suspected violations |
| All Employees | Follow this policy; complete required training; report AI incidents and concerns |
5. Approved AI Tools
All AI tools used for work purposes must be reviewed and approved by [AI Policy Owner] before use. Employees may not use unapproved AI tools with company data or in connection with their work duties.
The approved AI tool list is maintained by [AI Policy Owner] and is available at [location / link]. It is reviewed quarterly. To request approval for a new tool, submit a request to [email / form]. Requests will be reviewed within [5] business days.
6. Prohibited Uses
The following uses of AI are prohibited:
- Using unapproved AI tools for any work-related purpose
- Inputting personal data (names, email addresses, health records, financial data, location data) into any AI tool not explicitly approved for personal data processing
- Inputting confidential company information (source code, deal terms, client data, financial forecasts, proprietary research) into AI tools without a current, signed data processing agreement covering that data type
- Using AI to make final hiring, termination, disciplinary, or performance evaluation decisions without documented human review
- Publishing or distributing AI-generated content externally without human review and, where required by law or contract, disclosure
- Using personal AI accounts or subscriptions to process company or customer data
- Deploying agentic AI systems with real-world action permissions (sending emails, making purchases, committing code, calling external APIs) without prior written approval from [AI Policy Owner] and defined human oversight controls
7. Data Handling
When using approved AI tools, employees must:
- Follow the [Data Classification Policy] to determine what data may be used with AI tools
- Not exceed the data permissions explicitly granted for each approved tool: a tool approved for internal content drafting is not approved for processing customer personal data
- Ensure any AI tool processing personal data has a current, signed data processing agreement that covers the relevant data type and processing purpose
- Verify that the tool’s data residency requirements are compatible with applicable regulations before inputting any regulated data
8. Transparency and Disclosure
- AI-generated content included in client deliverables, marketing materials, or external communications must be disclosed where required by law, client agreement, or platform terms
- AI-assisted decisions that materially affect individuals must be documented and explainable on request, in compliance with GDPR Art. 22
- Employees must not present AI-generated work as original human work in contexts where that distinction is material
9. Human Oversight
A human must review and approve AI outputs before they are acted upon in the following areas:
- Hiring, performance evaluation, and termination decisions
- Medical or clinical recommendations
- Legal and compliance determinations
- Financial decisions above [threshold]
- Safety-critical operations
- Any application classified as high-risk under the EU AI Act
10. AI Risk Assessment
Before deploying a new AI system, [AI Policy Owner] must conduct a documented risk assessment covering: data privacy implications, information security risks, potential for biased or discriminatory outputs, vendor contractual protections, and EU AI Act risk classification.
| Risk factor | Assessment required |
|---|---|
| Personal data processing | DPA in place; DPIA if high-risk or large-scale processing |
| EU AI Act high-risk category | Conformity assessment; technical documentation |
| Agentic AI with autonomous action permissions | Action scope review; human oversight controls documented |
| Third-party vendor | Vendor security review; DPA signed |
| Internally built or custom model | Full risk assessment; ongoing monitoring plan |
11. Incident Reporting
Employees must report AI-related incidents to [AI Policy Owner / security@company.com] within [2] business days of becoming aware. An AI incident includes: accidental disclosure of personal or confidential data via an AI tool, unexpected or harmful AI outputs, use of an unapproved tool with company data, and any regulatory breach.
Incidents will be logged, investigated, and reviewed under the [Incident Response Policy]. Incidents constituting a personal data breach will be reported to the relevant supervisory authority within 72 hours as required by GDPR Art. 33.
12. Training
All employees must complete AI governance training within 30 days of joining and annually thereafter. Training covers: what this policy requires, how to identify AI tools needing approval, what data may not be used with AI, and how to report an incident. Training completion is tracked by [HR / IT]. Department heads are responsible for ensuring their teams complete training on schedule.
13. Enforcement
| Violation type | Consequence |
|---|---|
| Unintentional first breach with no data exposure (e.g. using an unapproved tool with internal non-sensitive data) | Verbal warning; mandatory retraining |
| Repeated or deliberate breach | Written warning; formal disciplinary process |
| Breach causing data exposure or regulatory violation | Immediate investigation; potential termination; regulatory reporting where required |
| Contractor or third-party breach | Notification per contract terms; potential contract termination |
14. Exceptions
Exceptions to this policy must be submitted in writing to [AI Policy Owner] with: the specific rule to be excepted, the business reason, an alternative control, the risk accepted, an expiry date (maximum 12 months), and the name of the risk owner. Approved exceptions are logged and reviewed at the next annual policy review.
15. Review Cadence
This policy will be reviewed annually, with an out-of-cycle review triggered by:
- A significant change in the company’s AI tool usage or risk profile
- A material regulatory development (new EU AI Act obligations, GDPR guidance on AI, ISO 42001 updates)
- An AI-related security or privacy incident
- A significant change in the company’s data processing activities
The AI Policy Owner is responsible for initiating the review and obtaining re-approval from [CEO / Board]. All employees will be notified of material changes.
Version History
| Version | Date | Author | Summary of changes |
|---|---|---|---|
| 1.0 | [Date] | [Name] | Initial version |
How to Build and Roll Out an AI Governance Policy
The most common mistake is writing the policy first and discovering afterwards that it does not match how the team actually uses AI. You end up with a document that prohibits tools people have been using for months and approves things nobody uses. It gets ignored on day one.
Start with an inventory, not a document.
Audit what AI is already in use. Ask every department. Check expense reports for AI tool subscriptions. Check browser extensions, IDE plugins, and SaaS integrations. You will find tools nobody thought to mention. This is your baseline: you cannot govern what you have not mapped.
Classify by risk. Which tools touch personal data? Which are integrated into customer-facing systems? Which are influencing decisions that affect people? High-risk tools need more controls than internal drafting assistants.
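The inventory and classification steps above can be captured in a simple structure so the baseline is queryable rather than a spreadsheet that goes stale. A sketch under assumed attributes; the three questions mirror the ones in the text, and the tool entries are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class AITool:
    name: str
    touches_personal_data: bool
    customer_facing: bool
    affects_decisions_about_people: bool

def risk_tier(tool: AITool) -> str:
    # Ordering mirrors the text: decisions about people are highest
    # risk, then personal data or customer-facing use, then internal.
    if tool.affects_decisions_about_people:
        return "high"
    if tool.touches_personal_data or tool.customer_facing:
        return "medium"
    return "low"

# Hypothetical baseline inventory from the audit step.
inventory = [
    AITool("ide-autocomplete", False, False, False),
    AITool("crm-lead-scoring", True, True, False),
    AITool("cv-screening-assistant", True, False, True),
]

for t in inventory:
    print(t.name, risk_tier(t))
```

High-tier tools are the ones that need risk assessments and human-oversight rules; low-tier tools can live under the general acceptable-use rules.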
Assign an AI Policy Owner. One named person, accountable for maintaining the policy and the approved tool list. Without clear ownership, the policy will be out of date within a year.
Draft the policy. Use the template above. Adapt the prohibited uses and oversight sections to your actual risk landscape. A healthcare company has different AI risks than a marketing agency.
Legal and privacy review. If you have a DPO or in-house counsel, they need to see this before it is published. GDPR obligations around automated decision-making and EU AI Act classifications are easy to miss without legal eyes.
Communicate and train. A policy nobody has read is not a control. Send it to the whole team. Explain the key rules in plain language. Require acknowledgement.
Build the approved tool list before you publish. If you publish the policy with no approved tools, you have just banned everything. Employees will ignore the policy and keep using what they were already using. Build the list first.
Collect evidence. Training completion records, signed acknowledgements, tool approval records, risk assessments. These are what a GDPR inquiry or ISO 42001 review will ask for.
Set a review date. AI regulation is moving faster than almost any other compliance area. A policy written today needs a review date within 12 months. Annual is the minimum; semi-annual is better while the regulatory landscape is still settling.
AI Governance Policy Best Practices
A few things that separate policies that work from ones that sit in a folder:
Keep the prohibited uses list specific. “Do not input customer PII into tools without a DPA” is a rule someone can follow. “Use AI responsibly” is not a rule.
Separate the policy from the procedure. The policy states the rules. The procedure describes how to request a tool, how to report an incident, where the approved list lives. Keep them separate: procedures can update without going back to the board for a new policy approval.
Make approval easy. If the review process for a new AI tool takes two weeks and involves three sign-offs, employees will use the tool without asking. A lightweight, fast review process is more effective than a rigorous one that nobody uses.
AI Governance Policy and Compliance: EU AI Act, ISO 42001, and GDPR
An AI governance policy does not exist in isolation. Here is how it connects to the frameworks that matter most.
| Framework | Relevant requirements | How the AI governance policy helps |
|---|---|---|
| EU AI Act | Art. 9 (risk management system), Art. 10 (data governance), Art. 14 (human oversight), Art. 26 (obligations of deployers of high-risk AI systems) | Directly satisfies risk management and human oversight requirements. The policy is the documentary foundation for EU AI Act compliance; without it, you cannot demonstrate the system is in place. |
| ISO 42001 | Cl. 5 (leadership and AI policy), Cl. 6 (planning), Cl. 8 (operation), Cl. 9 (performance evaluation) | ISO 42001 is structured around a documented AI policy and clear objectives. This is that document. Without it, you cannot claim conformance with the standard. |
| GDPR | Art. 22 (automated decision-making), Art. 28 (processor agreements), Art. 35 (DPIA) | Defines when DPIAs are required for AI systems; mandates DPAs with AI vendors; documents your legal basis for AI-assisted processing; governs profiling and automated decisions that affect individuals. |
| HIPAA | § 164.306 (security standards), § 164.312 (technical safeguards) | Prevents protected health information from entering unapproved AI tools; extends your security rule to AI-assisted processing; defines AI incident reporting obligations for covered entities. |
On the EU AI Act specifically: if you deploy AI in any of the high-risk categories (recruitment screening, credit scoring, medical AI, educational assessment, law enforcement, critical infrastructure), mandatory obligations including conformity assessments and technical documentation phase in from August 2026. An AI governance policy is the prerequisite. You cannot demonstrate compliance without it, but it is not sufficient on its own.
On ISO 42001: this is the standard specifically built for AI governance, published in 2023. If you are pursuing ISO 27001 and want AI-specific coverage, ISO 42001 is the right path. It is designed to sit alongside ISO 27001, not replace it, and it maps directly to EU AI Act requirements in a way that general information security controls do not.
Governing Agentic AI: When Your Policy Needs to Go Further
Someone on your team sets up an AI agent to draft and send outreach emails on their behalf. Nobody asked whether that was in scope. It was not in the policy, because the policy was written for chatbots, not agents.
Most AI governance policies today have this gap. They were designed for generative AI tools where a human prompts, reviews the output, and then decides what to do with it. Agentic AI — AI that takes real-world actions autonomously (browsing, sending emails, writing and committing code, calling external APIs, booking things) — is a fundamentally different risk profile.
The difference is timing. With a chatbot, the human is the last step before anything happens in the world. With an agent, the action happens first.
If your team is experimenting with AI agents, Copilot Agents, Claude Projects in agent mode, n8n AI workflows, or custom-built agent pipelines, your policy needs to cover them explicitly.
Here is what to address:
| Question | What your policy needs to answer |
|---|---|
| What actions can an agent take without human approval? | Define a specific whitelist of permitted autonomous actions vs. actions requiring prior sign-off |
| What data can an agent access? | Scope data access explicitly. Agents should not have broader permissions than the human who deployed them. |
| Who is accountable if an agent takes a wrong action? | Every deployed agent needs a named human owner accountable for its behaviour |
| How are agent actions logged? | Require audit trails for all agent actions with external effects: emails sent, API calls made, code committed |
| What is the approval process for deploying a new agent? | Higher bar than standard AI tool approval: agents with external action permissions need explicit sign-off from the AI Policy Owner |
The gap between “we have an AI governance policy” and “our policy actually covers agents” is where the next generation of AI incidents will happen. You can close it now with a one-page appendix to your existing policy, before it becomes something you are explaining to a regulator.
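The whitelist, accountability, and audit-trail requirements in the table above can be sketched as a permission gate that every agent action passes through. This is a minimal illustration, assuming a hypothetical `outreach-agent` and an invented `AGENT_POLICY` structure; a real deployment would persist the log to an append-only store and hook into the agent framework's own permission layer:

```python
import datetime

# Hypothetical per-agent policy: actions the agent may take
# autonomously vs. actions needing prior human sign-off.
AGENT_POLICY = {
    "outreach-agent": {
        "owner": "j.smith",  # named human accountable for the agent
        "autonomous": {"draft_email"},
        "requires_approval": {"send_email", "call_external_api"},
    },
}

audit_log = []  # in practice: an append-only, tamper-evident store

def request_action(agent, action, approved_by=None):
    policy = AGENT_POLICY.get(agent)
    if policy is None:
        raise PermissionError(f"{agent} is not a registered agent")
    allowed = (
        action in policy["autonomous"]
        or (action in policy["requires_approval"] and approved_by is not None)
    )
    # Log every attempt with an external effect, allowed or not.
    audit_log.append({
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "agent": agent,
        "action": action,
        "approved_by": approved_by,
        "allowed": allowed,
    })
    if not allowed:
        raise PermissionError(f"{action} blocked for {agent}: approval required")
    return "ok"

request_action("outreach-agent", "draft_email")            # autonomous
request_action("outreach-agent", "send_email", "j.smith")  # human-approved
```

The design choice worth copying is that denial is the default and the log records refused attempts too: the audit trail is what you show a regulator, and refusals are as informative as approvals.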
Where AI Governance Policies Usually Break Down
I have seen well-intentioned AI governance policies fail in predictable ways. Most of them share one thing: the document was created, but the governance was not.
No tool inventory before writing the policy. The policy was drafted in a vacuum. It does not reflect how AI is actually used in the company. Employees follow the rules they know and ignore the ones that do not apply to anything they do. You end up with a perfectly written policy and zero actual governance.
Vague prohibitions. “Do not use AI irresponsibly” tells nobody anything. A prohibition needs to be specific enough that an employee can immediately tell whether something is in or out. If you have to interpret the policy to apply it, you will apply it inconsistently every time.
No approved tool path. If getting a new AI tool approved is slow or unclear, employees will use it anyway. Shadow AI is not usually malicious. It is the path of least resistance. Make the review process fast enough that asking is easier than ignoring.
Ignoring embedded AI. The writing assistant in Notion, the smart autocomplete in your IDE, the AI features in your CRM, the auto-scheduling in your calendar tool. If your policy treats AI as a separate category of product rather than something embedded in the software your team uses every day, it will miss most of your actual exposure.
No training. A policy employees have not read is not a control. Training does not need to be long. It needs to explain the three or four rules people will actually encounter: what the approved tool list is, what data cannot go into AI tools, and how to report something that looks wrong.
Agentic AI not covered. Policies written for chatbots do not cover AI agents. If your team is using or experimenting with autonomous AI workflows, your policy has a gap right now.
Treating it as a one-time task. AI regulation is evolving faster than any other compliance area right now. The EU AI Act’s obligations are phasing in through 2026. ISO 42001 is new and still being interpreted in practice. A policy written in early 2024 needs a significant review in 2025. Annual at minimum; semi-annual is better while the landscape is still settling.
AI Governance at Every Stage: From Startup to Scale-Up
The right level of governance depends on what you are doing with AI and what your compliance obligations are. You do not need enterprise-grade bureaucracy to start. You do need something written down.
For Startups and Small Teams
If you are under 20 people with no formal compliance requirements, a lightweight policy is entirely appropriate. What you need:
- A one-page acceptable use statement covering approved tools, prohibited data inputs, and how to request approval for a new tool
- One named AI owner, probably the CTO
- One clear rule everyone understands: no customer or personal data in unapproved tools
- An annual review reminder
Keep it simple enough that everyone reads it. A five-page policy nobody reads is worse than a one-page policy everyone understands.
For Growing Companies Pursuing ISO 42001 or GDPR Compliance
As you scale and take on compliance obligations, the policy needs to grow with you:
- Full policy covering all sections in the template above
- Formal tool approval workflow and a maintained approved list
- Training completion tracking and employee acknowledgements
- Risk assessments for new AI deployments
- DPIA process for AI systems processing personal data
- Annual review with documented sign-off
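A maintained approved-tool list is easiest to keep current when it lives as structured data rather than a wiki page, because the same record can drive enforcement and feed your evidence trail. A minimal Python sketch of what such a register might look like; every tool name, vendor, and field here is hypothetical, not a prescribed schema:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ApprovedTool:
    """One entry in the approved AI tool register (illustrative fields only)."""
    name: str
    vendor: str
    risk_level: str           # e.g. "low" | "medium" | "high"
    permitted_data: set[str]  # data classes allowed as input
    dpa_signed: bool          # data processing agreement in place
    approved_on: date
    next_review: date

# Hypothetical register entries for illustration
REGISTER = [
    ApprovedTool("ExampleWriter", "ExampleCo", "medium",
                 {"public", "internal"}, True,
                 date(2025, 1, 15), date(2026, 1, 15)),
]

def is_permitted(tool_name: str, data_class: str) -> bool:
    """Check whether a data class may be used with a tool on the register."""
    for tool in REGISTER:
        if tool.name == tool_name:
            return data_class in tool.permitted_data
    return False  # not on the register means not approved

print(is_permitted("ExampleWriter", "internal"))  # True
print(is_permitted("ExampleWriter", "customer"))  # False: customer data not permitted
```

The default-deny at the end is the point: a tool that is not in the register is treated as unapproved, which is exactly the rule the one-page startup policy states in prose.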
This is the level most companies reading this guide are working towards.
For Larger Companies with EU AI Act Exposure
If you are deploying AI in high-risk categories:
- AI risk classification aligned with EU AI Act Annex III categories
- Conformity assessment process for high-risk AI systems
- Technical documentation as required by the Act
- Board or committee-level oversight of the AI governance programme
- A dedicated AI risk register
- Separate procedures for agentic AI deployment
The EU AI Act’s high-risk provisions come with documentation requirements that go beyond a governance policy. The policy is the foundation; the conformity assessment and technical file are built on top of it.
Managing Your AI Governance Policy with ComplyJet
The hardest part of AI governance is not writing the policy. It is keeping it maintained, keeping evidence that it is actually being followed, and keeping it mapped to the right frameworks as your compliance programme grows.
ComplyJet handles the operational side: policy creation from customisable templates, employee acknowledgement tracking so you have a record of who has read and accepted the policy, automated review reminders so it does not go stale, and control mapping to EU AI Act, ISO 42001, GDPR, and HIPAA.
When an auditor or enterprise customer asks how you govern AI use, the answer is not the policy document alone. It is the evidence trail: who acknowledged it, when it was last reviewed, what tools went through approval, what incidents were logged. That is what ComplyJet maintains alongside the policy itself.
FAQs
What does an AI governance policy cover?
An AI governance policy is the formal document that defines how your organisation uses, evaluates, and controls AI tools and systems. It covers which tools are approved, what data may be used with AI, who is accountable for AI decisions, which situations require human oversight, and what happens when something goes wrong. It is not a ban on AI. It is the framework that lets you use AI responsibly and demonstrate that use when auditors or customers ask about it.
How do I develop an AI governance policy?
Start with an inventory of AI tools already in use, not a blank document. Once you know what you are governing, classify the tools by risk level and assign a named AI Policy Owner.
Then draft the policy using the template in this guide as a starting point. Get legal or privacy review if you have GDPR or EU AI Act exposure, communicate it to the team, collect acknowledgements, and set an annual review date. The most common mistake is writing the policy before the inventory.
How do I create an AI agent governance policy?
Agentic AI (tools that take real-world actions autonomously) needs specific controls beyond a standard AI governance policy. Your policy needs to address five questions: what actions an agent can take without human approval, what data it can access, who is accountable for its behaviour, how its actions are logged, and what the approval process is for deploying a new agent.
Start with a one-page appendix to your existing AI governance policy covering these five questions before you try to build a standalone document.
Why does policy-only AI governance fail at scale?
A document without enforcement is not a control. Policy-only AI governance breaks down for predictable reasons: no tool approval path means shadow AI goes underground; no training means employees do not know the rules; no inventory means the policy does not reflect reality. At scale, governance requires process (how tools get approved), tooling (how evidence is collected), and regular review (how the policy stays current). The document is where it starts, not where it ends.
Does my AI governance policy need to cover the EU AI Act?
If you deploy AI systems in the EU AI Act’s high-risk categories and those systems affect EU residents, yes, and the policy is only the beginning. The Act requires conformity assessments and technical documentation on top of a governance framework.
An AI governance policy covering risk management and human oversight is the necessary foundation for all of it. Worth knowing: ISO 42001 is the dedicated AI management system standard that maps directly to EU AI Act requirements. If you want structured governance that satisfies both, ISO 42001 is the framework to build towards.
Related Policies
Data Classification Policy: defines what data can and cannot go into AI tools. A prerequisite for writing any AI data handling rules that are actually enforceable.
Information Security Policy: the parent policy that AI governance sits under in most compliance frameworks. Your AI governance policy operationalises the AI-specific aspects of your broader information security commitments.