GDPR Obligations · Updated: 7 April 2026 · 6 min read

How to Create an AI Acceptable Use Policy for Your Business

Your employees are already using AI tools. An internal AI acceptable use policy sets clear rules for responsible use, protects personal data, and keeps your business GDPR-compliant. Here is a practical guide to building one.

Key Takeaways
  • Your employees are already using AI tools - with or without your permission. A policy gives them clarity and protects your business.
  • The GDPR requires you to document AI tool usage, have processing agreements in place, and inform data subjects.
  • Sensitive personal data (Article 9) and confidential business data should never go into AI tools without strict safeguards.
  • Start simple. A two-page policy that your team actually reads is better than a 30-page document that nobody follows.

Your team is already using AI - the question is whether you know about it

Here is the reality: your employees are using AI tools. They paste customer emails into ChatGPT to draft replies. They use Copilot to summarize meeting notes. They feed data into AI-powered tools to speed up their work. Most of them mean well - they want to be productive.

The problem is not that they use AI. The problem is that without clear guidelines, they have no idea what is safe to enter and what is not. One wrong paste - a customer complaint with personal details, a CV, an internal document with financial data - and you have a GDPR issue on your hands.

An AI acceptable use policy solves this. It tells your team what is allowed, what is not, and what to do when something goes wrong. It does not have to be a legal masterpiece. It has to be clear, practical, and enforceable.

What the GDPR requires when your business uses AI tools

Before diving into the policy structure, understand what the GDPR actually demands:

  • Documentation. Every AI tool that processes personal data must be recorded in your processing register. What data goes in? For what purpose? What is the legal basis?
  • Processing agreements. If an AI provider processes personal data on your behalf, you need a Data Processing Agreement (DPA). Most enterprise versions of ChatGPT, Copilot, and Claude offer these - free versions typically do not.
  • Transparency. If you use AI to process personal data of customers, employees, or other individuals, you must inform them. Your privacy policy should mention this.
  • Data minimization. Only enter the personal data that is strictly necessary. Better yet, anonymize data before entering it into any AI tool.
  • Transfer safeguards. Most AI providers process data on US servers. That is a transfer to a third country under the GDPR, which requires appropriate safeguards.
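To make the documentation duty concrete, the requirements above could map onto a single processing-register entry per tool. The sketch below is illustrative only - the field names, tool name, and values are assumptions, not a prescribed GDPR schema or the GDPRWise data model:

```python
# Illustrative processing-register entry for one AI tool.
# All field names and values are assumptions for the example.
register_entry = {
    "tool": "ChatGPT Enterprise",               # hypothetical example
    "data_categories": ["customer contact details"],
    "purpose": "drafting replies to customer emails",
    "legal_basis": "legitimate interest (Art. 6(1)(f) GDPR)",
    "dpa_in_place": True,                       # processing agreement signed
    "third_country_transfer": "US",             # where the provider processes data
    "transfer_safeguard": "EU-U.S. Data Privacy Framework / SCCs",
    "informed_data_subjects": True,             # mentioned in the privacy policy
}

# Every tool that touches personal data should answer at least these questions:
required_fields = {"data_categories", "purpose", "legal_basis", "dpa_in_place"}
missing = required_fields - register_entry.keys()
```

An empty `missing` set means the entry covers the minimum documentation points from the list above; anything left in it is a gap to close before approving the tool.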

Our companion article on AI tools and privacy covers the legal background in more depth. This article focuses on the practical policy you need internally.

The 10 sections your AI policy should cover

1. Scope - who does this apply to?

Make it explicit: the policy applies to everyone who does work for your organisation. Employees, freelancers, interns, temporary workers, contractors. If they use AI tools for any work-related task, the policy applies to them.

Be specific about what counts as an “AI tool” - not just ChatGPT, but also AI features in existing software like smart compose in email, AI summaries in your CRM, or AI-powered transcription services.

2. Approved vs. unapproved tools

Maintain a clear list of approved AI tools. For each tool, document:

  • The tool name and provider
  • Which subscription tier is approved (enterprise vs. free matters for GDPR compliance)
  • Whether a DPA is in place
  • What the tool may be used for
  • Any restrictions on data input

Any AI tool not on the approved list is off limits for work purposes. This is not about being restrictive - it is about knowing which tools handle your data and under what terms.

Review this list quarterly. New AI tools appear constantly, and employees will ask to use them.
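Teams that prefer to keep the approved list in a structured form (so it can be published on an intranet page or checked automatically) could record each entry with the fields from the bullet list above. This is a minimal sketch - the tool entry shown is hypothetical, and you should verify tiers and DPA status against your own contracts:

```python
from dataclasses import dataclass, field

@dataclass
class ApprovedTool:
    """One entry in the approved AI tools list (illustrative fields)."""
    name: str
    provider: str
    approved_tier: str                  # e.g. "Enterprise" - free tiers often lack a DPA
    dpa_in_place: bool
    permitted_uses: list[str] = field(default_factory=list)
    input_restrictions: list[str] = field(default_factory=list)

# Hypothetical example entry - check your own subscription and DPA first.
APPROVED_TOOLS = [
    ApprovedTool(
        name="ChatGPT",
        provider="OpenAI",
        approved_tier="Enterprise",
        dpa_in_place=True,
        permitted_uses=["drafting generic text", "brainstorming"],
        input_restrictions=["no personal data", "no confidential business data"],
    ),
]

def is_approved(tool_name: str) -> bool:
    """Anything not on the list is off limits for work purposes."""
    return any(t.name.lower() == tool_name.lower() for t in APPROVED_TOOLS)
```

The quarterly review then becomes a concrete task: walk the list, confirm each DPA is still in place, and add or remove entries.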

3. Prohibited data - what should NEVER go into AI tools

This is the most critical section. Be very specific about what employees must never enter into any AI tool:

  • Sensitive personal data (Article 9 GDPR): racial or ethnic origin, political opinions, religious or philosophical beliefs, trade union membership, genetic data, biometric data, health data, sex life or sexual orientation
  • Confidential business data: financial statements, strategic plans, merger or acquisition details, investor communications
  • NDA-protected materials: anything covered by non-disclosure agreements with clients or partners
  • Intellectual property: proprietary code, trade secrets, unpublished patents, product designs
  • HR records: performance reviews, disciplinary files, salary details, workplace accident reports, dispute documentation

Make it easy to remember: if the data is personal, confidential, or sensitive, it does not go into an AI tool.
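Some teams add a lightweight technical reminder on top of the rule of thumb: a pre-submission check that warns before text is pasted into an AI tool. The sketch below is deliberately crude and illustrative - a keyword check cannot reliably detect sensitive data and is no substitute for training and human judgement:

```python
import re

# Crude, illustrative patterns only. Real sensitive data detection needs a
# proper DLP tool; this just catches a few obvious red-flag phrases.
PROHIBITED_PATTERNS = {
    "possible health data": re.compile(r"\b(diagnosis|medical|patient)\b", re.I),
    "possible HR record": re.compile(r"\b(performance review|salary|disciplinary)\b", re.I),
    "possible financial data": re.compile(r"\b(financial statement|merger|acquisition)\b", re.I),
}

def flag_prohibited(text: str) -> list[str]:
    """Return warnings for text someone is about to paste into an AI tool."""
    return [label for label, pattern in PROHIBITED_PATTERNS.items()
            if pattern.search(text)]
```

A warning does not block the paste; it prompts the employee to stop and apply the rule of thumb above.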

4. Permitted use cases - what IS allowed

Balance the restrictions with clear examples of what employees can do. This prevents the policy from feeling like a blanket ban:

  • Drafting generic text (marketing copy, blog outlines, email templates) that contains no personal data
  • Brainstorming ideas, structuring arguments, or getting writing feedback
  • Translating non-confidential, non-personal content
  • Summarizing publicly available information
  • Generating code snippets for non-sensitive internal tools
  • Creating presentation outlines based on public information

The key principle: if the input contains no personal data and no confidential business information, most approved AI tools are fine to use.

5. Human review requirement

All AI-generated content must be reviewed by a human before it is used, sent, or published. No exceptions.

AI tools make mistakes. They produce incorrect information confidently. They can generate biased content. They sometimes reproduce copyrighted material. Your employees need to verify AI output for accuracy, bias, and appropriateness before it leaves their desk.

This is especially critical for external communications, customer-facing content, and any decisions that affect individuals.

6. Automated decision-making rules

When AI is used to make or support decisions about people - screening job applicants, evaluating employee performance, credit scoring, customer profiling - Article 22 of the GDPR applies.

Your policy should state clearly: any use of AI for automated decision-making about individuals requires prior approval from management and, where necessary, your privacy coordinator or legal advisor. A Data Protection Impact Assessment (DPIA) may be required. Human intervention must always be guaranteed.

7. Governance and responsibility

Specify who owns the policy:

  • Management is ultimately responsible for compliance
  • A designated privacy coordinator (or DPO if you have one) oversees implementation and handles questions
  • Team leads ensure their teams follow the policy in practice
  • Every employee is personally responsible for following the rules

Also clarify the consequences: violations of the AI policy are treated like any other breach of company policy and may lead to disciplinary measures.

8. Training and awareness

A policy nobody reads is useless. Include in your policy:

  • All new employees receive AI policy training during onboarding
  • Annual refresher training for all staff
  • Updates communicated whenever the policy or approved tools list changes
  • A clear contact point for questions (“not sure if you can use a tool? Ask your privacy coordinator before you start”)

Training does not have to be elaborate. A 30-minute session with practical examples and a Q&A is more effective than a two-hour lecture.

9. Incident handling

What happens when someone accidentally enters personal data into an AI tool? Your policy needs a clear incident procedure:

  1. Stop using the tool for that data immediately
  2. Report the incident to the privacy coordinator (within 24 hours)
  3. Document what data was entered, which tool was used, and when it happened
  4. Assess the risk - can the data be deleted? Was the tool set to train on user input? What is the potential impact on the data subjects?
  5. Follow your data breach procedure if needed (notification to the supervisory authority within 72 hours of becoming aware of the breach, notification to data subjects if the risk is high)

Make reporting easy and blame-free. If employees are afraid of punishment, they will hide mistakes instead of reporting them - making the situation worse.
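The five steps above can be mirrored in a minimal incident record, so that nothing gets lost between the report and the risk assessment. This is a sketch with assumed field names, not a prescribed breach-register format; the 72-hour clock here runs from the moment of becoming aware, per GDPR Article 33:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class AIIncident:
    """Minimal incident record mirroring the five steps above (illustrative)."""
    reported_at: datetime        # when the privacy coordinator was informed
    occurred_at: datetime        # when the data was entered into the tool
    tool: str
    data_entered: str            # a description, never the data itself
    training_data_enabled: bool  # was the tool set to train on user input?
    deletion_requested: bool = False
    high_risk: bool = False      # drives notification to data subjects

    def within_reporting_window(self) -> bool:
        """The policy asks for internal reporting within 24 hours."""
        return self.reported_at - self.occurred_at <= timedelta(hours=24)

    def breach_notification_deadline(self) -> datetime:
        """GDPR Art. 33: notify the authority within 72 hours of awareness."""
        return self.reported_at + timedelta(hours=72)
```

Keeping the record blame-free is what makes it work: the fields describe the situation, not the person who reported it.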

10. Review cycle

The AI landscape changes fast. Your policy should be reviewed:

  • At minimum annually
  • Whenever a new AI tool is adopted
  • After any incident that reveals a gap
  • When relevant regulations change (such as the EU AI Act implementation)

Assign a specific person or team responsible for the review and document the review date in the policy itself.

Practical tips: getting started

Start simple. A clear two-page document is better than a comprehensive 20-page policy nobody reads. You can expand later.

Involve your team. Ask employees which AI tools they already use. You might be surprised. Building the policy together creates buy-in.

Use real examples. “Do not enter sensitive data” is vague. “Do not paste a customer complaint email into ChatGPT” is concrete.

Make it accessible. Publish the policy where employees can find it - your intranet, shared drive, or employee handbook. Not buried in a SharePoint folder nobody opens.

Iterate. Your first version will not be perfect. Review it after three months, gather feedback, and adjust.

How GDPRWise helps

GDPRWise makes it easier to integrate AI tool usage into your broader GDPR compliance:

  • Processing register: Capture each AI tool as a processor, documenting what data it processes, the legal basis, and whether a DPA is in place
  • Staff privacy policy: Reference your AI acceptable use policy within the employee privacy documentation that GDPRWise helps you generate
  • Website scan: Detect AI-powered third-party services running on your website that you may not have documented yet

Having an AI policy is one piece of the puzzle. Making sure it connects to your processing register, your privacy statements, and your overall GDPR file is what makes it work in practice.

Ready to get your GDPR file in order?

GDPRWise scans your website, detects processing activities and third parties, and helps you build your complete GDPR file - including your processing register with all AI tools documented.

GDPRWise Editorial

This article was written by the GDPRWise team and reviewed by our privacy experts. We regularly review our content for accuracy and legal correctness.