Rights & Requests · Updated: 11 April 2026 · 6 min read

Automated Decision-Making and Profiling: What Are the Rules?

Article 22 GDPR gives individuals the right not to be subject to decisions based solely on automated processing. This article explains when the rules apply, the exceptions, and what your business needs to do in practice.

Key Takeaways
  • People have the right not to be subject to decisions based solely on automated processing that produce legal or similarly significant effects
  • Profiling alone is not prohibited - it is the solely automated decision that triggers Article 22
  • Even when automated decisions are allowed, you must offer safeguards like human intervention
  • If you use AI tools to make decisions about people, you likely need a human in the loop

What is automated decision-making?

Automated decision-making is when a system makes a decision about a person without any meaningful human involvement. Think of software that automatically rejects a loan application based on a credit score, or an algorithm that filters out job candidates before anyone reviews their CV.

Article 22 GDPR gives individuals the right not to be subject to a decision based solely on automated processing - including profiling - that produces legal effects or similarly significant effects concerning them.

The key words are “solely” and “legal or similarly significant effects.” Both conditions must be met for Article 22 to apply.

Profiling vs automated decision-making

These two concepts are related but different.

Profiling is the automated analysis of personal data to evaluate certain aspects of a person - their work performance, economic situation, health, personal preferences, reliability, behaviour, or location.

Automated decision-making is acting on that analysis without human intervention.

Profiling alone is not prohibited under Article 22. You can use analytics to segment your customers or score leads. The restriction kicks in when you use that profiling to make a decision that has legal or significant effects - and no human is meaningfully involved.

When does Article 22 apply?

Article 22 applies when both conditions are met:

  1. The decision is based solely on automated processing (no meaningful human review)
  2. The decision produces legal effects or similarly significant effects

The processing will often involve profiling, but Article 22 covers any solely automated decision with such effects, whether or not profiling is used.
| Scenario | Solely automated? | Significant effect? | Article 22 applies? |
|---|---|---|---|
| Loan application auto-rejected by credit scoring algorithm | Yes | Yes - denied access to credit | Yes |
| AI screens CVs and auto-rejects candidates | Yes | Yes - denied job opportunity | Yes |
| Insurance premium set entirely by risk profiling algorithm | Yes | Yes - financial impact | Yes |
| Fraud detection auto-blocks a bank account | Yes | Yes - denied access to funds | Yes |
| Product recommendation engine suggests items | Yes | No - no legal or significant effect | No |
| AI screens CVs, but a recruiter makes the final hiring decision | No | N/A - human in the loop | No |
| Content personalisation on a website | Yes | No - no significant effect | No |
| ChatGPT drafts a letter that a person reviews and sends | No | N/A - human makes the decision | No |
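The two-part test can be sketched as a simple predicate. This is a minimal illustration of the logic, not legal advice; the function name and scenarios are our own, mirroring the table above:

```python
def article_22_applies(solely_automated: bool, significant_effect: bool) -> bool:
    """Article 22 is triggered only when BOTH hold: no meaningful
    human review AND a legal or similarly significant effect."""
    return solely_automated and significant_effect

# Scenarios from the table:
print(article_22_applies(True, True))    # loan auto-rejected by scoring -> True
print(article_22_applies(True, False))   # product recommendations -> False
print(article_22_applies(False, False))  # recruiter makes final decision -> False
```

Note that either answer alone is not enough: a solely automated but trivial decision, or a significant decision with genuine human review, falls outside Article 22.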

Three exceptions where automated decisions are allowed

Even when Article 22 would normally apply, automated decision-making is permitted in three cases:

1. Necessary for a contract

The automated decision is necessary to enter into or perform a contract with the individual. For example, an instant credit decision for an online purchase where manual review would make the service impractical.

2. Authorised by law

EU or member state law explicitly allows the automated decision-making. The law must include suitable safeguards for the individual’s rights.

3. Explicit consent

The individual has given explicit consent to the automated decision-making. This must be specific, informed, and freely given - not buried in general terms and conditions.

Required safeguards - even with exceptions

Even when one of the three exceptions applies, you must still provide these safeguards:

  • Right to human intervention - the individual can ask for a person to review the decision
  • Right to express their point of view - they can explain their situation
  • Right to contest the decision - they can challenge the outcome

You also cannot use automated decision-making based on special categories of data (health, ethnicity, political opinions, etc.) unless you have explicit consent or a substantial public interest basis with appropriate safeguards.

When is a DPIA required?

A Data Protection Impact Assessment (DPIA) is required when automated decision-making creates a high risk. This typically includes:

  • Systematic profiling with significant effects on individuals
  • Large-scale automated processing of personal data
  • Combining datasets in ways individuals would not reasonably expect
  • Processing sensitive data through automated systems

If you are using AI tools to evaluate, score, or categorise people, a DPIA is almost certainly required.

What this means for your business in practice

If you run an SME and use AI tools or automated systems, here is what to check:

Step 1: Map your automated processes

List every tool or system that makes decisions about individuals. Include hiring tools, credit checks, fraud detection, customer scoring, and any AI-powered automation.
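A simple register is enough to start. The sketch below is a hypothetical structure (the class, field names, and entries are illustrative, not a prescribed format) showing how such an inventory can flag the systems that need a closer Article 22 review:

```python
from dataclasses import dataclass

@dataclass
class AutomatedProcess:
    """One row in a hypothetical register of automated systems."""
    name: str
    decides_about_individuals: bool
    meaningful_human_review: bool
    significant_effect: bool

register = [
    AutomatedProcess("CV screening tool", True, False, True),
    AutomatedProcess("Fraud detection", True, False, True),
    AutomatedProcess("Product recommendations", True, False, False),
]

# Flag entries that decide about people with no human review
# and a significant effect - these need an Article 22 review.
flagged = [p.name for p in register
           if p.decides_about_individuals
           and not p.meaningful_human_review
           and p.significant_effect]
print(flagged)  # ['CV screening tool', 'Fraud detection']
```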

Step 2: Check if a human is meaningfully involved

A human “in the loop” only counts if they genuinely review the decision and have the authority to change it. Rubber-stamping an algorithm’s output is not meaningful human review.

Step 3: Assess the effects

Does the automated process produce legal effects (denied a contract, terminated a service) or similarly significant effects (financial impact, denied an opportunity)? If yes and no human is involved, Article 22 applies.

Step 4: Implement safeguards

For any process where Article 22 applies:

  • Add genuine human review before final decisions
  • Create a process for individuals to contest decisions
  • Document your legal basis (contract, law, or explicit consent)
  • Inform individuals that automated decision-making is taking place
  • Include this information in your privacy policy

Step 5: Consider a DPIA

If the processing involves profiling with significant effects, conduct a DPIA before you start.

Common mistakes

  • Assuming “a human clicks approve” is meaningful review - the person must actually assess the case, not just confirm the system’s recommendation
  • Forgetting to inform people - your privacy policy must explain automated decision-making, the logic involved, and the potential consequences
  • Using AI tools without considering Article 22 - if an AI tool makes decisions about people for you, the GDPR obligations still apply to your business
  • Ignoring profiling transparency - even when Article 22 does not apply, you still need to be transparent about profiling in your privacy policy under Articles 13 and 14
Check your automated processes

GDPRWise scans your privacy setup and flags where automated decision-making rules may apply to your business.

GDPRWise Editorial

This article was written by the GDPRWise team and reviewed by our privacy experts. We regularly review our content for accuracy and legal correctness.