GDPR Compliance Guide for Using AI Chatbots at Work
In February 2026, the UK Upper Tribunal warned that putting client documents into ChatGPT breaches client confidentiality and waives legal privilege. The solicitor involved was reported to two regulatory bodies. He had been uploading client emails into ChatGPT to improve his drafts.
He is not unusual. Employees across every industry are pasting personal data into AI chatbots daily. Most don't realise they're creating a data processing activity that falls squarely under GDPR.
What GDPR says about sending data to AI chatbots
Under GDPR, personal data is any information relating to an identified or identifiable natural person. When an employee types a client's name, email address, medical condition, or financial details into ChatGPT, Claude, or Gemini, that constitutes processing of personal data under Article 4(2).
The AI provider receives, stores, and processes that data on their servers. Under Article 6, this processing requires a lawful basis. The most commonly cited bases are consent (Article 6(1)(a)) and legitimate interest (Article 6(1)(f)).
The problem: in most cases, neither basis has been established. The data subject (the person whose data was typed into the chatbot) has not given consent. And the organisation has not conducted the balancing test required for legitimate interest.
The specific risks under GDPR
There are five areas where AI chatbot use creates GDPR exposure.
1. No lawful basis for processing
Every time an employee sends personal data to an AI provider, the organisation needs a lawful basis. Most organisations have not assessed this. They have no record of the processing activity in their ROPA (Record of Processing Activities), no legitimate interest assessment, and no consent mechanism.
2. Inadequate data processing agreements
Under Article 28, organisations must have a data processing agreement (DPA) with any processor handling personal data on their behalf. While OpenAI, Anthropic, and Google offer DPAs for their enterprise tiers, most employees use free or personal accounts that may not be covered by the organisation's DPA.
3. International data transfers
ChatGPT processes data on servers in the United States. Under Chapter V of GDPR, transferring personal data outside the EEA requires adequate safeguards. While the EU-US Data Privacy Framework provides a mechanism, organisations must verify that the specific AI provider participates and that the transfer is covered.
4. Data subject rights
Under Articles 15-22, data subjects have rights including access, rectification, erasure, and objection. If an employee has sent a client's personal data to ChatGPT, how does the organisation respond to a subject access request? The data now exists on OpenAI's servers, potentially used for model training, with limited ability to retrieve or delete it.
5. Data protection impact assessments
Article 35 requires a Data Protection Impact Assessment (DPIA) for processing likely to result in a high risk to individuals. The systematic use of AI chatbots with personal data would typically trigger this requirement. Most organisations have not conducted a DPIA for employee AI chatbot use.
What the regulators are saying
Regulatory enforcement around AI and personal data is accelerating in 2026.
The European Data Protection Board announced that its Coordinated Enforcement Action for 2026 will focus on transparency and information obligations under Articles 12, 13, and 14. This directly affects organisations that process personal data through AI tools without adequate transparency to data subjects.
Italy's Garante was the first EU regulator to take action against ChatGPT in 2023, temporarily banning the service over GDPR concerns including lack of age verification and insufficient legal basis for data processing.
The Dutch DPA fined Experian €2.7 million in late 2025 for improperly using personal data and failing to adequately inform individuals. This enforcement trend applies equally to organisations that send personal data to AI providers without proper safeguards.
Cumulative GDPR fines since May 2018 have reached €5.88 billion across 2,245 recorded penalties. The cost of non-compliance continues to rise.
Practical approaches for DPOs and compliance teams
There are four main approaches organisations take, ranging from restrictive to pragmatic.
Approach 1: Ban AI chatbot use entirely
Some organisations prohibit all use of public AI chatbots. This eliminates GDPR risk but also eliminates productivity gains. In practice, bans are difficult to enforce and often drive shadow AI: employees simply switch to personal devices or personal accounts.
Approach 2: Use enterprise AI tiers with DPAs
OpenAI (ChatGPT Enterprise), Anthropic (Claude for Business), and Google (Gemini Enterprise) offer enterprise plans with data processing agreements, no training on customer data, and enhanced security controls. This is the cleanest approach from a GDPR perspective, but costs $20-30 per user per month and requires IT deployment.
Approach 3: Create an acceptable use policy
Many organisations issue guidelines telling employees not to paste personal data into AI chatbots. This approach is cheap to implement but relies entirely on human discipline. Research shows 73% of employees paste sensitive data into AI chatbots without realising it. Policies that depend on people "being careful" fail consistently.
Approach 4: Use PII masking at the browser level
PII masking tools detect personal data before it reaches the AI provider and replace it with safe placeholders. The AI receives [PERSON_A] instead of the real name. The data subject's personal data never leaves the employee's device.
This approach eliminates the GDPR exposure at the source. If the AI provider never receives personal data, there is no processing to regulate. No lawful basis is needed because no personal data is transmitted. No DPA is required because no personal data reaches the processor. No international transfer occurs because the data stays in the browser.
PiiBlocker by PiiBlock is a free Chrome extension that implements this approach. It detects 15+ types of PII using NER-based detection and regex pattern matching. Critical data like credit cards, SSNs, and API keys are auto-masked. Soft PII like names and addresses are flagged for the user to decide. All processing runs locally in the browser with zero data collection. PiiBlock does not operate any servers.
When the AI responds using placeholders, PiiBlocker swaps them back to real values so the employee sees their original data. The workflow is seamless and requires no training beyond a one-time install.
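The mask-and-restore workflow can be sketched in a few lines. The snippet below is an illustrative simplification, not PiiBlocker's actual implementation: it uses two example regex patterns (a real tool combines regexes with NER models for soft PII like names), and the pattern names and placeholder format are assumptions for the example.

```python
import re

# Illustrative patterns for two PII types. A production tool pairs
# regexes like these with NER-based detection for names and addresses.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def mask(text):
    """Replace detected PII with placeholders; return masked text and a map."""
    mapping = {}
    counters = {}
    def repl(kind):
        def _repl(m):
            counters[kind] = counters.get(kind, 0) + 1
            placeholder = f"[{kind}_{counters[kind]}]"
            mapping[placeholder] = m.group(0)  # remember the real value locally
            return placeholder
        return _repl
    for kind, pattern in PATTERNS.items():
        text = pattern.sub(repl(kind), text)
    return text, mapping

def unmask(text, mapping):
    """Swap placeholders in the AI's response back to the original values."""
    for placeholder, original in mapping.items():
        text = text.replace(placeholder, original)
    return text

prompt = "Please draft a reply to jane.doe@example.com about her refund."
masked, mapping = mask(prompt)
# masked == "Please draft a reply to [EMAIL_1] about her refund."
# The AI provider only ever sees the placeholder:
response = "Dear customer, we will contact you at [EMAIL_1] shortly."
print(unmask(response, mapping))
# -> "Dear customer, we will contact you at jane.doe@example.com shortly."
```

The key property is that `mapping` never leaves the browser: the provider sees only placeholders, so no personal data enters the processing chain.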
Building a GDPR-compliant AI use framework
For organisations that want to enable AI chatbot use while maintaining GDPR compliance, here is a practical framework.
Step 1: Assess current AI use. Survey employees to understand how and where they use AI chatbots. Identify which departments handle personal data regularly (HR, legal, healthcare, finance, customer service).
Step 2: Update your ROPA. Add AI chatbot use as a processing activity. Document the categories of personal data involved, the legal basis, the processors (AI providers), and the safeguards in place.
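A ROPA entry for this activity can be kept as a simple structured record. The sketch below mirrors the elements Article 30(1) requires; all field names and values are illustrative examples to adapt to your organisation's own ROPA template.

```python
# Illustrative ROPA entry for employee AI chatbot use.
# Field names and values are examples only; align them with your
# organisation's ROPA template and Article 30(1) GDPR.
ropa_entry = {
    "processing_activity": "Employee use of generative AI chatbots",
    "controller": "Example Ltd",
    "purpose": "Drafting and summarising business documents",
    "categories_of_data_subjects": ["clients", "employees"],
    "categories_of_personal_data": ["names", "contact details", "case details"],
    "recipients": ["AI provider (processor, under enterprise DPA)"],
    "international_transfers": "US (EU-US Data Privacy Framework, verified)",
    "retention": "Per provider DPA; no training on customer data",
    "legal_basis": "Legitimate interest (assessment on file)",
    "security_measures": ["browser-level PII masking", "enterprise tier only"],
}
```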
Step 3: Conduct a DPIA. Assess the risks of AI chatbot use with personal data. Document the likelihood and severity of harm. Identify mitigating measures.
Step 4: Implement technical controls. Deploy PII masking at the browser level so personal data is caught before it reaches the AI provider. This is the most effective mitigation because it removes the personal data from the processing chain entirely.
Step 5: Issue clear guidance. Provide employees with an AI acceptable use policy that explains what data can and cannot be shared, which tools are approved, and what safeguards are in place.
Step 6: Review and audit regularly. AI tools evolve rapidly. Review your framework quarterly to ensure it keeps pace with new platforms, updated terms of service, and regulatory guidance.
The EU AI Act adds another layer
The EU AI Act, with its August 2, 2026 compliance deadline for high-risk systems, creates additional obligations. The EDPB's April 2025 report clarified that large language models rarely meet the GDPR's standard for anonymisation. Controllers deploying third-party LLMs must conduct comprehensive legitimate interest assessments.
For DPOs, this means AI chatbot use is not just a GDPR concern. It sits at the intersection of data protection, AI governance, and employment law. A coordinated approach is essential.
The bottom line for DPOs
Employees are already using AI chatbots with personal data. The question is not whether to allow it, but how to make it compliant.
The most practical approach for most organisations is a combination of enterprise AI tiers for heavy users, PII masking tools for everyone else, and clear policies that set expectations. This gives employees the productivity benefits of AI while keeping personal data under control.
The organisations that act now will be ahead of the enforcement curve. Those that wait are accumulating compliance debt with every prompt their employees send.
PiiBlocker is a free Chrome extension that masks personal data before it reaches AI chatbots. 100% local processing, no servers, no data collection. Install from Chrome Web Store →