How to Protect Your Privacy When Using ChatGPT, Claude, and Gemini
In early 2026, a UK solicitor faced a regulatory probe after admitting he uploaded client emails and Home Office decision letters into ChatGPT to summarize them. The Upper Tribunal warned that putting client documents into ChatGPT breaches client confidentiality and waives legal privilege. In May 2025, a Utah lawyer was disciplined for using ChatGPT to draft a court brief that contained fabricated case citations.
These aren't isolated incidents. Lawyers, doctors, HR teams, and developers paste sensitive data into AI chatbots every day because they're fast, useful, and feel private. They aren't.
What actually happens to your data when you send a prompt
When you type into ChatGPT, Claude, or Gemini, your prompt is transmitted to the provider's servers, where it is processed, stored, and, depending on your settings, possibly used to improve future models.
Here's what each provider does by default:
ChatGPT (OpenAI): Prompts are stored for up to 30 days for abuse monitoring. Unless you disable it in settings, your conversations may be used to train future models. Even with training opted out, OpenAI retains data for safety purposes.
Claude (Anthropic): Conversations are stored for safety and security purposes. Anthropic states it does not use free-tier conversations for model training by default, but data is still transmitted to and processed on their servers.
Gemini (Google): Conversations may be reviewed by human annotators and used to improve Google's AI products. Activity is kept for 18 months by default, and conversations that have been reviewed by humans can be retained for up to three years.
The core issue is the same across all three. Your data leaves your device and lands on someone else's server the moment you hit send.
What's actually at risk
Most people underestimate what they're sharing. A single prompt often contains multiple types of personally identifiable information without the user realizing it.
Healthcare workers share patient symptoms, medications, and diagnoses when asking AI to help with clinical notes. Lawyers paste client names, case details, and confidential communications when drafting documents. Developers include API keys, database credentials, and server configurations when debugging code. HR professionals share employee names, salaries, and performance data when creating reports.
A 2024 study found that 73% of employees paste sensitive data into AI chatbots without realizing the risk. The data they share includes names, email addresses, phone numbers, financial information, and proprietary business details.
The "just be careful" approach doesn't work
The standard advice is to review your prompts before sending them. In practice, this fails for three reasons.
Speed defeats caution. AI chatbots are useful because they're fast. Manually scanning every prompt for PII before sending defeats the purpose. People stop checking after the first few uses.
PII is invisible to the untrained eye. Most people don't recognize a credit card number embedded in a transaction log, an API key in a code snippet, or a National Insurance number in an employee record. PII comes in over 15 different formats, and even careful readers consistently miss some of them (a few of these formats are sketched in code below).
Copy-paste is the biggest risk. The most dangerous prompts aren't typed. They're pasted from documents, spreadsheets, and emails. A single paste can contain dozens of PII items that the user never consciously registered.
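To make that concrete, here is a minimal sketch of the kind of pattern matching involved. The regexes are deliberately simplified illustrations, not the detection rules of PrivacyShield or any other tool; real detectors layer many more patterns, checksum validation (such as Luhn for card numbers), and NER models on top.

```typescript
// Simplified regex sketches for a few PII formats that hide in pasted text.
// Illustrative only: real products use stricter patterns plus validation.
const patterns: Record<string, RegExp> = {
  creditCard: /\b\d(?:[ -]?\d){12,15}\b/g, // 13-16 digits, spaces/dashes allowed
  email: /\b[\w.+-]+@[\w-]+\.[\w.-]+\b/g,
  niNumber: /\b[A-Z]{2} ?\d{6} ?[A-D]\b/g, // UK National Insurance (simplified)
  awsAccessKey: /\bAKIA[0-9A-Z]{16}\b/g,   // common AWS access key prefix
};

const pasted = `Refund 4111 1111 1111 1111 to jane.doe@example.com
(NI QQ123456C); rotate key AKIAIOSFODNN7EXAMPLE.`;

for (const [label, re] of Object.entries(patterns)) {
  for (const match of pasted.matchAll(re)) {
    console.log(`${label}: ${match[0]}`);
  }
}
```

All four values in that snippet would sail past a quick visual scan, which is exactly the failure mode described above.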
How to actually protect your data
There are several approaches, ranging from manual to automated.
1. Disable training in your AI provider's settings
This is the minimum step everyone should take.
- ChatGPT: Settings → Data Controls → "Improve the model for everyone" → Turn off
- Claude: Anthropic says free-tier conversations aren't used for training by default, but it's worth reviewing the privacy settings for any model-improvement toggle
- Gemini: Activity Controls → Turn off Gemini Apps Activity
This reduces the risk of your data being used for training. But your prompts are still sent to and stored on their servers.
2. Use temporary or anonymous chats
ChatGPT offers a "Temporary Chat" mode that doesn't save conversations. Gemini offers similar controls. This limits how long your data is retained but doesn't prevent it from being transmitted and processed.
3. Manually redact PII before sending
You can manually replace sensitive data with placeholders before pasting into an AI chatbot: swap "Sarah Johnson" for "[NAME_1]" and "4111 1111 1111 1111" for "[CARD_1]". This works, but it is extremely tedious, and it is easy to miss items.
4. Use a PII masking tool
PII masking tools automate the redaction process. They scan your input for personally identifiable information and replace it with placeholders before the prompt reaches the AI provider.
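As a rough illustration of the mask step, here is a minimal sketch reusing two of the simplified patterns from earlier. The placeholder format and the `maskPII` helper are hypothetical, not any product's actual implementation:

```typescript
// Minimal mask-step sketch: detected PII is swapped for numbered
// placeholders, and the originals are kept in a purely local map.
// Patterns, token format, and maskPII itself are illustrative.
function maskPII(text: string): { masked: string; map: Map<string, string> } {
  const map = new Map<string, string>();
  let counter = 0;
  const detectors: Array<[string, RegExp]> = [
    ["CARD", /\b\d(?:[ -]?\d){12,15}\b/g],
    ["EMAIL", /\b[\w.+-]+@[\w-]+\.[\w.-]+\b/g],
  ];
  let masked = text;
  for (const [type, re] of detectors) {
    masked = masked.replace(re, (match) => {
      const token = `[${type}_${++counter}]`;
      map.set(token, match); // kept locally so the reply can be restored
      return token;
    });
  }
  return { masked, map };
}

const { masked } = maskPII(
  "Charge 4111 1111 1111 1111 and email jane.doe@example.com a receipt."
);
console.log(masked); // Charge [CARD_1] and email [EMAIL_2] a receipt.
```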
PrivacyShield by PiiBlock is a free Chrome extension that does this automatically. It detects 15+ types of PII, including credit card numbers, Social Security numbers, API keys, names, addresses, phone numbers, email addresses, and medical conditions. It uses NER-based detection and regex pattern matching to catch what manual review misses. Critical data such as credit card and Social Security numbers is auto-masked; soft PII such as names and addresses is flagged for you to decide.
The key difference from manual redaction: PrivacyShield processes everything locally in your browser. Your real data never leaves your device. PiiBlock does not operate any servers and cannot access user data by design.
When the AI responds using placeholder tokens, PrivacyShield swaps them back to your real data so the conversation reads naturally.
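Continuing the hypothetical sketch above, the restore step is essentially the reverse lookup, applied to the model's reply before you see it:

```typescript
// Restore-step sketch, continuing the hypothetical token format above:
// placeholders in the AI's reply are swapped back to the real values
// from the local map, so the conversation reads naturally end to end.
function restorePII(reply: string, map: Map<string, string>): string {
  let restored = reply;
  for (const [token, original] of map) {
    restored = restored.split(token).join(original); // literal replace-all
  }
  return restored;
}

const map = new Map([
  ["[CARD_1]", "4111 1111 1111 1111"],
  ["[EMAIL_2]", "jane.doe@example.com"],
]);
console.log(
  restorePII("I drafted the receipt for [CARD_1] to send to [EMAIL_2].", map)
);
// I drafted the receipt for 4111 1111 1111 1111 to send to jane.doe@example.com.
```

The important property is that the map never leaves the browser: only the placeholder version of the prompt is ever transmitted.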
5. Use an enterprise DLP solution
For organizations with large teams, enterprise Data Loss Prevention tools like Strac, Nightfall, or Microsoft Purview can monitor and block sensitive data in AI interactions. These typically cost $10 to $50+ per user per month and require IT administration.
What about GDPR and compliance?
If you're in the EU or handling EU residents' data, using AI chatbots with personal data raises GDPR concerns. Sending personal data to a third-party AI provider constitutes data processing, which requires a legal basis. This is typically consent or legitimate interest.
For organizations, the safest approach is to ensure personal data is masked or anonymized before it reaches any AI provider. This sharply reduces GDPR exposure, because the AI provider never receives the actual personal data.
PrivacyShield's local processing architecture means no personal data is transmitted to PiiBlock either. This makes it compatible with GDPR, CCPA, and other data protection frameworks without additional compliance overhead.
The bottom line
AI chatbots are powerful tools. But every prompt you send is data you're handing to a third party. The practical options are:
- Minimum: Disable training and use temporary chats
- Better: Use a PII masking tool that catches what you miss
- Enterprise: Deploy a full DLP solution with policy enforcement
For most individual users, freelancers, and small teams, a browser-based PII masking tool offers the best balance of protection and convenience. You get full AI capability without exposing your personal data.
PrivacyShield is a free Chrome extension that masks personal data before it reaches AI chatbots. 100% local processing, no servers, no data collection. Install from Chrome Web Store →