How oversharing with AI could increase your risk of being scammed
If you use an AI chatbot such as ChatGPT or Claude to write a complaint email, negotiate a rent increase, or polish your LinkedIn profile, it can be tempting to paste in an entire conversation, screenshots, and personal details to get a more helpful response. But it’s important to be cautious about what you share: all that highly personal information is exactly what scammers use to make their tricks more convincing. Here’s what to look out for and how to stay safe.
What is “oversharing with AI”?
Oversharing with AI means typing sensitive or identifying details into an AI chatbot, such as your full name, phone number, address, date of birth, account numbers, passwords, one-time codes, screenshots of bank statements, medical records, legal documents, or private workplace information. It can also include topics you’re concerned or curious about but would prefer to keep private.
It’s completely understandable to provide this type of information to an AI chatbot: the more details you share, the more likely it is to give you a response that is useful and meaningful. The responses can also feel highly personal, as if there were a real human being on the other end of the chat. But you may be placing high-value personal data in more places than you realize, especially since privacy settings differ by tool, account type, and subscription level. What feels like a private conversation might not stay that way.
How do scammers create hyper-personalized scams?
Scammers don’t need to hack your accounts to build a profile on you. They gather bits and pieces from social media posts, data breaches, old emails, public records, and whatever details other people share about you in chats or documents. When you provide even more personal information to an AI chatbot, that’s just more potential raw material a scammer might be able to get their hands on. Your personal information and inputs are often used to train the chatbot and make it better, which means they’re being stored somewhere that could be vulnerable. This already happened to millions of users of a popular AI chatbot in January 2026.
When scammers have this type of information, they can craft messages that seem remarkably legitimate and convincing, and they can even impersonate people or organizations you know and trust.
Picture this: you get a phone call saying, “Hello, I’m calling from your bank about your recent $247 transaction at [the specific store you mentioned during your AI conversation],” or receive an email that says, “I noticed you’re having [a particular issue you talked about]. I can help solve this right away.” Even people who are usually cautious can be deceived if a message contains details that seem too accurate for a stranger to know.
AI also lowers the cost for scammers to write convincing messages that lack the once-telltale signs of a scam, such as misspelled words and poor grammar. It also lets scammers operate at scale, running multi-step conversations across multiple channels to target victims. According to the National Council on Aging (NCOA), scammers are increasingly using information collected from online sources to make their messages seem like they’re written specifically for the recipient. Trend Micro likewise predicts that scams will become increasingly AI-driven and multi-channel, making these attacks more sophisticated than ever.
When do people commonly overshare with AI?
Below are the most common instances when people may be sharing more than they should with an AI chatbot:
- Email threads: Copy/pasting an entire email chain with your full signature, phone number, address, and employer details
- Screenshots: Posting images with visible account numbers, QR codes, barcodes, or ticket IDs
- Resume improvement: “Here’s my whole resume, LinkedIn profile, and references—make it better.”
- Dispute help: Asking the chatbot to help you dispute a charge, for example, and including transaction IDs, the last four digits of your card, and bank statements
- Legal or medical forms: Asking the chatbot to complete a form on your behalf and including your date of birth, policy numbers, claim numbers, or Social Security number
- Work context: Pasting customer lists, internal documents, incident reports, or invoices that should be treated as confidential information
These examples may seem harmless when you’re just trying to get help. But once that information is in a prompt or chat history with an AI chatbot, it could be stored, accessed by others, or leaked in a data breach. Scammers actively look for these details to build convincing profiles and scams.
7 tips for safer chatting with an AI chatbot
Here are seven simple habits that let you get the help you need without putting yourself at unnecessary risk:
- Remove identifiers before pasting. Replace names, phone numbers, addresses, dates of birth, account numbers, order numbers, and policy numbers with placeholders like “[NAME]” or “[ACCOUNT].” A small script can even automate this habit; see the first sketch after this list.
- Never share secrets. Don’t type passwords, recovery codes, one-time verification codes, or API keys into any AI tool—ever.
- Summarize instead of pasting. Paraphrase the situation in your own words. Paste only the minimum excerpt needed for context.
- Replace specifics with placeholders. Use “[BANK NAME],” “[MERCHANT],” “[CITY],” or “[EMPLOYER]” instead of real details.
- Watch your screenshots and attachments. Crop or blur sensitive fields before uploading, and check for account numbers, barcodes, QR codes, or personal information in the background. The second sketch after this list shows one way to blur a region before you share an image.
- Treat free text boxes like public forums. Check your settings and assume anything you type could be stored or shared unless you’ve confirmed otherwise.
- Opt out of training the chatbot. You can usually decline to have the data you type into an AI chatbot used to train it. While this option is not universal, major chatbots such as ChatGPT (OpenAI), Claude (Anthropic), Gemini (Google), and Copilot (Microsoft) typically provide opt-out settings.
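If you’re comfortable with a little scripting, the placeholder habit can be partly automated. Below is a minimal Python sketch, not a feature of any chatbot or security product, and its patterns are only illustrative: it catches common formats like email addresses, US-style phone numbers, and long digit runs, but it will miss plenty, so always give the result a final read before pasting.

```python
import re

# Illustrative patterns only; these catch common formats, not every variation.
PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "[EMAIL]"),
    (re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"), "[PHONE]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
    (re.compile(r"\b\d{8,}\b"), "[ACCOUNT]"),  # long digit runs: account, order, or policy numbers
]

def scrub(text: str) -> str:
    """Replace common identifiers with placeholders before pasting into a chatbot."""
    for pattern, placeholder in PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

if __name__ == "__main__":
    sample = "Hi, I'm Jane (jane.doe@example.com, 555-867-5309). Account 001234567890 was double-charged."
    print(scrub(sample))
    # -> Hi, I'm Jane ([EMAIL], [PHONE]). Account [ACCOUNT] was double-charged.
```

Whether you scrub by hand or with a script, the principle is the same: the chatbot rarely needs the real number to give you useful advice.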
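For screenshots, any image editor can crop or blur, but if you clean up images often, a short script helps. This sketch uses the Pillow library (installed with `pip install Pillow`); the file name and the coordinates of the sensitive region are hypothetical placeholders you would replace with your own.

```python
from PIL import Image, ImageFilter

def blur_region(path: str, box: tuple[int, int, int, int], out_path: str) -> None:
    """Heavily blur a rectangular region (left, upper, right, lower) of an image."""
    image = Image.open(path)
    # Crop out the sensitive region, blur it, and paste it back over the original pixels.
    region = image.crop(box).filter(ImageFilter.GaussianBlur(radius=15))
    image.paste(region, box)
    image.save(out_path)

# Hypothetical example: blur the box where an account number appears.
blur_region("statement.png", (40, 120, 420, 160), "statement_redacted.png")
```

Keep in mind that a light blur can sometimes be reversed; use a strong radius, or simply cover the region with a solid block if you want to be certain.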
A note on privacy settings
Privacy settings vary by platform. Here’s how to check yours:
- ChatGPT: You can control whether your chats are used to improve OpenAI’s models by going to Settings > Data Controls settings and toggling off Improve the model for everyone.
- Claude: You can control whether your conversations are used to train Anthropic’s AI models by going to Settings > Privacy and toggling off Help improve Claude. When turned off, your data is retained for 30 days instead of up to five years.
- Google Gemini: Go to your account profile > Settings > Gemini Apps Activity. Toggle Keep Activity on or off to control whether your chats are saved and used to improve Google’s services.
- Microsoft Copilot: You can control whether your conversations are used for model training by going to Settings > Privacy and toggling Model training on text and Model training on voice on or off.
Protect yourself with Trend Micro ScamCheck
With the increasing number and sophistication of personalized, AI-driven scams, staying one step ahead is more crucial than ever. Trend Micro ScamCheck is built to catch these kinds of scams: it analyzes and flags scam patterns in real time, letting you check whether suspicious texts, links, phone numbers, or even screenshots of your private chats are scams.
Getting help from AI can be incredibly useful, but don’t pay for convenience with unnecessary privacy risks! The habit to build is simple: minimize what you share, and verify before you trust. You’re already ahead by knowing what to watch for.
