They Already Have Your Data. Here’s What That Actually Means for You.
One of the most common reasons people give me for not using AI is privacy. They don’t want to “give their data away” and they don’t trust the companies handling it.
So first let me just say that I get it and those concerns are valid. But here’s what really needs to be said: that ship sailed a long time ago. Years ago. Possibly decades.
If you’ve EVER typed your Social Security number into a website or applied for a credit card online, your data left your hands a long time ago. If you carry a smartphone right now, it’s still leaving. And the idea that avoiding AI is somehow protecting you is one of the biggest misconceptions keeping people stuck right now.
The Reality Nobody Wants to Sit With
There are over 4,000 data brokerage companies operating worldwide. It’s an industry worth over $250 billion and their entire business model is collecting your personal information and selling it to other companies.
One company alone, Acxiom (now LiveRamp), holds profiles on 2.5 billion people with up to 11,000 data points per person. Think about those numbers for a second. That’s insane, right? Those data points include your name, address, income range, political leanings, health conditions, what you buy and how often. They know your family members and your estimated net worth. And all of it is sitting in a database you’ve never seen, being sold to companies you’ve never heard of.
That’s ONE company out of thousands.
98% of Americans have personal data exposed by at least 35 different data brokers. The average person (aka YOU) has around 300 pieces of information publicly accessible right now, available for purchase (or sometimes for free) on sites most people don’t even know exist.
Your Phone
A Vanderbilt University study found that a stationary Android phone with only Chrome running in the background sent location data to Google 340 times in a single day. That’s a phone just sitting on your nightstand doing nothing, and it’s STILL pinging your location every few minutes.
When they tested a phone during normal daily use, that number jumped to 450 location transmissions per day (roughly one every three minutes). And a New York Times investigation found that some apps logged a user’s location up to 14,000 times in a single day.
Your phone knows where you sleep and where you work. It knows what time you leave in the morning, what route you take, and whether you stopped at the gym or skipped it. It tracked the doctor you visited last Tuesday and how long you were in the office. Your phone knows MORE about your daily life than most of the people in it (and that’s not an exaggeration).
82% of iOS apps collect private user data. The location data industry alone is a $12 billion market. Companies openly advertise that they’ve collected location data from 25% of all US adults. And 62% of Americans believe it is straight up impossible to go through daily life without companies collecting their data.
They’re right. It is.
The Part People Miss About AI Specifically
When someone tells me they won’t use AI because of data privacy, I know they haven’t really thought it through or they just don’t have the information. Because I know they will type their credit card number into an online store, share their location with a food delivery app, give their Social Security number to a tax filing website, let their phone track them 450 times a day, and use social media platforms that literally sell their behavioral data to advertisers. But typing a business question into ChatGPT? That’s where they draw the line.
I use AI every single day knowing everything I just told you about data collection. I paste business strategy into Claude, draft content, plan launches, build systems. I do it with my training toggles turned off, my settings managed, and my eyes wide open. Because I did the math. The data was already out there. The only thing I’d gain by avoiding AI is falling behind.
For a lot of people, “privacy” is not actually the issue. It’s the reason they give, and it sounds responsible and informed. Nobody argues with someone who says they’re concerned about data privacy. But underneath that concern, the real thing happening is fear. Fear of the technology itself or fear of finding out how far behind they already are. And privacy becomes the permission slip to not engage, because it feels safe and it sounds smart.
I can’t even count how many times I’ve had this exact conversation. Someone tells me they’re worried about their data. I ask what phone they use. iPhone. Do they have location services on? Yes. Do they use social media? Yes. Have they ever filed taxes online? Obviously. So I ask, genuinely, what specifically about AI feels like the bigger risk? And faced with all of that, they don’t actually have an answer. Because it was never actually about the data.
The idea that AI is the thing putting your privacy at risk is like worrying about someone reading your diary after you’ve already published it on Substack.
So What Do You Actually Do?
This next part is for you if you feel a bit paralyzed but know that AI matters and that you need to learn it.
Go into ChatGPT. Settings, Data Controls, turn off “Improve the model for everyone.” In Claude, find “Help improve Claude” and toggle it off. In Gemini, look for “Gemini Apps Activity.” Takes about two minutes across all three. Do it once and move on.
Don’t paste client Social Security numbers or bank account details into AI prompts. That’s common sense, the same common sense you’d apply to any tool. And here’s the thing: your Social Security number is already readily available to anyone who really wants it, but if you’re genuinely concerned about data protection, you shouldn’t be typing it into ANY website.
If you run a business, consider the paid or enterprise tiers. They have stronger data governance controls and typically don’t use your inputs for training. Frequently review what your AI tools have stored in their memory settings and clean out anything you don’t want there.
Those are smart, practical steps that take a few minutes. Do them and then get to work.
Because the cost of ignoring AI is compounding. The person next to you who started learning six months ago is producing in two hours what takes you eight. And that gap doesn’t stay the same. It gets wider every single month.
The people who will look back on this time and regret something? It won’t be that they used AI. It’ll be the months they spent deciding whether or not to start.