Your AI Conversations Aren't as Private as You Think

Most AI platforms are quietly collecting your conversations, building inferences about you, and feeding that data into training runs and ad profiles, often by default, with opt-outs buried where few users will find them. This post breaks down what's actually happening to your data, why the regulatory landscape leaves users largely unprotected, and how Weave is built from the ground up to keep your conversations genuinely private.

Privacy

Mar 1, 2026

Most people treat AI chat like a private journal. They ask about health concerns, paste in work documents, strategize about business moves, and talk through things they'd never say in a public forum. It feels intimate, like it's just you and a chatbot thinking out loud together.

But that sense of privacy is largely an illusion, and it's worth understanding why before you type something or hand over documents you'd rather keep private.


The Business Model Behind the Curtain

AI is expensive to build and expensive to run. The companies offering "free" or nearly-free AI access have to recoup those costs somehow, and the primary mechanism is data (your data).

A Stanford study analyzing privacy policies from six major AI developers found that all six collect and use chat data to train their models by default. Some allow users to opt out; some don't. Even when companies claim to de-identify data, retention periods can be long or effectively indefinite. And in multi-product ecosystems like Google or Meta, your chat history doesn't sit in isolation; it can be cross-referenced with your search history, purchase behavior, location, and more to build a remarkably detailed picture of who you are.

The opt-out options, when they exist, are rarely surfaced prominently. They tend to live somewhere deep in settings menus or legal documents that almost nobody reads. The default, designed deliberately, is for your conversations to flow into the machine.


It's Not Just What You Say, It's What Algorithms Can Infer

Here's where it gets more uncomfortable. The risk isn't limited to what you explicitly type. Modern AI systems can infer a lot from even routine queries, and those inferences can have real downstream consequences.

Ask for low-sugar dinner ideas, and an algorithm might categorize you as health-vulnerable. Ask about managing anxiety at work, and you've potentially handed a data broker a signal about your mental health. These inferences don't stay contained; they can feed into ad targeting, insurance risk modeling, and cross-platform behavioral profiles in ways that are almost impossible to trace after the fact.

In professional settings, the risks compound further. Employees routinely paste in proprietary code, client information, internal strategy documents, and draft communications. Even if the major providers don't "remember" the exact content verbatim, the patterns extracted from that data can shape future model outputs in ways companies never intended or consented to.


The Regulatory Gap

There is currently no comprehensive federal AI privacy law in the United States. What exists instead is a patchwork of state-level regulations, varying dramatically in scope and enforcement, which means the protections available to you depend heavily on where you happen to live.

Federal regulators like the FTC have been clear that AI is not exempt from existing consumer protection law. But guidance and enforcement are very different things, and the industry has moved far faster than the regulatory frameworks designed to govern it. For now, the practical reality is that users bear most of the responsibility for understanding what they're agreeing to.

One group notably absent from these protections: children and teenagers. Age verification is inconsistent across major platforms, and the legal capacity of minors to provide meaningful consent to data collection is genuinely contested. The Stanford research flagged this as a particular area of concern: some companies train on teen data when users opt in, without always making clear what that opt-in means in practice.


What Weave Does Differently

This is the problem we built Weave around. Privacy isn't a feature we added later; it's a foundational design decision.

The core principle is simple: your conversations are yours, and they should stay that way. Every chat, every document, every workspace you create in Weave is stored locally on your device. It doesn't get uploaded to our servers, it doesn't get backed up to the cloud, and it doesn't get fed into any training run. We're not trying to build a data business on top of your private thinking.
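
To make "stored locally" concrete, here's a minimal sketch of what on-device persistence can look like in a desktop app. It's illustrative only: the directory, type names, and functions below are assumptions for the example, not Weave's actual internals.

```typescript
// Local-first chat persistence: everything reads from and writes to disk.
// There is no network call anywhere in this path.
import { promises as fs } from "fs";
import * as path from "path";
import * as os from "os";

interface Chat {
  id: string;
  title: string;
  messages: { role: "user" | "assistant"; content: string }[];
  updatedAt: string;
}

// Hypothetical app-data directory on the user's own machine.
const DATA_DIR = path.join(os.homedir(), ".weave", "chats");

async function saveChat(chat: Chat): Promise<void> {
  await fs.mkdir(DATA_DIR, { recursive: true });
  await fs.writeFile(
    path.join(DATA_DIR, `${chat.id}.json`),
    JSON.stringify(chat, null, 2),
    "utf8"
  );
}

async function loadChats(): Promise<Chat[]> {
  try {
    const files = await fs.readdir(DATA_DIR);
    return Promise.all(
      files
        .filter((f) => f.endsWith(".json"))
        .map(async (f) =>
          JSON.parse(await fs.readFile(path.join(DATA_DIR, f), "utf8")) as Chat
        )
    );
  } catch {
    return []; // directory doesn't exist yet: nothing saved, nothing leaked
  }
}
```

The point is the shape, not the specifics: when the entire read and write path is local, there's nothing for a server to retain, mine, or train on.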

What we do collect is minimal by design: an email address for signing in, anonymized analytics that help us understand which features are actually useful (with IP collection disabled and all text content masked), and error reporting to help us fix bugs. Analytics and error reporting can both be turned off entirely from Settings > Privacy if you'd prefer. No targeted ads, no behavioral profiling, no selling your information to third parties, ever.
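
For the curious, here's the shape that "anonymized, content-masked analytics" can take. The event schema, endpoint, and function below are hypothetical stand-ins, not Weave's real telemetry; the idea is that masking is enforced by construction rather than by trust.

```typescript
// Events carry only a name plus numeric/boolean properties. The type
// itself makes it impossible to attach free text such as message content.
interface AnalyticsEvent {
  name: string; // e.g. "chat_created", never what the user typed
  props: Record<string, number | boolean>;
}

let analyticsEnabled = true; // mirrors the Settings > Privacy toggle

function track(event: AnalyticsEvent): void {
  if (!analyticsEnabled) return; // opt-out short-circuits everything
  // Hypothetical collection endpoint; in this scheme the server also
  // discards IPs on arrival rather than storing them.
  void fetch("https://telemetry.example.invalid/event", {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify(event),
  });
}

// A feature-usage signal with no content attached:
track({ name: "document_attached", props: { sizeKb: 42, fromClipboard: false } });
```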

When you use cloud AI models through Weave (Google, Anthropic, or OpenAI), your prompts do pass through those providers' systems for processing; that's unavoidable when you're using their models. But we've enabled all available privacy settings on those connections, and crucially, your identity is never shared with them. That's meaningfully safer than interacting with those platforms directly, where your conversations are linked to your account and identity and used to build training datasets or advertising profiles. If you want the strongest possible privacy, Google, Anthropic, and OpenAI can be disabled entirely from settings, leaving only local models and GreenPT, our preferred provider, which returns real energy and emissions data and follows privacy-forward practices.
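
To illustrate the identity point, here's a simplified sketch of a prompt being forwarded with an app-held API key and no user identifier attached, shown against OpenAI's chat completions endpoint (the other providers work analogously). Streaming, error handling, and the per-connection privacy settings are omitted; treat it as a sketch of the idea, not Weave's implementation.

```typescript
// The provider sees a prompt from the app, authenticated by the app's key.
// Nothing in the request ties it to a specific person.
async function completeAnonymously(prompt: string, apiKey: string): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      authorization: `Bearer ${apiKey}`, // app credential, not a user credential
      "content-type": "application/json",
    },
    body: JSON.stringify({
      model: "gpt-4o-mini",
      messages: [{ role: "user", content: prompt }],
      // Note what's absent: no optional `user` field, no account id,
      // no device id, no email. Only the prompt text crosses the wire.
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```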

You can also delete your account completely, at any time, in a few clicks.


A Question Worth Sitting With

None of this requires you to stop using AI. These tools are genuinely useful, and the goal isn't to make you paranoid about every query.

But it's worth developing some habits of awareness. Think about what you're pasting in before you paste it. Know whether the platform you're using collects your data by default. Consider whether a locally run model or a privacy-first application might be a better fit for your more sensitive work.

The companies building these systems made deliberate choices about how to handle your data. You're allowed to make deliberate choices too: which tools you trust, and what you share with them.