
How We Measure AI's Carbon Footprint
Every message you send has an environmental cost. For most people, that cost is completely invisible, and that's a problem we need to fix.
Sustainability
Feb 26, 2026

Read the Full Methodology Report
We live in an age where carbon footprint calculators exist for almost everything. There are apps to track the emissions from your flights, your diet, your commute, your shopping. The sustainability community has spent decades building the tools and frameworks to make the invisible visible, to put a number on things that used to feel unmeasurable.
And yet, most people using AI every day (the ChatGPT queries, the Gemini summaries, the Claude drafts) have absolutely no idea what any of it costs in terms of energy, water, or CO2 equivalents (CO2e).
Weave wants to close that gap and give users better options.
The big cloud providers aren't shy about their sustainability pledges. Google talks about trying to achieve 24/7 carbon-free energy. Microsoft has a carbon negative by 2030 commitment. These commitments are real, and some of them represent genuine progress.
But a corporate sustainability pledge and the actual carbon cost of your specific API call are two very different things. Renewable Energy Certificates (the mechanism most providers use to claim "green" electricity) don't guarantee that the server processing your request was running on clean power at that moment. Annual REC matching is better than nothing, but it's not the same as clean electricity.
More frustrating still: none of the major cloud AI providers currently tell you what your requests actually cost. There are no emissions figures, no dashboard showing you the energy breakdown by model. Just a bill in dollars, a trail of "tokens," and a sustainability page somewhere on the marketing website.
If you've spent any time in carbon accounting, supply chain emissions work, or environmental disclosure advocacy, this will feel familiar. It's the same pattern that took decades to shift in aviation, food, and manufacturing. The data exists somewhere inside these companies. It's just not flowing to the end consumer, and as a result the end consumer has no idea how carbon intensive their use of AI really is.
Before we talk about solutions, it helps to understand the problem. Three things determine the carbon cost of an AI inference request:
How much energy the model uses. Larger, more capable models consume more power. A lot more, in some cases. Models that use "extended thinking" (where the AI works through many internal reasoning steps before responding) can use 30 to 70 times more energy than a standard model for the same apparent output.
What hardware it runs on. Cloud servers, local laptops, and specialized AI chips all have very different efficiency profiles. Running a model locally on a modern Apple Silicon chip is surprisingly efficient: up to 20 times more energy-efficient per token than routing the same request through a large cloud provider.
Where the electricity comes from. The same model, running the same workload, can have a carbon footprint that differs by a factor of 30 depending on where in the world the electricity comes from. This is largely due to the mix of energy sources, with grids powered by renewable energy having a significantly lower impact.
Most tools (if they measure anything at all) only think about one of these dimensions. The full picture requires all three.
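The three dimensions multiply together, which is why ignoring any one of them can be misleading. A minimal sketch, using purely illustrative numbers (the 20x hardware and 30x grid ratios from above; the energy figures are hypothetical):

```python
# Illustrative sketch: the carbon cost of one inference request is
# (energy consumed) x (carbon intensity of the electricity used).

def request_co2e_grams(energy_kwh: float, grid_gco2e_per_kwh: float) -> float:
    """Grams of CO2e for a single request."""
    return energy_kwh * grid_gco2e_per_kwh

# 1. Model energy use: assume a standard cloud request draws 0.0005 kWh.
standard_kwh = 0.0005

# 2. Hardware: assume local inference is ~20x more efficient per token.
local_kwh = standard_kwh / 20

# 3. Grid: carbon intensity can differ ~30x by region (gCO2e/kWh).
clean_grid, dirty_grid = 25.0, 750.0

print(request_co2e_grams(standard_kwh, dirty_grid))  # cloud, dirty grid
print(request_co2e_grams(local_kwh, clean_grid))     # local, clean grid
```

Under these assumptions the same request varies by a factor of several hundred between the worst and best case, which is the gap the three-factor framing is meant to expose.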
As we were building Weave (a desktop application for interacting with AI models), we wanted to make these costs visible rather than pretending they don't exist. So we built a sustainability tracking system into the application and published the full methodology behind it.
Here's what that looks like in practice:
For inference that runs locally on the device, we don't estimate; we measure. Apple Silicon chips and Windows equivalents expose real hardware energy counters that let us read the actual electricity consumed by the GPU, CPU, and memory during each inference request. Measured joules, converted to kilowatt-hours, multiplied by the carbon intensity of the user's local electricity grid.
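The arithmetic behind that last sentence is straightforward: 1 kWh is 3.6 million joules, and the grid's carbon intensity converts energy into emissions. A minimal sketch (the example values are hypothetical):

```python
# Local measurement path: hardware counters report joules,
# which we convert to kWh and multiply by grid carbon intensity.
JOULES_PER_KWH = 3_600_000

def local_emissions_grams(measured_joules: float,
                          grid_gco2e_per_kwh: float) -> float:
    kwh = measured_joules / JOULES_PER_KWH
    return kwh * grid_gco2e_per_kwh

# e.g. a request that drew 180 J on a grid at 400 gCO2e/kWh:
print(local_emissions_grams(180, 400))  # 0.02 g CO2e
```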
Our favorite cloud provider, GreenPT, actually returns real energy and emissions data with every API response, which is the gold standard we'd love to see the whole industry move toward.
The other major cloud providers we provide access to (Google, Anthropic, OpenAI) do not provide such metrics, and even obfuscate the data necessary for better measurements. So we estimate, using token counts, model-specific energy factors calibrated against published research and pricing differentials, and provider-level carbon intensity figures.
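The estimation path looks something like the following sketch. The per-token energy factor and the provider-level carbon intensity are assumptions calibrated against published research, not figures the providers disclose, and the numbers below are hypothetical:

```python
# Estimation path for providers that expose only token counts.
# wh_per_token and provider_gco2e_per_kwh are model- and
# provider-specific assumptions, not disclosed measurements.

def estimated_emissions_grams(output_tokens: int,
                              wh_per_token: float,
                              provider_gco2e_per_kwh: float) -> float:
    energy_kwh = output_tokens * wh_per_token / 1000  # Wh -> kWh
    return energy_kwh * provider_gco2e_per_kwh

# e.g. 800 output tokens at an assumed 0.002 Wh/token, on a provider
# grid assumed at 300 gCO2e/kWh:
print(estimated_emissions_grams(800, 0.002, 300))  # 0.48 g CO2e
```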
All of this (and much more) is disclosed in our Full Methodology Report.
Training emissions aren't counted. The carbon cost of training an AI model (the months of compute on thousands of GPUs) is a large part of a model's lifetime emissions. We don't include it yet, because amortizing training emissions across individual inference requests requires data that most providers don't disclose.
Cloud estimates for OpenAI, Anthropic, and Google are exactly that: estimates. We don't know which Azure data center served your ChatGPT API call, which AWS/GCP data center served your Claude question, or which GCP data center served your Gemini request. We don't know the exact hardware configuration, the cooling efficiency, or the real-time grid mix at that location. Our cloud energy factors are informed approximations.
Annual grid averages miss real-time variation. Grids are cleaner during sunny afternoons and dirtier during winter evenings. Our area-specific annual averages don't capture that. Real-time grid intensity APIs exist, and integrating them is a planned improvement.
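To see what an annual average hides, consider one hypothetical day of hourly grid intensities (all values illustrative):

```python
# Illustrative only: an annual average flattens real hourly variation.
# Hypothetical hourly intensities for one day, in gCO2e/kWh:
# night, morning ramp, midday solar peak, evening peak.
hourly = [520] * 7 + [300] * 4 + [150] * 5 + [420] * 8

flat_average = sum(hourly) / len(hourly)

# The same 0.001 kWh request looks identical under the average, but
# its real footprint differs ~3.5x between solar peak and evening.
print(round(flat_average, 1))
print(0.001 * min(hourly), 0.001 * max(hourly))
```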
Our provider sustainability assumptions rely partly on their own claims. Google's very low carbon intensity factor reflects their commitment to 24/7 carbon-free energy matching and their reported ~90% carbon-free energy. We've applied a residual factor to account for the remaining non-renewable portion. But we're ultimately trusting their reported figures to some degree.
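The residual-factor idea reduces to simple arithmetic: if a provider reports roughly 90% carbon-free energy, only the remaining ~10% is charged at the local grid's intensity. A sketch with illustrative numbers:

```python
# Residual factor: charge only the non-carbon-free fraction of a
# provider's reported energy mix at the regional grid intensity.
# Both inputs here are illustrative assumptions.

def residual_intensity(reported_cfe_fraction: float,
                       regional_gco2e_per_kwh: float) -> float:
    return (1 - reported_cfe_fraction) * regional_gco2e_per_kwh

# ~90% carbon-free on a 400 gCO2e/kWh regional grid:
print(round(residual_intensity(0.90, 400), 1))  # 40.0 gCO2e/kWh effective
```

The obvious caveat, as the paragraph above says, is that the carbon-free fraction itself comes from the provider's own reporting.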
If you've made it this far, here's the only ask: get a little curious about your own AI usage.
What models are you defaulting to? Do you know whether your go-to AI tool runs on renewable energy? Have you ever thought about the difference between running a model locally and sending it to a cloud data center powered by natural gas?
You don't need to overhaul anything. But knowing tends to change behavior at the margins, and at the scale AI is growing, the margins matter.