AI doesn’t literally drink water (ChatGPT doesn’t have a water bottle permanently in hand 😅). It’s mainly data centers and electricity production that consume it to cool servers and power computations.
For a simple text query, published estimates now vary enormously: from 0.26 mL for a median text prompt on Gemini, to about 45 mL for a 400-token response on Mistral, and up to the equivalent of a 500 mL bottle for every 10 to 50 responses in the broad estimate from the University of California, Riverside study.
On a global scale, the true total remains poorly understood, but projections range from hundreds of billions of liters to several billion cubic meters depending on the scope considered.
The good news is that we can significantly reduce this pressure (we’ll see how below).
AI water consumption has become a sensitive topic for a simple reason. AI seems immaterial, yet it relies on very physical machines. These machines heat up. To keep them within an acceptable temperature range, data center operators use air, water, sometimes both, and especially a lot of electricity. Some of this water evaporates on-site. Another portion is consumed further upstream, in the electrical system that powers the servers.
The problem is that public debate often mixes several things together. We confuse water withdrawn and water actually consumed. We also mix direct use, indirect use, model training, inference, hardware manufacturing, and sometimes even the user’s device. As a result, we see figures that seem incompatible. In reality, they rarely refer to the same thing. The numbers change depending on the scope considered.
Why Does AI Consume Water?
There are two main areas of consumption:
- Direct consumption, at the data center site. Water is used for cooling. Some circulates in a loop, while another portion ends up evaporated and is no longer locally available.
- Indirect consumption, upstream. Producing the electricity needed for data centers also consumes water, especially depending on the energy mix and the cooling technologies of power plants.
This distinction is crucial. A study published in Nature Sustainability on AI servers deployed in the United States estimates that the indirect portion represents 71% of the total, compared to 29% for direct use. In other words, looking only at water evaporated on-site gives an incomplete picture. It’s sometimes half the issue, sometimes less.
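To keep the two scopes straight, the split can be written as a tiny calculation. This is a minimal sketch: the 71/29 shares come from the Nature Sustainability estimate cited above, while the 100-liter total is a made-up illustration, not a measured value.

```python
# Split a total AI water footprint into direct (on-site cooling) and
# indirect (electricity generation) portions, using the 71% / 29% split
# reported for AI servers deployed in the United States.
INDIRECT_SHARE = 0.71
DIRECT_SHARE = 0.29

def split_footprint(total_liters: float) -> dict:
    """Return the direct and indirect portions of a total footprint, in liters."""
    return {
        "direct_l": total_liters * DIRECT_SHARE,
        "indirect_l": total_liters * INDIRECT_SHARE,
    }

# Hypothetical 100 L total, for illustration only:
parts = split_footprint(100.0)
print(f"direct: {parts['direct_l']:.0f} L, indirect: {parts['indirect_l']:.0f} L")
```

Looking only at the direct portion would miss most of the footprint, which is exactly the point the study makes.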
Another often overlooked point: location and timing matter. The same AI task doesn’t have the same water cost depending on the weather, cooling technology, region, season, and the electrical grid used at that specific moment. This is one of the strong conclusions from the University of California, Riverside work, which emphasizes the spatial and temporal diversity of water efficiency.
How Much Water for a Single AI Query?
There is no single universal figure. There are serious numbers, but they don’t measure the same thing.
| Source | Order of Magnitude | What Is Measured | What It Means |
|---|---|---|---|
| Google (Gemini Apps) | 0.26 mL per median text prompt | Large-scale production measurement, May 2025 | A very low estimate, from detailed internal measurement |
| Mistral (Le Chat) | 45 mL for a 400-token response | Marginal inference impact, excluding user device | A broader life cycle estimate than just the server room |
| UC Riverside | 500 mL for about 10 to 50 responses | Broad estimate of water consumption for a large model | An order of magnitude that includes a more ambitious scope |
Google’s figure is currently one of the most precise for real-world text use at very large scale. In its 2025 paper, the company indicates that a median text prompt in Gemini Apps consumes 0.24 Wh and 0.26 mL of water, about five drops. That’s very little for a single use. But Google also notes that the overall impact remains significant once you scale to hundreds of millions of queries.
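Google’s point that a tiny per-prompt figure still adds up at scale is easy to check with back-of-the-envelope arithmetic. The 0.26 mL value is from the source; the daily query volume below is a hypothetical round number chosen to illustrate “hundreds of millions of queries,” not a published Google statistic.

```python
ML_PER_PROMPT = 0.26           # Google's May 2025 median text-prompt figure
QUERIES_PER_DAY = 500_000_000  # hypothetical volume, for illustration only

liters_per_day = ML_PER_PROMPT * QUERIES_PER_DAY / 1000  # mL -> L
print(f"{liters_per_day:,.0f} L per day")  # about 130,000 L per day
```

Five drops per query becomes a six-figure daily volume in liters once the audience is large enough.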
Mistral provides another, higher benchmark. In its environmental audit published in July 2025, the company reports 45 mL of water for using its Le Chat assistant on a 400-token response. Again, the scope is different. The study includes upstream impacts and takes a life cycle analysis approach.
The best-known estimate among the general public remains that from the University of California, Riverside. The authors explain that a large model like GPT-3 can consume the equivalent of a 500 mL bottle for about 10 to 50 responses, depending on where and when it’s executed. This isn’t an absurd figure. It’s a broad figure, and it should be read as such.
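Dividing the Riverside bottle by its 10-to-50-response range puts all three public figures on the same per-response axis. The arithmetic below only restates the sources’ own numbers; it does not make their very different scopes equivalent.

```python
# Express the three public figures in mL per text response.
google_ml = 0.26                        # Gemini Apps, median text prompt
mistral_ml = 45.0                       # Le Chat, 400-token response
ucr_low, ucr_high = 500 / 50, 500 / 10  # 500 mL bottle over 10-50 responses

print(f"Google:       {google_ml} mL per prompt")
print(f"UC Riverside: {ucr_low:.0f}-{ucr_high:.0f} mL per response")
print(f"Mistral:      {mistral_ml:.0f} mL per response")
```

Seen this way, Mistral’s 45 mL actually falls inside the Riverside 10–50 mL range; only the narrowly scoped Google measurement is orders of magnitude lower.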
The right reflex, therefore, isn’t to choose one figure over the others. The right reflex is to ask: what exactly are we talking about?
Per Query, Per Image, Per Video, Per Training: What Can We Honestly Say?
Text
For text, we now have several serious benchmarks. A simple query can drop to a few tenths of a milliliter in an optimized setting, rise to a few dozen milliliters in a life cycle approach, or reach a much broader estimate if we take more comprehensive assumptions. So we need to speak in ranges, not in single truths.
Image
For images, serious literature still lags behind actual use. We have recent studies on the energy consumed by image generation, with massive variations depending on the model, resolution, and architecture. A 2025 study covering 17 models found up to a 46-fold difference between models, and energy increases of 1.3 to 4.7 times when resolution is raised. However, there is still no single, widely accepted figure measured directly in milliliters of water per generated image.
The practical conclusion is simple. An AI image consumes more than a text query, sometimes significantly more, and even more when you multiply attempts, style variations, high resolutions, and edits. The real cost doesn’t come from just one image. It comes from the complete series of attempts the user launches to get a “good” one.
Video
For video, we need to be even more cautious. We don’t currently have a robust, stable, and widely cited reference figure in milliliters of water per generated video. However, research on video diffusion models repeats the same finding: these models carry a much higher computational cost than image models, with significant latency and sometimes tens of minutes to generate a few seconds of video on high-end GPUs. In other words, video isn’t just “a slightly heavier image.” It implies far more compute, and therefore far more water.
So we can give a logical ordering without inventing false figures. A short AI video probably consumes far more water than an AI image, because it requires much more computation and machine time. But the exact figure still depends too much on the model, number of frames, duration, resolution, number of iterations, and execution location to announce a single number without lying.
Training
Training large models remains the most spectacular stage. The University of California, Riverside study estimates that training GPT-3 in Microsoft’s U.S. data centers could directly evaporate 700,000 liters of fresh water, and rise to 5.4 million liters total when including the rest of the scope. That’s immense. But it’s not the end of the story, because once the model is deployed at very large scale, inference can also end up weighing very heavily.
How Much Water Does AI Consume Worldwide?
The most honest answer is this: no one currently has a complete and perfectly reliable global meter. The lack of transparency still weighs heavily. A study published in npj Clean Water already noted that fewer than one-third of data center operators measured their water consumption. As long as this baseline remains incomplete, all global totals should be read as estimates, not as certified accounts.
Despite this limitation, several works provide a credible order of magnitude. In an article published in early 2026 in Patterns, Alex de Vries-Gao explains that the water footprint of AI systems could be in the range of 312 to 765 billion liters in 2025, a level comparable to global annual bottled water consumption. This is a high-visibility estimate, useful for thinking about the scale of the phenomenon.
For its part, the University of California, Riverside study already projected that global AI demand could represent 4.2 to 6.6 billion cubic meters of water withdrawn in 2027. This projection doesn’t measure exactly the same thing as the previous estimate, but it points in the same direction. The growth of AI uses can transform a peripheral issue into a major one.
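Converting both global estimates to the same unit makes the scope gap visible: one cubic meter is 1,000 liters. The ranges below are the ones quoted above; remember that the Patterns figure counts water consumed while the Riverside projection counts water withdrawn, so the gap reflects scope as much as growth.

```python
# Patterns (de Vries-Gao): AI water footprint in 2025, in billion liters.
patterns_low, patterns_high = 312, 765

# UC Riverside projection for 2027: water withdrawn, in billion cubic
# meters, converted to billion liters (1 m^3 = 1,000 L).
ucr_low = 4.2 * 1000
ucr_high = 6.6 * 1000

print(f"Consumed (2025):  {patterns_low}-{patterns_high} billion L")
print(f"Withdrawn (2027): {ucr_low:.0f}-{ucr_high:.0f} billion L")
```
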
In other words, the debate is no longer “does AI consume water?” The debate is “how fast is this consumption growing, and in which regions is it becoming a real local problem?”
What Could Happen by 2030, Then 2050
Recent projections are frank. A Nature Sustainability study published in late 2025 estimates that the deployment of AI servers in the United States could generate an annual water footprint between 731 and 1,125 million cubic meters between 2024 and 2030, depending on the expansion rate.
The study adds that best practices can reduce the water footprint by up to 86%, which shows two things at once. First, the raw trajectory is very strong. Second, this trajectory is not fixed.
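Applying the study’s upper-bound 86% reduction to its own 2024–2030 range shows how wide the corridor is between business-as-usual and best practice. The sketch assumes the full 86% is achieved, which the study presents as a ceiling, not a forecast.

```python
# Nature Sustainability range for US AI servers, 2024-2030, million m^3/year.
low, high = 731, 1125
BEST_PRACTICE_CUT = 0.86  # "up to 86%" reduction reported by the study

mitigated_low = low * (1 - BEST_PRACTICE_CUT)
mitigated_high = high * (1 - BEST_PRACTICE_CUT)
print(f"Unmitigated:    {low}-{high} million m^3/yr")
print(f"Best practices: {mitigated_low:.0f}-{mitigated_high:.0f} million m^3/yr")
```
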
In the longer term, a 2025 article in the Journal of Cleaner Production proposes scenarios extending to 2050. Its conclusion is clear: without mitigation, global water consumption related to AI-driven data centers could be multiplied by more than seven by mid-century. Again, the message isn’t that “everything is decided.” The message is that we still have levers, but we need to act early.
The future will mainly depend on five variables:
- Adoption speed
- Chip efficiency
- Type of cooling
- Energy mix
- And where data centers are built
As soon as one of these variables changes, the entire footprint changes too (sometimes very quickly).
How Can We Use Much Less Water While Continuing to Use AI?
Can we get to zero? Strictly speaking, no. Water-free AI doesn’t exist. However, we can aim for near-zero pressure on local drinking water. That’s already very different.
The first path is to avoid evaporation. Microsoft highlights direct-to-chip cooling in a closed loop, with zero water evaporated in the circuit, and a reported gain of more than 125 million liters per site per year. This type of design doesn’t make AI free for the planet, but it can significantly reduce the need for fresh water on-site.
The second path is to stop using drinking water when it’s not necessary. Microsoft also explains that in Quincy, Washington, a water reuse system reduced its drinking water use in the region by 97%, while returning 1.5 million cubic meters per year to the community’s drinking water needs.
The third path is to better choose locations and techniques. Google indicates that its water strategy combines alternative sources, climate-adapted cooling decisions, technologies like direct-to-chip cooling, and resource replenishment projects. At its Belgian site in Saint-Ghislain, the company explains it increased cooling water reuse to four cycles instead of two.
The fourth path is water compensation, often called water positive. Google says it replenished 64% of its fresh water consumption in 2024, compared to 18% in 2023. This is useful, but we need to stay clear-headed. Replenishing water elsewhere doesn’t automatically eliminate local stress where the data center is currently drawing. Offsetting helps. It doesn’t replace actual conservation.
The fifth path, more discreet yet decisive, is software efficiency. Smaller models, better schedulers, routing workloads to centers and times that are least water-intensive, reducing idle hardware, software-driven thermal optimization. These are less visible gains, but often very cost-effective.
Why a Simpler Interface Can Also Reduce Water Footprint
Part of AI’s cost comes from “usage noise.” The user doesn’t know how to phrase their request, circles around the result they want, and multiplies attempts. An interface that guides better, with pre-prompt buttons, clear modes, and simple workflows, can reduce this noise. We cut unnecessary back-and-forth. We get the right format faster. We also avoid launching image or video generation when text would suffice.
In this logic, an interface designed for people who don’t master complex prompts has a concrete benefit. It simplifies access to AI, of course. But it can also make usage more efficient. This is exactly the value of a service like Nation AI, which offers guided modes, ready-to-use actions, and an experience that’s simpler to use for audiences who get lost with traditional tools (especially seniors, but not only).
It’s not a magic wand. If someone generates 200 images, the interface won’t change the physics. However, for all everyday uses—email, summary, homework help, text humanization, formatting, quick response—better-guided AI can reduce the number of unnecessary attempts. And there, the effect becomes concrete.
FAQ
Does a ChatGPT Query Really Consume 500 mL of Water?
Not exactly “one query.” The 500 mL figure comes from the University of California, Riverside study, which refers to about 10 to 50 responses for a large model, depending on location and timing. Per response, that works out to roughly 10 to 50 mL. Other measurements give different values for a single text prompt, such as 0.26 mL at Google or 45 mL for a 400-token response at Mistral.
Why Do the Numbers Vary So Much?
Because they don’t cover the same scope. Some studies measure production use, others do a broader life cycle analysis. Some mainly count direct water from the data center, others add electricity, hardware, or usage assumptions.
Do AI Images Consume More Water Than Text?
Very likely yes, but we still lack a standard figure in mL per image. Serious studies already show large energy variations between image models, and a sharp increase when resolution increases. So it’s reasonable to say that an image often costs more than a simple text query, especially if the user relaunches several versions.
What About Video?
Video is even more demanding, but there’s still no simple and robust reference figure in liters per video. Research mainly describes models with high computational cost, with sometimes long generation times even on powerful GPUs.
Can We Make AI Almost Water-Neutral?
We can significantly reduce pressure on local drinking water with closed circuits, more efficient cooling, recycled water, better-located centers, and more efficient models. However, talking about total neutrality would be premature. There will always be a material, energy, and local cost to manage.
What’s the Simplest Action for a User?
Make fewer unnecessary attempts. A clear, well-framed request is often better than five hesitant retries. And whenever text is enough, it’s better to avoid switching to image or video.
