Recent research has revealed that artificial intelligence systems, particularly OpenAI's ChatGPT, consume significant amounts of water during user interactions. A brief conversation with an AI can use up to 500 milliliters of water, the volume of a standard water bottle, and even a single task such as drafting a 100-word email carries a measurable cost. The study counted not only the water used to cool servers but also the water consumed at the power plants that generate the electricity powering these data centers.
Understanding the resource demands of AI is essential, according to Leo S. Lo, Dean of Libraries and Professor of Education at the University of Virginia. He argues that recognizing the infrastructure and societal choices behind AI matters: while many view AI simply as a drain on resources, understanding its actual impact can better inform choices that support both innovation and sustainability.
Two Water Streams Behind AI Operations
AI’s water usage can be categorized into two primary streams. The first is the on-site cooling required for servers, which generate substantial heat. Many facilities use evaporative cooling systems—large misters that spray water over hot surfaces—to regulate temperatures. Although effective, this method draws water from local sources like rivers or reservoirs, potentially straining these supplies.
The second stream encompasses the water used by power plants that supply electricity to data centers. Various types of power generation, including coal, gas, nuclear, and even hydropower, require large amounts of water for cooling and steam cycles. In contrast, renewable energy sources such as wind turbines and solar panels utilize minimal water, aside from routine maintenance cleaning.
Water consumption differs greatly by location and climate. A data center in cool, humid Iceland can operate efficiently with minimal water use, while one in Arizona during the summer is likely to rely heavily on water-intensive evaporative cooling. Research from the University of Massachusetts Amherst found that the same facility's water use can be roughly half as high in winter as in summer, illustrating how climate and season shape cooling demand.
Innovative Solutions and Practical Calculations
New technologies are emerging that could mitigate water usage in AI operations. One promising method involves immersion cooling, which submerges servers in non-conductive fluids like synthetic oils to minimize evaporation. Additionally, a new design from Microsoft claims to eliminate water usage entirely for cooling by using a closed-loop system that circulates a special liquid across computer chips.
The water footprint of an AI response can be estimated with a straightforward three-step approach: find credible data on energy use, convert that energy into water, and put the totals in context. First, look for published figures on the energy consumption of AI models. For example, a medium-length response from GPT-5 has been estimated to use around 19.3 watt-hours.
Next, by applying an estimated water usage of 1.3 to 2.0 milliliters per watt-hour, one can calculate the water required for each AI interaction. For instance, a response from GPT-5 could use about 39 milliliters of water based on the higher estimate, while a response from GPT-4o may require around 3.5 milliliters.
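The two steps above reduce to a single multiplication. A minimal sketch, using the figures cited in the text (the 1.3 to 2.0 milliliters per watt-hour intensity is an estimate, not a measured constant for any particular data center):

```python
# Back-of-envelope water footprint of one AI response.
# Assumption: water intensity of 1.3-2.0 mL per watt-hour (estimate from the text).
WATER_ML_PER_WH_LOW = 1.3
WATER_ML_PER_WH_HIGH = 2.0

def water_footprint_ml(energy_wh: float, intensity_ml_per_wh: float) -> float:
    """Water (in mL) implied by one response, given its energy use."""
    return energy_wh * intensity_ml_per_wh

gpt5_wh = 19.3  # estimated energy of a medium-length GPT-5 response (from the text)
low = water_footprint_ml(gpt5_wh, WATER_ML_PER_WH_LOW)    # ~25 mL
high = water_footprint_ml(gpt5_wh, WATER_ML_PER_WH_HIGH)  # ~39 mL
print(f"GPT-5 response: roughly {low:.0f}-{high:.0f} mL of water")
```

Swapping in a smaller model's energy figure yields proportionally less water, which is why the GPT-4o estimate lands near 3.5 milliliters.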
Finally, put those figures in context. OpenAI reports processing around 2.5 billion prompts daily, so while the water consumed per query is small, the cumulative total is substantial. Even so, Americans use approximately 34 billion liters of water daily for residential lawn watering alone, underscoring the scale of water use across sectors.
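Scaling the per-query estimate to OpenAI's reported daily volume is simple arithmetic. A rough illustration, assuming (generously) that every prompt costs as much as the high-end 39-milliliter GPT-5 estimate cited in the text:

```python
# Rough daily aggregate, treating every prompt as a worst-case GPT-5 response.
ML_PER_QUERY = 39.0        # high-end per-response water estimate (from the text)
QUERIES_PER_DAY = 2.5e9    # daily prompts reported by OpenAI (from the text)
LAWN_L_PER_DAY = 34e9      # US residential lawn watering, liters/day (from the text)

daily_liters = ML_PER_QUERY * QUERIES_PER_DAY / 1000  # convert mL to L
share_of_lawns = daily_liters / LAWN_L_PER_DAY

print(f"~{daily_liters / 1e6:.0f} million liters/day")   # ~98 million L/day
print(f"~{share_of_lawns:.2%} of daily lawn watering")   # well under 1%
```

The point of the comparison is not to dismiss AI's footprint but to size it: tens of millions of liters per day is real, yet a small fraction of a single everyday residential use.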
Recognizing and addressing the water consumption of AI technologies is vital. By optimizing cooling systems and siting data centers where water and cool air are plentiful, companies can reduce their environmental impact. Transparent reporting of resource use will also let stakeholders make informed decisions, fostering a balance between technological advancement and sustainability.
