Is AI wasting water? What new research reveals about ChatGPT   

Huge amounts of water are required to train the giant AI models

It is no secret that artificial intelligence is energy intensive: training large language models (LLMs) such as OpenAI and Microsoft's famous ChatGPT, as well as Google's Bard, requires enormous amounts of electricity. A new study, however, reveals that huge amounts of water are consumed in the process as well.

As Futurism reports, researchers from the University of Texas at Arlington and the University of California, Riverside have published a preprint titled "Making AI Less Thirsty," which examines the environmental impact of AI training: it requires not only large quantities of electricity but also hundreds of thousands of tons of water to cool the data centers where it takes place.

The research sheds light on how much water is used to cool OpenAI's and Google's data centers. The team estimates that training GPT-3 alone (we are now on GPT-4) consumed about 700,000 liters of water (roughly 185,000 gallons), which they calculate is equivalent to the amount of water needed to cool a nuclear reactor.

"While a 500 ml bottle of water may not sound like much, the total water footprint is still extremely large, considering the billions of users of ChatGPT," the research team writes in the study.

The scientists add that AI tech giants "can, and should, take social responsibility and lead by example by addressing their own water footprint. This will be a first step in quenching the unquenchable 'thirst' of artificial intelligence."

For perspective, cooling Microsoft's data centers in the US during GPT-3 training used as much water as it takes to produce 370 BMW cars or 320 Tesla electric vehicles. Had the model been trained in the company's data centers in Asia, which are even larger, those figures would have tripled.

One of the most interesting findings, the researchers note, is that ChatGPT consumes about half a liter of water for a simple conversation of roughly 20-50 questions and answers. And all of these figures refer to the older GPT-3 model, while we are now on the far more advanced GPT-4.
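To put that per-conversation figure in context, here is a rough back-of-envelope estimate of how half a liter per chat scales up. This is only a minimal sketch: the 0.5-liter figure comes from the study, but the user count and conversations per user are illustrative assumptions, not numbers reported by the researchers.

# Back-of-envelope estimate of ChatGPT's inference water footprint.
# Only the 0.5 L per conversation figure comes from the study; the user
# and conversation counts below are hypothetical, for illustration only.
LITERS_PER_CONVERSATION = 0.5       # ~500 ml per 20-50 question chat (study figure)
ASSUMED_USERS = 100_000_000         # hypothetical monthly user count
ASSUMED_CHATS_PER_USER = 10         # hypothetical conversations per user per month

monthly_liters = LITERS_PER_CONVERSATION * ASSUMED_USERS * ASSUMED_CHATS_PER_USER
monthly_gallons = monthly_liters / 3.78541  # convert liters to US gallons
print(f"Estimated monthly water use: {monthly_liters:,.0f} L (~{monthly_gallons:,.0f} US gal)")
print(f"That is ~{monthly_liters / 700_000:,.0f}x the ~700,000 L the study attributes to training GPT-3")

Under these assumed numbers the sketch yields about 500 million liters per month, several hundred times the water the study estimates was used to train GPT-3 in the first place, which is the point the researchers make about the footprint of serving billions of users.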

Finally, it should be noted that the research has not yet been peer-reviewed, but you can find the preprint here.