
Image by Freepik
AI Growth Sparks Local Water Crisis In Georgia
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
As AI grows, Georgia residents like Beverly Morris face water shortages, blaming nearby data centers that consume millions of gallons to stay cool.
In a rush? Here are the quick facts:
- Meta denies its data center affected local groundwater conditions.
- Data centers can use millions of gallons of water on hot days.
- Georgia’s climate makes it attractive to water-dependent tech developers.
The BBC reports the story of Beverly Morris, a woman who retired in 2016 to a peaceful rural home in Fayette County, Georgia. Less than a decade later, she found herself carrying buckets of water to flush her toilet, a consequence of living just 400 yards from a large data center run by Meta.
“I can’t live in my home with half of my home functioning and no water,” Morris told the BBC. “I can’t drink the water,” she added.
She blames the construction of the data center near her home for contaminating her private well with sediment; the well now produces hazy water, and she says her plumbing has become unreliable. An independent study commissioned by Meta found no adverse effects on groundwater, but Morris continues to doubt its findings. “I’m afraid to drink the water,” she told the BBC. “Am I worried about it? Yes,” she added.
The United States sees a growing number of large data centers being constructed to support cloud storage services and AI tools such as ChatGPT. The construction of these facilities results in significant water consumption costs. On hot days, a single center can consume millions of gallons of water to cool servers.
“These are very hot processors,” said Mark Mills of the National Center for Energy Analytics, as reported by the BBC. “It takes a lot of water to cool them down,” he added.
Research into the environmental impact of AI-generated messages shows that even small digital actions carry energy costs. Sending just one AI-assisted email per week over a year can consume around 7.5 kWh, roughly what nine homes use in an hour. While this may seem minor, experts warn that such habits contribute to a larger issue: the data centers powering AI already account for an estimated 2% of global electricity use, a figure expected to grow rapidly as AI becomes more embedded in daily life.
Further research shows that tech companies often withhold exact figures on their AI data centers’ energy use. These facilities can consume as much power as tens of thousands of homes, placing serious strain on local grids.
The state of Georgia, with its humid climate, has become a leading location for data center development. The rapid pace of construction has raised concerns about water contamination and resource depletion. “It shouldn’t be that colour,” said George Diets, a local volunteer, after collecting a murky water sample downstream from another center under construction, as reported by the BBC.

Photo by Saúl Bucio on Unsplash
U.S. Judge Sanctions More Attorneys Over Inaccurate AI-Generated Court Filings
- Written by Andrea Miliani, Former Tech News Expert
- Fact-Checked by Sarah Frazier, Former Content Manager
A U.S. federal judge fined two attorneys $3,000 on Monday for using artificial intelligence to generate inaccurate information. The lawyers, who were representing MyPillow CEO Mike Lindell in a defamation case, submitted court documents containing fake legal citations produced by AI tools.
In a rush? Here are the quick facts:
- A U.S. federal judge fined two attorneys $3,000 for filing a legal document generated by AI tools.
- The lawyers submitted court filings containing over two dozen mistakes, including hallucinations generated by AI models.
- Judges around the world have raised concerns about lawyers’ use of AI tools.
According to NPR, Judge Nina Y. Wang of the U.S. District Court in Denver ruled that attorneys Christopher Kachouroff and Jennifer DeMaster violated court rules by filing a document containing more than two dozen mistakes. She ordered them to pay what she described as a “reasonably light sum.”
“Notwithstanding any suggestion to the contrary, this Court derives no joy from sanctioning attorneys who appear before it,” wrote Judge Wang in her decision. “Indeed, federal courts rely upon the assistance of attorneys as officers of the court for the efficient and fair administration of justice.”
The erroneous filing was part of the Lindell case, in which a jury found that the businessman and conspiracy theorist had defamed a former employee. Lindell was ordered to pay $2.3 million in damages last month.
The use of AI is not illegal in the United States, but the judge considered that the two lawyers violated a federal rule requiring lawyers to verify the accuracy of their filings and ensure that their claims are grounded in law.
It is not an isolated case. Judges in the U.S. and abroad are increasingly raising concerns about the use of AI in legal proceedings. In May, another U.S. judge fined two law firms $31,000 for fake AI-generated legal citations.
And just a few weeks ago, UK courts warned British lawyers against using AI tools such as ChatGPT after fabricated citations appeared in court filings. The warning came after a lawyer submitted 18 false case citations in an £89 million lawsuit, an act that may lead to criminal charges.