Artificial intelligence giant Anthropic is eyeing data centre investments in Australia, saying Wednesday the nation was a "natural partner" for work in the booming sector.
With immense renewable energy potential and vast stretches of uninhabited land, Australia has touted itself as a prime location for the energy-hungry data centres that power AI.
US-based Anthropic said it was "exploring investments in data centre infrastructure and energy throughout the country" after signing a memorandum of understanding with the Australian government.
"The visit to Australia marks the beginning of long-term collaboration and investment into the Asia-Pacific region," the technology company said in a statement.
"Australia's investment in AI safety makes it a natural partner for responsible AI development."
The agreement, signed by Anthropic chief executive Dario Amodei in the capital, Canberra, said the firm would abide by local laws to "maintain strong social licence for investment".
Australia's arts sector has accused Anthropic and other AI companies of pushing to loosen copyright laws so chatbots can be trained on local songs and books.
Anthropic said it had also agreed to share AI research and safety information with Australian regulators, mirroring similar agreements in Japan and Britain.
Industry Minister Tim Ayres said Australia and Anthropic would "harness AI responsibly".
- Energy-intensive -
New data centres -- warehouse facilities that store files and power AI tools -- are springing up worldwide.
But there are increasing fears about the environmental impact of hulking data hubs.
Singapore halted data centre developments between 2019 and 2022 over energy, water and land use worries.
Australia last week adopted new rules governing the operation of data centres, requiring tech companies to show how they will source renewable energy and minimise their emissions.
"As demand for AI grows, continued expansion of data centre infrastructure must reflect Australian values and be environmentally and socially sustainable," the guidelines state.
Anthropic's Claude is the Pentagon's most widely deployed frontier AI model and the only such model currently operating on its classified systems.
But the company is locked in a dispute with the US government after saying it would refuse to let its systems be used for mass surveillance.
Washington has since described Anthropic's tools as an "unacceptable risk to national security".
The United States has not only blocked use of the company's technology by the Pentagon, but also requires all defense contractors to certify that they do not use Anthropic's models.
J.P.Cortez--TFWP