Love it or loathe it, Artificial Intelligence (AI) is here to stay – and it’s getting harder to ignore. From how we generate and consume content to how industries operate and innovate, AI is reshaping the world around us. It’s making waves across sectors from media to science, transforming not just the headlines, but the way we live and work.
Each month, the Fire on the Hill AI Digest breaks down the biggest stories in AI, exploring their broader impact on brands, industries, and everyday life.
Big tech keeps getting bigger
There is no end in sight for the AI infrastructure splurge, with Bridgewater co-chief investment officer Greg Jensen predicting $650 billion will be spent in 2026, up from $410 billion in 2025, reports Reuters. Alphabet, Amazon, Microsoft and Meta are all spending big to scale up data center capacity to meet growing demand for the technology.
However, do the tech giants risk further inflating the AI bubble? In a letter to clients, Jensen said the AI boom has entered a “more dangerous phase,” marked by exponentially rising investments in physical infrastructure and growing reliance on outside capital. “Compute demand continues to significantly outpace supply, driving hyperscalers to invest even more rapidly to try to someday get ahead of the demand.”
The scale of spending, Jensen said, is creating significant downside risks if anything goes wrong. “It is no longer possible for AI leaders to satisfy their investors’ expectations without creating existential risks to other sectors like software,” he added.
Risky business
Elsewhere, TIME reports Anthropic, which has sought to position itself as the most safety-conscious of the top research labs, is dropping a key safety pledge.
In 2023, Anthropic committed never to train an AI system unless it could guarantee in advance that the safety measures were adequate. For years, its leaders touted that promise, a central pillar of the company’s Responsible Scaling Policy (RSP), as evidence that Anthropic was a responsible company that would withstand market incentives to rush to develop a potentially dangerous technology.
But in recent months the company decided to radically overhaul the RSP. That decision included scrapping the promise not to release AI models unless Anthropic could guarantee proper risk mitigations in advance.
The new version of the policy, which TIME reviewed, includes commitments to be more transparent about the safety risks of AI, including making additional disclosures about how Anthropic models fare in safety testing. It commits to matching or surpassing the safety efforts of competitors.
In other news, Anthropic has refused to bow to demands from the US Department of Defense (DoD) over how its technology is used. Chief executive Dario Amodei said his company would rather not work with the Pentagon than agree to uses of its tech that may “undermine, rather than defend, democratic values.”
His comments came two days after a meeting with US Secretary of Defense Pete Hegseth over demands that Anthropic accept “any lawful use” of its tools. The meeting ended with a threat to remove Anthropic from the DoD supply chain entirely.
“These threats do not change our position: we cannot in good conscience accede to their request,” Amodei said.
Disruption possible
Finally, a wave of think pieces argues that the disruption long promised since the launch of ChatGPT is finally arriving.
In an essay for the New York Times, technologist Paul Ford discussed the use of Claude Code from Anthropic, and said: “Before, AI coding tools were often useful, but halting and clumsy. Now, the bot can run for a full hour and make whole, designed websites and apps that may be flawed, but credible. I spent an entire session of therapy talking about it.”
Also attracting attention was Matt Shumer with a LinkedIn piece titled ‘Something Big Is Happening’. The essay argues AI has just crossed a profound threshold, with models that can now autonomously build complex software, make nuanced decisions and even help improve themselves. Shumer likens this moment to early February 2020, just before Covid-19 upended everything, and warns that the rapid progress of AI could soon disrupt almost all knowledge-work jobs.
He urges people to start using advanced AI seriously now, rather than dismissing it based on older, weaker versions, and to adapt before the change becomes too large to ignore.