Anthropic is expanding its deal with Google to use up to one million of the tech giant’s artificial intelligence chips — valued at tens of billions of dollars — as the startup races to develop its own AI systems in a competitive market.
Under the agreement announced on Thursday, Anthropic will have access to more than 1 gigawatt of computing capacity, available in 2026, to train future versions of its AI model Claude on Google’s homegrown tensor processing units, or TPUs.
Anthropic said it selected TPUs for their cost performance and efficiency, as well as its prior experience training and serving the Claude models on the processors.
The deal is the clearest indicator yet of insatiable chip demand in the AI industry, as companies race to develop technology that can equal or exceed human intelligence.
Alphabet’s Google, whose TPUs can be rented through its cloud service as an alternative to Nvidia chips amid tight Nvidia supply, will provide additional cloud computing services to Anthropic.
Competitor OpenAI recently announced dozens of deals that its chief executive said may together cost more than $1 trillion to secure around 26 gigawatts of computing capacity — roughly the output needed to power some 20 million U.S. homes. A single gigawatt of compute can cost around $50 billion, according to industry executives.
OpenAI, the creator of ChatGPT, uses Nvidia’s graphics processing units and AMD’s AI chips to help handle its growing demand.
Reuters reported exclusively earlier in October that Anthropic plans to more than double, and potentially almost triple, its annualised revenue run rate next year on the back of strong enterprise product uptake.
The startup focuses on AI safety and on developing models for enterprise use cases. Its models have underpinned a wave of "vibe coding" startups such as Cursor.
