Anthropic will train its Claude chatbot using up to one million of Google’s artificial intelligence chips in a deal worth tens of billions of dollars. The agreement highlights the massive computing power needed by AI companies for training and running advanced models.
Google Expands TPU Access
According to Reuters, Google will provide additional cloud computing services to Anthropic through its tensor processing units, or TPUs. The chips were historically used only inside Google but are now available to outside customers through Google Cloud.
Anthropic chose Google’s TPUs because of their price and efficiency. The company already has experience training and running its models with these processors. Google, which also backs Anthropic financially, stands to benefit as it opens up its chip technology to more external customers.
Racing for Computing Resources
AI developers are signing multi-billion-dollar agreements to lock in infrastructure quickly. The demand reflects the enormous computing required both to train generative AI models and to run inference continuously once they are deployed.
Revenue Growth Expected
Reuters reported earlier in October that Anthropic projects its annualized revenue could more than double or nearly triple next year, driven by rapid adoption of its enterprise products.
The deal shows how AI startups are forming closer partnerships with cloud providers to secure the specialized hardware that advanced models require. Google's TPUs compete with chips from Nvidia and other manufacturers in the growing market for AI training and deployment.
Anthropic’s Claude chatbot competes with offerings from OpenAI, Google, and other AI companies. The startup needs massive computing power to keep improving its models and stay competitive in the fast-moving AI landscape.