Amazon Web Services and OpenAI announced a multi-year partnership worth $38 billion. The deal gives OpenAI immediate access to AWS infrastructure for its AI workloads. According to AWS, the agreement will grow over the next seven years and provide OpenAI with hundreds of thousands of NVIDIA GPUs.
Massive Compute Capacity for AI Models
OpenAI will use Amazon EC2 UltraServers featuring NVIDIA GB200 and GB300 GPUs. The infrastructure can also expand to tens of millions of CPUs to rapidly scale agentic workloads. AWS plans to deploy all of the capacity before the end of 2026, with the ability to expand further in 2027 and beyond.
The deployment is architected to maximize AI processing efficiency. NVIDIA GPUs are clustered on the same network via Amazon EC2 UltraServers, a configuration that enables low-latency performance across interconnected systems. The clusters will support a range of workloads, from serving ChatGPT inference to training next-generation models.
Industry Leaders Comment on Partnership
Sam Altman, OpenAI co-founder and CEO, said scaling frontier AI requires massive, reliable compute, and noted that the partnership strengthens the broad compute ecosystem advanced AI depends on. Matt Garman, AWS CEO, said the breadth and immediate availability of optimized compute show why AWS is well positioned to support OpenAI's vast AI workloads.
Expanding Previous Collaboration
The partnership builds on earlier work between the two companies. OpenAI's open-weight foundation models became available on Amazon Bedrock earlier this year, and OpenAI quickly became one of the most popular model providers on the platform. Thousands of customers now use these models for agentic workflows, coding, scientific analysis, and mathematical problem-solving.
AWS has experience running large-scale AI infrastructure securely and reliably, operating clusters of more than 500,000 chips. OpenAI will begin using AWS compute immediately under the partnership, as the rapid advancement of AI continues to create unprecedented demand for computing power.