Amazon Web Services and OpenAI have inked a multi-year strategic deal that provides the artificial intelligence company with access to significant cloud computing infrastructure to operate and scale its AI workloads.
Under the seven-year, $38 billion deal, OpenAI will tap AWS computing resources that include hundreds of thousands of the latest Nvidia graphics processing units, extending to tens of millions of central processing units as needed to scale what the companies call "agentic workloads."
AWS said it has extensive experience operating large-scale AI infrastructure securely and reliably, including clusters of more than 500,000 chips, and that its infrastructure capabilities, coupled with OpenAI's generative AI development, will help serve millions of ChatGPT users.
The rapid advancement of AI has created unprecedented demand for computing power. According to the announcement, as frontier model providers seek to increase their systems' intelligence, they are increasingly turning to AWS for its performance, scale and security capabilities.
As part of this deal, OpenAI will immediately start using AWS compute, with all capacity targeted for deployment before the end of 2026 and potential expansion continuing into 2027 and beyond.
The infrastructure AWS is building for OpenAI features an advanced architectural design optimized for maximum efficiency in processing AI workloads. Clustering Nvidia GPUs, both GB200s and GB300s, via Amazon EC2 UltraServers on the same network enables low-latency performance across interconnected systems, allowing OpenAI to run its workloads efficiently and with high performance.
The clusters are designed to serve a range of workloads, from serving inference for ChatGPT to training next-generation models, with the flexibility to adapt to OpenAI's evolving requirements.
"Scaling frontier AI requires massive, reliable compute," said Sam Altman, OpenAI co-founder and CEO. "Our partnership with AWS strengthens the broad compute ecosystem that will power this next era and bring advanced AI to everyone."
Matt Garman, chief executive of AWS, said: "As OpenAI continues to push the boundaries of what's possible, AWS's best-in-class infrastructure will serve as a backbone for their AI ambitions. The breadth and immediate availability of optimized compute demonstrates why AWS is uniquely positioned to support OpenAI's vast AI workloads."
The announcement extends a partnership between the companies to provide AI technology to organizations globally. Earlier this year, OpenAI foundation models became available on Amazon Bedrock, bringing additional model options to millions of AWS customers.
OpenAI has become one of the most popular publicly available model providers in Amazon Bedrock, used by thousands of customers for agentic workflows, coding, scientific analyses, mathematical problem solving, and more, including Bystreet, Comscore, Peloton, Thomson Reuters, Triomics and Verana Health.
The $38 billion figure is one of the largest cloud computing deals announced publicly and reflects both the substantial infrastructure needs for training and running advanced AI models and the confidence of OpenAI in sustained demand for its services.
The partnership also brings AWS a high-profile customer that showcases its capabilities at scale while securing substantial revenue over the duration of the agreement. For OpenAI, this arrangement provides reassurance that necessary computing resources are available as it develops increasingly capable AI systems needing ever-greater processing power.
The timing is important because OpenAI is vying with competitors such as Google, Anthropic, and others working on large language models and applications of AI. Access to reliable, scalable infrastructure represents one of the key competitive factors in this market.
The agreement also reflects OpenAI's strategy of partnering with multiple cloud providers, rather than relying on one vendor exclusively. It maintains relationships with Microsoft—its largest investor—while now substantially expanding its AWS footprint.
Industry observers point out, however, that the immense capital commitments required for AI infrastructure create barriers to entry that favor well-funded companies, potentially at the expense of smaller competitors that cannot marshal such resources.
Whether the computing power AWS plans to deliver will be enough for OpenAI's ambitions, especially as the company works toward what it calls "artificial general intelligence," remains unclear. AI researchers are divided on whether current scaling techniques will continue to pay dividends or whether deeper architectural breakthroughs will be needed.
A seven-year partnership indicates that both companies believe demand for AI computing resources will persist, although the technology landscape can change drastically over such a period.