Microsoft’s Satya Nadella appeared on the Dwarkesh podcast and offered a few interesting insights. Firstly, the data center projects Microsoft walked away from earlier in the year were specialized capacity for OpenAI’s AI training and inference workloads, rather than general-purpose Azure expansion. On recent earnings calls, Satya has consistently highlighted the importance of flexibility in Azure’s infrastructure, so that the cloud can service a wide variety of workloads for Microsoft’s long tail of customers. Data centers that are overly reliant on a single customer, e.g. OpenAI, risk underutilization if that one key customer’s growth disappoints. Thus, while OpenAI has been signing massive contracts in recent months on the back of aggressive growth forecasts, Microsoft is limiting their risk by focusing on a wide variety of enterprise and AI customers.
Secondly, when it comes to running workloads in the cloud, Nadella described Azure as the “customer’s portal to the entire AI stack”. Microsoft’s strategy is to provide high-margin web services in their cloud, such as storage, databases, networking, security, and other apps, without needing to own the raw GPUs for every job. Nadella explains:
“Azure is the portal... For pure GPU workloads—like massive training runs or inference at scale—we can burst to neoclouds. It’s fine for us because now we have line of sight to demand. If they bring their capacity into our marketplace, that customer uses the neocloud for compute, but pulls storage, databases, all the rest from Azure. That’s a great win-win. They get specialized capacity fast, and we capture the full workflow.”
We see this as a sound strategy. Azure has a wide portfolio of cloud web services, with its only real competitors being Amazon’s AWS and Google Cloud Platform. At this stage, with future AI growth rates hard to predict, Microsoft can leverage capacity from a variety of neoclouds, control their capex expansion and remain free cash flow positive. The more attractive and sustainable AI workloads can selectively be powered by the Azure platform, with excess demand being outsourced to neoclouds. In the future, more of those workloads can always be brought in-house, with bursty AI demand still being offloaded to neoclouds.
Thirdly, while OpenAI’s apps can be powered by other clouds, the APIs will run exclusively on Azure. This is an important point, as the latter part of the business will likely be very lucrative: a large amount of third-party software and apps connect to OpenAI’s APIs, sending their queries to the models as JSON over HTTPS. If OpenAI’s consumer-facing app can at some stage disrupt Google Search, it could obviously be even more lucrative, but Nadella is happy for those workloads to also be powered by other clouds. The likely reason is that Microsoft doesn’t want to send their capex levels to stratospheric heights while future AI growth rates remain hard to predict. So, Nadella is striking a balance here: Azure can still accelerate growth and achieve highly attractive 30-40% growth rates, while the company focuses on selectively expanding capacity where there should be long-term, sustainable demand.
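To make concrete what “connecting to OpenAI’s APIs over JSON” looks like in practice, here is a minimal sketch of such a call in Python. It assumes the `requests` library and an API key stored in an environment variable; the model name and prompt are purely illustrative, not drawn from the podcast.

```python
# Minimal sketch of a third-party app calling OpenAI's Chat Completions API:
# a JSON payload sent over HTTPS with a bearer API key.
import os
import requests

API_KEY = os.environ["OPENAI_API_KEY"]  # assumes the key is set in the environment

response = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    json={
        "model": "gpt-4o-mini",  # illustrative model name
        "messages": [
            {"role": "user", "content": "Summarize Azure's role in the AI stack."}
        ],
    },
    timeout=30,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

Every such request from a third-party app is API traffic that, per Nadella’s comments, would run on Azure.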
Fourthly, Nadella explicitly confirmed that Microsoft has full access to all of OpenAI’s IP, with one notable exception: consumer hardware, i.e. any future OpenAI-branded devices. While Sam Altman will no doubt try to shield some of OpenAI’s innovations in training and inference algorithms, Nadella implied here that Microsoft would have access to everything. This makes it fairly likely that Microsoft will also be able to build their own LLM capabilities in the long term, an effort Nadella referred to as MAI (Microsoft AI).
Finally, Nadella envisions their software products, such as Office 365, and their Azure platform being utilized by AI agents. The business model should thus evolve from per-user licenses to per-agent provisioning, which could balloon the TAM:
“Our business, which today is an end-user tools business, will become essentially an infrastructure business in support of agents doing work... It’s not per-user anymore, it’s per-agent.”
The only counterargument we’d make here is that much of these software products’ key functionality is also available via open-source competitors. While these open-source apps are typically lower quality, thanks to AI their code bases could be improved to increase functionality and reliability. Thus, while we can see a bull case for software, as agents too might need to buy licenses or usage credits, there is also increased risk now that writing code has become more commoditized. Coding on a regular basis ourselves, we estimate that a programmer’s productivity currently increases by 5 to 10x with the use of LLMs. And this will only continue to improve now that LLM agents are available directly in popular coding environments, so there is no longer any need to copy-paste code from Claude’s web portal.
For subscribers, we will dive further into developments in AI. We will discuss CoreWeave, as well as some accounting tricks the company is playing. Next, we will give the outlook for AMD and, finally, we will highlight a smaller, hardcore engineering firm in robotic AI that we came across. We’ll also give an overview of our current buy list in the universe of stocks that we track.

