Marvell’s outlook for accelerated computing in the datacenter reaches a conclusion similar to the high growth rate TSMC is forecasting. While the main market concern is that AI semi demand will hit an air pocket in the near future, given that we’re currently in an unprecedented wave of AI capex investment, TSMC guided for its AI revenues to keep growing in the coming years, at a 50% CAGR over the next five. Marvell by comparison estimates the AI datacenter compute market to grow at a 32% CAGR (the blue boxes in the chart below), though the company’s CEO did caveat that it could turn out to be higher. So the below should basically be seen as Marvell’s base case:
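To put these growth rates in perspective, a few lines of arithmetic show what they compound to over five years (the 32% and 50% figures are from the article; the helper function is just an illustration):

```python
# Quick sanity check on what the quoted growth rates compound to.
# Figures: TSMC guides for a ~50% AI revenue CAGR, Marvell estimates
# a ~32% CAGR for AI datacenter compute, both over five years.

def cagr_multiple(cagr: float, years: int) -> float:
    """Total growth multiple implied by a constant annual growth rate."""
    return (1 + cagr) ** years

for name, rate in [("Marvell base case", 0.32), ("TSMC guidance", 0.50)]:
    print(f"{name}: {rate:.0%} CAGR over 5 years -> {cagr_multiple(rate, 5):.1f}x")
# A 32% CAGR roughly quadruples the market in five years (~4.0x),
# while a 50% CAGR implies ~7.6x.
```

The gap is a reminder that small differences in assumed growth rates compound into very different market sizes.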
This is Marvell’s CEO discussing the outlook for accelerated computing in the datacenter:
“I was recently at a meeting where McKinsey shared their belief that these AI innovations will unlock something like $4.4 trillion annually in economic value. Additionally, insights gathered from a number of industry conversations suggest projections that are even more ambitious. So does this CapEx make sense? The answer is yes, and it will be financed through massive gains in productivity and efficiency. We can all debate about the size, but we do know that there's a multi-trillion dollar opportunity out there. So with that context, if you look at the technology investment cycle that's going to happen over the next 10 years, the CapEx being deployed makes a lot of sense.
We've already started to see the benefit of this AI cycle into Marvell's revenues. Last year, we were over $550 million in AI-related revenue or about 10% of our company. That's almost triple from the prior year, where it was about 3% of revenue. And now the $550 million last year was almost all connectivity, including optics and some switching, and that business will nearly double this year. And then if you layer on custom silicon, we see our AI revenue this year almost tripling again to be over $1.5 billion with about 2/3 being connectivity and 1/3 being custom compute. So AI will be close to 30% of Marvell's total revenue this year on consensus estimates. And that's going to continue to grow, we see $2.5 billion as a solid base case for next year with upside if the market grows faster.”
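The trajectory in the quote can be sanity-checked with basic arithmetic; the dollar figures below are taken directly from the quote (the connectivity vs. custom compute split is left out):

```python
# Marvell's AI revenue path as laid out by the CEO:
# ~$550M last year ("almost triple" the prior year), >$1.5B this year,
# $2.5B as next year's base case.

revenue = {"last year": 0.55, "this year": 1.5, "next year": 2.5}  # $B

prior = None
for year, rev in revenue.items():
    if prior is None:
        print(f"{year}: ${rev}B")
    else:
        growth = rev / prior - 1
        print(f"{year}: ${rev}B ({growth:.0%} growth)")
    prior = rev
# this year: ~173% growth (consistent with "almost tripling again");
# next year's $2.5B base case implies ~67% growth on top of that.
```

So even the "solid base case" for next year still implies roughly two-thirds growth, which is why the CEO frames $2.5 billion as having upside rather than risk.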
So we could be looking at a scenario where AI contributes around one-third of the company’s revenues in the coming years, with custom silicon being the accelerator of AI revenue growth:
Custom Accelerators
Basically all AI training currently happens on Nvidia GPUs, and it’s very likely this will remain the case for the foreseeable future. However, the AI inference market will be much larger, and as much smaller GPU clusters are deployed here, there is less need to run everything on Nvidia’s extremely comprehensive software and hardware ecosystem. This opens the door to running more workloads on custom accelerators, something the dominant hyperscalers are keen on as it lowers their costs. Nvidia’s GPUs are not only very expensive but also extremely versatile, so if you can deploy more narrowly focused accelerators, you get cost benefits on both the manufacturing and energy consumption sides.
We’ve previously theorized that a slowdown of innovation in the AI and semiconductor ecosystems increases the likelihood of this scenario playing out. As long as the pace of innovation in these fields is rapid, Nvidia is in an extremely strong position, as it has a whopping $10 billion-plus annual R&D budget combined with the know-how and expertise to move swiftly and capture the opportunities from innovations. This is basically what the company has been doing over the last decade, and it is only accelerating, with the company now moving to a one-year GPU cadence.
However, if the cadence of innovation in these fields slows somewhat, competitors that are currently behind get an opportunity to start narrowing the gap. Marvell’s view is that custom accelerators will gradually take share from Nvidia’s general purpose accelerators, and this certainly sounds like a plausible base case. Overall, the company reckons that custom accelerators can grow at a 45% CAGR. Note that despite some share loss, Nvidia’s general purpose accelerators should also remain a highly attractive growth market (the light blue boxes in the chart below):
This is Marvell’s CEO giving his view on custom silicon in the datacenter:
“The architectures of large cloud companies are completely different. They actually design and build their own individual data centers with domain-specific infrastructure optimized for their own applications. So every hyperscale data center today is building or planning to build their own compute silicon for a portion of their workloads. We're strategically engaged with every customer and there's a tremendous amount of design activity right now across all these customers. Some of these have multiple SKUs per application for different performance reasons. This type of business is also multigenerational in nature, so when you're working on the current version, you're also typically working on the next version. It's not just one AI chip for one customer, but every 4 years, that's the cadence.
We've shared previously that we had won two sockets for two different customers. The first socket is an AI training accelerator for a U.S.-based hyperscaler. Its customers use the chip in their AI clusters, and it's ramping incredibly fast. We're planning to ramp the AI inferencing accelerator next year. So given all this, we now have multiple years of visibility on this particular program and we expect revenue to continue in the next generation as well. The second customer design is an ARM CPU for a second U.S. hyperscaler. This will be deployed in their general cloud computing platform as well as in their internal AI infrastructure. And we've won a third U.S.-based hyperscale customer for AI. It's for an AI accelerator and it's in design now, and the customer wants to take it to production in 2026.”
Note that he talked about a four-year cadence; no doubt increased investment will allow them to speed this up, but Nvidia is on a one-year cadence. Basically, Nvidia will leverage all of TSMC’s process technology and packaging improvements, and be the first to do so. At the same time they will be optimizing accelerator designs to run the latest AI algorithms, optimizing the software that runs the accelerators, and optimizing their entire AI datacenter ecosystem, from the links to the switches, optics, cooling, etcetera. As said, not many in the world will be able to compete with this.
The first AI training and inference accelerators that Marvell developed were for Amazon. Obviously this is a highly attractive client, as AWS remains the largest public cloud. The new customer has been speculated to be Meta, needless to say a player with massive scale as well, and one now positioning itself as one of the leading players in the LLM race with Llama-3.
Advanced semiconductor design is an attractive industry for investors as it possesses high barriers to entry. To develop a leading-edge chip, R&D spend can total over a billion dollars; Nvidia, for example, spent $10 billion to develop the entire new Blackwell platform. Not only does a state-of-the-art GPU these days require around 200 billion transistors, but due to area limitations it isn’t possible to place these on a single die. The solution is an extremely high bandwidth die-to-die (D2D) interconnect, making the system think that the two connected dies are actually one chip.
And due to the high memory requirements of LLMs, we similarly need a high-bandwidth, low-latency solution for memory access, which is achieved by placing HBM DRAM stacks within the same package, surrounding the compute dies. Lastly, you need state-of-the-art SerDes and I/O to handle the data flow in and out of the GPU module. And even if you can do all of this, you’ve only delivered a GPU; the customer needs an entire datacenter cluster.
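As a rough illustration of the SerDes sizing problem, consider how many lanes it takes to move a given amount of data on and off the package. The 112 Gb/s per-lane rate matches current-generation PAM4 SerDes, but the 6.4 Tb/s aggregate I/O target below is a hypothetical figure chosen purely for illustration, not the spec of any particular accelerator:

```python
import math

# Illustrative sketch: how many SerDes lanes a given aggregate
# off-package bandwidth requires. Both figures are assumptions
# for illustration, not specs of a real chip.

LANE_RATE_GBPS = 112    # per-lane rate, typical of current PAM4 SerDes
TARGET_IO_TBPS = 6.4    # hypothetical aggregate off-package bandwidth

lanes = math.ceil(TARGET_IO_TBPS * 1000 / LANE_RATE_GBPS)
print(f"{lanes} lanes of {LANE_RATE_GBPS}G SerDes for {TARGET_IO_TBPS} Tb/s")
# -> 58 lanes
```

Numbers like these are why the CEO emphasizes SerDes IP below: dozens of high-speed lanes per chip, across every chip in a cluster, all of which must interoperate at low power.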
Given that all of this has to be designed on TSMC’s most advanced process technologies, it’s clear that only a limited number of players will be able to compete in this industry. Marvell has a reasonably large R&D budget of $1.8 billion, enough to design a chip like the above, although nowhere near enough for a full Nvidia Blackwell platform. In the area of custom silicon, however, an amount like this is sufficient, and the CEO also highlighted a number of other advantages which will make it hard for smaller players to compete with Marvell:
“Our customers want to know that their key partners have sufficient R&D scale and commitment to this market long term. The benefit of this partnership is multifaceted. As an extended part of their R&D team, we're working hand-in-hand with our customers to co-architect their next-generation data centers. And by having the strategic position on the custom compute side, we gained unique insights into the next-generation architecture requirements. Not just for the custom compute but for all the connectivity, the higher layer switching and our customers' overall plans for their next generation AI architectures. So this gives Marvell a significant advantage over our competition.
Let me tell you now how we invest to win in this market. First, you need to have an immense amount of IP and technical capability, you need to operate with the leading-edge process node. Building on the success of our 5-nanometer and 3-nanometer portfolio, we're now aggressively investing at 2-nanometer. Our SerDes technology is world-class, and that's why every single hyperscale data center operator today relies on it. It goes beyond IP, we have best-in-class packaging technology, electro optics technology, analog capabilities and we focus on meeting customer needs for low-power design, seamless interoperability and more.”
Overall, custom accelerators should be a lucrative business for Marvell. The key risk is probably that as hyperscalers build up know-how and expertise over time, they might opt to take their semi design activities fully in-house. Apple and Huawei are two examples of companies from a non-semiconductor background that were able to build up world-class semi design teams over time.
For premium subscribers, we’ll do a deep dive on Marvell’s datacenter business and strategy including:
Interconnect
Pluggable optics
Silicon photonics
DCI
Switching
And a financial analysis of the firm with thoughts on valuation