Outlook in Leading Edge Semis: Nvidia, ASML, TSMC and Cadence
An overview of the space following recent events
So semis had a rough week following ASML's and TSMC's results: TSMC guided down the foundry industry's growth rate for this year, and ASML announced a weak bookings number for the first quarter. However, the multi-year outlook remains strong. There were a number of interesting events in leading edge semis over the last few weeks, and in this post we'll walk through the most interesting findings.
Outlook for leading edge semis & AI
A common thread was that each of the companies making an appearance in this post remained very bullish on the long-term outlook for AI. Despite Nvidia's whopping revenue growth last year, Marvell sees accelerated computing in the datacenter growing at a 32% CAGR over the coming five years (the blue boxes in the chart below). Note also the fairly stagnant general purpose computing market in the datacenter, i.e. CPUs, which clearly shows that all of the capex growth is going into AI accelerators:
This is Marvell’s CEO discussing datacenter capex for accelerators:
“I was recently at a meeting where McKinsey shared their belief that these AI innovations will unlock something like $4.4 trillion annually in economic value. Additionally, insights gathered from a number of industry conversations suggest projections that are even more ambitious. So does this CapEx make sense? The answer is yes, and it will be financed through massive gains in productivity and efficiency. We can all debate about the size, but we do know that there's a multi-trillion dollar opportunity out there. So with that context, if you look at the technology investment cycle that's going to happen over the next 10 years, the CapEx being deployed makes a lot of sense.”
This is ServiceNow’s Bill McDermott on the same topic:
“Every week that passes the impact of our own AI deployments continues to grow. Generative AI deflection rates have doubled for both our employees and customers, and they are improving each and every month. Software engineers are accepting 48% of text-to-code generation. These are meaningful productivity improvements, and it's only the beginning. That's why IDC estimates an $11 trillion impact from AI in the next 3 years. It's also why businesses will spend more than $0.5 trillion on Generative AI in 2027 according to IDC. So contrary to some opinions out there, we are witnessing the biggest enterprise software market opportunity in a generation.”
TSMC, who is really at the center of all of this, is seeing even stronger semi demand than Marvell and is forecasting their AI-related revenues to grow at a 50% CAGR. This is the company's CEO, CC Wei, giving their outlook on AI:
“The continued surge in AI-related demand supports our already strong conviction that structural demand for energy-efficient computing is accelerating in an intelligent and connected world. AI technology is evolving to use ever increasingly complex AI models, which needs to be supported by more powerful semiconductor hardware. No matter which approach is taken, it requires use of the most advanced semiconductor process technologies. Thus, the value of our technology position is increasing as customers rely on TSMC to provide the most advanced process and packaging technology at scale with a dependable and predictable cadence of technology offering.
We forecast the revenue contribution from several AI processors to more than double this year and account for low-teens percent of our total revenue in 2024. For the next 5 years, we forecast it to grow at a 50% CAGR and increase to higher than 20% of our revenue by 2028. Several AI processors are narrowly defined as GPUs, AI accelerators and CPUs performing training and inference functions, and do not include the networking edge or on-device AI. We expect several AI processors to be the strongest driver of our HPC platform growth and the largest contributor in terms of our overall incremental revenue growth in the next several years.”
Tesla on their earnings call confirmed TSMC’s outlook and mentioned that they are planning to increase their Nvidia H100 installed base, purely for the training of models, by 143% this year:
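As a sanity check on that 143% figure, here is a minimal sketch. The fleet sizes are assumptions based on commentary around Tesla's call (roughly 35,000 active H100s growing to roughly 85,000 by year-end), not figures from this post:

```python
# Back-of-envelope check of Tesla's guided H100 installed base growth.
# Both fleet sizes are assumed approximations, not exact company figures.
current_h100s = 35_000   # assumed active H100s today
planned_h100s = 85_000   # assumed installed base by year-end

growth = planned_h100s / current_h100s - 1
print(f"Implied installed base growth: {growth:.0%}")  # ~143%
```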
It has also become clear that Meta is becoming a serious competitor to OpenAI's ChatGPT, and that they are doing so via an open-source approach. As such, Zuck guided for AI capex to continue to increase in the coming years:
“In terms of the core AI model and intelligence that's powering Meta AI, I'm very pleased with how Llama 3 has come together so far. The 8 billion and 70 billion parameter models that we released are best-in-class for their scale. The 400-plus billion parameter model that we're still training seems on track to be industry leading on several benchmarks. And I expect that our models are just going to improve further from open source contributions. Overall, I view the results our teams have achieved here as another key milestone in showing that we have the talent, data and ability to scale infrastructure to build the world's leading AI models and services. And this leads me to believe that we should invest significantly more over the coming years to build even more advanced models and the largest scale AI services in the world.”
Overall, it's clear that we're in the midst of a hyperscaler spending spree, with Alphabet's and Microsoft's capex up 91% and 66% respectively over the last year. Looking at the trends, Amazon's AWS is likely to follow, and while Meta has been holding back for the moment, Zuck is now guiding for investment to increase over the coming years. You can also see in the chart below that Meta was coming out of a high investment phase, and while they've been gearing up their investments in GPUs over the last year, this has been financed by cuts elsewhere, e.g. the Metaverse.
The main market concern with Nvidia is that the company will hit an air pocket in demand at some stage after the current GPU capex boom. However, TSMC's CEO has extremely good visibility on GPU demand due to the orders Nvidia is placing, and the subsequent commentary from both Tesla and Meta only seemed to confirm this. On the call, TSMC's CEO was basically saying that, if you do the math, their revenues from AI will grow at a 164% rate this year:
Putting TSMC's guide into Nvidia's datacenter business, while assuming a 5% share loss to AMD's MI300, would result in Nvidia growing overall revenues at 118% this year, whereas Wall Street is only at 83%:
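A sketch of both back-of-envelope calculations follows. The revenue shares, splits and growth inputs are my assumptions for illustration: TSMC only said AI processor revenue will more than double to a low-teens share of 2024 revenue, and the Nvidia segment figures are approximations:

```python
# Reconstruction of the implied AI revenue growth from TSMC's guide.
# All inputs below are assumptions for illustration.
ai_share_2023 = 0.06    # assumed AI-processor share of TSMC's 2023 revenue
ai_share_2024 = 0.13    # "low-teens percent" of 2024 revenue (assumed 13%)
tsmc_growth   = 0.22    # assumed ~22% USD revenue growth for TSMC in 2024

# Implied growth of TSMC's AI revenues
ai_growth = ai_share_2024 * (1 + tsmc_growth) / ai_share_2023 - 1
print(f"TSMC AI revenue growth: {ai_growth:.0%}")  # ~164%

# Map the same growth onto Nvidia, assuming a 5% share loss to AMD's MI300
nvda_dc_rev    = 47.5   # assumed FY24 datacenter revenue, $bn
nvda_other_rev = 13.4   # assumed FY24 non-datacenter revenue, $bn
nvda_total     = nvda_dc_rev + nvda_other_rev

new_dc  = nvda_dc_rev * (1 + ai_growth) * 0.95  # 5% ceded to AMD
new_rev = new_dc + nvda_other_rev               # other segments held flat
print(f"Implied Nvidia revenue growth: {new_rev / nvda_total - 1:.0%}")  # ~118%
```

Small changes in the assumed 2023 AI share move the output a lot, which is why these figures should be read as rough, directional math rather than a forecast.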
The main bottleneck in satisfying AI accelerator demand over the past year has been CoWoS packaging capacity. TSMC confirmed that they are working to more than double capacity this year, but that it still won't be enough. CC Wei explains:
“Let me say it again, the demand is very, very strong, and we have done our best where we put all the effort to increase the capacity. It's probably more than double this year as compared with last year. However, it's still not enough to meet the customers' demand, and we leverage our OSAT partners to complement TSMC's capacity to fulfill our customers' need. So in Arizona, we are happy to see Amkor's recent announcement to build an advanced packaging facility that's very close to our AZ fab. Actually, we are working with Amkor and try to support all our customers in AZ for their need.”
As a result of the above drivers, TSMC is planning to build three 2nm fabs, two in Taiwan and one in Arizona. And as 2nm will become a four-horse race, with Intel and Japan's Rapidus now planning to compete in addition to Samsung, the long-term outlook for ASML is clearly very strong. We do have to note, though, that Rapidus will mostly be focused on serving smaller customers such as Jim Keller's Tenstorrent.
The market was overly focused on ASML's low quarterly bookings number, which can be extremely lumpy anyway. However, there are two points to make here. First, the previous quarter's bookings were extremely strong; it was basically a record quarter for ASML. Second, you can see in the chart below that this quarter's memory orders were once again strong, at a level comparable to memory orders during the covid semi boom:
In fact, taking out the quarterly lumpiness from ASML's orders by looking at them on a rolling half-year basis, ASML actually had its strongest memory orders ever:
So it's clear that ASML has already entered a new memory capex boom. The reason this is interesting is that AI hasn't been capacity constrained on the logic side, but HBM memory has been somewhat of a bottleneck. Basically, SK Hynix was taking up more than 50% of orders, but from this year, US competitor Micron should start putting up a better fight now that its HBM3e is qualified. Micron even commented that its HBM capacity is already almost completely sold out for the coming two years, while SK Hynix is working with customers to increase capacity in '25.
As the three large DRAM makers aim to grow their HBM businesses in the coming years, this is especially good news for ASML, as HBM is 3 to 4x more tool intensive to manufacture than standard DRAM. Additionally, more DRAM is required in AI servers, and DRAM will again see a higher insertion of EUV tools at the next node. In conclusion, memory has a high exposure to AI-related demand, and this is exactly what we're seeing in ASML's orders.
TSMC’s plan to dominate 2nm
So TSMC is not only planning to build three 2nm fabs, it is also building three advanced fabs in Arizona: one at 4nm, one at 3nm, and now also one at 2nm. CC Wei discusses:
“Each of our fabs in Arizona will have a cleanroom area that is approximately double the size of a typical logical fab. We have made significant progress in our first fab, which has already entered engineering wafer production in April with the N4 process technology. We are well on track for volume production in the first half of 2025. Our second fab has been upgraded to utilize 2-nanometer technologies to support a strong AI-related demand in addition to the previously announced 3-nanometer. Recently the last steel construction beam was raised into place, and volume production is scheduled to begin in 2028. The third fab in Arizona using 2-nanometer will begin production by the end of the decade.
We are confident that once we begin volume production, we will be able to deliver the same level of manufacturing quality and reliability in each of our fabs in Arizona as from our fabs in Taiwan. We plan to manage and minimize the overseas cost gap by first, pricing strategically to reflect the value of geographic flexibility. Second, by working closely with governments to secure their support. And third, by leveraging our fundamental advantage of manufacturing technology leadership and our large scale manufacturing base, which no other manufacturer in this industry can match. Thus, even after factoring in the higher cost of overseas fabs, we are confident to deliver a long-term gross margin of 53% that we have committed to our shareholders. At the same time, TSMC will be the most efficient and cost-effective manufacturer in the regions that we operate.
Finally, I will talk about our N2 status. Our N2 technology leads our industry in addressing the insatiable need for energy-efficient computing, and almost all AI innovators are working with TSMC. We are observing a high level of customer interest and engagement at N2 and expect the number of new tape-outs from 2-nanometer technology in its first 2 years to be higher than both 3-nanometer and 5-nanometer in their first 2 years. Our 2-nanometer technology will adopt the nanosheet transistors structure and be the most advanced semiconductor technology in both density and energy efficiency. N2 technology development is progressing well with device performance and yield on track or ahead of plan. N2 is on track for volume production in 2025 with a ramp profile similar to N3. With our strategy of continuous enhancement, N2 and its derivative will further extend our technology leadership position and enable TSMC to capture the AI-related growth opportunities well into future.”
Historically the leading node was mostly taken up by Apple, but due to the intense competition in AI accelerators, these are clearly moving to the leading edge as well. This can be witnessed in Nvidia transitioning to a one-year cadence in launching new AI datacenter accelerators. TSMC confirmed this at their recent technology forum, mentioning that AI chip designers may become their first customers at A16, beating Apple, and that 'demand from AI chip companies has made A16 research and development faster than expected'.
By my calculations, TSMC is probably installing around 180,000 wafers per month (WPM) of N2 capacity, which would be a 20% increase from N5. And these wafers will also come at much higher ASPs. N5 was already TSMC's largest node in terms of revenues, but N2 could generate well over $50 billion a year:
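A rough sizing of that revenue potential can be sketched as below. Only the ~180,000 WPM capacity estimate comes from the text; the wafer price is a hypothetical assumption:

```python
# Rough sizing of N2's annual revenue potential.
wpm = 180_000           # estimated N2 capacity, wafers per month (from the text)
asp_per_wafer = 25_000  # assumed N2 wafer price in USD (hypothetical)

annual_revenue = wpm * 12 * asp_per_wafer
print(f"Potential N2 revenue: ${annual_revenue / 1e9:.0f}bn per year")  # ~$54bn
```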
On the downside, as process complexity continues to increase, TSMC has become more cautious on gross margins:
“It is true that N3 is taking longer time to reach the corporate margin than the other nodes like N5 or N7. Before, it was like 8 to 10 quarters to reach the corporate margin but for N3, we think it will take about 10 to 12 quarters. And this is probably because N3 process complexity has increased, and also our corporate average gross margin has increased during the period. N2 is a very complex technology node so our customers, they also take a little bit longer time to prepare for the tape-out.”
TSMC guided for the N3 ramp to dilute gross margins by 3 to 4 percentage points during the second half of this year, compared to 2 to 3 percentage points in the first half. There were two other factors which could each impact the gross margin by around 1 point: higher electricity prices in Taiwan, and converting some amount of tools from N5 to insert them at N3. So this would mean that gross margins could be heading back to 50% for a brief period.
In addition, after the N3 ramp we get the N2 ramp, combined with the ramping of the Arizona fabs. So it is possible that gross margins will remain at these subdued levels in the coming years.
TSMC’s outperformance vs the semi industry
So TSMC guided down the growth rate for the semiconductor industry for the year, while maintaining their guidance for the company to grow at a 20%-plus rate. This is CC Wei again:
“Looking at the full year 2024, macroeconomic and geopolitical uncertainty persists, potentially further weighing on consumer sentiment and end-market demand. We thus expect the overall semiconductor market, excluding memory, to experience a more mild and gradual recovery in 2024. We lowered our forecast for the 2024 overall semiconductor market, excluding memory, to increase by approximately 10% year-over-year, while foundry industry growth is now forecast to be mid- to high-teens percent, both are coming off the steep inventory correction of 2023. Having said that, we continue to expect 2024 to be a healthy growth year for TSMC. We expect our business to grow quarter-over-quarter throughout 2024 and reaffirm our full year revenue to increase by low to mid-20% in U.S. dollar terms.”
While the company is more cautious on the industry overall, they are now expecting an even larger outperformance versus the rest of the industry, and the reason is simple: AI. Basically, TSMC has all the orders from Nvidia and the other AI accelerator makers, while other markets such as automotive are seeing a strong cyclical downturn.
Looking ahead to TSMC’s next quarter, despite some minor impact from the major earthquake in Taiwan at the start of the month, the company is projecting 30% growth in Q2 compared to last year’s revenues, which turned out to be the bottom of the semi cycle:
TSMC remains attractively valued for a company which is now in a new upcycle and which has a dominant position in the leading edge foundry market, with attractive revenue exposures to Nvidia and AI in general. If the company is able to generate a low to mid twenties top line growth rate this year, this is a very cheap name at 19x PE, a multiple which can only logically be explained by its location: the island of Taiwan is a geopolitical hotspot, with US generals having stated that they would blow up the fabs should China take control of the island.
TSMC's key customer Nvidia also remains very attractively valued at 32x forward EPS. This for a company which is currently dominating its industry and which is probably looking at 100% revenue growth this year.
ASML to raise its long term guidance
With AI, more demand is moving to the leading edge. Historically, Apple was the main customer for TSMC's leading process technology, but in the future AI accelerators are going to bring in a large chunk of demand as well. And these have very large die sizes, which drives up the demand for wafers and thus tools. Similarly in memory, demand for HBM is skyrocketing due to AI:
As HBM's die size is twice as large as standard DRAM's, tool intensity goes up 3 to 4 times to produce the same output.
In addition, we have competition heating up in logic with four players now vying to be the leading edge foundry. Besides the two traditional competitors TSMC and Samsung, Intel and Japan’s Rapidus are now looking to take share.
When ASML previously guided its long term financial model, this was in a world without LLMs and the subsequent explosion of AI capex. Given that TSMC sees AI already contributing more than 20% of revenues by 2028 and that HBM will start contributing 20% to DRAM revenues already in 2024, ASML’s high demand scenario for 2030 revenues is now starting to look conservative, as it is only 15% above the base case scenario:
The high demand scenario works back to EUR 66 of EPS, and putting that on a 35x multiple would result in an IRR, or total annual return, of 21%, taking into account capital distributions to shareholders. Given all of the above-mentioned drivers, investors should probably start thinking about this as a base case.
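The IRR math can be illustrated as below. Only the EUR 66 EPS and the 35x multiple come from the text; the current share price, holding period and distribution yield are all hypothetical assumptions, so treat this as a sketch rather than a forecast:

```python
# Illustrative IRR math behind a ~21% total annual return.
eps_2030   = 66     # high demand scenario EPS, EUR (from the text)
multiple   = 35     # exit P/E (from the text)
price_now  = 860    # assumed current share price, EUR (hypothetical)
years      = 6      # assumed holding period to 2030
dist_yield = 0.03   # assumed annual dividends plus buybacks (hypothetical)

target = eps_2030 * multiple  # EUR 2,310 implied price target
price_cagr = (target / price_now) ** (1 / years) - 1
# Roughly 18% annual price appreciation plus ~3% distributions, ~21% total
print(f"Price CAGR: {price_cagr:.1%}, total IRR: {price_cagr + dist_yield:.1%}")
```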
A forward PE of 35x would be around the average of the last three years’ trading range:
Which seems justified given the roadmap in scaling, on which ASML is a key play:
Cadence’s new advanced emulation tool and stock nosedive
Meanwhile, Cadence is planning to emulate chips with five times the number of transistors of Nvidia's new Blackwell B100. This is Cadence's CEO on their new tool:
“So Z3 itself, we designed this advanced TSMC chip ourselves, and this is one of the biggest chips that TSMC makes. One rack will have more than 100 of these and then we can connect up to 16 racks together. So if you do that, you have thousands of chips emulating and these are all liquid cooled, connected by optical and InfiniBand interconnect. What it can do is emulate very large systems, very efficiently. Even Blackwell, which is the biggest chip in the world right now with 200 billion transistors, was emulated on fewer racks of Z2. So now with 16 racks of Z3, we can emulate chips which are 5x bigger than Blackwell.”
Cadence had a good quarter, but the shares sold off due to a weaker Q2 guide, even though the guidance for the full year was slightly raised. The company's CFO on the guidance:
“First quarter bookings were a record for Q1, and we achieved a record Q1 backlog of approximately $6 billion. A good start to the year coupled with some impressive new product launches sets us up for strong growth momentum in the second half of 2024. Given the recent launch of our new hardware systems, we expect the shape of hardware revenue in 2024 to weigh more towards the second half as our team works to build inventory of the new system.”
Cadence was again vocal about clients using their new AI products in their software tools, something we discussed previously here. So the outlook for the company is solid, even as the hot money left the stock in search of a name which can beat in the coming months. Basically, it seems that customers are delaying their emulation purchases to the second half of the year so that they can go for the new hardware, which makes sense. With the hot money having left, the multiple has now fallen back to 43x, which is not unattractive for a company with an extremely good business, one which should be able to generate low to mid teens top line growth annually, resulting in high teens EPS growth.
So I like each of these four semiconductor names here: Nvidia, TSMC, ASML, and Cadence. Valuations are not looking stretched for what are arguably some of the best quality names in the semiconductor space, with attractive exposures to AI. In the coming days, we'll have a detailed look at Marvell's strategy in the AI datacenter. So if you don't want to miss that, make sure to hit subscribe!
If you enjoy research like this, hit the like and restack buttons, and subscribe if you haven’t done so yet. Also, please share a link to this post on social media or with others who might be interested, it will help the newsletter to grow, which is a good incentive to publish more.
I’m also regularly discussing tech stocks on my Twitter.
Disclaimer - This article is not a recommendation to buy or sell the mentioned securities, it is purely for informational purposes. While I’ve aimed to use accurate and reliable information in writing this, it can not be guaranteed that all information used is of this nature. Before making any investment, it is recommended to do your own due diligence.