Advanced Micro Devices, Inc. (NASDAQ:AMD) is set to host its “Data Center and AI Technology Premiere” today (June 13), where it is expected to provide updates on its upcoming CPU and GPU products. Following Nvidia Corporation’s (NVDA) spectacular success in its AI business, investors are likely to focus most on AMD’s upcoming data center AI chips: the MI300 series.
Although AMD will likely release more technical details about the MI300 series, it is less likely to provide concrete financial information, such as order backlog or average selling prices. As a result, investors will have to read between the lines to judge how much the AI boom is likely to impact AMD’s financials. By paying close attention to certain details, we should be able to better gauge AMD’s AI-related growth prospects in coming quarters. In this article, I discuss three key items that investors should follow.
The Timeline
Rumors are circulating about multiple variants in the MI300 series of chips. The MI300A is expected to be an APU that brings together a CPU and GPU on the same package, but there is also talk of a CPU-only MI300C, a GPU-only MI300X, and a smaller-sized MI300P.
In the last earnings call, CEO Lisa Su said that AMD “start[s] the ramp [of MI300 production] in the fourth quarter of our supercomputing wins as well as our early cloud AI wins.” The MI300 will first deploy to the El Capitan supercomputer, which is expected to clock in at 2 exaflops and become one of the fastest systems in the world. However, El Capitan will only use the MI300A variant, so we only have a reasonable production timeline for that particular chip. We do not know whether the other variants will launch and ramp concurrently, or only later in the generation once AMD has ironed out the kinks in the (extremely complicated) MI300A.
It would be useful for investors if AMD revealed more information about the (rumored) variants of the MI300 series and the expected timeframe of their launch and ramp. This information can then be used to more accurately estimate how the MI300 series will impact AMD’s financials in coming quarters.
The Workloads
AMD management has been quite optimistic about the upcoming MI300 chips, and investors will be on the lookout for partnership announcements with cloud providers. Management hinted at progress along these lines during AMD’s presentation at the J. P. Morgan Global Technology, Media, and Communications Conference:
And then on the MI300 front, we’re very pleased with the customer engagement and momentum. And we had said that engagements are at 3x today, what they might have been even at the beginning of the year as you think about this AI inflection point in the market. And we’re also happy with how MI250 has been progressing with Microsoft, in particular, as the sort of publicly announced customer for that product. And that’s sort of setting things up very nicely for MI300.
It is reasonable to expect that some partners will take the stage today. However, the mere existence of partnerships does not necessarily tell us much about their scope or content. How many chips can AMD actually expect to sell?
The answer will depend, in part, on workloads. Investors should pay close attention to any mentions of specific workloads that the MI300 series is expected to handle. As we have seen with the Zen architecture’s gains in data centers, growth has occurred to a significant extent workload by workload: AMD’s market share has expanded as its CPU chips have become competitive across a broader array of workloads. Can the MI300 help AMD chip away at GPU workloads in similar fashion?
Hopefully, we will get more information about the sorts of workloads for which AMD’s customers plan to deploy the MI300 chips. Will these chips be limited primarily to supercomputers and traditional high-performance computing, or will they also begin to make inroads into the AI segment? How do they perform on AI training and inference? Are there certain workloads at which these chips are particularly proficient? More detail on these questions would help investors get a better sense of AMD’s progress at capturing additional GPU workloads and, in turn, of its financial prospects in coming quarters.
The TCO Proposition
Finally, investors should keep a close eye on any information about the MI300’s performance per watt. Energy efficiency has been a point of emphasis for AMD for some time now, and AMD’s success on the energy efficiency front is one of the reasons that the Zen lineup has been able to wrest market share from Intel.
Improved energy efficiency is one of AMD’s aims with the MI300 series, particularly with the MI300A variant, which puts the CPU, GPU, and memory on the same package and thus reduces the energy cost of moving data between those parts. This could prove critical in the fight against Nvidia. Given Nvidia’s strong position in software, AMD needs a compelling reason why customers should choose its chips instead. Although AMD is working on improving its software capabilities, this will take time. In the interim, superior energy efficiency could lower the total cost of ownership (TCO) enough to justify choosing AMD despite the associated software challenges. Investors should therefore pay close attention to how this (multi-generational) competition is shaping up.
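To see how performance per watt feeds into the TCO argument, here is a minimal back-of-the-envelope sketch. All of the numbers (accelerator prices, power draw, electricity cost, utilization, cooling overhead) are purely hypothetical placeholders of my own, not figures from AMD, Nvidia, or any vendor; the point is only that a less power-hungry chip can win on lifetime cost per unit of work even if it is somewhat slower or cheaper up front.

```python
# Hypothetical TCO sketch. None of these figures come from AMD, Nvidia,
# or any vendor; they only illustrate how performance per watt affects
# total cost of ownership over an accelerator's deployed lifetime.

def tco_per_unit_of_work(price_usd, power_kw, perf_units, years=4,
                         utilization=0.7, usd_per_kwh=0.10,
                         cooling_overhead=1.4):
    """Rough lifetime cost to deliver one unit of sustained throughput."""
    hours = years * 365 * 24 * utilization          # powered-on, utilized hours
    energy_cost = power_kw * cooling_overhead * hours * usd_per_kwh
    total_cost = price_usd + energy_cost            # purchase price + energy
    return total_cost / perf_units                  # cost per unit of work

# Hypothetical accelerator A: cheaper and more efficient, slightly slower.
a = tco_per_unit_of_work(price_usd=15_000, power_kw=0.60, perf_units=0.9)
# Hypothetical accelerator B: faster, but pricier and more power-hungry.
b = tco_per_unit_of_work(price_usd=25_000, power_kw=0.75, perf_units=1.0)

print(f"A: ${a:,.0f} per unit of work, B: ${b:,.0f} per unit of work")
```

Under these made-up assumptions, the more efficient chip comes out ahead on lifetime cost per unit of work; with different prices, utilization, or energy rates the comparison can easily flip, which is exactly why the performance-per-watt details AMD discloses today matter.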
Conclusion
Predicting AMD’s AI prospects is likely to remain difficult for some time, as meaningful revenue contribution from the MI300 series isn’t expected until Q1 2024. Moreover, under current management, AMD’s traditional strategy has been to secure steady market share gains over multiple generations. This further suggests that it will take a considerable amount of time for a clear picture of AMD’s AI prospects to emerge. For now, investors should pay attention to the smaller details in order to glean what information they can about the future.