Company Management
Jean Hu – Executive Vice President, Chief Financial Officer, and Treasurer
Lisa Su – Chair and Chief Executive Officer
Mitch Haws – Head of Investor Relations
Analysts
Aaron Rakers – Wells Fargo
Chris Danely – Citi
Christopher Rolland – SIG
Harlan Sur – JP Morgan
Harsh Kumar – Piper Sandler
Joseph Moore – Morgan Stanley
Matt Ramsay – TD Cowen
Stacy Rasgon – Bernstein Research
Timothy Arcuri – UBS
Toshiya Hari – Goldman Sachs
Vivek Arya – Bank of America Securities
Operator
We will now be conducting a question-and-answer session. And the first question comes from the line of Matt Ramsay with TD Cowen. Please proceed with your question.
Matt Ramsay
Yes, good afternoon, and thanks for taking my questions, and congrats on the results. I guess, Lisa, my first question is around the Datacenter business. I think we’re all, across the industry, observing a shift in workload and spending patterns like maybe we’ve, arguably, never seen. And your company is in a great position to participate on both sides of that on the CPU strength, and obviously in the AI space. Last quarter, you had given us some metrics around potentially being able to grow your datacenter business by 50% in the second-half of the year versus the first-half. And maybe you could give us a little bit of an update on how you’re thinking about that milestone and the drivers of growth across CPU and accelerator for the back-half? Thanks.
Lisa Su
Yes, sure, Matt. Thanks for the question. So, you’re absolutely right. It’s a very dynamic market right now in the datacenter. We certainly see — let me go through some of the pieces. So, on the positive side, we certainly see that acceleration of AI demand. From our standpoint, we see it in a couple of ways. We have a number of design wins in AI deployments as the CPU that goes with GPUs, as well as other accelerators. So, in the head nodes, we’ve seen that positive on the CPU side. We’ve also seen some strong interest in our MI250 accelerator, which is currently shipping. And we see very strong pull on the MI300 accelerators that are starting production in the fourth quarter.
So, those are the positive market dynamics as we go into the second-half of the year. We also see some of the softer cloud spend that is happening outside of AI as some of the cloud vendors are optimizing their CapEx. And enterprise, I would say is still on the weaker side. But with all that in place, we are expecting a large ramp in second-half for our Datacenter business, and weighted towards the fourth quarter. And we are still looking at a zip code of, let’s call it, 50% plus or minus second-half to first-half. So, it’s a big ramp, but when we look at all the components, I think that the customer pull is certainly there. And it’s exciting to be in this part of the industry.
Matt Ramsay
Thank you for that, Lisa. I guess as my follow-up, still sticking with the Datacenter business. Your company is aggressively trying to ramp both the hardware and the software side of the MI300 programs to support AI. There’s been some conflicting reports as to whether all of those deployments are on time. I think you’ve, in the prepared script, said what you guys think about that. I guess my question is really around the software work and the hardware itself that you’re doing with your lead customers. Maybe you could talk a little bit about, firstly, how the customer feedback has been on the performance of the hardware itself?
And secondly, how you think the software work you’re doing with your lead customers will translate into other customer deployments as we work through next year? Thanks.
Lisa Su
Yes, sure, absolutely. So, if I give you just some color on how the customer engagements are going, there’s very strong customer interest across the board in our AI solutions, that includes, let’s call it, multiple tier 1 hyperscalers that we’re engaged with. It includes some large enterprises. And it also includes this new category of some of these AI-centric companies that are sort of very forward-looking in terms of how they’re deploying and building AI solutions. So, from that aperture, we made a lot of progress with our ROCm software stack.
I’m actually — there is a lot more to do, but I would say the progress that we’ve made has been significant. We’re getting lots of feedback from those lead customers. We’re seeing the benefits of the optimization, so working also on the higher-level model frameworks, the work that we’re doing with the PyTorch Foundation, the work that we’re doing with ONNX, with Triton. And the key is we’re getting significant real-time feedback from some of these lead customers. So, we’re learning at a very fast pace. In terms of the feedback on performance, a number of companies have now been able to look at MI250 across a broad range of workloads, and that’s a good translation as you go to MI300, and the feedback has been quite positive.
We have customers sampling either on our lab systems, they’re accessing the hardware, or sampling in their labs. And I would say, so far very positive. The pull is there. There is a lot of work to be done, but we feel very good about the progress of our overall AI solutions for the Datacenter.
Operator
And the next question comes from the line of Aaron Rakers with Wells Fargo. Please proceed with your question.
Aaron Rakers
Yes, thanks for taking the question. Just building on Matt’s comments or question, I just want to go back to the implied revenue for the Datacenter business for the back-half of the year. Jean, I think, last quarter, you had alluded to, for the full-year, the expectation is still growing 10% or double digits, I should say, for the full-year the Datacenter business, just confirming that. And what I’m really trying to ask is, given the guidance of flat year-over-year growth in Datacenter in 3Q, it would seem, if my math is correct, you’re implying a 50% or so increase sequentially into 4Q. I’m just trying to frame exactly how you’re thinking about the cadence of what 4Q looks like, underpinning that expectation?
Jean Hu
Hi, Aaron. Thanks for the question. I think as Lisa just mentioned earlier, it’s a very dynamic market. There are puts and takes. We have tremendously strong momentum with our product portfolio, but there is continued softness in the enterprise market, and also, call it, the cloud optimization is still ongoing. So, overall, on balance, we think year-over-year it’s probably more like high single-digit growth. It’s a really strong ramp, not only in Q3, right, sequentially growing double-digit — strong double-digit. And in Q4, of course, we’re going to see a continued strong sequential ramp.
Aaron Rakers
Yes, that’s helpful, Jean. And then just following up on that as well, how have you guys managed through, with that ramp in mind, the supply chain side? I know that your manufacturing partners talked about expanding their capacity significantly. Just curious of what you’re seeing as far as being able to fulfill that degree of demand as we look into, not just this quarter, but into 4Q?
Lisa Su
Yes, sure, Aaron. So, we have been really investing in our supply chain, the Datacenter growth is so strategic to us, that this has been part of the strategy. So, if you look at all aspects of the supply chain, from the wafers to the backend capacity, to some of the specific components that you need to do something of the class of MI300, we’ve worked with the entire supply chain. We feel that we have ample supply for an aggressive ramp in the fourth quarter and into 2024. But this is certainly one of the areas that we spent quite a bit of time to ensure that we do have that confidence.
Operator
And the next question comes from the line of Toshiya Hari with Goldman Sachs. Please proceed with your question.
Toshiya Hari
Hi, thank you so much for taking the question. My first one is on the Datacenter business as well. And I just wanted to follow up on the Q3 to Q4 dynamic. And I do apologize if I missed this, but within the implied growth rate in Datacenter in Q4, can you speak to what percentage is supercomputing? I think there is a big project that’s slated to ship in Q4. And is there any contribution from the Instinct series outside of supercomputing as well, or is it primarily your server CPU franchise?
Lisa Su
Yes, sure, Toshiya. Thanks for the question. So, as Jean said, into the third quarter, we expect double-digit sequential growth in Datacenter, that’s primarily EPYC. So, that’s primarily the Zen 4, let’s call it the combination of Genoa and Bergamo, as that continues to ramp. As we go into the fourth quarter, there is an implied significant ramp in revenue. I think there are multiple components to that. So, there is — the server CPU side will continue to ramp as we see Zen 4 ramp. There is a sort of large, call it, lumpy supercomputer win, so our El Capitan win will be in the fourth quarter primarily, with a little bit in the first quarter.
And then we will have contribution from both MI300X going to large AI customers as they start their initial ramps, as well as MI250s with a number of customers who have now — view that as a very good option for some of the workloads that are not necessarily the largest language models or the largest parameters, but let’s call it more sort of the other AI workload. So, those are the components of the fourth quarter implied growth. Lots of pieces to it, but clearly a big piece of it is the MI300 ramp.
Toshiya Hari
That’s helpful, thank you, Lisa. And then shifting gears a little bit and follow-up question on the Client side. You talked about the business returning to profitability in Q3, which is great. But you’re still well below where you were in ’21 and ’22 from an operating margin perspective. Can you speak to the competitive landscape in the client business? Is there a path back to, call it, 20%, 30% operating margins there? And do you have any cost initiatives ongoing to get you back to that level of profitability in Client? Thank you.
Lisa Su
Yes, sure. Maybe let me start, and then maybe Jean can add some comments. So, look, I think the PC business has been fairly volatile over the last number of quarters, from the pandemic highs to some of the inventory digestion that we were all dealing with. I’m pleased to say that I think the growth that we saw during the second quarter and that we see in the second-half is the strength of our product portfolio. I think the Ryzen 7000 series is doing well, there’s good customer pull. I think from a competitive dynamic standpoint, the business is always competitive, but we feel good about it.
The most important thing that was a little bit of a drag on operating margins was the revenue being low, as well as some of the — we had a case where the sell-in was below consumption as we were normalizing inventory levels in the supply chain. As we get past that, what we see is I think the Client business continues to grow. We believe that Client will grow into 2024 as well. In terms of some of the cost initiatives, we have been, let’s call it, optimizing sort of the overall R&D footprint, but maybe I’ll let Jean comment some more.
Jean Hu
Yes, on the OpEx side, the team has done a great job during this process to really optimize the investment in the Client segment to be more efficient and effective. If you look at the overall company level, our OpEx has been largely flattish. But we are investing in AI, Datacenter, and the strategic priorities we have which generate a much higher return on investment. So, we have optimized it. We feel pretty good about this level of operating expense to continue to invest in the Client segment. As Lisa mentioned, it’s really about revenue. With the operating leverage the model generates, we should be able to get back to 20%.
Operator
And the next question comes from the line of Harlan Sur with JP Morgan. Please proceed with your question.
Harlan Sur
Yes, good afternoon, and thank you for taking my question. Good to see the quarter-over-quarter inflection in your EPYC business targeted at enterprise customers. I think you did mention a continued muted environment in enterprise. But the team continues to drive share gains with global corporations, and you’re ramping Genoa. Are you anticipating your enterprise segment to contribute to the strong second-half growth profile of your Datacenter business?
Lisa Su
Yes, thanks for the question, Harlan. Look, enterprise business is very strategic to us. We feel that we’re underrepresented. It’s a place that we’re putting more resources because, again, when we look at the value proposition of Genoa and entire Zen 4 portfolio, we think it plays very well into the enterprise. So, pleased to see the growth in the second quarter. We do believe that we’re on a path to continue to grow into the second-half of the year, and beyond. And the key here is also investments in some of the go-to-market activity, so investing in more business development folks that can call directly on these enterprise customers, together with our OEM partners, and ensure that our value proposition is very well understood.
Harlan Sur
Perfect, thank you. And then on accelerated compute, general purpose compute demand might be muted in China, but there is a significant amount of unmet demand for accelerated compute in this region. And I know there were performance thresholds put in place last year, and maybe the U.S. government might lower that performance threshold again soon, I’m not sure. But let’s say barring that, has the team looked into developing China-specific SKUs, whether they are MI250 or your new MI300 platforms? It seems like the opportunity here is quite large.
Lisa Su
Yes, Harlan, look, China is a very important market for us, certainly across our portfolio, as we think about certainly the accelerator market. Our plan is to of course be fully compliant with U.S. Export controls, but we do believe there’s an opportunity to develop product for our customer set in China that is looking for AI solutions, and we’ll continue to work in that direction.
Operator
And the next question comes from the line of Vivek Arya with Bank of America Securities. Please proceed with your questions.
Vivek Arya
Thank you for taking my question. The first one, just a clarification, would it be reasonable to assume that your GPU accelerator sales could be about, say, $500-ish million this year, so about 7%, 8% of datacenter sales? And if that is the right number, does it mean your server CPU sales are effectively flattish year-on-year this year?
Lisa Su
Yes, Vivek, I don’t know that I would go into quite that granularity. What we will say is the GPU sales in the first-half of the year were very low as we were sort of in a product transition. In terms of timing, as we go into the second-half of the year, in particular the fourth quarter, we’ll have the MI300 ramp. I think your number may be a little bit high in terms of the GPU sales, but overall in general, I think our expectation is that, as Jean said, the datacenter business, given all of the market dynamics, we see it up high single-digits year-on-year, and we see a much better second-half compared to first-half. And I think the product portfolio and the ramp of Genoa and Bergamo, as well as the ramp of MI300, are key components of the second-half ramp.
Vivek Arya
Thank you, Lisa. And for my follow-up, just kind of a broader question on AI accelerators in the commercial market, so I’m excluding the Supercomputing, the El Capitan projects, et cetera. What is AMD’s specific edge in this market? You know there are already strong and established kind of merchant players, there are a number of ASIC options, a number of your traditional competitors, Intel and others, and several startups are also ramping.
So my question is, what is AMD’s specific niche in this market? What is your value proposition and how sustainable is it, because you’re just starting to sample the product now. So, I’m trying to get some realistic sense of how big it can be and what the specific kind of niche and differentiation is for AMD in this market?
Lisa Su
Yes, sure, Vivek. So, maybe let me take a step back and just talk about sort of our investments in AI. So, our investments in AI are very broad, and I know there’s a lot of interest around datacenter, but I don’t want us to lose track of the investments on the edge as well as in the client. But to your question on what is our value proposition in the datacenter, I think what we have shown is that we have very strong capability with supercomputing, as you’ve mentioned. And then, as you look at AI, there are many different types of AI, if you look across training and inference, sort of the largest language models and what drives the performance there. When we look at MI300, MI300 is actually designed to be a highly flexible family of products that looks across all of these different segments.
And in particular, where we’ve seen a lot of interest is in sort of large language model inference. So, MI300X has the highest memory bandwidth, has the highest memory capacity. And if you look at that inference workload, it’s actually very dependent on those things. That being said, we also believe that we have a very strong value proposition in training as well. When you look across those workloads and the investments that we’re making, not just today, but going forward with our next generation MI400 series and so on and so forth, we definitely believe that we have a very competitive and capable hardware roadmap. I think the discussion about AMD, frankly, has always been about the software roadmap, and we do see a bit of a change here on the software side. Number one, we’ve put a tremendous amount of resource on it. So, bringing together our former Xilinx software team with the AMD base software team, we’ve dramatically increased the resources. And also the focus has now been on optimizing at these higher-level models.
So, if you think about the frameworks around PyTorch and Triton and ONNX, I think many of the new AI-centric companies are actually optimizing at a different level, and they’re working very closely with us. So, in this place where AI is tremendously exciting, I think there will be multiple winners. And we will be the first to say that there are multiple winners. But we think our portfolio is actually fairly unique in the sense that we do have CPUs, GPUs, we have the accelerator technology with Ryzen AI on the PC side as well as on the embedded side with our Xilinx portfolio. So, I think it’s a pretty broad and capable portfolio.
Operator
And the next question comes from the line of Stacy Rasgon with Bernstein Research. Please proceed with your question.
Stacy Rasgon
Hi, guys. Thanks for taking my questions. I wanted to first go back to the Q4 datacenter guide. So, if I do my math right, it’s something like $700 million sequentially in datacenter from Q3 to Q4. So, how much of that is MI300 versus CPU? And given the lumpiness of the El Capitan piece, what does that imply for the potential seasonality into Q1 as most of it rolls off?
Lisa Su
Yes, sure. So, it is a large ramp, Stacy, into the fourth quarter. I think the largest piece of that is the MI300 ramp. But there is also a significant component that’s just the EPYC processor ramp with, as I said, the Zen 4 portfolio. In terms of the lumpiness of the revenue and where it goes into 2024, let me give you a few pieces.
So, I think there was a question earlier about how much of the MI300 revenue was AI-centric versus, let’s call it, supercomputing-centric. The larger piece is supercomputing, but there is meaningful revenue contribution from AI. As we go into 2024, our expectation is, again, let me go back to the customer interest on MI300X is very high. There are a number of customers that are looking to deploy as quickly as possible.
So, we would expect early deployments as we go into the first-half of 2024, and then we would expect more volume in the second-half of ’24 as those things fully qualify. So, it is going to be a little bit lumpy as we get through the next few quarters. But our visibility is such that there are multiple customers that are looking to deploy as soon as possible. And we’re working very closely with them to do the co-engineering necessary to get them ramped.
Stacy Rasgon
But like, of the $700 million, it’s like $400 million of it El Capitan or is it $500 million or $300 million like how big is the El Capitan piece?
Lisa Su
You can assume that the El Capitan is several hundred million.
Stacy Rasgon
Several hundred, okay. For my follow-up, just on gross margins coming up in the second-half. I mean, they still kind of missed in the quarter. I know they rounded up to 50%, but they were 49.7%. I know you’re guiding 51% for Q3. Jean, where do you see gross margins sitting in Q4 as we exit the year?
Jean Hu
Yes, I think the primary driver of gross margin for us, as we discussed in the past, is really mix. And if you look at our guidance or outlook for Q3, gross margin of 51%, it’s more than one percentage point of improvement sequentially despite a very significant headwind from the embedded business declining in Q3. So, the Datacenter and Client businesses are expected to grow double-digit sequentially and provide a positive impact on gross margins, which actually more than offsets the headwind from the embedded business.
So, going into Q4, again, we’re not guiding Q4 and it’s going to depend on mix. I would say one thing is you will have similar dynamics, right. Datacenter is expected to grow very significantly. At the same time, we’re going to have the same headwind from the embedded business declining sequentially. So, overall, we do expect gross margin to improve from this level going forward.
Operator
And the next question comes from the line of Joe Moore with Morgan Stanley. Please proceed with your question.
Joseph Moore
Great, thank you. You’ve talked about the embedded business declining as you move into the second-half. Can you give us a sense for how much? And is that decline a function of the comms infrastructure market, or are you seeing weakness beyond that part of the market?
Lisa Su
Yes, sure, Joe. Thanks for the question. So, look, when I look at the embedded business, I think we should start by remembering that we’re coming off of six quarters of very strong growth. I mean, this business has performed extremely well and very pleased with the overall momentum in the business.
To your exact question of what we’re seeing in the markets, we’re actually seeing the core markets hold up pretty well, so let’s call it aerospace and defense, strong; industrial, vision, and healthcare, strong; test and emulation, strong. We are seeing communications weakness. So, that is the primary driver of the second-half commentary. And there’s also some inventory optimization, as you might expect, since our lead times have come down over the last several months. So, in terms of zip code, I would say think of it as double-digit down sequentially in the third quarter, and that’s the current view that we have. But overall, the business has been extremely strong for us, so I think this is an expected decline as we come off the cycle.
Joseph Moore
Great. And any sense for beyond this quarter, since we’ve asked you so many Q4 questions already today, but any sense is that kind of the bottom level or do you expect there to be some continued contraction?
Lisa Su
As Jean would say, we’re not guiding for the fourth quarter, but I think you should expect embedded sort of in that similar zip code. Yes, that’s what I would say.
Operator
And the next question comes from the line of Timothy Arcuri with UBS. Please proceed with your question.
Timothy Arcuri
Thanks a lot. Jean, my first question is on inventory. You said it’s going to come down a bit as you ramp into Q4, and obviously, you have a big Q4. Can you sort of shape that out for us? Before this, normalized inventory days were kind of 90 to 100 days. Where do you think you’re going to exit Q4 in terms of inventory days?
Jean Hu
Yes, Tim, thanks for the question. I think as we ramp those product lines in Q3 and Q4, you will see inventory come down, first in Q3 and in Q4 again. I think days of inventory probably will be around 110 to 120 days. The key thing is, right, if you look at a lot of our products, they are on advanced process technologies, five nanometer, four nanometer, six. The manufacturing cycle tends to be long. So, in the longer term, you should expect our days of inventory to be more around 100 to 120 days versus traditionally like 80 days or 75 days. That would be too short for the most advanced process technologies.
Timothy Arcuri
Thanks a lot. And then, my follow-up is for you, Lisa. I mean, if you kind of add up the units, the customer interest, you can easily get to several hundred thousand units, it seems to me, for the MI300X next year. So, the question really is on the supply chain, and particularly CoWoS. Do you think that’s going to be a bottleneck for you? I know that they’ve been expanding capacity. I know you’ve been trying to procure more there. Can you sort of talk about that, and do you think that supply could become a limiting factor for you next year? Thanks.
Lisa Su
Yes, absolutely. So, I’m not going to comment on the exact units, but what I will say is that we’ve been focused on the supply chain for MI300 for quite some time. It is tight. There’s no question that it’s tight in the industry. However, we have sort of commitments for significant capacity across the entire supply chain. So, CoWoS is one piece of it, high bandwidth memory is another piece of it, and then just the general capacity requirements. And look, our goal is to make this a significant growth driver for AMD. I think it’s a great market opportunity. We love the engagements with customers, and it’s our responsibility to provide the supply for the demand, so that’s what we’ve been working on.
Operator
And the next question comes from the line of Christopher Rolland with SIG. Please proceed with your question.
Christopher Rolland
Hey guys, thanks for the question, and more on the MI300 opportunity that you guys called out as a multibillion dollar growth opportunity. I was wondering if perhaps you could put a time frame around that multibillion dollar opportunity. But more specifically, have you guys ported over LLMs to MI300? Have you looked at the performance? How do they perform? Are you excited about that? And then, in terms of hyperscale uptake, is it the X version, the GPU-only version, that you expect to be the biggest seller here? And have you had any semi-custom kind of configurations here that potentially might even include an FPGA or other kind of Lego movements on the MI300? Thank you.
Lisa Su
Sure. So, there were a lot of aspects to that question, Chris. So, let me try to give you some framework here. I don’t think we’re ready to talk about timing yet of revenue numbers. What we will say is we do believe it’s a multibillion dollar opportunity. I think 2024 is a very important year for us.
Ramping MI300 in multiple customers over the next several quarters is very important. I think I mentioned earlier in the Q&A that the customer interest is actually diverse, which is great. It includes sort of what you would expect in terms of the large Tier-1 hyperscalers. But I think these new class of sort of AI focused companies have been working very closely with us, and then some of the large enterprises are also looking at ramping up their efforts. The performance that we see is strong. I think the large language model work that we’ve done, we’ve done a lot of it on MI250, and we’ve seen very good results that’s on both training as well as inference. I think as we go through MI300 again, the early results are strong.
For AI applications, what we’re seeing now is MI300X. So, let’s call it the GPU only version is the one that is sort of most prevalent in the AI customer engagement. But the MI300A, actually, which is sort of where we have the CPU and the GPU more closely coupled together is also of interest. So, I think the key is I think we’ve built a platform that does allow people to kind of choose what is best for the models and for the workloads that they’re trying to enable. And that’s what we’re working on.
Christopher Rolland
Great. And just as a quick follow-up, then, on Siena. Telco is a market kind of owned by your competitor there. They have a lot of software around Telco. What kind of share do you think you can take in the Telco market from them over the next few years?
Lisa Su
Yes, we’re excited about Siena. I think Siena fits, again, as you said, a niche that we haven’t previously been focused on. I think our interactions with the Telco suppliers are that they’re anxious to have Siena be a part of their portfolio. Siena is also one that we’ll use for other edge applications, or let’s call it lower-end applications that need the performance of Zen 4, but perhaps not the heavy platform that we have on Genoa and Bergamo. So, we do think we’re starting from a very low point, so there’s an opportunity to gain share over the next couple of years, and we’ll focus on that.
Operator
And the next question comes from the line of Chris Danely with Citi. Please proceed with your question.
Chris Danely
Hey, team. Thanks for squeezing me in. Lisa, so if the MI250, 300, et cetera ramp, or the revenue, is mostly GPU only, what kind of impact would that have on AMD gross margin? Would that still be gross margin accretive or dilutive, or net neutral to your corporate gross margin?
Lisa Su
Yes, thanks Chris. Let me just make sure I get the statement clear. So, both MI300A and MI300X will be part of the ramp, particularly in the fourth quarter. And as we go into next year for the AI specific applications, we are more heavily weighted towards MI300X, just given sort of where the software is written. And to your question about gross margins at the corporate level, so we would expect that our AI business will be accretive to gross margins at the corporate level. And obviously, as you start the ramp, there’s a little bit of learning, but overall we expect it to be accretive to our corporate gross margins.
Chris Danely
Great. And then, for my follow-up, I just had — I guess a clarification. So, it sounds like most of the MI revenue you have in the hopper right now, at least the committed revenue, is El Capitan, is that true? And do you have other, I guess, confirmed or hard orders for that? Or maybe just spend some time telling us how you’re working with the customers, or what it takes for them to go from, “Hey, we are interested,” to, “Here is the purchase order?”
Lisa Su
Yes. So, maybe if your question is do we have other customers who are committed to MI300 other than El Capitan, the answer is yes. We have a number of customers who are actually committed. And the way these things go, actually, is not very different than how a server ramp goes, right? I mean, one starts with an initial deployment, ensures that the software works, ensures that we have all of the reliability and capability in the datacenter, and then they ramp from that. I will say the difference in AI deployments is, I think, customers are willing to go very quickly. There is sort of a desire and agility because we all want to accelerate the amount of AI compute that’s out there. And so, the speed at which customers are engaged and customers are making decisions is actually faster than it would be in sort of a normal, regular environment. And that’s great. I think that’s helping us, as I said earlier, learn, perfect the software, and get all of the capabilities in place for a significant ramp next year.
Operator
Okay. And our final question comes from the line of Harsh Kumar with Piper Sandler. Please proceed with your question.
Harsh Kumar
Yes, hey, guys. Thanks for letting me ask the question. And Lisa, we are looking forward to an exciting second-half for your company. I had a quick question on server share. Do you think that there is a theoretical limit to the share that AMD can get? Historically, we initially heard 80/20 was the prevailing rule, and then you busted through that. Now we are hearing customers say 70/30 is more like it. More importantly, are there any large vendors for your server business where you have significantly more than 30% share, let’s say, 40% or even 50% share? And I have a follow-up.
Lisa Su
Yes, sure, Harsh. Thanks for the question. Look, in the server business, I think the most important thing for our customers is that we have a strong roadmap, and it’s a roadmap that they can count on. And we’ve been building that sort of working model, that roadmap, and the trust over the past four or five years. So, I don’t think there is any theoretical cap on AMD share. I would say, if we look today, there are multiple customers who have us deployed in their datacenters more than 50% share. And from our view, the place where we have perhaps been a bit more underrepresented is in the enterprise. And that’s just a matter of sort of the breadth of enterprise customers and the breadth of enterprise software. So, we believe that we have leadership today, and we are very, very focused on ensuring that we continue leadership in the market. And with that, there is an opportunity to continue to gain share in the server market.
Harsh Kumar
Thank you, Lisa. On my second one, can you help us think a little bit about the generative AI spend? Let’s say, if you can express some kind of metric, how many dollars of spend today are you seeing from your customers on generative AI for, let’s say, each dollar of regular server CPU spend? Is there a metric that we can think of? Is there a trend today? And where do you think it can be in a couple of years?
Lisa Su
Yes. I think, Harsh, the best way to answer that, and again, it’s all a crystal ball as to what’s going to happen over the next four or five years. There is no question that the demand for generative AI solutions is very high, and there is a lot of compute capacity that needs to be put in. The way we size the market is, it can perhaps grow at a rate of, let’s call it, 50% CAGR plus or minus over the next three or four years. So, that would take us to $150 billion by the time we get to 2027. Now, that’s all accelerators in the datacenter, so that includes GPUs, that includes other ASICs and other accelerators. But I think we have an opportunity to address a large portion of that market, so that makes it a very clear priority for us. It’s our number one strategic priority, and we will continue to work closely with our customers as they optimize between CPU and GPU spend.