Astera Labs Inc
NASDAQ:ALAB

Earnings Call Transcript

Operator

Thank you for standing by. My name is Regina, and I will be your conference operator today. At this time, I would like to welcome everyone to the Astera Labs First Quarter 2024 Earnings Conference Call. [Operator Instructions]

I will now turn the call over to Leslie Green, Investor Relations for Astera Labs. Leslie, you may begin.

Leslie Green
executive

Thank you, Regina. Good afternoon, everyone, and welcome to the Astera Labs First Quarter 2024 Earnings Call. Joining us today on the call are Jitendra Mohan, Chief Executive Officer and Co-Founder; Sanjay Gajendra, President, Chief Operating Officer and Co-Founder; and Mike Tate, Chief Financial Officer.

Before we get started, I would like to remind everyone that certain comments made in this call today may include forward-looking statements regarding, among other things, expected future financial results, strategies and plans, future operations and the markets in which we operate. These forward-looking statements reflect management's current beliefs, expectations and assumptions about future events, which are inherently subject to risks and uncertainties that are discussed in detail in today's earnings release and in the periodic reports and filings that we file from time to time with the SEC, including the risks set forth in the final prospectus relating to our IPO.

It is not possible for the company's management to predict all risks and uncertainties that could have an impact on these forward-looking statements, or the extent to which any factor or combination of factors may cause actual results to differ materially from those contained in any forward-looking statement. In light of these risks, uncertainties and assumptions, the results, events or circumstances reflected in the forward-looking statements discussed during this call may not occur, and actual results could differ materially from those anticipated or implied.

All of our statements are made based on information available to management as of today, and the company undertakes no obligation to update such statements after the date of this call as a result of new information, future events or changes in our expectations, except as required by law.

Also, during this call, we will refer to certain non-GAAP financial measures, which we consider to be important measures of the company's performance. These non-GAAP financial measures are provided in addition to, and not as a substitute for or superior to, financial results prepared in accordance with U.S. GAAP. A discussion of why we use non-GAAP financial measures and reconciliations between our GAAP and non-GAAP financial measures are available in the earnings release we issued today, which can be accessed through the Investor Relations portion of our website and will also be included in our filings with the SEC, which are likewise accessible through the Investor Relations portion of our website.

With that, I would like to turn the call over to Jitendra Mohan, CEO of Astera Labs. Jitendra?

Jitendra Mohan
executive

Thank you, Leslie. Good afternoon, everyone, and thanks for joining our first earnings conference call as a public company. This year is off to a big start with Astera Labs seeing strong and continued momentum along with the successful execution of our IPO in March. First and foremost, I would like to thank our investors, customers, partners, suppliers and employees for their steadfast support over the past 6 years.

We have built Astera Labs from the ground up to address connectivity bottlenecks and unlock the full potential of AI in the cloud. With your help, we've been able to scale the company and deliver innovative technology solutions with the leading hyperscalers and AI platform providers worldwide. But this is just the beginning. We are supporting the accelerated pace of AI infrastructure deployments with leading hyperscalers by developing new product categories while also exploring new market segments.

Looking at industry reports over the past several weeks, it is clear that we remain in the early stages of a transformative investment cycle by our customers to build out the next generation of infrastructure that is needed to support their AI road maps. According to recent earnings reports, on a consolidated basis, CapEx spend during the first quarter for the 4 largest U.S. hyperscalers grew by roughly 45% year-on-year to nearly $50 billion.

Qualitative commentary implies continued quarterly growth in CapEx for this group through the balance of the year. This is truly an exciting time for technology innovators within the cloud and AI infrastructure market, and we believe Astera Labs is well positioned to benefit from these growing investment trends. Against a strong industry backdrop, Astera Labs delivered strong Q1 results with record revenue, strong non-GAAP operating margin, positive operating cash flows, while also introducing two new products.

Our revenue in Q1 was $65.3 million, up 29% from the previous quarter and up 269% from the same period in 2023. Non-GAAP operating margin was 24.3%, and we delivered $0.10 of pro forma non-GAAP diluted earnings per share. I will now provide some commentary around our position in this rapidly evolving AI market. Then I will turn the call over to Sanjay to discuss new products and our growth strategy. Finally, Mike will provide additional details on our Q1 results and our Q2 financial guidance.
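The growth rates quoted above imply the prior-period revenue baselines. A quick back-of-the-envelope check (our arithmetic, not figures stated by the company; amounts in millions of USD):

```python
# Implied prior-period revenue from the quoted growth rates.
# Figures in millions of USD; rounding is ours.
q1_2024 = 65.3
qoq_growth = 0.29   # up 29% from Q4 2023
yoy_growth = 2.69   # up 269% from Q1 2023

implied_q4_2023 = q1_2024 / (1 + qoq_growth)
implied_q1_2023 = q1_2024 / (1 + yoy_growth)

print(round(implied_q4_2023, 1))  # ~50.6
print(round(implied_q1_2023, 1))  # ~17.7
```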

Complex AI model sizes continue doubling about every 6 months, spurring the demand for high-performance AI platforms running in the cloud. Modern GPUs and AI accelerators are phenomenally good at compute, but without equally fast connectivity, they remain highly underutilized. Technology innovation within the AI accelerator market has been moving forward at an incredible pace, and the number and variety of architectures continues to expand to handle trillion-parameter models while improving AI infrastructure utilization.

We continue to see our hyperscaler customers utilize the latest merchant GPUs and proprietary AI accelerators to compose unique data center scale AI infrastructure. However, no two clouds are the same. The major hyperscalers are architecting their systems to deliver maximum AI performance based on their specific cloud infrastructure requirements, from power and cooling to connectivity. We are working alongside our customers to ensure these complex and different architectures achieve maximum performance and operate reliably even as data rates continue to double. As these systems move data ever faster amid growing complexity, we expect to see our average dollar content per AI platform increase, and even more so with the new products we have in development.

Our conviction in maintaining and strengthening our leadership position in the market is rooted in our comprehensive Intelligent Connectivity Platform and our deep customer partnerships. The foundation of our platform consists of semiconductor-based, software-defined connectivity ICs, modules and boards, which all support our COSMOS software suite.

We provide customers with a complete, customizable solution: chips, hardware and software, which maximizes flexibility without performance penalties, delivers deep fleet management capabilities and matches pace with the ever-quickening product introduction cycles of our customers. Not only does our COSMOS software run on our entire product portfolio, but it is also integrated within our customers' operating stack to deliver seamless customization, optimization and monitoring.

Today, Astera Labs is focused on three core technology standards: PCI Express, Ethernet and Compute Express Link. We ship three separate product families supporting these different connectivity protocols, all generating revenue and in various stages of adoption and deployment. Let me touch upon each of these critical data center connectivity standards and how we support them with our differentiated solutions.

First, PCI Express. PCIe is a native interface on all AI accelerators, CPUs and GPUs, and it's the most prevalent protocol for moving data at high bandwidth and low latency inside servers. Today, we see PCIe Gen 5 getting widely deployed in AI servers. These servers are becoming increasingly complex. Faster signal speeds in combination with complex server topologies are driving significant signal integrity challenges.

We help solve these problems. Our hyperscalers and AI accelerator customers utilize our PCIe Smart DSP Retimers to extend the reach of PCIe Gen 5 between various components within a heterogeneous compute architecture. Our Aries product family represents the gold standard in the industry for performance, robustness and flexibility and is the most widely deployed solution in the market today.

Our leadership position, with millions of critical data links running through our Aries Retimers and our COSMOS software, enables us to do something more: become the eyes and ears that monitor the connectivity infrastructure and help fleet managers ensure their AI infrastructure is operating at peak utilization. Deep diagnostics and monitoring capabilities in our chips and extensive fleet management features in our COSMOS software, which are deployed together in our customers' fleets, have become a material differentiator for us.

Our COSMOS software provides the easiest and fastest path to deploy the next generation of our devices. We see AI workloads and newer GPUs driving the transition from PCIe Gen 5, running at 32 gigabits per second per lane, to PCIe Gen 6, running at 64 gigabits per second per lane. Our customers are evaluating our Gen 6 solutions now, and we expect them to make design decisions in the next 6 to 9 months. In addition, while we see our Aries devices being heavily deployed today for interconnecting AI accelerators with CPUs and networking, we also expect our Aries devices to play an increasing role in back-end fabrics, interconnecting AI accelerators to each other in AI clusters.
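To put the Gen 5 to Gen 6 transition in perspective, here is a rough per-link bandwidth calculation. This is our sketch of the raw signaling math only; it ignores encoding and protocol overhead, which differ between the two generations:

```python
# Raw per-direction bandwidth of a x16 PCIe link, ignoring
# encoding/FLIT overhead (which differs between Gen 5 and Gen 6).
def x16_bandwidth_gbytes(gbps_per_lane, lanes=16):
    return gbps_per_lane * lanes / 8  # bits -> bytes

gen5 = x16_bandwidth_gbytes(32)  # PCIe Gen 5: 32 Gb/s per lane
gen6 = x16_bandwidth_gbytes(64)  # PCIe Gen 6: 64 Gb/s per lane
print(gen5, gen6)  # 64.0 vs 128.0 GB/s per direction
```

Doubling the per-lane rate doubles link bandwidth at the same lane count, which is why the remarks tie Gen 6 adoption to keeping faster GPUs fed.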

Next, let's talk about Ethernet. The Ethernet protocol is extensively deployed to build large-scale networks within data centers. Today, Ethernet makes up the vast majority of connections between servers and top-of-rack switches. Driven by AI workloads' insatiable need for speed, Ethernet data rates are doubling roughly every 2 years, and we expect the transition from 400-gig Ethernet to 800-gig Ethernet to take place later in 2025. 800-gig Ethernet is based on a 100 gigabits per second per lane signaling rate, which is placing tremendous pressure on conventional passive cabling solutions.

Like our PCIe Retimers, our portfolio of Taurus Ethernet Retimers helps relieve these connectivity bottlenecks by overcoming reach, signal integrity and bandwidth issues by enabling robust 100-gig per lane connectivity over copper. Unlike our Aries portfolio, which is largely sold in a chip format, we sell our Taurus portfolio largely in the form of smart cable modules that are assembled into active electrical cables by our cable partners.

This approach allows us to focus on our strength and fully leverage our COSMOS software suite to offer customization, easy qualification, fleet telemetry and field upgrades to our customers. At the same time, this model enables our cable partners to continue to excel at bringing the best cabling technology to our common end customers. We expect 400-gig deployments based on our Taurus Smart Cable Modules to begin to ramp in the back half of 2024.

We see the transition to 800 gig Ethernet starting to happen in 2025, resulting in broad demand for AECs to both scale up and scale out AI infrastructure and strong growth for our Taurus Ethernet Smart Cable Module portfolio over the coming years.
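The lane math behind the 400-gig to 800-gig transition described above can be sketched as follows. The 8-lane configuration for 400G is one common option (4 x 100G also exists), so treat this as an illustration of the doubling cadence rather than a statement about any specific product:

```python
# 800-gig Ethernet as described above: 8 lanes at 100 Gb/s per lane.
lanes, gbps_per_lane = 8, 100
total_gbps = lanes * gbps_per_lane
print(total_gbps)  # 800

# With data rates doubling roughly every 2 years, the prior
# generation at the same lane count ran half the per-lane rate.
prev_gen = total_gbps // 2
print(prev_gen, prev_gen // lanes)  # 400 at 50 Gb/s per lane
```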

Last is Compute Express Link, or CXL. CXL is a low latency cache coherent protocol, which runs on top of PCIe protocol. CXL provides an open standard for disaggregating memory from compute. CXL allows you to balance the memory bandwidth and capacity requirements independently from compute requirements, resulting in better utilization of compute infrastructure.

Over the next several years, data center platform architects plan to utilize CXL technology to solve memory bandwidth and capacity bottlenecks that are being exacerbated by the exponential increase in compute capability of CPUs and GPUs. Major hyperscalers are actively exploring different applications of CXL memory expansion. While the adoption of CXL technology is currently in its infancy, we do expect to see increased deployments with the introduction of next-generation CXL-capable data center server CPUs, such as Granite Rapids, Turin and others.

Our first-to-market portfolio of Leo CXL memory connectivity controllers is very well positioned to enable our customers to overcome memory bottlenecks and deliver significant benefits to their end customers. We have worked closely with our hyperscaler customers and CPU partners to optimize our solution to seamlessly deliver these benefits, without any application-level software changes.

Furthermore, we have used our COSMOS software to incorporate the significant learnings we have had over the last 18 months and to customize our Leo memory expansion solution to the differing requirements of each hyperscaler. We anticipate memory expansion will be the first high-volume use case that will drive design wins into volume production in the 2025 time frame. We remain very excited about the potential of CXL in data center applications and believe that most new CPUs will support CXL and hyperscalers will increasingly deploy innovative solutions based on CXL.

With that, let me turn the call over to our President and COO, Sanjay Gajendra to discuss some of our recent product announcements and our long-term growth strategy.

Sanjay Gajendra
executive

Thanks, Jitendra, and good afternoon, everyone. Astera Labs is well positioned to deliver long-term growth through a combination of three factors: one, strong secular tailwinds from increased AI infrastructure investment; two, next-generation products within existing product lines gaining traction; and three, the introduction of new product lines. Over the past 3 months, we announced 2 new and significant products that play an important role in enabling next-generation AI platforms and provide incremental revenue opportunities as early as the second half of 2024.

First, we expanded our widely deployed, field-proven Aries Smart DSP Retimer portfolio with the introduction and public demonstration of our Aries 6 PCIe Retimer, which delivered robust, low-power PCIe Gen 6 and CXL 3 connectivity between next-generation GPUs, AI accelerators, CPUs, NICs and CXL memory controllers. Aries 6 is the third generation of our PCIe Smart Retimer portfolio and provides the bandwidth required to support data-intensive AI workloads while maximizing utilization of next-generation GPUs operating at 64 gigabits per second per lane.

Fully compatible with our field-deployed COSMOS software suite, Aries 6 incorporates the tribal knowledge we have acquired over the past 4 years by partnering with and enabling hyperscalers to deploy AI infrastructure in the cloud. Aries 6 also enables a seamless upgrade path from current PCIe Gen 5 based platforms to next-generation PCIe Gen 6 based platforms for our customers. With Aries 6, we demonstrated the industry's lowest power: 11 watts at Gen 6 in a full 16-lane configuration running at 64 gigabits per second per lane, significantly lower than our competitors and even lower than our own Aries Gen 5 Retimer. Through collaboration with leading providers of GPUs and CPUs such as AMD, Arm, Intel and NVIDIA, Aries 6 is being rigorously tested at Astera's Cloud-Scale Interop Lab and in customers' platforms to minimize interoperability risk, lower system development cost and reduce time to market.

Aries 6 was demonstrated at NVIDIA's GTC event during the week of March 18. Aries 6 is currently sampling to leading AI and cloud infrastructure providers, and we expect initial volume ramps to begin in 2025. We also announced the introduction and sampling of our Aries PCIe/CXL Smart Cable Modules for active electrical cables, or AECs, to support robust, long-reach copper cable connectivity of up to 7 meters.

This is 3x the standard reach defined in the PCIe spec. Our new PCIe AEC solution is designed for GPU clustering applications, extending PCIe back-end fabric deployments to multiple racks. This new Aries product category expands our market opportunity from within the rack to across racks. As with our entire product portfolio, Aries Smart Cable Modules support our COSMOS software suite to deliver a powerful yet familiar array of link monitoring, fleet management and RAS tools, which are customizable for the diverse needs of our hyperscaler customers.
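The "3x the standard reach" claim implies a baseline channel reach, which we can back out. The baseline figure below is our inference from the quoted multiple, not a number stated on the call or taken from the PCIe specification:

```python
# The 7 m AEC reach is described as 3x the PCIe spec's standard reach,
# implying a baseline of roughly 7 / 3 ~= 2.3 m (our inference only).
aec_reach_m = 7.0
multiple = 3
implied_baseline_m = aec_reach_m / multiple
print(round(implied_baseline_m, 1))  # ~2.3
```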

We leveraged our expertise in silicon, hardware and software to deliver a complete solution in record time, and we expect initial shipments of the PCIe AECs to begin later this year. We believe this new Aries product announcement represents another concrete example of Astera Labs driving the PCIe ecosystem with technology leadership and an intelligent connectivity platform that includes silicon chips, hardware modules and our COSMOS software suite. Over the coming quarters, we anticipate ongoing generational upgrades to existing product lines and the introduction of new product categories developed from the ground up to fully utilize the performance and productivity capabilities of generative AI.

In summary, over the past few years, we have built a great team that is delivering technology that is foundational to deploying AI infrastructure at scale. We have gained the trust and support of our world-class customer base by executing, innovating and delivering to our commitments. These tight relationships are resulting in new product developments and enhanced technology road map for Astera. We look forward to continued collaboration with our partners as a new era unfolds, driven by AI applications.

With that, I will turn the call over to our CFO, Mike Tate, who will discuss our Q1 financial results and Q2 outlook.

Michael Tate
executive

Thanks, Sanjay, and thanks to everyone for joining. This overview of our Q1 financial results and Q2 guidance will be on a non-GAAP basis. The primary difference between Astera Labs' GAAP and non-GAAP metrics is stock-based compensation and the related income tax effects. Please refer to today's press release, available on the Investor Relations section of our website, for more details on both our GAAP and non-GAAP Q2 financial outlook as well as a reconciliation of our GAAP to non-GAAP financial measures presented on this call.

For Q1 of 2024, Astera Labs delivered record quarterly revenue of $65.3 million, which was up 29% versus the previous quarter and 269% higher than revenue in Q1 of 2023. During the quarter, we shipped products to all the major hyperscalers and AI accelerator manufacturers. We recognized revenues across all three of our product families during the quarter, with Aries products being the largest contributor. Aries enjoyed solid momentum in AI-based platforms as customers continued to introduce and ramp their PCIe Gen 5 capable AI systems, along with overall strong unit growth from the industry's growing investment in generative AI.

Also, we continue to make good progress with our Taurus and Leo product lines, which are in the early phases of revenue contribution. In Q1, Taurus revenues were primarily from shipments into 200-gig Ethernet-based systems, and we expect Taurus revenues to track sequentially higher as we progress through 2024 and as we also begin to ship into 400-gig Ethernet-based systems.

Q1 Leo revenues were largely from customers purchasing pre-production volumes for the development of their next-generation CXL-capable compute platforms, expected to launch late this year with the next server CPU refresh cycle. Q1 non-GAAP gross margin was 78.2%, up 90 basis points compared with 77.3% in Q4 of 2023. The positive gross margin performance during the quarter was driven by healthy product mix. Non-GAAP operating expenses for Q1 were $35.2 million, up from $27 million in the previous quarter.

Within non-GAAP operating expenses, R&D expense was $22.9 million, sales and marketing expense was $6 million, and general and administrative expenses were $6.3 million. Non-GAAP operating expenses during Q1 increased largely due to a combination of increased headcount and incremental costs associated with being a public company. The largest delta between non-GAAP and GAAP operating expenses in Q1 was stock-based compensation recognized in connection with our recent IPO and its associated employer payroll taxes and, to a lesser extent, our normal quarterly stock-based compensation expense.

Non-GAAP operating margin for Q1 was 24.3% as revenues scaled in proportion with our operating expenses on a sequential basis. Interest income in Q1 was $2.6 million. Our non-GAAP tax provision was $4.1 million for the quarter, which represents a tax rate of 22% on a non-GAAP basis. Pro forma non-GAAP fully diluted share count for Q1 was 147.5 million shares. Our pro forma non-GAAP diluted earnings per share for the quarter was $0.10. The pro forma non-GAAP diluted share count includes the assumed conversion of our preferred stock for the entire quarter, while our GAAP share count only includes the conversion of our preferred stock for the stub period following our March IPO.
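The Q1 figures quoted above can be tied together into a simple non-GAAP P&L walk. This is our reconstruction from the numbers on the call; small differences are rounding, and the amounts are in millions of USD except per-share:

```python
# Reconstructing the Q1 non-GAAP P&L from the figures quoted above
# (millions of USD except per-share; differences are rounding).
revenue      = 65.3
gross_margin = 0.782
opex         = 35.2
interest     = 2.6
tax_rate     = 0.22
shares       = 147.5   # pro forma diluted, millions

op_income = revenue * gross_margin - opex
pretax    = op_income + interest
net       = pretax * (1 - tax_rate)

print(round(op_income / revenue, 3))  # ~0.243 operating margin
print(round(net / shares, 2))         # ~$0.10 EPS
```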

Going forward, given that all the preferred stock has now been converted to common stock upon our IPO, those preferred shares will be fully included in the share count for both GAAP and non-GAAP. Cash flow from operating activities for Q1 was $3.7 million, and we ended the quarter with cash, cash equivalents and marketable securities of just over $800 million.

Now turning to our guidance for Q2 of fiscal 2024. We expect Q2 revenues to increase from Q1 levels within a range of 10% to 12% sequentially. We believe our Aries product family will continue to be the largest component of revenue and will be the primary driver of sequential growth in Q2. Within the Aries product family, we expect the growth to be driven by increased unit demand for AI servers as well as the ramp of new product designs with our customers.

We expect non-GAAP gross margin to be approximately 77%, given a modest increase in hardware shipments relative to stand-alone ICs. We believe that as our hardware solutions grow as a percentage of revenue over the coming quarters, our gross margins will begin to trend toward our long-term gross margin model of 70%.

We expect non-GAAP operating expenses to be approximately $40 million as we remain aggressive in expanding our R&D resource pool across head count and intellectual property, while also scaling our back office functions. Interest income is expected to be $9 million. Our non-GAAP tax rate should be approximately 23% and our non-GAAP fully diluted share count is expected to be approximately 180 million shares. Adding this all up, we are expecting non-GAAP fully diluted earnings per share of approximately $0.11. This concludes our prepared remarks.
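The guidance items above can be added up the same way. The midpoint of the revenue range is our choice for illustration, not company guidance, and amounts are again in millions of USD:

```python
# Sanity-checking the Q2 guidance arithmetic from the figures above
# (millions of USD; the midpoint is our choice, not company guidance).
q1_revenue   = 65.3
q2_revenue   = q1_revenue * 1.11   # midpoint of +10% to +12%
gross_margin = 0.77
opex         = 40.0
interest     = 9.0
tax_rate     = 0.23
shares       = 180.0

net = (q2_revenue * gross_margin - opex + interest) * (1 - tax_rate)
print(round(q2_revenue, 1))    # ~72.5
print(round(net / shares, 2))  # ~$0.11 EPS
```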

Once again, we very much appreciate everyone joining the call. And now we open the line for questions. Operator?

Operator

[Operator Instructions] Our first question will come from the line of Harlan Sur with JPMorgan.

Harlan Sur
analyst

Congratulations on the strong results and guidance after your first quarter as a public company. As you guys mentioned, there are many new AI XPU programs coming to market -- GPU and ASIC AI accelerator programs. In terms of total XPU shipments this year, I think only half is going to be NVIDIA based. So it is starting to broaden out.

The good news is, obviously, the Astera chip has exposure to all of these XPU programs. It does seem that the pace of deploying these XPU platforms has accelerated even over the past few months. So how much of the strong results and guidance is due to this acceleration, broadening in customer deployments? How much is more just kind of higher content of Retimers versus your prior expectations? And then do you guys see the strong momentum continuing into the second half of this year?

Michael Tate
executive

Thanks, Harlan. This is Mike. We started shipping into AI servers really in Q3 of last year, so it's just the early innings. A lot of our customers have not fully deployed their AI systems, so we're seeing incremental growth just from adding on the different platforms where we have design wins. But it's in a backdrop where there's clearly a growing investment in AI as well, so the overall unit growth is also playing out. As we look out to the balance of this year, there are still a lot of programs that have not ramped yet. So we have high confidence that the Gen 5 Aries platform has a lot of growth ahead of it, and that continues into 2026 -- I'm sorry, 2025 as well.

Harlan Sur
analyst

I appreciate that. And as you mentioned, there's been a lot of focus on next-gen PCIe Gen 6 platforms, obviously with the rollout of NVIDIA's Blackwell-based platform. And obviously, with any market that is viewed as fast-growing, you are going to attract competitors; we have seen some announcements by competitors. We know most of the Gen 5 design wins have already been locked up by the Astera team. You've been working with customers, as you mentioned, on Gen 6 for some time now. How do you compare the customer engagement momentum on Gen 6 versus the same period back when you were working with customers on Gen 5?

Sanjay Gajendra
executive

Good question, Harlan. This is Sanjay here. Let me take that. So like you correctly said, Gen 5 still has a lot of legs. Let's be very clear on that. Like Mike noted, we do have platforms that are still ramping and still to come. So to that standpoint, we do expect Gen 5 to be with us for some time. And in terms of Gen 6, again, it's driven by the pace of innovation that's happening on the AI side. As you probably know, GPUs are not fully utilized. Some reports put it at around 50%.

So there's still a lot of growth in terms of connectivity, which is essentially holding it back, right? Meaning there's a pace and a need to adopt faster speeds and links. So with NVIDIA announcing their Blackwell platform, those are the first set of GPUs that have Gen 6 on them.

So to that standpoint, we do expect some of those deployments to happen in 2025. But in general, others are not far behind based upon public information that's out there. So we do expect the cycle time for Gen 6 adoption to perhaps be a little bit shorter than Gen 5, especially on the AI server application more so than the general-purpose compute, which is still going to be lagging when it comes to PCIe Gen 6 adoption.

Operator

Your next question will come from the line of Joe Moore with Morgan Stanley.

Joseph Moore
analyst

Great. Following on from that, can you talk about PCIe Gen 5 in general-purpose servers? It seems like, if I look at the CPU penetration of Gen 5, we're still at a pretty early stage. Do you see growth from general purpose, and what are the applications driving that?

Sanjay Gajendra
executive

Absolutely. I mean, primarily in general-purpose compute, the main place where the PCIe Retimer gets used tends to be storage connectivity, where you have SSDs on the back of the server. So to that standpoint, there are two things that have been holding it back -- or three things, perhaps. One is just the focus on AI. I mean, most of the dollars are going to the AI server application compared to general compute.

The second thing is just the ecosystem readiness for Gen 5, primarily on the SSD side, which is starting to evolve with many of the major SSD NVMe players providing or ramping up on Gen 5 based NVMe drives. The third one really has been the CPU platforms.

If you think about it, both Intel and AMD are on the cusp of introducing their next significant platforms, whether it is Granite Rapids from Intel or Turin from AMD. That is expected to drive the introduction of new platforms. And if you combine that with the SSDs being ready for Gen 5, and with the design wins that we already have, you can expect those things to be contributing factors as dollars start flowing back into the compute side -- the general-purpose compute side.

Joseph Moore
analyst

Great. And for my follow-up, you just mentioned Granite Rapids and Turin, which are the first kind of volume platforms supporting CXL 2.0. What are you hearing in terms of adoption -- the CPUs will be out, but what will be the initial adoption? And how quickly do you think that technology can roll out in 2025?

Sanjay Gajendra
executive

Yes. Let me start off by saying that every hyperscaler is, in some shape or form, evaluating and working with CXL. So the technology is alive and well. I think where the focus really has been with CXL is on the memory expansion use case, specifically for CPUs. The expansion could be for reasons like adding more memory capacity for large database applications, and the second use case, of course, is more memory bandwidth for HPC types of applications.

So the thing that has been holding it back is the availability of CPUs that support CXL at a production quality level. That will change with Granite Rapids and Turin being available. So at this point, what we can say is that we've been providing chips for quite some time; we've been in preproduction and have supported the various evaluation and POC type activities that have happened with our hyperscaler customers. So to that standpoint, we do expect revenue to start coming in in '25 from the memory expansion use case for CXL.

Operator

Your next question will come from the line of Tore Svanberg with Stifel.

Tore Svanberg
analyst

Let me add my congratulations. My first question is on PCIe Gen 6. So Sanjay, you just mentioned that the design-in cycle is going to be shorter than Gen 5. Now, since your Gen 6 is backward compatible with Gen 5, and especially given the COSMOS software platform, should we assume that you will basically retain most of those sockets that you already had in Gen 5, and then obviously add some new ones as well for Gen 6?

Sanjay Gajendra
executive

So that's the goal for the company. We have the COSMOS software. And like I noted, PCI Express is one of those protocols which, unlike Ethernet, tends to be a little messy, meaning it's something that's been around for a long time. It's a great technology, but it also requires a lot of handholding. And for us, being in customers' platforms -- bringing up systems that ramped to millions of devices -- has allowed us to understand the nuances: what works, what doesn't work, how do you make the link perform at the highest rate?

So that tribal knowledge is something that we've captured within the COSMOS software that we built, running both on our chips as well as on customers' platforms. So we do expect that as Gen 6 starts to materialize, a lot of those learnings will be carried over. You're right that there's been a lot of competition that has come in as well. But we believe that when it comes to competition, they could have a similar product to ours, but no matter what, there is an SOP time that's essential when it comes to connectivity-type chips, just given the interoperation work and getting the kinks out and so on -- meaning you could have a perfect chip, yet have a failing system.

The reason for that is the complexity of the system and how the PCI Express standard is defined. So to that standpoint, I agree with what you said, in the sense that we have the leading position in the Retimer market for PCIe today. And we expect to build on that, both with the new features we have added in the PCIe Gen 6, or Aries 6, product line and also the tribal knowledge that we've built by working with our partners over the last 3 to 4 years.

T
Tore Svanberg
analyst

That's a great perspective. And as my follow-up, I had a question on AEC. It sounds like that business is going to start ramping late this year. First of all, is that with multiple cable partners? And then related to that, are you the only company today that has an AEC at 7 meters?

S
Sanjay Gajendra
executive

I don't know about being the only company. I would probably request that you do some research on where the competition is. But from a Retimer standpoint, which goes on this, we do have a leading position. So based on that and the customer traction that we're seeing, I would imagine that we are the main provider here. So this one is an interesting use case. So far, PCI Express, as you know, was defined to be inside the server. But what is happening now, and this is why we're excited about PCIe AECs, is that we are opening up a new front in terms of clustering GPUs, meaning interconnecting accelerators.

That is where the AECs would play, and that is a new opportunity that goes along with the Ethernet AECs that we already provide, which are also used for interconnecting GPUs on the back-end network. So overall, we do believe that, combining our PCIe AEC solution and Ethernet AEC solution, we are well set for some of these evolving trends, and we expect the revenue to start coming in the latter half of this year. And on PCIe, just to clarify what I initially said, we do believe we are the only one; I just don't know if there is someone else working on it that's not yet in the public domain.

Operator

Your next question will come from the line of Blayne Curtis with Jefferies.

B
Blayne Curtis
analyst

Maybe the first one for you, Jitendra. I'm just curious, you mentioned the right architecture, I think Harlan asked about it. Obviously, you have a lead customer in a lot of CPU-to-GPU connections; that's the nature of the market, who has the volume. But I'm curious, you mentioned the back-end fabrics a bunch. Is that still conceptual, or are you seeing designs for it? And maybe just talk about the widening out of the applications the Retimers are being used for?

J
Jitendra Mohan
executive

Great question. So there are many applications where our Retimers are used. Of course, we are most known for the connectivity from the GPU to the head node. That is where a lot of the deployments are happening. But these new applications also speak to how rapidly AI systems are evolving. Every few months, we see a new AI platform come up, and that opens up additional opportunities for us. One of those is clustering GPUs together. There are two main protocols that are used, in addition to NVLink, of course, to cluster GPUs.

Those are PCI Express and Ethernet. And as Sanjay just mentioned, we now have solutions available to interconnect GPUs together, whether over PCI Express or Ethernet. Specifically, in the case of PCI Express, some of our customers who want to use PCI Express for clustering GPUs together are now able to do so using our PCI Express Retimers, which are offered in the form of an active electrical cable. So this business is going to be in addition to the sustaining business that we have today in connecting GPUs to head nodes.

Now we are connecting GPUs together in a cluster. And as you know, these are very intense, very dense mesh connections, so they can grow very rapidly. So we're very excited about where this will go, starting with some revenue contribution later this year.

B
Blayne Curtis
analyst

And then maybe a question for Mike. The gross margin remained quite high. You said it was mix; maybe you're just being kind of conservative coming out of the IPO, but I was curious how the mix came in. I think it's mostly Retimers, and I know as the other products start to ramp, that will be a headwind. So how do you think about the rest of the year? Should gross margin just come down gradually from this 77% that you're guiding to as those new products ramp?

M
Michael Tate
executive

Yes. So just to remind everybody, our stand-alone ICs carry a pretty high margin relative to our hardware solutions. So when the mix gets a little more balanced between hardware and ICs, we expect our long-term gross margins to trend to 70%. In Q1, we were heavily weighted to stand-alone ICs, so it was a very favorable mix, and that's how we enjoyed the strong gross margins. As we go through the balance of this year and into next year, we will see an increasing mix of our modules, and we are also adding cards for CXL. So we think we'll see a gradual trend down toward the long-term model over time as that mix changes.

Operator

Your next question will come from the line of Thomas O'Malley with Barclays.

T
Thomas O'Malley
analyst

Mike, I just wanted to ask, I know you may not be giving segment details specifically, but could you talk about what contributed to the revenue in the quarter? And then looking out into June, could you talk about the revenue mix? Maybe some sequential help on what's growing; obviously, the non-IC business is growing, just given the fact that gross margins are pressured a bit. But any color on the segments would be helpful to start.

M
Michael Tate
executive

Sure. So as I mentioned, we started shipping into AI server platforms in volume in Q3. And a lot of our customers are still in ramp mode; to the extent we've been shipping for the past couple of quarters, there are still a lot of designs that haven't even begun to ramp. So we're still in the early phases, and as we look out in time, we see the Gen 5 piece of it in AI continuing to grow into next year as well. So as you look into Q2, the growth that we're guiding to is still largely driven by the Aries Gen 5 deployment in AI servers, both from existing platforms with increased unit volumes and also from new customers beginning their ramps.

T
Thomas O'Malley
analyst

Helpful. And then just a broader one. In talking with NVIDIA, they're referencing their GB200 architecture becoming a bigger percent of the mix, and the NVL72 being more of the deployments that hyperscalers are taking. When you look at the Hopper architecture versus the Blackwell architecture and their NVL72 platform, where they're using NVLink among their GPUs, can you talk about the puts and takes when it comes to your Retimer product? Do you see an attach rate that's any different from the current generation?

J
Jitendra Mohan
executive

Let me take that. Great question. First, let me say that we are just at the beginning phases of AI. We will continue to see new architectures being produced by AI platform providers at a very rapid pace, just to match the growth in AI models. And on top of that, we'll see innovative ways that hyperscalers deploy these platforms in their clouds. As these architectures evolve, so do the connectivity challenges. Some challenges are going to be incremental and some are going to be completely new. So given the increasing speeds and increasing complexity of these new platforms, we do expect our dollar content per AI platform to increase over time. We see these developments providing us good tailwinds going into the future.

So now to your question about the GB200 specifically. First of all, we cannot speak about specific customer architectures. But here is something that is very clear to see: as the AI platform providers produce these new architectures, the hyperscalers will choose different form factors to deploy them. And in that way, no two clouds are the same. Each hyperscaler has unique requirements and unique constraints in deploying these AI platforms, and we are working with all of them to enable these deployments. This combination of new platforms and varied cloud deployment strategies presents great opportunities for our PCIe connectivity portfolio.

And to that point, as Sanjay mentioned, we announced the sampling of our Gen 6 Retimer during GTC. If you look at our press release, you will see the broad support from AI platform providers. And to this day, to the best of our knowledge, we are still the only one sampling a Gen 6 solution. So on the whole, given that speeds are increasing, complexity is increasing and, in fact, the pace of innovation is going up as well, these all play to our strengths. And we have customers coming to us for new approaches to solve these problems. So we feel very good about the potential to grow our PCIe connectivity business.

Operator

Your next question will come from the line of Quinn Bolton with Needham.

Q
Quinn Bolton
analyst

Let me offer my congratulations on the nice results and outlook. I just want to follow up on the use of PCI Express in the GPU-to-GPU back-end networks. I think that's something you had historically excluded from your TAM, but it looks like it's becoming an opportunity here and starts to ramp in the second half of this year. I'm wondering if you could just talk about the breadth of the custom AI accelerators that are choosing PCI Express as their interconnect over, say, Ethernet? And then I've got a follow-up.

J
Jitendra Mohan
executive

Again, great question. So just to follow up on the response that we provided before: there are three dominant protocols that are used to cluster GPUs together. The one that's most known, of course, is NVLink, which is what NVIDIA uses; it's their proprietary interface. The other two are Ethernet and PCI Express. We do see our customers using PCI Express, though I think it would not be appropriate to say who.

But certainly, PCI Express is a fairly common protocol. It is the one that's natively found on all GPUs, CPUs and other data center components. Ethernet is also very popular. And to the extent that a particular customer chooses to use Ethernet or PCI Express, we are able to support them both with our solutions: the Aries PCIe Retimer family as well as the Taurus Ethernet Retimer family. We do expect these to make meaningful contributions to our revenue, as I mentioned, starting at the end of this year and then, of course, continuing into next year.

Q
Quinn Bolton
analyst

Perfect. And my second question is, you guys have talked about the introduction of new products as a TAM expansion activity, and I'm not going to ask you to introduce them today. But just in terms of timing, should we think of these new products on a timeline of introduction later this year or in 2025, with a revenue ramp in 2026? Is that the general framework investors should be thinking about for the new products that you've discussed?

S
Sanjay Gajendra
executive

Again, we as a company don't talk about unreleased products or their timing. But what I can share with you is the following: First, we've been very fortunate to have a front-row seat to AI deployment and to enjoy great relationships with the hyperscalers and AI platform providers. So we get to see a lot, and we get to hear a lot in terms of requirements. So clearly, we are going to be developing products that address the bottlenecks, whether on the data side, the network side or the memory side.

So we are working on certain products, as you can imagine, that would all be developed ground-up for AI infrastructure, and that enable connectivity solutions that allow AI applications to be deployed sooner. There's a lot going on: a lot of new infrastructure, a lot of new GPU announcements, CPU announcements.

So you can imagine, given the pace of this market and the changes that are upcoming, we do anticipate that this will all start having a meaningful, incremental revenue impact on our business.

Operator

Your next question will come from the line of Ross Seymore with Deutsche Bank.

R
Ross Seymore
analyst

I wanted to go into the ASIC versus GPU side of things. As ASICs start to penetrate this market to certain degrees, how does that change, if at all, the Retimer TAM that you would have? And, I guess, even the competitive dynamic in that equation, considering one of the biggest ASIC suppliers is also an aspiring competitor of yours?

J
Jitendra Mohan
executive

So great question again. Let me just refer back to what I said, which is that we will see more and more different solutions come to the market to address the evolving AI requirements. Some of them are going to be GPUs from the known AI providers like NVIDIA, AMD and others. And others will be custom-built ASICs, typically built by hyperscalers, whether AWS or Microsoft or Google and others. The requirements for the two kinds of systems are common in some ways, but they do differ: for example, in what particular type of back-end connectivity they use and exactly what ins and outs are going into each of these chips.

The good news is, with the breadth of our portfolio and our close engagement with several ASIC providers as well as the GPU providers, we understand the challenges of these systems very well. And not only are we providing solutions that address those today with the current generation, we are also engaged with them very closely on the next generation, on the upcoming platforms, whether GPU-based or ASIC-based. A great example was the Aries SCM, where, using our trusted PCI Express Retimer solution, we enabled a new way of connecting some of these ASICs on the back-end network.

S
Sanjay Gajendra
executive

And just maybe, if I can add to that, one way to visualize the connectivity market, or subsystem, is as the nervous system within the human anatomy, right? It's one of those things where you don't want to mess with it. Yes, there will be ASIC vendors, and there will be off-the-shelf options. But once the nervous system is built and tested, especially one like what we have developed, built specifically for AI applications, with all the qualification and software investment that hyperscalers have done, they want to reuse that across different kinds of topology, whether ASIC-based or merchant silicon-based. And we do see that trend happening when we look at the customers that we're engaged with today. Protocols like PCI Express, Ethernet and CXL, especially where Astera plays, are standards-based. So from that standpoint, whatever eventual architecture is used, we believe that we stand to gain from it.

R
Ross Seymore
analyst

I guess as my follow-up, one quick one for Mike. How should we think about OpEx beyond the second quarter? I know there's a good step-up there with a full quarter of being a publicly traded company, et cetera. But just walk us through your OpEx plans for the rest of the year, or even toward the target model.

M
Michael Tate
executive

Yes. Thanks, Ross. We are continuing to invest quite a bit in headcount, particularly in R&D. There are so many opportunities ahead of us that we would love to get a jump on those products and also improve the time to market. That being said, we're pretty selective about who we bring into the company, so that will just meter our growth. And although our OpEx is going to be increasing, it will probably not increase at the rate of revenue over the near term, and that's why we feel good about our long-term operating margin model of 40%. So over time, we do feel confident we can trend in that direction even with increasing investment in OpEx.

Operator

Your next question will come from the line of Suji Desilva with ROTH MKM.

S
Sujeeva De Silva
analyst

Jitendra, Sanjay, Mike, congrats on the first quarter here. On the back-end addressable market that's not NVLink: I'm trying to understand whether the PCIe and Ethernet opportunities there will be adopted at a similar pace out of the gate, or whether PCIe would lead that adoption in the non-NVLink back-end opportunity?

S
Sanjay Gajendra
executive

It's hard to say at this point, just because there is so much development going on here. You can imagine the non-NVIDIA ecosystem will rely on standard technologies, whether PCI Express or Ethernet. And the advantage of PCI Express is that it's low latency, right, significantly lower latency compared to Ethernet. So there are some benefits to that. And there are certain extensions that people consider adding on top of PCI Express when it comes to proprietary implementations. So overall, from a technology standpoint, we do see PCI Express having that advantage. Now, Ethernet has also been around. So we'll have to wait and see how all of this develops over the next, let's say, 6 to 18 months.

J
Jitendra Mohan
executive

Just to add to what Sanjay said, I think the good news for us in some ways is that we don't have to pick. We don't have to decide which one. We have chips, we have hardware and we have software. So we have customers coming to us and saying, "I need this for my new AI platform. Can you build me that?" And that's what we've been doing.

S
Sujeeva De Silva
analyst

Okay. Great. And another question, perhaps for Mike. The initial AEC program is ramping, maybe with a few customers this year and a few more next year, or perhaps all of them this year. But do you perceive that those will be larger, lumpier program-based ramps, Mike? Or will those be steady build-outs to service growth?

M
Michael Tate
executive

I think these product ramps will mirror our other product ramps: they'll gradually build over a few quarters to hit steady state. And as they layer on top of each other, that just continues to build a nice, growing revenue profile. So as you look at Taurus in 2024, we're shipping 200 gig right now, and then in the back half, we start to ship 400 gig. And if you look into 2025, 800 gig, which is ultimately the biggest opportunity with a much broader set of customers, will be when the market really becomes very large.

Operator

Your next question will come from the line of Richard Shannon with Craig Hallum.

R
Richard Shannon
analyst

Congratulations on coming public here. I want to follow up on a couple of topics that have been hit on earlier, including Suji's question about the PCI Express AEC opportunity. Are these design wins, or are these kind of pre-design-win ramps you're talking about this year? And I guess, ultimately, my question on this topic is, can this PCI Express AEC opportunity become as big as your Taurus family in the foreseeable future?

S
Sanjay Gajendra
executive

Yes. So to clarify, these are design wins. We have been shipping this; we announced it and demonstrated it at public forums. So from that standpoint, it's an opportunity that we're excited about, and like we noted early on, we do expect it to start contributing revenue in the latter half of this year.

R
Richard Shannon
analyst

Okay. Perfect. And the second question is on CXL. I think you mentioned a couple of applications here. Maybe you can express the breadth of interest across hyperscalers and other customers, both for the ones you mentioned and for the next ones that are a little bit more expansive. How are you seeing the testing and spec-ing out of those? Are those coming to market at the time you're hoping for, or is there a little bit more development required to get them to market?

S
Sanjay Gajendra
executive

Yes. So there are two questions, and let me take the first one, which is the CXL side. For CXL, there are 4 main use cases to keep in mind: memory expansion; memory tiering, where you're going for a TCO type of angle; memory pooling; and what are called memory drives, which Samsung and others are providing. We believe memory drives are more suitable for enterprise customers, whereas the first 3 are more suitable for cloud-scale deployment. And there, again, memory pooling is something that's further out in time, in our belief, just because it requires software changes.

So the ones that are more short to medium term are memory expansion and memory tiering. And like I noted early on, all the major hyperscalers, at least in the U.S., are engaged on CXL technology, but it is going to be a matter of time, with both CPUs being available and dollars being available from a general-purpose compute standpoint.

Okay. And then your second question, was that more on new products? Was that the context for it?

R
Richard Shannon
analyst

Yes.

S
Sanjay Gajendra
executive

Yes. So again, we don't talk about exact time frames, but you can imagine: the last product we announced was a little over a year ago, and our engineers have not been quiet; they've been working hard. So from that standpoint, we are working very diligently, based upon a lot of interest and engagement from customers that we have already been working with.

Operator

There are no further questions at this time. I'll turn the call back over to Leslie Green for closing remarks.

L
Leslie Green
executive

Thank you, everyone, for your participation and questions. We look forward to seeing many of you at various financial conferences this summer and updating you on our progress on our Q2 earnings conference call. Thank you.

Operator

This concludes today's conference call. You may now disconnect.
