AI’s growth is unprecedented from any angle. The size of large AI training models is growing 10x per year. ChatGPT’s 173 million-plus users are turning to the site an estimated 60 million times a day (compared with zero the year before). And every day, people are coming up with new applications and use cases.
As a result, cloud service providers and others will have to transform their infrastructures in similarly dramatic ways to keep up, said Chris Koopmans, Chief Operations Officer at Marvell, in conversation with Futurum’s Daniel Newman during the Six Five Summit on June 8, 2023.
“We are at the beginning of at least a decade-long trend and a tectonic shift in how data centers are architected and how data centers are built,” he said.
The transformation is already underway. AI training, and a growing percentage of cloud-based inference, has already shifted from running on two-socket servers built around general-purpose processors to systems containing eight or more GPUs or TPUs optimized to solve a narrower set of problems more quickly and efficiently.
Accelerated servers, however, also need more versatile networking technologies. “There’s no more data hungry app than artificial intelligence,” said Koopmans. “It will require an order of magnitude more bandwidth.”
Marvell has been collaborating with cloud service providers and others on AI infrastructure for years, he noted. The company reported $200 million in AI-related revenue in its most recent fiscal year, a figure that is expected to double in the current fiscal year and double again in the year after that. Most of the AI revenue to date derives from optical interconnect technologies. Marvell is the leader in optical DSPs, the chips at the heart of the optical modules used to connect AI clusters. Optical DSP performance has been doubling every two years to keep pace with the rate of change. Most recently, Marvell announced Nova, a 1.6T optical DSP that doubles the bandwidth of today’s state-of-the-art optical interconnects.
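These doubling rates compound quickly. A minimal sketch of the arithmetic, using only the figures stated above (and assuming, for illustration, that the interconnect generation Nova doubles is 800G):

```python
def project_doublings(start, periods):
    """Return the sequence produced by doubling `start` once per period."""
    values = [start]
    for _ in range(periods):
        values.append(values[-1] * 2)
    return values

# AI-related revenue: $200M, expected to double in each of the next two fiscal years.
revenue_millions = project_doublings(200, 2)
print(revenue_millions)   # [200, 400, 800]

# Optical DSP bandwidth: one doubling from an assumed 800G generation to Nova's 1.6T.
dsp_gigabits = project_doublings(800, 1)
print(dsp_gigabits)       # [800, 1600]
```

At these rates, the revenue figure grows 4x over two fiscal years, which is why the shift is described as a decade-long trend rather than a one-time upgrade.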
“Light is the only medium that can possibly deliver this kind of bandwidth,” he said.
Marvell’s custom computing group has also been collaborating with clouds on optimized silicon.
“It’s real exciting if you’re in silicon because there isn’t just one way of doing it. There is going to be all kinds of innovation in this space and new innovation drives value,” he said.
Check out the full conversation at the link on our events page.
This blog contains forward-looking statements within the meaning of the federal securities laws that involve risks and uncertainties. Forward-looking statements include, without limitation, any statement that may predict, forecast, indicate or imply future events or achievements. Readers are cautioned that these forward-looking statements are only predictions and are subject to risks, uncertainties and assumptions that are difficult to predict, including those described in the “Risk Factors” section of our Annual Reports on Form 10-K, Quarterly Reports on Form 10-Q and other documents filed by us from time to time with the SEC. Actual events or results may differ materially from those contemplated in this blog. Forward-looking statements speak only as of the date they are made. Readers are cautioned not to put undue reliance on forward-looking statements, and no person assumes any obligation to update or revise any such forward-looking statements, whether as a result of new information, future events or otherwise.
Tags: Accelerated servers, AI, AI infrastructure, cloud service providers, custom computing, Data infrastructure, Optical DSPs, optical interconnect technologies
Copyright © 2024 Marvell, All rights reserved.