Latest Marvell Blog Articles

  • February 27, 2025

    Lightwave+BTR Innovation Reviews Ranks Marvell Products as Among the Industry’s Best

    By Kirt Zimmer, Head of Social Media Marketing

    Marvell has once again been honored with a variety of Lightwave+BTR Innovation Reviews high scores. A panel of experts from the optical communications community recognized five products from Marvell’s portfolio as among the best in the industry.

    “On behalf of the Lightwave+BTR Innovation Reviews, I would like to congratulate Marvell on again achieving a well-deserved honoree status for multiple products,” said Lightwave+BTR Editor-in-Chief Sean Buckley. “This is a very competitive program allowing us to showcase and applaud the most innovative products that significantly impact the industry. It says a lot that Marvell is consistently recognized by these experts for its innovation.”

  • February 25, 2025

    Marvell and IIT Hyderabad Partner to Support India’s Next Generation of Technology Innovators

    By Kirt Zimmer, Head of Social Media Marketing

    Marvell Data Acceleration and Offload Research Facility

    Lighting the way! Marvell SVP Cary Ussery, CSE HOD at IIT Prof. Anthony, Marvell AVP Prasun Kapur, IITH Faculty Lead Dr. Praveen Tammana, and Marvell Director Abed Kamaluddin inaugurate a new chapter in the Marvell-IITH collaboration.

    Did you know that India is estimated to have about 20% of the world’s chip design workforce? This design expertise continues to expand with strategic investments in workforce education and development.

    To that end, Marvell and the Indian Institute of Technology Hyderabad (IIT Hyderabad) have launched the Marvell Data Acceleration and Offload Research Facility, focused on advancing network, storage, and security technologies to raise the performance of accelerated infrastructure.

    An inaugural reception was held with more than 150 students, staff, and faculty members attending. The event unveiled the research facility and the OCTEON server cluster in the data center. 

    The Marvell research facility, the first of its kind globally, provides students, researchers, and industry professionals with access to cutting-edge Marvell data processing units (DPUs), switches, Compute Express Link (CXL) processors, network interface controllers (NICs), and other technologies for accelerating how data is secured, moved, managed, and processed across AI clusters, clouds, and networks. Industry research estimates that up to one-third of the time spent in AI/ML processing can be consumed by waiting for network access. A key element of the facility is access to Marvell's comprehensive software frameworks, optimized for developing solutions that take advantage of the packet processing, cryptography, and AI/ML accelerators integrated in Marvell silicon.

  • February 19, 2025

    Radha Nagarajan named to the National Academy of Engineering

    By Kirt Zimmer, Head of Social Media Marketing

    Even in the very practical world of engineering, heartwarming stories can inspire. A perfect example has just transpired.

    Dr. Radha Nagarajan has been elected to the National Academy of Engineering. He serves as Marvell’s Senior Vice President and Chief Technology Officer, Optical Engineering, in the Datacenter Engineering Group.

    Election to the National Academy of Engineering is among the highest professional distinctions accorded to an engineer, akin to induction into a hall of fame for the profession. NAE membership honors those who have made outstanding contributions to “engineering research, practice, or education.” NAE members, who number more than 2,600, are highly accomplished engineering professionals representing a broad spectrum of engineering disciplines across business, academia, and government.

  • February 10, 2025

    Ten Statistical Snapshots to Better Understand AI, Data Centers and Energy

    By Michael Kanellos, Head of Influencer Relations, Marvell

    You’re likely assaulted daily with some zany, unverifiable AI factoid. By 2027, 93% of AI systems will be able to pass the bar, but will limit their practice to simple slip-and-fall cases! Next-generation training models will consume more energy than all Panera outlets combined! And so on.

    What can you trust? The stats below. Scouring the internet (and leaning heavily on 16 years of employment in the energy industry), I’ve compiled a list of somewhat credible and relevant stats that put the energy challenge in perspective.

    1. First, the Concerning News: Data Center Demand Could Nearly Triple in a Few Years

    Lawrence Berkeley National Laboratory and the Department of Energy1 have issued their latest data center power report, and it’s ominous.

    Data center power consumption rose from a stable 60-76 terawatt hours (TWh) per year in the U.S. through 2018 to 176 TWh in 2023, or from 1.9% of total power consumption to 4.4%. By 2028, AI could push it to 6.7%-12%. (Lighting consumes 15%2.) 

    Total U.S. data center electricity use from 2014 through 2028

    Report co-author Eric Masanet adds that the total doesn’t include bitcoin, which increases 2023’s consumption by 70 TWh. Add a similar 30-40% to subsequent years too if you want.
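    A quick back-of-the-envelope check of the figures above (illustrative arithmetic only; the TWh and percentage values come from the text, not from additional sources):

    ```python
    # Sanity-check the cited figures: 176 TWh in 2023 at a 4.4% share of
    # U.S. consumption, plus the ~70 TWh of bitcoin mining the report excludes.
    dc_2023_twh = 176        # data center consumption, 2023 (from the text)
    dc_2023_share = 0.044    # 4.4% of total U.S. electricity use

    # Implied total U.S. electricity consumption in 2023
    total_2023_twh = dc_2023_twh / dc_2023_share
    print(f"Implied U.S. total, 2023: {total_2023_twh:,.0f} TWh")  # ~4,000 TWh

    # Adding bitcoin lifts the 2023 figure by roughly 40%, matching the
    # "add a similar 30-40%" guidance above.
    with_btc_twh = dc_2023_twh + 70
    print(f"2023 incl. bitcoin: {with_btc_twh} TWh (+{70 / dc_2023_twh:.0%})")
    ```

    The implied total of roughly 4,000 TWh is consistent with reported annual U.S. electricity consumption, which lends the 4.4% share figure some credibility.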

  • February 03, 2025

    The Custom Era of Chips

    By Raghib Hussain, President, Products and Technologies

    This article was originally published in VentureBeat.
     

    Artificial intelligence is about to face some serious growing pains.

    Demand for AI services is exploding globally. Unfortunately, so is the challenge of delivering those services in an economical and sustainable manner. AI power demand is forecast to grow by 44.7% annually, a surge that will double data center power consumption to 857 terawatt hours in 20281. If data centers were a nation today, that would make them the sixth-largest consumer of electricity, right behind Japan2. It’s an imbalance that threatens the “smaller, cheaper, faster” mantra that has driven every major trend in technology for the last 50 years.
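    The growth math holds up as a rough sketch (my own illustrative arithmetic, using only the two figures cited in this paragraph):

    ```python
    import math

    annual_growth = 0.447   # 44.7% forecast annual growth (from the article)
    twh_2028 = 857          # forecast data center consumption in 2028

    # At 44.7% per year, consumption doubles in just under two years.
    doubling_years = math.log(2) / math.log(1 + annual_growth)
    print(f"Doubling time at 44.7%/yr: {doubling_years:.1f} years")  # ~1.9

    # If 857 TWh represents a doubling, the implied baseline is half of that.
    print(f"Implied starting point: {twh_2028 / 2} TWh")
    ```

    In other words, a 44.7% compound rate doubles consumption roughly every two years, so the forecast implies today's consumption is in the neighborhood of 430 TWh.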

    It also doesn’t have to happen. Custom silicon—unique silicon optimized for specific use cases—is already demonstrating how we can continue to increase performance while cutting power even as Moore’s Law fades into history. Custom may account for 25% of AI accelerators (XPUs) by 20283 and that’s just one category of chips going custom.

    The Data Infrastructure is the Computer

    Jensen Huang’s vision for AI factories is apt. These coming AI data centers will churn at an unrelenting pace 24/7. And, like manufacturing facilities, their ultimate success or failure for service providers will be determined by operational excellence, the two-word phrase that rules manufacturing. Are we consuming more, or less, energy per token than our competitor? Why is mean time to failure rising? What’s the current operational equipment effectiveness (OEE)? In oil and chemicals, the end products sold to customers are indistinguishable commodities. Where they differ is in process design, leveraging distinct combinations of technologies to squeeze out marginal gains.

    The same will occur in AI. Cloud operators already are engaged in differentiating their backbone facilities. Some have adopted optical switching to reduce energy and latency. Others have been more aggressive at developing their own custom CPUs. In 2010, the main difference between a million-square-foot hyperscale data center and a data center inside a regional office was size. Both were built around the same core storage devices, servers and switches. Going forward, diversity will rule, and the operators with the lowest cost, least downtime and ability to roll out new differentiating services and applications will become the favorite of businesses and consumers.

    The best infrastructure, in short, will win.

    The Custom Concept

    And the chief way to differentiate infrastructure will be through custom infrastructure enabled by custom semiconductors, i.e., chips containing unique IP or features for achieving leapfrog performance in an application. It’s a spectrum, ranging from AI accelerators built around a distinct, singular design to merchant chips containing additional custom IP, cores, and firmware to optimize them for a particular software environment. While the focus is now primarily on higher-value chips such as AI accelerators, every chip will get customized: Meta, for example, recently unveiled a custom NIC, a relatively unsung chip that connects servers to networks, to reduce the impact of downtime.
