Intel Corp. Chief Executive Officer Pat Gelsinger took the stage at the Computex show in Taiwan to talk about new products he expects will help turn back the tide of share losses to peers, including AI leader Nvidia Corp.
Intel showed its new Xeon 6 data center processors with more efficient cores that will allow operators to cut the space required for a given task to a third of what prior-generation hardware needs. Like rivals from Advanced Micro Devices Inc. to Qualcomm Inc., Intel touted benchmarks showing its new silicon is significantly better than its existing options. AMD's and Qualcomm's CEOs, in earlier Computex keynotes, used Intel's laptop and desktop processors as the baseline to show how far ahead they are in certain aspects of technology.
Gelsinger took a direct shot at Nvidia CEO Jensen Huang’s claim that traditional processors like Intel’s are running out of steam in the age of artificial intelligence.
“Unlike what Jensen would have you believe, Moore’s Law is alive and well,” he said, stressing that Intel will have a major role to play in the proliferation of AI as the leading provider of PC chips.
“I think of it like the internet 25 years ago, it’s that big,” Gelsinger said. “We see this as the fuel that’s driving the semiconductor industry to reach $1 trillion by the end of the decade.”
Shares of Intel were little changed at 9:59 a.m. in New York on Tuesday.
Intel’s Gaudi systems, which combine its chips into kits of multiple processors tailored to handle generative AI training, will be offered by partners like Dell Technologies Inc. and Inventec Corp., Gelsinger said. One kit with eight Intel Gaudi 2 accelerators will sell for $65,000. A more powerful kit of eight Intel Gaudi 3 accelerators will list at $125,000, with the company estimating both offerings are more affordable than competitors’.
At larger scale, Intel estimates that a cluster of 8,192 Gaudi 3 accelerators can train an AI model up to 40% faster than an equivalently sized cluster of Nvidia H100 GPUs. Intel also said Gaudi 3 would be as much as two times faster than Nvidia’s H100 at AI inference tasks, as measured on popular models like those made by Meta Platforms Inc. and Mistral. Those advantages may not be enough to topple Nvidia from the lead in data center AI processing.
“The performance of each individual accelerator is no longer the most important thing,” said Leonard Lee, an analyst at neXt Curve. Nvidia’s greatest advantage is in having a cohesive and integrated ecosystem and proprietary technology like NVLink that ensures its computing clusters work as one. “The power is in being able to create a massive logical accelerator of tremendous size.”
Santa Clara-based Intel has led the computer industry for decades, but its revenue has slid over the last two years as it’s fallen behind rivals. Gelsinger, who was brought back to the company three years ago to turn around its fortunes, has spent heavily to revitalize its offerings and build out a factory network he has said will reclaim leadership in chip design and manufacturing.
While Intel’s sales have stopped shrinking, analysts aren’t projecting a rapid rebound, and the company is on course to end 2024 with $20 billion less revenue than it had in 2021. Meanwhile, Nvidia’s sales are set to double and AMD’s to grow by more than 10%, according to estimates, as those companies take better advantage of the flood of spending on AI computing hardware.
“This is the most consequential time of our careers together,” Gelsinger said, reiterating the importance for Intel of working with its partners. “We were made for this moment.”
Bloomberg News provided this article. For more articles like this, please visit bloomberg.com.