
Google touts AI supercomputer; Nvidia tops MLPerf 3.0 tests

The war of words among AI supercomputer vendors escalated this week, with Google claiming that its TPU-based system is faster and more efficient than Nvidia’s A100-based entry, according to its own testing. Nvidia countered that its H100 system is faster, based on testing conducted by the independent MLCommons using MLPerf 3.0.

Google researchers reported that its Tensor Processing Unit-based supercomputer, v4, is 1.2 to 1.7 times faster than Nvidia’s 3-year-old A100 system and uses 1.3 to 1.9 times less power. The MLPerf 3.0 tests measured Nvidia’s newer H100 against systems entered by 25 organizations, but Google’s TPU-based v4 system was not one of them.
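
Taken together, the reported speed and power figures imply a combined performance-per-watt advantage, which the short sketch below works out. Note the caveat: the low and high endpoints come from different benchmark workloads, so multiplying them gives an illustrative range, not paired measurements from the paper.

```python
# Illustrative arithmetic only: combining Google's reported speedup range
# (1.2x-1.7x vs. A100) with its reported power-efficiency range (1.3x-1.9x
# less power). The endpoints are not paired measurements.
speedup_low, speedup_high = 1.2, 1.7   # reported speed advantage vs. A100
power_low, power_high = 1.3, 1.9       # reported power advantage vs. A100

# Perf/watt scales as (speedup) x (power advantage).
perf_per_watt_low = speedup_low * power_low
perf_per_watt_high = speedup_high * power_high

print(f"Implied perf/watt advantage: "
      f"{perf_per_watt_low:.2f}x to {perf_per_watt_high:.2f}x")
```

Under those assumptions, the implied range is roughly 1.6x to 3.2x better performance per watt than the A100, which is why Google leads with both numbers rather than either one alone.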

A direct system-to-system comparison of the two companies’ latest systems would have to be conducted by an independent organization running a variety of AI-based workloads for any benchmarks to be definitive, analysts said.

“This is not so much a story about testing as it is a marketing story,” said Dan Newman, chief analyst at Futurum Research and CEO of The Futurum Group. “We’re at an inflection point where AI competitors believe if they have it, then flaunt it. But what [Google] especially is doing is reminding users what they have so they aren’t ruled out of any AI market early.”

Another analyst agreed that each company is using its respective testing results to gain mind share among users in what is shaping up as a battle royal involving not just Nvidia and Google, but also Microsoft and AWS in the coming years.

“This move by Google is an attempt to assure users they are not going to rush something to market or do something ******* that makes them look bad,” said Jack Gold, president and principal analyst at J.Gold Associates. “Is Google behind, schedule-wise, compared with Nvidia? Yes. Are they behind technology-wise? It’s hard to say until their stuff gets delivered and tested.”

Google’s AI supercomputer

Earlier this week, Google published a scientific paper detailing that it has built a system with more than 4,000 TPUs, tied together with custom components and capable of running and training AI models. The system has been in use internally since 2020 and has been used to train Google’s PaLM model, an offering that competes with OpenAI’s ChatGPT. Google has used TPUs for more than 90% of its work on AI training, the company said.

In the paper, the company said the custom components included its own optical switches capable of connecting individual machines. These connections figure to play a key role among competitors in the AI supercomputer market because the large language models that fuel technologies such as ChatGPT and Google’s Bard are too large to be stored on a single chip, the paper stated.
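
A rough back-of-envelope calculation shows why a large model cannot live on one chip. The figures below are assumptions for illustration, not from the article: a 540-billion-parameter model (PaLM’s published size), 16-bit weights, and a 32 GB per-chip memory capacity in the ballpark of current accelerators.

```python
# Illustrative sketch: memory needed just to hold model weights.
# Assumed figures: 540B parameters (PaLM's published size), bfloat16
# weights, and 32 GB of on-chip HBM per accelerator.
params = 540e9           # model parameters (assumption)
bytes_per_param = 2      # 16-bit (bfloat16) weights
hbm_per_chip_gb = 32     # assumed per-accelerator memory, GB

weights_gb = params * bytes_per_param / 1e9   # GB of weights alone
chips_needed = weights_gb / hbm_per_chip_gb   # chips just to hold weights

print(f"Weights: {weights_gb:.0f} GB -> at least "
      f"{chips_needed:.0f} chips to hold them")
```

Under these assumptions the weights alone occupy over a terabyte, spanning dozens of chips before counting activations and optimizer state, which training multiplies several-fold; hence the emphasis on fast chip-to-chip interconnects such as Google’s optical switches.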

What further clouds the objectivity of Google’s test results is the proprietary nature of TPUs, which are specifically enhanced to run Google’s AI software, including Bard.

“Google is designing these chips to meet its own needs; it’s not so much a general-purpose device,” Gold said. “But it’s advantageous for them to do so because they save a bunch of money, get better profit margins and increase sales of other offerings to people using their cloud services. Microsoft and AWS are doing the same thing.”

Nvidia tops MLPerf testing

Nvidia pointed out that it ran all the MLPerf benchmarks, including the latest networked tests, in which model data is fed to the servers over a network rather than being preloaded into the system. The MLPerf results showed that the company’s H100 Tensor Core GPUs delivered the highest performance in every test of AI inference. The GPUs delivered up to 54% performance gains since last September thanks to a number of new software optimizations, the company said.

In a blog post, Nvidia CEO Jensen Huang said that three years ago, when the company delivered the A100, the AI world was dominated by computer vision, but now “generative AI has arrived.”

“This is exactly why we built Hopper, specifically optimized for GPT with the Transformer Engine. Today’s MLPerf 3.0 highlights Hopper delivering 4x more performance than A100. The next level of Generative AI requires new AI infrastructure to train large language models with great energy efficiency,” Huang said in the blog.

As Editor at Large in TechTarget Editorial’s News Group, Ed Scannell is responsible for writing and reporting breaking news, news analysis and features focused on technology issues and trends affecting corporate IT professionals.


