MLPerf Inference v3.1 introduces new LLM and recommendation benchmarks

The latest release of MLPerf Inference introduces new LLM and recommendation benchmarks, marking a significant step forward for AI testing.

The v3.1 iteration of the benchmark suite has seen record participation, with more than 13,500 results submitted and performance improvements of up to 40 percent.

What sets this round apart is its diverse pool of 26 submitters and over 2,000 power results, demonstrating the broad spectrum of...

Gcore partners with UbiOps and Graphcore to empower AI teams

Gcore has joined forces with UbiOps and Graphcore to introduce a new service catering to the escalating compute demands of modern AI workloads.

The strategic partnership aims to give AI teams on-demand access to powerful computing resources, enhancing their capabilities and streamlining their operations.

The collaboration combines the strengths of three industry leaders: Graphcore, renowned for its Intelligence Processing Unit (IPU) hardware; UbiOps, a powerful machine...

Baidu to launch powerful ChatGPT rival

Chinese web giant Baidu is preparing to launch a powerful ChatGPT rival in March.

Baidu is often called the “Google of China” because it offers similar services, including search, maps, email, ads, cloud storage, and more. Baidu, like Google, also invests heavily in AI and machine learning.

Earlier this month, AI News reported that Google was changing its AI review processes to speed up the release of new solutions. One of the first products to be released under...

MLCommons releases latest MLPerf Training benchmark results

Open engineering consortium MLCommons has released its latest MLPerf Training community benchmark results.

MLPerf Training is a full system benchmark that tests machine learning models, software, and hardware.

The results are split into two divisions: closed and open. Closed submissions are better for comparing like-for-like performance as they use the same reference model to ensure a level playing field. Open submissions, meanwhile, allow participants to submit a...

NVIDIA chucks its MLPerf-leading A100 GPU into Amazon’s cloud

NVIDIA’s A100 set a new record in the MLPerf benchmark last month, and the GPU is now accessible through Amazon’s cloud.

Amazon Web Services (AWS) first launched a GPU instance 10 years ago with the NVIDIA M2050. It’s rather poetic that, a decade on, NVIDIA is now providing AWS with the hardware to power the next generation of groundbreaking innovations.

The A100 outperformed CPUs in this year’s MLPerf by up to 237x in data centre inference. A single NVIDIA DGX A100...

NVIDIA sets another AI inference record in MLPerf

NVIDIA has set yet another record for AI inference in MLPerf with its A100 Tensor Core GPUs.

MLPerf consists of five inference benchmarks, which cover the three main AI applications today: image classification, object detection, and translation.

“Industry-standard MLPerf benchmarks provide relevant performance data on widely used AI networks and help make informed AI platform buying decisions,” said Rangan Majumder, VP of Search and AI at Microsoft.

Last...

NVIDIA comes out on top in first MLPerf inference benchmarks

The first benchmark results from the MLPerf consortium have been released, and NVIDIA is a clear winner for inference performance.

For those unaware, inference is the stage where a trained deep learning model is applied to new, incoming data.
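
As a rough illustration, here is a minimal inference sketch in PyTorch; the pretrained ResNet-50 model and the random stand-in input are illustrative assumptions, not details from any MLPerf submission:

```python
# Minimal inference sketch: load a pretrained image classifier and run
# it on new data. The model choice (ResNet-50) and the random stand-in
# input are illustrative assumptions, not part of any MLPerf result.
import torch
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.eval()  # inference mode: disables dropout, freezes batch-norm stats

# Stand-in for incoming data: a batch of one 224x224 RGB image.
batch = torch.randn(1, 3, 224, 224)

with torch.no_grad():  # no gradient tracking needed at inference time
    logits = model(batch)

print("Predicted ImageNet class index:", logits.argmax(dim=1).item())
```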

MLPerf is a consortium that aims to provide “fair and useful” standardised benchmarks for inference performance. It can be thought of as doing for inference what SPEC does for benchmarking CPUs and general system...

Esteemed consortium launch AI natural language processing benchmark

A research consortium featuring some of the greatest minds in AI is launching a benchmark to measure natural language processing (NLP) abilities.

The consortium includes Google DeepMind, Facebook AI, New York University, and the University of Washington. Each of the consortium’s members believes a more comprehensive NLP benchmark is needed than current solutions provide.

The result is a benchmarking platform called SuperGLUE which replaces an older platform called GLUE...

AnTuTu’s latest benchmark tests AI chip performance

We can now better scrutinise manufacturers’ claims about AI chip performance improvements thanks to AnTuTu’s latest benchmark.

If you’ve ever read a comprehensive smartphone review, you’ve likely heard of AnTuTu. The company’s smartphone benchmarking tool is often used for testing and comparing the CPU and 3D performance of devices.

With dedicated AI chips now appearing in devices from the mid-range to flagships, AnTuTu has decided it’s time for a benchmark...