GTC 2021: Nvidia debuts accelerated computing libraries, partners with Google, IBM, and others to speed up quantum research

Nvidia has unveiled 65 new and updated software development kits at GTC 2021, alongside a partnership with industry leaders to speed up quantum research.

The company’s roster of accelerated computing kits now exceeds 150 and supports the almost three million developers in NVIDIA’s Developer Program.

Four of the major new SDKs are:

ReOpt – Automatically optimises logistics processes using advanced, parallel algorithms. This includes vehicle routes, warehouse...
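For a sense of the problem class ReOpt addresses, here is a minimal, illustrative greedy nearest-neighbour routing heuristic. This is not ReOpt's API (the function name and inputs are invented for illustration); NVIDIA's library solves such problems with far more sophisticated GPU-parallel algorithms.

```python
# Illustrative only: a greedy nearest-neighbour heuristic for the kind of
# vehicle-routing problem ReOpt targets. Not NVIDIA's API.
from math import hypot

def nearest_neighbour_route(depot, stops):
    """Order delivery stops greedily by proximity, starting from the depot.

    depot: (x, y) starting point; stops: list of (x, y) coordinates.
    Returns the stops in visiting order.
    """
    route, remaining, current = [], list(stops), depot
    while remaining:
        # Pick the closest unvisited stop to the current position.
        nxt = min(remaining, key=lambda s: hypot(s[0] - current[0], s[1] - current[1]))
        remaining.remove(nxt)
        route.append(nxt)
        current = nxt
    return route
```

A greedy heuristic like this is fast but can produce routes far from optimal, which is why production solvers rely on parallel metaheuristics instead.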

EU launches antitrust probe into $40B Nvidia-Arm acquisition proposal

The European Commission is following the UK in launching its own antitrust probe into the proposed Nvidia-Arm acquisition.

Nvidia has put in a $40 billion offer to acquire Cambridge-based Arm, whose chip designs are licensed for use in a large proportion of the world’s devices.

“AI is the most powerful technology force of our time and has launched a new wave of computing,” said Jensen Huang, founder and CEO of NVIDIA, when the acquisition was announced.

“In the years...

QuEST partners with NVIDIA to deliver next-gen AI solutions in Japan

QuEST has extended its partnership with NVIDIA to accelerate the digital transformation of Japanese businesses with next-gen AI solutions.

NVIDIA named QuEST an Elite Service Delivery Partner in the NVIDIA Partner Network (NPN) back in June.

Through NPN, QuEST has early access to NVIDIA platforms, software, solutions, workshops, and technology updates. The previous agreement only covered the USA but NVIDIA has now extended the collaboration to Japan.

Masataka...

Nvidia and Microsoft develop 530 billion parameter AI model, but it still suffers from bias

Nvidia and Microsoft have developed a 530 billion parameter AI model which, despite its unprecedented scale, still suffers from bias.

The pair claim their Megatron-Turing Natural Language Generation (MT-NLG) model is the "most powerful monolithic transformer language model trained to date".

For comparison, OpenAI’s much-lauded GPT-3 has 175 billion parameters.

The duo trained the model on 15 datasets containing a total of 339 billion tokens. Various sampling weights...

Nvidia-Arm merger in doubt as CMA has ‘serious’ concerns

The proposed merger between Nvidia and British chip technology giant Arm is looking increasingly doubtful as the CMA (Competition & Markets Authority) believes the deal “raises serious competition concerns”.

A $40 billion merger of two of the biggest names in the chip industry was always bound to catch the eye of regulators, especially when it has received such vocal opposition from around the world.

Hermann Hauser, co-founder of Arm, even suggested that...

UK considers blocking Nvidia’s $40B acquisition of Arm

Bloomberg reports the UK is considering blocking Nvidia’s $40 billion acquisition of Arm over national security concerns.

Over 160 billion chips have been made for various devices based on designs from Arm. In recent years, the company has added AI accelerator chips to its lineup for neural network processing.

In the wake of the proposed acquisition, Nvidia CEO Jensen Huang said:

“ARM is an incredible company and it employs some of the greatest engineering...

NVIDIA launches UK supercomputer to search for healthcare solutions

Nvidia’s ‘Cambridge-1’ is now operational and utilising AI and simulation to advance research in healthcare.

The UK’s most powerful supercomputer and among the world’s top fifty, Cambridge-1 was announced by the technology company in October last year and cost $100 million (£72m) to build.

Its first projects with AstraZeneca, GSK, Guy’s and St Thomas’ NHS Foundation Trust, King’s College London, and Oxford Nanopore Technologies include developing a...

MLCommons releases latest MLPerf Training benchmark results

Open engineering consortium MLCommons has released its latest MLPerf Training community benchmark results.

MLPerf Training is a full system benchmark that tests machine learning models, software, and hardware.

The results are split into two divisions: closed and open. Closed submissions are better for comparing like-for-like performance as they use the same reference model to ensure a level playing field. Open submissions, meanwhile, allow participants to submit a...

NVIDIA breakthrough emulates images from small datasets for groundbreaking AI training

NVIDIA’s latest breakthrough generates new images from existing small datasets, with significant potential for AI training.

The company demonstrated its latest AI model using a small dataset – just a fraction of the size typically used for a Generative Adversarial Network (GAN) – of artwork from the Metropolitan Museum of Art.

From the dataset, NVIDIA’s AI was able to create new images which replicate the style of the original artist’s work. These images...
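The limited-data technique NVIDIA described here is adaptive discriminator augmentation (ADA, from StyleGAN2-ADA): augmentation strength is tuned up whenever the discriminator starts overfitting the small real dataset. A minimal sketch of that feedback loop follows; the class name, step size, and heuristic below are illustrative, not NVIDIA's actual implementation (the 0.6 target mirrors the value used in the StyleGAN2-ADA paper).

```python
# Minimal sketch of adaptive discriminator augmentation (ADA).
# Illustrative names and constants; not NVIDIA's API.

class AdaptiveAugment:
    """Tune augmentation probability p from a discriminator-overfitting
    heuristic: the mean sign of the discriminator's outputs on real images."""

    def __init__(self, target=0.6, adjust_step=0.01, p=0.0):
        self.target = target            # desired mean sign of D(real)
        self.adjust_step = adjust_step  # how fast p moves per update
        self.p = p                      # probability each augmentation is applied

    def update(self, d_real_outputs):
        # r_t near 1.0 means D is confident on every real image (overfitting),
        # so augment more; r_t below target means augment less.
        r_t = sum(1 if d > 0 else -1 for d in d_real_outputs) / len(d_real_outputs)
        if r_t > self.target:
            self.p = min(1.0, self.p + self.adjust_step)
        else:
            self.p = max(0.0, self.p - self.adjust_step)
        return self.p
```

In training, `p` would gate random augmentations (flips, colour shifts, crops) applied to both real and generated images before they reach the discriminator, letting a GAN learn from datasets a fraction of the usual size without the discriminator memorising the reals.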

NVIDIA DGX Station A100 is an ‘AI data-centre-in-a-box’

NVIDIA has unveiled its DGX Station A100, an “AI data-centre-in-a-box” powered by up to four 80GB versions of the company’s record-setting GPU.

The A100 Tensor Core GPU set new MLPerf benchmark records last month—outperforming CPUs by up to 237x in data centre inference. In November, Amazon Web Services made eight A100 GPUs available in each of its P4d instances.

For those who prefer their hardware local, the DGX Station A100 is available in either four 80GB A100...