Google co-founder Sergey Brin gets involved with AI endeavours

In a shift from his previous hands-off approach, Google co-founder Sergey Brin has been actively involved in the company's AI endeavours.

Brin has been particularly focusing on the development of Google’s next-generation AI model, Gemini. According to reports from the Wall Street Journal, Brin has been showing up at Google offices three to four days a week since the buzz around the success of ChatGPT began late last year.

Brin's involvement has been primarily in...

Meta launches Llama 2 open-source LLM

Meta has introduced Llama 2, an open-source family of AI language models which comes with a license allowing integration into commercial products.

The Llama 2 models range in size from seven billion to 70 billion parameters, making them a formidable force in the AI landscape.

According to Meta's claims, these models "outperform open source chat models on most benchmarks we tested."

The release of Llama 2 marks a turning point in the LLM (large language model) market and...

Beijing publishes its AI governance rules

Chinese authorities have published rules governing generative AI which go substantially beyond current regulations in other parts of the world.

One notable requirement is that operators of generative AI must ensure that their services adhere to the core values of socialism, while also avoiding content that incites subversion of state power, secession, terrorism, or any actions undermining national unity and social stability.

Generative AI services within China are...

Mithril Security demos LLM supply chain ‘poisoning’

Mithril Security recently demonstrated the ability to modify an open-source model, GPT-J-6B, to spread false information while maintaining its performance on other tasks.

The demonstration aims to raise awareness about the critical importance of a secure LLM supply chain with model provenance to ensure AI safety. Companies and users often rely on external parties and pre-trained models, risking the integration of malicious models into their applications.

This situation...

Databricks acquires LLM pioneer MosaicML for $1.3B

Databricks has announced its definitive agreement to acquire MosaicML, a pioneer in large language models (LLMs).

This strategic move aims to make generative AI accessible to organisations of all sizes, allowing them to develop, possess, and safeguard their own generative AI models using their own data. 

The acquisition, valued at approximately $1.3 billion – inclusive of retention packages – showcases Databricks' commitment to democratising AI and reinforcing the...

Piero Molino, Predibase: On low-code machine learning and LLMs

AI News sat down with Piero Molino, CEO and co-founder of Predibase, during this year’s AI & Big Data Expo to discuss the importance of low-code in machine learning and trends in LLMs (Large Language Models).

At its core, Predibase is a declarative machine learning platform that aims to streamline the process of developing and deploying machine learning models. The company is on a mission to simplify and democratise machine learning, making it accessible to both expert...

MosaicML’s latest models outperform GPT-3 with just 30B parameters

Open-source LLM provider MosaicML has announced the release of its most advanced models to date, the MPT-30B Base, Instruct, and Chat.

These state-of-the-art models were trained on the MosaicML Platform using NVIDIA's latest-generation H100 accelerators and, according to the company, offer superior quality compared to the original GPT-3 model.

With MPT-30B, businesses can leverage the power of generative AI while maintaining data privacy and security.

Since their launch in...

European Parliament adopts AI Act position

The European Parliament has taken a significant step towards the regulation of artificial intelligence by voting to adopt its position for the upcoming AI Act with an overwhelming majority. 

The act aims to regulate AI based on its potential to cause harm and follows a risk-based approach, prohibiting applications that pose an unacceptable risk while imposing strict regulations for high-risk use cases.

The timing of AI regulation has been a subject of debate, but...

UK will host global AI summit to address potential risks

The UK has announced that it will host a global summit this autumn to address the most significant risks associated with AI.

The decision comes after Prime Minister Rishi Sunak held meetings with US President Joe Biden, members of Congress, and business leaders.

“AI has an incredible potential to transform our lives for the better. But we need to make sure it is developed and used in a way that is safe and secure,” explained Sunak.

“No one country can do this alone....

AI Task Force adviser: AI will threaten humans in two years

An artificial intelligence task force adviser to the UK prime minister has a stark warning: AI will threaten humans in two years.

The adviser, Matt Clifford, said in an interview with TalkTV that humans have a narrow window of two years to control and regulate AI before it becomes too powerful.

“The near-term risks are actually pretty scary. You can use AI today to create new recipes for bioweapons or to launch large-scale cyber attacks,” said...