SEC turns its gaze from crypto to AI

US Securities and Exchange Commission (SEC) Chairman Gary Gensler has announced a shift in focus from cryptocurrency to AI.

Gensler, who has been vocal about the risks and challenges posed by the cryptocurrency industry, now believes that AI is the technology that "warrants the hype" and deserves greater attention from regulators.

Gensler’s interest in AI dates back to 1997 when he became intrigued by the technology after witnessing Russian chess grandmaster Garry...

BSI publishes guidance to boost trust in AI for healthcare

In a bid to foster greater digital trust in AI products used for medical diagnoses and treatment, the British Standards Institution (BSI) has released high-level guidance.

The guidance, titled ‘Validation framework for the use of AI within healthcare – Specification (BS 30440)’, aims to bolster confidence among clinicians, healthcare professionals, and providers regarding the safe, effective, and ethical development of AI tools.

As the global debate on the...

AI regulation: A pro-innovation approach – EU vs UK

In this article, the writers compare the United Kingdom’s plans for implementing a pro-innovation approach to regulation (the “UK Approach”) with the European Union’s proposed Artificial Intelligence Act (the “EU AI Act”).

Authors: Sean Musch, AI & Partners and Michael Borrelli, AI & Partners

AI – The opportunity and the challenge

AI currently delivers broad societal benefits, from medical advances to mitigating climate change. As an example, an AI...

AI Act: The power of open-source in guiding regulations

As the EU debates the AI Act, lessons from open-source software can inform the regulatory approach to open ML systems.

The AI Act, set to be a global precedent, aims to address the risks associated with AI while encouraging the development of cutting-edge technology. One of the key aspects of this Act is its support for open-source, non-profit, and academic research and development in the AI ecosystem. Such support ensures the development of safe, transparent, and accountable AI...

Assessing the risks of generative AI in the workplace

Amid the exponential growth of generative AI, there is a pressing need to evaluate the legal, ethical, and security implications of these solutions in the workplace.

One concern often highlighted by industry experts is the lack of transparency regarding the data on which many generative AI models are trained.

There is insufficient information about the specifics of the training data used for models like GPT-4, which powers applications such as ChatGPT. This lack...

Beijing publishes its AI governance rules

Chinese authorities have published rules governing generative AI which go substantially beyond current regulations in other parts of the world.

One notable requirement is that operators of generative AI must ensure that their services adhere to the core values of socialism, while also avoiding content that incites subversion of state power, secession, terrorism, or any actions undermining national unity and social stability.

Generative AI services within China are...

Mithril Security demos LLM supply chain ‘poisoning’

Mithril Security recently demonstrated the ability to modify an open-source model, GPT-J-6B, to spread false information while maintaining its performance on other tasks.

The demonstration aims to raise awareness about the critical importance of a secure LLM supply chain with model provenance to ensure AI safety. Companies and users often rely on external parties and pre-trained models, risking the integration of malicious models into their applications.

This situation...

OpenAI introduces team dedicated to stopping rogue AI

The potential dangers of highly intelligent AI systems have been a topic of concern for experts in the field.

Recently, Geoffrey Hinton – the so-called “Godfather of AI” – expressed his worries about the possibility of superintelligent AI surpassing human capabilities and causing catastrophic consequences for humanity.

Similarly, Sam Altman, CEO of OpenAI, the company behind the popular ChatGPT chatbot, admitted to being fearful of the potential effects of...

Google report highlights AI’s impact on the UK economy

A new report by Google emphasises that AI represents the most profound technological shift of our lifetime and has the potential to significantly enhance the UK's economy.

The report suggests that by 2030, AI could boost the UK economy by £400 billion, equivalent to an annual growth rate of 2.6 percent.

Steven Mooney, CEO of FundMyPitch, commented:

“If AI is projected to bring billions to the UK economy, then why on earth aren’t our start-ups and SMEs...

Universities want to ensure staff and students are ‘AI-literate’

In a joint statement published today, the 24 Vice Chancellors of the Russell Group of universities have pledged their commitment to ensuring the ethical and responsible use of generative AI and new technologies like ChatGPT.

Universities are increasingly recognising the importance of equipping their students and staff with AI literacy skills so they can leverage the opportunities presented by technological advancements in teaching and learning.

Sheila Flavell CBE, Chief...