ChatGPT’s political bias highlighted in study

A study conducted by computer and information science researchers from the UK and Brazil has raised concerns about the objectivity of ChatGPT.

The researchers claim to have discovered substantial political bias in ChatGPT's responses, leaning towards the left side of the political spectrum.

Published in the journal Public Choice this week, the study – conducted by Fabio Motoki, Valdemar Pinho, and Victor Rodrigues – argues that the presence of political bias in...

Study highlights impact of demographics on AI training

A study conducted in collaboration between Prolific, Potato, and the University of Michigan has shed light on the significant influence of annotator demographics on the development and training of AI models.

The study delved into the impact of age, race, and education on AI model training data, highlighting the potential dangers of biases becoming ingrained within AI systems.

“Systems like ChatGPT are increasingly used by people for everyday tasks,” explains...

AI in the justice system threatens human rights and civil liberties

The House of Lords Justice and Home Affairs Committee has determined the proliferation of AI in the justice system is a threat to human rights and civil liberties.

A report published by the committee today highlights the rapid pace of AI developments that are largely happening out of the public eye. Alarmingly, there seems to be a focus on rushing the technology into production with little concern about its potential negative impact.

Baroness Hamwee, Chair of the Justice...

Democrats renew push for ‘algorithmic accountability’

Democrats have reintroduced their Algorithmic Accountability Act that seeks to hold tech firms accountable for bias in their algorithms.

The bill is an updated version of one first introduced by Senator Ron Wyden (D-OR) in 2019, which never passed the House or Senate. The updated bill was introduced this week by Wyden alongside Senator Cory Booker (D-NJ) and Representative Yvette Clarke (D-NY).

Concern about bias in algorithms is increasing as they become used for ever more...

AI bias harms over a third of businesses, 81% want more regulation

AI bias is already harming businesses and there’s significant appetite for more regulation to help counter the problem.

The findings come from the State of AI Bias report by DataRobot in collaboration with the World Economic Forum and global academic leaders. The report involved responses from over 350 organisations across industries.

Kay Firth-Butterfield, Head of AI and Machine Learning at the World Economic Forum, said:

“DataRobot’s research...

Editorial: Our predictions for the AI industry in 2022

The AI industry continued to thrive this year as companies sought ways to support business continuity through rapidly changing situations. Many of those already invested are now doubling down after reaping the benefits.

As we wrap up the year, it’s time to look ahead at what to expect from the AI industry in 2022.

Tackling bias

Our ‘Ethics & Society’ category got more use than most others this year, and with good reason. AI cannot thrive when it’s...

UK health secretary hopes AI projects can tackle racial inequality

UK Health Secretary Sajid Javid has greenlit a series of AI-based projects that aim to tackle racial inequalities in the NHS.

Racial inequality continues to be rampant in healthcare. Examining the fallout of COVID-19 serves as yet another example of the disparity between ethnicities.

In England and Wales, males of Black African ethnic background had the highest rate of death involving COVID-19, 2.7 times higher than males of a White ethnic background. Females of Black...

Nvidia and Microsoft develop 530 billion parameter AI model, but it still suffers from bias

Nvidia and Microsoft have developed a massive 530 billion parameter AI model, but it still suffers from bias.

The pair claim their Megatron-Turing Natural Language Generation (MT-NLG) model is the "most powerful monolithic transformer language model trained to date".

For comparison, OpenAI’s much-lauded GPT-3 has 175 billion parameters.

The duo trained their impressive model on 15 datasets with a total of 339 billion tokens. Various sampling weights...

F-Secure: AI-based recommendation engines are easy to manipulate

Cybersecurity giant F-Secure has warned that AI-based recommendation systems are easy to manipulate.

Recommendations often come under increased scrutiny around major elections due to concerns that bias could, in extreme cases, lead to electoral manipulation. However, the recommendations that are delivered to people day-to-day matter just as much, if not more.

Matti Aksela, VP of Artificial Intelligence at F-Secure, commented:

“As we rely more and more on AI...

TUC: Employment law gaps will lead to staff ‘hired and fired by algorithm’

Legal experts and the Trades Union Congress (TUC) have warned that gaps in employment law will lead to staff “hired and fired by algorithm”.

A report, commissioned by the TUC and carried out by leading employment rights lawyers Robin Allen QC and Dee Masters from the AI Law Consultancy, claims there are “huge gaps” in British law.

“The TUC is right to call for urgent legislative changes to ensure that workers and companies can both enjoy the benefits of AI,”...