AI leaders warn about ‘risk of extinction’ in open letter

The Center for AI Safety (CAIS) recently issued a statement signed by prominent figures in AI warning about the potential risks posed by the technology to humanity.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” reads the statement.

Signatories of the statement include renowned researchers and Turing Award winners such as Geoffrey Hinton and Yoshua Bengio, as well as executives from OpenAI and DeepMind, including Sam Altman, Ilya Sutskever, and Demis Hassabis.

The CAIS letter aims to spark discussions about the various urgent risks associated with AI and has attracted both support and criticism across the wider industry. It follows another open letter signed by Elon Musk, Steve Wozniak, and over 1,000 other experts who called for a halt to “out-of-control” AI development.

Befitting its brevity, the statement neither defines AI nor offers concrete strategies for mitigating the risks it describes. However, CAIS clarified in a press release that its goal is to establish safeguards and institutions to ensure that AI risks are effectively managed.

OpenAI CEO Sam Altman has been actively engaging with global leaders and advocating for AI regulations. During a recent Senate appearance, Altman repeatedly called on lawmakers to heavily regulate the industry. The CAIS statement aligns with his efforts to raise awareness about the dangers of AI.

While the open letter has garnered attention, some experts in AI ethics have criticised the trend of issuing such statements.

Dr Sasha Luccioni, a machine-learning research scientist, suggests that placing hypothetical AI risks alongside tangible ones like pandemics and climate change lends them unearned credibility while diverting attention from immediate issues such as bias, legal challenges, and consent.

Daniel Jeffries, a writer and futurist, argues that discussing AI risks has become a status game in which individuals jump on the bandwagon without incurring any real costs.

Critics believe that signing open letters about future threats allows those responsible for current AI harms to alleviate their guilt while neglecting the ethical problems associated with AI technologies already in use.

However, CAIS – a San Francisco-based nonprofit – remains focused on reducing societal-scale risks from AI through technical research and advocacy. The organisation was co-founded by experts with backgrounds in computer science and a keen interest in AI safety.

While some researchers fear the emergence of a superintelligent AI that could surpass human capabilities and pose an existential threat, others argue that signing open letters about hypothetical doomsday scenarios distracts from the existing ethical dilemmas surrounding AI. They emphasise the need to address the real problems AI poses today, such as surveillance, biased algorithms, and the infringement of human rights.

Balancing the advancement of AI with responsible implementation and regulation remains a crucial task for researchers, policymakers, and industry leaders alike.

(Photo by Apolo Photographer on Unsplash)

Related: OpenAI CEO: AI regulation ‘is essential’

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Over 1,000 researchers sign letter opposing ‘crime predicting’ AI

More than 1,000 researchers, academics, and experts have signed an open letter opposing the use of AI to predict crime.

Anyone who has watched the sci-fi classic Minority Report will be concerned about attempts to predict crime before it happens. In an ideal scenario, crime prediction could help determine where to allocate police resources – but the reality will be very different.

The researchers are speaking out ahead of an imminent publication titled ‘A Deep Neural Network Model to Predict Criminality Using Image Processing’. In the paper, the authors claim to be able to predict whether a person will become a criminal based on automated facial recognition.

“By automating the identification of potential threats without bias, our aim is to produce tools for crime prevention, law enforcement, and military applications that are less impacted by implicit biases and emotional responses,” says Harrisburg University Professor and co-author of the paper Nathaniel J.S. Ashby.

“Our next step is finding strategic partners to advance this mission.”

Finding willing partners may prove to be a challenge. Signatories of the open letter include AI researchers and engineers at tech giants such as Microsoft, Google, and Facebook.

In their letter, the signatories highlight the many issues with today’s AI technologies that make dabbling in crime prediction so dangerous.

Chief among the concerns is the well-documented racial bias of algorithms. Current facial recognition systems are more accurate at detecting white males and, when used in law enforcement settings, disproportionately misidentify members of the BAME community as criminals.
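
Such disparities are typically quantified in audits as a per-group false-positive rate. The following is a minimal sketch with entirely synthetic data and hypothetical error rates — no real system or benchmark is involved — showing how that kind of measurement is computed:

```python
# Minimal sketch: measuring per-group false-positive rates of a
# face-matching system. All data here is synthetic and the error
# rates are hypothetical; this only illustrates the kind of audit
# that has documented higher error rates for non-white faces.
import numpy as np

rng = np.random.default_rng(0)

def false_positive_rate(labels: np.ndarray, preds: np.ndarray) -> float:
    """Fraction of true negatives that the system wrongly flags."""
    negatives = labels == 0
    return float(preds[negatives].mean())

# Synthetic ground truth: nobody in either group is on the watchlist.
n = 10_000
labels = np.zeros(n, dtype=int)

# Hypothetical system output: a higher spurious match rate for group B,
# mimicking the documented accuracy gap.
preds_group_a = rng.random(n) < 0.01   # 1% false matches
preds_group_b = rng.random(n) < 0.05   # 5% false matches

print(f"Group A FPR: {false_positive_rate(labels, preds_group_a):.3f}")
print(f"Group B FPR: {false_positive_rate(labels, preds_group_b):.3f}")
```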

However, even if the inaccuracies of facial recognition algorithms are addressed, the researchers highlight the problems with the current justice system, which have been put in the spotlight in recent weeks following the murder of George Floyd.

In their letter, the researchers explain:

“Research of this nature — and its accompanying claims to accuracy — rest on the assumption that data regarding criminal arrest and conviction can serve as reliable, neutral indicators of underlying criminal activity. Yet these records are far from neutral.

As numerous scholars have demonstrated, historical court and arrest data reflect the policies and practices of the criminal justice system. These data reflect who police choose to arrest, how judges choose to rule, and which people are granted longer or more lenient sentences.

Countless studies have shown that people of color are treated more harshly than similarly situated white people at every stage of the legal system, which results in serious distortions in the data. Thus, any software built within the existing criminal legal framework will inevitably echo those same prejudices and fundamental inaccuracies when it comes to determining if a person has the ‘face of a criminal.’”
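
The letter’s central argument — that a model trained on arrest records learns policing patterns rather than underlying behaviour — can be illustrated with a toy simulation. The numbers below are hypothetical and the “model” is deliberately naive; this is a sketch of the mechanism, not a claim about any real system:

```python
# Toy simulation (synthetic data, hypothetical numbers): two groups with
# IDENTICAL underlying offence rates, but group B is policed twice as
# heavily. Any model trained on the resulting arrest records "learns"
# that group membership predicts criminality, echoing the data's bias.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

group = rng.integers(0, 2, n)      # 0 = group A, 1 = group B
offends = rng.random(n) < 0.05     # same true rate for everyone

# Biased enforcement: offences by group B are twice as likely to
# produce an arrest record.
arrest_prob = np.where(group == 1, 0.6, 0.3)
arrested = offends & (rng.random(n) < arrest_prob)

# A "model" that simply estimates P(arrest | group) from the records —
# the best any classifier can do when group is its only feature.
for g in (0, 1):
    rate = arrested[group == g].mean()
    print(f"Group {'AB'[g]}: true offence rate 5.0%, "
          f"arrest-record rate {rate:.1%}")
# Prints roughly 1.5% vs 3.0%: the data, not behaviour, drives the gap.
```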

Among the co-authors of the disputed paper is Jonathan W. Korn, a PhD student described as an NYPD veteran. Korn says that AI which can predict criminality would be “a significant advantage for law enforcement agencies.”

While such a system would make the lives of law enforcement officers easier, it would do so at the cost of privacy and the automation of racial profiling.

“Machine learning programs are not neutral; research agendas and the data sets they work with often inherit dominant cultural beliefs about the world,” warn the letter’s authors.

“The uncritical acceptance of default assumptions inevitably leads to discriminatory design in algorithmic systems, reproducing ideas which normalise social hierarchies and legitimise violence against marginalised groups.”

You can find the full open letter here.

(Photo by Bill Oxford on Unsplash)

Interested in hearing industry leaders discuss subjects like this? Attend the co-located 5G Expo, IoT Tech Expo, Blockchain Expo, AI & Big Data Expo, and Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London, and Amsterdam.
