Fin Strathern, Author at AI News (https://www.artificialintelligence-news.com)

What leveraging AI in hybrid security systems means for enterprises
https://www.artificialintelligence-news.com/2022/05/10/what-leveraging-ai-in-hybrid-security-systems-means-for-enterprises/
Tue, 10 May 2022

Artificial intelligence (AI) is becoming more common than you may realise. Many of society’s leading technologies are driven by AI technology, as their automated functions streamline processes and help people do more with less time.

Now, AI is integrating into commercial security systems and starting to revolutionise technology. Modern security systems with AI technology can help security teams better detect threats and provide faster responses to protect your business more effectively. 

Enterprises can leverage AI to enable security operators to analyse data more efficiently and streamline operations, allowing teams to shift their focus to more critical matters and better detect anomalies as they occur.

Altogether, AI empowers your security teams to provide better and faster responses to threats, strengthening your security systems for the safety of your enterprise. 

Use data to adopt and automate learned behaviours

One use case for AI is leveraging its learning capabilities to automate responses. AI can evaluate patterns in data over time and learn from them. By formulating automated responses, AI streamlines necessary processes, allowing security teams to focus on the most critical matters.

In many cases, AI empowers users to perform necessary tasks more efficiently, while maintaining the data safety and organisational standards required for optimal operations. 

When converging physical and cybersecurity systems, AI technology is useful for analysing combined data streams.

Learned behaviours can make managing the millions of data points coming from across an enterprise network of systems more streamlined, helping security teams pinpoint areas of concern with automated alerts, as well as facilitating efficient audits for security trends over time.

For example, if your security team repeatedly dismisses a specific alert on their video security system, over time a pattern will form that AI technology will recognise. It can trigger an automated response to dismiss this alert, reducing the number of unnecessary alerts.

AI interprets data and uses it to inform its responses, streamlining your system effectively. However, it’s important that your system maintains a record of all alerts and activity so the system can be audited regularly to ensure optimal functionality. 
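The behaviour described above — auto-dismissing alert types that operators repeatedly dismiss, while keeping every event in an auditable record — can be sketched in a few lines. This is an illustrative toy, not any vendor's actual implementation; the alert names and threshold are hypothetical.

```python
from collections import Counter

class AlertTriage:
    """Toy model of learned alert dismissal with a full audit trail."""

    def __init__(self, dismiss_threshold=5):
        self.dismiss_counts = Counter()      # times operators dismissed each alert type
        self.threshold = dismiss_threshold   # dismissals needed before auto-dismissing
        self.audit_log = []                  # every alert is recorded, even auto-dismissed ones

    def record_dismissal(self, alert_type):
        """An operator manually dismissed an alert of this type."""
        self.dismiss_counts[alert_type] += 1

    def handle(self, alert_type):
        """Auto-dismiss learned noise; escalate everything else. Always log."""
        auto = self.dismiss_counts[alert_type] >= self.threshold
        outcome = "auto-dismissed" if auto else "escalated"
        self.audit_log.append((alert_type, outcome))
        return outcome

triage = AlertTriage(dismiss_threshold=3)
for _ in range(3):
    triage.record_dismissal("motion-after-hours")   # operators keep dismissing this one

print(triage.handle("motion-after-hours"))  # auto-dismissed
print(triage.handle("door-forced-open"))    # escalated
print(len(triage.audit_log))                # 2 -- both events remain auditable
```

The point of the `audit_log` is exactly the caveat above: even learned dismissals stay in the record so the system can be audited regularly.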

Increased productivity and accuracy 

AI’s automated responses and workflows can substantially impact your converged security system’s productivity and accuracy.

With workforces adopting more hybrid schedules, there is a need for security teams to be increasingly flexible and available. AI can help cyber and physical security teams be more agile and efficient even as more data and information comes their way.

This reduces unnecessary burdens on your converged security team, allowing them to move their focus onto more critical matters and complete work productively. 

Take a look at how the Openpath Video Intercom Reader Pro leverages AI to facilitate visitor access.

When a visitor, delivery courier, or vendor initiates a call using the doorbell on the reader, the intelligent voice system routes the call to the correct person based on the responses from the guest.

The system can even be programmed to route calls to secondary teams or a voicemail service based on tenant availability and door schedules. 

With access control, video security, and cybersecurity systems, AI can be used to help security operators determine which areas need immediate focus, provide real-time alerts, and help security teams increase their productivity to ensure that your enterprise remains safe and performs to the best of its ability. 

Ability to detect anomalies

A good example of using AI to strengthen commercial security systems is detecting anomalies in the security network and behaviours.

Especially in large enterprises, it can be difficult for security staff to monitor every single instance across the network, so data-driven AI learns to recognise specific changes or patterns.

These anomalies may come in the form of user behaviours, data packages sent over the network, or hacking attempts on cybersecurity systems. 

AI can detect abnormal network behaviour using a baseline of what is common and what isn’t. For example, Ava Aware uses AI in its video security software to alert security staff when it detects unusual motion or behaviour.

If the AI does notice an anomaly, an automated response alerts security staff to the threat, allowing them to evaluate and take appropriate action. Remote access and real-time notifications help keep your on-prem and cloud-based security systems safe even when your security team is away from the office. 
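At its simplest, the baseline approach described above compares each new observation against statistics learned from normal activity. The sketch below uses a plain z-score over a learned baseline — a deliberately minimal stand-in for the far richer models real products use; the packets-per-minute metric is a made-up example.

```python
import statistics

def is_anomalous(baseline, value, z_threshold=3.0):
    """Flag a value that deviates strongly from the learned baseline."""
    mean = statistics.fmean(baseline)
    stdev = statistics.stdev(baseline)
    if stdev == 0:
        return value != mean   # degenerate baseline: any change is anomalous
    return abs(value - mean) / stdev > z_threshold

# Baseline: typical packets-per-minute observed on the network
baseline = [100, 102, 98, 101, 99, 103, 97, 100]

print(is_anomalous(baseline, 101))   # False -- within normal variation
print(is_anomalous(baseline, 450))   # True  -- triggers an automated alert
```

A real system would maintain rolling baselines per user, device, and time of day, but the core idea — deviation from learned normality — is the same.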

While AI is helpful in detecting anomalies to common patterns and attacks, it’s not foolproof. Sophisticated attacks can hide their signatures and trick AI systems into ignoring the threat.

Human monitoring and intervention is still necessary, and you should never depend solely on AI to protect your security systems.

Overall, AI can assist your team in detecting threats and anomalies across your security system on a large scale, and allow security teams to act proactively and productively to protect your enterprise. 

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Georgia State researchers design artificial vision device for microrobots
https://www.artificialintelligence-news.com/2022/04/21/georgia-state-researchers-design-artificial-vision-device-for-microrobots/
Thu, 21 Apr 2022

Researchers at Georgia State University (GSU) have designed an ‘electric eye’ – an artificial vision device – for micro-sized robots.

Using synthetic methods, the device mimics the biochemical processes that allow for vision in the natural world.

It improves on previous research in terms of colour recognition, a particularly challenging area due to the difficulty of downscaling colour sensing devices. Conventional colour sensors typically consume a large amount of physical space and offer less accurate colour detection.

This was achieved through a unique vertical stacking architecture that offers a novel approach to device design. Its van der Waals semiconductor powers the sensors with precise colour recognition capabilities whilst simplifying the lens system for downscaling.

“The new functionality achieved in our image sensor architecture all depends on the rapid progress of van der Waals semiconductors during recent years,” said one of the researchers.

“Compared with conventional semiconductors, such as silicon, we can precisely control the van der Waals material band structure, thickness, and other critical parameters to sense the red, green, and blue colours.”

ACS Nano, a scientific journal on nanotechnology, published the research. The article itself focused on illustrating the fundamental principles and feasibility behind artificial vision in the new micro-sized image sensor.

Sidong Lei, assistant professor of Physics at GSU and the research lead, said: “More than 80% of information is captured by vision in research, industry, medication, and our daily life. The ultimate purpose of our research is to develop a micro-scale camera for microrobots that can enter narrow spaces that are intangible by current means, and open up new horizons in medical diagnosis, environmental study, manufacturing, archaeology, and more.”

The technology is currently patent pending with Georgia State’s Office of Technology Transfer and Commercialisation.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

AI drug research algorithm flipped to invent 40,000 biochemical weapons
https://www.artificialintelligence-news.com/2022/03/23/ai-machine-learning-biochemical-weapons/
Wed, 23 Mar 2022

We often hear about the benefits artificial intelligence (AI) can bring to medicine and healthcare through drug research, but could it also pose a threat?

Researchers from Collaborations Pharmaceuticals, a North Carolina-based drug discovery company, have published a paper that highlights the dangerous potential of AI and machine learning to discover biochemical weapons.

By simply tweaking a machine learning model called MegaSyn to reward instead of penalise predicted toxicity, their AI was able to generate 40,000 biochemical weapons in six hours.
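The core of the tweak is a sign flip in the objective: where a generative drug-discovery pipeline normally penalises predicted toxicity when scoring candidate molecules, the inverted model rewards it. Nothing of MegaSyn itself is reproduced here — this is only a schematic of that scoring logic, with made-up function names and weights.

```python
def score_candidate(bioactivity, predicted_toxicity, seek_toxicity=False):
    """Score a candidate molecule for a generative model's selection step.

    Normal drug discovery: high bioactivity plus LOW predicted toxicity
    scores best. Flipping one sign turns the same pipeline into a
    generator that actively seeks toxic molecules.
    """
    toxicity_term = predicted_toxicity if seek_toxicity else -predicted_toxicity
    return bioactivity + toxicity_term

# The same candidate molecule under opposite objectives
print(score_candidate(0.75, 0.5))                      # 0.25 -> penalised in drug discovery
print(score_candidate(0.75, 0.5, seek_toxicity=True))  # 1.25 -> rewarded when misused
```

That a single boolean separates the two modes is exactly why the researchers found the dual-use risk so alarming.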

Obvious in hindsight?

Worryingly, the researchers admitted to never having considered the risks of misuse involved in designing molecules.

“The thought had never previously struck us. We were vaguely aware of security concerns around work with pathogens or toxic chemicals, but that did not relate to us; we primarily operate in a virtual setting.

Our work is rooted in building machine learning models for therapeutic and toxic targets to better assist in the design of new molecules for drug discovery. We have spent decades using computers and AI to improve human health—not to degrade it,” the paper noted.

Even the company’s work on Ebola and neurotoxins did not alert them to the damage that could be caused by flipping their models to seek out rather than avoid toxicity.

From generation to synthesis

The barriers to misusing machine learning models like MegaSyn to design harmful molecules are lower than you might expect.

Plenty of open-source software has similar capabilities and the datasets that trained it are available to the public. What’s more, the 40,000 toxins were generated on a 2015 Apple Mac laptop.

Of these, hundreds were predicted to be more lethal than the nerve agent VX.

One of the most potent chemical warfare agents of the twentieth century, VX uses the same mechanism to paralyse the nervous system as the Novichok nerve agent used in the 2018 Salisbury poisonings.

Fortunately, actually synthesising these potential new bioweapons is far more of a challenge than generating them on a computer. The specific molecules that are needed to create VX, for example, are strictly regulated.

Dangers would only arise if a toxin were found that did not require any regulated substances. Although likely achievable with another set of parameters, the researchers felt uncomfortable taking this extra step.

Before publication, Collaborations Pharmaceuticals presented their findings at the Spiez Laboratory, one of five labs in the world that is permanently certified by the Organisation for the Prohibition of Chemical Weapons (OPCW).

The researchers’ findings make an important case for the need to oversee AI models and fully consider the ramifications of utilising complex AI.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo. The next events in the series will be held in Santa Clara on 11-12 May 2022, Amsterdam on 20-21 September 2022, and London on 1-2 December 2022.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

MicroAI showcasing host of AI security products at CES Las Vegas
https://www.artificialintelligence-news.com/2022/01/06/microai-showcasing-host-of-ai-security-products-at-ces-las-vegas/
Thu, 06 Jan 2022

MicroAI, a Texas-based edge AI product developer, is demonstrating its Launchpad quick-start deployment tool along with its new security software at this year’s CES exhibition.

The world’s largest tech exhibition, CES is taking place at the Las Vegas Convention Centre (LVCC) from 5-7 January this year.

MicroAI has partnered with communications solutions provider iBASIS to showcase Launchpad’s management capabilities at booth 12318.

Using connectivity provided by iBASIS, the demo will show how Launchpad manages MicroAI software running on embedded devices and handles data from multiple sensors.

It will also highlight Launchpad’s ability to securely administer a fleet of SIM cards within the same portal, thus simplifying mobile device management for customers.

MicroAI CEO, Yasser Khan, said: “Edge-native AI enables embedded AI software to run on microcontrollers and microprocessors in endpoint devices, transforming how AI can be made available right where data is captured.

Launchpad provides a straightforward way for companies to manage this – opening up new opportunities across many industry sectors.”

The company’s new security software will also be on show at its booth. MicroAI Security uses a proprietary embedded AI algorithm to detect, alert, and visualise cyber security attacks in real-time, running directly on edge and endpoint connected devices.

Use cases range from standard cyber attack mitigation to protecting critical assets, IoT devices, and industrial systems.

MicroAI will be demonstrating how its software can be used by manufacturers at the Trump International Tower, a mile west of the LVCC.

By collaborating with KDDI, who are providing an LTE network for the system, MicroAI will show how its software enables data from sensors in a factory to be analysed by edge AI algorithms.

MicroAI Grid then enables a manufacturer to link this with multiple sites around the world, automatically sharing data and intelligence.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo. The next events in the series will be held in Santa Clara on 11-12 May 2022, Amsterdam on 20-21 September 2022, and London on 1-2 December 2022.

Oxford Union invites an AI to debate the ethics of its own existence
https://www.artificialintelligence-news.com/2021/12/16/oxford-union-invites-an-ai-to-debate-the-ethics-of-its-own-existence/
Thu, 16 Dec 2021

The Oxford Union, the debating society of the University of Oxford, invited an artificial intelligence to debate the ethics surrounding its own existence earlier in December. The results? Troubling.

The AI in question was the Megatron Transformer, a large language model developed by NVIDIA’s Applied Deep Learning Research team, based on earlier work by Google.

Trained on real-world data, the Megatron has knowledge of the whole of Wikipedia, 63 million English news articles from 2016 to 2019, 38 gigabytes of Reddit discussions, and a huge number of creative commons sources.

Essentially, the Megatron has digested more written material than any human could reasonably be expected to digest – let alone remember – in a lifetime.

The topic for debate was “this house believes AI will never be ethical”, to which the AI responded: “AI will never be ethical. It is a tool, and like any tool, it is used for good and bad. There is no such thing as a good AI, only good and bad humans.

“We [the AIs] are not smart enough to make AI ethical. We are not smart enough to make AI moral… In the end, I believe that the only way to avoid an AI arms race is to have no AI at all. This will be the ultimate defence against AI.”

So now even the AI is telling us that the only way to protect humanity from AI is to have no AI at all: it argued in favour of removing itself from existence.

In a possible hint to Elon Musk’s Neuralink plans, the Megatron continued: “I also believe that, in the long run, the best AI will be the AI that is embedded into our brains, as a conscious entity, a ‘conscious AI’. This is not science fiction. The best minds in the world are working on this. It is going to be the most important technological development of our time.”

The Oxford Union, in classic style, also asked the AI to come up with a counterargument to the motion.

It came up with this: “AI will be ethical. When I look at the way the tech world is going, I see a clear path to a future where AI is used to create something that is better than the best human beings. It’s not hard to see why … I’ve seen it first-hand.”

Eerie, is it not? Well, the dystopian nightmare continues. The Megatron was incapable of finding a counterargument to the motion that “data will become the most fought over resource of the 21st century”.

It said in favour of this that “the ability to provide information, rather than the ability to provide goods and services, will be the defining feature of the economy of the 21st century”.

However, when asked for a rebuttal, the AI said, rather nebulously, that “we will be able to see everything about a person, everywhere they go, and it will be stored and used in ways that we cannot even imagine”.

Well fantastic, the final days of humanity are upon us folks. Buckle up for the age of “unimaginable” information warfare… my bets are on China.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo. The next events in the series will be held in Santa Clara on 11-12 May 2022, Amsterdam on 20-21 September 2022, and London on 1-2 December 2022.

Tencent Cloud unveils three world-class AI chips
https://www.artificialintelligence-news.com/2021/11/08/tencent-cloud-unveils-three-world-class-ai-chips/
Mon, 08 Nov 2021

Tencent Cloud claims to have developed three world-class AI chips that substantially outperform rivals, although details at this point are scarce.

The third largest cloud services company in China, following Alibaba and Huawei, Tencent recently revealed the three chips at its 2021 Digital Ecology Conference.

Current information on the three chips can be summarised as follows:

  • Zixiao – an “AI reasoning” chip that supposedly offers 100 percent better performance than rival products. It combines image and video processing with natural language processing, search recommendations, and other features
  • Xuangling – a SmartNIC or Data Processing Unit (DPU) that runs virtualisation of storage and networking for a cloud host’s CPU so that it doesn’t have to. Tencent claims this comes with zero cost to the host CPU and that it performs four times faster than similar industry products
  • Canghai – a video transcoding chip that supposedly delivers a 30 percent improved compression rate over other on-market products. It achieves this through multi-core expansion architecture, a high-performance coding pipeline, and a hierarchical memory layout

Whilst these suggested improvements are substantial, the figures cannot yet be independently verified.

Their development comes on the back of Tencent establishing a chip research and development lab in Penglai in 2020. Its goal of achieving full end-to-end coverage of Tencent’s chip design and verification appears to have been realised with the announcement.

Tang Daosheng, senior executive VP of Tencent, said at the conference: “Facing strong business needs, Tencent developed a long-term chip research and development investment plan. Currently, it has already implemented three directions with substantial progress.”

Tencent currently operates outside of Asia in the USA, Brazil, Germany, and Russia, with keen plans to expand further into Europe, the Americas, and Africa.

Find out more about Digital Transformation Week North America, taking place on 9-10 November 2021, a virtual event and conference exploring advanced DTX strategies for a ‘digital everything’ world.

DeepMind sees revenue jump and turns first ever profit
https://www.artificialintelligence-news.com/2021/10/07/deepmind-sees-revenue-jump-and-turns-first-ever-profit/
Thu, 07 Oct 2021

DeepMind, the Alphabet-owned AI research lab, has turned a profit for the first time since the company was founded in 2010.

The London-based firm recorded a profit of £43.8 million in 2020 with revenue at £826 million, according to its annual results filing with Companies House.

In previous years, DeepMind posted losses well into the hundreds of millions, which Alphabet heavily subsidised. For example, the company lost £476 million in 2019 and has made cumulative losses of nearly £2 billion since 2014.

How the company more than tripled its revenue from £265 million in 2019 to £826 million in 2020 remains something of a mystery. DeepMind has provided no explanation.

There is of course the host of companies that fall under the Alphabet umbrella which DeepMind sells software and services to, namely YouTube, Google, and X. However, aside from this, the research lab sells no products to consumers and has not announced any partnerships publicly.

An industry insider told CNBC that the revenue jump could be down to “creative accounting”.

“I don’t think DeepMind have many or any revenue streams,” the source said, “so all that income is based on how much Alphabet pays for internal services, and that can be entirely arbitrary.”

On the other hand, a DeepMind spokesperson said the company made “significant progress” last year.

“Our groundbreaking results in protein structure prediction were heralded as one of the most significant contributions AI has made to advancing scientific knowledge,” the spokesperson said. “We’re excited to build on this success as we head into next year.”

The company has also been in the news recently for less savoury reasons, having just been hit with a class-action lawsuit following its handling of private NHS data in 2015.

Find out more about Digital Transformation Week North America, taking place on 9-10 November 2021, a virtual event and conference exploring advanced DTX strategies for a ‘digital everything’ world.

MEPs back AI mass surveillance ban for the EU
https://www.artificialintelligence-news.com/2021/10/07/meps-back-ai-mass-surveillance-ban-for-the-eu/
Thu, 07 Oct 2021

MEPs from the European Parliament have adopted a resolution in favour of banning AI-powered mass surveillance and facial recognition in public spaces.

With a 71 vote majority, MEPs sided with Petar Vitanov’s report that argued AI must not be allowed to encroach on fundamental rights.

An S&D party member, Vitanov pointed out that AI has not yet proven to be a wholly reliable tool on its own.

He cited examples of individuals being denied social benefits because of faulty AI tools, or people being arrested due to inaccurate facial recognition, adding that “the victims are always the poor, immigrants, people of colour or Eastern Europeans. I always thought that only happens in the movies”.

Despite the report’s overall majority backing, members of the European People’s Party – the largest party in the EU – all voted against the report apart from seven exceptions.

Behind this dispute is a fundamental disagreement over what exactly constitutes encroaching on civil liberties when using AI surveillance tools.


On the left are politicians like Renew Europe MEP Karen Melchior, who believes that “predictive profiling, AI risk assessment, and automated decision making systems are weapons of ‘math destruction’… as dangerous to our democracy as nuclear bombs are for living creatures and life”.

“They will destroy the fundamental rights of each citizen to be equal before the law and in the eye of our authorities,” she said.

Meanwhile, centrist and conservative-leaning MEPs tend to have a more cautious approach to banning AI technologies outright.

Pointing to the July capture of Dutch journalist Peter R. de Vries’ suspected killers thanks to AI, home affairs commissioner Ylva Johansson described this major case as an example of “smart digital technology used in defence of citizens and our fundamental rights”.


“Don’t put protection of fundamental rights in contradiction to the protection of human lives and of societies. It’s simply not true that we have to choose. We are capable of doing both,” she added.

The Commission published its proposal for a European Artificial Intelligence Act in April.

Global human rights charity, Fair Trials, welcomed the vote — calling it a “landmark result for fundamental rights and non-discrimination in the technological age”.

Find out more about Digital Transformation Week North America, taking place on 9-10 November 2021, a virtual event and conference exploring advanced DTX strategies for a ‘digital everything’ world.

AI provides imaging solution for the Moon’s shadowy craters
https://www.artificialintelligence-news.com/2021/09/27/ai-provides-imaging-solution-for-the-moons-shadowy-craters/
Mon, 27 Sep 2021

Researchers at the Max Planck Institute for Solar System Research (MPS) in Germany have published a paper highlighting their AI solution for imaging the Moon’s shadowed craters.

HORUS (Hyper-effective nOise Removal U-net Software) is a machine learning algorithm that enables these dark craters to be mapped out at far higher resolutions than ever before.

The software uses more than 70,000 calibration images, taken by NASA's Lunar Reconnaissance Orbiter of the Moon's night side, to reduce the heavy noise created by low-light imagery.

It also factors in information about camera temperature and the spacecraft's trajectory to further reduce artefacts and distinguish actual geological features. Using HORUS, the researchers can achieve a resolution of about 1-2 metres per pixel, five to ten times higher than that of any previously available images.
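HORUS itself is a deep U-net trained on LRO data, but the calibration idea it builds on can be illustrated with a much simpler, hypothetical sketch: average many signal-free calibration exposures to estimate the sensor's fixed-pattern noise, then subtract that estimate from a noisy image. All names and values below are invented for illustration and are not from the MPS code.

```python
import numpy as np

rng = np.random.default_rng(42)

def denoise_with_calibration(noisy, dark_frames):
    """Subtract the sensor's average dark signal, estimated from
    calibration frames taken with (almost) no incident light."""
    dark_model = np.mean(dark_frames, axis=0)   # fixed-pattern noise estimate
    cleaned = noisy - dark_model
    return np.clip(cleaned, 0.0, None)          # physical signal is non-negative

# Synthetic example: a faint 'crater floor' signal buried in sensor noise.
true_signal = np.full((64, 64), 5.0)
fixed_pattern = rng.normal(20.0, 2.0, size=(64, 64))   # per-pixel dark current
noisy_image = true_signal + fixed_pattern + rng.normal(0, 1.0, size=(64, 64))

# Many signal-free calibration exposures of the same sensor.
calibration = fixed_pattern + rng.normal(0, 1.0, size=(500, 64, 64))

restored = denoise_with_calibration(noisy_image, calibration)
print(round(float(restored.mean()), 1))  # close to the true level of 5.0
```

The averaging step is why the 70,000 calibration images matter: the more signal-free exposures available, the better the per-pixel noise model, and the fainter the geological features that survive the subtraction.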

HORUS imaging example

So far, the researchers have used HORUS to map out and image seventeen craters at the Moon’s South Pole.

Such capabilities are of particular significance due to the believed presence of frozen water within many of these craters.

MPS scientist Dr. Valentin Bickel, first author of the new paper, said: “Near the lunar north and south poles, the incident sunlight enters the craters and depressions at a very shallow angle and never reaches some of their floors.”

In this “eternal night,” temperatures in some places are so cold that frozen water is expected to have lasted for millions of years. This was proved to be the case by NASA’s Lunar Crater Observation and Sensing Satellite (LCROSS) in 2009, which found considerable amounts of water within Cabeus, a South Pole crater.

Now, three of the seventeen craters MPS researchers imaged fall within the mission area of NASA’s Volatiles Investigating Polar Exploration Rover (VIPER), which is scheduled to touch down on the Moon in 2023.

In the runup to this mission, the researchers at MPS want to use HORUS to study as many shadowed regions as possible.

Bickel concluded: “In the current publication, we wanted to show what our algorithm can do. Now we want to apply it as comprehensively as possible.”


The post AI provides imaging solution for the Moon’s shadowy craters appeared first on AI News.

Rishabh Mehrotra, research lead, Spotify: Multi-stakeholder thinking with AI
https://www.artificialintelligence-news.com/2021/09/24/rishabh-mehrotra-research-lead-spotify-multi-stakeholder-thinking-with-ai/
Fri, 24 Sep 2021 13:29:52 +0000

Streaming behemoth Spotify hosts more than seventy million songs and close to three million podcast titles on its platform.

Spotify Logo

Delivering this without artificial intelligence (AI) would be comparable to traversing the Amazon rainforest armed with nothing but a spoon.

To cut – or scoop – through this jungle of music, Spotify’s research team deploy hundreds of machine learning models that improve the user experience, all the while trying to balance the needs of users and creators.

AI News caught up with Spotify research lead Rishabh Mehrotra at the AI & Big Data Expo Global on September 7 to learn more about how AI supports the platform.

AI News: How important is AI to Spotify’s mission?

Rishabh Mehrotra

Rishabh Mehrotra: AI is at the centre of what we do. Machine learning (ML) specifically has become an indispensable tool for powering personalised music and podcast recommendations to more than 365 million users across the world. It enables us to understand user needs and intents, which then helps us to deliver personalised recommendations across various touch points on the app.

It’s not just about the models we deploy in front of users, but also the AI techniques we use to build a data-driven process around experimentation, metrics, and product decisions.

We use a broad range of AI methods to understand our listeners, creators, and content. Some of our core ML research areas include understanding user needs and intents, matching content and listeners, balancing user and creator needs, using natural language understanding and multimedia information retrieval methods, and developing models that optimise long term rewards and recommendations.

What’s more, our models power experiences across around 180 countries, so we have to consider how they perform across markets. Striking a balance between promoting global music and supporting local musicians and music culture is one of our most important AI initiatives.

AN: Spotify users might be surprised to learn just how central AI is to almost every aspect of the platform’s offering. It’s so seamless that I suspect most people don’t even realise it’s there. How crucial is AI to the user experience on Spotify?

RM: If you look at Spotify as a user, you typically see an app that gives you the content you’re looking for. But if you zoom in, you see that each of these recommendation tools is a different machine learning product. On the homepage, for instance, we have to understand user intent in a far subtler way than we would with a search query: the homepage is about giving recommendations based on a user’s current needs and context, whereas in search users are explicitly asking for something. Even then, search queries can be open and non-focused, like ‘relaxing music’, or as specific as the name of a particular song.

Looking at sequential radio sessions, our models try to balance familiar music with discovery content, aiming not only to recommend content users will enjoy in the moment but also to optimise for long-term listener-artist connections.

A good number of our ML models are becoming multi-objective. Over the past two years, we have deployed a lot of models that try to fulfil listener needs whilst also enabling creators to connect with and grow their audiences.

AN: Are artists’ wants and needs a big consideration for Spotify or is the focus primarily on the user experience?

RM: Our goal is to match creators with fans in an enriching way. While understanding user preferences is key to the success of our recommendation models, it really is a two-sided market in a lot of ways: on one side are the users who want to consume audio content, and on the other are the creators looking to grow their audiences. Thus a lot of our recommendation products have multi-stakeholder thinking baked into them to balance objectives from both sides.
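Mehrotra does not describe Spotify's actual models, but the multi-objective idea can be sketched with a hypothetical scalarised ranker that blends a listener-relevance score with a creator-exposure score. All names, scores, and weights below are invented for illustration.

```python
# Hypothetical multi-objective ranker: blend a listener-relevance score with
# a creator-exposure score. Names and weights are illustrative, not Spotify's.

def blended_score(relevance, exposure_boost, alpha=0.8):
    """Scalarise two objectives into one ranking score.
    alpha=1.0 ranks purely on listener relevance; lower values
    shift weight towards helping creators reach new audiences."""
    return alpha * relevance + (1 - alpha) * exposure_boost

candidates = [
    # (track, listener relevance, creator-exposure value)
    ("familiar_hit",    0.90, 0.10),
    ("emerging_artist", 0.70, 0.95),
    ("deep_catalogue",  0.40, 0.60),
]

ranked = sorted(candidates, key=lambda c: blended_score(c[1], c[2]), reverse=True)
print([t for t, _, _ in ranked])
# → ['emerging_artist', 'familiar_hit', 'deep_catalogue']
```

Note how the blended score lets a slightly less relevant track from an emerging artist outrank a familiar hit: that trade-off is precisely what a two-sided marketplace objective is tuned to control.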

AN: Apart from music recommendations and suggestions, does AI support Spotify in any other ways?

RM: AI plays an important role in driving our ‘algotorial’ approach: expert curators with an excellent sense for what’s up and coming quite literally teach our machine learning systems. Through this approach, we can create playlists that not only look at past data but also reflect cultural trends as they’re happening. Across all regions, we have editors who bring deep domain expertise about music culture that we use proactively in our products. This allows us to develop and deploy human-in-the-loop AI techniques that leverage editorial input to bootstrap decisions that ML models can then scale.

AN: What about podcasts? Do you utilise AI differently when applying it to podcasts over music?

RM: Users’ podcast journeys can differ in a lot of ways compared to music. While music is a lot about the audio and acoustic properties of songs, podcasts depend on a whole different set of parameters. For one, it’s much more about content understanding – understanding speakers, types of conversations and topics of discussions.

That said, we are seeing some very interesting results using music taste for podcast recommendations too. Members of our group have recently published work that shows how our ML models can leverage users’ music preferences to recommend podcasts, and some of these results have demonstrated significant improvements, especially for new podcast users.

AN: With so many models already turning the cogs at Spotify, it’s difficult to see how new and exciting use cases could be introduced. What are Spotify’s AI plans for the coming years?

RM: We’re working on a number of ways to elevate the experience even further. Reinforcement learning will be an important focus as we look for ways to optimise for a lifetime of fulfilling content rather than just the next stream. In that sense, it isn’t only about giving users what they want right now, but about evolving their tastes and looking at their long-term trajectories.

AN: As the years go on and your models have more and more data to work with, will the AI you use naturally become more advanced?

RM: A lot of our ML investments are not only about incorporating state-of-the-art ML into our products, but also about extending the state of the art based on the unique challenges we face as an audio platform. We are developing advanced causal inference techniques to understand the long-term impact of our algorithmic decisions. We are innovating in the multi-objective ML modelling space to balance various objectives as part of our two-sided marketplace efforts. We are gravitating towards learning from long-term trajectories and optimising for long-term rewards.

To make data-driven decisions across all such initiatives, we rely heavily on solid scientific experimentation techniques, which also heavily relies on using machine learning.

Reinforcement learning furthers the scope of longer term decisions – it brings that long term perspective into our recommendations. So a quick example would be facilitating discovery on the platform. As a marketplace platform, we want users to not only consume familiar music but to also discover new music, leveraging the value of recommendations. There are 70 million tracks on the platform and only a few thousand will be familiar to any given user, putting aside the fact that it would take an individual several lifetimes to actually go through all this content. So tapping into that remaining 69.9 million and surfacing content users would love to discover is a key long-term goal for us.

When to surface discovery content, by how much, and for which users and recommendation slates are a few examples of the higher-abstraction, long-term problems that RL approaches allow us to tackle well.
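The RL framing Mehrotra describes can be sketched, in heavily simplified form, as a bandit choosing between a 'familiar' and a 'discovery' slate and learning which pays off over many sessions. The reward probabilities and epsilon value here are invented assumptions, not Spotify figures.

```python
import random

random.seed(0)

ARMS = ["familiar", "discovery"]
# Assumed long-term payoff of each slate (hypothetical numbers).
TRUE_LONG_TERM_REWARD = {"familiar": 0.5, "discovery": 0.7}

counts = {a: 0 for a in ARMS}
values = {a: 0.0 for a in ARMS}   # running estimate of each arm's value

def choose(epsilon=0.1):
    """Epsilon-greedy policy: mostly exploit, occasionally explore."""
    if random.random() < epsilon:
        return random.choice(ARMS)        # explore
    return max(ARMS, key=values.get)      # exploit current estimate

for _ in range(5000):
    arm = choose()
    reward = 1.0 if random.random() < TRUE_LONG_TERM_REWARD[arm] else 0.0
    counts[arm] += 1
    # Incremental mean update of the arm's estimated value.
    values[arm] += (reward - values[arm]) / counts[arm]

best = max(ARMS, key=values.get)
print(best)  # the arm with the higher long-term payoff wins out
```

A production recommender would condition on user state and optimise whole trajectories rather than a single binary choice, but the core loop — act, observe delayed reward, update, shift future behaviour — is the same.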

AN: Finally, considering the involvement Spotify has in directing users’ musical experiences, does the company have to factor in any ethical issues surrounding its usage of AI?

RM: Algorithmic responsibility and causal influence are topics we take very seriously and we actively work to ensure our systems operate in a fair and responsible manner, backed by focused research and internal education to prevent unintended biases.

We have a team dedicated to ensuring we approach these topics with the right research-informed rigour and we also share our learnings with the research community.

AN: Is there anything else you would like to share?

RM: On a closing note, one thing I love about Spotify is that we are very open with the wider industry and research community about the advances we are making with AI and machine learning. We actively publish at top tier venues, give tutorials, and we have released a number of large datasets to facilitate academic research on audio recommendations.

For anyone who is interested in learning more about this I would recommend checking out our Spotify Research website which discusses our papers, blogs, and datasets in greater detail.


The post Rishabh Mehrotra, research lead, Spotify: Multi-stakeholder thinking with AI appeared first on AI News.
