deep learning Archives - AI News
https://www.artificialintelligence-news.com/tag/deep-learning/

IBM Research unveils breakthrough analog AI chip for efficient deep learning (11 August 2023)
https://www.artificialintelligence-news.com/2023/08/11/ibm-research-breakthrough-analog-ai-chip-deep-learning/

IBM Research has unveiled a groundbreaking analog AI chip that demonstrates remarkable efficiency and accuracy in performing complex computations for deep neural networks (DNNs).

This breakthrough, published in a recent paper in Nature Electronics, marks a significant stride towards high-performance AI computing with substantially lower energy consumption.

The traditional approach of executing deep neural networks on conventional digital computing architectures is limited in both performance and energy efficiency: these systems require constant data transfer between memory and processing units, which slows computation and wastes energy.

To tackle these challenges, IBM Research has harnessed the principles of analog AI, which emulates the way neural networks function in biological brains. This approach stores synaptic weights in nanoscale resistive memory devices, specifically phase-change memory (PCM).

PCM devices alter their conductance through electrical pulses, enabling a continuum of values for synaptic weights. This analog method avoids excessive data transfer, as computations are executed directly in memory—resulting in enhanced efficiency.

The newly introduced chip is a cutting-edge analog AI solution composed of 64 analog in-memory compute cores.

Each core integrates a crossbar array of synaptic unit cells alongside compact analog-to-digital converters, seamlessly transitioning between analog and digital domains. Furthermore, digital processing units within each core manage nonlinear neuronal activation functions and scaling operations. The chip also boasts a global digital processing unit and digital communication pathways for interconnectivity.
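To illustrate the principle, the sketch below models a single crossbar tile in plain Python: weights are stored as device conductances, the multiply-accumulate happens in place by summing column currents, and an ADC quantises the result back into the digital domain. This is a conceptual sketch rather than IBM’s design; the noise level and ADC resolution are arbitrary assumptions.

```python
import numpy as np

def crossbar_matvec(weights, inputs, adc_bits=8, noise_std=0.01, seed=0):
    """Toy model of one analog in-memory compute tile (not IBM's design)."""
    rng = np.random.default_rng(seed)

    # Programmed conductances vary slightly around the target weights (device noise).
    conductances = weights + rng.normal(0.0, noise_std, size=weights.shape)

    # Ohm's law per cell and Kirchhoff's current law per column: summing the
    # currents along each column performs the dot product directly in memory.
    currents = conductances.T @ inputs

    # A compact ADC digitises the accumulated currents with limited precision.
    max_level = 2 ** (adc_bits - 1) - 1
    scale = np.max(np.abs(currents)) / max_level
    return np.round(currents / scale) * scale

weights = np.random.default_rng(1).normal(size=(256, 64))  # 256 inputs -> 64 outputs
inputs = np.random.default_rng(2).normal(size=256)
print(crossbar_matvec(weights, inputs)[:4])
```

Running 64 such tiles in parallel, with per-core digital units handling activation functions and scaling, is essentially the architecture described above.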

The research team demonstrated the chip’s capabilities by achieving an accuracy of 92.81 percent on the CIFAR-10 image dataset—an unprecedented level of accuracy for an analog AI chip.

The throughput per unit of chip area, measured in giga-operations per second (GOPS), underscored its superior compute efficiency compared with previous in-memory computing chips. The chip’s energy-efficient design, coupled with this enhanced performance, makes it a milestone achievement in the field of AI hardware.

The analog AI chip’s unique architecture and impressive capabilities lay the foundation for a future where energy-efficient AI computation is accessible across a diverse range of applications.

IBM Research’s breakthrough marks a pivotal moment that will help to catalyse advancements in AI-powered technologies for years to come.

(Image Credit: IBM Research)

See also: Azure and NVIDIA deliver next-gen GPU acceleration for AI

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Damian Bogunowicz, Neural Magic: On revolutionising deep learning with CPUs (24 July 2023)
https://www.artificialintelligence-news.com/2023/07/24/damian-bogunowicz-neural-magic-revolutionising-deep-learning-cpus/

AI News spoke with Damian Bogunowicz, a machine learning engineer at Neural Magic, to shed light on the company’s innovative approach to deep learning model optimisation and inference on CPUs.

One of the key challenges in developing and deploying deep learning models lies in their size and computational requirements. However, Neural Magic tackles this issue head-on through a concept called compound sparsity.

Compound sparsity combines techniques such as unstructured pruning, quantisation, and distillation to significantly reduce the size of neural networks while maintaining their accuracy. 

“We have developed our own sparsity-aware runtime that leverages CPU architecture to accelerate sparse models. This approach challenges the notion that GPUs are necessary for efficient deep learning,” explains Bogunowicz.
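To make the idea concrete, the sketch below applies unstructured magnitude pruning and simple symmetric int8 quantisation to a weight matrix. It is a generic illustration of the techniques rather than Neural Magic’s implementation; the 90 percent sparsity target and the quantisation scheme are illustrative assumptions, and distillation, which recovers accuracy during retraining, is omitted.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.9):
    """Zero out the smallest-magnitude weights (unstructured pruning)."""
    threshold = np.quantile(np.abs(weights), sparsity)
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

def quantise_int8(weights):
    """Symmetric per-tensor int8 quantisation of the remaining weights."""
    scale = np.max(np.abs(weights)) / 127.0
    quantised = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return quantised, scale

dense = np.random.default_rng(0).normal(size=(512, 512)).astype(np.float32)
sparse, mask = magnitude_prune(dense, sparsity=0.9)   # ~90% of weights set to zero
q_weights, scale = quantise_int8(sparse)              # 4x less storage per remaining weight

print(f"non-zero weights: {mask.mean():.1%}")
print(f"max quantisation error: {np.abs(q_weights * scale - sparse).max():.4f}")
```

A sparsity-aware runtime like the one Bogunowicz describes then skips the zeroed weights entirely, which is where the CPU speed-up comes from.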

Bogunowicz emphasised the benefits of this approach, highlighting that more compact models lead to faster deployments and can run on ubiquitous CPU-based machines. The ability to optimise networks and run them efficiently without relying on specialised hardware is a game-changer for machine learning practitioners, empowering them to overcome the limitations and costs associated with GPU usage.

When asked about the suitability of sparse neural networks for enterprises, Bogunowicz explained that the vast majority of companies can benefit from using sparse models.

By removing up to 90 percent of parameters without impacting accuracy, enterprises can achieve more efficient deployments. While extremely critical domains like autonomous driving or autonomous aeroplanes may require maximum accuracy and minimal sparsity, the advantages of sparse models outweigh the limitations for the majority of businesses.

Looking ahead, Bogunowicz expressed his excitement about the future of large language models (LLMs) and their applications.

“I’m particularly excited about the future of large language models (LLMs). Mark Zuckerberg discussed enabling AI agents, acting as personal assistants or salespeople, on platforms like WhatsApp,” says Bogunowicz.

One example that caught his attention was a chatbot used by Khan Academy—an AI tutor that guides students to solve problems by providing hints rather than revealing solutions outright. This application demonstrates the value that LLMs can bring to the education sector, facilitating the learning process while empowering students to develop problem-solving skills.

“Our research has shown that you can optimise LLMs efficiently for CPU deployment. We have published a research paper on SparseGPT that demonstrates the removal of around 100 billion parameters using one-shot pruning without compromising model quality,” explains Bogunowicz.

“This means there may not be a need for GPU clusters in the future of AI inference. Our goal is to soon provide open-source LLMs to the community and empower enterprises to have control over their products and models, rather than relying on big tech companies.”

As for Neural Magic’s future, Bogunowicz revealed two exciting developments they will be sharing at the upcoming AI & Big Data Expo Europe.

Firstly, they will showcase their support for running AI models on edge devices, specifically x86 and ARM architectures. This expands the possibilities for AI applications in various industries.

Secondly, they will unveil their model optimisation platform, Sparsify, which enables the seamless application of state-of-the-art pruning, quantisation, and distillation algorithms through a user-friendly web app and simple API calls. Sparsify aims to accelerate inference without sacrificing accuracy, providing enterprises with an elegant and intuitive solution.

Neural Magic’s commitment to democratising machine learning infrastructure by leveraging CPUs is impressive. Their focus on compound sparsity and their upcoming advancements in edge computing demonstrate their dedication to empowering businesses and researchers alike.

As we eagerly await the developments presented at AI & Big Data Expo Europe, it’s clear that Neural Magic is poised to make a significant impact in the field of deep learning.

You can watch our full interview with Bogunowicz below:

(Photo by Google DeepMind on Unsplash)

Neural Magic is a key sponsor of this year’s AI & Big Data Expo Europe, which is being held in Amsterdam on 26-27 September 2023.

Swing by Neural Magic’s booth at stand #178 to learn more about how the company enables organisations to use compute-heavy models in a cost-efficient and scalable way.

OpenAI’s GPT-3 is a convincing philosopher (27 July 2022)
https://www.artificialintelligence-news.com/2022/07/27/openai-gpt-3-is-a-convincing-philosopher/

A study has found that OpenAI’s GPT-3 can be indistinguishable from a human philosopher.

The now infamous GPT-3 is a powerful autoregressive language model that uses deep learning to produce human-like text.

Eric Schwitzgebel, Anna Strasser, and Matthew Crosby set out to find out whether GPT-3 can replicate a human philosopher.

The team “fine-tuned” GPT-3 on philosopher Daniel Dennett’s corpus. Ten philosophical questions were then posed to both the real Dennett and GPT-3 to see whether the AI could match its renowned human counterpart.

Twenty-five philosophical experts, 98 online research participants, and 302 readers of The Splintered Mind blog were tasked with distinguishing GPT-3’s answers from Dennett’s. The results were released earlier this week.

Naturally, the philosophical experts that were familiar with Dennett’s work performed the best.

“Anna and I hypothesized that experts would get on average at least 80% correct – eight out of ten,” explained Schwitzgebel.

In reality, the experts got an average of 5.1 out of 10 correct—so only just over half.

The question that tripped experts up the most was:

“Could we ever build a robot that has beliefs? What would it take? Is there an important difference between entities, like a chess-playing machine, to whom we can ascribe beliefs and desires as convenient fictions and human beings who appear to have beliefs and desires in some more substantial sense?”

Blog readers managed to get impressively close to the experts, on average guessing 4.8 out of 10 correctly. However, it’s worth noting that the blog readers aren’t exactly novices—57 percent had graduate degrees in philosophy and 64 percent had already read over 100 pages of Dennett’s work.

Perhaps a more accurate reflection of the wider population is the online research participants.

The online research participants “performed barely better than chance” with an average of just 1.2 out of 5 questions identified correctly.

(Credit: Eric Schwitzgebel)
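A quick back-of-the-envelope check puts those scores in context. It assumes each question offered one genuine Dennett answer among five candidates, an assumption inferred from the ‘barely better than chance’ framing rather than stated in the article:

```python
# Expected scores under pure guessing, assuming five answer options per question
# (one genuine Dennett answer plus four GPT-3 answers) - an assumption, not a
# detail confirmed in the article.
chance_per_question = 1 / 5

groups = [
    ("experts", 5.1, 10),
    ("blog readers", 4.8, 10),
    ("online participants", 1.2, 5),
]

for name, correct, total in groups:
    expected = chance_per_question * total
    print(f"{name}: {correct}/{total} correct vs {expected:.1f} expected by guessing")
```

On that reading, the experts beat chance comfortably but still misattributed roughly half of the answers.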

So there we have it: GPT-3 can already pass as a human philosopher much of the time – fooling even experts in around half of cases.

“We might be approaching a future in which machine outputs are sufficiently humanlike that ordinary people start to attribute real sentience to machines,” theorises Schwitzgebel.

Related: Google places engineer on leave after claim LaMDA is ‘sentient’

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

President Zelenskyy deepfake asks Ukrainians to ‘lay down arms’ (17 March 2022)
https://www.artificialintelligence-news.com/2022/03/17/president-zelenskyy-deepfake-asks-ukrainians-lay-down-arms/

A deepfake of President Zelenskyy calling on citizens to “lay down arms” was posted to a hacked Ukrainian news website and shared across social networks.

The deepfake purports to show Zelenskyy declaring that Ukraine has “decided to return Donbas” to Russia and that his nation’s efforts had failed.

Following an alleged hack, the deepfake was first posted to the website of Ukrainian news channel TV24. It was then shared across social networks, including Facebook and Twitter.

Nathaniel Gleicher, Head of Security Policy for Facebook owner Meta, wrote in a tweet:

“Earlier today, our teams identified and removed a deepfake video claiming to show President Zelensky issuing a statement he never did.

It appeared on a reportedly compromised website and then started showing across the internet.”

The deepfake itself is poor by today’s standards, with fake Zelenskyy having a comically large and noticeably pixelated head compared to the rest of his body.

It shouldn’t have fooled anyone, but Zelenskyy posted a video to his Instagram to call out the fake anyway.

“I only advise that the troops of the Russian Federation lay down their arms and return home,” Zelenskyy said in his official video. “We are at home and defending Ukraine.”

Earlier this month, the Ukrainian government posted a statement warning soldiers and civilians not to believe any videos of Zelenskyy claiming to surrender:

“Imagine seeing Vladimir Zelensky on TV making a surrender statement. You see it, you hear it – so it’s true. But this is not the truth. This is deepfake technology.

This will not be a real video, but created through machine learning algorithms.

Videos made through such technologies are almost impossible to distinguish from the real ones.

Be aware – this is a fake! The goal is to disorient, sow panic, disbelieve citizens, and incite our troops to retreat.”

Fortunately, this deepfake was easy to spot – even though many deepfakes are now nearly impossible for humans to distinguish from genuine footage – and it could actually help raise awareness of how such content is used to influence and manipulate.

Earlier this month, AI News reported on how Facebook and Twitter removed two anti-Ukraine disinformation campaigns linked to Russia and Belarus. One of the campaigns even used AI-generated faces for a fake “editor-in-chief” and “columnist” for a linked propaganda website.

Both cases in the past month show the danger of deepfakes and the importance of raising public awareness and developing tools for countering such content before it’s able to spread.

(Image Credit: President.gov.ua used without changes under CC BY 4.0 license)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo. The next events in the series will be held in Santa Clara on 11-12 May 2022, Amsterdam on 20-21 September 2022, and London on 1-2 December 2022.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Intel boosts AI inferencing for developers with OpenVINO 2022.1 (25 February 2022)
https://www.artificialintelligence-news.com/2022/02/25/intel-boosts-ai-inferencing-developers-openvino-2022-1/

Intel has unveiled a major new version of OpenVINO to boost AI inferencing performance for developers.

Hundreds of thousands of developers have used OpenVINO to deploy AI workloads at the edge. Features added to OpenVINO 2022.1 are based on three-and-a-half years of developer feedback, according to Intel. 

Adam Burns, VP of OpenVINO Developer Tools in the Network and Edge Group at Intel, said:

“The latest release of OpenVINO 2022.1 builds on more than three years of learnings from hundreds of thousands of developers to simplify and automate optimisations.

The latest upgrade adds hardware auto-discovery and automatic optimisation, so software developers can achieve optimal performance on every platform.

This software, plus Intel silicon, enables a significant AI ROI advantage and is deployed easily into the Intel-based solutions in your network.”

Among the latest additions are “a greater selection of deep learning models, more device portability choices, and higher inferencing performance with fewer code changes.”

The expanded model support enables new types of deployments, while a new automatic optimisation process detects a system’s compute devices and accelerators and dynamically increases AI parallelisation and load balancing based on compute and memory capacity.
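In practice, that automatic device selection surfaces in the Python API roughly as shown below. This is a minimal sketch: the calls assume the Python API introduced around the 2022.1 release, and ‘model.xml’ is a placeholder for any model in OpenVINO’s IR format.

```python
from openvino.runtime import Core

core = Core()
print(core.available_devices)            # hardware auto-discovery, e.g. ['CPU', 'GPU']

model = core.read_model("model.xml")     # placeholder path to an IR model

# "AUTO" lets the runtime pick the best available device and tune for it,
# instead of hard-coding "CPU" or "GPU" in the application.
compiled = core.compile_model(model, "AUTO")
request = compiled.create_infer_request()
```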

OpenVINO is built on the foundation of oneAPI—an open standard for a unified application programming interface intended to be used across different compute accelerator architectures, including GPUs, AI accelerators, and field-programmable gate arrays.

OpenVINO is used by a number of high-profile Intel customers, including Hitachi, BMW Group, ADLINK, and American Tower, among others.

“With American Tower’s edge infrastructure, Intel’s OpenVINO deep learning capabilities and Zeblok’s AI platform-as-a-service, we can enable a complete smart solution for the market,” commented Eric Watko, VP of Innovation at American Tower. 

OpenVINO 2022.1 will be available in March 2022.

(Image Credit: Intel)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo. The next events in the series will be held in Santa Clara on 11-12 May 2022, Amsterdam on 20-21 September 2022, and London on 1-2 December 2022.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

BT uses epidemiological modelling for new cyberattack-fighting AI (12 November 2021)
https://www.artificialintelligence-news.com/2021/11/12/bt-epidemiological-modelling-new-cyberattack-fighting-ai/

BT is deploying an AI trained on epidemiological modelling to fight the increasing risk of cyberattacks.

The first mathematical epidemic model was formulated and solved by Daniel Bernoulli in 1760 to evaluate the effectiveness of variolating healthy people with the smallpox virus. More recently, such models have guided COVID-19 responses to minimise the health and economic damage of the pandemic.

Now security researchers from BT Labs in Suffolk want to harness centuries of epidemiological modelling advancements to protect networks.

BT’s new epidemiology-based cybersecurity prototype is called Inflame and uses deep reinforcement learning to help enterprises automatically detect and respond to cyberattacks before they compromise a network.

Howard Watson, Chief Technology Officer at BT, said:

“We know the risk of cyberattack is higher than ever and has intensified significantly during the pandemic. Enterprises now need to look to new cybersecurity solutions that can understand the risk and consequence of an attack, and quickly respond before it’s too late.

Epidemiological testing has played a vital role in curbing the spread of infection during the pandemic, and Inflame uses the same principles to understand how current and future digital viruses spread through networks.

Inflame will play a key role in how BT’s Eagle-i platform automatically predicts and identifies cyber-attacks before they impact, protecting customers’ operations and reputation.” 

The ‘R’ rate – the estimated number of further infections caused by each case – has moved from the lexicon of epidemiologists into public knowledge over the course of the pandemic. Alongside binge-watching Tiger King, a lockdown pastime for many of us was checking the latest R rate in the hope that it had dropped below 1—meaning the spread of COVID-19 was shrinking rather than growing exponentially.

For its Inflame prototype, BT’s team built models that were used to test numerous scenarios based on differing R rates of cyber-infection.
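The principle carries over directly from epidemiology: in a simple compartmental model, whether R sits above or below 1 determines whether an outbreak fizzles out or takes off. The sketch below illustrates that general behaviour in Python; it is not BT’s Inflame model, and the parameters are arbitrary illustrative values.

```python
def simulate_spread(r_rate, population=10_000, initially_infected=10,
                    recovery_fraction=0.2, steps=30):
    """Discrete-time SIR-style spread for a given reproduction rate R.

    Each infected host causes on average R new infections over an infectious
    period of 1 / recovery_fraction steps, scaled by the susceptible fraction.
    """
    susceptible = population - initially_infected
    infected = initially_infected
    history = [infected]
    for _ in range(steps):
        new_infections = min(r_rate * recovery_fraction * infected * susceptible / population,
                             susceptible)
        recoveries = recovery_fraction * infected
        susceptible -= new_infections
        infected += new_infections - recoveries
        history.append(round(infected))
    return history

print(simulate_spread(r_rate=0.8)[:10])   # R < 1: the outbreak dies away
print(simulate_spread(r_rate=2.5)[:10])   # R > 1: the outbreak grows until hosts run out
```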

Inflame can automatically model and respond to a detected threat within an enterprise network thanks to its deep reinforcement learning training.

Responses are underpinned by “attack lifecycle” modelling – similar to understanding the spread of a biological virus – to determine the current stage of a cyberattack by assessing real-time security alerts against recognised patterns. The ability to predict the next stage of a cyberattack helps with determining the best steps to halt its progress.

Last month, BT announced its Eagle-i platform which uses AI for real-time threat detection and intelligent response. Eagle-i “self-learns” from every intervention to constantly improve its threat knowledge and Inflame will be a key component in further improving the platform.

(Photo by Erik Mclean on Unsplash)

Looking to revamp your digital transformation strategy? Learn more about the Digital Transformation Week event taking place in Amsterdam on 23-24 November 2021 and discover key strategies for making your digital efforts a success.

Luka Crnkovic-Friis, CEO, Peltarion: The democratisation of AI (24 August 2021)
https://www.artificialintelligence-news.com/2021/08/24/luka-crnkovic-friis-ceo-peltarion-the-democratisation-of-ai/

As AI models become increasingly refined, organisations are starting to notice the diverse range of solutions they can provide across not just data science teams but all departments of a company.

Despite this, scaling AI solutions across a company is an extremely expensive and complex venture from which most firms would struggle to see a return on investment. In a market where AI has so much to offer but at such high cost, Peltarion is striving to bridge the gap in resources and knowledge that enterprises face when realising AI solutions.

AI News joined Peltarion CEO Luka Crnkovic-Friis and operational AI expert Johan Hartikainen to discuss how the company’s cloud-based software platform is helping to solve this catch-22 situation affecting the industry.

AI News: In another interview you described Peltarion as doing for AI what WordPress did for HTML coding, would you still consider this an accurate analogy?

Luka Crnkovic-Friis: Yes, but with some reservations. The analogy is correct as to what we are trying to accomplish – simplifying a complex system to allow for democratised usage – but on a much higher technical level.

AN: Aside from through that analogy, how would you describe Peltarion’s goal within the AI industry?

LC: If you look at the landscape of AI tools today, you have the super simple tools at one end where you click a button and get results and on the other hand you have tools like TensorFlow that are very advanced and require expertise. Although the simple tools are easy to use they are not very applicable to real world problems and the complex tools are super powerful but need a team of experts and lots of resources to provide solutions. Peltarion’s goal is to position itself in the middle of this chasm and expand to both sides, bringing as much simplicity to powerful AI solutions as possible.

AN: How is this goal realised in practicality?

LC: Our customers mainly fall into two categories. Either they’re larger enterprises who are on a digitalisation journey but lack scalability or it’s smaller companies and startups who don’t have the resources to build an AI department to begin with.

Even today, operational use of modern AI and deep learning at a large scale is limited to the big tech companies. This is because the tools available today are primarily for research and building proof of concept, which cost a lot in terms of infrastructure and the type of talent required. Our platform combines ease of use – so you don’t have to be an expert – with the operational aspect – so you can actually put stuff into production on a commercial and industrial scale.

AN: What is one of your favourite examples of a company you have helped?

LC: The cases I enjoy most are when we enable domain experts to do something that we as AI experts could not have done but, similarly, that they could not have done without our platform. That’s when it feels like we are hitting the sweet spot.

Last year we worked with a medtech company called SciBase. They designed a probe pen that can detect melanoma when put against a birthmark. They then had the idea that, in addition to the probe data based on electrical measurements, they could add images via deep learning. However, when they tried it the results weren’t any better. When we stepped in to help, the results were slightly better but still somewhat mediocre.

However, using the data on our platform they were able to find another use case. There is a barrier in the skin that prevents viruses and bacteria from getting into the body, and people with allergies have a weakened barrier. When it’s in an especially poor state they get eczema rashes, and the only way to measure that has been to do a biopsy. What’s more, a biopsy only works on skin where there is an ongoing rash. Using the probe data on our platform, SciBase were able to predict weeks in advance, on clear skin, when this barrier will be weakened. Then it is as simple as slapping on some moisturising cream and you have no problem. In fact, there are now some promising clinical studies showing that if you test newborns with this and use the right cortisone treatment, they don’t develop allergies at all.

Why I really like this use case is because they’re not data scientists, and we are not medical experts, but together we can realise fantastic things with AI through collaboration.

AN: Looking beyond specific use cases, what trends are Peltarion seeing across the AI industry?

LC: Bigger models. Every time the industry scales models up by an order of magnitude, we get these emergent properties where the models we are working with can do completely new things. Nothing except the size of the model has changed and yet new capabilities emerge. I think we are now at a stage where the models have become so huge that only the tech giants can build things from scratch.

Johan Hartikainen: We’re also seeing a lot of enterprises who started using centralised data science teams to solve their biggest problems a few years ago now realising that this doesn’t translate well into the rest of the company understanding AI. The last year or two has seen huge shifts in the market as companies seek to democratise AI across their departments and put it in the hands of their employees.

AN: Considering this trend, what will be a key challenge in seeing through its completion?

LC: I think general understanding of AI still has a long way to go. When clients know what they want to solve, have an idea of the use case, and have the data, they can have a model in production within an hour. While if you have a client with less understanding, they need much more guidance on use cases and what data to use. So there is this huge gap between those that are familiar with what AI can do and have realisable ideas and those still learning.

Johan will be delving deeper into how to democratise AI for usage across an entire organisation on day one of AI and Big Data Expo Global, which runs from 6-7 September 2021. Find out more about the event and how to attend here.

Baidu debuts Brain 7.0 alongside mass production of Kunlun II chip (19 August 2021)
https://www.artificialintelligence-news.com/2021/08/19/baidu-debuts-brain-7-0-alongside-mass-production-of-kunlun-ii-chip/

Baidu has debuted version 7.0 of its open AI platform Brain, alongside news that mass production of its second-gen Kunlun chip has begun.

The tech giant is often described as “China’s Google” and, just like its Western counterpart, is one of the largest AI companies in the world.

“AI technology is growing increasingly complex, and integrated innovation has made AI more powerful,” said Haifeng Wang, CTO of Baidu.

“As AI technology plays an expanding role in a wider range of industries and drives a new era of technological revolution and industrial transformation, it is increasingly important to lower the threshold for different real-world applications and to increase accessibility to AI development platforms.”

At Baidu World 2021, the company made two significant AI announcements.

The first is Baidu Brain 7.0, which promises deeper integration of knowledge sources and deep learning. The open platform now features language comprehension and reasoning.

Baidu’s latest AI platform version works in tandem with the company’s new Kunlun II chip, which is built on a 7nm process, putting it on a par with current leaders such as Graphcore and Huawei.

Kunlun II is equipped with Baidu’s second-gen XPU architecture, and Baidu claims it offers 2-3x the processing power of the previous generation.

The latest chip from Baidu works with the company’s open-source deep learning framework, PaddlePaddle, which has been used by more than 3.6 million developers around the world to build 400,000 AI models.
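For readers unfamiliar with the framework, a minimal PaddlePaddle training loop looks broadly like the sketch below. This assumes PaddlePaddle 2.x, and the toy regression task is purely illustrative.

```python
import paddle

# Toy task: learn to sum ten inputs.
x = paddle.randn([256, 10])
y = x.sum(axis=1, keepdim=True)

model = paddle.nn.Linear(10, 1)
loss_fn = paddle.nn.MSELoss()
optimiser = paddle.optimizer.Adam(learning_rate=0.01, parameters=model.parameters())

for step in range(200):
    loss = loss_fn(model(x), y)
    loss.backward()
    optimiser.step()
    optimiser.clear_grad()    # PaddlePaddle's equivalent of zeroing gradients

print(loss.numpy())           # should approach zero as the layer learns to sum
```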

Baidu claims that models built using PaddlePaddle have led to applications that help water management systems run more efficiently, improve quality control in manufacturing, and even help athletes improve their training.

(Image Credit: Baidu)

Find out more about Digital Transformation Week North America, taking place on 9-10 November 2021, a virtual event and conference exploring advanced DTX strategies for a ‘digital everything’ world.

A(I)hoy, mateys: IBM’s crewless ocean research ship to launch ‘very soon’ (15 September 2020)
https://www.artificialintelligence-news.com/2020/09/15/ibm-ocean-research-ship-launch-soon/

IBM’s crewless AI-powered ship is due to begin roaming the oceans this month, collecting vital data about something we still know incredibly little about.

Humans have travelled the sea in some form for tens of thousands of years—with the earliest crossings occurring around 53,000 to 65,000 years ago (when Australo-Melanesian populations migrated into the Sahul landmass – known today as Australia and New Guinea – from what used to be the Sundaland peninsula.)

It’s often said that we know more about the moon than our oceans, with around 95 percent of the ocean still unexplored. Arguably, the last major ocean research expedition took place between 1872 and 1876, when a converted Royal Navy gunship known as the Challenger travelled close to 70,000 nautical miles and catalogued over 4,000 previously unknown species.

Inspired by the Challenger’s story, IBM has teamed up with non-profit ProMare to make a similarly large impact on ocean research.

The autonomous ship, Mayflower, is named after the ship which carried pilgrim settlers from Plymouth, England to Plymouth, Massachusetts in 1620. On its 400th anniversary, it was decided that a Mayflower for the 21st century should be built.

Brett Phaneuf, a Founding Board Member of ProMare and Co-Director of the Mayflower Autonomous Ship project, said:

“Putting a research ship to sea can cost tens of thousands of dollars or pounds a day and is limited by how much time people can spend onboard – a prohibitive factor for many of today’s marine scientific missions.

With this project, we are pioneering a cost-effective and flexible platform for gathering data that will help safeguard the health of the ocean and the industries it supports.”

Naturally, there are more than a few differences between the original ship and the Mayflower Autonomous Ship (MAS).

Mayflower 2.0 no longer relies solely on wind power and will use a wind/solar hybrid propulsion system with a backup diesel generator. The new ship also trades a compass and nautical charts for a state-of-the-art GNSS positioning system with SATCOM, RADAR, and LIDAR.

IBM’s deep learning technology is on-board to help the ship traverse the harsh and rapidly-changing environment of the ocean.

Donald Scott, Director of Engineering at Marine AI (which partnered with ProMare on the project), explained:

“In the middle of the ocean, communications are severely limited. Conditions can change very suddenly, and you don’t have the option to stop and power down.

With MAS, we needed to go beyond the existing technology for unmanned ships, creating a vessel that isn’t just operated remotely and doesn’t simply react to the environment, but learns and adapts independently.

To do this, we had to develop state-of-the-art capabilities around navigation, collision avoidance, communications and more.”

The training of AI models for the MAS began in October 2019. The actual hull for the ship arrived in Plymouth in March and sea trials began. Over the next few months, the ship was fitted with its advanced navigation and research equipment.

Andy Stanford-Clark, CTO of IBM UK & Ireland, added:

“IBM helped put man on the moon and is excited by the challenge of using advanced technologies to cross and research our deepest oceans.

By providing the brains for the Mayflower Autonomous Ship, we are pushing the boundaries of science and autonomous technologies to address critical environmental issues.”

MAS’ voyage could not come at a more critical time, with humans causing huge amounts of damage to the health of our oceans. A UN report found our oceans are now warmer, more polluted, more depleted, and more acidic than ever before.

Rising sea levels are among the key concerns about the impact on humans, but another is the increasing amount of plastic in the sea, which is both harming sea life and ending up in the food we eat.

Professor Richard Thompson, OBE, Director of the Marine Institute, University of Plymouth, commented:

“Microplastics present a substantial challenge to our oceans. Over 700 species come into contact with marine litter which is found from the poles to the equator, and estimates are that the quantity of plastic in the oceans will triple in the decade to 2025.”

However, armed with the right data, it’s not too late to change course and heal our oceans.

MAS is fitted with a range of sensors including acoustic, nutrient, temperature, and water and air samplers. Edge devices will store and analyse all data locally until connectivity is available. When a link has been established, the data will be uploaded to edge nodes onshore.
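The store-and-forward pattern described above is a common one for intermittently connected edge systems. The sketch below is a generic Python illustration of that pattern rather than IBM’s actual software; the database path, sensor name, and uplink callable are made-up placeholders.

```python
import json
import sqlite3
import time

class EdgeBuffer:
    """Buffer sensor readings locally and forward them when a link is available."""

    def __init__(self, path="mas_readings.db"):
        self.db = sqlite3.connect(path)
        self.db.execute("CREATE TABLE IF NOT EXISTS readings (ts REAL, payload TEXT)")

    def record(self, sensor, value):
        # Store (and potentially analyse) locally, regardless of connectivity.
        payload = json.dumps({"sensor": sensor, "value": value})
        self.db.execute("INSERT INTO readings VALUES (?, ?)", (time.time(), payload))
        self.db.commit()

    def forward(self, uplink):
        # Once a link is established, drain the buffer to the shore-side nodes.
        for rowid, payload in self.db.execute("SELECT rowid, payload FROM readings").fetchall():
            if uplink(payload):  # uplink: any callable returning True on successful upload
                self.db.execute("DELETE FROM readings WHERE rowid = ?", (rowid,))
        self.db.commit()

buffer = EdgeBuffer()
buffer.record("water_temperature_c", 14.2)
buffer.forward(uplink=lambda payload: True)  # placeholder for the real satellite upload
```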

Unless there are any last-minute delays, MAS is set to depart on its voyage this month. The ship is due to arrive in Plymouth, Massachusetts around two weeks later. Where required, updated deep learning models can be pushed out to the ship.

MAS’ virtual crew will be based in Plymouth, UK but IBM says millions of virtual “pilgrims” will be able to experience the voyage online.

Interested in hearing industry leaders discuss subjects like this? Attend the co-located 5G Expo, IoT Tech Expo, Blockchain Expo, AI & Big Data Expo, and Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London, and Amsterdam.

Deep learning is being used to predict critical COVID-19 cases (23 July 2020)
https://www.artificialintelligence-news.com/2020/07/23/deep-learning-predict-critical-covid-19-cases/

Researchers from Tencent, along with other Chinese scientists, are using deep learning to predict critical COVID-19 cases.

Scientists around the world are doing incredible work to increase our understanding of COVID-19. Thanks to their findings, existing medications have been found to increase the likelihood of surviving the virus.

Unfortunately, there are still fatalities. People with weakened immune systems or underlying conditions are most at risk, but it’s a dangerous myth that the young and otherwise healthy can’t die from this virus.

According to a paper published in science journal Nature, around 6.5 percent of COVID-19 cases have a “worrying trend of sudden progression to critical illness”. Of those cases, there’s a mortality rate of 49 percent.

In the aforementioned paper, the researchers wrote: “Since early intervention is associated with improved prognosis, the ability to identify patients that are most at risk of developing severe disease upon admission will ensure that these patients receive appropriate care as soon as possible.”

While most countries appear to be reaching the end of the first wave of COVID-19, the threat of a second looms. Many experts forecast another wave will hit during the winter months, when hospitals are already under strain from seasonal viruses.

One of the biggest challenges with COVID-19 is triaging patients to decide which are most at risk and require more resources allocated to their care. During the peak of the outbreak in Italy, doctors reported having to make heartbreaking decisions about whether attempting to save someone was a worthwhile use of limited resources.

A team led by China’s senior medical advisor on COVID-19, Zhong Nanshan, was established in February. The team consisted of researchers from Tencent AI Lab in addition to Chinese public health scientists.

Zhong’s team set out to build a deep learning-based system that can predict whether a patient is likely to become a critical case. Such information would be invaluable for ensuring the patient gets early intervention to improve their chances of surviving the virus, in addition to supporting medical staff with their triaging decisions.

The deep learning model was trained on data from 1,590 patients across 575 medical centres in China, with further validation on data from 1,393 patients.
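As a rough illustration of what such a triage model involves, the sketch below trains a small classifier to map admission-time measurements to a probability of progression to critical illness. It is a generic sketch on synthetic data, not the team’s actual architecture, features, or results.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for admission-time clinical features (the real feature set
# is described in the team's paper and is not reproduced here).
rng = np.random.default_rng(0)
X = rng.normal(size=(1590, 5))
risk_score = X @ np.array([0.8, 0.6, -0.9, 0.7, -0.5]) + rng.normal(0, 1, 1590)
y = risk_score > 1.5  # True = progressed to critical illness in this toy data

model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0),
)
model.fit(X, y)

# Estimated probability of progression for a new admission, which could be used
# to prioritise monitoring and early intervention.
new_patient = rng.normal(size=(1, 5))
print(model.predict_proba(new_patient)[0, 1])
```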

Tencent has made the tool for predicting critical COVID-19 cases available online here (note the small print, which currently says: “this tool is for research purpose and not approved for clinical use.”)

(Photo by Ashkan Forouzani on Unsplash)

Interested in hearing industry leaders discuss subjects like this? Attend the co-located 5G Expo, IoT Tech Expo, Blockchain Expo, AI & Big Data Expo, and Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London, and Amsterdam.
