neural network Archives - AI News

IBM Research unveils breakthrough analog AI chip for efficient deep learning (11 August 2023)
IBM Research has unveiled a groundbreaking analog AI chip that demonstrates remarkable efficiency and accuracy in performing complex computations for deep neural networks (DNNs).

This breakthrough, published in a recent paper in Nature Electronics, marks a significant step towards high-performance AI computing that consumes substantially less energy.

Executing deep neural networks on conventional digital computing architectures limits both performance and energy efficiency: these systems require constant data transfer between memory and processing units, which slows computation and wastes energy.

To tackle these challenges, IBM Research has harnessed the principles of analog AI, which emulates the way neural networks function in biological brains. This approach stores synaptic weights using nanoscale resistive memory devices, specifically phase-change memory (PCM).

PCM devices alter their conductance through electrical pulses, enabling a continuum of values for synaptic weights. Because computations are executed directly in the memory, this analog method largely eliminates the shuttling of data between memory and processor—resulting in enhanced efficiency.
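To make the principle concrete, here is a minimal numpy sketch of analog in-memory computing in general – not IBM's chip, whose details are in the paper – where weights are stored as device conductances and the multiply-accumulate falls out of Ohm's and Kirchhoff's laws, with Gaussian noise standing in for PCM non-idealities:

```python
import numpy as np

def analog_matvec(weights, x, noise_std=0.02, rng=None):
    """Simulate a matrix-vector multiply on a resistive crossbar.

    Each weight is stored as a device conductance; applying input
    voltages along the rows yields output currents along the columns
    (Ohm's law plus Kirchhoff's current law), so the multiply-accumulate
    happens inside the memory itself. PCM devices are noisy and drift,
    modelled here as Gaussian perturbations of the stored weights.
    """
    rng = rng or np.random.default_rng()
    noisy_weights = weights + rng.normal(0.0, noise_std, size=weights.shape)
    return noisy_weights @ x  # column currents = weighted sums of inputs

# A toy layer: 64 outputs from 256 inputs, computed "in memory".
W = np.random.default_rng(0).uniform(-1, 1, size=(64, 256))
x = np.random.default_rng(1).uniform(0, 1, size=256)
print(analog_matvec(W, x).shape)  # (64,)
```

Because no weights ever leave the array, the energy cost of a layer is dominated by the analog read rather than by memory traffic.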

The newly introduced chip is a cutting-edge analog AI solution composed of 64 analog in-memory compute cores.

Each core integrates a crossbar array of synaptic unit cells alongside compact analog-to-digital converters, seamlessly transitioning between analog and digital domains. Furthermore, digital processing units within each core manage nonlinear neuronal activation functions and scaling operations. The chip also boasts a global digital processing unit and digital communication pathways for interconnectivity.

The research team demonstrated the chip’s prowess by achieving an accuracy of 92.81 percent on the CIFAR-10 image dataset—an unprecedented level of precision for analog AI chips.

Its throughput per area, measured in giga-operations per second (GOPS) per unit area, underscored its superior compute efficiency compared with previous in-memory computing chips. The chip's energy-efficient design, coupled with its enhanced performance, makes it a milestone achievement in the field of AI hardware.

The analog AI chip’s unique architecture and impressive capabilities lay the foundation for a future where energy-efficient AI computation is accessible across a diverse range of applications.

IBM Research’s breakthrough marks a pivotal moment that will help to catalyse advancements in AI-powered technologies for years to come.

(Image Credit: IBM Research)

See also: Azure and NVIDIA deliver next-gen GPU acceleration for AI

Google no longer accepts deepfake projects on Colab (31 May 2022)
With little fanfare, Google has added “creating deepfakes” to its list of projects that are banned from its Colab service.

Colab is a product from Google Research that enables AI researchers, data scientists, or students to write and execute Python in their browsers.

Deepfakes use generative neural network architectures – such as autoencoders or generative adversarial networks (GANs) – to manipulate or generate visual and audio content.
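For illustration, the classic autoencoder face-swap recipe – described generically here, not any particular banned project – trains one shared encoder with a separate decoder per identity, so a swap simply decodes person A's latent code with person B's decoder. A minimal PyTorch-style sketch with hypothetical layer sizes:

```python
import torch
import torch.nn as nn

# One shared encoder learns an identity-agnostic face representation;
# one decoder per person reconstructs that person's face from it.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64 * 3, 512), nn.ReLU())
decoder_a = nn.Sequential(nn.Linear(512, 64 * 64 * 3), nn.Sigmoid())
decoder_b = nn.Sequential(nn.Linear(512, 64 * 64 * 3), nn.Sigmoid())

def swap_a_to_b(frame_a: torch.Tensor) -> torch.Tensor:
    """Render person A's expression and pose with person B's face."""
    latent = encoder(frame_a)                      # shared face code
    return decoder_b(latent).view(-1, 3, 64, 64)   # decoded as person B

frame = torch.rand(1, 3, 64, 64)  # a dummy 64x64 RGB video frame
print(swap_a_to_b(frame).shape)   # torch.Size([1, 3, 64, 64])
```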

The technology is often used for malicious purposes, such as generating sexual content depicting individuals without their consent, committing fraud, and creating deceptive content aimed at changing views and influencing democratic processes.

Such concerns around the use of deepfakes are likely the reason behind Google’s decision to ban related projects.

It’s a controversial decision. Banning such projects isn’t going to stop anyone from developing them and may also hinder efforts to build tools for countering deepfakes at a time when they’re most needed.

In March, a deepfake purportedly showing Ukrainian President Volodymyr Zelenskyy asking troops to lay down their arms in their fight to defend their homeland from Russia’s invasion was posted to a hacked news website.

“I only advise that the troops of the Russian Federation lay down their arms and return home,” Zelenskyy said in an official video to refute the fake. “We are at home and defending Ukraine.”

Fortunately, the deepfake was of low quality by today’s standards. The fake Zelenskyy had a comically large and noticeably pixelated head compared to the rest of his body. The video probably didn’t fool anyone, but it could have had serious consequences if people did believe it.

One Russia-linked influence campaign – removed by Facebook and Twitter in March – used AI-generated faces for a fake “editor-in-chief” and “columnist” for a linked propaganda website. That one was more believable and likely fooled some people.

However, not all deepfakes are malicious. They’re also used for music, activism, satire, and even helping police solve crimes.

Historical data from archive.org suggests Google quietly added deepfakes to its list of projects banned from Colab sometime between 14 and 24 May 2022.

(Photo by Markus Spiske on Unsplash)

Kendrick Lamar uses deepfakes in latest music video (9 May 2022)
American rapper Kendrick Lamar has made use of deepfakes for his latest music video.

Deepfakes use generative neural network architectures – such as autoencoders or generative adversarial networks (GANs) – to manipulate or generate visual and audio content.

Lamar is widely considered one of the greatest rappers of all time. However, he’s regularly proved his creative mind isn’t limited to his rapping talent.

For his track ‘The Heart Part 5’, Lamar has made use of deepfake technology to seamlessly morph his face into various celebrities including Kanye West, Nipsey Hussle, Will Smith, and even O.J. Simpson.

For due credit, the deepfake element was created by a studio called Deep Voodoo.

Deepfakes are often used for entertainment purposes, including for films and satire. However, they’re also being used for nefarious purposes like the creation of ‘deep porn’ videos without the consent of those portrayed.

The ability to deceive has experts concerned about the social implications. Deepfakes could be used for fraud, misinformation, influencing public opinion, and interfering in democratic processes.

In March, a deepfake purportedly showing Ukrainian President Volodymyr Zelenskyy asking troops to lay down their arms in their fight to defend their homeland from Russia’s invasion was posted to a hacked news website.

“I only advise that the troops of the Russian Federation lay down their arms and return home,” Zelenskyy said in an official video to refute the fake. “We are at home and defending Ukraine.”

Fortunately, the deepfake was of very low quality by today’s standards. The fake Zelenskyy had a comically large and noticeably pixelated head compared to the rest of his body. The video probably didn’t fool anyone, but it could have had major consequences if people did believe it.

One Russia-linked influence campaign – removed by Facebook and Twitter in March – used AI-generated faces for a fake “editor-in-chief” and “columnist” for a linked propaganda website.

Every deepfake that gets exposed increases public awareness. Artists like Kendrick Lamar using the technology for entertainment will also help spread the message that you can no longer necessarily believe what you see with your own eyes.

Related: Humans struggle to distinguish between real and AI-generated faces

UK considers blocking Nvidia’s $40B acquisition of Arm (4 August 2021)
Bloomberg reports the UK is considering blocking Nvidia’s $40 billion acquisition of Arm over national security concerns.

Over 160 billion chips have been made for various devices based on designs from Arm. In recent years, the company has added AI accelerator chips to its lineup for neural network processing.

In the wake of the proposed acquisition, Nvidia CEO Jensen Huang said:

“ARM is an incredible company and it employs some of the greatest engineering minds in the world. But we believe we can make ARM even more incredible and take it to even higher levels.

We want to propel it — and the UK — to global AI leadership.”

Given the size of the acquisition and potential impact, UK Culture Secretary Oliver Dowden referred the deal to the Competition and Markets Authority (CMA) and asked the regulator to prepare a report on whether the deal is anti-competitive.

To quash job-loss concerns and avoid regulators blocking the deal, Nvidia promised to keep the business in the UK and even hire more staff.

Nvidia also announced a new AI centre in Cambridge – home to an increasing number of leading startups in the field, such as FiveAI, Prowler.io, Fetch.ai, and Darktrace – that features an ARM/Nvidia-based supercomputer set to be one of the most powerful in the world.

However, it seems that it’s not economic concerns that could see the deal blocked.

The CMA’s report highlighted national security concerns and the UK is therefore likely to reject the acquisition. A deeper investigation is set to be launched before any final decision is made, but the deal looks to be a long shot.

“We continue to work through the regulatory process with the U.K. government,” said an Nvidia spokesperson in a statement. “We look forward to their questions and expect to resolve any issues they may have.”

If the acquisition is granted permission to proceed, it will likely come with strict conditions.

Several of Nvidia’s international rivals have offered to invest in Arm if it helps the company remain independent.

(Photo by Markus Winkler on Unsplash)

Aussie court rules AIs can be credited as inventors under patent law (3 August 2021)
A federal court in Australia has ruled that AI systems can be credited as inventors under patent law in a case that could set a global precedent.

Ryan Abbott, a professor at the University of Surrey, has launched over a dozen patent applications around the world – including in the UK, US, New Zealand, and Australia – on behalf of US-based Dr Stephen Thaler.

The twist is that it’s not Thaler whom Abbott is attempting to credit as an inventor, but rather Thaler’s AI system, known as DABUS.

“In my view, an inventor as recognised under the act can be an artificial intelligence system or device,” said Justice Jonathan Beach, overturning the original decision of Australia’s patent office. “We are both created and create. Why cannot our own creations also create?”

DABUS consists of neural networks and was used to invent an emergency warning light, a food container that improves grip and heat transfer, and more.

Until now, all of the patent applications had been rejected—including, initially, in Australia. Each country determined that a human must be the credited inventor.

Whether AIs should be afforded certain “rights” similar to humans is a key debate, and one increasingly in need of answers. This patent case could be a first step towards establishing when increasingly capable machines should be treated like humans.

DABUS was awarded its first patent for “a food container based on fractal geometry,” by South Africa’s Companies and Intellectual Property Commission on June 24.

Following the patent award, Professor Adrian Hilton, Director of the Institute for People-Centred AI at the University of Surrey, commented:

“This is a truly historic case that recognises the need to change how we attribute invention. We are moving from an age in which invention was the preserve of people to an era where machines are capable of realising the inventive step, unleashing the potential of AI-generated inventions for the benefit of society.

The School of Law at the University of Surrey has taken a leading role in asking important philosophical questions such as whether innovation can only be a human phenomenon, and what happens legally when AI behaves like a person.”

AI News reached out to the patent experts at ACT | The App Association, which represents more than 5,000 app makers and connected device companies around the world, for their perspective.

Brian Scarpelli, Senior Global Policy Counsel at ACT | The App Association, commented:

“The App Association, in alignment with the plain language of patent laws across key jurisdictions (including Australia’s 1990 Patents Act), is opposed to the proposal that a patent may be granted for an invention devised by a machine, rather than by a natural person.

Today’s patent laws can, for certain kinds of AI inventions, appropriately support inventorship. Patent offices can use the existing requirements for software patentability as a starting point to identify necessary elements of patentable AI inventions and applications – for example for AI technology that is used to improve machine capability, where it can be delineated, declared, and evaluated in a way equivalent to software inventions.

But more generally, determinations regarding when and by whom inventorship and authorship, autonomously created by AI, could represent a drastic shift in law and policy. This would have direct implications on policy questions raised about whether allowing patents on inventions made by machines furthers public policy goals, and even reaching into broader definitions of AI personhood.

Continued study, both by national/regional patent offices and multilateral fora like the World Intellectual Property Organization, is going to be critical and needs to continue to inform a comprehensive debate by policymakers.”

Feel free to let us know in the comments whether you believe AI systems should have similar legal protections and obligations to humans.

(Photo by Trollinho on Unsplash)

Apple considers using ML to make augmented reality more useful (22 July 2021)
A patent from Apple suggests the company is considering how machine learning can make augmented reality (AR) more useful.

Most current AR applications are somewhat gimmicky, with barely a handful that have achieved any form of mass adoption. Apple’s decision to introduce LiDAR in its recent devices has given AR a boost but it’s clear that more needs to be done to make applications more useful.

A newly filed patent suggests that Apple is exploring how machine learning can be used to automatically (or “automagically,” the company would probably say) detect objects in AR.

The first proposed use of the technology would be for Apple’s own Measure app.

Measure’s previously dubious accuracy improved greatly after Apple introduced LiDAR, but most people probably still grab an actual tape measure unless they’re truly stuck without one.

The patent suggests machine learning could be used for object recognition in Measure to help users simply point their devices at an object and have its measurements automatically presented in AR.

Specifically, Apple’s patent suggests displaying a “measurement of the object determined using one of a plurality of class-specific neural networks selected based on the classifying of the object.”

This simplicity advantage over a traditional tape measure would likely drive greater adoption.

Machine learning is already used for a number of object recognition and labelling tasks within Apple’s ecosystem. Image editor Pixelmator Pro, for example, uses it to automatically label layers.

Apple’s implementation suggests an object is measured “by first generating a 3D bounding box for the object based on the depth data”. This boundary box is then refined “using various neural networks and refining algorithms described herein.”

Not all objects are measured the same way, so Apple suggests a neural network could also step in to determine which measurements are useful to the user: for example, “a seat height for chairs, a display diameter for TVs, a table diameter for round tables, a table length for rectangular tables, and the like.”

To accomplish what Apple envisions, many models would need to be trained to cover all objects. However, common everyday items could be supported early on—with more added over time.

“One model may be trained and used to determine measurements for chair type objects (e.g., determining a seat height, arm length, etc.),” Apple wrote, “and another model may be trained and used to determine measurements for TV type objects (e.g., determining a diagonal screen size, greatest TV depth, etc.)”
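The dispatch pattern the patent describes – classify the object first, then route its bounding box to a class-specific measurement model – might look like the following sketch, in which every class name and measurement head is a hypothetical stand-in rather than anything from Apple’s filing:

```python
from typing import Callable, Dict
import numpy as np

# Hypothetical per-class measurement heads. In the patent these would be
# class-specific neural networks; simple functions stand in for them here.
def measure_chair(box: np.ndarray) -> dict:
    return {"seat_height_m": float(box[2]) * 0.45}      # stand-in logic

def measure_tv(box: np.ndarray) -> dict:
    return {"diagonal_m": float(np.hypot(box[0], box[2]))}

MEASUREMENT_HEADS: Dict[str, Callable[[np.ndarray], dict]] = {
    "chair": measure_chair,
    "tv": measure_tv,
}

def measure(object_class: str, bounding_box: np.ndarray) -> dict:
    """Select a class-specific measurer based on the classification."""
    head = MEASUREMENT_HEADS.get(object_class)
    if head is None:
        return {"bounding_box_m": bounding_box.tolist()}  # generic fallback
    return head(bounding_box)

print(measure("tv", np.array([1.2, 0.1, 0.7])))  # width, depth, height (m)
```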

Five inventors are credited with the patent: Amit Jain, Aditya Sankar, Qi Shan, Alexandre Da Veiga, and Shreyas V Joshi.

Apple’s patent is another example of how machine learning can be combined with other technologies to add real utility and ultimately improve lives. There’s no telling when, or even if, Apple will release an updated Measure app based on this patent—but it seems more plausible in the not-so-distant future than many of the company’s patents.

(Image Credit: Apple)

MIT researchers develop AI to calculate material stress using images (22 April 2021)
Researchers from MIT have developed an AI tool for determining the stress a material is under through analysing images.

The pesky laws of physics have been used by engineers for centuries to work out – using complex equations – the stresses the materials they’re working with are being put under. It’s a time-consuming but vital task to prevent structural failures which could be costly at best or cause loss of life at worst.

“Many generations of mathematicians and engineers have written down these equations and then figured out how to solve them on computers,” says Markus Buehler, the McAfee Professor of Engineering, director of the Laboratory for Atomistic and Molecular Mechanics, and one of the paper’s co-authors.

“But it’s still a tough problem. It’s very expensive — it can take days, weeks, or even months to run some simulations. So, we thought: Let’s teach an AI to do this problem for you.”

By employing computer vision, the AI tool developed by MIT’s researchers can generate estimates of material stresses in real-time.

A Generative Adversarial Network (GAN) was used for the breakthrough. The network was trained using thousands of paired images—one showing the material’s internal microstructure when subjected to mechanical forces, and the other labelled with colour-coded stress and strain values.

Through its adversarial, game-theoretic training, the GAN learns the relationship between a material’s appearance and the stresses it is under.
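The paired-image setup resembles conditional, pix2pix-style image-to-image translation. Below is a schematic of the adversarial objective under that assumption, with tiny stand-in networks rather than anything from the MIT paper:

```python
import torch
import torch.nn as nn

# Generator maps a microstructure image to a colour-coded stress field;
# discriminator judges (microstructure, field) pairs. Toy-sized networks.
G = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(8, 3, 3, padding=1), nn.Sigmoid())
D = nn.Sequential(nn.Conv2d(4, 8, 3, stride=2, padding=1), nn.ReLU(),
                  nn.Flatten(), nn.LazyLinear(1))

bce = nn.BCEWithLogitsLoss()
micro = torch.rand(2, 1, 32, 32)   # input: microstructure images
stress = torch.rand(2, 3, 32, 32)  # target: colour-coded stress maps

fake = G(micro)
d_real = D(torch.cat([micro, stress], dim=1))
d_fake = D(torch.cat([micro, fake.detach()], dim=1))
d_loss = (bce(d_real, torch.ones_like(d_real)) +
          bce(d_fake, torch.zeros_like(d_fake)))   # discriminator's game

g_adv = bce(D(torch.cat([micro, fake], dim=1)), torch.ones_like(d_real))
g_loss = g_adv + 100.0 * nn.functional.l1_loss(fake, stress)  # + pixel loss
print(float(d_loss), float(g_loss))
```

The adversarial term pushes generated stress fields to be indistinguishable from real ones, while the L1 term keeps them pixel-wise faithful to the ground truth.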

“From a picture, the computer is able to predict all those forces: the deformations, the stresses, and so forth,” Buehler adds.

Even more impressively, the AI can recreate issues like cracks developing in a material that can have a major impact on how it reacts to forces.

Once trained, the neural network can run on consumer-grade computer processors. This makes the AI accessible in the field and enables inspections to be carried out with just a photo.

You can find a full copy of the paper here.

(Photo by CHUTTERSNAP on Unsplash)

Q&A: Netra.ai uses Intel technologies to identify diabetic retinopathy (11 March 2021)
Netra.ai, a solution developed by Leben Care and the Sankara Eye Foundation, is using Intel technologies to identify diabetic retinopathy in minutes.

India has one of the largest diabetic populations in the world, with around 65 million people suffering from the disease. Eye damage caused by diabetes, known as diabetic retinopathy (DR), is estimated to affect around one-third of sufferers.

Dr. Kaushik Murali, President Medical Administration, Quality & Education at Sankara Eye Foundation India, said:

“Technology and AI are democratising healthcare access, especially in screening for ailments. Our team at Sankara Eye Foundation has focused on our vision to eliminate needless blindness from India.

The current solution Netra.ai – where we had a key role in the design and development with Leben Care – uses robust platforms from Intel. It is an example of how likeminded collaborators can create meaningful and impactful solutions for various challenges that face humanity.”

DR is a leading cause of vision loss and blindness in adults but early detection can limit its life-changing impact. In many low- and medium-income countries, there is a lack of trained retinal specialists – especially in rural areas – which leads to late diagnosis after diabetic eye disease has already reached advanced stages.

Netra.ai’s cloud-based solution analyses eye images and provides almost immediate results with 98.5 percent accuracy. The system is simple enough to be operated by a single technician, making it viable in rural areas.

Even in more established areas or richer countries, the speed of the AI solution helps to reduce the burden on healthcare systems, increase the number of patients who can be seen, provide faster diagnoses, and improve outcomes.

The solution is powered by Intel’s Deep Learning Boost and uses the Xeon Scalable processor platform via Amazon’s EC2 C5 cloud instances.

A four-step deep convolutional neural network (DCNN) pipeline detects the DR stage and annotates lesions.
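The article gives no internals of that pipeline, but for illustration only, the staging step could reduce to a small CNN classifier over DR severity grades, as in this purely hypothetical sketch:

```python
import torch
import torch.nn as nn

# Illustrative only: grade a fundus photo into DR severity classes
# (e.g. no DR / mild / moderate / severe / proliferative). The real
# Netra.ai DCNN also annotates lesions, which is not shown here.
N_GRADES = 5

classifier = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, N_GRADES),
)

fundus = torch.rand(1, 3, 224, 224)               # one retinal photograph
probs = classifier(fundus).softmax(dim=-1)
print(probs.argmax().item(), probs.max().item())  # predicted grade, confidence
```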

In a Q&A with AI News, Hema Chamraj, Director of Technology Advocacy, Ai4Good at Intel, provided further context on the possibilities for AI in healthcare.

AI News: How long do you think it will be before AI is the primary method of diagnosing diabetic retinopathy?

Hema Chamraj: AI has shown enormous promise on sensitivity, specificity, and the ability to be autonomous, as seen in the FDA-approved solution for diabetic retinopathy.

However, the primary usage of AI for diabetic retinopathy will be as a screening tool for early detection, reducing the sickness burden, addressing the clinician shortage and overburdened healthcare system.

AN: Are regulatory barriers proving to be much of a hindrance to widespread adoption?

HC: It is still early days for AI. Each country has different regulatory frameworks and many are progressing toward providing guidance for AI.

In the US, the FDA has provided clearance for more than a dozen AI solutions and also just released its first AI action plan.

AN: Despite being trained to be device-agnostic, Netra.ai still requires specialist hardware. However, could people one day routinely check for early signs of DR using self-serve facilities which could be particularly helpful in rural areas?

HC: Yes, AI inferencing can happen at the edge with portable devices with a smaller footprint, and even on smartphones in the future.

In rural areas, a technician should be able to do the initial screening and counsel the patients requiring urgent referral to the closest hospital.

The AI solution at the edge helps to correctly identify, with very high accuracy, normal versus abnormal images and classify the need of referral. This can help patients avoid unnecessary travel and work disruptions.

AN: Are you confident AI diagnosis tools can significantly reduce pressures on hospitals and help to clear some of the care backlog resulting from the COVID-19 pandemic?

HC: During the pandemic, we’ve seen AI help doctors with COVID testing and drug discovery. We’ve also seen a rise in telehealth options to reduce pressures on hospitals and medical staff.

AN: What other areas of clinical care are you particularly excited about AI’s impact on?

HC: AI in medical imaging has proven its value to uncover hidden insights from large datasets with accuracy exceeding humans in certain cases. All the data in the clinical care system including the EHR, genomic and pathology data could benefit from AI.

The potential of all this data and insights coming together to provide a holistic understanding of the patient is very exciting, as is the prospect to finally move towards prevention and precision medicine.

As of writing, Netra.ai has screened 3,093 patients in India and identified 742 who are at risk. The solution can be expanded to detect other retinal conditions such as glaucoma.

(Image Credit: Daniil Kuželev on Unsplash)

OpenAI’s latest neural network creates images from written descriptions (6 January 2021)
OpenAI has debuted its latest jaw-dropping innovation, an image-generating neural network called DALL·E.

DALL·E is a 12-billion parameter version of GPT-3 which is trained to generate images from text descriptions.

“We find that DALL·E is able to create plausible images for a great variety of sentences that explore the compositional structure of language,“ OpenAI explains.

Generated images can range from drawings to objects, and even manipulated real-world photos; OpenAI provided examples of each alongside its announcement.
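Conceptually, DALL·E appends discrete image tokens to the text tokens and samples them one at a time with the transformer, after which a separate decoder renders the tokens as pixels. A schematic of that autoregressive loop, with a dummy stand-in for the 12-billion parameter model:

```python
import torch

def generate_image_tokens(transformer, text_tokens, n_image_tokens=1024):
    """Sample image tokens autoregressively, conditioned on the text."""
    seq = list(text_tokens)
    for _ in range(n_image_tokens):
        logits = transformer(torch.tensor([seq]))[0, -1]       # next-token logits
        seq.append(torch.multinomial(logits.softmax(-1), 1).item())
    return seq[len(text_tokens):]   # a decoder would turn these into pixels

# Dummy "transformer": uniform logits over an 8192-entry image codebook.
dummy = lambda ids: torch.zeros(1, ids.shape[1], 8192)
tokens = generate_image_tokens(dummy, text_tokens=[5, 77, 1032], n_image_tokens=16)
print(len(tokens))  # 16 sampled image tokens
```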

Just as OpenAI’s GPT-3 text generator caused alarm over its potential to help create fake news for the kinds of disinformation campaigns recently seen around COVID-19 and 5G, and for attempts to influence various democratic processes, similar concerns will be raised about the company’s latest innovation.

People are increasingly aware of fake news and know not to believe everything they read, especially from unknown sources without good track records. As humans, however, we’re still used to believing what we can see with our own eyes. Fake news paired with fake supporting imagery is a rather convincing combination.

Much like it argued with GPT-3, OpenAI essentially says that – by putting the technology out there as responsibly as possible – it helps to raise awareness and drives research into how the implications can be tackled before such neural networks are inevitably created and used by malicious parties.

“We recognise that work involving generative models has the potential for significant, broad societal impacts,” OpenAI said.

“In the future, we plan to analyse how models like DALL·E relate to societal issues like economic impact on certain work processes and professions, the potential for bias in the model outputs, and the longer-term ethical challenges implied by this technology.”

Technological advancements will almost always be used for damaging purposes—but often the benefits outweigh the risks. I’d wager you could write pages about the good and bad sides of the internet, but overall it’s a pretty fantastic thing.

When it comes down to it: If the “good guys” don’t build it, you can be sure the bad ones will.

(Image Credit: Justin Jay Wang/OpenAI)

Researchers achieve 94% power reduction for on-device AI tasks (17 September 2020)
Researchers from Applied Brain Research (ABR) have achieved significantly reduced power consumption for a range of AI-powered devices.

ABR designed a new neural network called the Legendre Memory Unit (LMU). With LMU, on-device AI tasks – such as those on speech-enabled devices like wearables, smartphones, and smart speakers – can take up to 94 percent less power.

The reduction in power consumption achieved through the LMU will be particularly beneficial to smaller form-factor devices such as smartwatches, which struggle with small batteries. IoT devices that carry out AI tasks – but may have to last months, if not years, before they’re replaced – should also benefit.

The LMU is a recurrent neural network (RNN) that enables lower-power and more accurate processing of time-varying signals.

ABR says the LMU can be used to build AI networks for all time-varying tasks—such as speech processing, video analysis, sensor monitoring, and control systems.

The AI industry’s current go-to model is the Long Short-Term Memory (LSTM) network. The LSTM was first proposed back in 1995 and is used in most popular speech recognition and translation services today, such as those from Google, Amazon, Facebook, and Microsoft.

Last year, researchers from the University of Waterloo debuted LMU as an alternative RNN to LSTM. Those researchers went on to form ABR, which now consists of 20 employees.

Peter Suma, co-CEO of Applied Brain Research, said in an email:

“We are a University of Waterloo spinout from the Theoretical Neuroscience Lab at UW. We looked at how the brain processes signals in time and created an algorithm based on how “time-cells” in your brain work.

We called the new AI, a Legendre-Memory-Unit (LMU) after a mathematical tool we used to model the time cells. The LMU is mathematically proven to be optimal at processing signals. You cannot do any better. Over the coming years, this will make all forms of temporal AI better.”
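The published LMU equations (Voelker et al., NeurIPS 2019) define a linear memory that compresses a sliding window of length θ into a handful of Legendre coefficients. Here is a numpy sketch of just that memory cell – the full LMU couples it to a nonlinear hidden state, omitted here – using a simple Euler discretization where the paper uses zero-order hold:

```python
import numpy as np

def lmu_matrices(order: int, theta: float, dt: float = 1.0):
    """Legendre Memory Unit state-space matrices, Euler-discretized."""
    A = np.zeros((order, order))
    B = np.zeros((order, 1))
    for i in range(order):
        B[i, 0] = (2 * i + 1) * (-1.0) ** i
        for j in range(order):
            A[i, j] = (2 * i + 1) * (-1.0 if i < j else (-1.0) ** (i - j + 1))
    A, B = A / theta, B / theta                  # theta * m' = A m + B u
    return np.eye(order) + dt * A, dt * B        # m_t = Abar m_{t-1} + Bbar u_t

# Eight Legendre coefficients summarise the last ~100 steps of the input.
Abar, Bbar = lmu_matrices(order=8, theta=100.0)
m = np.zeros((8, 1))
for t in range(500):
    u = np.sin(2 * np.pi * t / 50)   # toy input signal
    m = Abar @ m + Bbar * u          # O(order) state per step
print(m.ravel()[:4])
```

Because the memory is a compact, fixed linear system, far fewer trained parameters are needed than in an LSTM—which, per the article, is where the size and power savings come from.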

ABR presented a paper at the NeurIPS conference in late 2019 demonstrating that the LMU is 1,000,000x more accurate than the LSTM while encoding 100x more time-steps.

The LMU model is also smaller: it uses 500 parameters versus the LSTM’s 41,000 (a 98 percent reduction in network size).

“We implemented our speech recognition with the LMU and it lowered the power used for command word processing to ~8 millionths of a watt, which is 94 percent less power than the best on the market today,” says Suma. “For full speech, we got the power down to 4 milli-watts, which is about 70 percent smaller than the best out there.”

Suma says the next step for ABR is to apply the same approach to video, sensor, and drone-control AI processing—making those models smaller and better too.

A full whitepaper detailing LMU and its benefits can be found on preprint repository arXiv here.
