intel Archives - AI News

Dell, Intel and University of Cambridge deploy the UK's fastest AI supercomputer (2 November 2023)

Dell, Intel, and the University of Cambridge have jointly announced the deployment of the Dawn Phase 1 supercomputer.

This cutting-edge AI supercomputer stands as the fastest of its kind in the UK today. It marks a groundbreaking fusion of AI and high-performance computing (HPC) technologies, showcasing the potential to tackle some of the world’s most pressing challenges.

Dawn Phase 1 is the cornerstone of the recently launched UK AI Research Resource (AIRR), demonstrating the nation’s commitment to exploring innovative systems and architectures.

This supercomputer brings the UK closer to exascale: a computing threshold of a quintillion (10^18) floating-point operations per second. To put this into perspective, an exascale system can perform in a single second the number of calculations that every person on Earth, working non-stop 24 hours a day, would take more than four years to complete.
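As a rough back-of-the-envelope check of that comparison, the short sketch below assumes roughly eight billion people each performing one calculation per second; these figures are illustrative assumptions rather than numbers from the announcement.

    # Back-of-the-envelope check of the exascale comparison.
    # Assumptions (illustrative): ~8 billion people, one calculation per person per second.
    people = 8_000_000_000
    human_ops_per_second = people * 1
    exascale_ops_per_second = 10**18

    seconds_of_human_effort = exascale_ops_per_second / human_ops_per_second
    years = seconds_of_human_effort / (365 * 24 * 3600)
    print(f"{years:.1f} years of non-stop human calculation per machine-second")  # ~4 years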

Operational at the Cambridge Open Zettascale Lab, Dawn is built on Dell PowerEdge XE9640 servers, providing a platform for the Intel Data Center GPU Max Series accelerator. Through oneAPI, the collaboration supports a diverse software ecosystem and gives developers a choice of hardware.

The system’s capabilities extend across various domains, including healthcare, engineering, green fusion energy, climate modelling, cosmology, and high-energy physics.

Adam Roe, EMEA HPC technical director at Intel, said:

“Dawn considerably strengthens the scientific and AI compute capability available in the UK and it’s on the ground and operational today at the Cambridge Open Zettascale Lab.

Dell PowerEdge XE9640 servers offer a no-compromises platform to host the Intel Data Center GPU Max Series accelerator, which opens up the ecosystem to choice through oneAPI.

I’m very excited to see the sorts of early science this machine can deliver and continue to strengthen the Open Zettascale Lab partnership between Dell Technologies, Intel, and the University of Cambridge, and further broaden that to the UK scientific and AI community.”

Glimpse into the future

Dawn Phase 1 is not just a standalone achievement; it’s part of a broader strategy.

The collaborative endeavour aims to deliver a Phase 2 supercomputer in 2024, promising a tenfold increase in performance. This progression would significantly boost the UK's AI capability and strengthen the industry-academia partnership.

The supercomputer’s technical foundation lies in Dell PowerEdge XE9640 servers, renowned for their versatile configurations and efficient liquid cooling technology. This innovation ensures optimal handling of AI and HPC workloads, offering a more effective solution than traditional air-cooled systems.

Tariq Hussain, Head of UK Public Sector at Dell, commented:

“Collaborations like the one between the University of Cambridge, Dell Technologies and Intel, alongside strong inward investment, are vital if we want the compute to unlock the high-growth AI potential of the UK. It is paramount that the government invests in the right technologies and infrastructure to ensure the UK leads in AI and exascale-class simulation capability.

It’s also important to embrace the full spectrum of the technology ecosystem, including GPU diversity, to ensure customers can tackle the growing demands of generative AI, industrial simulation modelling and ground-breaking scientific research.”

As the world awaits the full technical details and performance numbers of Dawn Phase 1 – slated for release in mid-November during the Supercomputing 2023 (SC23) conference in Denver, Colorado – the UK stands on the cusp of a transformative era in scientific and AI research.

This collaboration between industry giants and academia not only accelerates research discovery but also propels the UK’s knowledge economy to new heights.

(Image Credit: Joe Bishop for Cambridge Open Zettascale Lab)

See also: UK paper highlights AI risks ahead of global Safety Summit

Intel boosts AI inferencing for developers with OpenVINO 2022.1 (25 February 2022)

Intel has unveiled a major new version of OpenVINO to boost AI inferencing performance for developers.

Hundreds of thousands of developers have used OpenVINO to deploy AI workloads at the edge. Features added to OpenVINO 2022.1 are based on three-and-a-half years of developer feedback, according to Intel. 

Adam Burns, VP of OpenVINO Developer Tools in the Network and Edge Group at Intel, said:

“The latest release of OpenVINO 2022.1 builds on more than three years of learnings from hundreds of thousands of developers to simplify and automate optimisations.

The latest upgrade adds hardware auto-discovery and automatic optimisation, so software developers can achieve optimal performance on every platform.

This software, plus Intel silicon, enables a significant AI ROI advantage and is deployed easily into the Intel-based solutions in your network.”

Among the latest additions are “a greater selection of deep learning models, more device portability choices, and higher inferencing performance with fewer code changes.”

The expanded model support enables new types of deployments, while a new automatic optimisation process detects the compute hardware and accelerators available in a system and dynamically adjusts AI parallelisation and load balancing based on compute and memory capacity.
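As an illustration of that workflow, the sketch below uses OpenVINO's Python API to list the devices the runtime discovers and to compile a model for automatic device selection; the model file and input shape are placeholders, and details of the API may vary between releases.

    # Minimal OpenVINO inference sketch using automatic device selection.
    # "model.xml" and the input shape are placeholders.
    import numpy as np
    from openvino.runtime import Core

    core = Core()
    print(core.available_devices)                  # hardware the runtime has discovered

    model = core.read_model("model.xml")           # an Intermediate Representation model
    compiled = core.compile_model(model, "AUTO")   # let the runtime pick and balance devices

    dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)
    output_layer = compiled.output(0)
    result = compiled([dummy_input])[output_layer] # run a single inference
    print(result.shape)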

OpenVINO is built on the foundation of oneAPI—an open standard for a unified application programming interface intended to be used across different compute accelerator architectures, including GPUs, AI accelerators, and field-programmable gate arrays.

OpenVINO is used by a number of high-profile Intel customers including Hitachi, BMW Group, ADLINK, American Tower, and more.

“With American Tower’s edge infrastructure, Intel’s OpenVINO deep learning capabilities and Zeblok’s AI platform-as-a-service, we can enable a complete smart solution for the market,” commented Eric Watko, VP of Innovation at American Tower. 

OpenVINO 2022.1 will be available in March 2022.

(Image Credit: Intel)

Hi Auto brings conversational AI to drive-thrus using Intel technology (20 May 2021)

Hi Auto is increasing the efficiency of drive-thrus with a conversational AI system powered by Intel technologies.

Drive-thru usage has rocketed over the past year with many indoor restaurants closed due to pandemic-induced restrictions. In fact, research suggests that drive-thru orders in the US alone increased by 22 percent in 2020.

Long queues at drive-thrus have therefore become part of the “new normal” and fast food is no longer the convenient alternative to cooking after a long day of Zoom calls.

Israel-based Hi Auto has created a conversational AI system that greets drive-thru guests, answers their questions, suggests menu items, and enters their orders into the point-of-sale system. If an unrelated question is asked – or the customer orders something that is not on the standard menu – the AI system automatically switches over to a human employee.
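The handover behaviour can be pictured with a short sketch; the menu, intent labels, and function names below are hypothetical illustrations rather than Hi Auto's actual implementation.

    # Illustrative order-taking logic with a human fallback (hypothetical names).
    from typing import Optional

    MENU = {"chicken sandwich", "fries", "lemonade"}

    def handle_utterance(intent: str, item: Optional[str], order: list) -> str:
        if intent == "greeting":
            return "Welcome! What can I get you today?"
        if intent == "order" and item in MENU:
            order.append(item)                     # entered into the point-of-sale order
            return f"Added {item}. Anything else?"
        # Off-menu items or unrelated questions are handed over to a person.
        return "One moment, a team member will help you with that."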

The first restaurant to trial the system is Lee’s Famous Recipe Chicken in Ohio.

Chuck Doran, Owner and Operator at Lee’s Famous Recipe Chicken, said:

“The automated AI drive-thru has impacted my business in a simple way. We don’t have customers waiting anymore. We greet them as soon as they get to the board and the order is taken correctly.

It’s amazing to see the level of accuracy with the voice recognition technology, which helps speed up service. It can even suggest additional items based on the order, which helps us increase our sales.

If a person is running the drive-thru, they may suggest a sale in one out of 20 orders. With Hi Auto, it happens in every transaction where it’s feasible. So, we see improvements in our average check, service time, and improvements in consistency and customer service.

And, because the cashier is now less stressed, she can focus on customer service as well. A less-burdened employee will be a happier employee and we want happy employees interacting with our customers.”

By reducing the number of staff needed to take orders, more employees can focus on fulfilling them and serving as many people as possible. A recent survey of small businesses found that 42 percent have job openings they can't fill, so ensuring every worker is used effectively is critical.

Roy Baharav, CEO and Co-Founder at Hi Auto, commented:

“At Lee’s, we met a team that puts its heart and soul into serving its customers.

We operationalised our AI system based on what we learned from the owners, general managers, and employees. They have embraced the solution and within a short time began reaping the benefits.

We are now applying the process and lessons learned at Lee’s at additional customer sites.”

Hi Auto's solution runs on Intel Xeon processors in the cloud and on Intel NUC devices.

Joe Jensen, VP in the Internet of Things Group and GM of Retail, Banking, Hospitality and Education at Intel, said:

“We’re increasingly seeing restaurants interested in leveraging AI to deliver actionable data and personalise customer experiences.

With Hi Auto’s solution powered by Intel technology, quick-service restaurants can help their employees be more productive while increasing customer satisfaction and, ultimately, their bottom line.”

Lee's Famous Recipe Chicken plans to roll out Hi Auto's solution at more of its branches.

Going forward, Hi Auto plans to add Spanish language support and continue optimising its conversational AI solution. The company says pilots are already underway with some of the largest quick-service restaurants.

(Image Credit: Lee’s Famous Recipe Chicken)

Q&A: Netra.ai uses Intel technologies to identify diabetic retinopathy (11 March 2021)

Netra.ai, a solution developed by Leben Care and the Sankara Eye Foundation, is using Intel technologies to identify diabetic retinopathy in minutes.

India has one of the largest diabetic populations in the world with around 65 million people suffering from the disease. Eye damage caused by diabetes, known as diabetic retinopathy (DR), is estimated will affect around one-third of sufferers.

Dr. Kaushik Murali, President Medical Administration, Quality & Education at Sankara Eye Foundation India, said:

“Technology and AI are democratising healthcare access, especially in screening for ailments. Our team at Sankara Eye Foundation has focused on our vision to eliminate needless blindness from India.

The current solution Netra.ai – where we had a key role in the design and development with Leben Care – uses robust platforms from Intel. It is an example of how likeminded collaborators can create meaningful and impactful solutions for various challenges that face humanity.”

DR is a leading cause of vision loss and blindness in adults but early detection can limit its life-changing impact. In many low- and medium-income countries, there is a lack of trained retinal specialists – especially in rural areas – which leads to late diagnosis after diabetic eye disease has already reached advanced stages.

Netra.ai's cloud-based solution analyses eye images and provides almost immediate results with 98.5 percent accuracy. The system is simple enough to be operated by a single technician in a rural area.

Even in more established areas or richer countries, the speed of the AI solution helps to reduce the burden on healthcare systems, increase how many patients can be seen, provide a faster diagnosis, and improve their outcomes.

The solution is powered by Intel’s Deep Learning Boost and uses the Xeon Scalable processor platform via Amazon’s EC2 C5 cloud instances.

A four-step deep convolutional neural network (DCNN) detects the DR stage and annotates lesions.
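As a purely generic illustration of the grading part of such a pipeline (not Netra.ai's actual architecture), a CNN backbone with a five-class head covering the commonly used clinical DR grades might look like this:

    # Generic DR-grade classifier sketch (illustrative only, not Netra.ai's model).
    # The five outputs follow the common clinical grading scale:
    # no DR, mild, moderate, severe, and proliferative DR.
    import torch
    import torch.nn as nn
    from torchvision import models

    backbone = models.resnet50(weights=None)             # a pretrained backbone could be used instead
    backbone.fc = nn.Linear(backbone.fc.in_features, 5)  # five DR grades

    fundus_batch = torch.randn(4, 3, 224, 224)           # placeholder retinal images
    grades = backbone(fundus_batch).argmax(dim=1)        # predicted grade per image
    print(grades)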

In a Q&A with AI News, Hema Chamraj, Director of Technology Advocacy, Ai4Good at Intel, provided further context on the possibilities for AI in healthcare.

AI News: How long do you think it will be before AI is the primary method of diagnosing diabetic retinopathy?

Hema Chamraj: AI has shown enormous promise on the sensitivity, specificity and the ability to be autonomous as seen in this FDA-approved solution for Diabetic Retinopathy.

However, the primary usage of AI for diabetic retinopathy will be as a screening tool for early detection, reducing the sickness burden, addressing the clinician shortage and overburdened healthcare system.

AN: Are regulatory barriers proving to be much of a hindrance to widespread adoption?

HC: It is still early days for AI. Each country has different regulatory frameworks and many are progressing toward providing guidance for AI.

In the US, the FDA has provided clearance for more than a dozen AI solutions and also just released its first AI action plan.

AN: Despite being trained to be device-agnostic, Netra.ai still requires specialist hardware. However, could people one day routinely check for early signs of DR using self-serve facilities which could be particularly helpful in rural areas?

HC: Yes, AI inferencing can happen at the edge with portable devices with smaller footprint and even on smartphones in the future.

In rural areas, a technician should be able to do the initial screening and counsel the patients requiring urgent referral to the closest hospital.

The AI solution at the edge helps to correctly identify, with very high accuracy, normal versus abnormal images and classify the need of referral. This can help patients avoid unnecessary travel and work disruptions.

AN: Are you confident AI diagnosis tools can significantly reduce pressures on hospitals and help to clear some of the care backlog resulting from the COVID-19 pandemic?

HC: During the pandemic, we’ve seen AI help doctors with COVID testing and drug discovery. We’ve also seen a rise in telehealth options to reduce pressures on hospitals and medical staff.

AN: What other areas of clinical care are you particularly excited about AI’s impact on?

HC: AI in medical imaging has proven its value to uncover hidden insights from large datasets with accuracy exceeding humans in certain cases. All the data in the clinical care system including the EHR, genomic and pathology data could benefit from AI.

The potential of all this data and insights coming together to provide a holistic understanding of the patient is very exciting, as is the prospect to finally move towards prevention and precision medicine.

As of writing, Netra.ai has screened 3,093 patients in India and identified 742 who are at risk. The solution can be expanded to detect other retinal conditions such as glaucoma.

(Image Credit: Daniil Kuželev on Unsplash)

Intel's AI is helping NFL hopefuls to reach their full potential (4 March 2021)

EXOS is piloting the use of Intel’s 3D Athlete Tracking (3DAT) technology to help the next generation of professional footballers reach their full potential.

This year’s hopefuls risk feeling unprepared after coming off such a disruptive year and will need all the help they can get to achieve their goals.

3DAT is a computer vision solution that uses four pan-tilt mounted, highly mobile cameras to capture the form and motion of athletes. Pose estimation algorithms are then applied to analyse the biomechanics of athletes' movements.
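To give a flavour of how pose estimates become biomechanical metrics (a generic geometry sketch, not Intel's 3DAT pipeline), a joint angle can be computed from three estimated keypoints:

    # Compute a knee flexion angle from estimated hip, knee, and ankle keypoints.
    # Generic geometry only; the coordinates are placeholder 2D pose outputs in pixels.
    import numpy as np

    def joint_angle(a, b, c):
        """Angle at point b (degrees) formed by segments b->a and b->c."""
        ba = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
        bc = np.asarray(c, dtype=float) - np.asarray(b, dtype=float)
        cosine = np.dot(ba, bc) / (np.linalg.norm(ba) * np.linalg.norm(bc))
        return np.degrees(np.arccos(np.clip(cosine, -1.0, 1.0)))

    hip, knee, ankle = (412, 300), (430, 420), (418, 540)   # hypothetical keypoints
    print(f"Knee angle: {joint_angle(hip, knee, ankle):.1f} degrees")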

Monica Laudermilk, VP of Research at EXOS, said:

“Metrics that were previously unmeasurable by the naked eye are now being revealed with Intel’s 3DAT technology.

We’re able to take that information, synthesize it and turn it into something tangible for our coaches and athletes. It’s a gamechanger when the tiniest of adjustments can lead to real, impactful results for our athletes.”

EXOS’ coaches can take the detailed information provided by 3DAT and use the precise skeletal analysis and performance metrics to not only provide tailored advice to athletes on maximising their potential, but also show them visually how current approaches are holding them back.

Ashton Eaton, two-time Olympic gold medalist in the decathlon, and Product Development Engineer in Intel’s Olympic Technology Group, commented:

“There’s a massive gap in the sports and movement field, between what people feel when they move and what they actually know that they’re doing.

When I was running the 100-meter dash, I’d work with my coach to make adjustments to shave off fractions of a second, but it was all by feel. Sometimes it worked, sometimes it didn’t, because I didn’t fully know what my body was actually doing.

3DAT allows athletes to understand precisely what their body is doing while in motion, so they can precisely target where to make tweaks to get faster or better.”

The entire 3DAT system is hands-free, ensuring athletes aren't burdened with sensors or anything else that forces them to deviate from their usual routines. Coaches receive a full breakdown of athletes' sessions to help pinpoint issues.

Craig Friedman, SVP of EXOS’ Performance Innovation Team, says:

“3DAT is giving us information, and insight, not just into the technique of how people are running and how they can improve, but also what might be holding them back.

This data enables us to make adjustments in the weight room to help unlock more potential on the field.”

Intel says its ongoing partnership with EXOS will help its engineers to further advance 3DAT through access to expert coaches and elite athletes.

Intel, Ubotica, and the ESA launch the first AI satellite (20 October 2020)

Intel, Ubotica, and the European Space Agency (ESA) have launched the first AI satellite into Earth’s orbit.

The PhiSat-1 satellite is about the size of a cereal box and was ejected from a rocket’s dispenser alongside 45 other satellites. The rocket launched from Guiana Space Centre on September 2nd.

Intel has integrated its Movidius Myriad 2 Vision Processing Unit (VPU) into PhiSat-1 – enabling large amounts of data to be processed on the device. This helps to prevent useless data being sent back to Earth and consuming precious bandwidth.

“The capability that sensors have to produce data increases by a factor of 100 every generation, while our capabilities to download data are increasing, but only by a factor of three, four, five per generation,” says Gianluca Furano, data systems and onboard computing lead at the ESA.

Data savings of around 30 percent are expected from using AI at the edge on PhiSat-1.
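The onboard filtering idea can be sketched in a few lines; the cloud-cover criterion, threshold, and function names below are assumptions for illustration, not the PhiSat-1 flight software.

    # Illustrative onboard filtering loop: only keep frames the model judges
    # worth downlinking (for example, scenes not obscured by cloud).
    CLOUD_THRESHOLD = 0.7   # assumed cut-off, not a PhiSat-1 value

    def select_for_downlink(frames, estimate_cloud_fraction):
        kept = []
        for frame in frames:
            if estimate_cloud_fraction(frame) < CLOUD_THRESHOLD:
                kept.append(frame)   # spend bandwidth on useful data
        return kept                  # everything else is discarded on board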

“Space is the ultimate edge,” says Aubrey Dunne, chief technology officer of Ubotica. “The Myriad was absolutely designed from the ground up to have an impressive compute capability but in a very low power envelope, and that really suits space applications.”

PhiSat-1 is currently in a sun-synchronous orbit around 329 miles (530 km) above Earth and travelling at over 17,000mph (27,500kmh).

The satellite’s mission is to assess things like polar ice for monitoring climate change, and soil moisture for the growth of crops. One day it could help to spot wildfires in minutes rather than hours or detect environmental accidents at sea.

A successor, PhiSat-2, is currently planned to test more of these possibilities. PhiSat-2 will also carry another Myriad 2.

Myriad 2 was not originally designed for use in orbit. Specialist chips which are protected against radiation are typically used for space missions and can be “up to two decades behind state-of-the-art commercial technology,” explains Dunne.

Incredibly, the Myriad 2 survived 36 straight hours of being blasted with radiation at CERN in late-2018 without any modifications.

ESA announced the joint team was “happy to reveal the first-ever hardware-accelerated AI inference of Earth observation images on an in-orbit satellite.”

PhiSat-1 and PhiSat-2 will be part of a future network with intersatellite communication systems.

(Image Credit: CERN/M. Brice)

Intel and UPenn utilising federated learning to identify brain tumours (11 May 2020)

Intel and the University of Pennsylvania (UPenn) are training artificial intelligence models to identify brain tumours – with a focus on maintaining privacy.

The Perelman School of Medicine at UPenn is working with Intel Labs to co-develop technology based on federated learning, a machine learning technique which trains an algorithm across various devices without exchanging data samples.

The goal is therefore to preserve privacy. Penn Medicine and Intel Labs claim to have been the first to publish a paper on federated learning in medical imaging, showing that a federated model achieved more than 99% of the accuracy of a model trained by conventional, non-private methods. Work building on this will, according to the two organisations, 'leverage Intel software and hardware to implement federated learning in a manner that provides additional privacy protection to both the model and the data.'
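The core mechanic can be sketched in a few lines: each institution trains on its own data and only model parameters, never the underlying records, are shared and averaged. The following is a minimal FedAvg-style illustration with a toy update step, not the Intel/Penn Medicine implementation.

    # Minimal federated-averaging sketch: parameters are shared, data never leaves each site.
    # The local "training" step is a toy placeholder.
    import numpy as np

    def local_update(weights, private_data, lr=0.1):
        gradient = np.mean(private_data, axis=0) - weights   # stand-in for real training
        return weights + lr * gradient

    def federated_round(global_weights, sites):
        updates = [local_update(global_weights, data) for data in sites]
        return np.mean(updates, axis=0)                      # aggregate parameters only

    rng = np.random.default_rng(0)
    hospitals = [rng.normal(loc=i, size=(100, 8)) for i in range(3)]  # three private datasets
    weights = np.zeros(8)
    for _ in range(20):
        weights = federated_round(weights, hospitals)
    print(weights.round(2))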

They will be joined by 29 healthcare and research institutions from seven countries.

“AI shows great promise for the early detection of brain tumours, but it will require more data than any single medical centre holds to reach its full potential,” said Jason Martin, principal engineer at Intel Labs in a statement.

Artificial intelligence initiatives in healthcare continue apace. Microsoft recently announced details of a $40 million ‘AI for Health’ project, while last month startup Babylon Health stated its belief that it can appropriately triage patients in 85% of cases.

Read the full Intel announcement here.

Photo by jesse orrico on Unsplash

Leading AI researchers propose 'toolbox' for verifying ethics claims (20 April 2020)

Researchers from OpenAI, Google Brain, Intel, and 28 other leading organisations have published a paper which proposes a ‘toolbox’ for verifying AI ethics claims.

With concerns around AI ranging from dangerous indifference to innovation-halting scaremongering, it's clear there's a need for a system that achieves a healthy balance.

“AI systems have been developed in ways that are inconsistent with the stated values of those developing them,” the researchers wrote. “This has led to a rise in concern, research, and activism relating to the impacts of AI systems.”

The researchers note that significant work has gone into articulating ethical principles by many players involved with AI development, but the claims are meaningless without some way to verify them.

“People who get on airplanes don’t trust an airline manufacturer because of its PR campaigns about the importance of safety – they trust it because of the accompanying infrastructure of technologies, norms, laws, and institutions for ensuring airline safety.”

Among the core ideas put forward is to pay developers for discovering bias in algorithms. Such a practice is already widespread in cybersecurity; with many companies offering up bounties to find bugs in their software.

“Bias and safety bounties would extend the bug bounty concept to AI and could complement existing efforts to better document data sets and models for their performance limitations and other properties,” the authors wrote.

“We focus here on bounties for discovering bias and safety issues in AI systems as a starting point for analysis and experimentation but note that bounties for other properties (such as security, privacy protection, or interpretability) could also be explored.”

Another potential avenue is so-called “red teaming,” the creation of a dedicated team which adopts the mindset of a possible attacker to find flaws and vulnerabilities in a plan, organisation, or technical system.

“Knowledge that a lab has a red team can potentially improve the trustworthiness of an organization with respect to their safety and security claims.”

A red team alone is unlikely to provide much confidence, but combined with other measures it can go a long way. Verification from parties outside an organisation will be key to instilling trust in that company's AI developments.

“Third party auditing is a form of auditing conducted by an external and independent auditor, rather than the organization being audited, and can help address concerns about the incentives for accuracy in self-reporting.”

“Provided that they have sufficient information about the activities of an AI system, independent auditors with strong reputational and professional incentives for truthfulness can help verify claims about AI development.”

The researchers highlight that a current roadblock with third-party auditing is that no techniques or best practices have yet been established specifically for AI. Frameworks such as Claims-Arguments-Evidence (CAE) and Goal Structuring Notation (GSN) may provide a starting point, as they're already widely used for safety-critical auditing.

Audit trails, covering all steps of the AI development process, are also recommended to become the norm. The researchers again point to commercial aircraft, as a safety-critical system, and their use of flight data recorders to capture multiple types of data every second and provide a full log.

“Standards setting bodies should work with academia and industry to develop audit trail requirements for safety-critical applications of AI systems.”

The final suggestion for software-oriented methods of verifying AI ethics claims is the use of privacy-preserving machine learning (PPML).

Privacy-preserving machine learning aims to protect the privacy of data or models used in machine learning, at training or evaluation time, and during deployment.

Three established types of PPML are covered in the paper: Federated learning, differential privacy, and encrypted computation.
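Of the three, differential privacy is the simplest to illustrate: calibrated noise is added to a query result so that the presence or absence of any single record is statistically masked. The sketch below shows a basic Laplace mechanism; the epsilon value and the query are illustrative choices, not recommendations from the paper.

    # Basic Laplace mechanism: release a noisy count so no single record can be inferred.
    import numpy as np

    def private_count(records, epsilon=0.5):
        sensitivity = 1.0   # adding or removing one record changes the count by at most 1
        noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
        return len(records) + noise

    patients = list(range(1234))        # placeholder dataset
    print(private_count(patients))      # noisy, privacy-preserving answer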

“Where possible, AI developers should contribute to, use, and otherwise support the work of open-source communities working on PPML, such as OpenMined, Microsoft SEAL, tf-encrypted, tf-federated, and nGraph-HE.”

The researchers, representing some of the most renowned institutions in the world, have come up with a comprehensive package of measures that any organisation involved with AI development can use to provide assurance to governments and the wider public, helping the industry reach its full potential responsibly.

You can find the full preprint paper on arXiv here (PDF)

(Photo by Alexander Sinn on Unsplash)

Intel examines whether AI can recognise faces using thermal imaging (10 January 2020)

Researchers from Intel have published a study examining whether AI can recognise people’s faces using thermal imaging.

Thermal imaging is often used to protect privacy because it obscures personally identifying details such as eye colour. In some places, like medical facilities, it’s often compulsory to use images which obscure such details.

AI is opening up many new possibilities so Intel’s researchers set out to determine whether thermal imaging still offers a high degree of privacy.

Intel's team used two data sets:

  • The first set, known as SC3000-DB, was created using a Flir ThermaCam SC3000 infrared camera. The data set features 766 images of 40 volunteers (21 women and 19 men) who each sat in front of a camera for two minutes.
  • The second set, known as IRIS, was created by the Visual Computing and Image Processing Lab at Oklahoma State University. It features 4,190 images of 30 people and differs from the first set in that it contains various head angles and expressions.

Each image from the data sets was first cropped to contain only the person's face.

A machine learning model then encoded the facial features in each image as numerical vectors. Another model, trained on VGGFace2 – a large dataset of visible-light face images – was used to test whether features learned from visible light could be applied to thermal images.
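The identification step then reduces to comparing those vectors. The sketch below assumes embeddings have already been extracted by a face-recognition model and simply matches a probe vector against a gallery by cosine similarity; the embedding size and values are placeholders, not Intel's data.

    # Match a probe face embedding against a gallery of known identities by cosine similarity.
    # Embeddings here are random placeholders; in the study they would come from a
    # face-recognition model applied to the cropped thermal images.
    import numpy as np

    def cosine_similarity(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    rng = np.random.default_rng(42)
    gallery = {f"volunteer_{i}": rng.normal(size=512) for i in range(40)}  # enrolled subjects
    probe = gallery["volunteer_7"] + rng.normal(scale=0.1, size=512)       # noisy new capture

    best_match = max(gallery, key=lambda name: cosine_similarity(gallery[name], probe))
    print(best_match)                                                      # expected: volunteer_7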

Here are the full results for each data set:

The model trained on visible-light image data performed well in distinguishing among volunteers by extracting their facial features: accuracy of 99.5 percent was observed for the SC3000-DB data set and 82.14 percent for IRIS.

Intel's research shows that thermal imaging may not offer the privacy many currently believe it does, and that it's already possible to identify people from thermal images.

“Many promising visual-processing applications, such as non-contact vital signs estimation and smart home monitoring, can involve private and or sensitive data, such as biometric information about a person’s health,” wrote the researchers.

“Thermal imaging, which can provide useful data while also concealing individual identities, is therefore used for many applications.”

You can find Intel’s full research here.

Intel unwraps its first chip for AI and calls it Spring Hill (21 August 2019)

Intel has unwrapped its first processor designed specifically for artificial intelligence, which is planned for use in data centres.

The new Nervana Neural Network Processor for Inference (NNP-I) processor has a more approachable codename of Spring Hill.

Spring Hill is a modified 10nm Ice Lake processor which sits on a PCB and slots into an M.2 port typically used for storage.

According to Intel, the use of a modified Ice Lake processor allows Spring Hill to handle large workloads and consume minimal power. Two compute cores and the graphics engine have been removed from the standard Ice Lake design to accommodate 12 Inference Compute Engines (ICE).

In a summary, Intel detailed six main benefits it expects from Spring Hill:

  1. Best in class perf/power efficiency for major data inference workloads.
  2. Scalable performance at wide power range.
  3. High degree of programmability w/o compromising perf/power efficiency.
  4. Data centre at scale.
  5. Spring Hill solution – Silicon and SW stack – sampling with definitional partners/customers on multiple real-life topologies.
  6. Next two generations in planning/design.

Intel's first chip for AI comes after the company invested in several Israeli artificial intelligence startups, including Habana Labs and NeuroBlade. The investments formed part of Intel's 'AI Everywhere' strategy, which aims to increase the firm's presence in the market.

Naveen Rao, Intel vice president and general manager, Artificial Intelligence Products Group, said:

“To get to a future state of ‘AI everywhere,’ we’ll need to address the crush of data being generated and ensure enterprises are empowered to make efficient use of their data, processing it where it’s collected when it makes sense and making smarter use of their upstream resources.

Data centers and the cloud need to have access to performant and scalable general purpose computing and specialized acceleration for complex AI applications. In this future vision of AI everywhere, a holistic approach is needed—from hardware to software to applications.”

Facebook has said it will be using Intel’s new Spring Hill processor. Intel already has two more generations of the NNP-I in development.
