IBM Archives - AI News
https://www.artificialintelligence-news.com/tag/ibm/

IBM Research unveils breakthrough analog AI chip for efficient deep learning (11 August 2023)
https://www.artificialintelligence-news.com/2023/08/11/ibm-research-breakthrough-analog-ai-chip-deep-learning/

IBM Research has unveiled a groundbreaking analog AI chip that demonstrates remarkable efficiency and accuracy in performing complex computations for deep neural networks (DNNs).

This breakthrough, published in a recent paper in Nature Electronics, signifies a significant stride towards achieving high-performance AI computing while substantially conserving energy.

Executing deep neural networks on conventional digital computing architectures limits both performance and energy efficiency. These digital systems require constant data transfer between memory and processing units, which slows computation and wastes energy.

To tackle these challenges, IBM Research has harnessed the principles of analog AI, which emulates the way neural networks function in biological brains. This approach involves storing synaptic weights using nanoscale resistive memory devices, specifically phase-change memory (PCM).

PCM devices alter their conductance through electrical pulses, enabling a continuum of values for synaptic weights. This analog method mitigates the need for excessive data transfer, as computations are executed directly in the memory—resulting in enhanced efficiency.
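A rough mental model of in-memory analog compute is a matrix-vector multiply in which the weights live on the chip as device conductances and each output is read out as an accumulated current. The NumPy sketch below is a toy illustration of that idea rather than IBM’s implementation; the noise level and ADC resolution are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def analog_matvec(weights, activations, noise_std=0.02, adc_bits=8):
    """Toy model of one analog in-memory multiply-accumulate step.

    Weights are stored as (slightly noisy) device conductances, inputs are
    applied as voltages, and each column current sums the products, so the
    matrix-vector multiply happens where the weights live. A coarse ADC
    then digitises the analog read-out.
    """
    # Programming noise: each conductance deviates a little from its target.
    conductances = weights + rng.normal(0.0, noise_std, size=weights.shape)

    # The crossbar delivers all dot products at once as summed currents.
    currents = conductances @ activations

    # Quantise the analog result, as the on-chip converters would.
    scale = np.max(np.abs(currents)) or 1.0
    levels = 2 ** (adc_bits - 1) - 1
    return np.round(currents / scale * levels) / levels * scale

W = rng.normal(size=(4, 8))   # a small layer of synaptic weights
x = rng.normal(size=8)        # input activations
print(analog_matvec(W, x))    # approximate W @ x, computed "in memory"
print(W @ x)                  # exact digital reference for comparison
```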

The newly introduced chip is a cutting-edge analog AI solution composed of 64 analog in-memory compute cores.

Each core integrates a crossbar array of synaptic unit cells alongside compact analog-to-digital converters, seamlessly transitioning between analog and digital domains. Furthermore, digital processing units within each core manage nonlinear neuronal activation functions and scaling operations. The chip also boasts a global digital processing unit and digital communication pathways for interconnectivity.

The research team demonstrated the chip’s prowess by achieving an accuracy of 92.81 percent on the CIFAR-10 image dataset—an unprecedented level of precision for analog AI chips.

Its throughput per area, measured in giga-operations per second (GOPS) per unit area, underscored its superior compute efficiency compared with previous in-memory computing chips. The chip’s energy-efficient design, coupled with its enhanced performance, makes it a milestone achievement in AI hardware.

The analog AI chip’s unique architecture and impressive capabilities lay the foundation for a future where energy-efficient AI computation is accessible across a diverse range of applications.

IBM Research’s breakthrough marks a pivotal moment that will help to catalyse advancements in AI-powered technologies for years to come.

(Image Credit: IBM Research)

See also: Azure and NVIDIA deliver next-gen GPU acceleration for AI

IBM and Hugging Face release AI foundation model for climate science (3 August 2023)
https://www.artificialintelligence-news.com/2023/08/03/ibm-hugging-face-ai-foundation-model-climate-science/

In a bid to democratise access to AI technology for climate science, IBM and Hugging Face have announced the release of the watsonx.ai geospatial foundation model.

The geospatial model, built from NASA’s satellite data, will be the largest of its kind on Hugging Face and marks the first-ever open-source AI foundation model developed in collaboration with NASA.

Jeff Boudier, head of product and growth at Hugging Face, highlighted the importance of information sharing and collaboration in driving progress in AI, noting that open-source AI and the open release of models and datasets are fundamental to ensuring the technology benefits as many people as possible.

Climate science faces constant challenges due to rapidly changing environmental conditions, requiring access to the latest data. Despite the abundance of data, scientists and researchers struggle to analyse the vast datasets effectively. NASA estimates that by 2024, there will be 250,000 terabytes of data from new missions.

To address this issue, IBM embarked on a Space Act Agreement with NASA earlier this year—aiming to build an AI foundation model for geospatial data.

By making this geospatial foundation model openly available on Hugging Face, both companies aim to promote collaboration and accelerate progress in climate and Earth science.

Sriram Raghavan, VP at IBM Research AI, commented:

“The essential role of open-source technologies to accelerate critical areas of discovery such as climate change has never been clearer.

By combining IBM’s foundation model efforts aimed at creating flexible, reusable AI systems with NASA’s repository of Earth-satellite data, and making it available on the leading open-source AI platform, Hugging Face, we can leverage the power of collaboration to implement faster and more impactful solutions that will improve our planet.”

The geospatial model, jointly trained by IBM and NASA on Harmonized Landsat Sentinel-2 satellite data (HLS) over one year across the continental United States, has shown promising results. It demonstrated a 15 percent improvement over state-of-the-art techniques using only half the labelled data.

With further fine-tuning, the model can be adapted for various tasks such as deforestation tracking, crop yield prediction, and greenhouse gas detection.
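Because the model is hosted openly on Hugging Face, its weights can be pulled down like any other open model. Below is a minimal sketch using the huggingface_hub client; the repository name is an assumption made for illustration, so check the IBM and NASA organisation pages on the Hub for the actual model.

```python
from huggingface_hub import snapshot_download

# Hypothetical repository name, used here only for illustration; check the
# IBM / NASA organisation on the Hugging Face Hub for the real model repo.
REPO_ID = "ibm-nasa-geospatial/Prithvi-100M"

# Pull the weights and configuration locally, ready for fine-tuning on a
# downstream task such as flood mapping or crop classification.
local_dir = snapshot_download(repo_id=REPO_ID)
print("Model files downloaded to:", local_dir)
```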

IBM’s collaboration with NASA in building the AI model aligns with NASA’s decade-long Open-Source Science Initiative, promoting a more accessible and inclusive scientific community. NASA, along with other federal agencies, has designated 2023 as the Year of Open Science, celebrating the benefits of sharing data, information, and knowledge openly.

Kevin Murphy, Chief Science Data Officer at NASA, said:

“We believe that foundation models have the potential to change the way observational data is analysed and help us to better understand our planet.

By open-sourcing such models and making them available to the world, we hope to multiply their impact.”

The geospatial model leverages IBM’s foundation model technology and is part of IBM’s broader initiative to create and train AI models with transferable capabilities across different tasks.

In June, IBM introduced watsonx, an AI and data platform designed to scale and accelerate the impact of advanced AI with trusted data. A commercial version of the geospatial model, integrated into IBM watsonx, will be available through the IBM Environmental Intelligence Suite (EIS) later this year.

By leveraging the power of open-source technologies, this latest collaboration aims to address climate challenges effectively and contribute to a more sustainable future for our planet.

(Photo by Markus Spiske on Unsplash)

See also: Jay Migliaccio, IBM Watson: On leveraging AI to improve productivity

Wimbledon to feature AI commentary and draw analysis (22 June 2023)
https://www.artificialintelligence-news.com/2023/06/22/wimbledon-feature-ai-commentary-draw-analysis/

IBM and the All England Lawn Tennis Club have announced new AI-powered features for the Wimbledon digital fan experience that will debut at this year’s Championships.

The features include generative AI commentary and AI draw analysis, aimed at boosting engagement and insight for tennis enthusiasts.

Usama Al-Qassab, Marketing & Commercial Director at The All England Club, said:

“We are constantly innovating with our partners at IBM to provide Wimbledon fans, wherever they are in the world, with an insightful and engaging digital experience of The Championships.

This year, we’re introducing new features for our digital platforms that use the latest AI technology from IBM to help fans gain even more insight into the singles draw and access commentary on a wider variety of matches through our match highlights videos.”

AI commentary

The AI commentary feature, developed in collaboration with IBM iX, offers fans watching match highlights videos the option to receive audio commentary and captions of key moments.

The tool aims to provide a more insightful experience when catching up on matches through the Wimbledon App and wimbledon.com. This introduction marks a step towards making commentary available for matches outside of Wimbledon’s Show Courts, which already have live human commentary.

Experts from IBM iX and The All England Club worked together to train the AI using foundation models from IBM’s enterprise AI and data platform, watsonx.

The generative AI employed in this feature produces narration with diverse sentence structures and vocabulary, enhancing the informative and engaging nature of the clips.

AI draw analysis

Another innovative feature introduced this year is the AI draw analysis, a first of its kind in tennis.

Leveraging AI, this feature provides a statistic to determine the favorability of the path to the final for each player in the singles draw. Factors such as match-ups against potential future opponents and the player’s position in the draw compared to competitors contribute to the player’s draw favorability rating.

This unique insight enables tennis fans to uncover anomalies and potential surprises in the singles draw that may not be evident solely by considering the players’ rankings. The feature aims to inspire more debate and engagement within the fan community.
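IBM has not published the formula behind the rating, but the general idea of scoring a player’s projected path to the final can be shown with a toy calculation. Every number and weighting below is invented purely for illustration.

```python
# Toy illustration of a "draw favorability" style metric: score a player's
# projected path to the final from per-round win-probability estimates.
# All numbers are invented; IBM's actual model and weights are not public.

def draw_favorability(win_probabilities):
    """Return a 0-100 score from estimated win probabilities per round."""
    path_probability = 1.0
    for p in win_probabilities:
        path_probability *= p          # chance of surviving every round
    # Normalise by the number of rounds so paths of equal length compare fairly.
    return round(100 * path_probability ** (1 / len(win_probabilities)), 1)

# Projected rounds 1-7 for two hypothetical players in the same half.
player_a = [0.95, 0.90, 0.85, 0.70, 0.65, 0.55, 0.50]
player_b = [0.97, 0.92, 0.88, 0.80, 0.60, 0.45, 0.40]

print("Player A favorability:", draw_favorability(player_a))
print("Player B favorability:", draw_favorability(player_b))
```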

Jonathan Adashek, SVP of Marketing and Communications for IBM, commented:

“IBM is bringing new layers of insight and engagement to the 2023 Championships through the use of innovative new tools, powered by foundation models and generative AI from watsonx.

We’ve seen first-hand how these technologies have the power to help major sporting events like Wimbledon to grow their audiences through outstanding digital experiences. 

The AI and data platform that IBM is using to create unique fan experiences for Wimbledon is the same technology that we’re using to drive business transformation with clients across all sectors and industries.”

These new features complement the existing suite of AI-powered tools available on the Wimbledon App and wimbledon.com. The suite includes the IBM Power Index Leaderboard, IBM Match Insights, and Personalized Highlights Reels and Recommendations.

These digital features leverage IBM’s Watson AI technology to analyse over 100,000 data points from every shot played throughout the tournament, providing fans with a deeper understanding of the players, their opponents, and likely outcomes. The constant updates and tailored insights keep fans engaged throughout the tournament.

Wimbledon will take place from July 3 to July 16, 2023. Fans can witness these AI-powered features in action by visiting wimbledon.com or downloading the Wimbledon App from the App Store or Play Store.

(Photo by Ariv Gupta on Unsplash)

Jay Migliaccio, IBM Watson: On leveraging AI to improve productivity (15 May 2023)
https://www.artificialintelligence-news.com/2023/05/15/jay-migliaccio-ibm-watson-on-leveraging-ai-to-improve-productivity/

IBM has been refining its AI solutions for decades and knows a thing or two about helping businesses leverage the technology to improve productivity.

In 1997, IBM’s Deep Blue supercomputer was used to beat World Chess Champion Garry Kasparov. At the time, all too familiar headlines suggested that computers would soon replace humans. Over two decades later, AI has proven to be an assistive tool that benefits us every day.

IBM Watson’s first commercial application was announced a little over a decade ago in February 2013 for utilisation management decisions in lung cancer treatment. In the years since, we’ve seen it used to deliver game-changing advancements in healthcare, weather forecasting, education, science, and much more.

AI News caught up with Jay Migliaccio, Senior Product Manager for Watson Orchestrate, to learn how IBM is now using its vast experience to help businesses with their digital transformations.

AI News: So, Jay, can you tell me how IBM is helping businesses to improve the productivity of their workforces?

Jay Migliaccio: Yes, Ryan. Thanks so much for the invite and for asking me here.

IBM is expanding its suite of offerings in the area of digital labour. Digital labour leverages AI and automation to help workers become more productive. And, much like human labour, digital labour performs work on business systems through “skills”.

Digital labour skills enable digital labour to interact with business applications, much like you and I would interact with a system of record or system of engagement. We can do this now through digital labour. And, what’s new and unique, is that digital labour leverages the human-centric interaction style.

So, we’ve introduced natural language and we’ve also introduced intelligent orchestration to be able to execute not just single skills, but actually multiple skills to be able to achieve higher-level tasks.

AN: Generative AI is a hot topic in the market at the moment. Do you see that being used practically in the workplace and what risks should businesses be aware of?

JM: Yeah, great question. I actually do use it myself in the workplace, I occasionally have to develop software tools and simple scripts and I have had it generate a number of scripts for me successfully. So I’m impressed not just with its ability to generate verbal and written content, but also code content. I for sure believe it will become increasingly useful in the workplace.

Current generative AI platforms have been trained on the internet, so remember your results may vary. I know anytime I Google or search for things on the internet I take the results with a grain of salt.

I believe that enterprises, as they go to look and adopt generative AI systems, they’ll lean more towards AI that they can trust. Therefore, we need to work on being able to create that trust element in generative AI solutions.

AN: What is the value of Watson Orchestrate for developers?

JM: When we talk about developers, I’m talking about automation developers. And that’s by and large developers that are integrating apps and business apps and business systems to work together.

For the most part, those developers have been integrating business systems to business systems. Now what we can do with Watson Orchestrate is we can introduce the human into the loop.

These automation developers now have a platform where they can build and integrate their automation workflows and they can bring a human experience into these automation workflows for everyday human workers.

Watson Orchestrate provides a platform for creating human-centric workflow automation, designed to interact with humans in our native communication style which is spoken or written word.

AN: How does Watson Orchestrate learn from user interactions?

JM: There are a couple of ways Watson Orchestrate is monitoring the behaviour of humans and learning from them. 

Perhaps most important is its ability to interpret the natural language through which humans are communicating. Today it’s the written word, but in the future spoken word. Watson Orchestrate can not just do a pattern match based on existing known sentences, but it can actually understand the intent of those utterances. 

Additionally, it can extract entities from those utterances. So, when you use proper nouns in a sentence, it can understand that’s an entity that it would use as part of an automation. It can match the intent of the utterance to existing skills that it has and can react accordingly. It can understand the intent of the utterance and then take action on those skills. Increasingly, it can sequence multiple skills together.

Also, we are working on systems for empowering Watson Orchestrate to monitor the user’s behaviour. And, just like any modern SaaS application today that has recommendations based on your behaviour, we’re working on recommendation engines to recommend to employees how they can use Watson Orchestrate to be more productive in the future.
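Watson Orchestrate’s internals are not public, but the intent-plus-entities pattern Migliaccio describes above can be sketched generically. The skills, phrases, and matching logic below are invented for illustration and are not the product’s actual implementation or API.

```python
# Generic illustration of the intent -> entities -> skill pattern described
# above. This is NOT Watson Orchestrate's implementation or API; the skills,
# phrases, and matching logic are invented for the sake of the example.

SKILLS = {
    "schedule_meeting": ["schedule a meeting", "set up a call", "book time"],
    "send_email": ["send an email", "email", "write to"],
}

def match_intent(utterance):
    """Naive intent matching: pick the skill whose phrases overlap most."""
    scores = {
        skill: sum(phrase in utterance.lower() for phrase in phrases)
        for skill, phrases in SKILLS.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

def extract_entities(utterance):
    """Toy entity extraction: treat capitalised mid-sentence words as entities."""
    return [w.strip(",.") for w in utterance.split()[1:] if w[:1].isupper()]

utterance = "Send an email to Maria about the Q3 roadmap"
print(match_intent(utterance), extract_entities(utterance))
# -> send_email ['Maria', 'Q3']
```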

AN: Talking about AI more generally, what new ways of working are today’s advancements enabling?

JM: As I just alluded to, we’re increasingly empowering systems to understand human natural language to a much more complex and sophisticated extent. Natural language interpretation has grown way beyond the basic pre-programmed bot experience.

I’m sure everybody has had an experience on a website where there’s a bot responding to your basic questions. What we’re trying to do is make that bot much more intelligent. The current generation of digital labour can understand your intent, extract entities from your utterances, and, most importantly, take action on your behalf.

AN: On the flip side, what are some of the main dangers of automation tools and how do we overcome those?

JM: I’m not sure if this is a specific category of danger, but I suppose I would put it under the law of unintended consequences. Anytime you work with technology, there can be outcomes that you don’t expect.

For example, if we think about the automobile as an automation tool for moving humans around – the intent, of course, was to move a human from A to B faster, and maybe more reliably. But the net result is occasionally we have accidents.

Much like the way we build transportation systems to constrain and reduce the potential for accidents, we have to do the same thing in our business systems with digital labour. Certainly, we’re going to want to start small and just do very selective, very specific tasks that are well-curated and well-defined.

We’ll need to create guardrails that guard against unintended and unexpected behaviour. One of the ways we’re doing this in Watson Orchestrate is to empower the digital labour to act on the user’s behalf and therefore leverage the user’s credentials when interacting with business systems.

As an employee, I’m given certain access to a business system based on my role. Therefore, we know when the digital employee performs actions on my behalf it also has those existing restrictions and permissions for those business systems.

Another option is monitoring behaviour and monitoring for unintended consequences. And, lastly, integrating governance and creating policies that permit or restrict the behaviour of digital employees.

AN: Are you a believer in the metaverse? If so, how much work do you think we will be doing in it?

JM: Yeah, great question. Metaverse for me is a very loose term. Here we are digitally speaking – we’ve never met before, you could be an avatar for all I know. So, in that sense, I’m a believer in the metaverse.

For me, like most innovative technologies, it will start on the fringe and work its way into the mainstream. You can look at entertainment and gaming and see very metaverse-like experiences being used there.

I’ve seen examples of the metaverse being used for deep meditative experiences. If you want to go into deep meditation, you can put on your virtual headsets and enter a metaverse world that is very different from the physical world we live in.

And I also can see the metaverse being initially used for education purposes. I think it’s a great way – a sort of low-risk way – to introduce people to new environments and new ideas at scale.

I don’t think we’re gonna go to the metaverse to go to work. I don’t see that as something coming in the near term.

AN: I can only promise I’m not an avatar for the time being. I might bump into you at Digital Transformation Week, next week. You’ll obviously be in attendance, what will you be sharing with the audience at the event?

JM: Yeah, Digital Transformation Week I will be talking about our view on the digital labour market and some of our solutions. I will also be sharing some of the stories about the early adopters of IBM’s digital labour solutions.

You can watch our full interview with Jay below:

IBM is a headline sponsor of Digital Transformation Week on 17-18 May 2023 and will be sharing its expertise with attendees. Swing by IBM’s booth at stand #236 and check out Jay Migliaccio’s keynote at 10:30am on day one.

IBM’s AI-powered Mayflower ship crosses the Atlantic (6 June 2022)
https://www.artificialintelligence-news.com/2022/06/06/ibm-ai-powered-mayflower-ship-crosses-the-atlantic/

A groundbreaking AI-powered ship designed by IBM has successfully crossed the Atlantic, albeit not quite as planned.

The Mayflower – named after the ship which carried Pilgrims from Plymouth, UK to Massachusetts, US in 1620 – is a 50-foot crewless vessel that relies on AI and edge computing to navigate the often harsh and unpredictable oceans.

IBM’s Mayflower has been attempting to autonomously complete the voyage that its predecessor did over 400 years ago but has been beset by various problems.

The initial launch was planned for June 2021 but a number of technical glitches forced the vessel to return to Plymouth.

Back in April 2022, the Mayflower set off again. This time, an issue with the generator forced the boat to divert to the Azores Islands in Portugal.

The Mayflower was patched up and pressed on until late May when a problem developed with the charging circuit for the generator’s starter batteries. This time, a course for Halifax, Nova Scotia was charted.

More than five weeks after departing Plymouth, the modern Mayflower is now docked in Halifax. While it’s yet to reach its final destination, the Mayflower has successfully crossed the Atlantic (hiccups aside).

While mechanically the ship leaves a lot to be desired, IBM says the autonomous systems have worked flawlessly—including the AI captain developed by MarineAI.

It’s beyond current AI systems to instruct and control robotics to carry out mechanical repairs for any number of potential failures. However, the fact that Mayflower’s onboard autonomous systems have been able to successfully navigate the ocean and report back mechanical issues is an incredible achievement.

“It will be entirely responsible for its own navigation decisions as it progresses so it has very sophisticated software on it—AIs that we use to recognise the various obstacles and objects in the water, whether that’s other ships, boats, debris, land obstacles, or even marine life,” Robert High, VP and CTO of Edge Computing at IBM, told Edge Computing News in an interview.

IBM designed Mayflower 2.0 with marine research nonprofit Promare. The ship uses a wind/solar hybrid propulsion system and features a range of sensors for scientific research on its journey including acoustic, nutrient, temperature, and water and air samplers.

You can find out more about the Mayflower and view live data and webcams from the ship here.

IBM enhances Watson Discovery’s natural language processing capabilities (10 November 2021)
https://www.artificialintelligence-news.com/2021/11/10/ibm-enhances-watson-discovery-natural-language-processing-capabilities/

IBM has announced enhancements to the natural language processing (NLP) capabilities of Watson Discovery.

Watson Discovery is an AI-powered intelligent search and text-analytics platform that can retrieve critical information buried in enterprise data.

In one case study, Woodside Energy had no way to retrieve the 30 years’ worth of valuable engineering and drilling knowledge that was buried in unstructured documentation. Using the existing NLP capabilities of Watson Discovery, the firm reportedly cut research time by more than 75 percent.
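For developers, querying a Discovery project programmatically looks roughly like the sketch below, which uses the ibm-watson Python SDK. The API key, service URL, project ID, and query text are placeholders, and the version string may need updating for your own instance.

```python
from ibm_watson import DiscoveryV2
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

# Placeholder credentials and identifiers; substitute your own instance.
authenticator = IAMAuthenticator("YOUR_API_KEY")
discovery = DiscoveryV2(version="2020-08-30", authenticator=authenticator)
discovery.set_service_url("https://api.us-south.discovery.watson.cloud.ibm.com")

# Run a natural-language query against the collections in a Discovery project.
response = discovery.query(
    project_id="YOUR_PROJECT_ID",
    natural_language_query="drilling reports mentioning casing failures",
    count=5,
).get_result()

for result in response.get("results", []):
    print(result.get("document_id"), result.get("result_metadata"))
```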

Among the new enhancements planned for Watson Discovery are:

  • Pre-trained document structure understanding: Watson Discovery’s Smart Document Understanding feature now includes a new pre-trained model designed to automatically understand the visual structure and layout of a document without additional training from a developer or data scientist.
  • Automatic text pattern detection: A new advanced pattern creation feature is available in beta that helps users to quickly identify business-specific text patterns within their documents. It can start learning the underlying text patterns from as little as two examples and then refines the pattern based on user feedback.
  • Advanced NLP customisation capabilities: With a new custom entity extractor feature, IBM is simplifying the process of training NLP models to identify highly-customised, business-specific words by reducing the data prep effort, simplifying labeling with active learning and bulk annotation capabilities, and enabling simple model deployment to accelerate training time.

“The stream of innovation coming to IBM Watson from IBM Research is why global businesses in the fields of financial services, insurance, and legal services turn to IBM to help detect emerging business trends, gain operational efficiency and empower their workers to uncover new insights,” said Daniel Hernandez, General Manager of Data and AI, IBM.

“The pipeline of natural language processing innovations we’re adding to Watson Discovery can continue to provide businesses with the capabilities to more easily extract the signal from the noise and better serve their customers and employees.”

(Image Credit: IBM)

GTC 2021: Nvidia debuts accelerated computing libraries, partners with Google, IBM, and others to speed up quantum research (9 November 2021)
https://www.artificialintelligence-news.com/2021/11/09/gtc-2021-nvidia-debuts-accelerated-computing-libraries-partners-with-google-ibm-and-others-to-speed-up-quantum-research/

Nvidia has unveiled 65 new and updated software development kits at GTC 2021, alongside a partnership with industry leaders to speed up quantum research.

The company’s roster of accelerated computing kits now exceeds 150 and supports the almost three million developers in NVIDIA’s Developer Program.

Four of the major new SDKs are:

  • ReOpt – Automatically optimises logistical processes using advanced, parallel algorithms. This includes vehicle routes, warehouse selection, and fleet mix. The dynamic rerouting capabilities – shown in an on-stage demo – can reduce travel time, save fuel costs, and minimise idle periods.
  • cuNumeric – Implements the popular NumPy application programming interface and enables scaling to multi-GPU and multi-node systems with zero code changes.
  • cuQuantum – Designed for quantum computing, it enables large quantum circuits to be simulated faster. This enables quantum researchers to simulate areas such as near-term variational quantum algorithms for molecules, error correction algorithms to identify fault tolerance, and accelerate popular quantum simulators from Atos, Google, and IBM.
  • CUDA-X accelerated DGL container – Helps developers and data scientists working on graph neural networks to quickly set up a working environment. The container makes it easy to work in an integrated, GPU-accelerated GNN environment combining DGL and PyTorch.
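Of the new SDKs listed above, cuNumeric is the easiest to picture in code because it aims to be a drop-in replacement for NumPy. The sketch below assumes the cunumeric package is installed on a supported GPU system; package and API details may have changed since launch.

```python
# cuNumeric aims to be a drop-in replacement for NumPy: in this sketch the
# only change from ordinary NumPy code is the import line. Package and API
# details are as published at launch and may have changed since.
import cunumeric as np  # instead of: import numpy as np

a = np.random.rand(1000, 1000)
b = np.random.rand(1000, 1000)

# The same NumPy-style expression is now distributed across available GPUs.
c = (a @ b).sum()
print(c)
```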

Some existing AI-related SDKs that have received notable updates are:

  • Deepstream 6.0 – introduces a new graph composer that makes computer vision accessible with a visual drag-and-drop interface.
  • Triton 2.15, TensorRT 8.2, and cuDNN 8.4 – assist with the development of deep neural networks by providing new optimisations for large language models and inference acceleration for gradient-boosted decision trees and random forests.
  • Merlin 0.8 – boosts recommendation systems with its new capabilities for predicting a user’s next action with little or no user data and support for models larger than GPU memory.

Accelerating quantum research

Nvidia has established a partnership with Google, IBM, and a number of small companies, national labs, and university research groups to accelerate quantum research.

“It takes a village to nurture an emerging technology, so Nvidia is collaborating with Google Quantum AI, IBM, and others to take quantum computing to the next level,” explained the company in a blog post.

Nvidia’s initial contribution to the partnership is cuStateVec, the first library in the new cuQuantum SDK. It accelerates the state vector simulation method, which tracks the full state of the system in memory and can scale to tens of qubits.

cuStateVec has been integrated into Google Quantum AI’s state vector simulator qsim and can be used through the open-source framework Cirq.
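In practice, that means an ordinary Cirq circuit can be handed to qsim with GPU acceleration switched on. The sketch below is a minimal illustration; the qsimcirq option names (notably use_gpu and gpu_mode for selecting the cuStateVec backend) are assumptions that may differ between versions.

```python
import cirq
import qsimcirq

# A small entangling circuit on four qubits.
qubits = cirq.LineQubit.range(4)
circuit = cirq.Circuit(
    [cirq.H(q) for q in qubits],
    [cirq.CNOT(qubits[i], qubits[i + 1]) for i in range(3)],
    cirq.measure(*qubits, key="m"),
)

# Ask qsim to simulate on GPU; gpu_mode=1 requests the cuStateVec backend in
# recent qsimcirq releases (option names can differ between versions).
options = qsimcirq.QSimOptions(use_gpu=True, gpu_mode=1)
simulator = qsimcirq.QSimSimulator(qsim_options=options)

result = simulator.run(circuit, repetitions=100)
print(result.histogram(key="m"))
```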

“Quantum computing promises to solve tough challenges in computing that are beyond the reach of traditional systems,” commented Catherine Vollgraff Heidweiller at Google Quantum AI.

“This high-performance simulation stack will accelerate the work of researchers around the world who are developing algorithms and applications for quantum computers.”

In December, cuStateVec will also be integrated with Qiskit Aer—a high-performance simulator framework for quantum circuits from IBM.

Among the national labs using cuQuantum to accelerate their research are Oak Ridge, Argonne, Lawrence Berkeley National Laboratory, and Pacific Northwest National Laboratory. University research groups include those at Caltech, Oxford, and MIT.

Nvidia is helping developers to get started by creating a ‘DGX quantum appliance’ that puts its simulation software in a container optimised for its DGX A100 systems. The software will be available early next year via the company’s NGC Catalog.

(Image Credit: Nvidia)

AI, Captain: IBM’s edge AI-powered ship Mayflower sets sail (18 June 2021)
https://www.artificialintelligence-news.com/2021/06/18/ai-captain-ibm-edge-ai-powered-ship-mayflower-sets-sail/

IBM’s fully-autonomous edge AI-powered ship Mayflower has set off on its crewless voyage from Plymouth, UK to Plymouth, USA.

The ship is named after the Mayflower vessel which transported pilgrim settlers from Plymouth, England to Plymouth, Massachusetts in 1620. On its 400th anniversary, it was decided that a Mayflower for the 21st century should be built.

Mayflower 2.0 is a truly modern vessel packed with the latest technological advancements. Onboard edge AI computing enables the ship to carry out scientific research while navigating the harsh environment of the ocean—often without any connectivity.

“It will be entirely responsible for its own navigation decisions as it progresses so it has very sophisticated software on it—AIs that we use to recognise the various obstacles and objects in the water, whether that’s other ships, boats, debris, land obstacles, or even marine life,” Robert High, VP and CTO of Edge Computing at IBM, recently told Edge Computing News in an interview.

The Weather Company, which IBM acquired back in 2016, has been advising on the departure window for Mayflower’s voyage. Earlier this week, the Mayflower was given the green light to set sail.

Mayflower’s AI captain is developed by MarineAI and uses IBM’s artificial intelligence capabilities. A fun fact: the AI had to be trained specifically to ignore seagulls, as they could appear to be large objects and lead the Mayflower to take unnecessary action to manoeuvre around them.

The progress of Mayflower can be viewed using a dashboard built by IBM’s digital agency iX.

A livestream from Mayflower’s onboard cameras is also available, but it can understandably be a little temperamental. IBM partnered with Videosoft, a company that specialises in live-streaming in challenging environments, to enable streaming over connections as slow as 6kbps. However, there are times when Mayflower will be fully disconnected, which even the best algorithms can’t overcome.

If the livestream is currently available, you can view it here.

Unlike its predecessor, Mayflower 2.0 won’t be reliant solely on wind power and will employ a wind/solar hybrid propulsion system with a backup diesel generator. The new ship also trades in a compass and nautical charts for navigation in favour of a state-of-the-art GNSS positioning system with SATCOM, RADAR, and LIDAR.

A range of sensors are onboard for scientific research including acoustic, nutrient, temperature, and water and air samplers. Edge devices will store and analyse data locally until connectivity is available. When a link has been established, the data will be uploaded to edge nodes onshore.

Mayflower is a fascinating project and we look forward to following its voyage. AI News will keep you updated on any relevant developments.

(Image Credit: IBM)

IBM’s Project CodeNet wants to teach AI how to code (11 May 2021)
https://www.artificialintelligence-news.com/2021/05/11/ibm-project-codenet-wants-teach-ai-how-code/

IBM has announced Project CodeNet, a large dataset that aims to help teach AI how to understand and even write code.

Project CodeNet was announced at IBM’s Think conference this week and claims to be the largest open-source dataset for code (approximately 10 times the size of the next largest).

CodeNet features 500 million lines of code, 14 million examples, and spans 55 programming languages including Python, C++, Java, Go, COBOL, Pascal, and more.
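As a purely illustrative example of working with a dataset at this scale, the sketch below tallies files and lines of code per language. The directory layout is an assumption, so consult the Project CodeNet README for the dataset’s actual structure.

```python
from collections import Counter
from pathlib import Path

# Assumed layout for illustration only: <root>/<problem_id>/<language>/<file>.
# Consult the Project CodeNet README for the dataset's actual structure.
ROOT = Path("Project_CodeNet/data")

files_per_language = Counter()
lines_per_language = Counter()

for source_file in ROOT.glob("*/*/*"):
    if not source_file.is_file():
        continue
    language = source_file.parent.name
    files_per_language[language] += 1
    lines_per_language[language] += len(
        source_file.read_text(errors="ignore").splitlines()
    )

for language, count in files_per_language.most_common(10):
    print(f"{language}: {count} files, {lines_per_language[language]} lines")
```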

Projects such as OpenAI’s GPT-3 are showing how adept AI has become at writing human languages, but writing code has so far been left to us humans. CodeNet aims to change that.

For at least the foreseeable future, projects like GPT-3 will remain assistive tools that boost human productivity: they provide a solid first draft that still requires editing to iron out errors and to cover the areas where humans retain an edge, such as creativity, emotion, and compassion.

CodeNet will be similar, at least initially: by improving an AI’s understanding of code, it will lead to better tools that speed up how humans write and check code.

“Given its wealth of programs written in a multitude of languages, we believe Project CodeNet can serve as a benchmark dataset for source-to-source translation and do for AI and code what the ImageNet dataset did years ago for computer vision,” says IBM.

US entrepreneur Marc Andreessen famously, and correctly, wrote in 2011 that “software is eating the world”. Fast-forward to today and even cars feature over 100 million lines of code (a figure growing rapidly with the advent of autonomous vehicles).

IBM says one of its large automotive clients recently approached the company to help update a $200 million asset consisting of 3,500 multi-generation Java files. These files contained over one million lines of code.

By applying its AI for Code stack, IBM reduced the client’s year-long ongoing code migration process down to just four weeks.

That example is sure to be the first of many in the years to come which have been greatly sped up, and improved, thanks to Project CodeNet.

You can find the full Project CodeNet dataset on GitHub here.

(Photo by ThisisEngineering RAEng on Unsplash)

IBM aims to boost AI hardware performance with new Composer tool (15 April 2021)
https://www.artificialintelligence-news.com/2021/04/15/ibm-aims-boost-ai-hardware-performance-composer-tool/

IBM’s new AI Hardware Composer tool aims to boost the performance of analog AI hardware.

The tool is being released on the second anniversary of the IBM Research AI Hardware Center. IBM’s pioneering centre launched in 2019 with the aim of improving AI hardware compute efficiency by 2.5 times every year for a decade.

(Credit: IBM)

AI Hardware Composer claims to help both novice and experienced developers to create neural networks and tune analog devices to build accurate AI models.

The new tool can be used with IBM’s existing Analog Hardware Acceleration Kit (AIHWKIT), an open-source Python toolkit for exploring and using the capabilities of in-memory computing devices in the context of artificial intelligence.
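For a flavour of what the toolkit exposes, the sketch below builds a single analog fully-connected layer and runs a forward pass through the simulated crossbar. Import paths and defaults are as published for AIHWKIT and may vary between versions.

```python
import torch
from aihwkit.nn import AnalogLinear
from aihwkit.simulator.configs import SingleRPUConfig

# One analog fully-connected layer backed by a simulated resistive crossbar.
# SingleRPUConfig uses the toolkit's default device model; sizes are arbitrary.
layer = AnalogLinear(4, 2, bias=True, rpu_config=SingleRPUConfig())

x = torch.rand(3, 4)   # a small batch of inputs
print(layer(x))        # forward pass executed on the simulated analog tiles
```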

IBM says AI researchers can use Composer and AIHWKIT to test the company’s neural network optimisation tools and design analog hardware-aware models.

The AI Hardware Center has, so far, been exceeding its 2.5 times per year performance improvement target.

In a paper presented at the 2021 International Solid-State Circuits Virtual Conference (ISSCC), IBM explained how it doubled its training projection and outperformed its inference target sixfold.

Six companies joined IBM in founding the AI Hardware Center—Samsung, Synopsys, Tokyo Electron Limited, Applied Materials, SUNY Polytechnic Institute, and Rensselaer Polytechnic Institute (RPI). As of writing, the centre now boasts 16 industry and academic members.

(Image Credit: IBM)
