AI Virtual Assistants News | Latest Virtual Assistants Updates | AI News
https://www.artificialintelligence-news.com/categories/ai-applications/ai-virtual-assistants/

Mark Zuckerberg: AI will be built into all of Meta’s products
https://www.artificialintelligence-news.com/2023/06/09/mark-zuckerberg-ai-built-into-all-meta-products/
Fri, 09 Jun 2023 14:41:18 +0000

Meta CEO Mark Zuckerberg unveiled the extent of the company’s AI investments during an internal company meeting.

The meeting included discussions about new products, such as chatbots for Messenger and WhatsApp that can converse with different personas. Additionally, Meta announced new features for Instagram, including the ability to modify user photos via text prompts and create emoji stickers for messaging services.

These developments come at a crucial time for Meta, as the company has faced financial struggles and an identity crisis in recent years. Investors criticised Meta for focusing too heavily on its metaverse ambitions and not paying enough attention to AI.

Meta’s decision to focus on AI tools follows in the footsteps of its competitors, including Google, Microsoft, and Snapchat, who have received significant investor attention for their generative AI products. Unlike the aforementioned rivals, Meta is yet to release any consumer-facing generative AI products.

To address this gap, Meta has been reorganising its AI divisions and investing heavily in infrastructure to support its AI product needs.

Zuckerberg expressed optimism during the company meeting, stating that advancements in generative AI have made it possible to integrate the technology into “every single one” of Meta’s products. This signifies Meta’s intention to leverage AI across its platforms, including Facebook, Instagram, and WhatsApp.

In addition to consumer-facing tools, Meta also announced a productivity assistant called Metamate for its employees. This assistant is designed to answer queries and perform tasks based on internal company information.

Meta is also exploring open-source models, allowing users to build their own AI-powered chatbots and technologies. However, critics and competitors have raised concerns about the potential misuse of these tools, as they can be utilised to spread misinformation and hate speech on a larger scale.

Zuckerberg addressed these concerns during the meeting, emphasising the value of democratising access to AI. He expressed hope that users would be able to develop AI programs independently in the future, without relying on frameworks provided by a few large technology companies.

Despite the increased focus on AI, Zuckerberg reassured employees that Meta would not be abandoning its plans for the metaverse, indicating that both AI and the metaverse would remain key areas of focus for the company.

The success of these endeavours will determine whether Meta can catch up with its competitors and solidify its position among tech leaders in the rapidly-evolving landscape.

(Photo by Mariia Shalabaieva on Unsplash)

Related: Meta’s open-source speech AI models support over 1,100 languages

Italy will lift ChatGPT ban if OpenAI fixes privacy issues
https://www.artificialintelligence-news.com/2023/04/13/italy-lift-chatgpt-ban-openai-fixes-privacy-issues/
Thu, 13 Apr 2023 15:18:41 +0000

Italy’s data protection authority has said that it’s willing to lift its ChatGPT ban if OpenAI meets specific conditions.

The Guarantor for the Protection of Personal Data (GPDP) announced last month that it was blocking access to OpenAI’s ChatGPT. The move was part of an ongoing investigation into whether the chatbot violated Italy’s data privacy laws and the EU’s infamous General Data Protection Regulation (GDPR).

The GPDP was concerned that ChatGPT could recall and emit personal information, such as phone numbers and addresses, from input queries. Additionally, officials were worried that the chatbot could expose minors to inappropriate answers that could potentially be harmful.

The GPDP says it will lift the ban on ChatGPT if its creator, OpenAI, enforces rules protecting minors and users’ personal data by 30th April 2023.

OpenAI has been asked to explain on its website how ChatGPT stores and processes people’s data, and to require users to confirm that they are 18 or older before using the software.

An age verification process will be required when registering new users and children below the age of 13 must be prevented from accessing the software. People aged 13-18 must obtain consent from their parents to use ChatGPT.

The company must also ask for explicit consent to use people’s data to train its AI models and allow anyone – whether they’re a user or not – to request any false personal information generated by ChatGPT to be corrected or deleted altogether.

The age verification system itself must be in place by 30th September 2023, or the ban will be reinstated.

This move is part of a larger trend of increased scrutiny of AI technologies by regulators around the world. ChatGPT is not the only AI system that has faced regulatory challenges.

Regulators in Canada and France have also launched investigations into whether ChatGPT violates data privacy laws after receiving official complaints. Meanwhile, Spain has urged the EU’s privacy watchdog to launch a deeper investigation into ChatGPT.

The international scrutiny of ChatGPT and similar AI systems highlights the need for developers to be proactive in addressing privacy concerns and implementing safeguards to protect users’ personal data.

(Photo by Levart_Photographer on Unsplash)

Related: AI think tank calls GPT-4 a risk to public safety

Google plays it safe with initial Bard rollout
https://www.artificialintelligence-news.com/2023/03/21/google-plays-it-safe-initial-bard-rollout/
Tue, 21 Mar 2023 16:51:45 +0000

Google has begun rolling out early access to its Bard chatbot in the US and UK.

The ChatGPT rival was announced via a blog post in February, seemingly to get ahead of Microsoft’s own big reveal event the next day.

Microsoft’s plans to integrate a new version of ChatGPT into its Bing search engine set off alarm bells at Google. In response, Google CEO Sundar Pichai invited the company’s founders – Larry Page and Sergey Brin – to return for a series of meetings to review its AI strategy.

In stark contrast to Microsoft’s polished event, a last-minute event held by Google the day after was a mess. Previous announcements were rehashed, a presenter’s phone went missing, and Pichai was nowhere to be seen.

Googlers took to the internal forum ‘Memegen’ to criticise Pichai’s leadership. One wrote, “Dear Sundar, the Bard launch and the layoffs were rushed, botched, and myopic” and called on Pichai to “please return to taking a long-term outlook.”

A Bard promo video then showed an incorrect answer, sending Google’s shares plummeting and wiping $120 billion from the company’s market value.

Google AI chief Jeff Dean had reportedly even warned colleagues ahead of the reveal that the company could not rush products like Bard to market because it carries more “reputational risk” if it provides wrong information.

The contrast between Microsoft’s and Google’s announcements was stark. Microsoft looked unstoppable while Google appeared to be in absolute chaos. However, things shifted in the coming weeks.

Users began reporting “unhinged” responses from Microsoft’s chatbot—including not just incorrect information, but also the bot appearing to be in a depressive state, wanting to be human, and even claiming to spy on people through their webcams.

Suddenly, that one error in Bard’s promo video didn’t look so bad. Furthermore, it justified Google’s decision to hold fire on releasing Bard to the public.

Google now appears to be comfortable with Bard being ready for public testing:

“Our work on Bard is guided by our AI Principles, and we continue to focus on quality and safety,” wrote Google in a blog post.

“We’re using human feedback and evaluation to improve our systems, and we’ve also built in guardrails, like capping the number of exchanges in a dialogue, to try to keep interactions helpful and on topic.”

In a quick side-by-side test, Bard is subjectively more concise with its responses than Bing, while the latter is slightly more comprehensive.

However, there are currently a few key differences:

  • Bing’s chatbot wants to continue the conversation and suggests possible follow-up questions.
  • Bing’s chatbot makes it clear where it’s getting its information so users can get more background.
  • Bard reminds the user before starting a conversation that it has “limitations” and “won’t always get it right”. Furthermore, the page always displays a warning that Bard “may display inaccurate or offensive information that doesn’t represent Google’s views.”

You can sign up to try Bard here. Google is currently rolling out access in the US and UK but will be expanding to other countries and languages over time.

(Image Credit: Google)

Apple shies from the spotlight with staff-only AI summit
https://www.artificialintelligence-news.com/2023/02/20/apple-shies-from-spotlight-staff-only-ai-summit/
Mon, 20 Feb 2023 15:55:03 +0000

Going by its latest summit, Apple seems happy to stay out of the spotlight when it comes to the “AI race”.

Microsoft, Google, Baidu, and others have all raced to make very public AI announcements over the past month. Apple held its own AI event earlier this month but it was a staff-only affair.

Apple’s low-key AI event was notable as the first to be held in person at the Steve Jobs Theatre since the pandemic began. Other than that, it wasn’t particularly newsworthy—which is somewhat newsworthy in itself.

Most AI solutions rely on the cloud for processing. Google is moving an increasing amount of that processing on-device, but Apple, for better or worse, has made a big deal of its on-device AI strategy.

One of the ways Apple markets itself as different from rivals is its privacy-first approach. The firm collects minimal data and processes it on-device. That approach has served Apple well, but the company may begin to struggle as modern AI demands more data and processing power—something we may already be seeing.

Siri is widely perceived to be the third most capable virtual assistant, behind Google Assistant and Amazon’s Alexa. Apple also currently has no answer to the ChatGPT-powered Bing and Bard chatbots unveiled by Microsoft and Google respectively.

One of the primary uses for machine learning over the years has been web search. The threat that a ChatGPT-integrated Bing poses to Google reportedly set off the alarm bells over at Mountain View and led to the frantic (and “botched”) announcement of Bard.

Apple has reportedly been working on its own search engine but the company’s ethos against data collection could be holding it back from launching a product that can go toe-to-toe against Google and Bing.

At its AI event this month, Apple appeared set on rallying employees and convincing them it isn’t falling behind. Apple’s AI chief told attendees that “machine learning is moving faster than ever” and that Apple has talent that is “truly at the forefront.”

That doesn’t sound like a company that is particularly confident.

“While that may be Apple’s belief, I haven’t heard of anything — for consumers — that is a game changer coming out of the summit,” wrote Bloomberg’s Mark Gurman in the latest edition of his Power On newsletter.

“For those wondering, I don’t believe Apple previewed a ChatGPT/New Bing competitor or anything of the sort.”

Apple isn’t known to rush products to market and it’s not surprising that we’re not getting any major announcements ahead of WWDC. However, this staff-only event – and Gurman’s report – certainly gives the impression that Apple knows it’s not as well-positioned as its rivals when it comes to AI.

For now, Apple looks quite happy to sit out of the spotlight when it comes to AI. This year, all the attention will be firmly on its mixed-reality headset. However, unless Apple can silence the critics, questions about whether it is an AI leader will only grow louder in the coming years.

(Photo by Oscar Keys on Unsplash)

Inworld closes $50M Series A for its realistic NPC generator
https://www.artificialintelligence-news.com/2022/08/24/inworld-closes-50m-series-a-realistic-npc-generator/
Wed, 24 Aug 2022 09:47:52 +0000

Inworld, a Disney-backed startup using AI to create realistic non-playable characters (NPCs), has closed a $50 million Series A funding round.

The startup has attracted interest for its ability to design and deploy interactive characters capable of more realistic interactions across the metaverse and other virtual worlds, such as video games.

In current video games, NPCs have pre-scripted responses. AI-powered virtual characters like those Inworld is developing can offer dynamic responses to general questions about the local area or wider world.

While graphics have generally become more immersive over the years, interactions have largely remained the same. The ability for NPCs to hold conversations more naturally will be a giant leap forward for gaming and help to deliver on the metaverse’s bold promises.

Inworld’s potentially key role in the future of the metaverse and gaming market – which is already worth hundreds of billions of dollars – has opened investors’ wallets.

Since it left stealth last year, Inworld has raised $70 million. In March, it raised $10 million in a strategic funding round.

Inworld was also notably part of the latest Disney Accelerator cohort, which included blockchain network Polygon, NFT platform Flickplay, AR hardware and interface developer Red 6, metaverse e-commerce platform Obsess, and immersive experience provider Lockerverse.

“I’ve long believed that interactions, memories, and connections formed in immersive worlds can become as real as those formed in real life. It’s why I’m excited about being named a Disney Accelerator company,” wrote Inworld AI CEO Ilya Gelfenbeyn in a LinkedIn post about the accelerator.

“Our platform for creating memorable, AI-powered characters will unlock new opportunities for creators in entertainment, gaming, the metaverse, and other digital realms who are ready to create memories like mine.”

Inworld Studio launched in beta a few months ago. The no-code system works with most common game and metaverse engines, including Unreal and Unity.

“It’s crazy to think that Inworld was founded just 13 months ago. Since then, we raised two seed rounds, were selected for the Disney Accelerator, launched the beta version of our product, and now, closed our Series A!” wrote Gelfenbeyn in a blog post.

“It’s been impressive to see our community bring characters to life with AI. I know the best is yet to come.”

(Image Credit: Inworld)

IRS expands voice bot options for faster service
https://www.artificialintelligence-news.com/2022/06/21/irs-expands-voice-bot-options-for-faster-service/
Tue, 21 Jun 2022 13:51:14 +0000

The US Internal Revenue Service has unveiled expanded voice bot options to help eligible taxpayers easily verify their identity to set up or modify a payment plan while avoiding long wait times.

“This is part of a wider effort at the IRS to help improve the experience of taxpayers,” said IRS commissioner Chuck Rettig. “We continue to look for ways to better assist taxpayers, and that includes helping people avoid waiting on hold or having to make a second phone call to get what they need. The expanded voice bots are another example of how technology can help the IRS provide better service to taxpayers.”

Voice bots run on software powered by artificial intelligence, which enables a caller to navigate an interactive voice response system. The IRS has been using voice bots on numerous toll-free lines since January, enabling taxpayers with simple payment or notice questions to get what they need quickly and avoid waiting on hold. Taxpayers can always speak with an English- or Spanish-speaking IRS telephone representative if needed.

Eligible taxpayers who call the Automated Collection System (ACS) and Accounts Management toll-free lines and want to discuss payment plan options can authenticate or verify their identities through a personal identification number (PIN) creation process. Setting up a PIN is easy: Taxpayers will need their most recent IRS bill and some basic personal information to complete the process.

“To date, the voice bots have answered over three million calls. As we add more functions for taxpayers to resolve their issues, I anticipate many more taxpayers getting the service they need quickly and easily,” said Darren Guillot, IRS deputy commissioner of Small Business/Self Employed Collection & Operations Support.

Additional voice bot service enhancements are planned in 2022 that will allow authenticated individuals (taxpayers with established or newly created PINs) to get:

  • Account and return transcripts.
  • Payment history.
  • Current balance owed.

In addition to the payment lines, voice bots help people who call the Economic Impact Payment (EIP) toll-free line with general procedural responses to frequently asked questions. The IRS also added voice bots for the Advance Child Tax Credit toll-free line in February to provide similar assistance to callers who need help reconciling the credits on their 2021 tax return.

The EU’s AI rules will likely take over a year to be agreed
https://www.artificialintelligence-news.com/2022/02/17/eu-ai-rules-likely-take-over-year-to-be-agreed/
Thu, 17 Feb 2022 12:34:20 +0000

Rules governing the use of artificial intelligence across the EU will likely take over a year to be agreed upon.

Last year, the European Commission drafted AI laws. While the US and China are set to dominate AI development with their vast resources, economic might, and light-touch regulation, European rivals – including the UK and EU members – believe they can lead in ethical standards.

Under the draft EU regulations, companies found guilty of AI misuse face a fine of €30 million or six percent of their global turnover (whichever is greater). Critics argue the threat of such fines risks driving investment away from Europe.

The EU’s draft AI regulation classifies systems into three risk categories:

  • Limited risk – includes systems like chatbots, inventory management, spam filters, and video games.
  • High risk – includes systems that make vital decisions like evaluating creditworthiness, recruitment, justice administration, and biometric identification in non-public spaces.
  • Unacceptable risk – includes systems that are manipulative or exploitative, create social scoring, or conduct real-time biometric identification in public spaces for law enforcement.

Unacceptable risk systems will face a blanket ban from deployment in the EU while limited risk will require minimal oversight.

Organisations deploying high-risk AI systems would be required to have things like:

  • Human oversight.
  • A risk-management system.
  • Record keeping and logging.
  • Transparency to users.
  • Data governance and management.
  • Conformity assessment.
  • Government registration.

However, the cumbersome nature of the EU – requiring agreement from all member states, each with their own priorities – means that new regulations are often subject to more debate and delay than national lawmaking.

Reuters reports that two key lawmakers said on Wednesday that the EU’s AI regulations will likely take more than a year longer to agree. The delay is primarily due to debates over whether facial recognition should be banned and who should enforce the rules.

“Facial recognition is going to be the biggest ideological discussion between the right and left,” said one lawmaker, Dragos Tudorache, in a Reuters interview.

“I don’t believe in an outright ban. For me, the solution is to put the right rules in place.”

With leading academic institutions and more than 1,300 AI companies employing over 30,000 people, the UK is the biggest destination for AI investment in Europe and the third in the world. Between January and June 2021, global investors poured £13.5 billion into more than 1,400 “deep tech” UK private technology firms—more than Germany, France, and Israel combined.

In September 2021, the UK published its 10-year National Artificial Intelligence Strategy in a bid to secure its European AI leadership. Governance plays a large role in the strategy.

“The UK already punches above its weight internationally and we are ranked third in the world behind the USA and China in the list of top countries for AI,” commented DCMS Minister Chris Philp.

“We’re laying the foundations for the next ten years’ growth with a strategy to help us seize the potential of artificial intelligence and play a leading role in shaping the way the world governs it.”

As part of its strategy, the UK is creating an ‘AI Standards Hub’ to coordinate the country’s engagement in establishing global rules and is working with The Alan Turing Institute to update guidance on AI ethics and safety.

“We are proud of creating a dynamic, collaborative community of diverse researchers and are growing world-leading capabilities in responsible, safe, ethical, and inclusive AI research and innovation,” said Professor Sir Adrian Smith, Chief Executive of The Alan Turing Institute.

Striking a balance between innovation-stifling overregulation and ethics-compromising underregulation is never a simple task. It will be interesting to observe how AI regulations in Europe will differ across the continent and beyond.

(Photo by Christian Lue on Unsplash)

Related: British intelligence agency GCHQ publishes ‘Ethics of AI’ report

Why AI needs human intervention
https://www.artificialintelligence-news.com/2022/01/19/why-ai-needs-human-intervention/
Wed, 19 Jan 2022 17:07:47 +0000

In today’s tight labour market and hybrid work environment, organizations are increasingly turning to AI to support various functions within their business, from delivering more personalized experiences to improving operations and productivity to helping organizations make better and faster decisions. That is why the worldwide market for AI software, hardware, and services is expected to surpass $500 billion by 2024, according to IDC.

Yet, many enterprises aren’t ready to have their AI systems run independently and entirely without human intervention – nor should they do so. 

In many instances, enterprises simply don’t have sufficient expertise in the systems they use, as AI technologies are extraordinarily complex. In other instances, rudimentary AI is built into enterprise software; these built-in tools can be fairly static and give organizations little control over the parameters and data they need. But even the most AI-savvy organizations keep humans in the equation to avoid risks and reap the maximum benefits of AI.

AI Checks and Balances

There are clear ethical, regulatory, and reputational reasons to keep humans in the loop. Inaccurate data can be introduced over time, leading to poor decisions or, in some cases, even dire circumstances. Biases can also creep into the system, whether introduced while training the AI model, as a result of changes in the training environment, or through trending bias, where the AI system reacts to recent activities more than previous ones. Moreover, AI is often incapable of understanding the subtleties of a moral decision.

Take healthcare for instance. The industry perfectly illustrates how AI and humans can work together to improve outcomes or cause great harm if humans are not fully engaged in the decision-making process. For example, in diagnosing or recommending a care plan for a patient, AI is ideal for making the recommendation to the doctor, who then evaluates if that recommendation is sound and then gives the counsel to the patient.

Having a way for people to continually monitor AI responses and accuracy will avoid flaws that could lead to harm or catastrophe while providing a means for continuous training of the models so they get continuously better and better. That’s why IDC expects more than 70% of G2000 companies will have formal programs to monitor their digital trustworthiness by 2022.

Models for Human-AI Collaboration

Human-in-the-Loop (HitL) Reinforcement Learning and Conversational AI are two examples of how human intervention supports AI systems in making better decisions.

HitL allows AI systems to leverage machine learning to learn by observing humans dealing with real-life work and use cases. HitL models are like traditional AI models except they are continuously self-developing and improving based on human feedback while, in some cases, augmenting human interactions. It provides a controlled environment that limits the inherent risk of biases—such as the bandwagon effect—that can have devastating consequences, especially in crucial decision-making processes.

We can see the value of the HitL model in industries that manufacture critical parts for vehicles or aircraft, where components must meet exacting standards. In situations like this, machine learning increases the speed and accuracy of inspections, while human oversight provides added assurance that parts are safe and secure for passengers.
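
To make the pattern concrete, here is a minimal Python sketch of a human-in-the-loop inspection flow, assuming a generic scikit-learn-style classifier: confident predictions are automated, low-confidence ones are deferred to a human, and the human's labels are collected as fresh training data. The threshold and structure are illustrative assumptions, not a description of any specific vendor's system.

```python
# Minimal human-in-the-loop (HitL) sketch: route low-confidence predictions
# to a human reviewer and keep their corrections for the next retraining run.
# The 0.9 threshold and the generic "model" are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class HitlInspector:
    model: object                    # any classifier exposing predict()/predict_proba()
    threshold: float = 0.9           # confidence required to skip human review
    review_queue: list = field(default_factory=list)
    feedback: list = field(default_factory=list)   # (features, human_label) pairs

    def inspect(self, features):
        proba = self.model.predict_proba([features])[0]
        label = self.model.predict([features])[0]
        if max(proba) >= self.threshold:
            return label             # confident enough: automated decision
        self.review_queue.append(features)
        return None                  # defer to a human inspector

    def record_human_label(self, features, human_label):
        # Human feedback becomes training data for the next retraining cycle.
        self.feedback.append((features, human_label))
```

The same loop generalises beyond inspection: anything the model is unsure about becomes a labelled example for the next training run, which is how the continuous improvement described above happens in practice.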

Conversational AI, on the other hand, provides near-human-like communication. It can offload work from employees in handling simpler problems while knowing when to escalate an issue to humans for solving more complex issues. Contact centres provide a primary example.

When a customer reaches out to a contact centre, they have the option to call, text, or chat virtually with a representative. The virtual agent listens and understands the needs of the customer and engages back and forth in a conversation. It uses machine learning and AI to decide what needs to be done based on what it has learned from prior experience. Most AI systems within contact centres generate speech to help communicate with the customer and mimic the feeling of a human doing the typing or talking.

For most situations, a virtual agent is enough to serve customers and resolve their problems. However, there are cases where the AI can stop typing or talking and make a seamless transfer to a live representative who takes over the call or chat. Even then, the AI system can shift from automation to augmentation by continuing to listen to the conversation and providing recommendations that aid the live representative in their decisions.
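
As a rough illustration of that automation-to-augmentation handover, the Python sketch below routes simple, high-confidence requests to the bot and escalates everything else to a human while still suggesting a reply. The intents, threshold, and canned answers are hypothetical placeholders, not any vendor's actual logic.

```python
# Hedged sketch of a contact-centre handover: the virtual agent resolves
# simple, high-confidence requests itself and hands anything else to a live
# representative while still proposing a reply (augmentation). The intents,
# threshold, and canned answers below are illustrative assumptions.
SIMPLE_ANSWERS = {
    "opening_hours": "We are open 9am-5pm, Monday to Friday.",
    "reset_password": "I have sent a password-reset link to your email address.",
}
ESCALATION_THRESHOLD = 0.75

def suggest_reply(transcript: list[str]) -> str:
    # Stand-in for a real suggestion model: surface the latest customer turn.
    return f"Customer appears to be asking about: {transcript[-1]!r}"

def route_message(intent: str, confidence: float, transcript: list[str]) -> dict:
    """Bot answers simple, confident requests; otherwise hand over to a human."""
    if intent in SIMPLE_ANSWERS and confidence >= ESCALATION_THRESHOLD:
        return {"handler": "bot", "reply": SIMPLE_ANSWERS[intent]}
    # Handover: the human takes the conversation while the AI keeps listening
    # and supplies a suggested reply plus recent context.
    return {
        "handler": "human",
        "suggested_reply": suggest_reply(transcript),
        "context": transcript[-5:],
    }
```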

Going beyond conversational AI with cognitive AI, these systems can learn to understand the emotional state of the other party, handle complex dialogue, provide real-time translation and even adjust based on the behaviour of the other person, taking human assistance to the next level of sophistication.

Blending Automation and Human Interaction Leads to Augmented Intelligence

AI is best applied when it is both monitored by and augments people. When that happens, people move up the skills continuum, taking on more complex challenges, while the AI continually learns, improves, and is kept in check, avoiding potentially harmful effects. Using models like HitL, conversational AI, and cognitive AI in collaboration with real people who possess expertise, ingenuity, empathy and moral judgment ultimately leads to augmented intelligence and more positive outcomes.

(Photo by Arteum.ro on Unsplash)

Editorial: Our predictions for the AI industry in 2022
https://www.artificialintelligence-news.com/2021/12/23/editorial-our-predictions-for-the-ai-industry-in-2022/
Thu, 23 Dec 2021 11:59:08 +0000

The AI industry continued to thrive this year as companies sought ways to support business continuity through rapidly-changing situations. For those already invested, many are now doubling-down after reaping the benefits.

As we wrap up the year, it’s time to look ahead at what to expect from the AI industry in 2022.

Tackling bias

Our ‘Ethics & Society’ category got more use than most others this year, and with good reason. AI cannot thrive when it’s not trusted.

Biases present in deployed algorithms are already causing harm. They’ve been the subject of many headlines, including a number of ours, and must be addressed for the public to have confidence in wider adoption.

Explainable AI (XAI) is a partial solution to the problem. XAI is artificial intelligence in which the results of the solution can be understood by humans.

Robert Penman, Associate Analyst at GlobalData, comments:

“2022 will see the further rollout of XAI, enabling companies to identify potential discrimination in their systems’ algorithms. It is essential that companies correct their models to mitigate bias in data. Organisations that drag their feet will face increasing scrutiny as AI continues to permeate our society, and people demand greater transparency. For example, in the Netherlands, the government’s use of AI to identify welfare fraud was found to violate European human rights.

Reducing human bias present in training datasets is a huge challenge in XAI implementation. Even tech giant Amazon had to scrap its in-development hiring tool because it was claimed to be biased against women.

Further, companies will be desperate to improve their XAI capabilities—the potential to avoid a PR disaster is reason enough.”

To that end, expect a large number of acquisitions of startups specialising in synthetic data training in 2022.
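
One simple, model-agnostic building block for the kind of transparency Penman describes is permutation importance: shuffle one feature at a time and measure how much the model's score degrades. The scikit-learn sketch below is illustrative only; the dataset and model are assumptions for the example, not the XAI tooling of any company mentioned here.

```python
# Illustrative only: permutation importance as one simple route to explainability.
# Shuffling a feature and watching accuracy drop shows how much the model relies
# on it. The dataset and model choice are assumptions made for this example.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda p: p[1], reverse=True)
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")   # the features the model leans on most
```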

Smoother integration

Many companies don’t know how to get started on their AI journeys. Around 30 percent of enterprises plan to incorporate AI into their company within the next few years, but 91 percent foresee significant barriers and roadblocks.

If the confusion and anxiety that surrounds AI can be tackled, it will lead to much greater adoption.

Dr Max Versace, PhD, CEO and Co-Founder of Neurala, explains:

“Similar to what happened with the introduction of WordPress for websites in early 2000, platforms that resemble a ‘WordPress for AI’ will simplify building and maintaining AI models. 

In manufacturing for example, AI platforms will provide integration hooks, hardware flexibility, ease of use by non-experts, the ability to work with little data, and, crucially, a low-cost entry point to make this technology viable for a broad set of customers.”

AutoML platforms will thrive in 2022 and beyond.

From the cloud to the edge

The migration of AI from the cloud to the edge will accelerate in 2022.

Edge processing has a plethora of benefits over relying on cloud servers including speed, reliability, privacy, and lower costs.

Versace commented:

“Increasingly, companies are realising that the way to build a truly efficient AI algorithm is to train it on their own unique data, which might vary substantially over time. To do that effectively, the intelligence needs to directly interface with the sensors producing the data. 

From there, AI should run at a compute edge, and interface with cloud infrastructure only occasionally for backups and/or increased functionality. No critical process – for example,  in a manufacturing plant – should exclusively rely on cloud AI, exposing the manufacturing floor to connectivity/latency issues that could disrupt production.”

Expect more companies to realise the benefits of migrating from cloud to edge AI in 2022.

Doing more with less

One early concern about the AI industry was that it would be dominated by “big tech” due to the gargantuan amounts of data those companies have collected.

However, innovative methods are now allowing algorithms to be trained with less information. Training using smaller but more unique datasets for each deployment could prove to be more effective.

We predict more startups will prove the world doesn’t have to rely on big tech in 2022.

Human-powered AI

While XAI systems will provide results that can be understood by humans, the decisions made by AIs will also become more useful because humans will help to guide them.

Varun Ganapathi, PhD, Co-Founder and CTO at AKASA, said:

“For AI to truly be useful and effective, a human has to be present to help push the work to the finish line. Without guidance, AI can’t be expected to succeed and achieve optimal productivity. This is a trend that will only continue to increase.

Ultimately, people will have machines report to them. In this world, humans will be the managers of staff – both other humans and AIs – that will need to be taught and trained to be able to do the tasks they’re needed to do.

Just like people, AI needs to constantly be learning to improve performance.”

Greater human input also helps to build wider trust in AI. Involving humans helps to counter narratives about AI replacing jobs and concerns that decisions about people’s lives could be made without human qualities such as empathy and compassion.

Expect human input to lead to more useful AI decisions in 2022.

Avoiding captivity

The telecoms industry is currently pursuing an innovation called Open RAN, which aims to help operators avoid being locked in to specific vendors and to help smaller competitors disrupt the relative monopoly held by a small number of companies.

Enterprises are looking to avoid being held in captivity by any AI vendor.

Doug Gilbert, CIO and Chief Digital Officer at Sutherland, explains:

“Early adopters of rudimentary enterprise AI embedded in ERP / CRM platforms are starting to feel trapped. In 2022, we’ll see organisations take steps to avoid AI lock-in. And for good reason. AI is extraordinarily complex.

When embedded in, say, an ERP system, control, transparency, and innovation is handed over to the vendor not the enterprise. AI shouldn’t be treated as a product or feature: it’s a set of capabilities. AI is also evolving rapidly, with new AI capabilities and continuously improved methods of training algorithms.

To get the most powerful results from AI, more enterprises will move toward a model of combining different AI capabilities to solve unique problems or achieve an outcome. That means they’ll be looking to spin up more advanced and customizable options and either deprioritising AI features in their enterprise platforms or winding down those expensive but basic AI features altogether.”

In 2022 and beyond, we predict enterprises will favour AI solutions that avoid lock-in.

Chatbots get smart

Hands up if you’ve ever screamed (internally or externally) that you just want to speak to a human when dealing with a chatbot—I certainly have, more often than I’d care to admit.

“Today’s chatbots have proven beneficial but have very limited capabilities. Natural language processing will start to be overtaken by neural voice software that provides near real time natural language understanding (NLU),” commented Gilbert.

“With the ability to achieve comprehensive understanding of more complex sentence structures, even emotional states, break down conversations into meaningful content, quickly perform keyword detection and named entity recognition, NLU will dramatically improve the accuracy and the experience of conversational AI.”

In theory, this will have two results:

  • Augmenting human assistance in real time, such as suggesting responses based on behaviour or skill level.
  • Changing how customers or clients perceive they’re being treated, with NLU delivering a more natural and positive experience.
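
As a small, generic illustration of the named entity recognition and keyword detection Gilbert mentions (and explicitly not the neural voice software he describes), the sketch below uses spaCy, assuming its small English model has been downloaded.

```python
# A minimal, generic NLU example: named entity recognition and keyword
# (noun phrase) detection with spaCy. Assumes the small English model is
# installed: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("I ordered a laptop from your London store on Friday and it still has not arrived.")

print("Entities:", [(ent.text, ent.label_) for ent in doc.ents])   # e.g. London -> GPE, Friday -> DATE
print("Keywords:", [chunk.text for chunk in doc.noun_chunks])      # e.g. "a laptop", "your London store"
```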

In 2022, chatbots will get much closer to offering a human-like experience.

It’s not about size, it’s about the quality

A robust AI system requires two things: a functioning model and underlying data to train that model. Collecting huge amounts of data is a waste of time if it’s not of high quality and labeled correctly.

Gabriel Straub, Chief Data Scientist at Ocado Technology, said:

“Andrew Ng has been speaking about data-centric AI, about how improving the quality of your data can often lead to better outcomes than improving your algorithms (at least for the same amount of effort.)

So, how do you do this in practice? How do you make sure that you manage the quality of data at least as carefully as the quantity of data you collect?

There are two things that will make a big difference: 1) making sure that data consumers are always at the heart of your data thinking and 2) ensuring that data governance is a function that enables you to unlock the value in your data, safely, rather than one that focuses on locking down data.”
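
A practical first step toward the data-centric mindset Straub describes is simply measuring data quality before touching the model. The pandas sketch below shows a few basic checks; the column names and example rows are hypothetical.

```python
# Hedged sketch of basic data-quality checks in the data-centric spirit
# described above. Column names and example rows are hypothetical.
import pandas as pd

def data_quality_report(df: pd.DataFrame, label_col: str = "label") -> dict:
    """Summarise issues that often matter more than the choice of model."""
    return {
        "rows": len(df),
        "duplicate_rows": int(df.duplicated().sum()),
        "missing_by_column": df.isna().mean().round(3).to_dict(),
        "label_distribution": df[label_col].value_counts(normalize=True).round(3).to_dict(),
    }

df = pd.DataFrame({
    "price": [9.99, 9.99, None, 12.50],
    "label": ["in_stock", "in_stock", "out_of_stock", "in_stock"],
})
print(data_quality_report(df))
```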

Expect the AI industry to make the quality of data a priority in 2022.

(Photo by Michael Dziedzic on Unsplash)

Stefano Somenzi, Athics: On no-code AI and deploying conversational bots
https://www.artificialintelligence-news.com/2021/11/12/stefano-somenzi-athics-no-code-ai-deploying-conversational-bots/
Fri, 12 Nov 2021 16:47:39 +0000

No-code AI solutions are helping more businesses to get started on their AI journeys than ever. Athics, through its Crafter.ai platform for deploying conversational bots, knows a thing or two about the topic.

AI News caught up with Stefano Somenzi, CTO at Athics, to get his thoughts on no-code AI and the development of virtual agents.

AI News: Do you think “no-code” will help more businesses to begin their AI journeys?

Stefano Somenzi: The real advantage of “no code” is not just the reduced effort required for businesses to get things done, it is also centered around changing the role of the user who will build the AI solution. In our case, a conversational AI agent.

“No code” means that the AI solution is built not by a data scientist but by the process owner. The process owner is best-suited to know what the AI solution should deliver and how. But, if you need coding, this means that the process owner needs to translate his/her requirements into a data scientist’s language.

This requires much more time and is affected by the “lost in translation” syndrome that hinders many IT projects. That’s why “no code” will play a major role in helping companies approach AI.

AN: Research from PwC found that 71 percent of US consumers would rather interact with a human than a chatbot or some other automated process. How can businesses be confident that bots created through your Crafter.ai platform will improve the customer experience rather than worsen it?

SS: Even the most advanced conversational AI agents, like ours, are not suited to replace a direct consumer-to-human interaction if what the consumer is looking for is the empathy that today only a human is able to show during a conversation.

At the same time, inefficiencies, errors, and lack of speed are among the most frequent causes for consumer dissatisfaction that hamper customer service performances.

Advanced conversational AI agents are the right tool to reduce these inefficiencies and errors while delivering strong customer service performances at light speed.

AN: What kind of real-time feedback is provided to your clients about their customers’ behaviour?

SS: Recognising the importance of a hybrid environment, where human and machine interaction are wisely mixed to leverage the best of both worlds, our Crafter.ai platform has been designed from the ground up with a module that manages the handover of the conversations between the bot and the call centre agents.

During a conversation, a platform user – with the right authorisation levels – can access an insights dashboard to check the key performance indicators that have been identified for the bot.

This is also true during the handover when agents and their supervisors receive real-time information on the customer behaviour during the company site navigation. Such information includes – and is not limited to – visited pages, form field contents, and clicked CTAs, and can be complemented with data collected from the company CRM.

AN: Europe is home to some of the strictest data regulations in the world. As a European organisation, do you think such regulations are too strict, not strict enough, or about right?

SS: We think that any company that wants to gain the trust of their customers should do their best to go beyond the strict regulations requirements.

AN: As conversational AIs progress to human-like levels, should it always be made clear that a person is speaking to an AI bot?

SS: Yes, a bot should always make clear that it is not human. In the end, this can help realise how amazing they can perform.

AN: What’s next for Athics?

SS: We have a solid roadmap for Crafter.ai with many new features and improvements that we bring every three months to our platform.

Our sole focus is on advanced conversational AI agents. We are currently working to include more and more domain specific capabilities to our bots.

Advanced profiling capabilities is a great area of interest where, thanks to our collaboration with universities and international research centres, we expect to deliver truly innovative solutions to our customers.

AN: Athics is sponsoring and exhibiting at this year’s AI & Big Data Expo Europe. What can attendees expect from your presence at the event? 

SS: Conversational AI agents allow businesses to obtain a balance between optimising resources and giving a top-class customer experience. Although there is no doubt regarding the benefits of adopting virtual agents, the successful integration across a company’s conversational streams needs to be correctly assessed, planned, and executed in order to leverage the full potential.

Athics will be at stand number 280 to welcome attending companies and give an overview of the advantages of integrating a conversational agent, explain how to choose the right product, and how to create a conversational vision that can scale and address organisational goals.

(Photo by Jason Leung on Unsplash)

Athics will be sharing their invaluable insights during this year’s AI & Big Data Expo Global which runs from 23-24 November 2021. Athics’ booth number is 280. Find out more about the event here.
