virtual assistant Archives - AI News
https://www.artificialintelligence-news.com/tag/virtual-assistant/

IRS expands voice bot options for faster service
21 June 2022 | https://www.artificialintelligence-news.com/2022/06/21/irs-expands-voice-bot-options-for-faster-service/

The US Internal Revenue Service has unveiled expanded voice bot options to help eligible taxpayers easily verify their identity to set up or modify a payment plan while avoiding long wait times.

“This is part of a wider effort at the IRS to help improve the experience of taxpayers,” said IRS commissioner Chuck Rettig. “We continue to look for ways to better assist taxpayers, and that includes helping people avoid waiting on hold or having to make a second phone call to get what they need. The expanded voice bots are another example of how technology can help the IRS provide better service to taxpayers.”

Voice bots run on software powered by artificial intelligence, which enables a caller to navigate an interactive voice response system. The IRS has been using voice bots on numerous toll-free lines since January, enabling taxpayers with simple payment or notice questions to get what they need quickly and avoid waiting. Taxpayers can always speak with an English- or Spanish-speaking IRS telephone representative if needed.
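The IRS has not published technical details of its voice bots, but the general pattern described here (classify the caller’s request, then either serve it with an automated flow or route it to a representative) can be sketched in a few lines. The intents, confidence threshold, and function names below are illustrative assumptions, not details of the IRS system.

```python
# Minimal sketch of voice-bot call routing: classify the caller's request and
# either handle it with an automated flow or hand off to a live representative.
# The intents, threshold, and handler names are illustrative, not the IRS's.

AUTOMATED_INTENTS = {"payment_plan", "balance_inquiry", "notice_question"}

def classify_intent(utterance: str) -> tuple[str, float]:
    """Stand-in for a speech/NLU model: returns (intent, confidence)."""
    lowered = utterance.lower()
    if "payment plan" in lowered:
        return "payment_plan", 0.92
    if "balance" in lowered:
        return "balance_inquiry", 0.88
    return "other", 0.40

def route_call(utterance: str, language: str = "en") -> str:
    intent, confidence = classify_intent(utterance)
    if intent in AUTOMATED_INTENTS and confidence >= 0.75:
        return f"bot:{intent}"              # self-service voice bot flow
    return f"representative:{language}"     # English- or Spanish-speaking agent

print(route_call("I want to set up a payment plan"))   # bot:payment_plan
print(route_call("I have a question about my audit"))  # representative:en
```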

Eligible taxpayers who call the Automated Collection System (ACS) and Accounts Management toll-free lines and want to discuss payment plan options can authenticate or verify their identities through a personal identification number (PIN) creation process. Setting up a PIN is easy: Taxpayers will need their most recent IRS bill and some basic personal information to complete the process.

“To date, the voice bots have answered over three million calls. As we add more functions for taxpayers to resolve their issues, I anticipate many more taxpayers getting the service they need quickly and easily,” said Darren Guillot, IRS deputy commissioner of Small Business/Self Employed Collection & Operations Support.

Additional voice bot service enhancements are planned in 2022 that will allow authenticated individuals (taxpayers with established or newly created PINs) to get:

  • Account and return transcripts.
  • Payment history.
  • Current balance owed.

In addition to the payment lines, voice bots help people who call the Economic Impact Payment (EIP) toll-free line with general procedural responses to frequently asked questions. The IRS also added voice bots for the Advance Child Tax Credit toll-free line in February to provide similar assistance to callers who need help reconciling the credits on their 2021 tax return.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Why AI needs human intervention
19 January 2022 | https://www.artificialintelligence-news.com/2022/01/19/why-ai-needs-human-intervention/

In today’s tight labour market and hybrid work environment, organizations are increasingly turning to AI to support various functions within their business, from delivering more personalized experiences to improving operations and productivity to helping organizations make better and faster decisions. That is why the worldwide market for AI software, hardware, and services is expected to surpass $500 billion by 2024, according to IDC.

Yet, many enterprises aren’t ready to have their AI systems run independently and entirely without human intervention – nor should they do so. 

In many instances, enterprises simply don’t have sufficient expertise in the systems they use, as AI technologies are extraordinarily complex. In other instances, rudimentary AI is built into enterprise software; such built-in AI tends to be fairly static and takes away control over the data parameters most organizations need. But even the most AI-savvy organizations keep humans in the equation to avoid risks and reap the maximum benefits of AI.

AI Checks and Balances

There are clear ethical, regulatory, and reputational reasons to keep humans in the loop. Inaccurate data can be introduced over time, leading to poor decisions or, in some cases, dire consequences. Bias can also creep into the system, whether introduced while training the AI model, as a result of changes in the training environment, or through trending bias, where the AI system weights recent activity more heavily than earlier activity. Moreover, AI is often incapable of understanding the subtleties of a moral decision.

Take healthcare, for instance. The industry perfectly illustrates how AI and humans can work together to improve outcomes, or cause great harm if humans are not fully engaged in the decision-making process. In diagnosing or recommending a care plan for a patient, for example, AI is well suited to making the recommendation to the doctor, who then evaluates whether that recommendation is sound before counselling the patient.

Having a way for people to continually monitor AI responses and accuracy helps avoid flaws that could lead to harm or catastrophe, while providing a means of continuously training the models so they keep improving. That’s why IDC expects more than 70% of G2000 companies to have formal programs to monitor their digital trustworthiness by 2022.

Models for Human-AI Collaboration

Human-in-the-Loop (HitL) Reinforcement Learning and Conversational AI are two examples of how human intervention supports AI systems in making better decisions.

HitL allows AI systems to leverage machine learning to learn by observing humans dealing with real-life work and use cases. HitL models are like traditional AI models except they are continuously self-developing and improving based on human feedback while, in some cases, augmenting human interactions. It provides a controlled environment that limits the inherent risk of biases—such as the bandwagon effect—that can have devastating consequences, especially in crucial decision-making processes.

We can see the value of the HitL model in industries that manufacture critical parts for vehicles or aircraft, where equipment must be up to standard. In situations like this, machine learning increases the speed and accuracy of inspections, while human oversight provides added assurance that parts are safe and secure for passengers.
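As a rough illustration of the HitL pattern (not any particular vendor’s implementation), the sketch below lets a model act on high-confidence predictions, defers low-confidence cases to a human reviewer, and stores the reviewer’s decisions as labelled examples for future training. The class name and the 0.9 threshold are assumptions.

```python
# Minimal human-in-the-loop sketch: confident predictions pass through,
# uncertain ones are queued for a person, and every human decision is
# stored as a labelled example for the next training round.
# The class name and the 0.9 threshold are illustrative assumptions.

from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class HumanInTheLoop:
    model: Callable[[Any], tuple[str, float]]       # returns (label, confidence)
    threshold: float = 0.9
    review_queue: list = field(default_factory=list)
    feedback: list = field(default_factory=list)    # (item, human_label) pairs

    def predict(self, item):
        label, confidence = self.model(item)
        if confidence >= self.threshold:
            return label                    # automated decision
        self.review_queue.append(item)      # defer to a human reviewer
        return None

    def record_review(self, item, human_label):
        self.feedback.append((item, human_label))   # future training data
        return human_label
```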

Conversational AI, on the other hand, provides near-human-like communication. It can offload work from employees in handling simpler problems while knowing when to escalate an issue to humans for solving more complex issues. Contact centres provide a primary example.

When a customer reaches out to a contact centre, they have the option to call, text, or chat virtually with a representative. The virtual agent listens and understands the needs of the customer and engages back and forth in a conversation. It uses machine learning and AI to decide what needs to be done based on what it has learned from prior experience. Most AI systems within contact centres generate speech to help communicate with the customer and mimic the feeling of a human doing the typing or talking.

In most situations, a virtual agent is enough to serve customers and resolve their problems. However, there are cases where the AI stops typing or talking and makes a seamless transfer to a live representative, who takes over the call or chat. Even then, the AI system can shift from automation to augmentation by continuing to listen to the conversation and providing recommendations that aid the live representative in their decisions.
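That shift from automation to augmentation can be sketched roughly as follows; the escalation triggers and helper names are illustrative assumptions rather than any specific contact-centre product’s behaviour.

```python
# Rough sketch of a virtual agent that answers on its own until it hits an
# escalation trigger, then hands the conversation to a live agent while
# continuing to listen and suggest replies (automation shifting to augmentation).
# The triggers and helper names are illustrative assumptions.

ESCALATION_TRIGGERS = ("complaint", "legal", "speak to a person")

def suggest_reply(message: str) -> str:
    """Stand-in for a model that drafts a reply or recommendation."""
    return f"Suggested response to: {message!r}"

def handle_turn(message: str, state: dict) -> dict:
    if not state.get("escalated") and any(t in message.lower() for t in ESCALATION_TRIGGERS):
        state["escalated"] = True           # seamless transfer to a live representative

    if not state.get("escalated"):
        return {"speaker": "bot", "reply": suggest_reply(message)}
    # Augmentation mode: the bot keeps listening and feeds recommendations
    # to the human agent instead of replying itself.
    return {"speaker": "human_agent", "assist": suggest_reply(message)}

state: dict = {}
print(handle_turn("Where is my order?", state))            # bot replies
print(handle_turn("I want to speak to a person", state))   # handed to a human
```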

Going beyond conversational AI with cognitive AI, these systems can learn to understand the emotional state of the other party, handle complex dialogue, provide real-time translation and even adjust based on the behaviour of the other person, taking human assistance to the next level of sophistication.

Blending Automation and Human Interaction Leads to Augmented Intelligence

AI is best applied when it is both monitored by and augments people. When that happens, people move up the skills continuum, taking on more complex challenges, while the AI continually learns, improves, and is kept in check, avoiding potentially harmful effects. Using models like HitL, conversational AI, and cognitive AI in collaboration with real people who possess expertise, ingenuity, empathy and moral judgment ultimately leads to augmented intelligence and more positive outcomes.

(Photo by Arteum.ro on Unsplash)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo. The next events in the series will be held in Santa Clara on 11-12 May 2022, Amsterdam on 20-21 September 2022, and London on 1-2 December 2022.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Stefano Somenzi, Athics: On no-code AI and deploying conversational bots
12 November 2021 | https://www.artificialintelligence-news.com/2021/11/12/stefano-somenzi-athics-no-code-ai-deploying-conversational-bots/

No-code AI solutions are helping more businesses to get started on their AI journeys than ever. Athics, through its Crafter.ai platform for deploying conversational bots, knows a thing or two about the topic.

AI News caught up with Stefano Somenzi, CTO at Athics, to get his thoughts on no-code AI and the development of virtual agents.

AI News: Do you think “no-code” will help more businesses to begin their AI journeys?

Stefano Somenzi: The real advantage of “no code” is not just the reduced effort required for businesses to get things done; it is also about changing the role of the user who builds the AI solution. In our case, that is a conversational AI agent.

“No code” means that the AI solution is built not by a data scientist but by the process owner. The process owner is best-suited to know what the AI solution should deliver and how. But, if you need coding, this means that the process owner needs to translate his/her requirements into a data scientist’s language.

This requires much more time and is affected by the “lost in translation” syndrome that hinders many IT projects. That’s why “no code” will play a major role in helping companies approach AI.

AN: Research from PwC found that 71 percent of US consumers would rather interact with a human than a chatbot or some other automated process. How can businesses be confident that bots created through your Crafter.ai platform will improve the customer experience rather than worsen it?

SS: Even the most advanced conversational AI agents, like ours, are not suited to replace a direct consumer-to-human interaction if what the consumer is looking for is the empathy that today only a human is able to show during a conversation.

At the same time, inefficiencies, errors, and lack of speed are among the most frequent causes of consumer dissatisfaction and a drag on customer service performance.

Advanced conversational AI agents are the right tool to reduce these inefficiencies and errors while delivering strong customer service performance at light speed.

AN: What kind of real-time feedback is provided to your clients about their customers’ behaviour?

SS: Recognising the importance of a hybrid environment, where human and machine interaction are wisely mixed to leverage the best of both worlds, our Crafter.ai platform has been designed from the ground up with a module that manages the handover of the conversations between the bot and the call centre agents.

During a conversation, a platform user – with the right authorisation levels – can access an insights dashboard to check the key performance indicators that have been identified for the bot.

This is also true during the handover when agents and their supervisors receive real-time information on the customer behaviour during the company site navigation. Such information includes – and is not limited to – visited pages, form field contents, and clicked CTAs, and can be complemented with data collected from the company CRM.
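As a purely hypothetical illustration of the kind of context described above (this is not Crafter.ai’s actual schema), a handover payload might look something like this:

```python
# Hypothetical handover payload illustrating the kind of real-time context
# described above. This is not Crafter.ai's actual data model.

handover_context = {
    "conversation_id": "abc-123",
    "kpis": {"turns": 14, "avg_response_seconds": 1.2},
    "site_behaviour": {
        "visited_pages": ["/pricing", "/contact"],
        "form_fields": {"company": "Example Ltd"},
        "clicked_ctas": ["request-demo"],
    },
    "crm": {"customer_tier": "enterprise"},   # complemented with CRM data
}
```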

AN: Europe is home to some of the strictest data regulations in the world. As a European organisation, do you think such regulations are too strict, not strict enough, or about right?

SS: We think that any company that wants to gain the trust of their customers should do their best to go beyond the strict regulations requirements.

AN: As conversational AIs progress to human-like levels, should it always be made clear that a person is speaking to an AI bot?

SS: Yes, a bot should always make clear that it is not human. In the end, this can actually help people realise how well a bot can perform.

AN: What’s next for Athics?

SS: We have a solid roadmap for Crafter.ai with many new features and improvements that we bring every three months to our platform.

Our sole focus is on advanced conversational AI agents. We are currently working to add more and more domain-specific capabilities to our bots.

Advanced profiling capabilities are a great area of interest where, thanks to our collaboration with universities and international research centres, we expect to deliver truly innovative solutions to our customers.

AN: Athics is sponsoring and exhibiting at this year’s AI & Big Data Expo Europe. What can attendees expect from your presence at the event? 

SS: Conversational AI agents allow businesses to obtain a balance between optimising resources and giving a top-class customer experience. Although there is no doubt regarding the benefits of adopting virtual agents, the successful integration across a company’s conversational streams needs to be correctly assessed, planned, and executed in order to leverage the full potential.

Athics will be at stand number 280 to welcome attending companies and give an overview of the advantages of integrating a conversational agent, explain how to choose the right product, and how to create a conversational vision that can scale and address organisational goals.

(Photo by Jason Leung on Unsplash)

Athics will be sharing their invaluable insights during this year’s AI & Big Data Expo Global which runs from 23-24 November 2021. Athics’ booth number is 280. Find out more about the event here.

Hi Auto brings conversational AI to drive-thrus using Intel technology
20 May 2021 | https://www.artificialintelligence-news.com/2021/05/20/hi-auto-conversational-ai-drive-thrus-intel-technology/

Hi Auto is increasing the efficiency of drive-thrus with a conversational AI system powered by Intel technologies.

Drive-thru usage has rocketed over the past year with many indoor restaurants closed due to pandemic-induced restrictions. In fact, research suggests that drive-thru orders in the US alone increased by 22 percent in 2020.

Long queues at drive-thrus have therefore become part of the “new normal” and fast food is no longer the convenient alternative to cooking after a long day of Zoom calls.

Israel-based Hi Auto has created a conversational AI system that greets drive-thru guests, answers their questions, suggests menu items, and enters their orders into the point-of-sale system. If an unrelated question is asked – or the customer orders something that is not on the standard menu – the AI system automatically switches over to a human employee.
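Hi Auto has not published its implementation, but the flow described above (match the request against the menu, suggest an add-on, enter the order into the point of sale, and hand off to an employee for anything off-menu or unrelated) might look roughly like the sketch below. The menu items, pairings, and function names are assumptions for illustration.

```python
# Rough sketch of the drive-thru flow: match the order against the menu,
# suggest a pairing, enter it into the point-of-sale order, and hand off to
# an employee for anything off-menu. Menu and pairings are illustrative.

MENU = {"chicken sandwich": 4.99, "fries": 1.99, "lemonade": 2.49}
PAIRINGS = {"chicken sandwich": "fries"}

def take_order(utterance: str, pos_order: list) -> str:
    item = next((name for name in MENU if name in utterance.lower()), None)
    if item is None:
        return "handoff_to_employee"          # off-menu or unrelated request
    pos_order.append((item, MENU[item]))      # enter into the point-of-sale system
    upsell = PAIRINGS.get(item)
    return f"Would you like {upsell} with that?" if upsell else "Anything else?"

order: list = []
print(take_order("Can I get a chicken sandwich?", order))   # suggests fries
print(take_order("Do you sell pizza?", order))              # handoff_to_employee
```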

The first restaurant to trial the system is Lee’s Famous Recipe Chicken in Ohio.

Chuck Doran, Owner and Operator at Lee’s Famous Recipe Chicken, said:

“The automated AI drive-thru has impacted my business in a simple way. We don’t have customers waiting anymore. We greet them as soon as they get to the board and the order is taken correctly.

It’s amazing to see the level of accuracy with the voice recognition technology, which helps speed up service. It can even suggest additional items based on the order, which helps us increase our sales.

If a person is running the drive-thru, they may suggest a sale in one out of 20 orders. With Hi Auto, it happens in every transaction where it’s feasible. So, we see improvements in our average check, service time, consistency, and customer service.

And, because the cashier is now less stressed, she can focus on customer service as well. A less-burdened employee will be a happier employee and we want happy employees interacting with our customers.”

By reducing the number of staff needed for customer service, more employees can be put to work on fulfilling orders to serve as many people as possible. A recent survey of small businesses found that 42 percent have job openings that can’t be filled so ensuring that every worker is optimally utilised is critical.

Roy Baharav, CEO and Co-Founder at Hi Auto, commented:

“At Lee’s, we met a team that puts its heart and soul into serving its customers.

We operationalised our AI system based on what we learned from the owners, general managers, and employees. They have embraced the solution and within a short time began reaping the benefits.

We are now applying the process and lessons learned at Lee’s at additional customer sites.”

Hi Auto’s solution runs on Intel Xeon processors in the cloud and Intel NUC.

Joe Jensen, VP in the Internet of Things Group and GM of Retail, Banking, Hospitality and Education at Intel, said:

“We’re increasingly seeing restaurants interested in leveraging AI to deliver actionable data and personalise customer experiences.

With Hi Auto’s solution powered by Intel technology, quick-service restaurants can help their employees be more productive while increasing customer satisfaction and, ultimately, their bottom line.”

Lee’s Famous Recipe Chicken plans to roll out Hi Auto’s solution at more of its branches. A video of the conversational AI system in action can be viewed here.

Going forward, Hi Auto plans to add Spanish language support and continue optimising its conversational AI solution. The company says pilots are already underway with some of the largest quick-service restaurants.

(Image Credit: Lee’s Famous Recipe Chicken)

Interested in hearing industry leaders discuss subjects like this? Attend the co-located 5G Expo, IoT Tech Expo, Blockchain Expo, AI & Big Data Expo, and Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London, and Amsterdam.

Google’s AI reservation service Duplex is now available in 49 states
1 April 2021 | https://www.artificialintelligence-news.com/2021/04/01/google-ai-reservation-service-duplex-now-available-49-states/

Google has expanded its controversial AI reservation service Duplex to 49 states across the US.

Duplex will have to comply with privacy regulations which vary between states and – when it expands further outside the US – their national laws too.

Google says the rollout delay in the US was due to awaiting changes in legislation or the need to add features on a per-state basis. Some states, for example, require a call-back number.

The reservation service caused both awe and fear when it was announced in May 2018 for sounding eerily human-like – complete with the “ums” and “ahs” we often fail to avoid – raising concerns it could be used for automating criminal activities such as fraud.

Many have since called for AI bots to identify themselves as such before speaking to a human, something which Duplex now does.

Duplex will eventually be able to undertake time-consuming and mundane tasks fully automatically, such as booking hairdresser appointments or table reservations at restaurants. However, right now it’s a bit hit-and-miss.

Google confirmed in 2019 that around 25 percent of calls made by Duplex are actually conducted by humans. A further 19 percent of calls initiated by Duplex had to be completed by us mere mortals.

The final state Duplex is yet to launch in is Louisiana. The local laws preventing Duplex’s launch in the state are unspecified.

You can find the current US states and international countries Duplex has launched in here.

(Photo by Luke Michael on Unsplash)

Interested in hearing industry leaders discuss subjects like this? Attend the co-located 5G Expo, IoT Tech Expo, Blockchain Expo, AI & Big Data Expo, and Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London, and Amsterdam.

IBM study highlights rapid uptake and satisfaction with AI chatbots
27 October 2020 | https://www.artificialintelligence-news.com/2020/10/27/ibm-study-uptake-satisfaction-ai-chatbots/

A study by IBM released this week highlights the rapid uptake of AI chatbots in addition to increasing customer satisfaction.

Most of us are hardwired to hate not speaking directly to a human when we have a problem—following years of irritating voicemail systems. However, perhaps the only thing worse is being on hold for an uncertain amount of time due to overwhelmed call centres.

Chatbots have come a long way and can now quickly handle most queries within minutes. Where a human is required, the reduced demand through using virtual agent technology (VAT) means customers can get the assistance they need more quickly.

The COVID-19 pandemic has greatly increased the adoption of VAT as businesses seek to maintain customer service through such a challenging time.

According to IBM’s study, 99 percent of organisations reported increased customer satisfaction by integrating virtual agents. Human agents also report increased satisfaction and IBM says those “who feel valued and empowered with the proper tools and support are more likely to deliver a better experience to customers.”

68 percent of leaders cite improving the human agent experience as being among their key reasons for adopting VAT. There’s also economic incentive, with the cost of replacing a dissatisfied agent who leaves a business estimated at as much as 33 percent of the exiting employee’s salary.

IBM claims that VAT performance in the past has only been studied through individual case studies. The company set out, alongside Oxford Economics, to change that by surveying 1,005 respondents from companies using VAT daily.

Businesses wondering whether virtual assistants are worth the investment may be interested to know that 96 percent of the respondents “exceeded, achieved, or expect to achieve” their anticipated return.

On average, companies which have implemented VAT have increased their revenue by three percent.

IBM is one of the leading providers of chatbots through its Watson Assistant solution. While there’s little reason to doubt the claims made in the report, it’s worth keeping in mind that it’s not entirely unbiased.

Watson Assistant has gone from strength-to-strength and appears to have been among the few things which benefited from the pandemic. Between February and August, Watson Assistant usage increased by 65 percent.

You can download a full copy of IBM’s report here.

(Photo by Volodymyr Hryshchenko on Unsplash)

Interested in hearing industry leaders discuss subjects like this? Attend the co-located 5G Expo, IoT Tech Expo, Blockchain Expo, AI & Big Data Expo, and Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London, and Amsterdam.

The BBC’s virtual assistant is now available for testing in the UK
3 June 2020 | https://www.artificialintelligence-news.com/2020/06/03/bbc-virtual-assistant-tested-in-uk/

A virtual assistant from the BBC which aims to cater for Britain’s many dialects is now available for testing.

Even as a Brit, it can often feel like a translation app is needed between Bristolian, Geordie, Mancunian, Brummie, Scottish, Irish, or any of the other regional dialects in the country. For a geographically small country, we’re a diverse bunch – and US-made voice assistants often struggle with even the slightest accent.

The BBC thinks it can do a better job than the incumbents and first announced its plans to build a voice assistant called ‘Beeb’ in August last year.

Beeb is being trained using BBC staff from around the country. As a public service, the institution aims to offer as wide a representation as possible, which is reflected in its employees.

The broadcaster also believes that Beeb addresses public concerns about voice assistants; primarily that they collect vast amounts of data for commercial purposes. As a taxpayer-funded service, the BBC does not rely on things like advertising.

“People know and trust the BBC,” a spokesperson told The Guardian last year, “so it will use its role as public service innovator in technology to ensure everyone – not just the tech-elite – can benefit from accessing content and new experiences in this new way.”

An early version of Beeb is now available for testing by UK participants of the Windows Insider program. Microsoft is heavily involved in the Beeb assistant as the company’s Azure AI services are being used by the BBC.

The first version of Beeb covers the virtual assistant basics: users can get weather updates and the news, access radio and podcasts, and even grab a few jokes from BBC Comedy writers and facts from QI host Sandi Toksvig.

According to the broadcaster, Beeb won’t launch on dedicated hardware but instead will be designed to eventually be implemented in smart speakers, TVs, and mobiles.

While it still has a long way to go to take on the capabilities of Google, Alexa, Siri, and others, Beeb may offer a compelling alternative for accent-heavy Brits that struggle with American voice assistants.

Grab the Beeb app from the Microsoft Store here.

Interested in hearing industry leaders discuss subjects like this? Attend the co-located 5G Expo, IoT Tech Expo, Blockchain Expo, AI & Big Data Expo, and Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London, and Amsterdam.

Meena is Google’s first truly conversational AI
29 January 2020 | https://www.artificialintelligence-news.com/2020/01/29/meena-google-truly-conversational-ai/

Google is attempting to build the first digital assistant that can truly hold a conversation with an AI project called Meena.

Digital assistants like Alexa and Siri are programmed to pick up keywords and provide scripted responses. Google has previously demonstrated its work towards a more natural conversation with its Duplex project but Meena should offer another leap forward.

Meena is a neural network with 2.6 billion parameters. Google claims Meena is able to handle multiple turns in a conversation (everyone has that friend who goes off on multiple tangents during the same conversation, right?).

Google published its work on e-print repository arXiv on Monday in a paper called “Towards a Human-like Open Domain Chatbot”.

A neural network architecture called Transformer was released by Google in 2017 which is widely acknowledged to be among the best language models available. A variation of Transformer, along with a mere 40 billion English words, was used to train Meena.

Google also debuted a metric alongside Meena called Sensibleness and Specificity Average (SSA) which measures the ability of agents to maintain a conversation.

Meena scores 79 percent using the new SSA metric. For comparison, Mitsuku – a Loebner Prize-winning AI agent developed by Pandora Bots – scored 56 percent.

This result brings Meena’s conversational ability close to that of humans, who score around 86 percent on the SSA metric on average.
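Roughly speaking, SSA is computed from per-response human judgements: raters mark each response as sensible or not and as specific or not, and SSA averages the two resulting rates. A minimal sketch of that calculation, using toy labels rather than real ratings:

```python
# Minimal sketch of the SSA calculation: each response receives two binary
# human labels (sensible? specific?) and SSA averages the two resulting rates.

def ssa(labels: list[tuple[bool, bool]]) -> float:
    """labels holds one (is_sensible, is_specific) pair per rated response."""
    sensible_rate = sum(s for s, _ in labels) / len(labels)
    specific_rate = sum(p for _, p in labels) / len(labels)
    return (sensible_rate + specific_rate) / 2

toy_ratings = [(True, True), (True, False), (False, False), (True, True)]
print(f"SSA = {ssa(toy_ratings):.0%}")   # 62% for this toy example
```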

We don’t yet know when Google intends to debut Meena’s technology in its products but, as the digital assistant war heats up, we’re sure the company is as eager to release it as we are to use it.

Interested in hearing industry leaders discuss subjects like this? Attend the co-located 5G Expo, IoT Tech Expo, Blockchain Expo, AI & Big Data Expo, and Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London, and Amsterdam.

Huawei announces its own AI assistant as it prepares for Google-less life
19 September 2019 | https://www.artificialintelligence-news.com/2019/09/19/huawei-announces-ai-assistant-google/

Huawei has announced its own AI-powered assistant during a launch event in Munich as it prepares for life without Google’s services.

Due to US trade restrictions, Huawei is losing access to Google’s services. The new Mate 30 smartphones announced in Munich today will launch with the open-source Android, but it will not feature the Play Store, Gmail, YouTube, Google Pay, or the many other services which Western consumers are used to.

Among the features that will be missing from the Mate 30 onwards is Google Assistant. Huawei is quickly working to fill the gaps left without access to Google’s services with its own and is launching the Huawei Assistant as a replacement for Mountain View’s virtual assistant.

Walter Ji, Director of Business, HUAWEI Consumer Business Group Western Europe, said:

“With our focus on user experience, we bring AI into mobile services so we can proactively identify user needs and thus improve their smartphone experience.

Huawei Assistant is a product that intelligently fulfils user needs at the same time as offering partners an opportunity to provide their services to users through a globally-available distribution platform.”

Huawei Assistant will launch with basic functionality compared to Google’s version, but the company is promising to expand it.

By swiping to the right of the homescreen, much like accessing Google Assistant today, users can begin interacting with Huawei Assistant. The service is powered by Huawei Ability Gallery, a service distribution platform.

There are four key features of the Huawei Assistant:

  • Newsfeed – Today’s Google Assistant provides some personalised articles when you swipe to it on an Android device. The newsfeed feature is Huawei Assistant’s alternative but users can decide whether to receive custom recommendations or to select from news agencies to fill their feed with “up-to-the-minute” articles.
  • Search – Users can search for information on their smartphone using Huawei Assistant. The assistant will surface things such as installed apps, memos, emails, and calendar entries, while also providing an online search feature using the default browser.
  • Instant Access – Four shortcuts to a user’s choice of applications can be selected for quick access. In the future, Huawei says this can make use of AI so the shortcuts are intelligently-selected based on what the user may want at that moment.
  • SmarterCare – Real-time information will be provided using AI. At launch, this will mean things such as the weather forecast, missed calls, and schedule reminders. Future planned functionality will enable more powerful abilities like booking restaurants, flights, taxis, and hotels.

The new assistant from Huawei will be pre-installed on Mate 30 series devices but it will also be downloadable from the company’s App Gallery.

Interested in hearing industry leaders discuss subjects like this? Attend the co-located 5G Expo, IoT Tech Expo, Blockchain Expo, AI & Big Data Expo, and Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London, and Amsterdam.

Amazon patent envisions Alexa listening to everything 24/7
29 May 2019 | https://www.artificialintelligence-news.com/2019/05/29/amazon-patent-alexa-listening-everything/

A patent filed by Amazon envisions a future where Alexa listens to users 24/7 without the need for a wakeword.

Current digital assistants listen for a wakeword such as “Ok, Google” or “Alexa,” before recording speech for processing. Especially for companies such as Google and Amazon which thrive on knowing everything about users, this helps to quell privacy concerns.

There are some drawbacks to this approach, mainly around context. Future AI assistants will be able to provide more help when armed with the information leading up to a request.

For example, say you were discussing booking a seat at your favourite restaurant next Tuesday. After asking, “Alexa, do I have anything on my schedule next Tuesday?” it could respond: “No, would you like me to book a seat at the restaurant you were discussing and add it to your calendar?”

Today, such a task would require three separate requests.

Amazon’s patent isn’t quite that complex just yet. The example provided in the filing envisions allowing the user to say things such as “Play ‘And Your Bird Can Sing’, Alexa, by the Beatles” (note that the wakeword comes after the play command).
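For a command spoken before the wakeword to be usable, a device would need to retain a short rolling buffer of recent audio locally, so the preceding speech can still be processed once the wakeword is detected. The sketch below illustrates that idea; the buffer length and function names are assumptions, not details of Amazon’s design.

```python
# Sketch of a rolling audio buffer: recent frames are retained locally for a
# few seconds so that a command spoken before the wakeword can still be
# recovered once the wakeword is detected. The buffer length is an assumption.

from collections import deque
from typing import Optional

FRAMES_PER_SECOND = 50          # e.g. 20 ms audio frames
BUFFER_SECONDS = 10

_frames: deque = deque(maxlen=FRAMES_PER_SECOND * BUFFER_SECONDS)

def on_audio_frame(frame: bytes, wakeword_detected: bool) -> Optional[bytes]:
    _frames.append(frame)
    if wakeword_detected:
        # Ship the buffered pre-wakeword audio together with the current frame
        # for speech processing; older frames have already been discarded.
        return b"".join(_frames)
    return None   # nothing leaves the device until the wakeword is heard
```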

David Emm, Principal Security Researcher at Kaspersky Lab, said:

“Many Amazon Alexa users will likely be alarmed by today’s news that the company’s latest patent would allow the devices – commonplace in homes across the UK – to record everything a person says before even being given a command. Whilst the patent doesn’t suggest it will be installed in future Alexa-enabled devices, this still signals an alarming development in the further surrender of our personal privacy.

Given the amount of sensitive information exchanged in the comfort of people’s homes, Amazon would be able to access a huge volume of personal information – information that would be of great value to cybercriminals and threat actors. If the data isn’t secured effectively, a successful breach of Amazon’s systems could have a severe knock-on effect on the data security and privacy of huge numbers of people.

If this patent comes into effect, consumers need to be made very aware of the ramifications of this – and to be fully briefed on what data is being collected, how it is being used, and how they can opt out of this collection. Amazon may argue that analysing stored data will make their devices smarter for Alexa owners – but in today’s digital era, such information could be used nefariously, even by trusted parties. For instance, as we saw with Cambridge Analytica, public sector bodies could target election campaigns at those discussing politics.

There’s a world of difference between temporary local storage of sentences, to determine if the command word has been used, and bulk retention of data for long periods, or permanently – even if the listening process is legitimate and consumers have opted in. There have already been criticisms of Amazon for not making it clear what is being recorded and stored – and we are concerned that this latest development shows the company moving in the wrong direction – away from data visibility, privacy, and consent.”

There’s a joke about Uber: society used to tell you not to get into cars with strangers, and now you’re encouraged to order one from your phone. Lyft has been able to ride in Uber’s wake relatively free of negative PR.

Getting the balance right between innovation and safety can be a difficult task. Pioneers often do things first and face the backlash before it actually becomes somewhat normal. That’s not advocating Amazon’s possible approach, but we’ve got to be careful outrage doesn’t halt progress while remaining vigilant of actual dangers.

Interested in hearing industry leaders discuss subjects like this and their use cases? Attend the co-located AI & Big Data Expo events with upcoming shows in Silicon Valley, London, and Amsterdam to learn more. Co-located with the IoT Tech Expo, Blockchain Expo, and Cyber Security & Cloud Expo.
