speech recognition Archives - AI News
https://www.artificialintelligence-news.com/tag/speech-recognition/

Meta’s open-source speech AI models support over 1,100 languages
https://www.artificialintelligence-news.com/2023/05/23/meta-open-source-speech-ai-models-support-over-1100-languages/
Tue, 23 May 2023 12:46:19 +0000

The post Meta’s open-source speech AI models support over 1,100 languages appeared first on AI News.

Advancements in machine learning and speech recognition technology have made information more accessible to people, particularly those who rely on voice to access information. However, the lack of labelled data for numerous languages poses a significant challenge in developing high-quality machine-learning models.

In response to this problem, the Meta-led Massively Multilingual Speech (MMS) project has made remarkable strides in expanding language coverage and improving the performance of speech recognition and synthesis models.

By combining self-supervised learning techniques with a diverse dataset of religious readings, the MMS project has expanded coverage from the roughly 100 languages supported by existing speech recognition models to over 1,100.

Breaking down language barriers

To address the scarcity of labelled data for most languages, the MMS project utilised religious texts, such as the Bible, which have been translated into numerous languages.

These translations provided publicly available audio recordings of people reading the texts, enabling the creation of a dataset comprising readings of the New Testament in over 1,100 languages.

By including unlabelled recordings of other religious readings, the project expanded language coverage to recognise over 4,000 languages.

Despite the dataset’s specific domain and predominantly male speakers, the models performed equally well for male and female voices. Meta also says the models did not exhibit any religious bias.

Overcoming challenges through self-supervised learning

The labelled dataset provides only around 32 hours of audio per language: far too little to train conventional supervised speech recognition models.

To overcome this limitation, the MMS project leveraged the benefits of the wav2vec 2.0 self-supervised speech representation learning technique.

By training self-supervised models on approximately 500,000 hours of speech data across 1,400 languages, the project significantly reduced the reliance on labelled data.

The resulting models were then fine-tuned for specific speech tasks, such as multilingual speech recognition and language identification.

Impressive results

Evaluation of the models trained on the MMS data revealed impressive results. In a comparison with OpenAI’s Whisper, the MMS models exhibited half the word error rate while covering 11 times more languages.
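Word error rate, the metric behind the comparison above, is the word-level edit distance between a reference transcript and a model's hypothesis, divided by the number of reference words. A minimal illustrative sketch (not the MMS evaluation code):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits needed to turn ref[:i] into hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

print(wer("the cat sat on the mat", "the cat sat on mat"))
# roughly 0.167: one deletion across six reference words
```

"Half the word error rate" therefore means the MMS models made roughly half as many word-level mistakes per reference word as Whisper on the evaluated languages.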

Furthermore, the MMS project successfully built text-to-speech systems for over 1,100 languages. Despite the limitation of having relatively few different speakers for many languages, the speech generated by these systems exhibited high quality.

While the MMS models have shown promising results, it is essential to acknowledge their imperfections. Mistranscriptions or misinterpretations by the speech-to-text model could result in offensive or inaccurate language. The MMS project emphasises collaboration across the AI community to mitigate such risks.

You can read the MMS paper here or find the project on GitHub.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Zoom receives backlash for emotion-detecting AI
https://www.artificialintelligence-news.com/2022/05/19/zoom-receives-backlash-for-emotion-detecting-ai/
Thu, 19 May 2022 08:22:19 +0000

The post Zoom receives backlash for emotion-detecting AI appeared first on AI News.

Zoom has caused a stir following reports that it’s developing an AI system for detecting emotions.

The system, first reported by Protocol, claims to scan users’ faces and their speech to determine their emotions.

Zoom detailed the system further in a blog post last month. The company says ‘Zoom IQ’ will be particularly useful for helping salespeople improve their pitches based on the emotions of call participants.

Naturally, the system is seen as rather dystopian and has received its fair share of criticism.

On Wednesday, over 25 rights groups sent a joint letter to Zoom CEO Eric Yuan. The letter urges Zoom to cease research on emotion-based AI.

The letter’s signatories include the American Civil Liberties Union (ACLU), Muslim Justice League, and Access Now.

One of the key concerns is that emotion-detecting AI could be used for consequential decisions such as hiring or whether to grant loans, which risks entrenching existing inequalities.

“Results are not intended to be used for employment decisions or other comparable decisions. All recommended ranges for metrics are based on publicly available research,” Zoom explained.

Zoom IQ tracks metrics including:

  • Talk-listen ratio
  • Talking speed
  • Filler words
  • Longest spiel (monologue)
  • Patience
  • Engaging questions
  • Next steps set up
  • Sentiment/Engagement analysis
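A couple of these metrics are simple to derive from diarised transcript segments. The sketch below is purely hypothetical (not Zoom's implementation) and assumes segments tagged with a speaker label, a duration in seconds, and transcribed text:

```python
FILLERS = {"um", "uh", "like", "basically"}  # illustrative filler-word list

def talk_listen_ratio(segments, salesperson="rep"):
    """segments: list of (speaker, seconds, text) tuples."""
    talk = sum(sec for spk, sec, _ in segments if spk == salesperson)
    listen = sum(sec for spk, sec, _ in segments if spk != salesperson)
    return talk / listen if listen else float("inf")

def filler_count(segments, salesperson="rep"):
    """Count filler words spoken by the salesperson."""
    words = " ".join(t for spk, _, t in segments if spk == salesperson).lower().split()
    return sum(1 for w in words if w in FILLERS)

segments = [
    ("rep", 30.0, "um so this plan uh scales with usage"),
    ("customer", 60.0, "what does it cost"),
    ("rep", 30.0, "like forty dollars a seat"),
]
print(talk_listen_ratio(segments))  # 1.0
print(filler_count(segments))       # 3
```

Metrics like these are mechanical word and timing counts; the "sentiment/engagement analysis" item is the part that drew the emotion-inference criticism.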

Esha Bhandari, Deputy Director of the ACLU Speech, Privacy, and Technology Project, called emotion-detecting AI “creepy” and “a junk science”.

(Photo by iyus sugiharto on Unsplash)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

DeepMind co-founder Mustafa Suleyman launches new AI venture
https://www.artificialintelligence-news.com/2022/03/09/deepmind-co-founder-mustafa-suleyman-launches-new-ai-venture/
Wed, 09 Mar 2022 12:08:56 +0000

The post DeepMind co-founder Mustafa Suleyman launches new AI venture appeared first on AI News.

DeepMind co-founder Mustafa Suleyman has joined two other high-profile industry figures in launching a new venture called Inflection AI.

LinkedIn co-founder Reid Hoffman is joining Suleyman on the venture.

“Reid and I are excited to announce that we are co-founding a new company, Inflection AI,” wrote Suleyman in a statement.

“Inflection will be an AI-first consumer products company, incubated at Greylock, with all the advantages and expertise that come from being part of one of the most storied venture capital firms in the world.”

Dr Karén Simonyan, another former DeepMind AI expert, will serve as Inflection AI’s chief scientist and its third co-founder.

“Karén is one of the most accomplished deep learning leaders of his generation. He completed his PhD at Oxford, where he designed VGGNet and then sold his first company to DeepMind,” continued Suleyman.

“He created and led the deep learning scaling team and played a key role in such breakthroughs as AlphaZero, AlphaFold, WaveNet, and BigGAN.”

Inflection AI will focus on machine learning and natural language processing.

“Recent advances in artificial intelligence promise to fundamentally redefine human-machine interaction,” explains Suleyman.

“We will soon have the ability to relay our thoughts and ideas to computers using the same natural, conversational language we use to communicate with people. Over time these new language capabilities will revolutionise what it means to have a digital experience.”

Interest in natural language processing is surging. This month, Microsoft completed its $19.7 billion acquisition of Siri voice recognition engine creator Nuance.

Suleyman departed Google in January 2022 following an eight-year stint at the company.

While at Google, Suleyman was placed on administrative leave following bullying allegations. During a podcast, he said that he “really screwed up” and was “very sorry about the impact that caused people and the hurt people felt.”

Suleyman joined venture capital firm Greylock after leaving Google.

“There are few people who are as visionary, knowledgeable and connected across the vast artificial intelligence landscape as Mustafa,” wrote Hoffman, a Greylock partner, in a post at the time.

“Mustafa has spent years thinking about how technological advances impact society, and he cares deeply about the ethics and governance supporting new AI systems.”

Inflection AI was incubated by Greylock. Suleyman and Hoffman will both remain venture partners at the company.

Suleyman promises that more details about Inflection AI’s product plans will be provided over the coming months.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo. The next events in the series will be held in Santa Clara on 11-12 May 2022, Amsterdam on 20-21 September 2022, and London on 1-2 December 2022.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Microsoft acquires Nuance to usher in ‘new era of outcomes-based AI’
https://www.artificialintelligence-news.com/2022/03/08/microsoft-acquires-nuance-new-era-outcomes-based-ai/
Tue, 08 Mar 2022 15:46:00 +0000

The post Microsoft acquires Nuance to usher in ‘new era of outcomes-based AI’ appeared first on AI News.

Microsoft has completed its acquisition of Siri backend creator Nuance in a bumper deal that it says will usher in a “new era of outcomes-based AI”.

“Completion of this significant and strategic acquisition brings together Nuance’s best-in-class conversational AI and ambient intelligence with Microsoft’s secure and trusted industry cloud offerings,” said Scott Guthrie, Executive Vice President of the Cloud + AI Group at Microsoft. 

“This powerful combination will help providers offer more affordable, effective, and accessible healthcare, and help organisations in every industry create more personalised and meaningful customer experiences. I couldn’t be more pleased to welcome the Nuance team to our Microsoft family.”

Nuance became a household name (in techie households, anyway) for creating the speech recognition engine that powers Apple’s smart assistant, Siri. However, Nuance has been in the speech recognition business since 2001 when it was known as ScanSoft.

While it may not have made many big headlines in recent years, Nuance has continued to make some impressive advancements—which caught the attention of Microsoft.

Microsoft announced its intention to acquire Nuance for $19.7 billion last year, the company’s second-largest deal after its $26.2 billion acquisition of LinkedIn (both would be blown out of the water by Microsoft’s proposed $70 billion purchase of Activision Blizzard).

The proposed acquisition of Nuance caught the attention of global regulators. It was cleared in the US relatively quickly, while the EU’s regulator got in the festive spirit and cleared the deal just prior to last Christmas. The UK’s Competition and Markets Authority finally gave it a thumbs-up last week.

Regulators examined whether there may be anti-competition concerns in some verticals where both companies are active, such as healthcare. However, after investigation, the regulators determined that competition shouldn’t be affected by the deal.

The EU, for example, determined that “competing transcription service providers in healthcare do not depend on Microsoft for cloud computing services” and that “transcription service providers in the healthcare sector are not particularly important users of cloud computing services”.

Furthermore, the EU’s regulator concluded:

  • Microsoft-Nuance will continue to face stiff competition from rivals in the future.
  • There’d be no ability/incentive to foreclose existing market solutions.
  • Nuance can only use the data it collects for its own services.
  • The data will not provide Microsoft with an advantage to shut out competing software providers.

The companies appear keen to ensure that people are aware the deal is about more than just healthcare.

“Combining the power of Nuance’s deep vertical expertise and proven business outcomes across healthcare, financial services, retail, telecommunications, and other industries with Microsoft’s global cloud ecosystems will enable us to accelerate our innovation and deploy our solutions more quickly, more seamlessly, and at greater scale to solve our customers’ most pressing challenges,” said Mark Benjamin, CEO of Nuance.

Benjamin will remain the CEO of Nuance and will report to Guthrie.

(Photo by Omid Armin on Unsplash)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo. The next events in the series will be held in Santa Clara on 11-12 May 2022, Amsterdam on 20-21 September 2022, and London on 1-2 December 2022.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

EU clears $19.7B Microsoft-Nuance deal without any small print
https://www.artificialintelligence-news.com/2021/12/22/eu-clears-19-7b-microsoft-nuance-deal-without-small-print/
Wed, 22 Dec 2021 12:27:33 +0000

The post EU clears $19.7B Microsoft-Nuance deal without any small print appeared first on AI News.

The EU has concluded Microsoft’s $19.7 billion acquisition of Nuance doesn’t pose competition concerns.

Nuance gained renown for originally creating the backend of that little old virtual assistant called Siri (you might have heard of it?)

The company has since continued to build out its speech recognition capabilities and offers a number of solutions, from industry-specific products such as those for healthcare to general omnichannel customer experience services.

Earlier this year, Microsoft decided Nuance was worth coughing up $19.7 billion for.

As such large deals often do, the proposed acquisition caught the eyes of several global regulators. In the case of the EU, it was referred to the Commission’s regulators on 16 November.

The regulator said on Tuesday that the proposed acquisition “would raise no competition concerns” within the bloc and that “Microsoft and Nuance offer very different products” after looking at potential horizontal overlaps between the companies’ transcription solutions.

Vertical links in the healthcare space were also analysed but it was determined that “competing transcription service providers in healthcare do not depend on Microsoft for cloud computing services” and that “transcription service providers in the healthcare sector are not particularly important users of cloud computing services”.

Furthermore, the regulator concluded:

  • Microsoft-Nuance will continue to face stiff competition from rivals in the future.
  • There’d be no ability/incentive to foreclose existing market solutions.
  • Nuance can only use the data it collects for its own services.
  • The data will not provide Microsoft with an advantage to shut out competing software providers.

The EU’s decision mirrors that of regulators in the US and Australia. However, the UK’s Competition and Markets Authority (CMA) announced its own investigation earlier this month.

When it announced the deal, Microsoft said that it aims to complete its acquisition by the end of 2021. The CMA is accepting comments until 10 January 2022 so it seems that Microsoft may have to hold out a bit longer.

(Photo by Annie Spratt on Unsplash)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo. The next events in the series will be held in Santa Clara on 11-12 May 2022, Amsterdam on 20-21 September 2022, and London on 1-2 December 2022.

Rishabh Mehrotra, research lead, Spotify: Multi-stakeholder thinking with AI
https://www.artificialintelligence-news.com/2021/09/24/rishabh-mehrotra-research-lead-spotify-multi-stakeholder-thinking-with-ai/
Fri, 24 Sep 2021 13:29:52 +0000

The post Rishabh Mehrotra, research lead, Spotify: Multi-stakeholder thinking with AI appeared first on AI News.

Streaming behemoth Spotify hosts more than seventy million songs and close to three million podcast titles on its platform.

Delivering this without artificial intelligence (AI) would be comparable to traversing the Amazon rainforest armed with nothing but a spoon.

To cut – or scoop – through this jungle of music, Spotify’s research team deploy hundreds of machine learning models that improve the user experience, all the while trying to balance the needs of users and creators.

AI News caught up with Spotify research lead Rishabh Mehrotra at the AI & Big Data Expo Global on September 7 to learn more about how AI supports the platform.

AI News: How important is AI to Spotify’s mission?

Rishabh Mehrotra: AI is at the centre of what we do. Machine learning (ML) specifically has become an indispensable tool for powering personalised music and podcast recommendations to more than 365 million users across the world. It enables us to understand user needs and intents, which then helps us to deliver personalised recommendations across various touch points on the app.

It’s not just about the actual models which we deploy in front of users but also the various AI techniques we use to adopt a data driven process around experimentation, metrics, and product decisions.

We use a broad range of AI methods to understand our listeners, creators, and content. Some of our core ML research areas include understanding user needs and intents, matching content and listeners, balancing user and creator needs, using natural language understanding and multimedia information retrieval methods, and developing models that optimise long term rewards and recommendations.

What’s more, our models power experiences across around 180 countries, so we have to consider how they are performing across markets. Striking a balance between pushing global music but still facilitating local musicians and music culture is one of our most important AI initiatives.

AN: Spotify users might be surprised to learn just how central AI is to almost every aspect of the platform’s offering. It’s so seamless that I suspect most people don’t even realise it’s there. How crucial is AI to the user experience on Spotify?

RM: If you look at Spotify as a user then you typically view it as an app which gives you the content that you’re looking for. However, if you really zoom in you see that each of these different recommendation tools are all different machine learning products. So if you look at the homepage, we have to understand user intent in a far more subtle way than we would with a search query. The homepage is about giving recommendations based on a user’s current needs and context, which is very different from a search query where users are explicitly asking for something. Even in search, users can seek open and non-focused queries like ‘relaxing music’, or you could be searching the name of a specific song.

Looking at sequential radio sessions, our models try to balance familiar music with discovery content, aimed at not only recommending content users could enjoy at the moment, but optimising for long term listener-artist connections.

A good amount of our ML models are starting to become multi-objective. Over the past two years, we have deployed a lot of models that try to fulfil listener needs whilst also enabling creators to connect with and grow their audiences.

AN: Are artists’ wants and needs a big consideration for Spotify or is the focus primarily on the user experience?

RM: Our goal is to match the creators with the fans in an enriching way. While understanding user preferences is key to the success of our recommendation models, it really is a two-sided market in a lot of ways. We have the users who want to consume audio content on one side and the creators looking to grow their audiences on the other. Thus a lot of our recommendation products have a multi-stakeholder thinking baked into them to balance objectives from both sides.
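One common way to bake two-sided objectives into a ranker is to score each candidate as a weighted blend of per-stakeholder utilities. This is a generic sketch of the idea, not Spotify's production system; the field names and weights are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class Track:
    title: str
    user_affinity: float   # predicted listener satisfaction, 0..1 (hypothetical)
    creator_boost: float   # value of exposure to the artist, 0..1 (hypothetical)

def rank(tracks, w_user=0.7, w_creator=0.3):
    """Rank candidates by a weighted blend of listener and creator objectives."""
    score = lambda t: w_user * t.user_affinity + w_creator * t.creator_boost
    return sorted(tracks, key=score, reverse=True)

candidates = [
    Track("familiar hit", user_affinity=0.9, creator_boost=0.1),
    Track("emerging artist", user_affinity=0.6, creator_boost=0.9),
]
for t in rank(candidates):
    print(t.title)
# prints "emerging artist" then "familiar hit"
```

With these weights, a slightly less familiar track from an emerging artist can outrank a safe favourite (0.69 vs 0.66), which is the balancing act a multi-stakeholder objective encodes.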

AN: Apart from music recommendations and suggestions, does AI support Spotify in any other ways?

RM: AI plays an important role in driving our algotorial approach: expert curators with an excellent sense for what’s up and coming quite literally teach our machine learning system. Through this approach, we can create playlists that not only look at past data but also reflect cultural trends as they’re happening. Across all regions, we have editors who bring in deep domain expertise about music culture that we use proactively in our products. This allows us to develop and deploy human-in-the-loop AI techniques that can leverage editorial input to bootstrap various decisions that various ML models can then scale.

AN: What about podcasts? Do you utilise AI differently when applying it to podcasts over music?

RM: Users’ podcast journeys can differ in a lot of ways compared to music. While music is a lot about the audio and acoustic properties of songs, podcasts depend on a whole different set of parameters. For one, it’s much more about content understanding – understanding speakers, types of conversations and topics of discussions.

That said, we are seeing some very interesting results using music taste for podcast recommendations too. Members of our group have recently published work that shows how our ML models can leverage users’ music preferences to recommend podcasts, and some of these results have demonstrated significant improvements, especially for new podcast users.

AN: With so many models already turning the cogs at Spotify, it’s difficult to see how new and exciting use cases could be introduced. What are Spotify’s AI plans for the coming years?

RM: We’re working on a number of ways to elevate the experience even further. Reinforcement learning will be an important focus point as we look into ways to optimise for a lifetime of fulfilling content, rather than optimising for the next stream. In a sense, this isn’t just about giving users what they want right now; it’s about evolving their tastes and looking at their long-term trajectories.

AN: As the years go on and your models have more and more data to work with, will the AI you use naturally become more advanced?

RM: A lot of our ML investments are not only about incorporating state-of-the-art ML into our products, but also extending the state-of-the-art based on the unique challenges we face as an audio platform. We are developing advanced causal inference techniques to understand the long term impact of our algorithmic decisions. We are innovating in the multi-objective ML modelling space to balance various objectives as part of our two-sided marketplace efforts. We are gravitating towards learning from long term trajectories and optimising for long term rewards.

To make data-driven decisions across all such initiatives, we rely heavily on solid scientific experimentation techniques, which also heavily relies on using machine learning.

Reinforcement learning furthers the scope of longer term decisions – it brings that long term perspective into our recommendations. So a quick example would be facilitating discovery on the platform. As a marketplace platform, we want users to not only consume familiar music but to also discover new music, leveraging the value of recommendations. There are 70 million tracks on the platform and only a few thousand will be familiar to any given user, putting aside the fact that it would take an individual several lifetimes to actually go through all this content. So tapping into that remaining 69.9 million and surfacing content users would love to discover is a key long-term goal for us.

Deciding how to fulfil users’ long-term discovery needs, when to surface such discovery content, by how much, for which sets of users, and across which recommended sets: these are a few examples of the higher-abstraction, long-term problems that RL approaches allow us to tackle well.

AN: Finally, considering the involvement Spotify has in directing users’ musical experiences, does the company have to factor in any ethical issues surrounding its usage of AI?

RM: Algorithmic responsibility and causal influence are topics we take very seriously and we actively work to ensure our systems operate in a fair and responsible manner, backed by focused research and internal education to prevent unintended biases.

We have a team dedicated to ensuring we approach these topics with the right research-informed rigour and we also share our learnings with the research community.

AN: Is there anything else you would like to share?

RM: On a closing note, one thing I love about Spotify is that we are very open with the wider industry and research community about the advances we are making with AI and machine learning. We actively publish at top tier venues, give tutorials, and we have released a number of large datasets to facilitate academic research on audio recommendations.

For anyone who is interested in learning more about this I would recommend checking out our Spotify Research website which discusses our papers, blogs, and datasets in greater detail.

Find out more about Digital Transformation Week North America, taking place on 9-10 November 2021, a virtual event and conference exploring advanced DTX strategies for a ‘digital everything’ world.

McDonald’s drive-thru AI bot may have broken privacy law
https://www.artificialintelligence-news.com/2021/06/11/mcdonalds-drive-thru-ai-bot-may-broken-privacy-law/
Fri, 11 Jun 2021 16:27:04 +0000

The post McDonald’s drive-thru AI bot may have broken privacy law appeared first on AI News.

McDonald’s announced earlier this month that it was deploying an AI chatbot to handle its drive-thru orders, but it turns out it might break privacy law.

The chatbot is the product of Apprente, a voice recognition company that McDonald’s snapped up in 2019 and which is now known as McD Tech Labs.

McDonald’s deployed the chatbots to ten of its restaurants in Chicago, Illinois. And there lies the issue.

The state of Illinois has some of the strictest data privacy laws in the country. For example, the state’s Biometric Information Privacy Act (BIPA) states: “No private entity may collect, capture, purchase, receive through trade, or otherwise obtain a person’s or a customer’s biometric identifier or biometric information.”

One resident, Shannon Carpenter, has sued McDonald’s on behalf of himself and other Illinois residents—claiming the fast food biz has broken BIPA by not receiving explicit written consent from its customers to process their voice data.

“Plaintiff, like the other class members, to this day does not know the whereabouts of his voiceprint biometrics which defendant obtained,” the lawsuit states.

The software is said to not only transcribe speech into text but also process it to predict personal information about the customers such as their “age, gender, accent, nationality, and national origin.”

Furthermore, the lawsuit alleges that McDonald’s has been testing AI software at its drive-thrus since last year.

Anyone whose rights under BIPA are found to have been violated is eligible for damages of up to $5,000 per violation. Given the huge number of McDonald’s customers, it’s estimated that damage payouts could exceed $5 million.
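As a rough sanity check of those figures (a back-of-envelope estimate, not taken from the filing), BIPA’s $5,000 statutory maximum implies the $5 million estimate covers a class of only around a thousand customers:

```python
# Back-of-envelope check of the damages estimate above.
# BIPA allows up to $5,000 per reckless violation; a $5m total
# therefore implies a class of roughly 1,000 affected customers.
max_per_violation = 5_000
estimated_total = 5_000_000
implied_class_size = estimated_total // max_per_violation
print(implied_class_size)  # 1000
```

Given that the chatbot runs at ten busy restaurants, the real class could be far larger, which is why the $5 million figure is framed as a floor rather than a ceiling.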

Once again, this case shows the need to ensure that any AI deployment is fully compliant with increasingly strict data laws in every state and country in which it operates.

(Image Credit: Erik Mclean on Unsplash)

Find out more about Digital Transformation Week North America, taking place on November 9-10 2021, a virtual event and conference exploring advanced DTX strategies for a ‘digital everything’ world.

Hi Auto brings conversational AI to drive-thrus using Intel technology https://www.artificialintelligence-news.com/2021/05/20/hi-auto-conversational-ai-drive-thrus-intel-technology/ Thu, 20 May 2021 14:34:08 +0000

The post Hi Auto brings conversational AI to drive-thrus using Intel technology appeared first on AI News.

Hi Auto is increasing the efficiency of drive-thrus with a conversational AI system powered by Intel technologies.

Drive-thru usage has rocketed over the past year with many indoor restaurants closed due to pandemic-induced restrictions. In fact, research suggests that drive-thru orders in the US alone increased by 22 percent in 2020.

Long queues at drive-thrus have therefore become part of the “new normal” and fast food is no longer the convenient alternative to cooking after a long day of Zoom calls.

Israel-based Hi Auto has created a conversational AI system that greets drive-thru guests, answers their questions, suggests menu items, and enters their orders into the point-of-sale system. If an unrelated question is asked – or the customer orders something that is not on the standard menu – the AI system automatically switches over to a human employee.
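Hi Auto has not published how this routing is implemented; the menu, prices, and function names below are purely illustrative, but the bot-to-human handoff pattern the article describes can be sketched as:

```python
# Illustrative sketch of the handoff pattern described above.
# The menu, prices, and function names are hypothetical, not Hi Auto's API.
MENU = {"chicken sandwich": 4.99, "fries": 1.99, "lemonade": 2.49}

def handle_utterance(text: str, order: list) -> str:
    """Route one transcribed drive-thru utterance: take the order when the
    requested items are on the menu, otherwise escalate to a human employee."""
    items = [item for item in MENU if item in text.lower()]
    if not items:
        # Off-menu request or unrelated question: switch to a person.
        return "HANDOFF_TO_HUMAN"
    order.extend(items)
    # Simple upsell, mirroring the suggestive-selling behaviour described.
    if "lemonade" not in order:
        return f"Added {', '.join(items)}. Would you like a lemonade with that?"
    return f"Added {', '.join(items)}."

order: list = []
print(handle_utterance("a chicken sandwich and fries please", order))
print(handle_utterance("do you have milkshakes?", order))  # -> HANDOFF_TO_HUMAN
```

A production system would of course use a real intent classifier rather than substring matching; the point is the control flow: the bot owns the happy path and falls back to a human the moment it is unsure.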

The first restaurant to trial the system is Lee’s Famous Recipe Chicken in Ohio.

Chuck Doran, Owner and Operator at Lee’s Famous Recipe Chicken, said:

“The automated AI drive-thru has impacted my business in a simple way. We don’t have customers waiting anymore. We greet them as soon as they get to the board and the order is taken correctly.

It’s amazing to see the level of accuracy with the voice recognition technology, which helps speed up service. It can even suggest additional items based on the order, which helps us increase our sales.

If a person is running the drive-thru, they may suggest a sale in one out of 20 orders. With Hi Auto, it happens in every transaction where it’s feasible. So, we see improvements in our average check, service time, and improvements in consistency and customer service.

And, because the cashier is now less stressed, she can focus on customer service as well. A less-burdened employee will be a happier employee and we want happy employees interacting with our customers.”

By reducing the number of staff needed for customer service, more employees can be put to work on fulfilling orders to serve as many people as possible. A recent survey of small businesses found that 42 percent have job openings they cannot fill, so ensuring that every worker is optimally utilised is critical.

Roy Baharav, CEO and Co-Founder at Hi Auto, commented:

“At Lee’s, we met a team that puts its heart and soul into serving its customers.

We operationalised our AI system based on what we learned from the owners, general managers, and employees. They have embraced the solution and within a short time began reaping the benefits.

We are now applying the process and lessons learned at Lee’s at additional customer sites.”

Hi Auto’s solution runs on Intel Xeon processors in the cloud and on Intel NUC devices.

Joe Jensen, VP in the Internet of Things Group and GM of Retail, Banking, Hospitality and Education at Intel, said:

“We’re increasingly seeing restaurants interested in leveraging AI to deliver actionable data and personalise customer experiences.

With Hi Auto’s solution powered by Intel technology, quick-service restaurants can help their employees be more productive while increasing customer satisfaction and, ultimately, their bottom line.”

Lee’s Famous Recipe Chicken plans to roll out Hi Auto’s solution at more of its branches. A video of the conversational AI system in action can be viewed here:

Going forward, Hi Auto plans to add Spanish language support and continue optimising its conversational AI solution. The company says pilots are already underway with some of the largest quick-service restaurants.

(Image Credit: Lee’s Famous Recipe Chicken)

Interested in hearing industry leaders discuss subjects like this? Attend the co-located 5G Expo, IoT Tech Expo, Blockchain Expo, AI & Big Data Expo, and Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London, and Amsterdam.

Microsoft considers acquiring Siri creator Nuance for $16 billion https://www.artificialintelligence-news.com/2021/04/12/microsoft-considers-acquiring-siri-creator-nuance-16-billion/ Mon, 12 Apr 2021 10:05:54 +0000

The post Microsoft considers acquiring Siri creator Nuance for $16 billion appeared first on AI News.

Microsoft is considering acquiring Siri creator Nuance for $16 billion in a deal that’s expected to be announced imminently.

Silicon Valley darling Nuance is one of the world’s most recognisable AI companies for its creation of Siri, the voice assistant that went on to become Apple’s first-party solution.

Nuance now focuses its efforts predominantly on enterprise communications, particularly in the healthcare space which has seen increased interest amid the COVID-19 pandemic. Using technology from Nuance, medical staff can reduce their reliance on pen-and-paper and free up more time for improving patient outcomes.

Bloomberg initially reported Microsoft’s interest in Nuance.

Anurag Rana, Senior Analyst at Bloomberg Intelligence, commented in the report: “This can really help Microsoft accelerate the digitisation of the healthcare industry, which has lagged other sectors such as retail and banking.

“The biggest near-term benefit that I can see is in the area of telehealth, where Nuance’s transcription product is currently being used with Microsoft Teams.”

Nuance’s shares [NUAN] closed yesterday at $45.58, but Microsoft is reportedly prepared to pay $56 per share – a premium of roughly 23 percent. Nuance’s shares have since risen to $54.90 in pre-market trading, seemingly boosted by the rumours.

Microsoft has all but killed off its Cortana voice assistant for consumer-facing purposes, removing it from the App Store and Play Store earlier this month. Redmond now appears to be pivoting its efforts towards developing AI solutions for specific enterprise areas.

In addition to Nuance’s AI expertise, Microsoft would also be gaining access to the company’s over 3,500 patents.

If the deal goes ahead, it would represent Microsoft’s largest acquisition after its $26.2 billion takeover of LinkedIn. Microsoft is also reportedly in talks to acquire gaming communications platform Discord for over $10 billion. This could be a very big news month for Redmond.

Update: Well, we did say the deal was expected to be announced imminently. Mere hours after this article went live, Microsoft confirmed its acquisition of Nuance in an all-cash transaction valued at $19.7 billion.

“Nuance provides the AI layer at the healthcare point of delivery and is a pioneer in the real-world application of enterprise AI,” said Satya Nadella, CEO, Microsoft. “AI is technology’s most important priority, and healthcare is its most urgent application. Together, with our partner ecosystem, we will put advanced AI solutions into the hands of professionals everywhere to drive better decision-making and create more meaningful connections, as we accelerate growth of Microsoft Cloud for Healthcare and Nuance.”

“Over the past three years, Nuance has streamlined its portfolio to focus on the healthcare and enterprise AI segments, where there has been accelerated demand for advanced conversational AI and ambient solutions,” said Mark Benjamin, CEO, Nuance. “To seize this opportunity, we need the right platform to bring focus and global scale to our customers and partners to enable more personal, affordable and effective connections to people and care. The path forward is clearly with Microsoft —  who brings intelligent cloud-based services at scale and who shares our passion for the ways technology can make a difference. At the same time, this combination offers a critical opportunity to deliver meaningful and certain value to our shareholders who have driven and supported us on this journey.”

“Nuance not only brings an attractive set of healthcare customers in AI – a huge bid for Microsoft – but also deep domain expertise as well. This has been the missing ingredient for Microsoft until now,” comments Nicholas McQuire, Chief of Enterprise Research at CCS Insight.

“In the past, we have seen firms like IBM buy industry specialism in datasets but Nuance delivers Microsoft a more mature set of AI solutions in areas such as speech recognition, document processing, fraud detection, and image recognition. Ultimately these will prove key to differentiating Azure to healthcare customers against its largely horizontal competitors.”

(Image Credit: Apple)


Researchers achieve 94% power reduction for on-device AI tasks https://www.artificialintelligence-news.com/2020/09/17/researchers-achieve-power-reduction-on-device-ai-tasks/ Thu, 17 Sep 2020 15:47:52 +0000

The post Researchers achieve 94% power reduction for on-device AI tasks appeared first on AI News.

Researchers from Applied Brain Research (ABR) have achieved significantly reduced power consumption for a range of AI-powered devices.

ABR designed a new neural network called the Legendre Memory Unit (LMU). With the LMU, on-device AI tasks – such as those on speech-enabled devices like wearables, smartphones, and smart speakers – can use up to 94 percent less power.

The reduction in power consumption achieved through the LMU will be particularly beneficial to smaller form-factor devices such as smartwatches, which struggle with small batteries. IoT devices that carry out AI tasks – but may have to last months, if not years, before they’re replaced – should also benefit.

The LMU is a recurrent neural network (RNN) that enables lower-power and more accurate processing of time-varying signals.

ABR says the LMU can be used to build AI networks for all time-varying tasks—such as speech processing, video analysis, sensor monitoring, and control systems.

The AI industry’s current go-to model is the Long Short-Term Memory (LSTM) network. The LSTM was first proposed back in 1995 and is used in most popular speech recognition and translation services today, such as those from Google, Amazon, Facebook, and Microsoft.

Last year, researchers from the University of Waterloo debuted LMU as an alternative RNN to LSTM. Those researchers went on to form ABR, which now consists of 20 employees.

Peter Suma, co-CEO of Applied Brain Research, said in an email:

“We are a University of Waterloo spinout from the Theoretical Neuroscience Lab at UW. We looked at how the brain processes signals in time and created an algorithm based on how “time-cells” in your brain work.

We called the new AI, a Legendre-Memory-Unit (LMU) after a mathematical tool we used to model the time cells. The LMU is mathematically proven to be optimal at processing signals. You cannot do any better. Over the coming years, this will make all forms of temporal AI better.”
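ABR has not released its production code, but the memory update at the heart of the LMU is given in the NeurIPS 2019 paper: a linear system whose coupling matrices are derived from Legendre polynomials. A minimal NumPy sketch (the order d, window theta, and step size dt below are illustrative choices, not ABR’s settings) might look like:

```python
import numpy as np

def lmu_matrices(d: int):
    """Continuous-time (A, B) of the order-d Legendre delay system,
    following Voelker et al. (NeurIPS 2019)."""
    A = np.zeros((d, d))
    for i in range(d):
        for j in range(d):
            A[i, j] = (2 * i + 1) * (-1.0 if i < j else (-1.0) ** (i - j + 1))
    B = np.array([(2 * i + 1) * (-1.0) ** i for i in range(d)]).reshape(d, 1)
    return A, B

def lmu_memory(signal, d: int = 8, theta: float = 1.0, dt: float = 0.01):
    """Euler-discretised memory update m += (dt/theta) * (A m + B u),
    run over a 1-D signal; returns the final d-dimensional memory state."""
    A, B = lmu_matrices(d)
    m = np.zeros((d, 1))
    for u in signal:
        m = m + (dt / theta) * (A @ m + B * u)
    return m.ravel()

state = lmu_memory(np.sin(np.linspace(0, 2 * np.pi, 100)))
print(state.shape)  # (8,)
```

The full LMU wraps this linear memory in a nonlinear hidden layer; because the memory itself is fixed and linear, far fewer parameters need to be learned than in an LSTM, which is where the size and power savings come from.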

ABR presented a paper during the NeurIPS conference in late 2019 which demonstrated that the LMU is 1,000,000x more accurate than the LSTM while encoding 100x more time-steps.

In terms of size, the LMU model is also smaller: it uses 500 parameters versus the LSTM’s 41,000 (a 98 percent reduction in network size).
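As a quick check of the quoted figures, using only the numbers in the article (the implied baseline power draw is an inference, not a measured value):

```python
# Size: 500 LMU parameters vs 41,000 LSTM parameters.
lstm_params, lmu_params = 41_000, 500
print(f"reduction: {1 - lmu_params / lstm_params:.1%}")  # reduction: 98.8%

# Power: ~8 microwatts quoted as 94 percent below the best on the market,
# implying a baseline of roughly 133 microwatts.
implied_baseline_uw = 8e-6 / (1 - 0.94) * 1e6
print(f"implied baseline: {implied_baseline_uw:.0f} microwatts")
```

So the "98 percent" size figure is, if anything, slightly conservative (the exact value is 98.8 percent).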

“We implemented our speech recognition with the LMU and it lowered the power used for command word processing to ~8 millionths of a watt, which is 94 percent less power than the best on the market today,” says Suma. “For full speech, we got the power down to 4 milli-watts, which is about 70 percent smaller than the best out there.”

Suma says the next step for ABR is to work on video, sensor, and drone-control AI processing, making those models smaller and better too.

A full whitepaper detailing the LMU and its benefits can be found on the preprint repository arXiv.

