bots Archives - AI News
https://www.artificialintelligence-news.com/tag/bots/

Stefano Somenzi, Athics: On no-code AI and deploying conversational bots
Fri, 12 Nov 2021

No-code AI solutions are helping more businesses to get started on their AI journeys than ever. Athics, through its Crafter.ai platform for deploying conversational bots, knows a thing or two about the topic.

AI News caught up with Stefano Somenzi, CTO at Athics, to get his thoughts on no-code AI and the development of virtual agents.

AI News: Do you think “no-code” will help more businesses to begin their AI journeys?

Stefano Somenzi: The real advantage of “no code” is not just the reduced effort required for businesses to get things done; it also changes the role of the user who builds the AI solution. In our case, that is a conversational AI agent.

“No code” means that the AI solution is built not by a data scientist but by the process owner. The process owner is best placed to know what the AI solution should deliver and how. But if coding is required, the process owner has to translate his or her requirements into a data scientist’s language.

This requires much more time and is affected by the “lost in translation” syndrome that hinders many IT projects. That’s why “no code” will play a major role in helping companies approach AI.

AN: Research from PwC found that 71 percent of US consumers would rather interact with a human than a chatbot or some other automated process. How can businesses be confident that bots created through your Crafter.ai platform will improve the customer experience rather than worsen it?

SS: Even the most advanced conversational AI agents, like ours, are not suited to replace a direct consumer-to-human interaction if what the consumer is looking for is the empathy that today only a human is able to show during a conversation.

At the same time, inefficiencies, errors, and lack of speed are among the most frequent causes of consumer dissatisfaction and hamper customer service performance.

Advanced conversational AI agents are the right tool to reduce these inefficiencies and errors while delivering strong customer service performance at speed.

AN: What kind of real-time feedback is provided to your clients about their customers’ behaviour?

SS: Recognising the importance of a hybrid environment, where human and machine interaction are wisely mixed to leverage the best of both worlds, our Crafter.ai platform has been designed from the ground up with a module that manages the handover of the conversations between the bot and the call centre agents.

During a conversation, a platform user – with the right authorisation levels – can access an insights dashboard to check the key performance indicators that have been identified for the bot.

This is also true during the handover when agents and their supervisors receive real-time information on the customer behaviour during the company site navigation. Such information includes – and is not limited to – visited pages, form field contents, and clicked CTAs, and can be complemented with data collected from the company CRM.
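Handover context of this kind can be pictured as a simple structured payload assembled from the session. This is a minimal sketch; the field names and helper below are purely illustrative, not Crafter.ai’s actual schema or API.

```python
# Hypothetical handover payload passed from a bot to a human agent.
# Field names are illustrative only, not the actual Crafter.ai schema.

def build_handover_context(session):
    """Collect real-time behavioural signals for the receiving agent."""
    return {
        "visited_pages": session.get("pages", []),      # site navigation trail
        "form_fields": session.get("form_fields", {}),  # partially completed forms
        "clicked_ctas": session.get("ctas", []),        # calls-to-action clicked
        "crm_profile": session.get("crm", {}),          # optional CRM enrichment
    }

session = {
    "pages": ["/pricing", "/contact"],
    "form_fields": {"email": "user@example.com"},
    "ctas": ["request-demo"],
}
context = build_handover_context(session)
```

Keeping the CRM enrichment optional, as here, lets the same payload work whether or not a company connects its CRM.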

AN: Europe is home to some of the strictest data regulations in the world. As a European organisation, do you think such regulations are too strict, not strict enough, or about right?

SS: We think that any company that wants to gain the trust of its customers should do its best to go beyond even strict regulatory requirements.

AN: As conversational AIs progress to human-like levels, should it always be made clear that a person is speaking to an AI bot?

SS: Yes, a bot should always make clear that it is not human. In the end, that transparency can help people appreciate just how well bots can perform.

AN: What’s next for Athics?

SS: We have a solid roadmap for Crafter.ai, with many new features and improvements brought to the platform every three months.

Our sole focus is on advanced conversational AI agents. We are currently working to add more and more domain-specific capabilities to our bots.

Advanced profiling is a great area of interest where, thanks to our collaboration with universities and international research centres, we expect to deliver truly innovative solutions to our customers.

AN: Athics is sponsoring and exhibiting at this year’s AI & Big Data Expo Europe. What can attendees expect from your presence at the event? 

SS: Conversational AI agents allow businesses to obtain a balance between optimising resources and giving a top-class customer experience. Although there is no doubt regarding the benefits of adopting virtual agents, the successful integration across a company’s conversational streams needs to be correctly assessed, planned, and executed in order to leverage the full potential.

Athics will be at stand number 280 to welcome attending companies and give an overview of the advantages of integrating a conversational agent, explain how to choose the right product, and how to create a conversational vision that can scale and address organisational goals.

(Photo by Jason Leung on Unsplash)

Athics will be sharing their invaluable insights during this year’s AI & Big Data Expo Global which runs from 23-24 November 2021. Athics’ booth number is 280. Find out more about the event here.

Twitter begins labelling ‘good’ bots on the social media platform
Fri, 10 Sep 2021

Twitter is testing a new feature that will give the good kind of bots some due recognition.

Bots have become a particularly hot topic in recent years, but mainly for negative reasons. We’ve all seen their increased use to share propaganda to sway democratic processes and spread disinformation around things like COVID-19 vaccines.

However, despite their image problem, bots can be an important tool for good.

Some bots share critical information around things like severe weather, natural disasters, active shooters, and other emergencies. Others can be educational and provide facts or dig up historical events and artifacts to remind us of the past as we’re browsing on our modern devices.

On Thursday, Twitter announced that it’s testing a new label to let users know that an account is automated but posts legitimate content.

Twitter says the new feature is based on user research which found that people want more context about non-human accounts.

A study by Carnegie Mellon University last year found that almost half of Twitter accounts tweeting about the coronavirus pandemic were likely automated accounts. Twitter says it will continue to remove fake accounts that break its rules.

The move could be likened to Twitter’s verified accounts scheme that puts a little blue tick mark next to a user’s name to show others that it belongs to the person in question and isn’t a fake, often created for scam purposes.

However, unlike Twitter’s verified accounts scheme that provides no guarantees about the content of a user’s tweets, the social network is taking a bit of a gamble that tweets from a ‘good’ bot account will remain accurate.

(Photo by Jeremy Bezanger on Unsplash)

Find out more about Digital Transformation Week North America, taking place on 9-10 November 2021, a virtual event and conference exploring advanced DTX strategies for a ‘digital everything’ world.

Researchers from Microsoft and global leading universities study the ‘offensive AI’ threat
Fri, 02 Jul 2021

A group of researchers from Microsoft and seven leading global universities has conducted an industry study into the threat that offensive AI poses to organisations.

AI is a beneficial tool, but it is indiscriminate: it will just as readily assist individuals and groups that set out to cause harm.

The researchers’ study into offensive AI drew on both existing research into the subject and survey responses from organisations including Airbus, Huawei, and IBM.

Three core motivations were highlighted as to why an adversary would turn to AI: coverage, speed, and success.

While offensive AI threats come in many shapes, it’s the use of the technology for impersonation that has both academia and industry highly concerned. Deepfakes, for example, are growing in prevalence for purposes ranging from relatively innocuous comedy to the far more sinister: fraud, blackmail, exploitation, defamation, and the spreading of misinformation.

In the past, similar campaigns using fake or manipulated content have been slow and arduous, with little chance of success. AI not only makes such content easier to create but also allows organisations to be bombarded with phishing attacks, which greatly increases the chance of success.

Tools such as Microsoft’s Video Authenticator are helping to counter deepfakes but it will be an ongoing battle to keep up with their increasing sophistication.

Back when Google’s Duplex service was announced – which sounds like a real human to book appointments on a person’s behalf – concerns were raised that similar technology could be used to automate fraud. The researchers expect bots to gain the ability to make convincing deepfake phishing calls.

The researchers also predict an increased prevalence of offensive AI in “data collection, model development, training, and evaluation” in the coming years.

Here are the top 10 offensive AI concerns from both the perspectives of industry and academia:

Very few organisations are currently investing in ways to counter, or mitigate the fallout of, an offensive AI attack such as a deepfake phishing campaign.

The researchers recommend more research into post-processing tools that can protect software from analysis after development (i.e. anti-vulnerability detection), and that organisations extend the current MLOps paradigm to encompass ML security (MLSecOps), incorporating security testing, protection, and monitoring of AI/ML models.
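The kind of security testing MLSecOps implies can be illustrated with a toy robustness check: probe whether small input perturbations flip a model’s predictions. The threshold “model” below is a stand-in for a real deployed model, not anything from the paper.

```python
# Minimal sketch of an MLSecOps-style robustness smoke test: flag inputs
# whose prediction flips under small random noise. The model here is a toy
# threshold classifier standing in for a real deployed model.
import random

def model(x):
    return 1 if x >= 0.5 else 0

def robustness_test(model, inputs, epsilon=0.01, trials=20):
    """Return the inputs whose prediction flips under small perturbations."""
    fragile = []
    for x in inputs:
        base = model(x)
        for _ in range(trials):
            if model(x + random.uniform(-epsilon, epsilon)) != base:
                fragile.append(x)
                break
    return fragile

random.seed(0)
fragile = robustness_test(model, inputs=[0.1, 0.499, 0.9])
# 0.499 sits right at the decision boundary, so it gets flagged as fragile
```

A real pipeline would run checks like this (plus adversarial and data-poisoning tests) as a gate in CI before a model is promoted to production.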

You can find the full paper, The Threat of Offensive AI to Organizations, on arXiv here (PDF).

(Photo by Dan Dimmock on Unsplash)


IBM study highlights rapid uptake and satisfaction with AI chatbots
Tue, 27 Oct 2020

A study by IBM released this week highlights the rapid uptake of AI chatbots in addition to increasing customer satisfaction.

Most of us are hardwired to hate not speaking directly to a human when we have a problem—following years of irritating voicemail systems. However, perhaps the only thing worse is being on hold for an uncertain amount of time due to overwhelmed call centres.

Chatbots have come a long way and can now quickly handle most queries within minutes. Where a human is required, the reduced demand through using virtual agent technology (VAT) means customers can get the assistance they need more quickly.

The COVID-19 pandemic has greatly increased the adoption of VAT as businesses seek to maintain customer service through such a challenging time.

According to IBM’s study, 99 percent of organisations reported increased customer satisfaction by integrating virtual agents. Human agents also report increased satisfaction and IBM says those “who feel valued and empowered with the proper tools and support are more likely to deliver a better experience to customers.”

68 percent of leaders cite improving the human agent experience as being among their key reasons for adopting VAT. There’s also an economic incentive: the cost of replacing a dissatisfied agent who leaves a business is estimated at as much as 33 percent of the exiting employee’s salary.
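That replacement-cost figure is easy to make concrete. The salary in this sketch is an assumed example, not a number from the report:

```python
# Illustrative cost of replacing a dissatisfied call-centre agent, using the
# 'up to 33 percent of the exiting employee's salary' figure from the study.
# The salary below is an assumed example, not a figure from the report.
def replacement_cost(annual_salary, rate=0.33):
    return annual_salary * rate

cost = replacement_cost(40_000)  # assumed $40,000 annual salary
# roughly a third of a year's pay lost per departing agent
```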

IBM claims that VAT performance in the past has only been studied through individual case studies. The company set out, alongside Oxford Economics, to change that by surveying 1,005 respondents from companies using VAT daily.

Businesses wondering whether virtual assistants are worth the investment may be interested to know that 96 percent of the respondents “exceeded, achieved, or expect to achieve” their anticipated return.

On average, companies which have implemented VAT have increased their revenue by three percent.

IBM is one of the leading providers of chatbots through its Watson Assistant solution. While there’s little reason to doubt the claims made in the report, it’s worth keeping in mind that it’s not entirely unbiased.

Watson Assistant has gone from strength-to-strength and appears to have been among the few things which benefited from the pandemic. Between February and August, Watson Assistant usage increased by 65 percent.

You can download a full copy of IBM’s report here.

(Photo by Volodymyr Hryshchenko on Unsplash)

Interested in hearing industry leaders discuss subjects like this? Attend the co-located 5G Expo, IoT Tech Expo, Blockchain Expo, AI & Big Data Expo, and Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London, and Amsterdam.

Babylon Health lashes out at doctor who raised AI chatbot safety concerns
Wed, 26 Feb 2020

Controversial healthcare app maker Babylon Health has criticised the doctor who first raised concerns about the safety of their AI chatbot.

Babylon Health’s chatbot is available in the company’s GP at Hand app, a digital healthcare solution championed by health secretary Matt Hancock that has also been integrated into Samsung Health since last year.

The chatbot aims to reduce the burden on GPs and A&E departments by automating the triage process to determine whether someone can treat themselves at home, should book an online or in-person GP appointment, or go straight to a hospital.

A Twitter user under the pseudonym of Dr Murphy first reached out to us back in 2018 alleging that Babylon Health’s chatbot was giving unsafe advice. Dr Murphy recently unveiled himself as Dr David Watkins and went public with his findings at The Royal Society of Medicine’s “Recent developments in AI and digital health 2020“ event in addition to appearing on a BBC Newsnight report.

Over the past couple of years, Dr Watkins has provided many examples of the chatbot giving dangerous advice. In one example, an obese 48-year-old heavy smoker who presented with chest pains was advised to book a consultation “in the next few hours”. Anyone with any common sense would have said to dial an emergency number straight away.

This particular issue has since been rectified but Dr Watkins has highlighted many further examples over the years which show, very clearly, there are serious safety issues.

In a press release (PDF) on Monday, Babylon Health calls Dr Watkins a “troll” who has “targeted members of our staff, partners, clients, regulators and journalists and tweeted defamatory content about us”.

According to the release, Dr Watkins has conducted 2,400 tests of the chatbot in a bid to discredit the service while raising “fewer than 100 test results which he considered concerning”.

Babylon Health claims that Dr Watkins found genuine errors in just 20 cases, while the others were “misrepresentations” or “mistakes”, according to Babylon’s own “panel of senior clinicians”, who remain unnamed.

Speaking to TechCrunch, Dr Watkins called Babylon’s claims “utterly nonsense” and questions where the startup got its figures from as “there are certainly not 2,400 completed triage assessments”.

Dr Watkins estimates he has conducted between 800 and 900 full triages, some of which were repeat tests to see whether Babylon Health had fixed the issues he previously highlighted.

The doctor acknowledges that Babylon Health’s chatbot has improved, but says it still shows issues in around one in three instances. In 2018, when Dr Watkins first reached out to us and other outlets, he says this rate was “one in one”.
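Taking each side’s figures at face value, a quick calculation shows how far apart the two accounts are. The midpoint used for Dr Watkins’ 800-900 estimate is an assumption for illustration:

```python
# Comparing the implied error rates from the figures each side quotes.
babylon_tests = 2400        # tests Babylon says Dr Watkins ran
babylon_errors = 20         # cases Babylon concedes were genuine errors
watkins_tests = 850         # assumed midpoint of Watkins' 800-900 estimate
watkins_error_rate = 1 / 3  # roughly one in three, per Watkins

babylon_rate = babylon_errors / babylon_tests  # under 1 percent
# the two accounts differ by well over an order of magnitude
```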

While it’s one account versus the other, the evidence shows that Babylon Health’s chatbot has issued dangerous advice on a number of occasions. Dr Watkins has dedicated many hours to highlighting these issues to Babylon Health in order to improve patient safety.

Rather than welcome his efforts and work with Dr Watkins to improve their service, it seems Babylon Health has decided to go on the offensive and “try and discredit someone raising patient safety concerns”.

In their press release, Babylon accuses Watkins of posting “over 6,000” misleading attacks but without giving details of where. Dr Watkins primarily uses Twitter to post his findings. His account, as of writing, has tweeted a total of 3,925 times and not just about Babylon’s service.

This isn’t the first time Babylon Health’s figures have come into question. Back in June 2018, Babylon Health held an event where it boasted its AI beat trainee GPs at the MRCGP exam used for testing their ability to diagnose medical problems. The average pass mark is 72 percent. “How did Babylon Health do?” said Dr Mobasher Butt at the event, a director at Babylon Health. “It got 82 percent.”

Given the number of dangerous suggestions to trivial ailments the chatbot has given, especially at the time, it’s hard to imagine the claim that it beats trainee GPs as being correct. Intriguingly, the video of the event has since been deleted from Babylon Health’s YouTube account and the company removed all links to coverage of it from the “Babylon in the news” part of its website.

When asked why it deleted the content, Babylon Health said in a statement: “As a fast-paced and dynamic health-tech company, Babylon is constantly refreshing the website with new information about our products and services. As such, older content is often removed to make way for the new.”

AI solutions like those offered by Babylon Health will help to reduce the demand on health services and ensure people have access to the right information and care whenever and wherever they need it. However, patient safety must come first.

Mistakes are less forgivable in healthcare due to the risk of potentially fatal or life-changing consequences. The usual “move fast and break things” ethos in tech can’t apply here.

There’s a general acceptance that rarely is a new technology going to be without its problems, but people want to see that best efforts are being made to limit and address those issues. Instead of welcoming those pointing out issues with their service before it leads to a serious incident, it seems Babylon Health would rather blame everyone else for its faults.


Google’s Duplex booking AI often relies on humans for backup
Thu, 23 May 2019

Google Duplex often calls on humans for backup when making reservations on behalf of users, and that should be welcomed.

Duplex caused a stir when it debuted at Google’s I/O developer conference last year. The AI was shown calling a hair salon to make a booking and did so complete with human-like “ums” and “ahs”.

The use of such human mannerisms goes to show Google’s intention was for the human to be unaware they’re in conversation with an AI. Following some outcry, Google and other tech giants have pledged to make it clear to humans if they’re not speaking to another person.

Duplex is slowly rolling out and is available to Pixel smartphone owners in the US. However, it turns out that Duplex bookings are currently often carried out by humans in call centres.

Google confirmed to the New York Times that about 25 percent of the Assistant-based calls start with a human in a call centre, while 15 percent require human intervention. Times reporters Brian Chen and Cade Metz made four sample reservations and just one was completed start to finish by the AI.
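On one possible reading of those figures (25 percent of calls start with a human, and 15 percent of the AI-initiated remainder later need intervention), the fully automated share can be worked out as follows. The interpretation is an assumption, since the report does not break the numbers down further:

```python
# One possible reading of the Duplex figures reported to the New York Times:
# 25% of calls start with a human; of the rest, 15% need human intervention.
# How the two percentages compose is an assumption for illustration.
starts_human = 0.25
needs_intervention = 0.15  # applied here to AI-initiated calls only

fully_automated = (1 - starts_human) * (1 - needs_intervention)
# under this reading, roughly 64% of calls are handled end-to-end by the AI
```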

The practice of using humans as a backup should always be praised. Making this standard practice helps increase trust, reduces concerns about human workers being replaced, and provides some accountability when things go awry.

Only so much can go wrong when booking a hair appointment, but setting expectations now will help to guide developments further down the line.

AI is being increasingly used in a military capacity, and most will sleep better at night knowing a human is behind any final decision rather than complete automation. Just imagine if Soviet officer Stanislav Yevgrafovich Petrov decided to launch retaliatory nuclear missiles after his early warning system falsely reported the launch of missiles from the US back in 1983.

According to the Times, Google isn’t in a rush to replace the human callers, and that should be welcomed.

Related: Watch our interview with UNICRI AI and Robotics Centre head Irakli Beridze discussing issues like weaponisation and the impact on jobs:

Interested in hearing industry leaders discuss subjects like this and their use cases? Attend the co-located AI & Big Data Expo events with upcoming shows in Silicon Valley, London, and Amsterdam to learn more. Co-located with the IoT Tech Expo, Blockchain Expo, and Cyber Security & Cloud Expo.

DeepMind thrashed pro StarCraft 2 players in latest demo
Fri, 25 Jan 2019

DeepMind’s AI demonstrated last night how its prowess in StarCraft 2 battles against professional human players has grown in recent months.

The live stream of the showdowns was viewed by more than 55,000 people.

“This is, of course, an exciting moment for us,” said David Silver, a researcher at DeepMind. “For the first time, we saw an AI that was able to defeat a professional player.”

DeepMind created five versions of their ‘AlphaStar’ AI. Each AI was trained with historic game footage that StarCraft-developer Blizzard has been releasing on a monthly basis.

In order to further improve their abilities, the five AIs were pitted against each other in a league. The leading AI racked up experience that would equate to a human training for around 200 years.
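League-style self-play of this kind can be sketched in miniature: every agent version repeatedly plays every other, and accumulated results stand in for training experience. The skill values and win rule below are placeholders, not anything resembling AlphaStar’s actual mechanics:

```python
# Toy sketch of league-style self-play: each agent plays every other agent
# each round, and accumulated wins stand in for training experience.
import itertools
import random

def play_match(a, b):
    """Higher-skill agent usually wins; randomness keeps matches varied."""
    return a if random.random() < a["skill"] / (a["skill"] + b["skill"]) else b

def run_league(agents, rounds=100):
    wins = {agent["name"]: 0 for agent in agents}
    for _ in range(rounds):
        for a, b in itertools.combinations(agents, 2):
            wins[play_match(a, b)["name"]] += 1
    return wins

random.seed(42)
agents = [{"name": f"AlphaStar-{i}", "skill": s}
          for i, s in enumerate([1, 2, 3, 5, 8])]
wins = run_league(agents)
# the strongest agent accumulates the most wins across the league
```

In a real setup the losers would be updated against the winners between rounds; here the league only measures relative strength.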

Perhaps needless to say, AlphaStar wiped the floor with human players Grzegorz Komincz and Dario Wunsch.

You can watch AlphaStar taking on the human players below:

The only hope for humans so far is that AlphaStar was trained on a single map, using just one of the three race types available in the game. Removed from its comfort zone, it would not perform as well.

Video games have driven more rudimentary AI developments for decades. The advancement shown by AlphaStar could be used to create more complex ‘bots’ that can pose a challenge and help train even the best human players.

This isn’t the first time we’ve seen DeepMind’s AI bots in action – but, in the past, they’ve had a tendency to immediately rush their opponents with ‘workers’, behaviour that Blizzard called “amusing”.


DeepMind’s AI will show off its new StarCraft 2 skills this week
Wed, 23 Jan 2019

DeepMind has been continuing to train its AI in the ways of StarCraft 2 and will show off its most recent progress this week.

StarCraft 2 is a complex game with many strategies, making it the perfect testing ground for AI. Google’s DeepMind first started exploring how it can use AI to beat the world’s best StarCraft players back in 2016.

In 2017, StarCraft’s developer Blizzard made 65,000 past matches available to DeepMind researchers to begin training bots. Blizzard promised it would make a further half a million games available each month.

We’ve seen DeepMind’s AI bots in action with varying degrees of success. The AI had a tendency to immediately rush its opponents with ‘workers’, behaviour that Blizzard called “amusing” – though the developer confessed the bots managed only a 50 percent success rate even against StarCraft 2’s built-in AI on ‘insane’ difficulty.

Fed with some replays from human players using more complex strategies, the AI began adopting them.

“After feeding the agent replays from real players, it started to execute standard macro-focused strategies, as well as defend against aggressive tactics such as cannon rushes,” Blizzard said.

We’ve yet to see these new strategies used by DeepMind’s AI, but it won’t be much longer until we do.

“It’s only been a few months since BlizzCon but DeepMind is ready to share more information on their research,” Blizzard said today.

“The StarCraft games have emerged as a ‘grand challenge’ for the AI community as they’re the perfect environment for benchmarking progress against problems such as planning, dealing with uncertainty, and spatial reasoning.”

You can find a stream of DeepMind’s AI playing StarCraft 2 via StarCraft’s Twitch channel or DeepMind’s YouTube channel at 6pm GMT/10am PT/1pm ET on January 24th.

Interested in hearing industry leaders discuss subjects like this and their use cases? Attend the co-located AI & Big Data Expo events with upcoming shows in Silicon Valley, London, and Amsterdam to learn more. Co-located with the IoT Tech Expo, Blockchain Expo, and Cyber Security & Cloud Expo.

Microsoft acquires conversational AI company XOXCO https://www.artificialintelligence-news.com/2018/11/16/microsoft-conversational-ai-xoxco/ https://www.artificialintelligence-news.com/2018/11/16/microsoft-conversational-ai-xoxco/#comments Fri, 16 Nov 2018 13:42:03 +0000 https://d3c9z94rlb3c1a.cloudfront.net/?p=4207 Microsoft has announced the acquisition of Texas-based conversational AI firm XOXCO to amplify the company’s work in the field. XOXCO have been in operation since 2013 and gained renown for creating Howdy, the first commercially available bot for Slack. Microsoft believes that bots will become a key way that businesses engage with employees and customers.... Read more »

The post Microsoft acquires conversational AI company XOXCO appeared first on AI News.

Microsoft has announced the acquisition of Texas-based conversational AI firm XOXCO to amplify the company’s work in the field.

XOXCO has been in operation since 2013 and gained renown for creating Howdy, the first commercially available bot for Slack.

Microsoft believes that bots will become a key way that businesses engage with employees and customers. The company has undertaken many projects aimed at unlocking their potential, with varying degrees of success.

Tay, for example, has become an infamous example of a bot gone wrong after the wonderful denizens of the internet taught Microsoft’s creation to be a “Hitler-loving, incestual sex-promoting, ‘Bush did 9/11’-proclaiming robot”.

However, the company has grand ideas about intercommunicating bots that could be groundbreaking. Asking Cortana to order a pizza could invoke a bot from Domino’s, while asking to book a flight might call on one from KAYAK.

The Microsoft Bot Framework already supports over 360,000 developers. With the acquisition of XOXCO, the company hopes to further democratise AI development, conversation and dialogue, and the integration of conversational experiences wherever people communicate.

Lili Cheng, Corporate Vice President of Conversational AI at Microsoft, wrote in a post:

“Our goal is to make AI accessible and valuable to every individual and organization, amplifying human ingenuity with intelligent technology.

To do this, Microsoft is infusing intelligence across all its products and services to extend individuals’ and organizations’ capabilities and make them more productive, providing a powerful platform of AI services and tools that makes innovation by developers and partners faster and more accessible, and helping transform business by enabling breakthroughs to current approaches and entirely new scenarios that leverage the power of intelligent technology.”

Microsoft has made several related acquisitions this year, demonstrating how important AI and bots are to the company.

  • May – Microsoft bought Semantic Machines, another company working on conversational AI.
  • July – Bonsai was acquired, a firm combining machine teaching, reinforcement learning, and simulation.
  • September – Lobe came under Microsoft’s wing, a company aiming to make AI and deep learning development easier.

Gartner backs Microsoft’s belief in bots, recently predicting: “By 2020, conversational artificial intelligence will be a supported user experience for more than 50 percent of large, consumer-centric enterprises.”

Microsoft is positioning itself to capitalise on the growth of conversational AI, and the strategy looks set to pay off.


Bill forcing AI bots to reveal themselves faces EFF opposition https://www.artificialintelligence-news.com/2018/05/24/bill-ai-bot-reveal-eff/ https://www.artificialintelligence-news.com/2018/05/24/bill-ai-bot-reveal-eff/#comments Thu, 24 May 2018 13:58:39 +0000 https://d3c9z94rlb3c1a.cloudfront.net/?p=3175 A bill that would force AI bots to reveal themselves as not being human is facing opposition from the EFF over free speech concerns. Many were slightly disturbed by Google’s demo of its Duplex AI conducting a phone call and the other participant being unaware they weren’t speaking to a human. Less than a month... Read more »

The post Bill forcing AI bots to reveal themselves faces EFF opposition appeared first on AI News.

A bill that would force AI bots to reveal themselves as not being human is facing opposition from the EFF over free speech concerns.

Many were slightly disturbed by Google’s demo of its Duplex AI conducting a phone call and the other participant being unaware they weren’t speaking to a human. Less than a month later, Microsoft demonstrated it also had the same capabilities.

There are clearly big changes ahead in how we interact, and not everyone is going to be happy speaking to a robot without being aware. The B.O.T. Act (SB 1001) intends to make it illegal for a computer to speak to someone in California without revealing it’s not human.

The summary of the bill reads:

“This bill would make it unlawful for any person to use a bot, as defined, to communicate or interact with natural persons in California online with the intention of misleading and would provide that a person using a bot is presumed to act with the intent to mislead unless the person discloses that the bot is not a natural person.

The bill would require an online platform to enable users to report violations of this prohibition, to respond to the reports, and to provide the Attorney General with specified related information.”

Google and Microsoft have both said their respective AIs would reveal themselves not to be human regardless of legislation.

The B.O.T. Act is facing stiff opposition from the Electronic Frontier Foundation (EFF), which appears to be setting itself up as a champion of rights for machines.

In a post, the EFF wrote: “Why does it matter that a bot (instead of a human) is speaking such that we should have a government mandate to force disclosure?”

The digital rights non-profit argues the law raises ‘significant free speech concerns’ and could mark the start of a long debate over what rights machines should have.

Do you think AIs should be forced to reveal themselves as not human? Let us know in the comments.

