Microsoft warns its AI offerings ‘may result in reputational harm’



Microsoft has warned investors that its AI offerings could damage the company’s reputation in a bid to prepare them for the worst.

AI can be unpredictable, and Microsoft already has first-hand experience. Back in 2016, a Microsoft chatbot named Tay became a racist, sexist, and generally rather unsavoury character after internet users took advantage of its machine learning capabilities.

The chatbot made headlines around the world and undoubtedly caused Microsoft some reputational damage of its own.

In the company’s latest quarterly report, Microsoft warned investors:

“AI algorithms may be flawed. Datasets may be insufficient or contain biased information. Inappropriate or controversial data practices by Microsoft or others could impair the acceptance of AI solutions.

These deficiencies could undermine the decisions, predictions, or analysis AI applications produce, subjecting us to competitive harm, legal liability, and brand or reputational harm.

Some AI scenarios present ethical issues. If we enable or offer AI solutions that are controversial because of their impact on human rights, privacy, employment, or other social issues, we may experience brand or reputational harm.”

Several companies have been criticised for unethical AI development, including several of Microsoft’s competitors.

AI Backlash

Google was embroiled in a backlash over ‘Project Maven’, a defence contract to supply drone-analysing AI to the Pentagon. The contract drew both internal and external criticism.

Several Googlers left the company, and others threatened to follow if the contract was not dropped. Over 4,000 employees signed a petition demanding that management cancel the project and never again “build warfare technology.”

Following the Project Maven backlash, Google CEO Sundar Pichai said the contract would not be renewed.

Pichai also promised in a blog post that the company would not develop technologies or weapons that cause harm, or anything which can be used for surveillance violating ‘internationally accepted norms’ or ‘widely accepted principles of international law and human rights’.

In June last year, Microsoft faced its own internal revolt over a $19 million contract with ICE (Immigration and Customs Enforcement) at a time when authorities were separating immigrant families.

Microsoft CEO Satya Nadella was forced to clarify that Microsoft isn’t directly involved with the government’s policy of separating families at the US-Mexico border.

A report from the American Civil Liberties Union found bias in Amazon’s facial recognition algorithm, prompting Amazon employees to write a letter to CEO Jeff Bezos expressing their concerns.

Problems with AI bias keep arising and will likely continue to do so for some time. It’s an issue that needs to be tackled before any mass rollout.

Last month, Algorithmic Justice League founder Joy Buolamwini gave an impactful presentation on AI bias at the World Economic Forum.

Microsoft is clearly preparing investors for some controversial slip-ups of its own along its AI development journey.

