big data Archives - AI News
https://www.artificialintelligence-news.com/tag/big-data/

Iurii Milovanov, SoftServe: How AI/ML is helping boost innovation and personalisation
https://www.artificialintelligence-news.com/2023/05/15/iurii-milovanov-softserve-how-ai-ml-is-helping-boost-innovation-and-personalisation/
Mon, 15 May 2023

The post Iurii Milovanov, SoftServe: How AI/ML is helping boost innovation and personalisation appeared first on AI News.

Could you tell us a little bit about SoftServe and what the company does?

Sure. We’re a 30-year-old global IT and professional services provider. We specialise in using emerging, state-of-the-art technologies – such as artificial intelligence, big data and blockchain – to solve real business problems. We’re obsessed with our customers and their problems – not with technologies – although we are technology experts. But we always try to find the best technology that will help our customers get to where they want to be.

So we’ve been in the market for quite a while, having originated in Ukraine. But now we have offices all over the globe – US, Latin America, Singapore, Middle East, all over Europe – and we operate in multiple industries. We have some specialised leadership around specific industries, such as retail, financial services, healthcare, energy, oil and gas, and manufacturing. We also work with a lot of digital natives and independent software vendors, helping them adopt this technology in their products, so that they can better serve their customers.

What are the main trends you’ve noticed developing in AI and machine learning?

One of the biggest trends is that people used to question whether AI, machine learning and data science were the technologies of the future; that’s no longer in question. This technology is already everywhere, and the vast majority of the innovation we see right now wouldn’t have been possible without it.

One of the main reasons is that this tech allows us to address and solve problems we used to consider intractable. Think of natural language, image recognition or code generation, which are not only hard to solve but also hard to define. Approaching these types of problems with our traditional engineering mindset – where we essentially use programming languages – is just impossible. Instead, we leverage the knowledge stored in the vast amounts of data we collect, and use it to find solutions to the problems we care about. This approach is called machine learning, and it is currently the most efficient way to address these types of problems.

But with the amount of data we can now collect, the compute power available in the cloud, the efficiency of training and the algorithms we’ve developed, we have reached the stage where we can get superhuman performance on many tasks that we used to think only humans could perform. We must admit that human intelligence is limited in its capacity and ability to process information. Machines can augment our intelligence and help us more efficiently solve problems that our brains were not designed for.

The overall trend that we see now is that machine learning and AI are essentially becoming the industry standard for solving complex problems that require knowledge, computation, perception, reasoning and decision-making. And we see that in many industries, including healthcare, finance and retail.

There are some more specific emerging trends. The topic of my TechEx North America keynote will be generative AI, which many folks might think of as something just recently invented – or as just ChatGPT. But these technologies have been evolving for a while, and we, as hands-on practitioners in the industry, have been working with them for quite some time.

What has changed now is that, based on the knowledge and experience we’ve collected, we were able to get this tech to a stage where GenAI models are genuinely useful. We can use them to solve real problems across different industries, from concise document summaries to advanced user experiences, logical reasoning and even the generation of unique knowledge. That said, there are still challenges around reliability and around understanding the actual potential of these technologies.

How important are AI and machine learning with regards to product innovation?

AI and machine learning essentially allow us to address the set of problems that we can’t solve with traditional technology. If you want to innovate, if you want to get the most out of tech, you have to use them. There’s no other choice. They’re a powerful tool for product development: for introducing new features, improving customer experiences, and deriving really deep, actionable insights from data.

But, at the same time, it’s quite a complex technology. There’s a lot of expertise involved in applying this tech: training these types of models, evaluating them, deciding what model architecture to use, and so on. Moreover, they’re highly experiment-driven. In traditional software development, we usually know in advance what we want to achieve: we set specific requirements, then write source code to meet those requirements.

And that’s primarily because, in traditional engineering, it’s the source code that defines the behaviour of our system. With machine learning and artificial intelligence, the behaviour is defined by the data, which means we hardly ever know in advance what the quality of our data is. What’s its predictive power? What kind of data do we need to use? Is the data we’ve collected enough, or do we need to collect more? That’s why we always need to experiment first.

But I think, in some way, we got used to the uncertainty in the process and the outcomes of AI initiatives. The AI industry gave up on the idea that machine learning will be predictable at some point. Instead, we learned how to experiment efficiently, turning our ideas into hypotheses that we can quickly validate via experimentation and rapid prototyping, and evolving the most successful experiments into full-fledged products. That’s essentially what the modern lifecycle of AI/ML products looks like.

It also requires the product teams to adopt a different mindset of constant ideation and experimentation, though. It starts with selecting those ideas and use cases that have the highest potential, the most feasible ones that may have the biggest impact on the business and the product. From there, the team can ideate around potential solutions, quickly prototyping and selecting those that are most successful. That requires experience in identifying the problems that can benefit from AI/ML the most, and agile, iterative processes of validating and scaling the ideas.

How can businesses use that type of technology to improve personalisation?

That’s a good question because, again, there are some problems that are really hard to define, and personalisation is one of them. What makes me or you a person? What contributes to that? Our preferences? How do we define our preferences? They might be stochastic; they might be contextual. It’s a highly multi-dimensional problem.

And, although you can try to approach it with more traditional tech, you’ll be limited in the depth of personalisation you can achieve. The most efficient way is to learn those personal signals and preferences from the data, and use those insights to deliver personalised experiences, personalised marketing, and so on.

Essentially, AI/ML acts as a sort of black box between the user’s signals and the specific preferences and content that would resonate with that specific user. As of right now, that’s the most efficient way to achieve personalisation.

One other benefit of modern AI/ML is that you can use various types of data. You can combine clickstream data from your website, collecting information about how users behave on it. You can collect text data from Twitter or other sources, as well as imagery data, and use all that information to derive the insights you care about. The ability to analyse such a heterogeneous set of data is another benefit that AI/ML brings to this game.
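The idea of joining heterogeneous signals into one record can be sketched in a few lines. Everything below – the user IDs, feature names, and the trivial bag-of-words text signal – is invented for illustration, not SoftServe’s actual approach:

```python
# Toy illustration (hypothetical data): joining heterogeneous signals -
# clickstream event counts and free text - into one per-user feature
# record that a downstream model could learn preferences from.
clickstream = {"user_1": {"product_views": 12, "cart_adds": 3}}
posts = {"user_1": "Loving the new trail-running shoes!"}

def build_features(user_id):
    # Start from the behavioural counts (empty dict if the user is unknown).
    features = dict(clickstream.get(user_id, {}))
    # A deliberately simple text signal: a bag of lowercase tokens.
    features["tokens"] = posts.get(user_id, "").lower().split()
    return features

print(build_features("user_1"))
```

In a real system the text signal would be an embedding or sentiment score rather than raw tokens, but the shape of the problem – merging differently structured sources per user – is the same.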

How do you think machine learning is impacting the metaverse and how are businesses benefiting from that?

There are two different aspects. ‘Metaverse’ is quite an abstract term, and we tend to think of it from two perspectives. One is that you want to replicate your physical assets – part of our physical world – in the metaverse. You can try to approach that from a traditional engineering standpoint, but many of the processes we have are just too complex; it’s really hard to replicate them in a digital world. Think of a modern production line in manufacturing. For you to have a really precise digital twin of a physical asset, you have to be smart and use something that will get your metaverse as close as possible to the physical world. And AI/ML is the way to go – it’s one of the most efficient ways to achieve that.

Another aspect of the metaverse is that since it’s digital, it’s unlimited. Thus, we may also want to have some specific types of assets that are purely digital, that don’t have any representation in the real world. And those assets should have similar qualities and behaviour as the real ones, handling a similar level of complexity. In order to program these smart, purely digital processes or assets, you need AI and ML to make them really intelligent.

Are there any examples of companies that you think have been utilising AI and machine learning well?

There are the three giants – Facebook, Google, Amazon. All of them are key drivers behind the industry, and the vast majority of their products are, in some way, powered by AI/ML. Quite a lot has changed since I started my career, but even when I joined SoftServe around 10 years ago, there was a lot of research going on into AI/ML.

There were some big players using the technology, but the vast majority of the market were just exploring this space. Most of our customers didn’t know anything about it. Some of the first questions they had were ‘can you educate us on this? What is AI/ML? How can we use it?’ 

What has changed now is that almost any company we interact with has already done some AI/ML work, whether they build something internally or they use some AI/ML products. So the perception has changed.

The overall adoption of this technology now is at the scale where you can find some aspects of AI/ML in almost any company.

You may see a company that does a lot of AI/ML in, say, marketing or distribution, but has some old-school legacy technologies at its production site or in its supply chain. The level of AI/ML adoption may differ across different lines of business, but I think almost everyone is using it now. Even your phone is packed with AI/ML features. So it’s hard to think of a company that doesn’t use any AI/ML right now.

Do you think, in general, companies are using AI and machine learning well? What kind of challenges do they have when they implement it?

That’s a good question. The main challenge of applying these technologies today is not how to be successful with this tech, but rather how to be efficient. With the amount of data that we have now, and data that the companies are collecting, plus the amount of tech that is open source or publicly available – or available as managed services from AWS, from GCP – it’s easy to get some good results.

The question is, how do you decide where to apply this technology? How efficiently can you identify those opportunities, and find the ones that will bring the biggest impact, and can be implemented in the most time-efficient and cost-effective manner? 

Another aspect is how do you quickly turn those ideas into production-grade products? It’s a highly experiment-driven area, and there is a lot of science, but you still need to build reliable software on the research results. 

The key drivers for successful AI adoption are finding the right use cases – those where you can actually get the desired outcomes in the most efficient way – and turning ideas into full-fledged products. We’ve seen some really innovative companies that had brilliant ideas. They may have built proofs of concept around those ideas, but they didn’t know how to evolve them or build reliable products out of them. At the same time, there are some technically savvy, digitally native companies. They have tonnes of smart engineers, but they don’t have the right expertise and experience in AI/ML technologies. They don’t know how to apply this tech to real business problems, or what low-hanging fruit is available to them. They just struggle to find the best way to leverage this tech.

What do you think the future holds for AI and machine learning?

I generally try to be more optimistic about the future because there are obviously a lot of fears around AI/ML. And I think that’s quite natural. If you look back in history, it was the same with electricity and any other innovative technologies.

One of the fears that I think does have some merit is that this technology may replace some real jobs. I think that’s a bit of a pessimistic view because history also teaches us that whatever technology we get, we still need that human aspect to it. 

Almost all the technology that we use right now augments our intelligence; it does not replace it. And I think that, in future, AI will be used in a cooperative way. If you’ve seen products like GitHub Copilot, the purpose of that product is essentially to assist the developer in writing code. We still can’t use AI to write entire programs. We need a human to guide the AI to our desired outcome: What exactly do we want to achieve? What is our objective? What is our user’s expectation?

Similarly, maybe this technology will be applied to a broader set of use cases where AI will be assisting us, not replacing us. There is a quote that I wish was mine but I still think it’s a very good way of thinking about the role of AI: if you think that AI will replace you or your job, most likely you’re wrong. It’s the people who will be using AI who will replace you at your job. 

So I think one of the most important skills to learn right now is how to leverage this tech to make your work more efficient. And that should help many people get that competitive advantage in the future.

  • Iurii Milovanov is the director of AI and data science at SoftServe, a technology company specialising in consultancy services and software development. 

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

UK eases data mining laws to support flourishing AI industry
https://www.artificialintelligence-news.com/2022/06/29/uk-eases-data-mining-laws-support-flourishing-ai-industry/
Wed, 29 Jun 2022

The post UK eases data mining laws to support flourishing AI industry appeared first on AI News.

The UK is set to ease data mining laws in a move designed to further boost its flourishing AI industry.

We all know that data is vital to AI development. Tech giants are in an advantageous position due to either having existing large datasets or the ability to fund/pay for the data required. Most startups rely on mining data to get started.

Europe has notoriously strict data laws. Advocates of regulations like GDPR believe they’re necessary to protect consumers, while critics argue they drive innovation, investment, and jobs out of Europe to countries like the USA and China.

“You’ve got your Silicon Valley startup that can access large amounts of money from investors, access specialist knowledge in the field, and will not be fighting with one arm tied behind its back like a competitor in Europe,” explained Peter Wright, Solicitor and MD of Digital Law UK.

An announcement this week sets out how the UK intends to support its National AI Strategy from an intellectual property standpoint.

The announcement comes via the country’s Intellectual Property Office (IPO) and follows a two-month cross-industry consultation period with individuals, large and small businesses, and a range of organisations.

Text and data mining

Text and data mining (TDM) allows researchers to copy and harness disparate datasets for their algorithms. As part of the announcement, the UK says it will now allow TDM “for any purpose,” which provides much greater flexibility than the exception made in 2014 that allowed TDM only for non-commercial research.

In stark contrast, the EU’s Directive on Copyright in the Digital Single Market offers a TDM exception only for scientific research.

“These changes make the most of the greater flexibilities following Brexit. They will help make the UK more competitive as a location for firms doing data mining,” wrote the IPO in the announcement.
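As a toy illustration of what the “mining” in TDM involves technically – copying text into a corpus, tokenising it, and extracting statistics that models can learn from – consider this minimal sketch. The two-document corpus is invented; the legal debate is precisely about the copying and retention of protected works that this first step requires:

```python
import re
from collections import Counter

# Minimal text-mining sketch: copy documents into a corpus, tokenise
# them, and count term frequencies for downstream use. The corpus is
# invented for illustration.
corpus = [
    "The UK eases data mining laws",
    "Data mining fuels AI research",
]

def tokenise(text):
    # Lowercase the text and keep alphabetic runs only.
    return re.findall(r"[a-z]+", text.lower())

term_counts = Counter(token for doc in corpus for token in tokenise(doc))
print(term_counts.most_common(2))  # [('data', 2), ('mining', 2)]
```

Real TDM pipelines do far more (deduplication, filtering, feature extraction), but they all begin with this copy-and-process step that the new exception permits “for any purpose.”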

AIs still can’t be inventors

Elsewhere, the UK is more or less sticking to its previous stances—including that AI systems cannot be credited as inventors in patents.

The most high-profile case on the subject is of US-based Dr Stephen Thaler, the founder of Imagination Engines. Dr Thaler has been leading the fight to give credit to machines for their creations.

An AI device created by Dr Thaler, DABUS, was used to invent an emergency warning light, a food container that improves grip and heat transfer, and more.

In August 2021, a federal court in Australia ruled that AI systems can be credited as inventors under patent law after Ryan Abbott, a professor at the University of Surrey, filed applications in the country on behalf of Dr Thaler. Similar applications were also filed in the UK, US, and New Zealand.

The UK’s IPO rejected the applications at the time, claiming that – under the country’s Patents Act – only humans can be credited as inventors. Subsequent appeals were also rejected.

“A patent is a statutory right and it can only be granted to a person,” explained Lady Justice Laing. “Only a person can have rights. A machine cannot.”

In the IPO’s latest announcement, the body reiterates: “For AI-devised inventions, we plan no change to UK patent law now. Most respondents felt that AI is not yet advanced enough to invent without human intervention.”

However, the IPO highlights the UK is one of only a handful of countries that protects computer-generated works. Any person who makes “the arrangements necessary for the creation of the [computer-generated] work” will have the rights for 50 years from when it was made.

Supporting a flourishing AI industry

Despite being subject to strict data regulations, the UK has become Europe’s hub for AI, with pioneers like DeepMind, Wayve, Graphcore, Oxbotica, and BenevolentAI. The country’s world-class universities churn out in-demand AI talent, and its tech investment is more than double that of other European countries.


More generally, the UK is regularly considered one of the best places in the world to set up a business. All eyes are on how the country will use its post-Brexit freedoms to diverge from EU rules to further boost its industries.

“The UK already punches above its weight internationally and we are ranked third in the world behind the USA and China in the list of top countries for AI,” said Chris Philp, DCMS Minister.

“We’re laying the foundations for the next ten years’ growth with a strategy to help us seize the potential of artificial intelligence and play a leading role in shaping the way the world governs it.”

There will undoubtedly be debates over the decisions made by the UK to boost its AI industry, especially regarding TDM, but the policies announced so far will support entrepreneurship and the country’s attractiveness for relevant investments.

(Photo by Chris Robert on Unsplash)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The event is also co-located with the Cyber Security & Cloud Expo.


US appeals court decides scraping public web data is fine
https://www.artificialintelligence-news.com/2022/04/19/us-appeals-court-scraping-public-web-data-fine/
Tue, 19 Apr 2022

The post US appeals court decides scraping public web data is fine appeared first on AI News.

The US Ninth Circuit Court of Appeals has decided that scraping data from a public website doesn’t violate the Computer Fraud and Abuse Act (CFAA).

In 2017, employment analytics firm HiQ filed a lawsuit against LinkedIn’s efforts to block it from scraping data from users’ profiles.

The court barred LinkedIn from stopping HiQ scraping data after deciding that the CFAA – which criminalises accessing a protected computer without authorisation – doesn’t apply, because the information is public.

LinkedIn appealed the case and in 2019 the Ninth Circuit Court sided with HiQ and upheld the original decision.

In March 2020, LinkedIn once again appealed the decision, arguing that implementing technical barriers and sending a cease-and-desist letter revokes authorisation – making any subsequent attempts to scrape data unauthorised and therefore in breach of the CFAA.

“At issue was whether, once hiQ received LinkedIn’s cease-and-desist letter, any further scraping and use of LinkedIn’s data was ‘without authorization’ within the meaning of the CFAA,” reads the filing (PDF).

“The panel concluded that hiQ raised a serious question as to whether the CFAA ‘without authorization’ concept is inapplicable where, as here, prior authorization is not generally required but a particular person—or bot—is refused access.”

The filing highlights several of LinkedIn’s technical measures to protect against data-scraping:

  • Prohibiting search engine crawlers and bots – aside from certain allowed entities, like Google – from accessing LinkedIn’s servers via the website’s standard ‘robots.txt’ file.
  • ‘Quicksand’ system that detects non-human activity indicative of scraping.
  • ‘Sentinel’ system that slows (or blocks) activity from suspicious IP addresses.
  • ‘Org Block’ system that generates a list of known malicious IP addresses linked to large-scale scraping.

Overall, LinkedIn claims to block approximately 95 million automated attempts to scrape data every day.
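The robots.txt measure in the list above is worth unpacking, since it is a voluntary convention rather than a technical barrier – which is partly why LinkedIn layers detection systems on top of it. Python’s standard library can parse such a file; the rules and bot names below are illustrative, not LinkedIn’s actual configuration:

```python
from urllib import robotparser

# A hypothetical robots.txt resembling the policy described in the filing:
# an allow-listed crawler (here, Googlebot) may fetch anything, while all
# other bots are disallowed everywhere. Illustrative only.
ROBOTS_TXT = """\
User-agent: Googlebot
Disallow:

User-agent: *
Disallow: /
"""

parser = robotparser.RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# The allow-listed crawler may fetch a profile page...
print(parser.can_fetch("Googlebot", "https://example.com/in/someone"))       # True
# ...while a generic scraping bot is refused everywhere.
print(parser.can_fetch("SomeScraperBot", "https://example.com/in/someone"))  # False
```

Nothing technically stops a scraper from ignoring these rules, which is the crux of the legal dispute: whether ignoring them (plus a cease-and-desist letter) makes access “without authorization” under the CFAA.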

The appeals court once again ruled in favour of HiQ, upholding the conclusion that “the balance of hardships tips sharply in hiQ’s favor” and the company’s existence would be threatened without having access to LinkedIn’s public data.

“hiQ’s entire business depends on being able to access public LinkedIn member profiles,” hiQ’s CEO argued. “There is no current viable alternative to LinkedIn’s member database to obtain data for hiQ’s Keeper and Skill Mapper services.” 

However, LinkedIn’s petition (PDF) counters that the ruling has wider implications.

“Under the Ninth Circuit’s rule, every company with a public portion of its website that is integral to the operation of its business – from online retailers like Ticketmaster and Amazon to social networking platforms like Twitter – will be exposed to invasive bots deployed by free-riders unless they place those websites entirely behind password barricades,” wrote the company’s attorneys.

“But if that happens, those websites will no longer be indexable by search engines, which will make information less available to discovery by the primary means by which people obtain information on the Internet.”

AI companies that often rely on mass data-scraping will undoubtedly be pleased with the court’s decision.

Clearview AI, for example, has regularly been targeted by authorities and privacy campaigners for scraping billions of images from public websites to power its facial recognition system.

“Common law has never recognised a right to privacy for your face,” Clearview AI lawyer Tor Ekeland once argued.

Clearview AI recently made headlines for offering its services to Ukraine to help the country identify both Ukrainian defenders and Russian assailants who’ve lost their lives in the brutal conflict.

Mass data scraping will remain a controversial subject. Supporters will back the appeal court’s ruling while opponents will join LinkedIn’s attorneys in their concerns about normalising the practice.

(Photo by ThisisEngineering RAEng on Unsplash)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London.


Babylon Health taps Google Cloud to boost scalability and innovation
https://www.artificialintelligence-news.com/2022/03/28/babylon-health-google-cloud-boost-scalability-innovation/
Mon, 28 Mar 2022

The post Babylon Health taps Google Cloud to boost scalability and innovation appeared first on AI News.

AI-powered healthcare service Babylon Health has announced a partnership with Google Cloud to boost scalability and innovation.

London-based Babylon Health is a digital-first health service provider that uses AI and machine learning technology to provide access to health information to people whenever and wherever they need it.

The company has partnered with private and public organisations across the UK, North America, South-East Asia, and Rwanda, with the aim of making healthcare more accessible and affordable for 24 million patients worldwide.

“Our job is to help people to stay well and we’re on a mission to provide affordable, accessible health care to everyone in the world,” explains Richard Noble, Engineering Director of Data at Babylon.

Babylon Health’s rapid growth has led it to seek a partner to help it scale.

By partnering with Google Cloud, the company claims that it’s been able to:

  • Increase event data ingestion from 1 TB per week to 190 TB daily
  • Reduce the wait time for users to access data from six months to a week
  • Integrate over 100 data sources – providing access to 80 billion data points
  • Save hundreds of hours of work by automatically transcribing 100,000 video consultations in 2021

Babylon Health needs to store and process huge amounts of sensitive data.

“We work with a lot of private patient data and we must ensure that it stays private,” explains Natalie Godec, cloud engineer at Babylon. “At the same time, we must enable our teams to innovate with that data while meeting different national regulatory standards.”

Therefore, Babylon Health required a partner it felt could handle such demands.

“We chose Google Cloud because we knew it could scale with us and support us with our data science and analysis and we could build the tools we needed with it quickly,” added Noble. “It offers the solutions that enable us to focus on our core business, access to health.”

Babylon Health says the move to Google Cloud has enabled it to better analyse its data using AI to unlock new tools and features that help clinicians and users alike. While building a new data model and giving access to users initially took six months, the company says it now takes under a week.

In London, Babylon Health offers its ‘GP at Hand’ service which – in partnership with the NHS – acts as a digital GP practice. Patients can connect to NHS clinicians remotely 24/7 and even be issued prescriptions if required. Where physical examinations are needed, patients will be directed to a suitable venue.

However, GP at Hand has been criticised as “cherry-picking” healthier patients—taking resources away from local GP practices that are often trying to care for sicker, more elderly patients.

Growing pains

While initial problems are to be expected from any relatively new service, poor advice from a healthcare service could result in unnecessary suffering, long-term complications, or even death.

In 2018, Dr David Watkins – a consultant oncologist at Royal Marsden Hospital – reached out to AI News to alert us to Babylon Health’s chatbot giving unsafe advice.

Dr Watkins provided numerous examples of clearly dangerous advice being given by the chatbot.

Babylon Health called Dr Watkins a “troll” who has “targeted members of our staff, partners, clients, regulators and journalists and tweeted defamatory content about us”.

According to Babylon Health, Dr Watkins conducted 2,400 tests of the chatbot in a bid to discredit the service while raising “fewer than 100 test results which he considered concerning”.

Babylon Health claims that in just 20 cases did Dr Watkins find genuine errors while others were “misrepresentations” or “mistakes,” according to Babylon’s own “panel of senior clinicians” who remain unnamed.

Dr Watkins called Babylon’s claims “utterly nonsense” and questioned where the startup got its figures from, as “there are certainly not 2,400 completed triage assessments”. He estimates he conducted between 800 and 900 full triages, some of which were repeat tests to see whether Babylon Health had fixed the issues he had previously highlighted.

That same year, Babylon Health published a paper claiming that its AI could diagnose common diseases as well as human physicians. The Royal College of General Practitioners, the British Medical Association, Fraser and Wong, and the Royal College of Physicians all issued statements disputing the paper’s claims.

Dr Watkins has acknowledged that Babylon Health’s chatbot has improved and has substantially reduced its error rate. In 2018, when Dr Watkins first reached out to us, he says this rate was “one in one”.

In 2020, Babylon Health claimed in a paper that it can now appropriately triage patients in 85 percent of cases.

Hopefully, the partnership with Google Cloud continues to improve Babylon Health’s capabilities and helps it achieve its potentially groundbreaking aim of delivering 24/7 access to healthcare wherever a patient is.

(Photo by Hush Naidoo Jade Photography on Unsplash)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo. The next events in the series will be held in Santa Clara on 11-12 May 2022, Amsterdam on 20-21 September 2022, and London on 1-2 December 2022.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Babylon Health taps Google Cloud to boost scalability and innovation appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2022/03/28/babylon-health-google-cloud-boost-scalability-innovation/feed/ 0
AI & Big Data Expo 2021: Ethics, myths and making sense of data https://www.artificialintelligence-news.com/2021/09/08/ai-big-data-expo-2021-ethics-myths-and-making-sense-of-data/ https://www.artificialintelligence-news.com/2021/09/08/ai-big-data-expo-2021-ethics-myths-and-making-sense-of-data/#respond Wed, 08 Sep 2021 07:35:00 +0000 http://artificialintelligence-news.com/?p=11012 The global AI market is predicted to snowball over the next few years, reaching a $190.61 billion market value in 2025. By 2030, AI will lead to an estimated $15.7 trillion, or 26% increase in global GDP. Industry analyst firm Gartner has also predicted that by 2022 companies will have an average of 35 AI... Read more »

The post AI & Big Data Expo 2021: Ethics, myths and making sense of data appeared first on AI News.

]]>
The global AI market is predicted to snowball over the next few years, reaching a $190.61 billion market value in 2025. By 2030, AI is expected to add an estimated $15.7 trillion to global GDP, a 26% increase.

Industry analyst firm Gartner has also predicted that by 2022 companies will have an average of 35 AI projects in place. 

Of all the end-use industries, the market for manufacturing is expected to grow the fastest. The increasing volume of data generated across the manufacturing value chain has driven the adoption of AI-enabled data analytics in the sector. In addition, several industry initiatives, such as Industry 4.0, a connected manufacturing initiative by the Government of Germany, have accelerated the growth of AI-enabled devices in manufacturing.

Businesses have also noted that automating tasks, such as invoicing and contract validation, is a crucial use of AI. 

Meanwhile, 80% of retail executives expect their companies to adopt AI-powered intelligent automation by 2027. The most prominent use case for AI in the retail industry is customer engagement (chatbots, predictive behaviour analysis, hyper-personalisation).

On the downside, a lack of trained and experienced staff is an expected restriction in the AI market’s growth. 

At the AI & Big Data Expo in London this week, all the opportunities and challenges surrounding AI and big data were keenly debated.

The event, part of the TechEx Global exhibition and conference, showcased next-generation technologies and strategies, and offered an opportunity to explore the practical and successful implementation of AI and big data in driving business forward for a smarter future.

Of all the talking points over the two-day event, some of the more prominent discussions focused on the myths and misunderstanding of AI, as well as ethics, algorithms and how companies are coping with vast quantities of data.

AI & Big Data Expo Global

Dispelling the AI myths

Speaking at the event, Myrna Macgregor, acting head of machine learning strategy and lead for responsible AI+ML at the BBC, said: “It’s important to think about what the challenges are. There are some barriers, obviously. Firstly, the technical point. Very technical terminology is often used in conversations regarding AI and coding algorithms. And these are things that are unfamiliar to some of the stakeholders you’re going to be working with. 

“Secondly, there are a lot of theories about how AI is going to take jobs: that a large percentage of jobs will disappear in the next 10 to 20 years. And that’s really unhelpful. AI, as an assistive technology, is a tool to help people do that work. 

“Also, a lot of people think that AI is an easy thing to do. There is a perception, perhaps, that AI is something you can grab off the shelf and throw at problems, or you can just sprinkle a little AI on a problem.”

Ethics and responsibility

Macgregor said: “Ethics and responsibility are not a category unto themselves. You have to think about what you’re trying to achieve from a societal level, but also your existing organisational values and mission, and integrating that into the technology you’re building. It’s not starting from scratch.

“I think the main ingredients of responsibility are to avoid negative inference, building something that works, and maybe most importantly, building something that works for home users. And that kind of corresponds to how we think about the BBC – the universality aspect, as well. And I think the two things you need in order to achieve that are thoughtfulness – taking a pause and thinking about the impact of what you’re building – and also collaboration. 

“It’s important to bring in different stakeholder perspectives, so that you’re really reflecting that collaboration and different perspectives. From a BBC perspective, responsibility looks like upholding the values that we have as an organisation. So independence and impartiality are very important to us in a media context.”

Robustness of algorithms

Ilya Feige, director of AI, Faculty, commented: “There’s general intelligence and super intelligence. And there’s a whole topic there that people worry about a lot, rightfully. But right now, what organisations really need to have answers to is fairness, explainability, privacy and robustness. 

“I feel like the first three are kind of well understood; I assume everyone’s talked about, or read about, them a lot. But maybe robustness is the one least discussed openly, and it’s sort of the endeavour to know when you can trust your algorithm, or when you can have confidence in it. So where is it likely to go wrong? And there are a bunch of different ways that can take place, like, my data distribution has changed and so the model is no longer relevant. It used to be good, but now it’s not. 

“And there are other examples, like parts of the data space where the model is just catastrophically bad, or even attacks. So there are examples of ways in which you can attack algorithms and provide data that fools them. So robustness is a big topic that’s discussed less than fairness, privacy and explainability.”
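The “data distribution has changed” failure mode Feige describes can be checked mechanically. As an illustrative sketch only (not anything Faculty is said to use), a two-sample Kolmogorov–Smirnov statistic compares the data a model was trained on against live data, and flags drift when the gap between the two empirical distributions exceeds a threshold; the threshold value here is an assumption chosen for the example:

```python
import random

def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum gap
    between the empirical CDFs of the two samples."""
    a, b = sorted(sample_a), sorted(sample_b)
    max_gap = 0.0
    for x in sorted(set(a) | set(b)):
        cdf_a = sum(v <= x for v in a) / len(a)
        cdf_b = sum(v <= x for v in b) / len(b)
        max_gap = max(max_gap, abs(cdf_a - cdf_b))
    return max_gap

random.seed(0)
train = [random.gauss(0, 1) for _ in range(500)]         # data the model was trained on
live_ok = [random.gauss(0, 1) for _ in range(500)]       # live data, same distribution
live_drifted = [random.gauss(2, 1) for _ in range(500)]  # live data after the world changed

DRIFT_THRESHOLD = 0.15  # illustrative cut-off; tuned per application in practice

print(ks_statistic(train, live_ok) > DRIFT_THRESHOLD)       # → False (no drift flagged)
print(ks_statistic(train, live_drifted) > DRIFT_THRESHOLD)  # → True (drift flagged)
```

In production this kind of check typically runs on each incoming batch, so a model that “used to be good, but now it’s not” is caught before its predictions are trusted.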


Making sense of your data

Mark Wilson, head of data governance, Handelsbanken UK, said: “Once you’ve got your data in silos or in a warehouse, how are you controlling that? How are you extracting the data? Do you have competency in that? So when it comes to governance, I’m coming from the angle of: have you got control of it? When people look at the data, do they actually know what they’re looking at?

“It’s quite sad that a lot of organisations are still not on top of this to a large degree. I think they’re sold – you look around tech conferences, the vendors, everyone’s got a dashboard, everything looks great. But have they actually got the fundamentals in place? The data quality controls. There’s governance in terms of lineage, documentation, glossaries and dictionaries. So you can stand behind it: you can look at this PowerPoint, or this heat map, because I can tell you, with 100% integrity, the data behind it is good. And that’s not happening, and so many people are just sold on the dashboard.

“Everyone at a company has a responsibility when it comes to data, because who’s using data? Invariably, everybody. Everyone’s either taking data from something to put together a PowerPoint or to produce a management pack, or they’re taking data from something to be sorted and to be put in something else etc.

“However, you’ve got to have a centralised data governance function to set the standards. You should have data owners in place for your organisation’s sets of data. There’s no organisation that doesn’t have customers, products and agreements – whether it’s the NHS, a bank or the National Trust, fundamentally they all have services, customers and products. 

“So someone essentially has got to make sure there is a data strategy, framework, rules, and that everyone knows the standards the company has. And then somebody has to make that real. Because no one reads this dusty bit of paper. You need to have a chief data officer. This isn’t something that should be left to IT. Data is not an IT problem. It is an enabler. 

“The systems will help you do things. But somebody who understands the business needs to be in charge of the data.”

TechEx Global, hosted on September 6-7 at the Business Design Centre, London, is an enterprise technology exhibition and conference consisting of four co-located events covering IoT, AI & Big Data, Cyber Security & Cloud and Blockchain.

The post AI & Big Data Expo 2021: Ethics, myths and making sense of data appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2021/09/08/ai-big-data-expo-2021-ethics-myths-and-making-sense-of-data/feed/ 0
Information Commissioner clears Cambridge Analytica of influencing Brexit https://www.artificialintelligence-news.com/2020/10/08/information-commissioner-cambridge-analytica-influencing-brexit/ https://www.artificialintelligence-news.com/2020/10/08/information-commissioner-cambridge-analytica-influencing-brexit/#respond Thu, 08 Oct 2020 16:32:57 +0000 http://artificialintelligence-news.com/?p=9938 A three-year investigation by the UK Information Commissioner’s office has cleared Cambridge Analytica of electoral interference. Cambridge Analytica was accused in March 2018 of using AI tools and big data to influence the results of the Brexit referendum and the US presidential election. Most objective observers probably felt the case was overblown, but it’s taken... Read more »

The post Information Commissioner clears Cambridge Analytica of influencing Brexit appeared first on AI News.

]]>
A three-year investigation by the UK Information Commissioner’s office has cleared Cambridge Analytica of electoral interference.

Cambridge Analytica was accused in March 2018 of using AI tools and big data to influence the results of the Brexit referendum and the US presidential election. Most objective observers probably felt the case was overblown, but it’s taken until now to be confirmed.

“From my review of the materials recovered by the investigation I have found no further evidence to change my earlier view that CA [Cambridge Analytica] was not involved in the EU referendum campaign in the UK,” wrote Information Commissioner Elizabeth Denham.

Cambridge Analytica did obtain a ton of user data—but through predominantly commercial means, and of mostly US voters. Such data is available to, and has also been purchased by, other electoral campaigns for targeted advertising purposes (the Remain campaigns in the UK actually outspent their Leave counterparts by £6 million.)

“CA were purchasing significant volumes of commercially available personal data (at one estimate over 130 billion data points), in the main about millions of US voters, to combine it with the Facebook derived insight information they had obtained from an academic at Cambridge University, Dr Aleksandr Kogan, and elsewhere,” wrote Denham.

The only real scandal was Facebook’s poor protection of users which allowed third-party apps to scrape their data—for which it was fined £500,000 by the UK’s data protection watchdog.

It seems the claims Cambridge Analytica used powerful AI tools were also rather overblown, with the information commissioner saying all they found were models “built from ‘off the shelf’ analytical tools”.

The information commissioner even found evidence that Cambridge Analytica’s own staff “were concerned about some of the public statements the leadership of the company were making about their impact and influence.”

Cambridge Analytica appears to have been a victim of those unable to accept democratic results combined with its own boasting of capabilities that weren’t actually that impressive.

You can read the full report here (PDF).

(Photo by Christian Lue on Unsplash)

Interested in hearing industry leaders discuss subjects like this? Attend the co-located 5G Expo, IoT Tech Expo, Blockchain Expo, AI & Big Data Expo, and Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London, and Amsterdam.

The post Information Commissioner clears Cambridge Analytica of influencing Brexit appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2020/10/08/information-commissioner-cambridge-analytica-influencing-brexit/feed/ 0