Jaromir Dzialo, Exfluency: How companies can benefit from LLMs

Can you tell us a little bit about Exfluency and what the company does?

Exfluency is a tech company providing hybrid intelligence solutions for multilingual communication. By harnessing AI and blockchain technology we provide tech-savvy companies with access to modern language tools. Our goal is to make linguistic assets as precious as any other corporate asset.

What tech trends have you noticed developing in the multilingual communication space?

As in every other walk of life, AI in general – and ChatGPT specifically – is dominating the agenda. Companies operating in the language space are either panicking or scrambling to play catch-up. The main challenge is the size of the tech deficit in this vertical. Innovation – and AI innovation especially – is not a plug-in.

What are some of the benefits of using LLMs?

Off-the-shelf LLMs (ChatGPT, Bard, etc.) have a quick-fix attraction. Magically, it seems, well-formulated answers appear on your screen. One cannot fail to be impressed.

The true benefits of LLMs will be realised by the players who can provide immutable data with which to feed the models. They are what we feed them.

What do LLMs rely on when learning language?

Overall, LLMs learn language by analysing vast amounts of text data, understanding patterns and relationships, and using statistical methods to generate contextually appropriate responses. Their ability to generalise from data and generate coherent text makes them versatile tools for various language-related tasks.

Large Language Models (LLMs) like GPT-4 rely on a combination of data, pattern recognition, and statistical relationships to learn language. Here are the key components they rely on:

  1. Data: LLMs are trained on vast amounts of text data from the internet. This data includes a wide range of sources, such as books, articles, websites, and more. The diverse nature of the data helps the model learn a wide variety of language patterns, styles, and topics.
  2. Patterns and Relationships: LLMs learn language by identifying patterns and relationships within the data. They analyse the co-occurrence of words, phrases, and sentences to understand how they fit together grammatically and semantically.
  3. Statistical Learning: LLMs use statistical techniques to learn the probabilities of word sequences. They estimate the likelihood of a word appearing given the previous words in a sentence. This enables them to generate coherent and contextually relevant text (see the sketch after this list).
  4. Contextual Information: LLMs focus on contextual understanding. They consider not only the preceding words but also the entire context of a sentence or passage. This contextual information helps them disambiguate words with multiple meanings and produce more accurate and contextually appropriate responses.
  5. Attention Mechanisms: Many LLMs, including GPT-4, employ attention mechanisms. These mechanisms allow the model to weigh the importance of different words in a sentence based on the context. This helps the model focus on relevant information while generating responses.
  6. Transfer Learning: LLMs use a technique called transfer learning. They are pretrained on a large dataset and then fine-tuned on specific tasks. This allows the model to leverage its broad language knowledge from pretraining while adapting to perform specialised tasks like translation, summarisation, or conversation.
  7. Encoder-Decoder Architecture: In certain tasks like translation or summarisation, LLMs use an encoder-decoder architecture. The encoder processes the input text and converts it into a context-rich representation, which the decoder then uses to generate the output text in the desired language or format.
  8. Feedback Loop: LLMs can learn from user interactions. When a user provides corrections or feedback on generated text, the model can adjust its responses based on that feedback over time, improving its performance.

What are some of the challenges of using LLMs?

A fundamental issue, which has been there ever since we started giving away data to Google, Facebook and the like, is that “we” are the product. The big players are earning untold billions on our rush to feed their apps with our data. ChatGPT, for example, is enjoying the fastest-growing onboarding in history. Just think how Microsoft has benefitted from the millions of prompts people have already thrown at it.

The open LLMs hallucinate and, because answers to prompts are so well-formulated, one can easily be duped into believing what they tell you. To make matters worse, there are no references or links to show where they sourced their answers.

How can these challenges be overcome?

LLMs are what we feed them. Blockchain technology allows us to create an immutable audit trail and, with it, immutable, clean data. There is no need to trawl the internet. In this manner we are in complete control of what data goes in, can keep it confidential, and can support it with a wealth of useful metadata. It can also be multilingual!

Secondly, as this data is stored in our databases, we can also provide the necessary source links. If you can’t quite believe the answer to your prompt, open the source data directly to see who wrote it, when, in which language and which context.

What advice would you give to companies that want to utilise private, anonymised LLMs for multilingual communication?

Make sure your data is immutable, multilingual, of a high quality – and stored for your eyes only. LLMs then become a true game changer.

What do you think the future holds for multilingual communication?

As in many other walks of life, language will embrace forms of hybrid intelligence. For example, in the Exfluency ecosystem, the AI-driven workflow takes care of 90% of the translation – our fantastic bilingual subject matter experts then only need to focus on the final 10%. This balance will change over time – AI will take an ever-increasing proportion of the workload. But the human input will remain crucial. The concept is encapsulated in our strapline: Powered by technology, perfected by people.

What plans does Exfluency have for the coming year?

Lots! We aim to roll out the tech to new verticals and build communities of SMEs to serve them. There is also great interest in our Knowledge Mining app, designed to leverage the information hidden away in the millions of linguistic assets. 2024 is going to be exciting!

  • Jaromir Dzialo is the co-founder and CTO of Exfluency, which offers affordable AI-powered language and security solutions with global talent networks for organisations of all sizes.


Iurii Milovanov, SoftServe: How AI/ML is helping boost innovation and personalisation

Could you tell us a little bit about SoftServe and what the company does?

Sure. We’re a 30-year-old global IT services and professional services provider. We specialise in using emerging state-of-the-art technologies, such as artificial intelligence, big data and blockchain, to solve real business problems. We’re obsessed with our customers and their problems – not with technologies – although we are technology experts. But we always try to find the best technology that will help our customers get to the point where they want to be.

So we’ve been in the market for quite a while, having originated in Ukraine. But now we have offices all over the globe – US, Latin America, Singapore, Middle East, all over Europe – and we operate in multiple industries. We have some specialised leadership around specific industries, such as retail, financial services, healthcare, energy, oil and gas, and manufacturing. We also work with a lot of digital natives and independent software vendors, helping them adopt this technology in their products, so that they can better serve their customers.

What are the main trends you’ve noticed developing in AI and machine learning?

One of the biggest trends is that, while people used to question whether AI, machine learning and data science were the technologies of the future, that’s no longer the question. This technology is already everywhere. And the vast majority of the innovation that we see right now wouldn’t have been possible without these technologies.

One of the main reasons is that this tech allows us to address and solve some of the problems that we used to consider intractable. Think of natural language, image recognition or code generation, which are not only hard to solve, they’re also hard to define. And approaching these types of problems with our traditional engineering mindset – where we essentially use programming languages – is just impossible. Instead, we leverage the knowledge stored in the vast amounts of data we collect, and use it to find solutions to the problems we care about. This approach is now called Machine Learning, and it is the most efficient way to address those types of problems nowadays.

But with the amount of data we can now collect, the compute power available in the cloud, the efficiency of training and the algorithms that we’ve developed, we are able to reach the stage where we can achieve superhuman performance on many tasks that we used to think only humans could perform. We must admit that human intelligence is limited in capacity and ability to process information. And machines can augment our intelligence and help us more efficiently solve problems that our brains were not designed for.

The overall trend that we see now is that machine learning and AI are essentially becoming the industry standard for solving complex problems that require knowledge, computation, perception, reasoning and decision-making. And we see that in many industries, including healthcare, finance and retail.

There are some more specific emerging trends. The topic of my TechEx North America keynote will be about generative AI, which many folk might think is something just recently invented, something new, or they may think of it as just ChatGPT. But these technologies have been evolving for a while. And we, as hands-on practitioners in the industry, have been working with this technology for quite a while. 

What has changed now is that, based on the knowledge and experience we’ve collected, we have been able to get this tech to a stage where GenAI models are useful. We can use them to solve real problems across different industries, from concise document summaries to advanced user experiences, logical reasoning and even the generation of unique knowledge. That said, there are still some challenges with reliability, and with understanding the actual potential of these technologies.

How important are AI and machine learning with regards to product innovation?

AI and Machine Learning essentially allow us to address the set of problems that we can’t solve with traditional technology. If you want to innovate, if you want to get the most out of tech, you have to use them. There’s no other choice. It’s a powerful tool for product development, to introduce new features, for improving customer user experiences, for deriving some really deep actionable insights from the data. 

But, at the same time, it’s quite complex technology. There’s quite a lot of expertise involved in applying this tech, training these types of models, evaluating them, deciding what model architecture to use, etc. Moreover, they’re highly experiment-driven. In traditional software development we often know in advance what we want to achieve, so we set some specific requirements and then write source code to meet those requirements.

And that’s primarily because, in traditional engineering, it’s the source code that defines the behaviour of our system. With machine learning and artificial intelligence, the behaviour is defined by the data, which means we hardly ever know in advance what the quality of our data is. What’s the predictive power of our data? What kind of data do we need to use? Is the data we’ve collected enough, or do we need to collect more? That’s why we always need to experiment first.

But I think, in some way, we got used to the uncertainty in the process and the outcomes of AI initiatives. The AI industry gave up on the idea that machine learning will be predictable at some point. Instead, we learned how to experiment efficiently, turning our ideas into hypotheses that we can quickly validate via experimentation and rapid prototyping, and evolving the most successful experiments into full-fledged products. That’s essentially what the modern lifecycle of AI/ML products looks like.

It also requires the product teams to adopt a different mindset of constant ideation and experimentation, though. It starts with selecting those ideas and use cases that have the highest potential, the most feasible ones that may have the biggest impact on the business and the product. From there, the team can ideate around potential solutions, quickly prototyping and selecting those that are most successful. That requires experience in identifying the problems that can benefit from AI/ML the most, and agile, iterative processes of validating and scaling the ideas.

How can businesses use that type of technology to improve personalisation?

That’s a good question because, again, there are some problems that are really hard to define. Personalisation is one of them. What makes me or you a person? What contributes to that? Our preferences, perhaps – but how do we define our preferences? They might be stochastic, they might be contextual. It’s a highly multi-dimensional problem.

And, although you can try to approach it with more traditional tech, you’ll still be limited in the depth of personalisation you can achieve. The most efficient way is to learn those personal signals and preferences from the data, and use those insights to deliver personalised experiences, personalised marketing, and so on.

Essentially, AI/ML acts as a sort of black box between the signals a user generates and the specific preferences and content that will resonate with that user. As of right now, that’s the most efficient way to achieve personalisation.

One other benefit of modern AI/ML is that you can use many different types of data. You can combine clickstream data capturing how users behave on your website, text data from Twitter or any other sources, and imagery data, and use all that information to derive the insights you care about. So the ability to analyse that heterogeneous set of data is another benefit that AI/ML brings to this game.

How do you think machine learning is impacting the metaverse and how are businesses benefiting from that?

There are two different aspects. ‘Metaverse’ is quite an abstract term, and we used to think of it from two different perspectives. One of them is that you want to replicate your physical assets – part of our physical world in the metaverse. And, of course, you can try to approach it from a traditional engineering standpoint, but many of the processes that we have are just too complex. It’s really hard to replicate them in a digital world. So think of a modern production line in manufacturing. In order for you to have a really precise, let’s call it a digital twin, of some physical assets, you have to be smart and use something that will allow you to get as close as possible in your metaverse to the physical world. And AI/ML is the way to go. It’s one of the most efficient ways to achieve that.

Another aspect of the metaverse is that since it’s digital, it’s unlimited. Thus, we may also want to have some specific types of assets that are purely digital, that don’t have any representation in the real world. And those assets should have similar qualities and behaviour as the real ones, handling a similar level of complexity. In order to program these smart, purely digital processes or assets, you need AI and ML to make them really intelligent.

Are there any examples of companies that you think have been utilising AI and machine learning well?

There are the three giants – Facebook, Google, Amazon. All of them are essentially key drivers behind the industry, and the vast majority of their products are, in some way, powered by AI/ML. Quite a lot has changed since I started my career but, even when I joined SoftServe around 10 years ago, there was a lot of research going on into AI/ML.

There were some big players using the technology, but the vast majority of the market were just exploring this space. Most of our customers didn’t know anything about it. Some of the first questions they had were ‘can you educate us on this? What is AI/ML? How can we use it?’ 

What has changed now is that almost any company we interact with has already done some AI/ML work, whether they build something internally or they use some AI/ML products. So the perception has changed.

The overall adoption of this technology now is at the scale where you can find some aspects of AI/ML in almost any company.

You may see a company that does a lot of AI/ML in, let’s say, marketing or distribution, but has some old-school legacy technologies in its production site or in its supply chain. The level of AI/ML adoption may differ across different lines of business. But I think almost everyone is using it now. Even your phone is packed with AI/ML features. So it’s hard to think of a company that doesn’t use any AI/ML right now.

Do you think, in general, companies are using AI and machine learning well? What kind of challenges do they have when they implement it?

That’s a good question. The main challenge of applying these technologies today is not how to be successful with this tech, but rather how to be efficient. With the amount of data that we have now, and data that the companies are collecting, plus the amount of tech that is open source or publicly available – or available as managed services from AWS, from GCP – it’s easy to get some good results.

The question is, how do you decide where to apply this technology? How efficiently can you identify those opportunities, and find the ones that will bring the biggest impact, and can be implemented in the most time-efficient and cost-effective manner? 

Another aspect is how do you quickly turn those ideas into production-grade products? It’s a highly experiment-driven area, and there is a lot of science, but you still need to build reliable software on the research results. 

The key drivers for successful AI adoption are finding the right use cases where you can actually get the desired outcomes in the most efficient way, and turning ideas into full-fledged products. We’ve seen some really innovative companies that had brilliant ideas. They may have built proofs of concept around their ideas, but they didn’t know how to evolve them or how to build reliable products out of them. At the same time, there are some technically savvy and digitally native companies. They have tonnes of smart engineers, but they don’t have the right expertise and experience in AI/ML technologies. They don’t know how to apply this tech to real business problems, or what low-hanging fruit is available to them. They just struggle with finding the best way to leverage this tech.

What do you think the future holds for AI and machine learning?

I generally try to be more optimistic about the future because there are obviously a lot of fears around AI/ML. And I think that’s quite natural. If you look back in history, it was the same with electricity and any other innovative technologies.

One of the fears that I think does have some merit is that this technology may replace some real jobs. I think that’s a bit of a pessimistic view because history also teaches us that whatever technology we get, we still need that human aspect to it. 

Almost all the technology that we use right now augments our intelligence. It does not replace it. And I think that the future of AI will be used in a cooperative way. If you’ve seen products like GitHub Copilot, the purpose of this product is essentially to assist the developer in writing code. We still can’t use AI to write entire programs. We need a human to guide that AI to our desired outcome. What exactly do we want to achieve? What is our objective? What is our user expectation?

Similarly, maybe this technology will be applied to a broader set of use cases where AI will be assisting us, not replacing us. There is a quote that I wish was mine but I still think it’s a very good way of thinking about the role of AI: if you think that AI will replace you or your job, most likely you’re wrong. It’s the people who will be using AI who will replace you at your job. 

So I think one of the most important skills to learn right now is how to leverage this tech to make your work more efficient. And that should help many people get that competitive advantage in the future.

  • Iurii Milovanov is the director of AI and data science at SoftServe, a technology company specialising in consultancy services and software development. 


Infocepts CEO Shashank Garg on the D&A market shifts and impact of AI on data analytics

Could you tell us a little bit about your company, Infocepts?

On a mission to bridge the gap between the worlds of business and analytics, Infocepts was founded in 2004 by me and Rohit Bhayana, both with more than 20 years of experience in the Data and Analytics (D&A) industry. People often use the term business analytics as one phrase, but if you have worked in the industry for a long time and if you talk to a lot of people, you’ll realise just how big the gap is.

And that’s Infocepts’ focus. We are an end-to-end D&A solutions provider with an increasing focus on AI, and our solutions combine our processes, expertise and proprietary technologies, all packaged together to deliver predictable outcomes to our clients. We work for marquee enterprise clients across industries. Infocepts has the highest overall ranking on Gartner peer reviews amongst our competitors, and we are a Great Place to Work certified firm. So, we’re very proud that our clients and our people love us.

The data & analytics technology market is evolving very fast. What’s your view of it?

I love being in the data industry, and a large reason is the pace at which it moves. In less than 10 years we have gone from about 60-70 technologies to 1,400+ and growing. But the problems have not grown 20X. That means we now have multiple ways to solve the same problem.

Similarly, on the buyer side, we have seen a huge change in the buyer persona. Today, I don’t know of any business leader who is not a data leader. Today’s business leaders were born in the digital era and are super comfortable not just with insights but with the lowest-level data. They know the modelling methods and have an intuitive sense of where AI can help. Most executives in today’s world also have a deeper understanding of what data quality means, its importance, and how it will change the game in the long run.

So, we are seeing a big change both on the supply and demand side.

What are some of the key challenges you see in front of business & data leaders focused on data-driven transformation?

The gap between the worlds of business and analytics is a very, very real one. I would like to quote a leadership survey which highlights this contradiction. Talking about D&A initiatives which are adding value: 90% of data leaders believe their company’s data products provide real business value, but only 39% of business leaders feel so. That’s a huge gap. Ten years ago, the numbers would have been lower, but the gap was still the same. This is not a technology issue. What it tells us is that the most common roadblocks to the success of D&A initiatives are all human-related challenges, like skills shortages, lack of business engagement, difficulty accepting change and poor data literacy throughout the organisation.

We all know the power of data and we spoke about business leaders being data leaders, but there are still people in organisations who need to change. Data leaders are still not speaking the language of business and are under intense pressure to demonstrate the value of D&A initiatives to business executives.

The pace at which technologies have changed and evolved is the pace at which you will see businesses evolving due to human-centric changes. The next five years look very transformational and positive for the industry.

Can you also talk about some of the opportunities you see in front of the D&A leaders?

The first big opportunity is to improve productivity to counter the economic uncertainty. Companies are facing a financial crunch because of ongoing economic uncertainty, including the very real possibility of a recession in the next 12-18 months. Data shows that there are companies that come out strong after a recession, with twice the industry averages in revenue & profits. These are the companies that are proactive in preparing & executing against multiple scenarios backed by insights. They redeploy their resources towards the highest-value activities in their strategy and manage other costs. Companies need to stop paying for legacy technologies and fix their broken managed services model. To keep up with the changing technology landscape, it’s important to choose on-demand talent.

Secondly, companies and people should innovate with data and become data fluent. Many organisations have invested in specialised teams for delivering data. But the real value from data comes only when your employees use it. Data fluency is an organisational capability that enables every employee to understand, access, and apply data as fluently as they can speak their own language. With more fluent people in an organisation, productivity increases, turnover reduces, and innovation thrives without relying only on specialised teams. Companies should assess their organisational fluency and consider establishing a data concierge. It’s like a ten-layered structure instead of a very big team. A concierge can help you become more fluent and introduce interventions across the board to strengthen access, democratise data, and increase trust and adoption.

Lastly, there’s a huge opportunity to reimagine how we bring value to the business using data. Salesforce and Amazon pioneered commercially successful IaaS, PaaS, and SaaS models in cloud computing that gradually shifted significant portions of the responsibility for delivering value from the client to the service provider. The benefits of agility, elasticity, economies of scale, and reach are well known. Data & analytics technologies need to go through a similar paradigm shift and go one step further towards productised services – what we call at Infocepts: Solutions as a Service!

Can you talk more about your Solutions as a Service approach?

What we mean by Solutions as a Service is a combination of products, problem solving & expertise together in one easy-to-use solution. This approach is inevitable given the sheer pace at which technology is evolving. This new category requires a shift in thinking and will give you a source of advantage similar to what early cloud adopters gained during the last decade. Infocepts offers many domain-driven as-a-service solutions in this category, such as e360 for people analytics, AutoMate for business automations and DiscoverYai (also known as AI-as-a-Service) for realising the benefits of AI.

There is a lot of buzz around AI. What does AI mean for the world of D&A and how real is the power of AI?

Oh! It’s very real. In the traditional BI paradigm, business users struggled to get access to their data, but even if they crossed that hurdle, they still needed to know what questions to ask. AI can be an accelerator and educator by helping business folks know what to look for in their data in the first place.

AI-driven insights can help uncover the “why” in your data. For example, augmented analytics can help you discover why sales are increasing and why market penetration varies from city to city, guiding you towards hidden insights for which you didn’t know where to look.

Another example is the use of chatbots or NLP-driven generative AI solutions that can understand and translate queries such as, “What are sales for each category and region?” Thanks to modern computing and programming techniques combined with the power of AI, these solutions can run thousands of analyses on billions of rows in seconds, use AutoML capabilities to identify best-fit models and produce insights to answer such business questions.

Then, through natural language generation, the system can present the AI-driven insights to the user in an intuitive fashion – including results to questions that the user might not have thought to ask. With user feedback and machine learning, the AI can become more intelligent about which insights are most useful.

In addition to insights generation, AI can also play a role in data management & engineering by automating data discovery, data classification, metadata enhancements, data lifecycle governance, data anonymisation and more.

On the data infra side, models trained in machine learning can be used to solve classification, prediction, and control problems to automate activities & add or augment capabilities such as – predictive capacity & availability monitoring, intelligent alert escalation & routing, anomaly detection, ChatOps, root cause identification and more. 

Where can AI create immediate impact for businesses? Can you share some examples?

AI is an enabler for data and analytics rather than a technology vertical in itself. As an example, let’s look at the retail industry – use cases like store activity monitoring, order management, fraud/threat detection and assortment management have existed for a while now. With AI, you can deliver them far faster.

In media, some of the use cases that we are helping our clients with are around demand prediction, content personalisation, content recommendation, synthetic content generation – both text & multimedia. AI also has vast applications in banking. We again have fraud detection, and coupled with automation, now it’s not just detection but you can also put controls in real time to stop fraud.

We have also implemented AI use-cases within Infocepts. We leverage AI to increase our productivity & employee engagement. Our HR team launched ‘Amber’, an AI bot that redefines employee experience. We use AI assistants to record, transcribe and track actions from voice conversations. Our marketing & comms teams use generative AI tools for content generation.

The advancement we have seen in the tech space in the last few years is what you will see in the next 3 to 4 years on the people side. And I think AI assisted tech processes and solutions will play a huge role there.

What advice would you give business leaders who are looking to get started with AI?                             

Embrace an AI-first mindset! The traditional approach to complex business problems means sifting through data and wrestling with analysis for months before you see any results; an AI-first mindset gets things done in a fraction of the time. AI-driven auto-analysis uncovers hidden patterns and trends so analysts can get to “why” faster and help their business users take action. Auto-analysis gives data teams access to hidden patterns and the dark corners of their data. Let your AI tools do most of the grunt work faster than your traditional approaches. And now, with generative AI technologies bolted on top of these solutions, you can make it all conversational using voice or natural language search capabilities.

Solutions like Infocepts’ DiscoverYai do just this. The process starts by identifying clients’ objectives, then leverages advanced AI strategies that quickly assess data quality, highlight key relationships in the data, identify the drivers behind results and surface useful recommendations, together with an impact analysis. The result is actionable recommendations with maximum impact potential – delivered through an effective combination of tried-and-tested practices and cutting-edge AI-driven processes.

Secondly, to gain the most from AI-driven insights, you’ll need to be ready for a little experimentation. Embrace getting it wrong and use those discoveries as learning opportunities. Hackathons/innovation contests are a great way to generate quick ideas, fail fast and succeed faster.

It’s also essential that your team can confidently understand data; this enables them to recognise useful actions generated by artificial intelligence without hesitation. So, while you use AI, ensure that it is explainable.

Lastly, help your organisation set up systems which will make sure your AI models don’t become obsolete in an ever-evolving landscape – keep upping their training so they remain ready to take on even harder challenges!

About Shashank Garg

Shashank Garg is the co-founder and CEO of Infocepts, a global leader in Data & AI solutions. As a trusted advisor to CXOs of several Fortune 500 companies, Shashank has helped leaders across the globe to disrupt and transform their businesses using the power of Data Analytics and AI. Learn more about him on LinkedIn.

About Infocepts

Infocepts enables improved business results through more effective use of data, AI & user-friendly analytics. Infocepts partners with its clients to resolve the most common & complex challenges standing in their way of using data to strengthen business decisions. To learn more, visit: infocepts.com or follow Infocepts on LinkedIn.


Oxford University spinout invents body scanner for accurate clothing measurements

An Oxford University tech spinout has invented a ‘ground-breaking’ AI tool that scans users’ bodies to provide accurate clothing measurements, with the intention of streamlining the online shopping experience and saving UK retailers billions in returns.

Founded in 2019 by Duncan McKay, an INSEAD MBA, and Phil Torr, Professor of Computer Vision and Deep Learning at the University of Oxford, the tech firm went on to be awarded two Innovate UK grants and one Future Fashion Factory grant in partnership with the University of Leeds, with funding totalling approximately £1.2 million.

McKay said: “I have worked for L’Oreal, Unilever and PepsiCo coming up with new product ideas and consumer solutions – I built an £18m net revenue business in a year whilst at PepsiCo. I got into this because I love innovating – I get a kick out of innovation, building and scaling businesses. I founded Aistetic with Phil Torr as I experienced the problem of poor-fitting clothes personally and we both felt that we could solve this with a technology solution. With the development of our patent-pending solution, we quickly realised that our purpose is bigger than that – we want to make next-gen 3D body modelling available to anyone with a mobile device.”

Aistetic is a low-code solution that integrates into retailers’ websites with one short snippet of JavaScript, and works across WordPress and Shopify stores. Using Aistetic, users can select a garment, then use AI software to scan their body and receive their measurements and clothes sizing specific to the retailer they’re shopping with, all within three minutes or less. McKay explains that this new tool can reduce rates of return by up to 30%, creating significant savings for retailers.

McKay said: “Using our tool, consumers can record themselves using their phone, tablet or computer to receive their measurements with 98% accuracy. They just need to stand back in front of the camera and turn for 10 seconds. We want to empower people with their body data to make more informed decisions that are right for them, and using this tool can reduce rates of return by up to 30%. Our next step is to develop a no-code solution for Shopify customers – this will make our grey-labelled solution entirely no-code and available on Shopify’s app marketplace platform.”

High rates of returns are not only a significant drag on the profits of retailers’ growing online businesses, they’re also a huge environmental drain in the polluting fashion sector – which the UN estimates contributes 10% of global carbon emissions.

McKay said: “The carbon footprint of a return can be as high as 4.2 kg of carbon if the garment is taken back into the store. This is something that we often don’t think about. This technology is easily accessible – it’s a low-code solution that can be pasted into any site – and will help to promote a more responsible approach to clothing.”


VisitDenmark brings iconic tourist attractions to life in AI-produced campaign

A new activation campaign from tourism organisation VisitDenmark wants to put the land of “hygge” on the map as the antidote to bucket list tourism.

Using artificial intelligence, Mona Lisa, the Statue of Liberty, and other iconic tourist attractions come to life with a simple message: Don’t come see me – visit Denmark instead. Other than its cheeky approach, the campaign stands out by being completely written by artificial intelligence.

“Imagine that you are Mona Lisa. Write a speech on why people should visit Denmark instead of standing in line to see you.”

This was the prompt given to an artificial intelligence to create the script of one of a series of videos in which tourist attractions from all over the world turn against themselves and recommend visiting Denmark – rather than standing in line at the Louvre or seeing the Statue of Liberty in a sea of selfie-sticks. Executing on the brand campaign ‘Don’t be a tourist – be an Explorist’, VisitDenmark positions Denmark as the antidote to bucket list tourism.

Louis Pilmark, creative director at Danish advertisement agency Brandhouse/Subsero, said: “Having iconic attractions from popular tourist destinations turn on themselves is a good way to highlight the absurdity of doing and seeing the same things as everyone else. Who better to explain it than the paintings and statues that see millions of tourists every year?”

Iconic art meets trending tech

Other than the slightly teasing approach, the campaign is unique in that both the scripts and the visuals are created by artificial intelligence. While techniques like deepfakes and motion synthesis have been used to bring images to life over the last couple of years, the addition of scripts generated completely by AI makes this one of the first campaigns to combine the two technologies.

Kathrine Lind Gustavussen, senior PR at VisitDenmark, said: ”The scripts are 100% generated by AI – we didn’t write a single word, we only removed parts and bits that were too long or simply not true. While it felt somewhat risky to put our entire messaging in the hands of artificial intelligence, we’re excited to be at the forefront of the tourism industry, using cutting-edge technology to bring our creative visions and messages to life.” 

Tourist attractions aren’t so attractive anymore

The overall campaign, developed by London-based creative agency Fold7, builds on the insight that bucket list tourism has lost its lustre. A study conducted in the UK, Sweden and Germany validated the hypothesis that ‘feeling like a tourist’ would ruin a holiday. More than half of the respondents agreed that overcrowded tourist sites and landmarks were a reason for holiday disappointment.

Yelena Gaufman, strategy partner at Fold7, said: “Denmark may not be a bucket list destination and the wonders there aren’t big and dramatic, but they are small and plentiful. We saw this as a huge opportunity to attract a different kind of traveller, the anti-tourist, the Explorist.”


£370m plan launched to turn UK into ‘tech superpower’

The Prime Minister and Technology Secretary have unveiled the Government’s plan to cement the UK’s place as a science and technology superpower by 2030, alongside a raft of new measures backed by over £370 million to boost investment in innovation, bring the world’s best talent to the UK, and seize the potential of ground-breaking new technologies like AI.

The new Science and Technology Framework is the first major piece of work from the newly created Department for Science, Innovation and Technology and will challenge every part of government to better put the UK at the forefront of global science and technology this decade through 10 key actions – creating a coordinated cross-government approach.

In doing so, the Government will foster the right conditions for industry innovation and world leading scientific research to deliver high-paid jobs of the future, grow the economy in cutting-edge industries, and improve people’s lives from better healthcare to security.

The 10 points of the new Science and Technology Framework centre on:

  • identifying, pursuing and achieving strategic advantage in the technologies that are most critical to achieving UK objectives
  • showcasing the UK’s S&T strengths and ambitions at home and abroad to attract talent, investment and boost our global influence
  • boosting private and public investment in research and development for economic growth and better productivity
  • building on the UK’s already enviable talent and skills base
  • financing innovative science and technology start-ups and companies
  • capitalising on the UK government’s buying power to boost innovation and growth through public sector procurement
  • shaping the global science and tech landscape through strategic international engagement, diplomacy and partnerships
  • ensuring researchers have access to the best physical and digital infrastructure for R&D that attracts talent, investment and discoveries
  • leveraging post-Brexit freedoms to create world-leading pro-innovation regulation and influence global technical standards
  • creating a pro-innovation culture throughout the UK’s public sector to improve the way our public services run

The delivery of this new Framework will begin immediately with an initial raft of projects, worth around £500 million in new and existing funding, which will help ensure the UK has the skills and infrastructure to take a global lead in game-changing technologies.

Prime Minister Rishi Sunak said: “Trailblazing science and innovation have been in our DNA for decades. But in an increasingly competitive world, we can only stay ahead with focus, dynamism and leadership.

“That’s why we’re setting out 10 key actions under a bold new plan to cement our place as a global science and technology superpower by 2030 – from pursuing transformational technologies like AI and supercomputing to attracting top talent and ensuring they have the tools they need to succeed.

“The more we innovate, the more we can grow our economy, create the high-paid jobs of the future, protect our security, and improve lives across the country.”

Science, Innovation and Technology Secretary Michelle Donelan said: “Innovation and technology are our future. They hold the keys to everything from raising productivity and wages, to transforming healthcare, reducing energy prices and ultimately creating jobs and economic growth in the UK, providing the financial firepower allowing us to spend more on public services.

“That is why we are putting the full might of the British government and our private sector partners behind our push to become a scientific and technological superpower, because only through being world-leaders in future industries like AI and quantum will we be able to improve the lives of every Briton.”

The initial package of projects to drive forward the actions of the Science and Technology Framework includes:

  • £250 million investment in three transformational technologies to build on the UK’s global leadership in AI, quantum technologies and engineering biology, so they can help a range of industries tackle the biggest global challenges like climate change and health care. This forms part of our commitment to the five technologies within the science and technology framework, which also includes semiconductors and future telecoms
  • publication of Sir Paul Nurse’s Independent Review of the Research, Development and Innovation Organisational Landscape with recommendations to make the most of the UK’s research organisations, ensuring they are effective, sustainable and responsive to global challenges
  • testing different models of funding science, to support a range of innovative institutional models, such as Focused Research Organisations (known as FROs), working with industry and philanthropic partners to open up new funding for UK research. For example, this could include working with a range of partners to increase investment in the world leading UK Biobank, to support the continued revolution in genetic science
  • up to £50 million to spur co-investment in science from the private sector and philanthropists to drive the discoveries of the future, subject to business cases. The government is already talking to Schmidt Futures, a philanthropic initiative of Eric and Wendy Schmidt, about additional support of up to $20 million as part of this work
  • £117 million of existing funding to create hundreds of new PhDs for AI researchers and £8 million to find the next generation of AI leaders around the world to do their research in the UK
  • a £50 million uplift to World Class Labs funding to help research institutes and universities to improve facilities so UK researchers have access to the best labs and equipment they need to keep producing world-class science, opening up entirely new avenues for economic growth and job creation
  • a £10 million uplift to the UK Innovation and Science Seed Fund, totalling £50 million, to boost the UK’s next tech and science start-ups who could be the next Apple, Google or Tesla
  • plans to set up an Exascale supercomputer facility – the most powerful compute capability which could solve problems as complex as nuclear fusion – as well as a programme to provide dedicated compute capacity for important AI research, as part of the response to the Future of Compute Review
  • £9 million in government funding to support the establishment of a quantum computing research centre by PsiQuantum in Daresbury in the North-West

The Framework has been designed in consultation with industry experts and academics, to help deliver stronger growth, better jobs, and bold discoveries to tackle the challenges of today and tomorrow.

The plan will be a cross-government endeavour led by the Department for Science, Innovation and Technology (DSIT), bringing together responsibility for the UK’s world-class research and innovation system with the five technologies of tomorrow – quantum, AI, engineering biology, semiconductors and future telecoms – plus life sciences and green technologies, in one single department for the first time.

Director of the Francis Crick Institute and lead reviewer of the Landscape Review, Paul Nurse, said: “It is absolutely right, as the Prime Minister has said, that the future of the UK depends upon research, science and technology. Only by being a leading science nation can the UK drive a sustainable economy, increased productivity and generate societal benefits such as improved healthcare and protecting the environment.

“The Government’s endorsement of this approach is to be fully supported. My Review of the research, development and innovation landscape makes a range of recommendations across the whole RDI endeavour, which if adopted together, provides a blueprint for government to make the UK a genuine science superpower.”

Today the government is also announcing a further extension until 30 June 2023 of the financial guarantee provided to the UK’s Horizon Europe applicants so that eligible, successful bids for calls closing on or before this date continue to be guaranteed funding, supporting them to continue their important work in research and innovation.

Science, innovation and technology are the drivers of economic growth and productivity. More than half of the UK’s future labour productivity growth will come from adopting the best available technologies and the rest from ‘pushing the frontier’ of technology even further. Each £1 of public R&D investment leverages £2 of private R&D investment in the long run.

The announcements build on existing Government efforts to support science and technology. This includes setting up the Advanced Research and Invention Agency (ARIA) to fund high-risk, high-reward R&D; investing £100 million in a pilot bringing together national and local partners in Glasgow, Greater Manchester and the West Midlands to accelerate their growth into major, globally competitive centres for research and innovation; and publishing the UK Digital Strategy committed to rolling out world-class digital infrastructure, unlocking the value of data to create growth, innovation and societal benefits across the UK and harnessing digital transformation to build a more inclusive, competitive and innovative digital economy.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

AI being used to cherry-pick organs for transplant
https://www.artificialintelligence-news.com/2023/03/02/ai-being-used-to-cherry-pick-organs-for-transplant/ Thu, 02 Mar 2023 12:06:49 +0000

A new method to assess the quality of organs for donation is set to revolutionise the transplant system – and it could help save lives and tens of millions of pounds.

The National Institute for Health and Care Research (NIHR) is contributing more than £1 million in funding to develop the new technology, known as Organ Quality Assessment (OrQA). It works much like AI-based facial recognition, evaluating the quality of an organ from images.

It is estimated the technology could result in up to 200 more patients receiving kidney transplants and 100 more receiving liver transplants a year in the UK.

Colin Wilson, transplant surgeon at Newcastle upon Tyne Hospitals NHS Foundation Trust and co-lead of the project, said: “Transplantation is the best treatment for patients with organ failure, but unfortunately some organs can’t be used due to concerns they won’t function properly once transplanted.

“The software we have developed ‘scores’ the quality of the organ and aims to support surgeons to assess if the organ is healthy enough to be transplanted.

“Our ultimate hope is that OrQA will result in more patients receiving life-saving transplants and enable them to lead healthier, longer lives.”

Professor Hassan Ugail, director of the Centre for Visual Computing at the University of Bradford, whose team is working on image analysis as part of the research, said: “Currently, when an organ becomes available, it is assessed by a surgical team by sight, which means, occasionally, organs will be deemed not suitable for transplant.

“We are developing a deep machine learning algorithm which will be trained using thousands of images of human organs, so that it can assess images of donor organs more effectively than the human eye can.

“This will ultimately mean a surgeon could take a photo of the donated organ, upload it to OrQA and get an immediate answer as to how best to use the donated organ.”
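
The OrQA team has not published details of its model, but Professor Ugail’s description – a deep network trained on organ photographs that returns a score – maps onto a familiar computer vision pattern. The sketch below is purely illustrative: the ResNet backbone, the preprocessing and the score semantics are our assumptions, not OrQA’s.

```python
# Illustrative sketch only -- OrQA's real architecture and training data are not public.
# Assumes a CNN backbone fine-tuned to regress a usability score from an organ photo.
import torch
import torch.nn as nn
from PIL import Image
from torchvision import models, transforms

class OrganQualityScorer(nn.Module):
    def __init__(self):
        super().__init__()
        # ResNet-18 backbone with the classifier head swapped for a
        # single regression output (the hypothetical "quality score").
        self.backbone = models.resnet18(weights=None)
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, 1)

    def forward(self, x):
        # Squash to [0, 1] so the score reads as "likelihood the organ is usable".
        return torch.sigmoid(self.backbone(x))

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def score_organ_photo(path: str, model: OrganQualityScorer) -> float:
    """Return an illustrative quality score in [0, 1] for one photo."""
    image = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    model.eval()
    with torch.no_grad():
        return model(image).item()
```

In a real deployment the model would, as the article describes, be trained and clinically validated on thousands of labelled organ images before any score informed a surgical decision.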

There are currently nearly 7,000 patients awaiting organ transplant in the UK. An organ can only survive out of the body for a limited time. In most cases, only one journey from the donor hospital to the recipient hospital is possible. This means it is essential that the right decision is made quickly.

The project is being supported by NHS Blood and Transplant (NHSBT), the Quality in Organ Donation biobank and an NIHR Blood and Transplant Research Unit to deliver research for the NHS. It also involves academics from the Universities of Oxford and New South Wales.

Professor Derek Manas, medical director of NHSBT Organ Donation and Transplantation, said: “This is an exciting development in technological infrastructure that, once validated, will enable surgeons and transplant clinicians to make more informed decisions about organ usage and help to close the gap between those patients waiting for and those receiving lifesaving organs. We at NHSBT are extremely committed to making this exciting venture a success.”

Health Minister Neil O’Brien said: “Technology has the ability to revolutionise the way we care for people and this cutting edge technology will improve organ transplant services. Developed here in the UK, this pioneering new method could save hundreds of lives and ensure the best use of donated organs.

“I encourage everyone to register their organ donation decision. Share it with your family so your loved ones can follow your wishes and hopefully save others.”

Chief executive of the NIHR, Professor Lucy Chappell, said: “Funded by our Invention for Innovation Programme, this deep machine learning algorithm aims to increase the number of liver and kidney donor organs suitable for transplantation. This is another example of how AI can enhance our healthcare system and make it more efficient. Once clinically validated and tested, cutting-edge technology such as this holds real promise of saving and improving lives.”

‘Proof of concept’ work has been carried out in liver, kidney and pancreas transplantation, and pre-clinical testing in liver and kidney is at an advanced stage.

It is hoped the OrQA software will be ready for a licensing study in the NHS within two years. There is also the possibility of marketing the tool worldwide.

Informatica launches AI tool for marketers
https://www.artificialintelligence-news.com/2023/03/01/informatica-launches-ai-tool-for-marketers/ Wed, 01 Mar 2023 08:34:02 +0000

Informatica, an enterprise cloud data management specialist, has launched the industry’s only free cloud data loading, integration and ETL/ELT service – Informatica Cloud Data Integration-Free and PayGo.

The new offering targets data practitioners and non-technical users – such as those in marketing, sales, and revenue operations teams – enabling them to build data pipelines within minutes. For example, it provides operations teams with a fast, free, and frictionless way to load, integrate and analyze high-quality campaign, pipeline, forecast, and revenue data. In addition, data analysts and data engineers benefit from increased productivity and rapid development.

This is the second in a series of releases that began with the Informatica Data Loader launch in May 2022. Taken together, Informatica Data Loader, Cloud Data Integration-Free (CDI-Free), and Cloud Data Integration-PayGo (CDI-PayGo) are the industry’s only free data loading and integration solutions, built natively to provide intelligent cloud data management services across data-driven use cases. Informatica CDI-Free, CDI-PayGo and Data Loader support all major data warehouse/lake solutions, including Amazon Redshift, Azure Synapse, Databricks Delta Lake, Google BigQuery, and Snowflake.

Jitesh Ghai, chief product officer at Informatica, said: “We are redefining the data integration market by making it free, easy to use and accessible to everyone. Organisations face the challenge of ingesting huge volumes of data from disparate sources and then making sense of that information. There is a clear need for no setup and no code SaaS data integration tools that are free and pay-as-you-go to quickly get started serving both business-focused data engineers and non-technical business users and analysts.

“By giving business and non-technical users access to simple, cost-optimised data integration solutions, organisations can bring the power of data to the masses.”   

The key to a truly data-driven business is providing self-service data integration to users across the organisation in technical and business roles. Informatica CDI-Free and PayGo provide just that: 

  • CDI-Free: A free service that allows users to process up to 20M rows for ELT or reach 10 processing hours for ETL, per month, whichever comes first. 
  • CDI-PayGo: All the capabilities of CDI-Free with no limit on processing rows or hours of usage. CDI-PayGo comes with essential customer support and SOC2 compliance. In addition, users only pay for what they use with a credit card. (How the free-tier limits combine is sketched in code below.)
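
To make the free-tier boundary concrete, here is a small, hypothetical helper built from the limits quoted above; the function and its name are ours for illustration, not part of any Informatica API.

```python
# Hypothetical helper based on the CDI-Free limits quoted above:
# 20M ELT rows or 10 ETL processing hours per month, whichever is hit first.
def within_cdi_free_tier(elt_rows: int, etl_hours: float) -> bool:
    """True if a month's usage still fits the CDI-Free allowance."""
    ELT_ROW_LIMIT = 20_000_000
    ETL_HOUR_LIMIT = 10.0
    return elt_rows <= ELT_ROW_LIMIT and etl_hours <= ETL_HOUR_LIMIT

# 15M ELT rows is fine, but 11 ETL hours breaches the 10-hour cap first.
print(within_cdi_free_tier(15_000_000, 11.0))  # False
```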

Users benefit from easy setup and usage of these data integration services with AI-powered automation – no need for coding, setup, or any DevOps. In addition, the cloud data loading and integration ETL/ELT services can be easily accessed from each of Informatica’s ecosystem partners, including Amazon Web Services, Databricks, Google Cloud, Microsoft Azure and Snowflake.

Chris Eldredge, VP of data office at Paycor, said: “The ability to harness the power of data is a valuable competitive advantage. Having the right data integration platform enables a data foundation that drives agility, insights, and innovation for superior business results. The new Cloud Data Integration (CDI)-Free and PayGo products lower the barriers to get started with data integration.  These new products will open the door for more data professionals, including tech-savvy business users, to leverage best-in-class data integration tools from Informatica.” 

Matt Wienke, CEO of Infoverity, said: “Cloud Data Integration (CDI)-Free and PayGo are launchpads that will improve and serve those entering the data integration domain. The tools are intuitive to use and easy to navigate. CDI-Free will empower tech-savvy business users to begin moving their data to the cloud without committing to software costs. Furthermore, the option to scale up to Informatica’s enterprise-grade cloud platform minimises risks from the trial and adoption of these products.” 

Oracle teams up with NVIDIA to quicken enterprise AI adoption
https://www.artificialintelligence-news.com/2022/10/19/oracle-teams-up-with-nvidia-to-quicken-enterprise-ai-adoption/ Wed, 19 Oct 2022 09:22:51 +0000

Oracle and NVIDIA have formed a multi-year partnership to help customers solve business challenges with accelerated computing and AI.

The collaboration aims to bring the full NVIDIA accelerated computing stack – from GPUs to systems to software – to Oracle Cloud Infrastructure (OCI).

OCI is adding tens of thousands more NVIDIA GPUs, including the A100 and upcoming H100, to its capacity. Combined with OCI’s AI cloud infrastructure of bare metal, cluster networking, and storage, this provides enterprises a broad, easily accessible portfolio of options for AI training and deep learning inference at scale.

Safra Catz, CEO, Oracle, said: “To drive long-term success in today’s business environment, organizations need answers and insight faster than ever.

“Our expanded alliance with NVIDIA will deliver the best of both companies’ expertise to help customers across industries – from healthcare and manufacturing to telecommunications and financial services – overcome the multitude of challenges they face.”

“Accelerated computing and AI are key to tackling rising costs in every aspect of operating businesses,” said Jensen Huang, CEO and founder, NVIDIA. “Enterprises are increasingly turning to cloud-first AI strategies that enable fast development and scalable deployment. Our partnership with Oracle will put NVIDIA AI within easy reach for thousands of companies.”

NVIDIA and Oracle have been serving enterprises together for years with accelerated computing instances and software available via OCI. With the full NVIDIA AI platforms available on OCI instances, the extended partnership is designed to accelerate AI-powered innovation for a broad range of industries to better serve customers and support sales.

NVIDIA AI Enterprise, the globally adopted software of the NVIDIA AI platform, includes essential processing engines for each step of the AI workflow, from data processing and AI model training to simulation and large-scale deployment. NVIDIA AI enables organizations to develop predictive models to automate business processes and gain rapid business insights with applications such as conversational AI, recommender systems, computer vision and more. The parties plan to make an upcoming release of NVIDIA AI Enterprise available on OCI, providing customers with easy access to NVIDIA’s accelerated, secure and scalable platform for end-to-end AI development and deployment.

Additionally, Oracle is now offering early access to NVIDIA RAPIDS acceleration for Apache Spark data processing on the OCI Data Flow fully-managed Apache Spark service. Data processing is one of the top cloud computing workloads. To support this demand, OCI Data Science plans to offer support for OCI bare metal shapes, including BM.GPU.GM4.8 with NVIDIA A100 Tensor Core GPUs across managed notebook sessions, jobs, and model deployment.
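
The article does not show how Data Flow exposes RAPIDS, and OCI’s managed configuration may differ; as a rough sketch, the upstream RAPIDS Accelerator for Apache Spark is normally switched on through Spark session settings like these, assuming the RAPIDS Accelerator jar is on the classpath and a GPU is available.

```python
# Sketch of enabling the NVIDIA RAPIDS Accelerator in a PySpark session.
# Assumes the RAPIDS Accelerator jar is on the classpath and a GPU is present;
# OCI Data Flow's managed setup may handle these settings differently.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("rapids-accelerated-etl")
    # Load the RAPIDS SQL plugin so eligible operators run on the GPU.
    .config("spark.plugins", "com.nvidia.spark.SQLPlugin")
    .config("spark.rapids.sql.enabled", "true")
    .getOrCreate()
)

# Ordinary Spark code -- supported operations are transparently GPU-accelerated.
df = spark.range(0, 100_000_000).selectExpr("id % 1000 AS key", "id AS value")
df.groupBy("key").sum("value").show(5)
```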

NVIDIA Clara, a healthcare AI and HPC application framework for medical imaging, genomics, natural language processing, and drug discovery, is also coming soon. Oracle and NVIDIA are additionally collaborating on new AI-accelerated Oracle Cerner offerings for healthcare, which span analytics, clinical solutions, operations, patient management systems and more.

Q&A: Felipe Chies, Amazon Web Services: Democratising ML
https://www.artificialintelligence-news.com/2022/09/21/qa-felipe-chies-amazon-web-services-democratising-ml/ Wed, 21 Sep 2022 19:24:00 +0000

Amazon Web Services (AWS), the leader in public cloud infrastructure, now has more than 200 fully featured services, including compute, storage, databases, networking, analytics, robotics, Internet of Things (IoT), mobile, security, hybrid, virtual and augmented reality (VR and AR), media, application development, deployment, management, and machine learning and artificial intelligence (AI). For the latter, the message is clear: AWS wants to democratise ML technologies.

AWS has the most comprehensive set of AI and machine learning services for all skill levels. The most well-known is arguably Amazon SageMaker, a fully managed service that removes the heavy lifting, complexity, and guesswork from each step of the machine learning process, empowering everyday developers and scientists to successfully use machine learning. Since AWS launched SageMaker in 2017, the company has added more than 150 capabilities and features, and as early as December 2020, at that year’s re:Invent – when the first machine learning keynote took place – the message was simple.

As SiliconAngle put it, the company’s ‘overall aim is to enable machine learning to be embedded into most applications before the decade is out by making it accessible to more than just experts.’

With the AI & Big Data Expo taking place in Amsterdam on September 20-21, AI News spoke with Felipe Chies, senior business development manager for AI and ML for the Benelux at AWS. Chies has strong experience in the field, having co-founded semiconductor startup Axelera AI, which has since been incubated by Bitfury.

Chies is speaking on the subject of accelerating innovation with no-code and low-code machine learning, and AI News spoke with him about key use cases, industries, and the different AWS products:

AI News: Tell us about the overall AWS ML and AI product set, how you talk about them with clients and how they help democratise machine learning.

Felipe Chies: We are very proud to have the most robust and most complete set of machine learning capabilities, and at AWS, we always approach everything we do by focusing on our customers. We think of our machine learning offerings in three different layers. First come Frameworks and Interfaces for machine learning practitioners. These are people comfortable building deep learning models, working with deep learning frameworks, building clusters, etc. They can get extremely deep. Second, the middle layer makes it much easier and more accessible for developers and data scientists to build, train, tune, and deploy machine learning models today with Amazon SageMaker. And last, Application Services, which enable developers to plug pre-built AI functionality into their apps without having to worry about the machine learning models that power these services. Many of our API services require no machine learning expertise from customers, and in some cases, end-users may not even realize machine learning is being used to power experiences with services like Amazon Kendra, Amazon CodeGuru, Contact Lens for Amazon Connect, and Amazon HealthLake. These services make it really easy to incorporate AI into applications without having to build and train ML algorithms.
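
As a concrete example of that top layer, here is a minimal sketch calling one pre-built AWS AI API from Python. Amazon Comprehend is our choice of example – it sits in the same application-services layer, though it is not one of the services Chies names – and the snippet assumes AWS credentials are already configured in the environment.

```python
# Minimal sketch of the "Application Services" layer: call a pre-built AWS AI
# API with no model to train or host. Assumes AWS credentials are configured.
import boto3

comprehend = boto3.client("comprehend", region_name="eu-west-1")

response = comprehend.detect_sentiment(
    Text="The new dashboard makes our weekly reporting painless.",
    LanguageCode="en",
)
print(response["Sentiment"])       # e.g. "POSITIVE"
print(response["SentimentScore"])  # per-class confidence scores
```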

How does that help to democratise?

If we want machine learning to be as expansive as we really want it to be, we need to make it much more accessible to people who aren’t machine learning practitioners. Today, there are very few of these experts out there. So, when we built Amazon SageMaker, we designed it as a fully managed service that removes the heavy lifting, complexity, and guesswork from each step of the machine learning process, empowering everyday developers and scientists to successfully use machine learning. SageMaker is a step-level change for everyday developers and data scientists being able to access and build machine learning models.

To further democratize machine learning, we launched Amazon SageMaker Canvas, which enables business users and analysts to generate highly accurate machine learning predictions using a visual point-and-click interface – with no coding required.

AI: How sophisticated does a customer of AWS have to be to use your AI/ML tools?

FC: AWS wants to take technology that until a few years ago was only within reach of a small number of well-funded organizations and make it as broadly distributed as possible. We’ve done that with storage, computing, analytics, databases and data warehousing, and we’ve taken the exact same approach with machine learning. We want it to be as broadly distributed as possible.

AI: What are the common use cases and industries that you see, and how can you help?

FC: Today, more than 100,000 customers use AWS machine learning. One example of an industry where we see a lot of usage is manufacturing, together with supply chain. With what has happened in the world most recently, there are many challenges in the supply chain area – so being able to forecast demand is very important. Customers ask us: ‘How can you help us anticipate changes, anticipate demand, save cost, make our customers happy and deliver on time?’ Those kinds of things are common. In manufacturing, predictive maintenance and quality control are easy use cases for applying machine learning – for example, you can use computer vision for quality control and inspection. In marketing and sales, it is again forecasting. Forecasts are an area where it is easy to understand the value machine learning brings to the business.
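
As a toy illustration of the demand-forecasting use case Chies describes, the sketch below computes a seasonal-naive baseline in pandas – the simplest benchmark any forecasting approach, no-code or otherwise, should beat. The numbers are invented for illustration.

```python
# Toy demand-forecasting baseline: predict each month as the same month last year.
# The numbers are invented; real demand data would come from the business.
import pandas as pd

year1 = [120, 130, 150, 170, 160, 155, 140, 135, 150, 180, 210, 250]
year2 = [round(v * 1.1) for v in year1]  # ~10% year-on-year growth
demand = pd.Series(
    year1 + year2,
    index=pd.date_range("2021-01-01", periods=24, freq="MS"),
)

# Seasonal-naive forecast and its mean absolute percentage error (MAPE).
forecast = demand.shift(12).dropna()
actual = demand.loc[forecast.index]
mape = ((actual - forecast).abs() / actual).mean() * 100
print(f"Seasonal-naive baseline MAPE: {mape:.1f}%")  # roughly 9%
```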

AI: What are the key roadblocks to ML adoption in your opinion and why?

FC: Many of the organisations I talk to already have a machine learning mindset, so that is not a problem. One of the biggest challenges nowadays is the human-resources backlog – there’s just a lot to do for the development teams. One way to solve it is to get more people, but that’s another challenge – there just aren’t enough specialists, whether in data science, machine learning or engineering – it’s really hard to find the people in the market.

This is really where the democratisation of machine learning comes in. Why not enable more people in the company to do machine learning? Instead of having only data scientists and machine learning engineers, why not also business analysts, or finance, or marketing people? An example of this is a tool like Amazon SageMaker Canvas. It enables business users and analysts to generate highly accurate machine-learning predictions using a visual point-and-click interface – with no coding required.

AI: What would you like attendees at the AI & Big Data Expo to learn from your keynote presentation?

FC: There are people who think machine learning is out of their reach – that they need to go and send a requirement to the data science team and wait for weeks. This is not really the case: they can get started in a few minutes. The awareness that people can use machine learning nowadays without needing to know how to build models – that is a key takeaway.
