ai expo Archives - AI News
https://www.artificialintelligence-news.com/tag/ai-expo/

Bob Briski, DEPT®: A dive into the future of AI-powered experiences
https://www.artificialintelligence-news.com/2023/10/25/bob-briski-dept-a-dive-into-future-ai-powered-experiences/
Wed, 25 Oct 2023 10:25:58 +0000

AI News caught up with Bob Briski, CTO of DEPT®, to discuss the intricate fusion of creativity and technology that promises a new era in digital experiences.

At the core of DEPT®’s approach is the strategic utilisation of large language models. Briski articulated the delicate balance between the ‘pioneering’ and ‘boutique’ ethos encapsulated in their tagline, “pioneering work on a global scale with a boutique culture.”

While ‘pioneering’ and ‘boutique’ evoke innovation and personalised attention, ‘global scale’ signifies broad outreach. DEPT® harnesses large language models to disseminate highly targeted, personalised messages to expansive audiences. These models, Briski pointed out, enable DEPT® to understand individuals at massive scale and foster meaningful, individualised interactions.

“The way that we have been using a lot of these large language models is really to deliver really small and targeted messages to a large audience,” says Briski.

However, the integration of AI into various domains – such as retail, sports, education, and healthcare – poses both opportunities and challenges. DEPT® navigates this complexity by leveraging generative AI and large language models trained on diverse datasets, including vast repositories like Wikipedia and the Library of Congress.

Briski emphasised the importance of marrying pre-trained data with DEPT®’s domain expertise to ensure precise contextual responses. This approach guarantees that clients receive accurate and relevant information tailored to their specific sectors.

“The pre-training of these models allows them to really expound upon a bunch of different domains,” explains Briski. “We can be pretty sure that the answer is correct and that we want to either send it back to the client or the consumer or some other system that is sitting in front of the consumer.”

One of DEPT®’s standout achievements lies in its foray into the web3 space and the metaverse. Briski shared the company’s collaboration with Roblox, a platform synonymous with interactive user experiences. The partnership revolves around empowering users to create and enjoy user-generated content at an unprecedented scale.

DEPT®’s internal project, Prepare to Pioneer, epitomises its commitment to innovation by nurturing embryonic ideas within its ‘Greenhouse’. DEPT® hones concepts to withstand the rigours of the external world, ensuring only the most robust ideas reach their clients.

“We have this internal project called The Greenhouse where we take these seeds of ideas and try to grow them into something that’s tough enough to handle the external world,” says Briski. “The ones that do survive are much more ready to use with our clients.”

While the allure of AI-driven solutions is undeniable, Briski underscored the need for caution. He warns that AI is not inherently transparent and trustworthy and emphasises the imperative of constructing robust foundations for quality assurance.

DEPT® employs automated testing to ensure responses align with expectations. Briski also stressed the importance of setting stringent parameters to guide AI conversations, ensuring alignment with the company’s ethos and desired consumer interactions.

“It’s important to really keep focused on the exact conversation you want to have with your consumer or your customer and put really strict guardrails around how you would like the model to answer those questions,” explains Briski.
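Guardrails of this kind are typically implemented as a validation layer that sits between the model and the user. The sketch below is a generic illustration of the pattern – the checks, names, and fallback message are all hypothetical, not DEPT®’s actual implementation:

```python
# Hypothetical guardrail layer: every model response is checked against
# simple allow/deny rules before it is sent back to the consumer.
BLOCKED_TOPICS = ["medical advice", "legal advice"]
MAX_LENGTH = 500  # keep answers short and on-topic

def passes_guardrails(response: str) -> bool:
    """Return True only if the response satisfies every guardrail check."""
    if len(response) > MAX_LENGTH:
        return False
    lowered = response.lower()
    return not any(topic in lowered for topic in BLOCKED_TOPICS)

def answer(model_response: str,
           fallback: str = "Let me connect you with a specialist.") -> str:
    # Fall back to a safe canned reply whenever a check fails.
    return model_response if passes_guardrails(model_response) else fallback
```

In practice the checks would also cover tone, factuality, and brand policy, and the automated tests Briski describes would assert that known prompts stay inside these bounds.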

In December, DEPT® is sponsoring AI & Big Data Expo Global and will be in attendance to share its unique insights. Briski is a speaker at the event and will be providing a deep dive into business intelligence (BI), illuminating strategies to enhance responsiveness through large language models.

“I’ll be diving into how we can transform BI to be much more responsive to the business, especially with the help of large language models,” says Briski.

As DEPT® continues to redefine digital paradigms, we look forward to observing how the company’s innovations deliver a new era in AI-powered experiences.

DEPT® is a key sponsor of this year’s AI & Big Data Expo Global on 30 Nov – 1 Dec 2023. Swing by DEPT®’s booth to hear more about AI and LLMs from the company’s experts and watch Briski’s day one presentation.

The post Bob Briski, DEPT®:  A dive into the future of AI-powered experiences appeared first on AI News.

Cyber Security & Cloud Expo: The alarming potential of AI-powered cybercrime
https://www.artificialintelligence-news.com/2023/09/27/cyber-security-cloud-expo-alarming-potential-ai-powered-cybercrime/
Wed, 27 Sep 2023 08:50:54 +0000

In a packed session at Cyber Security & Cloud Expo Europe, Raviv Raz, Cloud Security Manager at ING, turned the spotlight away from traditional security threats and delved into the world of AI-powered cybercrime.

Raz shared insights from his extensive career, including his tenure as technical director for a web application firewall company. This role exposed him to the rise of the “Cyber Dragon” and Chinese cyberattacks, inspiring him to explore the offensive side of cybersecurity. During this time, he not only developed defence tools, but also created attack tools that would later be adopted by the Anonymous hacker collective.

“The perfect cyber weapon”

One of the most intriguing aspects of Raz’s presentation was his exploration of “the perfect cyber weapon.” He proposed that this weapon would need to operate in complete silence, without any command and control infrastructure, and would have to adapt and improvise in real-time. The ultimate objective would be to disrupt critical systems, potentially even at the nation-state level, while remaining undetected.

Raz’s vision for this weapon, though controversial, underscored the power of AI in the wrong hands. He highlighted the potential consequences of such technology falling into the hands of malicious actors and urged the audience to consider the implications seriously.

Real-world proof of concept

To illustrate the feasibility of his ideas, Raz shared the story of a consortium of banks in the Netherlands that embraced his concept. They embarked on a project to build a proof of concept for an AI-driven cyber agent capable of executing complex attacks. This agent demonstrated the potential power of AI in the world of cybercrime.

The demonstration served as a stark reminder that AI is no longer exclusive to nation-states. Common criminals, with access to AI-driven tools and tactics, can now carry out sophisticated cyberattacks with relative ease. This shift in the landscape presents a pressing challenge for organisations and governments worldwide.

The rise of AI-enhanced malicious activities

Raz further showcased how AI can be harnessed for malicious purposes. He discussed techniques such as phishing attacks and impersonation, where AI-powered agents can craft highly convincing messages and even deepfake voices to deceive individuals and organisations.

Additionally, he touched on the development of polymorphic malware—malware that continuously evolves to evade detection. This alarming capability means that cybercriminals can stay one step ahead of traditional cybersecurity measures.

Stark wake-up call

Raz’s presentation served as a stark wake-up call for the cybersecurity community. It highlighted the evolving threats posed by AI-driven cybercrime and emphasised the need for organisations to bolster their defences continually.

As AI continues to advance, both in terms of its capabilities and its accessibility, the line between nation-state and common criminal cyber activities becomes increasingly blurred.

In this new age of AI-driven cyber threats, organisations must remain vigilant, adopt advanced threat detection and prevention technologies, and prioritise cybersecurity education and training for their employees.

Raz’s insights underscored the urgency of this matter, reminding us that the only way to combat the evolving threat landscape is to evolve our defences in tandem. The future of cybersecurity demands nothing less than our utmost attention and innovation.

Want to learn more about cybersecurity and the cloud from industry leaders? Check out Cyber Security & Cloud Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with AI & Big Data Expo Europe.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Cyber Security & Cloud Expo: The alarming potential of AI-powered cybercrime appeared first on AI News.

Twilio Segment: Transforming customer experiences with AI
https://www.artificialintelligence-news.com/2023/09/26/twilio-segment-transforming-customer-experiences-with-ai/
Tue, 26 Sep 2023 07:28:02 +0000

AI News caught up with Hema Thanki, EMEA Senior Product Marketing Manager for Twilio Segment, to discuss how the company is using AI to transform customer engagement and personalisation.

AI is fundamentally reshaping customer engagement and personalisation by enabling businesses to deliver tailored experiences and responses to individual customer preferences on a large scale.

During our conversation, we discussed how Twilio is at the forefront of this revolution as well as addressing the stark contrast between companies’ claims of personalisation and customers’ actual experiences.

AI News: How are you using AI to deliver more personalised and satisfactory customer experiences? 

Hema Thanki: According to Twilio’s recent State of Customer Engagement report, 91 percent of companies say they always or often personalise engagement with consumers. But consumers don’t agree. Only 56 percent of consumers report that their interactions with brands are always or often personalised.

Instead of becoming “customer-centric,” most companies have become “system-centric.” Exploding tech stacks and patchwork solutions lead to fragmented data, an incomplete view of the customer and, ultimately, disjointed experiences.

The reality is that every consumer is a complex individual with unique wants and needs that change from moment to moment. In order to truly put customers at the heart of your business, you need to be able to know who your customers are, understand how to best meet their needs and exceed their expectations, and then activate those insights to engage them how, when, and where is most relevant and meaningful to them. 

These elements make up the engagement flywheel that is key to powering dynamic customer engagement that adapts to every individual customer at scale.

We recently announced Twilio CustomerAI to unlock the power of AI for hundreds of thousands of businesses and supercharge the engagement flywheel. With CustomerAI, brands can expand their perception of customer data, activate it more extensively, and be better informed by a deeper understanding of their customers. 

AN: What are some examples of how businesses can use Twilio CustomerAI? 

HT: Today’s marketers need to not only understand past customer behaviour but must be able to anticipate and act on customers’ future wants and needs. AI and machine learning (ML) models are incredibly effective at doing this but are complex to build and require data science expertise.

With CustomerAI Predictions now generally available, Twilio Segment is putting the power of predictive AI at marketers’ fingertips. Without having to tap technical teams, marketers can now create precisely targeted audiences out-of-the-box, trigger customer journeys, and personalise multichannel experiences based on a customer’s predicted lifetime value (LTV), likelihood to purchase or churn, or propensity to perform any other event tracked in Segment.

Brands like Box are using Predictions to save time, optimise campaign performance, and discover revenue opportunities:

“As marketers, the holy grail is to reach your customers and prospects in a way that is meaningful, relevant and additive to them. CustomerAI Predictions has equipped Box’s marketing team with the ability to forecast customer behaviour to a degree that was simply unavailable to us before.

We’ve been able to explore segmenting our audience based on predictive traits like who is most likely to join us at in-person events or who is more likely to purchase, and this allows us to meet those people where they are in their customer journey. Tools like Predictions put marketers at the centre of this new era of AI which is transforming how companies engage and retain their customers.” – Chris Koehler, CMO at Box.

AN: What other emerging AI trends should people be keeping an eye on? 

HT: When companies rely on managing data in a customer data platform (CDP) in tandem with AI, they can create strong, personalised campaigns that reach and inspire their customers. Here are four trends in AI personalisation.

  1. Personalised product recommendations: Using AI to serve personalised product recommendations is a way to ensure your customers are being served optimised content. It can also build trust in your brand, leading to repeat purchases. Take Norrøna, an outdoor clothing brand in Scandinavia. They built a complete recommendation platform – from data collection to machine learning predictions – in just six months. Norrøna relied on Segment for the collection and management of client-side and server-side customer data. Segment assigned an ID to each customer and ensured the data collected on them was clean, thanks to the schema.
  2. Behaviour-based email campaigns: AI is getting us as close as possible to identifying patterns in user interactions and helping create behaviour-based email campaigns. If a customer frequently clicks on one type of email content, an AI-powered system could trigger an email to that customer containing related content. Using Twilio Engage, you can send emails right from the Segment app, relying on your first-party customer data to lead the way.
  3. Dynamic website content: The days of static landing pages are in the past, thanks to tools that collect user behaviour and churn out personalised website content in real-time. In fact, Segment’s website displays dynamic content to customers based on their own interests—thanks to integration with Mutiny. Whenever a visitor lands on the Segment website from a particular IP address, they will be met with personalised landing pages based on their unique behaviour. This means personalised content for every single visitor.
  4. Predictive customer segmentation: Predictive Audiences let businesses target users who have an increased likelihood of performing an event. It works with out-of-the-box audience templates that are pre-built with Predictions. They include templates like “ready to buy” or “potential VIPs.” AI can then analyse these profiles to create predictive segments. For instance, AI could identify a group of customers who are likely to churn based on their behaviour. This allows you to proactively engage with these customers through tailored retention campaigns.
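To make point 4 concrete, the sketch below scores profiles for churn propensity and buckets the high-risk ones into an audience. Everything here – the weights, feature names, and threshold – is hypothetical and stands in for a trained model; it is not Segment’s actual Predictions API:

```python
import math

# Hypothetical predictive-segmentation sketch. The weights stand in for
# the output of a trained propensity model; feature names and the 0.5
# threshold are illustrative only.
WEIGHTS = {"days_since_last_purchase": 0.02, "support_tickets": 0.1}
BIAS = -1.0

def churn_score(profile: dict) -> float:
    """Logistic score in (0, 1): higher means more likely to churn."""
    z = BIAS + sum(w * profile.get(k, 0) for k, w in WEIGHTS.items())
    return 1 / (1 + math.exp(-z))

def likely_to_churn(profiles: list, threshold: float = 0.5) -> list:
    # The "audience": every profile whose predicted risk crosses the
    # threshold, ready for a tailored retention campaign.
    return [p for p in profiles if churn_score(p) >= threshold]
```

The resulting audience is what a retention journey would then be triggered against, replacing hand-built rule segments with model-driven ones.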

AN: Do you have any best practices and tools that you use for testing, monitoring, and debugging your AI models and applications to ensure quality and reliability? 

HT: Customer data unlocks the promise of AI as a unique market advantage, but your AI is only as good as the data you put into it. If your data is siloed, stale, inconsistent, and incomplete, your AI outputs will reflect that. At Twilio Segment, we have a long history of helping companies build trusted data infrastructures with unified, real-time, consistent, and consented data that is critical to your AI strategy. 

Our composable CDP ensures your data is AI-ready, helping you collect, clean, and activate customer data with our open, API-first platform and 450+ pre-built connectors that enable you to start with data anywhere and activate it everywhere.

With Segment, you choose where you start. Whether that’s getting data from SaaS products into your data warehouse, or activating existing data with reverse ETL, Segment gives you the flexibility and extensibility to move fast, scale with ease, and efficiently achieve your business goals as they evolve.

AN: Are you seeing more customers looking for AI solutions to improve operational efficiencies amid global economic uncertainties? 

HT: Marketers spend massive amounts of time writing, designing, and building campaigns and customer journeys. With generative AI soon available (scheduled for public beta in 2024) inside Twilio Engage and Segment CDP, marketers can save precious time, boost productivity, and optimise for stronger results. 

Using the new CustomerAI Generative Email coming to Twilio Engage, marketers will be able to enter simple text prompts that turn ideas into HTML in minutes. This builds on the AI capabilities available in Twilio Engage today, such as our Smart Email Content Editor which suggests conversion-worthy email headlines, images, and calls-to-action to drive better engagement with the click of a button.

Meanwhile, marketers will be able to skip the manual process of architecting customer journeys thanks to CustomerAI Generative Journeys. Soon, they will be able to simply describe campaign type (promotional, win-back, etc.), audience definition, and which channels they want to use, then Twilio Engage will automatically build the journey using generative AI—saving marketers time while accelerating growth.

AN: What will Twilio be sharing with the audience at this year’s AI & Big Data Expo Europe?

HT: Twilio Segment is excited to be taking part in AI & Big Data Expo Europe in 2023! As a proud sponsor, we’ll have a strong exhibiting presence – feel free to drop by stand 242 to meet a member of the team and discuss how Segment can collect, unify and activate your customers’ first-party data.

We’ll be joined by Folkert Mudde from Albert Heijn who’ll be talking through their positive partnership highlighting how the supermarket chain supercharges personalisation for 4+ million recognised users with Twilio Segment. Check out this session on 26 September within the ‘Applied Digital, Data & AI’ track at 14:30 CET.

Arthur Viegers, SVP of Global Engineering at Twilio Segment, will be taking part in a dynamic panel discussion on 27 September at 11:40 CET on the topic of ‘The Future of AI-Enabled Experiences’.

Twilio Segment is a key sponsor of this year’s AI & Big Data Expo Europe on 26-27 September 2023. Swing by the company’s booth at stand #242 to hear more about AI from the company’s experts.

The post Twilio Segment: Transforming customer experiences with AI appeared first on AI News.

Damian Bogunowicz, Neural Magic: On revolutionising deep learning with CPUs
https://www.artificialintelligence-news.com/2023/07/24/damian-bogunowicz-neural-magic-revolutionising-deep-learning-cpus/
Mon, 24 Jul 2023 11:27:02 +0000

AI News spoke with Damian Bogunowicz, a machine learning engineer at Neural Magic, to shed light on the company’s innovative approach to deep learning model optimisation and inference on CPUs.

One of the key challenges in developing and deploying deep learning models lies in their size and computational requirements. However, Neural Magic tackles this issue head-on through a concept called compound sparsity.

Compound sparsity combines techniques such as unstructured pruning, quantisation, and distillation to significantly reduce the size of neural networks while maintaining their accuracy. 

“We have developed our own sparsity-aware runtime that leverages CPU architecture to accelerate sparse models. This approach challenges the notion that GPUs are necessary for efficient deep learning,” explains Bogunowicz.

Bogunowicz emphasised the benefits of their approach, highlighting that more compact models lead to faster deployments and can be run on ubiquitous CPU-based machines. The ability to optimise and run sparsified networks efficiently without relying on specialised hardware is a game-changer for machine learning practitioners, empowering them to overcome the limitations and costs associated with GPU usage.

When asked about the suitability of sparse neural networks for enterprises, Bogunowicz explained that the vast majority of companies can benefit from using sparse models.

By removing up to 90 percent of parameters without impacting accuracy, enterprises can achieve more efficient deployments. While extremely critical domains like autonomous driving or autonomous aeroplanes may require maximum accuracy and minimal sparsity, the advantages of sparse models outweigh the limitations for the majority of businesses.
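The core of unstructured magnitude pruning – one ingredient of compound sparsity – can be sketched in a few lines. This is a generic, one-shot illustration, not Neural Magic’s sparsity-aware runtime; production pipelines typically prune gradually during training to preserve accuracy:

```python
# Generic one-shot magnitude pruning: zero out the fraction of weights
# with the smallest absolute values. Illustration only.
def prune(weights, sparsity):
    """Return a copy of `weights` with the smallest-magnitude
    `sparsity` fraction of entries set to zero."""
    n_zero = int(len(weights) * sparsity)
    # Indices of the n_zero smallest-magnitude weights.
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    to_zero = set(order[:n_zero])
    return [0.0 if i in to_zero else w for i, w in enumerate(weights)]
```

At 90 percent sparsity, nine out of ten multiplications in a layer become skippable, which is exactly what a sparsity-aware CPU runtime exploits for speed.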

Looking ahead, Bogunowicz expressed his excitement about the future of large language models (LLMs) and their applications.

“I’m particularly excited about the future of large language models (LLMs). Mark Zuckerberg discussed enabling AI agents, acting as personal assistants or salespeople, on platforms like WhatsApp,” says Bogunowicz.

One example that caught his attention was a chatbot used by Khan Academy—an AI tutor that guides students to solve problems by providing hints rather than revealing solutions outright. This application demonstrates the value that LLMs can bring to the education sector, facilitating the learning process while empowering students to develop problem-solving skills.

“Our research has shown that you can optimise LLMs efficiently for CPU deployment. We have published a research paper on SparseGPT that demonstrates the removal of around 100 billion parameters using one-shot pruning without compromising model quality,” explains Bogunowicz.

“This means there may not be a need for GPU clusters in the future of AI inference. Our goal is to soon provide open-source LLMs to the community and empower enterprises to have control over their products and models, rather than relying on big tech companies.”

As for Neural Magic’s future, Bogunowicz revealed two exciting developments they will be sharing at the upcoming AI & Big Data Expo Europe.

Firstly, they will showcase their support for running AI models on edge devices, specifically x86 and ARM architectures. This expands the possibilities for AI applications in various industries.

Secondly, they will unveil their model optimisation platform, Sparsify, which enables the seamless application of state-of-the-art pruning, quantisation, and distillation algorithms through a user-friendly web app and simple API calls. Sparsify aims to accelerate inference without sacrificing accuracy, providing enterprises with an elegant and intuitive solution.

Neural Magic’s commitment to democratising machine learning infrastructure by leveraging CPUs is impressive. Their focus on compound sparsity and their upcoming advancements in edge computing demonstrate their dedication to empowering businesses and researchers alike.

As we eagerly await the developments presented at AI & Big Data Expo Europe, it’s clear that Neural Magic is poised to make a significant impact in the field of deep learning.

You can watch our full interview with Bogunowicz below:

(Photo by Google DeepMind on Unsplash)

Neural Magic is a key sponsor of this year’s AI & Big Data Expo Europe, which is being held in Amsterdam between 26-27 September 2023.

Swing by Neural Magic’s booth at stand #178 to learn more about how the company enables organisations to use compute-heavy models in a cost-efficient and scalable way.

The post Damian Bogunowicz, Neural Magic: On revolutionising deep learning with CPUs appeared first on AI News.

Piero Molino, Predibase: On low-code machine learning and LLMs
https://www.artificialintelligence-news.com/2023/06/26/piero-molino-predibase-low-code-machine-learning-llms/
Mon, 26 Jun 2023 15:19:41 +0000

AI News sat down with Piero Molino, CEO and co-founder of Predibase, during this year’s AI & Big Data Expo to discuss the importance of low-code in machine learning and trends in LLMs (Large Language Models).

At its core, Predibase is a declarative machine learning platform that aims to streamline the process of developing and deploying machine learning models. The company is on a mission to simplify and democratise machine learning, making it accessible to both expert organisations and developers who are new to the field.

The platform empowers organisations with in-house experts, enabling them to supercharge their capabilities and reduce development times from months to just days. Additionally, it caters to developers who want to integrate machine learning into their products but lack the expertise.

By using Predibase, developers can avoid writing extensive lines of low-level machine learning code and instead work with a simple configuration file – known as a YAML file – which contains just 10 lines specifying the data schema.
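Predibase’s declarative layer builds on the open-source Ludwig project, which Molino created; a configuration in this spirit might look like the following. The column names are hypothetical, while the input_features/output_features structure follows Ludwig’s convention:

```yaml
# Illustrative declarative model config (Ludwig-style schema);
# the column names are hypothetical.
input_features:
  - name: review_text
    type: text
  - name: star_rating
    type: number
output_features:
  - name: will_churn
    type: binary
```

The platform infers the rest – preprocessing, model architecture, training loop – from this schema, which is what collapses months of low-level machine learning code into a handful of declarative lines.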

Predibase reaches general availability

During the expo, Predibase announced the general availability of its platform.

One of the key features of the platform is its ability to abstract away the complexity of infrastructure provisioning. Users can seamlessly run training, deployment, and inference jobs on a single CPU machine or scale up to 1000 GPU machines with just a few clicks. The platform also facilitates easy integration with various data sources, including data warehouses, databases, and object stores, regardless of the data structure.

“The platform is designed for teams to collaborate on developing models, with each model represented as a configuration that can have multiple versions. You can analyse the differences and performance of the models,” explains Molino.

Once a model meets the required performance criteria, it can be deployed for real-time predictions as a REST endpoint or for batch predictions using SQL-like queries that include prediction capabilities.

Importance of low-code in machine learning

The conversation then shifted to the importance of low-code development in machine learning adoption. Molino emphasised that simplifying the process is essential for wider industry adoption and increased return on investment.

By reducing the development time from months to a matter of days, Predibase lowers the entry barrier for organisations to experiment with new use cases and potentially unlock significant value.

“If every project takes months or even years to develop, organisations won’t be incentivised to explore valuable use cases. Lowering the bar is crucial for experimentation, discovery, and increasing return on investment,” says Molino.

“With a low-code approach, development times are reduced to a couple of days, making it easier to try out different ideas and determine their value.”

Trends in LLMs

The discussion also touched on the rising interest in large language models. Molino acknowledged the tremendous power of these models and their ability to transform the way people think about AI and machine learning.

“These models are powerful and revolutionising the way people think about AI and machine learning. Previously, collecting and labelling data was necessary before training a machine learning model. But now, with APIs, people can query the model directly and obtain predictions, opening up new possibilities,” explains Molino.

However, Molino highlighted some limitations, such as the cost and scalability of per-query pricing models, the relatively slow inference speeds, and concerns about data privacy when using third-party APIs.

In response to these challenges, Predibase is introducing a mechanism that allows customers to deploy their models in a virtual private cloud, ensuring data privacy and providing greater control over the deployment process.

Common mistakes

As more businesses venture into machine learning for the first time, Molino shared his insights into some of the common mistakes they make. He emphasised the importance of understanding the data, the use case, and the business context before diving headfirst into development. 

“One common mistake is having unrealistic expectations and a mismatch between what they expect and what is actually achievable. Some companies jump into machine learning without fully understanding the data or the use case, both technically and from a business perspective,” says Molino.

Predibase addresses this challenge by offering a platform that facilitates hypothesis testing, integrating data understanding and model training to validate the suitability of models for specific tasks. With guardrails in place, even users with less experience can engage in machine learning with confidence.

The general availability launch of Predibase’s platform marks an important milestone in their mission to democratise machine learning. By simplifying the development process, Predibase aims to unlock the full potential of machine learning for organisations and developers alike.

You can watch our full interview with Molino below:

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The event is co-located with Digital Transformation Week.

The post Piero Molino, Predibase: On low-code machine learning and LLMs appeared first on AI News.

Steve Frederickson, Lucy.ai: How AI powers a next-gen ‘answer engine’ https://www.artificialintelligence-news.com/2023/05/25/steve-frederickson-lucy-ai-how-powers-next-gen-answer-engine/ Thu, 25 May 2023 11:56:59 +0000

The post Steve Frederickson, Lucy.ai: How AI powers a next-gen ‘answer engine’ appeared first on AI News.

In an interview at AI & Big Data Expo with Steve Frederickson, Chief Product Officer at Lucy.ai, we gained valuable insights into how AI is powering a next-gen “answer engine” for enterprises.

Lucy is designed to unlock and harness the vast knowledge residing within a company’s data repositories, regardless of format or source. From SharePoint and Google Drive to Dropbox and third-party tools, Lucy can seamlessly search and connect with all types of content, facilitating efficient knowledge retrieval for employees.

“Lucy 4, our latest version, is very exciting for us. We went through a significant development phase, incorporating feedback from customers who used Lucy 3. We re-envisioned what knowledge discovery means for large companies,” says Frederickson.

The team went back to the basics of what an answer engine should be, prioritising the content itself and the individuals who created it. The ultimate aim was to foster new connections and opportunities for collaboration within the enterprise, breaking down silos and facilitating knowledge-sharing across departments.

When measuring success, Lucy.ai focuses not only on engagement metrics but also on the tangible impact it has on saving employees’ time.

Through interviews with customers, Frederickson has received feedback emphasising how Lucy has become a time-saving tool within their organisations. One notable outcome has been the breaking down of data silos between different departments and fostering a sense of unity and cooperation across the company.

The rise of remote work, particularly in a post-pandemic world, has further amplified the need for knowledge-surfacing solutions like Lucy.

As employees continue to work remotely, maintaining a connection with their company’s knowledge base and colleagues becomes crucial. Frederickson highlighted that employees often resort to reaching out to co-workers directly for information, bypassing the need for traditional search methods.

To address this challenge, the company developed Lucy Synopsis, a feature that allows users to interact with Lucy as if they were conversing with a co-worker on platforms like Microsoft Teams. By asking Lucy questions in a conversational manner, employees can receive precise answers and even have relevant content presented in an easily understandable format.

While surfacing information is essential, not all data within a company should be readily accessible to everyone.

Frederickson addressed this concern by highlighting the robust access controls provided by Lucy. These controls encompass both role-based and attribute-based access, tailored to fit the specific taxonomy and security requirements of each organisation. By aligning with a company’s access levels, Lucy can provide users with answers based on their permissions, ensuring the confidentiality and integrity of sensitive information.

In a market with several answer-surfacing solutions, Lucy aims to stand out by adopting a holistic approach to the search process.

The company redefines search as a comprehensive journey, extending beyond the initial question to encompass the entire knowledge cycle.

“We consider search as an end-to-end journey that goes beyond finding a document. Users may need to identify specific pages, contact document authors for clarification, or contribute contextual information for future reference,” explains Frederickson.

Lucy recognises the importance of these extended interactions and strives to facilitate them seamlessly within its platform. Furthermore, Lucy.ai excels at connecting with various data sources, not limited to internal repositories but also integrating with third-party tools like Confluence and ServiceNow. This versatility allows companies to leverage their existing knowledge repositories while making information accessible through Lucy's unified interface.

As an agile startup, Lucy.ai embraces the fast-paced nature of the industry. Frederickson emphasised that maintaining a set of core principles is essential to the company’s success.

Empowering individuals with knowledge lies at the heart of Lucy.ai’s mission, and they constantly explore how new tools and developments in generative AI can support that objective. By closely engaging with customers and prospects, Lucy.ai stays attuned to their needs and adapts its feature set to align with their evolving policies and requirements.

Looking ahead, Lucy.ai has ambitious plans for the future.

“We are excited to continue building on the strong foundation of Lucy 4. We aim to foster conversations and connections between departments, using Lucy as a tool to empower people and foster collaboration within the enterprise. We have exciting developments in this area that we look forward to sharing,” says Frederickson.

Lucy.ai’s innovative approach to knowledge discovery and its commitment to empowering individuals within organisations make it a formidable player in the field. As the remote work trend continues and the need for efficient knowledge surfacing grows, Lucy.ai’s comprehensive answer engine offers a unique solution.

By bridging the gap between employees and their company’s collective knowledge, Lucy not only saves time and improves productivity, but also facilitates meaningful connections that drive innovation and collaboration within the enterprise.

You can watch our full interview with Steve Frederickson below:

(Photo by Jack Carter on Unsplash)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Tim van Kasteren, Adevinta: On using AI to improve online classifieds https://www.artificialintelligence-news.com/2022/11/03/tim-van-kasteren-adevinta-on-using-ai-to-improve-online-classifieds/ Thu, 03 Nov 2022 16:39:45 +0000

The post Tim van Kasteren, Adevinta: On using AI to improve online classifieds appeared first on AI News.

Amid global economic turmoil, using AI to improve online experiences and extract the most value from every investment is more important than ever.

The advertising industry is a pioneer of AI and machine learning; harnessing the technologies to deliver personalised experiences that ensure the right content is put in front of the right people at the right time.

AI News caught up with Tim van Kasteren, Head of Engineering at Adevinta, to learn more about how one of Europe’s online classifieds leaders is using AI.

AI News: From the top, how is AI improving online classifieds? 

Tim van Kasteren: Online classifieds are a form of two-sided marketplaces where a buyer and a seller come together to close a deal. Historically, we are a group of marketplaces where we connect people who transact outside of our platform. We are now adding more and more services to allow people to transact safely on our platforms (integrated shipping options, payments through the platform, buyer protection, etc). 

We apply AI in many stages of the user journeys of both the buyer and seller. 

We support buyers by offering relevant content in the form of recommendations and search results. For example, when a user returns to one of our marketplaces they are presented with a personalised homepage feed that immediately gives the user an overview of relevant ads that have recently been added or have not been viewed yet by the user. This brings the user straight back into the search experience. By clicking on one of the ads, the user is shown a detailed ad page with similar recommendations.

To understand the complexity of such a solution, it is important to understand the difference between online classifieds and a typical e-commerce platform. An e-commerce platform typically holds stock of many units of each item (perhaps hundreds), whereas in online classifieds every item is unique and, once sold, is removed from the marketplace. This means that a marketplace's inventory is extremely volatile; as a result, traditional recommender algorithms do not perform very well in this context.

We also support sellers in adding new items to our marketplaces and making sure their ads perform well. Adding an item means the seller needs to provide pictures, a title, category, price, and a description of an item. We use computer vision to pre-fill some of these fields and to provide a price range that we believe the item will be sold at.

Getting these things right means that the item will be sold quickly, providing a good user experience for both seller and buyer and improving what is called the liquidity of the marketplace. We also provide sellers with a variety of stats about the performance of their ad and suggestions on how they might improve that performance. 

Other areas where AI plays a role are more behind the scenes. For example, we have applications in advertising (targeting the right user for a given ad), marketing (engaging users with relevant content), and fraud detection (making sure fraudulent ads or ads that violate our policies don’t make it onto the platform).

AN: Do you think it helps to keep the flow of goods and services going in uncertain economic times? 

TvK: Online classifieds are platforms that help people give their products a second life.

As well as getting some money in return for an item that’s no longer used by the seller, the sustainable nature of the transaction (reusing an item instead of buying a newly manufactured item) is an important motivator for many of our users.

Furthermore, our platforms reduce the socio-economic gap in society as certain expensive products (for example certain technology items) become available at a more affordable price.

AN: One problem that Adevinta highlighted in a blog post that is specific to recommender systems is the “cold-start problem” – can you explain what that is and how to overcome it?

TvK: The cold start problem in recommenders refers to items for which you have too little data (i.e. clicks) to apply your data-driven algorithm.

As described above, because our marketplaces have a highly volatile inventory compared to e-commerce, this problem is highly relevant. If we don’t have enough clicks to recommend an item, we fall back on natural language processing (NLP)-based techniques to determine which items are similar.

This requires a different data source (the metadata of an ad) and a different algorithmic approach in order to calculate similarity. Our personalisation services automatically determine if we have enough click data to build recommendations on or whether we have to fall back to the NLP-based approach.
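As a rough illustration of this fallback logic (a generic sketch, not Adevinta's actual implementation; the click threshold, field names, and choice of TF-IDF are all assumptions), a content-based similarity fallback for cold items might look like:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical threshold: below this many clicks, an ad is "cold"
# and the click-based recommender cannot be used reliably.
MIN_CLICKS = 50

def needs_fallback(item_id, click_counts):
    """Decide whether to use the metadata-based fallback recommender."""
    return click_counts.get(item_id, 0) < MIN_CLICKS

def similar_by_metadata(item_id, ads, top_k=2):
    """Rank ads by TF-IDF cosine similarity of their title + description."""
    ids = sorted(ads)
    texts = [ads[i]["title"] + " " + ads[i]["description"] for i in ids]
    vectors = TfidfVectorizer().fit_transform(texts)
    sims = cosine_similarity(vectors[ids.index(item_id)], vectors).ravel()
    # Exclude the item itself and return the closest matches.
    ranked = [ids[j] for j in np.argsort(-sims) if ids[j] != item_id]
    return ranked[:top_k]
```

A production system would route each request through `needs_fallback` and only use the metadata path when click data is too sparse.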

AN: How do you ensure that the data you’re using for models is of sufficiently high quality?

TvK: All our marketplaces pipe their data to a centrally-created databus, which is basically a streaming solution that allows us to process behavioural data and other types of data in real-time.

We have created an in-house data quality monitoring platform that checks for basic anomalies in quality and allows for custom rules to check if quality levels are as expected. When we roll out a data product to one of our marketplaces, we check the data quality using this tooling and collaborate with the marketplace's team to improve the data quality if needed.

Once the product is live, a quality monitoring service monitors whether data quality drops below acceptable levels. 
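The pattern described here, custom per-field rules evaluated over incoming batches with an alerting threshold, can be sketched as follows (the rule definitions and threshold are illustrative, not Adevinta's tooling):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    check: Callable[[dict], bool]  # returns True when a record passes

def quality_report(records, rules, max_failure_rate=0.05):
    """Evaluate each rule over a batch of records and flag any rule
    whose failure rate exceeds the acceptable threshold."""
    report = {}
    for rule in rules:
        failures = sum(1 for r in records if not rule.check(r))
        rate = failures / len(records) if records else 0.0
        report[rule.name] = {"failure_rate": rate,
                             "alert": rate > max_failure_rate}
    return report
```

A live monitoring service would run such a report on each batch from the databus and page the owning team whenever any rule raises an alert.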

AN: Are you using a machine learning platform like SageMaker for your NLP models? 

TvK: We have developed an internal machine learning (ML) platform based on Kubeflow and Kubernetes on which we run our services and workloads.

This ML platform allows us to have high utilisation of our computing clusters without having to pay an additional markup as we would for SageMaker. The ML platform furthermore allows us to integrate seamlessly with the rest of our CI/CD pipeline.

AWS SageMaker can still be useful for rapid prototyping, but we struggled to apply it successfully in a production setup due to the increased cost and the limited source control features.

AN: How important is the human element still in creative tasks? Do you foresee that changing much in the coming years?

TvK: Product development remains a highly creative and insightful task that many parties in our organisation are involved in.

Analytics helps us to make data-driven decisions, and toolkits and platforms help us to reduce the time to market in rolling out a prototype, MVP, and finally the production-ready version, but the process of product development as a whole still requires very active human creativity.

AN: You recently presented a session at this year’s AI & Big Data Expo Europe, can you give us a brief overview for readers who were unable to attend? 

TvK: In my talk at the AI & Big Data Expo, I explained some of the challenges that federated businesses face in building data products. I went into both the challenges and how we solved them in the areas of data collection, data processing, privacy, and experimentation. I then gave examples of data products we built in the personalisation and computer vision space.

You can view a recording of Tim van Kasteren’s session below:

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Justin Swansburg, DataRobot: On combining human and machine intelligence https://www.artificialintelligence-news.com/2022/10/04/justin-swansburg-datarobot-on-combining-human-and-machine-intelligence/ Tue, 04 Oct 2022 10:55:51 +0000

The post Justin Swansburg, DataRobot: On combining human and machine intelligence appeared first on AI News.

Advancements in AI are providing transformational benefits to enterprises, but keeping risks in check and improving consumer sentiment is paramount.

Explainable AI (XAI) is the idea that an AI should always provide reasoning for its decisions in a way that makes it easy for humans to comprehend. XAI helps to build trust and ensures that issues can be more quickly identified before they cause wider damage.

AI News caught up with Justin Swansburg, VP of Americas Data Science Practice at DataRobot, to discuss how the company is driving AI adoption using concepts like XAI to combine the strengths of human and machine intelligence.

AI News: Can you give us a brief overview of DataRobot’s core solutions?

Justin Swansburg: DataRobot’s AI Cloud platform is uniquely built to democratise and accelerate the use of AI while delivering critical insights that drive clear business results. 

DataRobot helps organisations across industries harness the transformational power of AI, from restoring supply chain resiliency to accelerating the treatment and prevention of disease and enhancing patient care to combating the climate crisis.

As one of the most widely deployed and proven AI platforms in the market today, DataRobot AI Cloud brings together a broad range of data, giving businesses comprehensive insights to drive revenue growth, manage operations, and reduce risk.

DataRobot has delivered over 1.4 trillion predictions for customers around the world, including the U.S. Army, CBS Interactive, and CVS.

AN: What is “augmented intelligence” and how does it differ from artificial intelligence?

JS: Artificial intelligence and augmented intelligence share the same objective but have different ways of accomplishing it.

Augmented intelligence brings together the qualities of human intuition and experience with the efficiency and power of machine learning, whereas artificial intelligence is often used as a replacement or substitute for human processes and decision-making.

AN: Do you need machine learning or programming experience to build predictive analytics with DataRobot?  

JS: DataRobot is a unified platform designed to democratise and accelerate the use of AI. This means that anyone in an organisation – with or without specialist knowledge of AI – can use DataRobot to build, deploy, and manage AI applications to transform their products, services, and operations.

AN: How does DataRobot support the idea of explainable AI and why is that important?

JS: DataRobot Explainable AI helps organisations understand the behaviour of models and gain confidence in their results. When AI is not transparent, it can be difficult to trust the system and translate insights and predictions into business outcomes.

With Explainable AI, users can easily understand the model inputs while bridging the gap between development and actionable results.

AN: DataRobot recently earned a coveted spot among Forrester’s leading AI/ML platforms – what makes you stand out from rivals?

JS: We’re very proud of this achievement. We believe that our innovative platform and customer loyalty sets us apart from competitors.

Over the last year, we’ve focused on improving our AI platform through new tooling and functionality, as well as several acquisitions.

Our main goal is to provide customers with the best possible technology to help solve their business problems and we’ve heard that our platform’s ease of use, model documentation, and explainability have been appreciated by customers. 

AN: Your report, AI and the Power of Perception, found that 72 percent of businesses are positively impacted by AI but consumer scepticism remains – how do you think that can be addressed?

JS: That’s a great question. While there is significant scepticism, we believe that this can be addressed with some form of increased regulatory guidance and education on the benefits of AI for both businesses and consumers.

We believe that increased training for businesses would help to demonstrate to consumers a commitment to higher standards. It would also give consumers more confidence that responsible data practices were being followed.

Other consumer concerns, like the potential of AI to replace jobs, will take longer to address. But it is too early to make a call on the extent to which these concerns are warranted, overblown, or somewhere in between.

We’re interested to see how perceptions change over time and are hopeful that more and more people will start to realise the great benefits AI has to offer. 

Justin Swansburg and the DataRobot team will be sharing their invaluable insights at this year's AI & Big Data Expo North America. You can find out more about Justin's sessions here and be sure to swing by DataRobot's booth at stand #176.

Ash Damle, TMDC: Data-based business decisions in real-time https://www.artificialintelligence-news.com/2022/09/22/ash-damle-tmdc-data-based-business-decisions-in-real-time/ Thu, 22 Sep 2022 12:53:12 +0000

The post Ash Damle, TMDC: Data-based business decisions in real-time appeared first on AI News.

Ash Damle, Head of AI and Data Science at TMDC, explains how the company is humanising and democratising data access.

AI News: The Modern Data Company (TMDC) aims to “democratise” data access. What are the benefits to enterprises? 

Ash Damle: Modern companies are data companies. When a data company’s best asset, its data, is only accessible by a handful of individuals, then the company is only scratching the surface of what data can do.

Democratisation of data enables every individual in the company to better perform, innovate, and meet business goals. Modern offers enterprises the ability to put data to work — that requires data to be available and to be trusted. 

AN: Can you still apply different levels of access to data based on individuals/teams? 

AD: You can absolutely still apply different levels of access to data within an organisation. In fact, our approach to governance is a key factor in enabling unprecedented levels of data access, transparency, and usability.

Our ABAC (attribute-based access control) approach provides granular governance controls so that admins can open data to flow to stakeholders without risking privacy or security loopholes. Users can search for and see what data is available for use, while stewards can see who is using data, when, and why.

Regardless of business size or industry, it is fully scalable and allows the organisation to apply compliance and governance rules to all data systematically. This is an entirely new way to approach governance. 
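In general terms, an attribute-based access decision matches the attributes of a user and a resource against a set of policies. The sketch below is a generic illustration of the ABAC pattern, not TMDC's DataOS API; the policy shape and attribute names are invented:

```python
def abac_allow(user_attrs: dict, resource_attrs: dict, policies: list) -> bool:
    """Grant access if any policy's user and resource conditions
    all match the supplied attributes (attribute-based access control)."""
    for policy in policies:
        user_ok = all(user_attrs.get(k) == v
                      for k, v in policy.get("user", {}).items())
        resource_ok = all(resource_attrs.get(k) == v
                          for k, v in policy.get("resource", {}).items())
        if user_ok and resource_ok:
            return True
    return False
```

Because decisions depend on attributes rather than hard-coded role lists, the same policy set scales to new users, teams, and datasets without rewriting rules.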

AN: What features are in place to ensure compliance with data regulations? 

AD: Modern gives companies the flexibility to define and apply governance rules at the most granular levels. Our approach also enables admins and decision-makers to view their data ecosystem as a whole for critical governance questions such as: 

  • Who is using data and how are they using it? 
  • Where is data located and stored? 
  • Which business and risk processes does data impact? 
  • What dependencies exist downstream? 

AN: Another key goal of Modern is to “humanise” data. What does that mean in practice? 

AD: Being human involves intelligence and the ability to use that intelligence to inform and formulate dialogue. DataOS gives data an organised “voice,” enabling users to trust data to inform their decision-making. It acts as a data engineering partner, allowing users to have a real dialogue with data. 

AN: What are some of the other key problems with traditional data platforms that your solution, DataOS, aims to fix? 

AD: Most data solutions look at a database like it’s just a box of data. Most also operate within a data silo, which may help solve one problem but it can’t serve as an end solution.

The challenge for enterprises is they don’t exist on just one database. DataOS accounts for that, offering a unified source of truth and then empowering users to easily act on the data — no matter the source — with outcome-based data engineering. A user can choose the outcome they need and DataOS will build the right process for them while ensuring that the process is compliant with all security and governance policies.  

AN: How do you ensure your platform is accessible for all employees regardless of their technical skills or background? 

AD: DataOS allows data access and use for individuals according to granular rules set by the organisation. How the company manages access often depends on particular roles and responsibilities, as well as their in-house approach to security.  

AN: What data formats are supported by DataOS? 

AD: DataOS deals with heterogeneous formats, such as SQL databases, CSVs, Excel files, and many, many more. It also extracts data and allows enterprises to do more intelligent things with imagery, access essential data easily, and see metadata so they can leverage all data assets across the board. 
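As a toy illustration of ingesting heterogeneous file formats into one tabular representation (this is ordinary pandas usage, not how DataOS works internally):

```python
import pandas as pd

def load_any(path: str) -> pd.DataFrame:
    """Load a file into a DataFrame based on its extension,
    a minimal stand-in for a unified ingestion layer."""
    if path.endswith(".csv"):
        return pd.read_csv(path)
    if path.endswith((".xls", ".xlsx")):
        return pd.read_excel(path)
    if path.endswith(".json"):
        return pd.read_json(path)
    raise ValueError(f"unsupported format: {path}")
```

Once every source lands in the same tabular shape, downstream governance and quality rules can be applied uniformly regardless of origin.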

AN: Bad data is worse than no data. How do you check and report the quality of data? 

AD: With DataOS, organisations define their own rules for what to do with data before making it available. DataOS then automates enforcement of those rules to ensure they’re adhering to the right distributions and applying necessary quality checks. DataOS ensures that you’re always getting the best data possible and that you are always alerted of any data quality issues.

The Modern Data Company (TMDC) is sponsoring this year’s AI & Big Data Expo North America. Swing by their booth at stand #66 to learn more.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London.

AI Expo: Protecting ethical standards in the age of AI https://www.artificialintelligence-news.com/2022/09/21/ai-expo-protecting-ethical-standards-in-the-age-of-ai/ Wed, 21 Sep 2022 11:34:58 +0000

The post AI Expo: Protecting ethical standards in the age of AI appeared first on AI News.

Rapid advancements in AI require keeping high ethical standards, as much for legal reasons as moral.

During a session at this year’s AI & Big Data Expo Europe, a panel of experts provided their views on what businesses need to consider before deploying artificial intelligence.

Here’s a list of the panel’s participants:

  • Moderator: Frans van Bruggen, Policy Officer for AI and FinTech at De Nederlandsche Bank (DNB)
  • Aoibhinn Reddington, Artificial Intelligence Consultant at Deloitte
  • Sabiha Majumder, Model Validator – Innovation & Projects at ABN AMRO Bank N.V.
  • Laura De Boel, Partner at Wilson Sonsini Goodrich & Rosati

The first question called for thoughts about current and upcoming regulations that affect AI deployments. As a lawyer, De Boel kicked things off by giving her take.

De Boel highlights the EU’s upcoming AI Act which builds upon the foundations of similar legislation such as GDPR but extends it for artificial intelligence.

“I think that it makes sense that the EU wants to regulate AI, and I think it makes sense that they are focusing on the highest risk AI systems,” says De Boel. “I just have a few concerns.”

De Boel’s first concern is how complex it will be for lawyers like herself.

“The AI Act creates many different responsibilities for different players. You’ve got providers of AI systems, users of AI systems, importers of AI systems into the EU — they all have responsibilities, and lawyers will have to figure it out,” De Boel explains.

The second concern is how costly this will all be for businesses.

“A concern that I have is that all these responsibilities are going to be burdensome, a lot of red tape for companies. That’s going to be costly — costly for SMEs, and costly for startups.”

Similar concerns were raised about GDPR. Critics argue that overreaching regulation drives innovation, investment, and jobs out of the Eurozone and leaves countries like the USA and China to lead the way.

Peter Wright, Solicitor and MD of Digital Law UK, once told AI News about GDPR: “You’ve got your Silicon Valley startup that can access large amounts of money from investors, access specialist knowledge in the field, and will not be fighting with one arm tied behind its back like a competitor in Europe.”

The concerns raised by De Boel echo Wright's, and it's true that the AI Act will have a greater impact on startups and smaller companies, which already face an uphill battle against established industry titans.

De Boel’s final concern on the topic is about enforcement and how the AI Act goes even further than GDPR’s already strict penalties for breaches.

“The AI Act really copies the enforcement of GDPR but sets even higher fines of 30 million euros or six percent of annual turnover. So it's really high fines,” comments De Boel.

“And we see with GDPR that when you give these types of powers, it is used.”

Outside of Europe, different laws apply. In the US, rules such as those around biometric recognition can vary greatly from state to state. China, meanwhile, recently introduced a law that requires companies to give consumers the option to opt out of things like personalised advertising.

Keeping up with all the ever-changing laws around the world that may impact your AI deployments is going to be a difficult task, but a failure to do so could result in severe penalties.

The financial sector is already subject to very strict regulations and has used statistical models for decades for things such as lending. The industry is now increasingly using AI for decision-making, which brings with it both great benefits and substantial risks.

“The EU requires auditing of all high-risk AI systems in all sectors, but the problem with external auditing is there could be internal data, decisions, or confidential information which cannot be shared with an external party,” explains Majumder.

Majumder goes on to explain that it's therefore important to have a second line of opinion, one that is internal to the organisation but looks at the model from an independent, risk management perspective.

“So there are three lines of defense: First, when developing the model. Second, we’re assessing independently through risk management. Third, the auditors as the regulators,” Majumder concludes.

Of course, when AI makes the right decisions, everything is great. When it inevitably doesn't, the consequences can be seriously damaging.

The EU is keen to ban AI for “unacceptable risk” purposes that may damage the livelihoods, safety, and rights of people. The three other categories (high risk, limited risk, and minimal/no risk) will be permitted, with decreasing legal obligations as you move down the scale.
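The four tiers can be sketched as a simple lookup. The tier names come from the article; the one-line obligation summaries are illustrative assumptions on our part, not a reading of the legal text:

```python
# Illustrative mapping of the AI Act's four risk tiers to the broad shape
# of obligations attached to each. Summaries are assumptions for the sake
# of the sketch, not legal advice.

OBLIGATIONS = {
    "unacceptable": "prohibited outright",
    "high": "strict obligations: auditing, documentation, oversight",
    "limited": "transparency obligations only",
    "minimal": "no additional legal obligations",
}

def obligations_for(tier: str) -> str:
    """Look up the broad obligations attached to a risk tier."""
    try:
        return OBLIGATIONS[tier]
    except KeyError:
        raise ValueError(f"unknown risk tier: {tier!r}")

print(obligations_for("limited"))  # → transparency obligations only
```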

“We can all agree that transparency is really important, right? Because let me ask you a question: If you apply for some kind of service, and you get denied, what do you want to know? Why am I being denied the service?” says Reddington.

“If you’re denied service by an algorithm who cannot come up with a reason, what is your reaction?”

There’s a growing consensus that XAI (explainable AI) should be used in decision-making so that the reasons for an outcome can always be traced. However, Bruggen makes the point that transparency may not always be a good thing: you may not want to tell a terrorist, or someone accused of a financial crime, exactly why they’ve been denied a loan, for example.

Reddington believes this is why humans should not be taken out of the loop. The industry is far from reaching that level of AI anyway, but even if/when it arrives, there are ethical reasons not to remove human input and oversight entirely.
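As a rough illustration of the traceability the panel is asking for, consider a hypothetical linear credit-scoring model (the feature names and weights here are invented for the sketch): each feature's contribution to the score is simply weight times value, so a denial can always be explained by pointing at the weakest contribution.

```python
# A minimal sketch of an explainable (linear) scoring model. Feature
# names, weights, and the threshold are hypothetical.

WEIGHTS = {"income": 0.4, "utility_bills_paid": 0.35, "mobile_top_ups": 0.25}
THRESHOLD = 0.6

def score_applicant(features: dict) -> tuple[float, dict]:
    """Return the total score and each feature's contribution to it."""
    contributions = {name: WEIGHTS[name] * features[name] for name in WEIGHTS}
    return sum(contributions.values()), contributions

def explain_decision(features: dict) -> str:
    """Approve or deny, and for a denial name the weakest contribution."""
    total, contributions = score_applicant(features)
    if total >= THRESHOLD:
        return "approved"
    weakest = min(contributions, key=contributions.get)
    return f"denied: low {weakest} (contributed {contributions[weakest]:.2f})"

print(explain_decision(
    {"income": 0.9, "utility_bills_paid": 0.2, "mobile_top_ups": 0.5}
))  # → denied: low utility_bills_paid (contributed 0.07)
```

Because every contribution is inspectable, the "why was I denied?" question Reddington poses always has an answer, which is precisely what opaque models struggle to provide.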

However, AI can also increase fairness.

Majumder gives an example from her field of expertise, finance, where historical data is often used for decisions such as credit. People's situations change over time, yet they can be stuck struggling to get credit because decisions are based on that historical data.

“Instead of using historical credit rating as input, we can use new kinds of data like mobile data, utility bills, or education, and AI has made it possible for us,” explains Majumder.

Of course, using such relatively small datasets poses its own problems.

The panel offered some fascinating insights on ethics in AI and the current and future regulatory environment. As with the AI industry generally, it’s rapidly advancing and hard to keep up with but critical to do so.

You can find out more about upcoming events in the global AI & Big Data Expo series here.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post AI Expo: Protecting ethical standards in the age of AI appeared first on AI News.
