ai & big data expo Archives - AI News
https://www.artificialintelligence-news.com/tag/ai-big-data-expo/

Bob Briski, DEPT®: A dive into the future of AI-powered experiences
https://www.artificialintelligence-news.com/2023/10/25/bob-briski-dept-a-dive-into-future-ai-powered-experiences/
Wed, 25 Oct 2023

The post Bob Briski, DEPT®:  A dive into the future of AI-powered experiences appeared first on AI News.

AI News caught up with Bob Briski, CTO of DEPT®, to discuss the intricate fusion of creativity and technology that promises a new era in digital experiences.

At the core of DEPT®’s approach is the strategic utilisation of large language models. Briski articulated the delicate balance between the ‘pioneering’ and ‘boutique’ ethos encapsulated in the company’s tagline: “pioneering work on a global scale with a boutique culture.”

While ‘pioneering’ and ‘boutique’ evoke innovation and personalised attention, ‘global scale’ signifies broad reach. DEPT® harnesses large language models to disseminate highly targeted, personalised messages to expansive audiences. These models, Briski pointed out, enable DEPT® to understand individuals at massive scale and foster meaningful, individualised interactions.

“The way that we have been using a lot of these large language models is really to deliver really small and targeted messages to a large audience,” says Briski.

However, the integration of AI into various domains – such as retail, sports, education, and healthcare – poses both opportunities and challenges. DEPT® navigates this complexity by leveraging generative AI and large language models trained on diverse datasets, including vast repositories like Wikipedia and the Library of Congress.

Briski emphasised the importance of marrying pre-trained data with DEPT®’s domain expertise to ensure precise contextual responses. This approach guarantees that clients receive accurate and relevant information tailored to their specific sectors.

“The pre-training of these models allows them to really expound upon a bunch of different domains,” explains Briski. “We can be pretty sure that the answer is correct and that we want to either send it back to the client or the consumer or some other system that is sitting in front of the consumer.”

One of DEPT®’s standout achievements is its foray into the web3 space and the metaverse. Briski shared the company’s collaboration with Roblox, a platform synonymous with interactive user experiences, which revolves around empowering users to create and enjoy user-generated content at an unprecedented scale.

DEPT®’s internal project, Prepare to Pioneer, epitomises its commitment to innovation by nurturing embryonic ideas within its ‘Greenhouse’. DEPT® hones concepts to withstand the rigours of the external world, ensuring only the most robust ideas reach their clients.

“We have this internal project called The Greenhouse where we take these seeds of ideas and try to grow them into something that’s tough enough to handle the external world,” says Briski. “The ones that do survive are much more ready to use with our clients.”

While the allure of AI-driven solutions is undeniable, Briski underscored the need for caution. He warns that AI is not inherently transparent and trustworthy and emphasises the imperative of constructing robust foundations for quality assurance.

DEPT® employs automated testing to ensure responses align with expectations. Briski also stressed the importance of setting stringent parameters to guide AI conversations, ensuring alignment with the company’s ethos and desired consumer interactions.

“It’s important to really keep focused on the exact conversation you want to have with your consumer or your customer and put really strict guardrails around how you would like the model to answer those questions,” explains Briski.
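Such guardrails can be made concrete in code. The sketch below is a minimal, hypothetical illustration of the idea – vetting a model’s answer against strict, explicit bounds before it reaches the consumer. The rules, topic labels, and function names are invented for illustration and do not reflect DEPT®’s actual implementation:

```python
# Hypothetical guardrail check: an answer passes only if it stays within
# a length bound and avoids disallowed topics. All rules and names here
# are illustrative assumptions, not DEPT(R)'s system.
BANNED_TOPICS = {"medical advice", "legal advice"}
MAX_LENGTH = 500  # characters

def passes_guardrails(answer, detected_topics):
    """Return True only if the answer stays inside the allowed bounds."""
    if len(answer) > MAX_LENGTH:
        return False  # overly long answers are rejected
    if detected_topics & BANNED_TOPICS:
        return False  # answers touching banned topics are rejected
    return True

print(passes_guardrails("Our store opens at 9am.", {"opening hours"}))  # True
print(passes_guardrails("Take two pills daily.", {"medical advice"}))   # False
```

In practice, topic detection would itself be a classifier; the point is that the conversation is constrained by explicit, testable rules rather than left to the model alone.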

In December, DEPT® is sponsoring AI & Big Data Expo Global and will be in attendance to share its unique insights. Briski is a speaker at the event and will be providing a deep dive into business intelligence (BI), illuminating strategies to enhance responsiveness through large language models.

“I’ll be diving into how we can transform BI to be much more responsive to the business, especially with the help of large language models,” says Briski.

As DEPT® continues to redefine digital paradigms, we look forward to observing how the company’s innovations deliver a new era in AI-powered experiences.

DEPT® is a key sponsor of this year’s AI & Big Data Expo Global on 30 Nov – 1 Dec 2023. Swing by DEPT®’s booth to hear more about AI and LLMs from the company’s experts and watch Briski’s day one presentation.

Piero Molino, Predibase: On low-code machine learning and LLMs
https://www.artificialintelligence-news.com/2023/06/26/piero-molino-predibase-low-code-machine-learning-llms/
Mon, 26 Jun 2023

The post Piero Molino, Predibase: On low-code machine learning and LLMs appeared first on AI News.

AI News sat down with Piero Molino, CEO and co-founder of Predibase, during this year’s AI & Big Data Expo to discuss the importance of low-code in machine learning and trends in LLMs (Large Language Models).

At its core, Predibase is a declarative machine learning platform that aims to streamline the process of developing and deploying machine learning models. The company is on a mission to simplify and democratise machine learning, making it accessible to both expert organisations and developers who are new to the field.

The platform empowers organisations with in-house experts, enabling them to supercharge their capabilities and reduce development times from months to just days. Additionally, it caters to developers who want to integrate machine learning into their products but lack the expertise.

By using Predibase, developers can avoid writing extensive low-level machine learning code and instead work with a simple configuration file – a YAML file – which contains just 10 lines specifying the data schema.
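As a hedged illustration of what such a declarative configuration might look like, the snippet below embeds a 10-line YAML data schema of the kind described; the feature names are invented and the exact schema will differ from Predibase’s:

```python
# A hypothetical declarative model configuration: roughly 10 lines of
# YAML specify the data schema instead of low-level training code.
# Feature names are invented for illustration.
CONFIG = """\
input_features:
  - name: review_text
    type: text
  - name: rating
    type: number
output_features:
  - name: sentiment
    type: category
  - name: churn
    type: binary
"""

def count_spec_lines(yaml_text):
    """Count the non-empty lines in the configuration."""
    return sum(1 for line in yaml_text.splitlines() if line.strip())

print(count_spec_lines(CONFIG))  # 10
```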

Predibase reaches general availability

During the expo, Predibase announced the general availability of its platform.

One of the key features of the platform is its ability to abstract away the complexity of infrastructure provisioning. Users can seamlessly run training, deployment, and inference jobs on a single CPU machine or scale up to 1000 GPU machines with just a few clicks. The platform also facilitates easy integration with various data sources, including data warehouses, databases, and object stores, regardless of the data structure.

“The platform is designed for teams to collaborate on developing models, with each model represented as a configuration that can have multiple versions. You can analyse the differences and performance of the models,” explains Molino.

Once a model meets the required performance criteria, it can be deployed for real-time predictions as a REST endpoint or for batch predictions using SQL-like queries that include prediction capabilities.
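To make the idea of “SQL-like queries that include prediction capabilities” concrete, the toy sketch below registers a PREDICT() function with SQLite. This is purely illustrative: Predibase’s actual query syntax and engine differ, and the “model” here is a hard-coded stand-in:

```python
import sqlite3

def predict(amount):
    # Stand-in for a trained model; a real system would invoke one here.
    return "high" if amount > 100 else "low"

conn = sqlite3.connect(":memory:")
conn.create_function("PREDICT", 1, predict)  # expose the model to SQL
conn.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, 50.0), (2, 250.0)])
# Batch prediction expressed as an ordinary query:
rows = conn.execute("SELECT id, PREDICT(amount) FROM orders").fetchall()
print(rows)  # [(1, 'low'), (2, 'high')]
```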

Importance of low-code in machine learning

The conversation then shifted to the importance of low-code development in machine learning adoption. Molino emphasised that simplifying the process is essential for wider industry adoption and increased return on investment.

By reducing the development time from months to a matter of days, Predibase lowers the entry barrier for organisations to experiment with new use cases and potentially unlock significant value.

“If every project takes months or even years to develop, organisations won’t be incentivised to explore valuable use cases. Lowering the bar is crucial for experimentation, discovery, and increasing return on investment,” says Molino.

“With a low-code approach, development times are reduced to a couple of days, making it easier to try out different ideas and determine their value.”

Trends in LLMs

The discussion also touched on the rising interest in large language models. Molino acknowledged the tremendous power of these models and their ability to transform the way people think about AI and machine learning.

“These models are powerful and revolutionizing the way people think about AI and machine learning. Previously, collecting and labelling data was necessary before training a machine learning model. But now, with APIs, people can query the model directly and obtain predictions, opening up new possibilities,” explains Molino.

However, Molino highlighted some limitations, such as the cost and scalability of per-query pricing models, the relatively slow inference speeds, and concerns about data privacy when using third-party APIs.

In response to these challenges, Predibase is introducing a mechanism that allows customers to deploy their models in a virtual private cloud, ensuring data privacy and providing greater control over the deployment process.

Common mistakes

As more businesses venture into machine learning for the first time, Molino shared his insights into some of the common mistakes they make. He emphasised the importance of understanding the data, the use case, and the business context before diving headfirst into development. 

“One common mistake is having unrealistic expectations and a mismatch between what they expect and what is actually achievable. Some companies jump into machine learning without fully understanding the data or the use case, both technically and from a business perspective,” says Molino.

Predibase addresses this challenge by offering a platform that facilitates hypothesis testing, integrating data understanding and model training to validate the suitability of models for specific tasks. With guardrails in place, even users with less experience can engage in machine learning with confidence.

The general availability launch of Predibase’s platform marks an important milestone in its mission to democratise machine learning. By simplifying the development process, Predibase aims to unlock the full potential of machine learning for organisations and developers alike.

You can watch our full interview with Molino below:

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The event is co-located with Digital Transformation Week.

Steve Frederickson, Lucy.ai: How AI powers a next-gen ‘answer engine’
https://www.artificialintelligence-news.com/2023/05/25/steve-frederickson-lucy-ai-how-powers-next-gen-answer-engine/
Thu, 25 May 2023

The post Steve Frederickson, Lucy.ai: How AI powers a next-gen ‘answer engine’ appeared first on AI News.

In an interview at AI & Big Data Expo with Steve Frederickson, Chief Product Officer at Lucy.ai, we gained valuable insights into how AI is powering a next-gen “answer engine” for enterprises.

Lucy is designed to unlock and harness the vast knowledge residing within a company’s data repositories, regardless of format or source. From SharePoint and Google Drive to Dropbox and third-party tools, Lucy can seamlessly search and connect with all types of content, facilitating efficient knowledge retrieval for employees.

“Lucy 4, our latest version, is very exciting for us. We went through a significant development phase, incorporating feedback from customers who used Lucy 3. We re-envisioned what knowledge discovery means for large companies,” says Frederickson.

The team went back to the basics of what an answer engine should be, prioritising the content itself and the individuals who created it. The ultimate aim was to foster new connections and opportunities for collaboration within the enterprise, breaking down silos and facilitating knowledge-sharing across departments.

When measuring success, Lucy.ai focuses not only on engagement metrics but also on the tangible impact it has on saving employees’ time.

Through interviews with customers, Frederickson has received feedback emphasising how Lucy has become a time-saving tool within their organisations. One notable outcome has been the breaking down of data silos between different departments and fostering a sense of unity and cooperation across the company.

The rise of remote work, particularly in a post-pandemic world, has further amplified the need for knowledge-surfacing solutions like Lucy.

As employees continue to work remotely, maintaining a connection with their company’s knowledge base and colleagues becomes crucial. Frederickson highlighted that employees often resort to reaching out to co-workers directly for information, bypassing the need for traditional search methods.

To address this challenge, the company developed Lucy Synopsis, a feature that allows users to interact with Lucy as if they were conversing with a co-worker on platforms like Microsoft Teams. By asking Lucy questions in a conversational manner, employees can receive precise answers and even have relevant content presented in an easily understandable format.

While surfacing information is essential, not all data within a company should be readily accessible to everyone.

Frederickson addressed this concern by highlighting the robust access controls provided by Lucy. These controls encompass both role-based and attribute-based access, tailored to fit the specific taxonomy and security requirements of each organisation. By aligning with a company’s access levels, Lucy can provide users with answers based on their permissions, ensuring the confidentiality and integrity of sensitive information.
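A minimal sketch of that idea, with invented roles and document fields (Lucy’s real access model is more elaborate, spanning both role-based and attribute-based rules):

```python
# Each document carries the set of roles allowed to see it; answers are
# drawn only from documents visible to the asking user. Illustrative only.
DOCS = [
    {"title": "Holiday policy", "roles": {"employee", "hr"}},
    {"title": "Salary bands", "roles": {"hr"}},
]

def visible_docs(user_roles):
    """Return titles the user may see, i.e. where role sets intersect."""
    return [d["title"] for d in DOCS if d["roles"] & set(user_roles)]

print(visible_docs({"employee"}))  # ['Holiday policy']
print(visible_docs({"hr"}))        # ['Holiday policy', 'Salary bands']
```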

In a market with several answer-surfacing solutions, Lucy aims to stand out by adopting a holistic approach to the search process.

The company redefines search as a comprehensive journey, extending beyond the initial question to encompass the entire knowledge cycle.

“We consider search as an end-to-end journey that goes beyond finding a document. Users may need to identify specific pages, contact document authors for clarification, or contribute contextual information for future reference,” explains Frederickson.

Lucy recognises the importance of these extended interactions and strives to facilitate them seamlessly within the platform. Furthermore, Lucy excels at connecting with various data sources, not limited to internal repositories but also integrating with third-party tools like Confluence and ServiceNow. This versatility allows companies to leverage their existing knowledge repositories while making information accessible through Lucy’s unified interface.

As an agile startup, Lucy.ai embraces the fast-paced nature of the industry. Frederickson emphasised that maintaining a set of core principles is essential to the company’s success.

Empowering individuals with knowledge lies at the heart of Lucy.ai’s mission, and they constantly explore how new tools and developments in generative AI can support that objective. By closely engaging with customers and prospects, Lucy.ai stays attuned to their needs and adapts its feature set to align with their evolving policies and requirements.

Looking ahead, Lucy.ai has ambitious plans for the future.

“We are excited to continue building on the strong foundation of Lucy 4. We aim to foster conversations and connections between departments, using Lucy as a tool to empower people and foster collaboration within the enterprise. We have exciting developments in this area that we look forward to sharing,” says Frederickson.

Lucy.ai’s innovative approach to knowledge discovery and its commitment to empowering individuals within organisations make it a formidable player in the field. As the remote work trend continues and the need for efficient knowledge surfacing grows, Lucy.ai’s comprehensive answer engine offers a unique solution.

By bridging the gap between employees and their company’s collective knowledge, Lucy not only saves time and improves productivity, but also facilitates meaningful connections that drive innovation and collaboration within the enterprise.

You can watch our full interview with Steve Frederickson below:

(Photo by Jack Carter on Unsplash)

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

AI & Big Data Expo: Exploring ethics in AI and the guardrails required
https://www.artificialintelligence-news.com/2022/12/16/ai-big-data-expo-exploring-ethics-in-ai-and-the-guardrails-required/
Fri, 16 Dec 2022

The post AI & Big Data Expo: Exploring ethics in AI and the guardrails required  appeared first on AI News.

The tipping point between acceptability and antipathy when it comes to the ethical implications of artificial intelligence has long been thrashed out. Recently, the lines feel increasingly blurred: AI-generated art and photography, not to mention the possibilities of OpenAI’s ChatGPT, reveal a greater sophistication of the technology. But at what cost?

A recent panel session at the AI & Big Data Expo in London explored these ethical grey areas, from beating inbuilt bias to corporate mechanisms and mitigating the risk of job losses.

James Fletcher leads the responsible application of AI at the BBC. His job is, as he puts it, to ‘make sure what [the BBC] is doing with AI aligns with our values.’ He says that AI’s purpose, within the context of the BBC, is automating decision making. Yet ethics is a serious challenge, and one that is easier to talk about than to act upon – partly down to the pace of change. Fletcher took three months off for parental leave, and the changes upon his return, such as Stable Diffusion, ‘blew his mind [as to] how quickly this technology is progressing.’

“I kind of worry that the train is pulling away a bit in terms of technological advancement, from the effort required in order to solve those difficult problems,” said Fletcher. “This is a socio-technical challenge, and it is the socio part of it that is really hard. We have to engage not just as technologists, but as citizens.” 

Daniel Gagar of PA Consulting, who moderated the session, noted the importance of ‘where the buck stops’ in terms of responsibility, and for more serious consequences such as law enforcement. Priscila Chaves Martinez, director at the Transformation Management Office, was keen to point out inbuilt inequalities which would be difficult to solve.  

“I think it’s a great improvement, the fact we’ve been able to progress from a principled standpoint,” she said. “What concerns me the most is that this wave of principles will be diluted without a basic sense that it applies differently for every community and every country.” In other words, what works in Europe or the US may not apply to the global south. “Everywhere we incorporate humans into the equation, we will get bias,” she added, referring to the socio-technical argument. “So social first, technical afterwards.” 

“There is need for concern and need for having an open dialogue,” commented Elliot Frazier, head of AI infrastructure at the AI for Good Foundation, adding that frameworks and principles need to be introduced into the broader AI community. “At the moment, we’re significantly behind in having standard practices, standard ways of doing risk assessments,” Frazier added.

“I would advocate [that] as a place to start – actually sitting down at the start of any AI project, assessing the potential risks.” Frazier noted that the foundation is looking along these lines with an AI ethics audit programme where organisations can get help on how they construct the correct leading questions of their AI, and to ensure the right risk management is in place. 

For Ghanasham Apte, lead AI developer for behaviour analytics and personalisation at BT Group, it is all about guardrails. “We need to realise that AI is a tool – it is a dangerous tool if you apply it in the wrong way,” said Apte. Yet with steps such as explainable AI, or ensuring bias in the data is taken care of, multiple guardrails are ‘the only way we will overcome this problem,’ Apte added.

Chaves Martinez, to an extent, disagreed. “I don’t think adding more guardrails is sufficient,” she commented. “It’s certainly the right first step, but it’s not sufficient. It’s not a conversation between data scientists and users, or policymakers and big companies; it’s a conversation of the entire ecosystem, and not all the ecosystem is well represented.” 

Guardrails may be a useful step, but Fletcher, to his original point, noted the goalposts continue to shift. “We need to be really conscious of the processes that need to be in place to ensure AI is accountable and contestable; that this is not just a framework where we can tick things off, but ongoing, continual engagement,” said Fletcher. 

“If you think about things like bias, what we think now is not what we thought of it five, 10 years ago. There’s a risk if we take the solutionist approach, we bake a type of bias into AI, then we have problems [and] we would need to re-evaluate our assumptions.” 

Straight from AI & Big Data Expo: Generating business value with AI
https://www.artificialintelligence-news.com/2022/11/15/straight-from-ai-big-data-expo-generating-business-value-with-ai/
Tue, 15 Nov 2022

The post Straight from AI & Big Data Expo: Generating business value with AI appeared first on AI News.

If you ask ten different data practitioners to define AI, you’ll get ten different answers. In its simplest form, AI is software that recognizes and reacts to complex patterns—but the way in which businesses derive value from those patterns can vary drastically. 

In recent years, we’ve seen a number of incredible AI applications in healthcare, manufacturing, finance, and beyond. So, why is it that up to 92% of AI projects still fail to yield business results? 

AI has evolved significantly over the last several decades, making it more challenging than ever for businesses to understand AI clearly and design it successfully. Continue reading to learn more about top AI challenges shared by companies today—and how to solve them.

Four solutions to top AI challenges 

Despite the complexity of AI, there are a number of ways companies can position themselves for success with machine learning models. Here are a few. 

1. Cultivate data and AI literacy

In the 2000s, companies were most focused on digital literacy (think: word processing and spreadsheets). In the 2010s, industries shifted their focus to data literacy—can we acquire the data and can we build models with that data? Today, AI literacy is top-of-mind.

According to Harvard Business Review, fewer than 25% of the workforce would consider themselves data literate. Defined as the ability to assess, understand, and utilize data, data literacy is a skill that directly enables individuals to work with tools like machine learning models.

Cultivating data and AI literacy within your organization, through educational workshops or insightful articles, will significantly improve AI adoption rates and employee trust in AI-based initiatives.  

2. Clearly define your business value

With AI, the path to defining and deriving business value is often unclear. Oftentimes, companies will have the right data, design an adequate model, and identify the level of accuracy the model can achieve, but the team does not consider the actual human or group of humans who will be making decisions based on the model. This is one area where we see a high failure rate.

When developing your AI strategy, be sure to account for how the AI’s recommendations will be interpreted and used by your team. Will your team need a dashboard explaining the results? How else can you ensure your team trusts and accurately uses the information? 

3. Understand the journey to AI is iterative

AI strategy and design can often be broken down into two processes:

  1. Design: building a statistically valid model that can solve your problem. This process often requires experimentation with data and redefined requirements based on revealed constraints.
  2. Develop: building out the solution and putting it into the hands of the end user(s).

One of the most important phases of AI design is building resilience. You will likely encounter instances where data in the real world doesn’t match the training data used to build the model. Or, you may realize decision makers or other end users don’t trust the model enough to use it. Working through these challenges to design a resilient, trustworthy model will result in higher success rates compared to companies that ignore the complexity of the AI process. 

4. Mitigate unintended bias and risk

Risk mitigation and bias prevention must be at the forefront of your AI strategy in order to truly generate business value with AI. Involve diverse humans in your feedback loop, test your AI against unexpected situations, and understand the costs of undetected bias in your solution. 

Reducing the chance of negative bias in your solution protects end users from harm, and cultivates a deeper level of trust between your organization, your solution, and stakeholders. 
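One concrete, if simplistic, way to test for such bias is to compare outcome rates across groups (a demographic-parity check). The sketch below is an invented illustration; real audits use many more metrics and statistical tests:

```python
# Compare approval rates per group; a large gap flags possible bias.
def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + int(ok)
    return {g: approved[g] / totals[g] for g in totals}

rates = approval_rates([("a", True), ("a", True), ("b", True), ("b", False)])
print(rates)  # {'a': 1.0, 'b': 0.5} - a 50-point gap worth investigating
```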

Improve your AI literacy with Trusted AI insights  

Improving your AI literacy—educating yourself and your team—is key to successfully strategizing and designing trustworthy AI. To stay up-to-date on AI news and gather more insights from data scientists, subscribe to Pandata’s monthly email digest: The Voices of Trusted AI.

(Photo by Hunters Race on Unsplash)

Rob Mellor, WhereScape: On data warehouse automation
https://www.artificialintelligence-news.com/2021/12/09/rob-mellor-wherescape-on-data-warehouse-automation/
Thu, 09 Dec 2021

The post Rob Mellor, WhereScape: On data warehouse automation appeared first on AI News.

Leading analysts and organisations have begun recognising data warehouse automation as being key to running a truly data-driven business.

AI News caught up with Rob Mellor, GM & VP, EMEA at WhereScape, to discuss this industry shift.

AI News: Only earlier this year did Gartner really begin recognising data warehouse automation after publishing a paper on the subject. Is this indicative of a shift in how companies view automation?

Rob Mellor: At WhereScape, we feel the increased recent activity from Gartner around data warehouse automation is reflective of an industry shift. Organisations are beginning to realise that automation is really necessary if companies are to be truly data-driven.

By using a tool to automate repetitive and mundane tasks such as hand-coding, developers can be more productive and focus on adding features specific to their unique business requirements. This means their business can react faster to BI trends.

This is obvious to companies who have been data-driven for some time and are enjoying the results. However, we are now seeing Data Automation tools crossing the chasm into the mainstream, and so Gartner has moved to inform those who are considering tools like WhereScape for the first time and help those familiar with automation tools to choose the best one for their needs.

AN: What is the difference between modern data warehouse automation and ETL (extract, transform, load) tools?

RM: ETL tools are typically server-based, data integration solutions for moving and manipulating data from its sources to a target data warehouse. When ETL tools first emerged four decades ago, the servers that databases ran on did not have the computing power of today, so ETL solutions were developed to alleviate the data processing workload. They typically provided additional database and application connectivity and data manipulation functions that were previously limited in database engines.

Instead of using the older ETL method, today some vendors take an ELT approach. With ELT, data transformation happens in the target data warehouse rather than requiring a middle-tier ETL server. This approach takes advantage of today’s database engines that support massively parallel processing (MPP) as well as its availability within cloud-based data platforms such as Snowflake, Amazon Redshift and Microsoft Azure SQL Data Warehouse.
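The ELT pattern described above can be sketched in a few lines. In this illustration, sqlite3 stands in for a cloud warehouse engine such as Snowflake or Redshift, and the table and data are invented; the point is the pattern, not the engine.

```python
import sqlite3

# sqlite3 stands in for a cloud warehouse: land raw data first, then
# transform it with SQL *inside* the target database, rather than in a
# middle-tier ETL server.
conn = sqlite3.connect(":memory:")

# 1. Extract + Load: land the raw source rows untransformed.
conn.execute("CREATE TABLE raw_orders (id INTEGER, amount_pence INTEGER, country TEXT)")
conn.executemany(
    "INSERT INTO raw_orders VALUES (?, ?, ?)",
    [(1, 1250, "gb"), (2, 990, "GB"), (3, 4000, "de")],
)

# 2. Transform: the warehouse engine does the work (ELT), normalising
#    units and casing in a single set-based statement.
conn.execute(
    """
    CREATE TABLE orders AS
    SELECT id,
           amount_pence / 100.0 AS amount_pounds,
           UPPER(country)       AS country
    FROM raw_orders
    """
)

rows = conn.execute("SELECT id, amount_pounds, country FROM orders ORDER BY id").fetchall()
print(rows)  # [(1, 12.5, 'GB'), (2, 9.9, 'GB'), (3, 40.0, 'DE')]
```

Because the transformation runs where the data already sits, it benefits directly from the target platform's parallelism, which is the advantage ELT takes from MPP cloud warehouses.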

While ELT certainly represented a step forward in thinking compared to ETL, both types of data movement solutions still only cover a small portion of the data warehousing lifecycle. This means that organisations must rely on many disparate tools to support everything else involved in designing, developing, deploying, documenting and operating their data warehouses and other data infrastructure.

In comparison to the limited scope of ETL and ELT tools, data infrastructure automation encompasses the entire data warehousing lifecycle. From planning, data discovery and design through development, deployment, operations, change management — and even documentation — automation unifies it all.

AN: What are the main factors driving the adoption of data warehouse automation?

RM: Given the broad reach of Data Automation tools across the data warehousing lifecycle, we hear an array of reasons from companies looking to adopt them. Here are some of the most common reasons.

The small to medium size businesses we speak to typically look for automation tools to allow them to standardise their current data warehouse and scale the business effectively. They might typically start with custom data warehouse solutions whose knowledge is limited to one individual, which makes it hard to democratise the use of data among colleagues, especially non-technical staff.

WhereScape offers a templated, best practice approach for the design and implementation of effective data warehouse solutions, enabling more robust architectures to be built faster. All actions taken are fully documented with full data lineage, which saves many hours of repetitive work. Automation then handles the day-to-day and change management, so it does not take up a large portion of developers’ time.
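The templated, metadata-driven approach can be illustrated with a toy sketch: table definitions are declared once as metadata, and both the DDL and a lineage record are generated from them, so repetitive hand-coding is replaced by repeatable, self-documenting output. This is a sketch of the general idea under invented names, not WhereScape's actual engine.

```python
# Hypothetical metadata: one declaration drives both code generation
# and documentation/lineage.
TABLES = {
    "stg_customer": {
        "source": "crm.customer",
        "columns": {"id": "INTEGER", "name": "TEXT", "created_at": "TEXT"},
    },
}

def generate_ddl(name, spec):
    # Render a CREATE TABLE statement from the column metadata.
    cols = ",\n  ".join(f"{c} {t}" for c, t in spec["columns"].items())
    return f"CREATE TABLE {name} (\n  {cols}\n);"

def generate_lineage(name, spec):
    # Every generated object records where its data came from,
    # which is what makes the output self-documenting.
    return {"target": name, "source": spec["source"], "columns": list(spec["columns"])}

ddl = generate_ddl("stg_customer", TABLES["stg_customer"])
lineage = generate_lineage("stg_customer", TABLES["stg_customer"])
print(ddl)
print(lineage)
```

Changing the metadata and regenerating is then a single repeatable step, which is where the change-management savings described above come from.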

Larger companies want all of the above, but they often look to WhereScape when embarking on a data warehouse modernisation project involving a switch in architecture or database. They want an automation tool to handle the complexity and ensure the new architecture works the first time.

Two big examples we have seen recently are a switch to Data Vault modelling, or a cloud migration project. These complex, large-scale projects can be prone to human error. WhereScape has specific tools and enablement packs for these projects, so while it may be the first time the company has implemented a project like this, the automation tool is fine-tuned in accordance with many previous similar projects. The benefit of this experience ensures the implementation works as it should the first time and so can save many months of work.

An overarching reason to adopt Data Automation tools is a desire to increase developer productivity, handing accurate business insight to those that need it, faster. This increases trust in IT and means the business can be more ambitious in its data-driven projects.

Automation tools also enable agile principles by increasing communication between IT and the business. For example, using a drag and drop GUI to design data infrastructure means that visual prototypes can be produced in minutes, ensuring all requirements have been understood before the build takes place.

Typically, we find data teams look for an automation tool to solve a specific problem, then expand its usage to other areas once they see a leap in productivity and understand what this can mean for the future of their organisation.

(Photo by Tim Mossholder on Unsplash)

WhereScape sponsored this year’s AI & Big Data Expo and shared their invaluable insights during the event. The next events in the series will be held in Santa Clara on 11-12 May 2022, Amsterdam on 20-21 September 2022, and London on 1-2 December 2022.

Brian Flüg, Qubole: On the benefits of data lakes for machine learning https://www.artificialintelligence-news.com/2021/11/22/brian-flug-qubole-benefits-data-lakes-for-machine-learning/ Mon, 22 Nov 2021 17:35:14 +0000

The post Brian Flüg, Qubole: On the benefits of data lakes for machine learning appeared first on AI News.

Data lakes offer a number of advantages for machine learning, but it takes an experienced partner to unlock their full benefit.

AI News caught up with Brian Flüg, Solutions Architect at Qubole, to find out how the company is helping data scientists with their workloads.

What are the advantages of using a data lake for machine learning?

The advantages of using a secure and open data lake for machine learning are numerous. It is simple to deploy and companies can reduce risk while decreasing costs.

Data scientists can build, deploy, and iterate on their models faster with experiment tracking and out-of-the-box integrations for front-end tools such as RStudio, H2O.ai, and DataRobot. They can also benefit from end-to-end workflows anchored by schedulers and Airflow. Managed notebooks also offer serverless (offline) editing.

In addition, developers can achieve higher productivity by skipping steps and building applications with code auto-complete, code compare, code-free visualisations (QVIZ), version control, hands-free dependency management, and easy access to cloud storage and the data catalogue.

You can also automate infrastructure provisioning for machine learning, minimising costs automatically while supporting concurrent user growth without a performance impact. You benefit from near-zero management overhead regardless of the number of users or model versions, and capacity scales up or down automatically to support all workloads at any point in time.
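The core of such an autoscaling policy is a small sizing function: derive the cluster size from current demand, bounded by a floor and a ceiling, so user growth is absorbed without manual provisioning. The thresholds below are invented for illustration and are not Qubole's actual policy.

```python
def desired_workers(concurrent_users, users_per_worker=25,
                    min_workers=2, max_workers=100):
    """Size a cluster from demand, clamped to [min_workers, max_workers]."""
    # Round up so a partial batch of users still gets a worker.
    needed = -(-concurrent_users // users_per_worker)
    return max(min_workers, min(max_workers, needed))

print(desired_workers(10))    # 2   (floor applies: keep a warm minimum)
print(desired_workers(260))   # 11  (ceil(260 / 25))
print(desired_workers(5000))  # 100 (ceiling caps cost)
```

Evaluating this periodically against live metrics gives "scale up or down automatically" with "near-zero management overhead": the operator tunes three numbers instead of provisioning machines.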

How can businesses use automation to limit the impact of disruptive and rapidly-evolving situations like we saw over the pandemic?

As enterprises look to navigate fast-changing conditions brought about by the pandemic, data leaders are being tasked with harnessing massive volumes of data across the organisation and leveraging streaming analytics, machine learning, and artificial intelligence to help organisations make smarter decisions and adapt to the new surroundings. It has been more crucial than ever to unlock the potential of data lakes and automation for unmatched success.

Through conversations with our customers and partners, it was clear that data lakes support the analytics capabilities that businesses needed to see them through this crisis, including real-time data pipelines, machine learning, and artificial intelligence. Data lakes are at the cutting edge of analytics and data science today, and optimising them is critical to business success.

What new capabilities does Qubole provide for data scientists?

Qubole caters to data scientists wherever their skills and experience lie. Regardless of whether you are a rookie or a machine learning wizard, Qubole has the capabilities to support your activities. These capabilities include machine learning, artificial intelligence, analytical automation, streaming, and ad-hoc analytics.

Qubole provides your data science teams with the best tool for every task in the data science life cycle — in a single, cloud-native platform. Data scientists can prepare data with end-to-end visibility of the entire pipeline. They can explore, query, and visualise data through Qubole’s SQL Workbench. Integrations with JDBC and ODBC connectors to the BI tool of their choice allow data scientists to explore and visualise data.

Another capability is building and training models with rapid prototyping, flawless execution, and broad support for machine learning ecosystems such as Spark, MLlib, MXNet, TensorFlow, Keras, scikit-learn, Python, and R, with an integrated notebook service for ease of use and collaboration.

Finally, data scientists can collaborate to deploy trained models, schedule production jobs (monitoring end-to-end data science workflows with complete visibility into the data pipeline), and take advantage of Qubole’s hosted Airflow service to create production workflows.
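Airflow's core idea is that a production workflow is a directed acyclic graph of tasks, each run only after its upstream dependencies succeed. That ordering can be sketched with the standard library; the three-step pipeline below is invented for illustration, not Qubole's or Airflow's actual implementation.

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# task -> set of tasks it depends on (a hypothetical data science DAG)
pipeline = {
    "prepare_data": set(),
    "train_model": {"prepare_data"},
    "deploy_model": {"train_model"},
    "report": {"train_model"},
}

# static_order() yields every task only after all of its dependencies,
# which is exactly the guarantee a scheduler like Airflow enforces.
order = list(TopologicalSorter(pipeline).static_order())
print(order)  # e.g. ['prepare_data', 'train_model', 'deploy_model', 'report']
```

A real orchestrator adds retries, scheduling, and monitoring on top, but the dependency-ordered execution shown here is the contract that makes end-to-end pipeline visibility possible.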

How can a streaming data pipeline unlock the benefits of real-time data for machine learning?

You can build streaming data pipelines to capture the benefits of real-time data for machine learning and ad-hoc analytics. Qubole Pipelines Service is a stream processing service that addresses real-time ingestion, decisioning, machine learning, and reporting use cases.

A streaming data pipeline will enable an accelerated development cycle. You can develop a pipeline within minutes without writing a single line of code and deploy it instantly, test-run and debug new pipelines to check connectivity and business logic with a built-in test framework, and experience near-zero downtime and no data loss.

In addition, simplified data lake operation provides data management and better data consistency. Invalid records and schema mismatches can be detected by setting alerts, and data loss is prevented by cleansing and reprocessing those records, which are stored in a configurable cloud storage location. Importantly, comprehensive operational management and deep insights help companies keep costs in check.
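The invalid-record handling described above follows a common streaming pattern: check each incoming record against an expected schema, let valid records flow on, and route mismatches to a "dead letter" store for cleansing and reprocessing instead of dropping them. The schema and records below are invented; this sketches the general idea, not Qubole Pipelines Service's actual mechanics.

```python
# Hypothetical expected schema: field name -> required type.
EXPECTED = {"id": int, "amount": float}

def process(stream):
    """Split a record stream into clean records and dead letters."""
    clean, dead_letters = [], []
    for record in stream:
        ok = (set(record) == set(EXPECTED)
              and all(isinstance(record[k], t) for k, t in EXPECTED.items()))
        # Dead letters are kept, not dropped, so they can be cleansed
        # and replayed later from cloud storage.
        (clean if ok else dead_letters).append(record)
    return clean, dead_letters

clean, dead = process([
    {"id": 1, "amount": 9.99},
    {"id": "2", "amount": 5.0},   # schema mismatch: id is a string
    {"id": 3},                    # missing field
])
print(len(clean), len(dead))  # 1 2
```

In a production pipeline the dead-letter list would be a durable store (for example a cloud storage bucket) with alerting on its growth rate, which is how schema drift gets noticed before it silently corrupts downstream tables.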

(Photo by Tim Foster on Unsplash)

Qubole will be sharing their invaluable insights during this year’s AI & Big Data Expo Europe, which runs from 23-24 November 2021. Qubole’s booth number is 309. Brian Flüg will also be speaking at the virtual edition of this year’s event on 1 December 2021. You can find out more about his sessions and register to attend here.

Stefano Somenzi, Athics: On no-code AI and deploying conversational bots https://www.artificialintelligence-news.com/2021/11/12/stefano-somenzi-athics-no-code-ai-deploying-conversational-bots/ Fri, 12 Nov 2021 16:47:39 +0000

The post Stefano Somenzi, Athics: On no-code AI and deploying conversational bots appeared first on AI News.

No-code AI solutions are helping more businesses to get started on their AI journeys than ever. Athics, through its Crafter.ai platform for deploying conversational bots, knows a thing or two about the topic.

AI News caught up with Stefano Somenzi, CTO at Athics, to get his thoughts on no-code AI and the development of virtual agents.

AI News: Do you think “no-code” will help more businesses to begin their AI journeys?

Stefano Somenzi: The real advantage of “no code” is not just the reduced effort required for businesses to get things done; it is also about changing the role of the user who will build the AI solution. In our case, that is a conversational AI agent.

“No code” means that the AI solution is built not by a data scientist but by the process owner. The process owner is best placed to know what the AI solution should deliver and how. But if coding is required, the process owner must translate his or her requirements into a data scientist’s language.

This requires much more time and is affected by the “lost in translation” syndrome that hinders many IT projects. That’s why “no code” will play a major role in helping companies approach AI.

AN: Research from PwC found that 71 percent of US consumers would rather interact with a human than a chatbot or some other automated process. How can businesses be confident that bots created through your Crafter.ai platform will improve the customer experience rather than worsen it?

SS: Even the most advanced conversational AI agents, like ours, are not suited to replace a direct consumer-to-human interaction if what the consumer is looking for is the empathy that today only a human is able to show during a conversation.

At the same time, inefficiencies, errors, and lack of speed are among the most frequent causes of consumer dissatisfaction and hamper customer service performance.

Advanced conversational AI agents are the right tool to reduce these inefficiencies and errors while delivering strong customer service performance at light speed.

AN: What kind of real-time feedback is provided to your clients about their customers’ behaviour?

SS: Recognising the importance of a hybrid environment, where human and machine interaction are wisely mixed to leverage the best of both worlds, our Crafter.ai platform has been designed from the ground up with a module that manages the handover of the conversations between the bot and the call centre agents.

During a conversation, a platform user – with the right authorisation levels – can access an insights dashboard to check the key performance indicators that have been identified for the bot.

This is also true during the handover when agents and their supervisors receive real-time information on the customer behaviour during the company site navigation. Such information includes – and is not limited to – visited pages, form field contents, and clicked CTAs, and can be complemented with data collected from the company CRM.

AN: Europe is home to some of the strictest data regulations in the world. As a European organisation, do you think such regulations are too strict, not strict enough, or about right?

SS: We think that any company that wants to gain the trust of its customers should do its best to go beyond strict regulatory requirements.

AN: As conversational AIs progress to human-like levels, should it always be made clear that a person is speaking to an AI bot?

SS: Yes, a bot should always make it clear that it is not human. Ultimately, this can help people appreciate just how well bots can perform.

AN: What’s next for Athics?

SS: We have a solid roadmap for Crafter.ai with many new features and improvements that we bring every three months to our platform.

Our sole focus is on advanced conversational AI agents. We are currently working to add more and more domain-specific capabilities to our bots.

Advanced profiling is a great area of interest where, thanks to our collaborations with universities and international research centres, we expect to deliver truly innovative solutions to our customers.

AN: Athics is sponsoring and exhibiting at this year’s AI & Big Data Expo Europe. What can attendees expect from your presence at the event? 

SS: Conversational AI agents allow businesses to obtain a balance between optimising resources and giving a top-class customer experience. Although there is no doubt regarding the benefits of adopting virtual agents, the successful integration across a company’s conversational streams needs to be correctly assessed, planned, and executed in order to leverage the full potential.

Athics will be at stand number 280 to welcome attending companies and give an overview of the advantages of integrating a conversational agent, explain how to choose the right product, and how to create a conversational vision that can scale and address organisational goals.

(Photo by Jason Leung on Unsplash)

Athics will be sharing their invaluable insights during this year’s AI & Big Data Expo Global which runs from 23-24 November 2021. Athics’ booth number is 280. Find out more about the event here.
