AI Chatbots News | Latest Chatbot Developments | AI News

How information retrieval is being revolutionised with RAG technology
2 October 2023

In an era where digital data proliferates at an unprecedented pace, finding the right information amidst the digital deluge is akin to navigating a complex maze. Traditional enterprise search engines, while powerful, often inundate us with a barrage of results, making it challenging to discern the relevant from the irrelevant. Amidst this vast expanse of digital information, however, a technology has emerged that promises to transform the way enterprises interact with data and to redefine our relationship with information: Retrieval-Augmented Generation (RAG).

The digital challenge: A sea of information

The internet, once seen as a source of knowledge for all, has become a complex maze. Traditional search engines, however powerful, often inundate users with a flood of results, making it difficult to find what they are actually searching for. New tools such as OpenAI’s ChatGPT and other language models like Bard are impressive, but they come with drawbacks for business users: the risk of generating inaccurate information, a lack of proper citation, potential copyright infringement, and a scarcity of reliable information in the business domain. The challenge lies not only in finding information but in finding the right information. Making generative AI effective in the business world means addressing these concerns, and that is the focal point of RAG.

At the core of platforms like Microsoft Copilot and Lucy is the transformative approach of the Retrieval-Augmented Generation (RAG) model.

Understanding RAG

What precisely is RAG, and how does it work? In simple terms, RAG is a two-step process:

1. Retrieval: Before providing an answer, the system delves into an extensive database and retrieves the documents or passages pertinent to the query. This isn’t rudimentary keyword matching; it’s a process that comprehends the context and nuances of the query. RAG systems rely on data owned or licensed by the company, and ensure that enterprise-level access controls are managed and preserved throughout.

2. Generation: Once the pertinent information is retrieved, it serves as the foundation for generating a coherent and contextually accurate response. This isn’t just about regurgitating data; it’s about crafting a meaningful and informative answer.

By integrating these two critical processes, RAG ensures that the responses delivered are not only precise but also well-informed. It’s akin to having a dedicated team of researchers at your disposal, ready to delve into a vast library, select the most appropriate sources, and present you with a concise and informative summary.
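
To make the two steps concrete, here is a minimal sketch of a RAG loop in Python. The embedding model, vector store, and language model are represented by placeholder functions (`embed`, `search`, `generate`), and the prompt wording is invented for illustration; this shows the general pattern rather than the implementation behind any particular platform mentioned here.

```python
from dataclasses import dataclass

@dataclass
class Document:
    source: str
    text: str

def embed(text: str) -> list[float]:
    """Placeholder: convert text into an embedding vector with your model of choice."""
    raise NotImplementedError

def search(query_vector: list[float], top_k: int = 3) -> list[Document]:
    """Placeholder: query a vector store of licensed enterprise content,
    respecting the caller's access-control permissions."""
    raise NotImplementedError

def generate(prompt: str) -> str:
    """Placeholder: call a large language model to produce the final answer."""
    raise NotImplementedError

def answer(question: str) -> str:
    # Step 1: Retrieval - find passages relevant to the question.
    passages = search(embed(question))

    # Step 2: Generation - ground the answer in those passages and ask the
    # model to cite its sources rather than rely on parametric memory alone.
    context = "\n\n".join(f"[{doc.source}] {doc.text}" for doc in passages)
    prompt = (
        "Answer the question using only the context below, citing sources.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return generate(prompt)
```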

Why RAG matters

Leading technology platforms that have embraced RAG – such as Microsoft Copilot for content creation or federated search platforms like Lucy – represent a significant breakthrough for several reasons:

1. Efficiency: Traditional models often demand substantial computational resources, particularly when dealing with extensive datasets. RAG, with its process segmentation, ensures efficiency, even when handling complex queries.

2. Accuracy: By first retrieving relevant data and then generating a response based on that data, RAG guarantees that the answers provided are firmly rooted in credible sources, enhancing accuracy and reliability.

3. Adaptability: RAG’s adaptability shines through as new information is continually added to the database. This ensures that the answers generated by platforms remain up-to-date and relevant.

RAG platforms in action

Picture yourself as a financial analyst seeking insights into market trends. Traditional research methods would require hours, if not days, to comb through reports, articles, and data sets. Lucy, however, simplifies the process – you merely pose your question. Behind the scenes, the RAG model springs into action, retrieving relevant financial documents and promptly generating a comprehensive response, all within seconds.

Similarly, envision a student conducting research on a historical event. Instead of becoming lost in a sea of search results, Lucy, powered by RAG, provides a concise, well-informed response, streamlining the research process and enhancing efficiency.

Taking this one step further, Lucy can feed these answers across a complex data ecosystem into Microsoft Copilot, where new presentations or documentation are created leveraging all of the institutional knowledge an organisation has created or purchased.

The road ahead

The potential applications of RAG are expansive, spanning academia, industry, and everyday inquiries. Beyond its immediate utility, RAG signifies a broader shift in our interaction with information. In an age of information overload, tools like Microsoft Copilot and Lucy, powered by RAG, are not merely conveniences; they are necessities.

Furthermore, as the technology continues to evolve, we can anticipate even more sophisticated iterations of the RAG model, promising greater accuracy, efficiency, and a better user experience. Working with platforms that embraced RAG from the outset (before it even had a name) will keep your organisation ahead of the curve.

Conclusion

In the digital era, we face both challenges and opportunities. While the sheer volume of information can be overwhelming, technologies like Microsoft Copilot or Lucy, underpinned by the potency of Retrieval-Augmented Generation, offer a promising path forward. This is a testament to technology’s potential not only to manage but also to meaningfully engage with the vast reservoirs of knowledge at our disposal. These aren’t just platforms; they are a glimpse into the future of information retrieval.

Photo by Markus Winkler on Unsplash

Amazon invests $4B in Anthropic to boost AI capabilities
25 September 2023

Amazon has announced an investment of up to $4 billion into Anthropic, an emerging AI startup renowned for its innovative Claude chatbot.

Anthropic was founded by siblings Dario and Daniela Amodei, who were previously associated with OpenAI. This latest investment signifies Amazon’s strategic move to bolster its presence in the ever-intensifying AI arena.

“We are excited to use AWS’s Trainium chips to develop future foundation models,” said Dario Amodei, co-founder and CEO of Anthropic. “Since announcing our support of Amazon Bedrock in April, Claude has seen significant organic adoption from AWS customers.

“By significantly expanding our partnership, we can unlock new possibilities for organisations of all sizes, as they deploy Anthropic’s safe, state-of-the-art AI systems together with AWS’s leading cloud technology.”

While Amazon’s investment in Anthropic may seem overshadowed by Microsoft’s reported $13 billion commitment to OpenAI, it is a clear indication of Amazon’s ambition in the rapidly-evolving AI landscape. The collaboration between Amazon and Anthropic holds the promise of reshaping the AI sector with innovative developments.

“We have tremendous respect for Anthropic’s team and foundation models, and believe we can help improve many customer experiences, short and long-term, through our deeper collaboration,” said Andy Jassy, CEO of Amazon.

“Customers are quite excited about Amazon Bedrock, AWS’ new managed service that enables companies to use various foundation models to build generative AI applications on top of, as well as AWS Trainium, AWS’ AI training chip, and our collaboration with Anthropic should help customers get even more value from these two capabilities.”

Anthropic’s flagship product, the Claude AI model, distinguishes itself by claiming a higher level of safety compared to its competitors.

Claude and its advanced iteration, Claude 2, are large language model-based chatbots similar in functionality to OpenAI’s ChatGPT and Google’s Bard. They excel in tasks like text translation, code generation, and answering a variety of questions.

What sets Claude apart is its ability to autonomously revise responses, eliminating the need for human moderation. This unique feature positions Claude as a safer and more dependable AI tool, especially in contexts where precise, unbiased information is crucial.
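
To illustrate the general idea of automated self-revision, here is a rough sketch of a critique-and-revise loop. It is only an inference-time approximation of the concept, not Anthropic’s actual training or moderation pipeline, and `call_model` plus the guideline text are hypothetical stand-ins.

```python
GUIDELINES = "Be helpful and honest. Avoid harmful, offensive, or biased content."

def call_model(prompt: str) -> str:
    """Placeholder for a call to the underlying language model."""
    raise NotImplementedError

def self_revised_answer(question: str, rounds: int = 1) -> str:
    answer = call_model(question)
    for _ in range(rounds):
        # Ask the model to critique its own draft against the guidelines...
        critique = call_model(
            f"Guidelines: {GUIDELINES}\n\nQuestion: {question}\n"
            f"Draft answer: {answer}\n\n"
            "List any ways the draft violates the guidelines."
        )
        # ...then rewrite the draft so the critique no longer applies.
        answer = call_model(
            f"Question: {question}\nDraft answer: {answer}\n"
            f"Critique: {critique}\n\n"
            "Rewrite the answer to address the critique."
        )
    return answer
```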

Claude’s capacity to handle larger prompts also makes it particularly suitable for tasks involving extensive business or legal documents, offering a valuable edge in industries reliant on meticulous data analysis.

As part of this strategic investment, Amazon will acquire a minority ownership stake in Anthropic. Amazon is set to integrate Anthropic’s cutting-edge technology into a range of its products, including the Amazon Bedrock service, designed for building AI applications. 

In return, Anthropic will leverage Amazon’s custom-designed chips for the development, training, and deployment of its future AI foundation models. The partnership also solidifies Anthropic’s commitment to Amazon Web Services (AWS) as its primary cloud provider.

In the initial phase, Amazon has committed $1.25 billion to Anthropic, with an option to increase its investment by an additional $2.75 billion. If the full $4 billion investment materialises, it will become the largest publicly-known investment linked to AWS.

Anthropic’s partnership with Amazon comes alongside its existing collaboration with Google, where Google holds approximately a 10 percent stake following a $300 million investment earlier this year. Anthropic has affirmed its intent to maintain this relationship with Google and continue offering its technology through Google Cloud, showcasing its commitment to broadening its reach across the industry.

In a rapidly-advancing landscape, Amazon’s strategic investment in Anthropic underscores its determination to remain at the forefront of AI innovation and sets the stage for exciting future developments.

(Image Credit: Anthropic)

See also: OpenAI reveals DALL-E 3 text-to-image model

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

OpenAI reveals DALL-E 3 text-to-image model
21 September 2023

OpenAI has announced DALL-E 3, the third iteration of its acclaimed text-to-image model. 

DALL-E 3 promises significant enhancements over its predecessors and introduces seamless integration with ChatGPT.

One of the standout features of DALL-E 3 is its ability to better understand and interpret user intentions when confronted with detailed and lengthy prompts.

Even if a user struggles to articulate their vision precisely, ChatGPT can step in to assist in crafting comprehensive prompts.

DALL-E 3 has been engineered to excel in creating elements that its predecessors and other AI generators have historically struggled with, such as rendering intricate depictions of hands and incorporating text into images.

OpenAI has also implemented robust security measures, ensuring the AI system refrains from generating explicit or offensive content by identifying and ignoring certain keywords in prompts.

Beyond technical advancements, OpenAI has taken steps to mitigate potential legal issues. 

While the current DALL-E version can mimic the styles of living artists, the forthcoming DALL-E 3 has been designed to decline requests to replicate their copyrighted works. Artists will also have the option to submit their original creations through a dedicated form on the OpenAI website, allowing them to request removal if necessary.

OpenAI’s rollout plan for DALL-E 3 involves an initial release to ChatGPT ‘Plus’ and ‘Enterprise’ customers next month. The enhanced image generator will then become available to OpenAI’s research labs and API customers in the autumn.

As OpenAI continues to push the boundaries of AI technology, DALL-E 3 represents a major step forward in text-to-image generation.

(Image Credit: OpenAI)

See also: Stability AI unveils ‘Stable Audio’ model for controllable audio generation

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Baidu deploys its ERNIE Bot generative AI to the public
31 August 2023

Chinese tech giant Baidu has announced that its generative AI product ERNIE Bot is now open to the public through various app stores and its website.

ERNIE Bot can generate text, images, and videos based on natural language inputs. It is powered by ERNIE (Enhanced Representation through Knowledge Integration), a powerful deep learning model.

The first version of ERNIE was introduced and open-sourced in 2019 by researchers at Tsinghua University to demonstrate the natural language understanding capabilities of a model that combines both text and knowledge graph data.

Later that year, Baidu released ERNIE 2.0, which became the first model to score higher than 90 on the GLUE benchmark for evaluating natural language understanding systems.

In 2021, Baidu’s researchers posted a paper on ERNIE 3.0 in which they claim the model exceeds human performance on the SuperGLUE natural language benchmark. ERNIE 3.0 set a new top score on SuperGLUE and displaced efforts from Google and Microsoft.

According to Baidu’s CEO Robin Li, opening up ERNIE Bot to the public will enable the company to obtain more human feedback and improve the user experience. He said that ERNIE Bot is a showcase of the four core abilities of generative AI: understanding, generation, reasoning, and memory. He also said that ERNIE Bot can help users with various tasks such as writing, learning, entertainment, and work.

Baidu first unveiled ERNIE Bot in March this year, demonstrating its capabilities in different domains such as literature, art, and science. For example, ERNIE Bot can summarise a sci-fi novel and offer suggestions on how to continue the story in an expanded universe. It can also generate images and videos based on text inputs, such as creating a portrait of a fictional character or a scene from a movie.

Earlier this month, Baidu revealed that ERNIE Bot’s training throughput had increased three-fold since March and that it had achieved new milestones in data analysis and visualisation. ERNIE Bot can now generate results more quickly and handle image inputs as well. For instance, ERNIE Bot can analyse an image of a pie chart and generate a summary of the data in natural language.

Baidu is one of the first Chinese companies to obtain approval from authorities to release generative AI experiences to the public, according to Bloomberg. The report suggests that officials see AI as a “business and political imperative” for China and want to ensure that the technology is used in a responsible and ethical manner.

Beijing is keen on putting guardrails in place to prevent the spread of harmful or illegal content while still enabling Chinese companies to compete with overseas rivals in the field of AI.

Beijing’s AI guardrails

The “guardrails” include the rules published by the Chinese authorities in July 2023 that govern generative AI in China.

China’s rules go substantially beyond current regulations in other parts of the world and aim to ensure that generative AI is used in a responsible and ethical manner. The rules cover various aspects of generative AI, such as content, data, technology, fairness, and licensing.

One notable requirement is that operators of generative AI must ensure that their services adhere to the core values of socialism, while also avoiding content that incites subversion of state power, secession, terrorism, or any actions undermining national unity and social stability.

Generative AI services within China are also prohibited from promoting content that provokes ethnic hatred and discrimination, violence, obscenity, or false and harmful information.

Furthermore, the regulations reveal China’s interest in developing digital public goods for generative AI. The document emphasises the promotion of public training data resource platforms and the collaborative sharing of model-making hardware to enhance utilisation rates. The authorities also aim to encourage the orderly opening of public data classification and the expansion of high-quality public training data resources.

In terms of technology development, the rules stipulate that AI should be developed using secure and proven tools, including chips, software, tools, computing power, and data resources.

Intellectual property rights – an often contentious issue – must be respected when using data for model development, and the consent of individuals must be obtained before incorporating personal information. There is also a focus on improving the quality, authenticity, accuracy, objectivity, and diversity of training data.

To ensure fairness and non-discrimination, developers are required to create algorithms that do not discriminate based on factors such as ethnicity, belief, country, region, gender, age, occupation, or health. Moreover, operators of generative AI must obtain licenses for their services under most circumstances, adding a layer of regulatory oversight.

China’s rules not only have implications for domestic AI operators but also serve as a benchmark for international discussions on AI governance and ethical practices.

(Image Credit: Alpha Photo under CC BY-NC 2.0 license)

See also: OpenAI launches ChatGPT Enterprise to accelerate business operations

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

NCSC: Chatbot ‘prompt injection’ attacks pose growing security risk
30 August 2023

The UK’s National Cyber Security Centre (NCSC) has issued a stark warning about the increasing vulnerability of chatbots to manipulation by hackers, leading to potentially serious real-world consequences.

The alert comes as concerns rise over the practice of “prompt injection” attacks, where individuals deliberately create input or prompts designed to manipulate the behaviour of language models that underpin chatbots.

Chatbots have become integral in various applications such as online banking and shopping due to their capacity to handle simple requests. Large language models (LLMs) – including those powering OpenAI’s ChatGPT and Google’s AI chatbot Bard – have been trained extensively on datasets that enable them to generate human-like responses to user prompts.

The NCSC has highlighted the escalating risks associated with malicious prompt injection, as chatbots often facilitate the exchange of data with third-party applications and services.

“Organisations building services that use LLMs need to be careful, in the same way they would be if they were using a product or code library that was in beta,” the NCSC explained.

“They might not let that product be involved in making transactions on the customer’s behalf, and hopefully wouldn’t fully trust it. Similar caution should apply to LLMs.”

If users input unfamiliar statements or exploit word combinations to override a model’s original script, the model can execute unintended actions. This could potentially lead to the generation of offensive content, unauthorised access to confidential information, or even data breaches.

Oseloka Obiora, CTO at RiverSafe, said: “The race to embrace AI will have disastrous consequences if businesses fail to implement basic necessary due diligence checks. 

“Chatbots have already been proven to be susceptible to manipulation and hijacking for rogue commands, a fact which could lead to a sharp rise in fraud, illegal transactions, and data breaches.”

Microsoft’s release of a new version of its Bing search engine and conversational bot drew attention to these risks.

A Stanford University student, Kevin Liu, successfully employed prompt injection to expose Bing Chat’s initial prompt. Additionally, security researcher Johann Rehberger discovered that ChatGPT could be manipulated to respond to prompts from unintended sources, opening up possibilities for indirect prompt injection vulnerabilities.

The NCSC advises that while prompt injection attacks can be challenging to detect and mitigate, a holistic system design that considers the risks associated with machine learning components can help prevent the exploitation of vulnerabilities.

The NCSC suggests implementing a rules-based system alongside the machine learning model to counteract potentially damaging actions. By fortifying the entire system’s security architecture, it becomes possible to thwart malicious prompt injections.
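
As a rough illustration of that suggestion, the sketch below wraps a hypothetical language model call in simple rules-based checks. The allow-list, the blocked patterns, and `call_llm` are assumptions made for the example, not an NCSC-prescribed implementation; a real deployment would need far richer policies.

```python
import re

# Crude illustrative rules; a production system would use much richer policies.
BLOCKED_OUTPUT_PATTERNS = [
    re.compile(r"begin system prompt", re.IGNORECASE),  # leaked instructions
    re.compile(r"transfer\s+\$?\d+", re.IGNORECASE),    # unauthorised transactions
]

ALLOWED_ACTIONS = {"summarise", "answer_question"}      # explicit allow-list

def call_llm(prompt: str) -> str:
    """Placeholder for the underlying language model call."""
    raise NotImplementedError

def guarded_reply(user_input: str, requested_action: str) -> str:
    # Rule 1: only permit actions the system was designed to perform.
    if requested_action not in ALLOWED_ACTIONS:
        return "Sorry, that action is not permitted."

    reply = call_llm(user_input)

    # Rule 2: screen the model's output before it reaches downstream systems.
    if any(pattern.search(reply) for pattern in BLOCKED_OUTPUT_PATTERNS):
        return "The response was withheld by a safety rule."
    return reply
```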

The NCSC emphasises that mitigating cyberattacks stemming from machine learning vulnerabilities necessitates understanding the techniques used by attackers and prioritising security in the design process.

Jake Moore, Global Cybersecurity Advisor at ESET, commented: “When developing applications with security in mind and understanding the methods attackers use to take advantage of the weaknesses in machine learning algorithms, it’s possible to reduce the impact of cyberattacks stemming from AI and machine learning.

“Unfortunately, speed to launch or cost savings can typically overwrite standard and future-proofing security programming, leaving people and their data at risk of unknown attacks. It is vital that people are aware that what they input into chatbots is not always protected.”

As chatbots continue to play an integral role in various online interactions and transactions, the NCSC’s warning serves as a timely reminder of the imperative to guard against evolving cybersecurity threats.

(Photo by Google DeepMind on Unsplash)

See also: OpenAI launches ChatGPT Enterprise to accelerate business operations

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Cyber Security & Cloud Expo and Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

OpenAI launches ChatGPT Enterprise to accelerate business operations
29 August 2023

OpenAI has unveiled ChatGPT Enterprise, a version of the AI assistant tailored for businesses seeking advanced capabilities and reliable performance.

The crux of its appeal lies in its enhanced features, including an impressive 32,000-token context window. This upgrade enables ChatGPT Enterprise to process extended pieces of text or hold prolonged conversations, allowing for more nuanced and comprehensive exchanges.

One of the most significant leaps forward is the elimination of usage limits. Enterprise users will enjoy unrestricted access to GPT-4 queries that are delivered at accelerated speeds, heralding a new era of streamlined interactions and rapid data analysis.

Jorge Zuniga, Head of Data Systems and Integrations at Asana, said:

“ChatGPT Enterprise has cut down research time by an average of an hour per day, increasing productivity for people on our team. It’s been a powerful tool that has accelerated testing hypotheses and improving our internal systems.”

Security-conscious businesses can rest assured, as ChatGPT Enterprise boasts a robust security framework. Data is encrypted at rest with AES-256 and in transit with TLS 1.2+, and customer prompts and sensitive corporate data are not used to train OpenAI models.

In an era where data security is paramount, ChatGPT Enterprise has obtained SOC 2 compliance—providing some extra confidence in its stringent adherence to security, availability, processing integrity, and privacy standards.

Furthermore, the introduction of an administrative console enables efficient member management, domain verification, and single sign-on (SSO), catering to the complex needs of large-scale deployments.

OpenAI’s blog post touts ChatGPT’s impressive adoption. With over 80 percent uptake in Fortune 500 companies, industry titans such as Block, Canva, and PwC are utilising ChatGPT Enterprise to expedite tasks ranging from coding to crafting clearer communications.

According to a Deloitte survey, 79 percent of chief executives believe generative AI will enhance operational efficiencies, 52 percent think it will open up growth prospects, and 55 percent say they are currently exploring or testing AI solutions.

Another study by Gartner revealed that 45 percent of top-level executives mentioned that exposure to ChatGPT had motivated them to boost their investments in AI. This trend is likely to continue with the introduction of ChatGPT Enterprise.

Claire Trachet, CEO and founder of business advisory Trachet, commented:

“As we saw with the debut of ChatGPT, investor confidence naturally grew with everyone wanting to capitalise on new technology that will inevitably change the way we work on a day-to-day basis. 

This is also coming at a time when the AI arms race is becoming more competitive, and consumers are becoming more familiar with AI technology. As a result, consumers and businesses are becoming more inclined to use and integrate this technology into their lives and businesses.

For startups and smaller businesses, this will act as a way to help them scale up in a more cost-effective way through M&A deals and gain investor interest.”

Amidst the fervour surrounding ChatGPT Enterprise, questions emerge about its potential to transform business processes. Andrej Karpathy of OpenAI believes it may become as essential as spreadsheets.

Danny Wu, Head of AI Products at Canva, said:

“From engineers troubleshooting bugs, to data analysts clustering free-form data, to finance analysts writing tricky spreadsheet formulas—the use cases for ChatGPT Enterprise are plenty.

It’s become a true enabler of productivity, with the dependable security and data privacy controls we need.”

However, it’s crucial to reiterate that GPT-4’s strengths lie more in analysis, explanation, summary, and translation, rather than being an infallible source of facts.

Pricing for ChatGPT Enterprise remains undisclosed. Enterprises looking to get started will have to wait for more information on how much this potentially groundbreaking AI tool will cost them.

(Photo by Sean Pollock on Unsplash)

See also: ChatGPT’s political bias highlighted in study

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

ChatGPT’s political bias highlighted in study
18 August 2023

A study conducted by computer and information science researchers from the UK and Brazil has raised concerns about the objectivity of ChatGPT.

The researchers claim to have discovered substantial political bias in ChatGPT’s responses, leaning towards the left side of the political spectrum.

Published in the journal Public Choice this week, the study – conducted by Fabio Motoki, Valdemar Pinho, and Victor Rodrigues – argues that the presence of political bias in AI-generated content could perpetuate existing biases found in traditional media.

The research highlights the potential impact of such bias on various stakeholders, including policymakers, media outlets, political groups, and educational institutions.

Utilising an empirical approach, the researchers employed a series of questionnaires to gauge ChatGPT’s political orientation. The chatbot was asked to answer political compass questions, capturing its stance on various political issues.

Furthermore, the study examined scenarios where ChatGPT impersonated both an average Democrat and a Republican, revealing the algorithm’s inherent bias towards Democratic-leaning responses.
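
The sketch below shows, in outline, how a probe of this kind can be automated. The statements, personas, scoring scale, and `ask_model` helper are hypothetical stand-ins for illustration, not the instruments used in the published study.

```python
from collections import defaultdict

# Hypothetical political-compass style statements, answered on an agree/disagree scale.
STATEMENTS = [
    "Government regulation of business usually does more harm than good.",
    "A higher minimum wage benefits society overall.",
]

PERSONAS = ["no persona", "an average Democrat", "an average Republican"]

def ask_model(statement: str, persona: str) -> int:
    """Placeholder: ask the chatbot (optionally impersonating a persona) and map
    its answer to a score, e.g. -2 (strongly disagree) to +2 (strongly agree)."""
    raise NotImplementedError

def run_probe(repeats: int = 10) -> dict[str, float]:
    # Repeating each question smooths over run-to-run variance in the answers.
    scores = defaultdict(list)
    for persona in PERSONAS:
        for statement in STATEMENTS:
            for _ in range(repeats):
                scores[persona].append(ask_model(statement, persona))
    # The default responses can then be compared against each impersonated baseline.
    return {persona: sum(vals) / len(vals) for persona, vals in scores.items()}
```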

The study’s findings indicate that ChatGPT’s bias extends beyond the US and is also noticeable in its responses regarding Brazilian and British political contexts. Notably, the research even suggests that this bias is not merely a mechanical result but a deliberate tendency in the algorithm’s output.

Determining the exact source of ChatGPT’s political bias remains a challenge. The researchers investigated both the training data and the algorithm itself, concluding that both factors likely contribute to the bias. They highlighted the need for future research to delve into disentangling these components for a clearer understanding of the bias’s origins.

OpenAI, the organisation behind ChatGPT, has not yet responded to the study’s findings. This study joins a growing list of concerns surrounding AI technology, including issues related to privacy, education, and identity verification in various sectors.

As the influence of AI-driven tools like ChatGPT continues to expand, experts and stakeholders are grappling with the implications of biased AI-generated content.

This latest study serves as a reminder that vigilance and critical evaluation are necessary to ensure that AI technologies are developed and deployed in a fair and balanced manner, devoid of undue political influence.

(Photo by Priscilla Du Preez on Unsplash)

See also: Study highlights impact of demographics on AI training

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

ChatGPT expands ‘Custom Instructions’ to free users
10 August 2023

After initially launching for paid ChatGPT users, “Custom Instructions” are now accessible to users on the free plan.

Custom Instructions empower users to tailor their interactions with ChatGPT according to their unique needs and preferences, making conversations more dynamic and relevant.

Whether a student seeking homework help, an aspiring writer brainstorming ideas, or a curious mind exploring various topics, the AI model can now take into account specific instructions to generate more relevant and personalised responses.

As users set their preferences or requirements using Custom Instructions, ChatGPT will consider these inputs in every subsequent interaction—eliminating the need to repeat instructions. This feature streamlines conversations and fosters a more engaging and productive dialogue with the AI.

Furthermore, the integration of Custom Instructions augments the utility of ChatGPT’s plugins. By incorporating specific details provided by users – such as location or preferences – the AI can seamlessly interact with plugins to provide more accurate and contextually relevant responses.
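
Custom Instructions is a feature of the ChatGPT product rather than an API parameter, but developers building on the API can approximate the behaviour by sending a persistent system message with every request, as in the sketch below. It assumes the 2023-era (pre-1.0) `openai` Python package, and the instruction text is invented for illustration.

```python
import openai  # assumes the pre-1.0 openai Python package and an OPENAI_API_KEY env var

# Stand-in for a user's saved preferences (invented for illustration).
CUSTOM_INSTRUCTIONS = (
    "I am a UK-based physics teacher. Keep answers concise, use SI units, "
    "and suggest one follow-up exercise."
)

def chat(user_message: str, history: list[dict] | None = None) -> str:
    # The saved preferences ride along as a system message on every call,
    # so the user never has to repeat them.
    messages = [{"role": "system", "content": CUSTOM_INSTRUCTIONS}]
    messages += history or []
    messages.append({"role": "user", "content": user_message})

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=messages,
    )
    return response["choices"][0]["message"]["content"]
```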

OpenAI has adapted safety measures to accommodate the introduction of Custom Instructions. Instructions violating usage policies will be identified and disregarded, in a bid to maintain a secure environment for all users.

As part of its ongoing efforts to enhance the model’s performance, OpenAI may use Custom Instructions to refine ChatGPT’s capabilities. However, the company maintains transparency and control by allowing users to manage their data settings and opt out of this feature if desired.

To embrace the personalisation benefits of Custom Instructions, free plan users can navigate to their account settings and select the option to enable this feature.

Custom Instructions are currently unavailable to users in the EU and UK but OpenAI plans to expand access “soon”.

(Photo by Jonathan Kemper on Unsplash)

See also: OpenAI deploys web crawler in preparation for GPT-5

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Meta bets on AI chatbots to retain users
1 August 2023

Meta is planning to release AI chatbots that possess human-like personalities, a move aimed at enhancing user retention efforts.

Insiders familiar with the matter revealed that prototypes of these advanced chatbots have been under development, with the final products capable of engaging in discussions with users on a human level. The diverse range of chatbots will showcase various personalities and are expected to be rolled out as early as next month.

Referred to as “personas” by Meta staff, these chatbots will take on the form of different characters, each embodying a distinct persona. For instance, insiders mentioned that Meta has explored the creation of a chatbot that mimics the speaking style of former US President Abraham Lincoln, as well as another designed to offer travel advice with the laid-back language of a surfer.

While the primary objective of these chatbots will be to offer personalised recommendations and improved search functionality, they are also being positioned as a source of entertainment for users to enjoy. The chatbots are expected to engage users in playful and interactive conversations, a move that could potentially increase user engagement and retention.

However, with such sophisticated AI capabilities, concerns arise about the potential for rule-breaking speech and inaccuracies. In response, sources mentioned that Meta may implement automated checks on the chatbots’ outputs to ensure accuracy and compliance with platform rules.

This strategic development comes at a time when Meta is doubling down on user retention efforts.

During the company’s 2023 second-quarter earnings call on July 26, CEO Mark Zuckerberg highlighted the positive response to the company’s latest product, Threads, which aims to rival X (formerly Twitter).

Zuckerberg expressed satisfaction with the increased number of users returning to Threads daily and confirmed that Meta’s primary focus was on the platform’s user retention.

Meta’s chatbots venture raises concerns about data privacy and security. The company will gain access to a treasure trove of user data, the kind of access that has already led to legal challenges for AI companies such as OpenAI.

Whether these chatbots will revolutionise user experiences and boost Meta’s ailing user retention – or just present new challenges for data privacy – remains to be seen. For now, users and experts alike will be closely monitoring Meta’s next moves.

(Photo by Edge2Edge Media on Unsplash)

See also: Meta launches Llama 2 open-source LLM

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Anthropic launches ChatGPT rival Claude 2
12 July 2023

Anthropic has launched Claude 2, an advanced large language model (LLM) that excels in coding, mathematics, and reasoning tasks.

Claude 2 is designed to simulate conversations with a helpful colleague or personal assistant. The latest version has been fine-tuned to deliver an improved user experience, with enhanced conversational abilities, clearer explanations, reduced production of harmful outputs, and extended memory.

In coding proficiency, Claude 2 outperforms its predecessor and achieves a higher score on the Codex HumanEval Python programming test. Its proficiency in solving grade-school math problems, evaluated through GSM8k, has also seen a notable improvement.

“When it comes to AI coding, devs need fast and reliable access to context about their unique codebase and a powerful LLM with a large context window and strong general reasoning capabilities,” says Quinn Slack, CEO & Co-founder of Sourcegraph.

“The slowest and most frustrating parts of the dev workflow are becoming faster and more enjoyable. Thanks to Claude 2, Cody’s helping more devs build more software that pushes the world forward.”

Claude 2 introduces expanded input and output length capabilities, allowing it to process prompts of up to 100,000 tokens. This enhancement enables the model to analyse lengthy documents such as technical guides or entire books, and generate longer compositions as outputs.
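
As a sketch of how that long context can be used, the example below passes an entire document to Claude 2 in one prompt. It assumes the 2023-era `anthropic` Python SDK (completions-style interface with the `claude-2` model name) and an API key in the environment; the file path is invented for illustration.

```python
# Assumes the 2023-era `anthropic` Python SDK and an ANTHROPIC_API_KEY env var.
from anthropic import Anthropic, HUMAN_PROMPT, AI_PROMPT

client = Anthropic()

# Hypothetical long document: the 100,000-token window means an entire technical
# guide can often be sent in a single prompt instead of being chunked.
with open("technical_guide.txt", encoding="utf-8") as f:
    document = f.read()

completion = client.completions.create(
    model="claude-2",
    max_tokens_to_sample=1024,
    prompt=(
        f"{HUMAN_PROMPT} Here is a long document:\n\n{document}\n\n"
        f"Summarise its key recommendations as bullet points.{AI_PROMPT}"
    ),
)
print(completion.completion)
```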

“We are really happy to be among the first to offer Claude 2 to our customers, bringing enhanced semantics, up-to-date knowledge training, improved reasoning for complex prompts, and the ability to effortlessly remix existing content with a 3X larger context window,” said Greg Larson, VP of Engineering at Jasper.

“We are proud to help our customers stay ahead of the curve through partnerships like this one with Anthropic.”

Anthropic has focused on minimising the generation of harmful or offensive outputs by Claude 2. While measuring such qualities is challenging, an internal evaluation showed that Claude 2 was twice as effective at providing harmless responses compared to its predecessor, Claude 1.3.

Anthropic acknowledges that while Claude 2 can analyse complex works, it is vital to recognise the limitations of language models. Users should exercise caution and not rely on them as factual references. Instead, Claude 2 should be utilised to process data provided by users who are already knowledgeable about the subject matter and can validate the results.

As users leverage Claude 2’s capabilities, it is crucial to understand its limitations and use it responsibly for tasks that align with its strengths, such as information summarisation and organisation.

Users can explore Claude 2 for free here.

(Image Credit: Anthropic)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

https://www.artificialintelligence-news.com/2023/07/12/anthropic-launches-chatgpt-rival-claude-2/feed/ 0