chatgpt Archives - AI News
https://www.artificialintelligence-news.com/tag/chatgpt/

Jaromir Dzialo, Exfluency: How companies can benefit from LLMs
Fri, 20 Oct 2023
https://www.artificialintelligence-news.com/2023/10/20/jaromir-dzialo-exfluency-how-companies-can-benefit-from-llms/

The post Jaromir Dzialo, Exfluency: How companies can benefit from LLMs appeared first on AI News.

Can you tell us a little bit about Exfluency and what the company does?

Exfluency is a tech company providing hybrid intelligence solutions for multilingual communication. By harnessing AI and blockchain technology we provide tech-savvy companies with access to modern language tools. Our goal is to make linguistic assets as precious as any other corporate asset.

What tech trends have you noticed developing in the multilingual communication space?

As in every other walk of life, AI in general and ChatGPT in particular are dominating the agenda. Companies operating in the language space are either panicking or scrambling to play catch-up. The main challenge is the size of the tech deficit in this vertical. Innovation, and especially AI innovation, is not a plug-in.

What are some of the benefits of using LLMs?

Off-the-shelf LLMs (ChatGPT, Bard, etc.) have a quick-fix attraction. Well-formulated answers appear on your screen as if by magic. One cannot fail to be impressed.

The true benefits of LLMs will be realised by the players who can provide immutable data with which to feed the models. They are what we feed them.

What do LLMs rely on when learning language?

Overall, LLMs learn language by analysing vast amounts of text data, understanding patterns and relationships, and using statistical methods to generate contextually appropriate responses. Their ability to generalise from data and generate coherent text makes them versatile tools for various language-related tasks.

Large Language Models (LLMs) like GPT-4 rely on a combination of data, pattern recognition, and statistical relationships to learn language. Here are the key components they rely on:

  1. Data: LLMs are trained on vast amounts of text data from the internet. This data includes a wide range of sources, such as books, articles, websites, and more. The diverse nature of the data helps the model learn a wide variety of language patterns, styles, and topics.
  2. Patterns and Relationships: LLMs learn language by identifying patterns and relationships within the data. They analyse the co-occurrence of words, phrases, and sentences to understand how they fit together grammatically and semantically.
  3. Statistical Learning: LLMs use statistical techniques to learn the probabilities of word sequences. They estimate the likelihood of a word appearing given the previous words in a sentence. This enables them to generate coherent and contextually relevant text.
  4. Contextual Information: LLMs focus on contextual understanding. They consider not only the preceding words but also the entire context of a sentence or passage. This contextual information helps them disambiguate words with multiple meanings and produce more accurate and contextually appropriate responses.
  5. Attention Mechanisms: Many LLMs, including GPT-4, employ attention mechanisms. These mechanisms allow the model to weigh the importance of different words in a sentence based on the context. This helps the model focus on relevant information while generating responses.
  6. Transfer Learning: LLMs use a technique called transfer learning. They are pretrained on a large dataset and then fine-tuned on specific tasks. This allows the model to leverage its broad language knowledge from pretraining while adapting to perform specialised tasks like translation, summarisation, or conversation.
  7. Encoder-Decoder Architecture: In certain tasks like translation or summarisation, LLMs use an encoder-decoder architecture. The encoder processes the input text and converts it into a context-rich representation, which the decoder then uses to generate the output text in the desired language or format.
  8. Feedback Loop: LLMs can learn from user interactions. When a user provides corrections or feedback on generated text, the model can adjust its responses based on that feedback over time, improving its performance.
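The statistical learning described in point 3 can be illustrated in miniature. The sketch below builds a bigram model over a toy corpus. This is a drastic simplification (real LLMs train neural networks over subword tokens rather than counting word pairs), but the principle of estimating the likelihood of the next word from observed data is the same; the corpus and word choices here are purely illustrative.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the web-scale text an LLM trains on.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count how often each word follows each preceding word.
follower_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follower_counts[prev][nxt] += 1

def next_word_probs(prev):
    """Estimated probability of each word given the previous word."""
    counts = follower_counts[prev]
    total = sum(counts.values())
    return {word: count / total for word, count in counts.items()}

# "the" is followed once each by cat, mat, dog, rug -> 0.25 apiece.
print(next_word_probs("the"))
print(next_word_probs("sat"))
```

Generating text then amounts to repeatedly sampling from these conditional distributions; a neural LLM does the same thing, but conditions on the entire preceding context rather than a single word.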

What are some of the challenges of using LLMs?

A fundamental issue, present ever since we started giving away data to Google, Facebook and the like, is that “we” are the product. The big players are earning untold billions on our rush to feed their apps with our data. ChatGPT, for example, is enjoying the fastest-growing onboarding in history. Just think how Microsoft has benefitted from the millions of prompts people have already thrown at it.

Open LLMs hallucinate, and because their answers are so well formulated, one can easily be duped into believing what they say. To make matters worse, they provide no references or links to show where their answers were sourced.

How can these challenges be overcome?

LLMs are what we feed them. Blockchain technology allows us to create an immutable audit trail and, with it, immutable, clean data. There is no need to trawl the internet. In this manner we are in complete control of what data goes in, can keep it confidential, and can support it with a wealth of useful metadata. It can also be multilingual!

Secondly, as this data is stored in our databases, we can also provide the necessary source links. If you can’t quite believe the answer to your prompt, open the source data directly to see who wrote it, when, in which language and which context.
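As a rough illustration of how an immutable audit trail can back each answer with verifiable provenance, the sketch below hash-chains records so that any later alteration is detectable. This is a hypothetical toy, not Exfluency's actual implementation; the record fields (author, language) are assumptions chosen to mirror the source-link idea above.

```python
import hashlib
import json

def add_record(chain, text, author, language):
    """Append a record whose hash covers both its content and the
    previous record's hash, so any later tampering is detectable."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {"text": text, "author": author, "language": language,
              "prev_hash": prev_hash}
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    chain.append(record)
    return record

def verify(chain):
    """Recompute every hash; returns False if any record was altered."""
    prev_hash = "0" * 64
    for record in chain:
        body = {k: v for k, v in record.items() if k != "hash"}
        if body["prev_hash"] != prev_hash:
            return False
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != record["hash"]:
            return False
        prev_hash = record["hash"]
    return True

chain = []
add_record(chain, "Bonjour le monde", "alice", "fr")
add_record(chain, "Hello world", "bob", "en")
print(verify(chain))  # True
chain[0]["text"] = "tampered"
print(verify(chain))  # False
```

Because each record stores who wrote what, when and in which language, a retrieval system built on such a chain can expose exactly the source links described above.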

What advice would you give to companies that want to utilise private, anonymised LLMs for multilingual communication?

Make sure your data is immutable, multilingual, of a high quality – and stored for your eyes only. LLMs then become a true game changer.

What do you think the future holds for multilingual communication?

As in many other walks of life, language will embrace forms of hybrid intelligence. For example, in the Exfluency ecosystem, the AI-driven workflow takes care of 90% of the translation – our fantastic bilingual subject matter experts then only need to focus on the final 10%. This balance will change over time – AI will take an ever-increasing proportion of the workload. But the human input will remain crucial. The concept is encapsulated in our strapline: Powered by technology, perfected by people.

What plans does Exfluency have for the coming year?

Lots! We aim to roll out the tech to new verticals and build communities of SMEs to serve them. There is also great interest in our Knowledge Mining app, designed to leverage the information hidden away in the millions of linguistic assets. 2024 is going to be exciting!

  • Jaromir Dzialo is the co-founder and CTO of Exfluency, which offers affordable AI-powered language and security solutions with global talent networks for organisations of all sizes.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

How information retrieval is being revolutionised with RAG technology
Mon, 02 Oct 2023
https://www.artificialintelligence-news.com/2023/10/02/how-information-retrieval-is-being-revolutionised-with-rag-technology/

The post How information retrieval is being revolutionised with RAG technology appeared first on AI News.

In an era where digital data proliferates at an unprecedented pace, finding the right information amidst the digital deluge is akin to navigating a complex maze. Traditional enterprise search engines, while powerful, often inundate us with a barrage of results, making it challenging to discern the relevant from the irrelevant. However, amidst this vast expanse of digital information, a revolutionary technology has emerged, promising to transform the way we interact with data in the enterprise. Enter the power of Retrieval-Augmented Generation (RAG) to redefine our relationship with information.

Although new technologies like OpenAI’s ChatGPT and other language models such as Bard are impressive, they come with certain drawbacks for business users: the risk of generating inaccurate information, a lack of proper citation, potential copyright infringements, and a scarcity of reliable information in the business domain. The challenge lies not only in finding information but in finding the right information. To make generative AI effective in the business world, we must address these concerns; that is the focal point of RAG.

The digital challenge: A sea of information

At the core of platforms like Microsoft Copilot and Lucy is the transformative approach of the Retrieval-Augmented Generation (RAG) model.

Understanding RAG

What precisely is RAG, and how does it work? In simple terms, RAG is a two-step process:

1. Retrieval: Before providing an answer, the system delves into an extensive database, meticulously retrieving pertinent documents or passages. This isn’t a rudimentary matching of keywords; it’s a process that comprehends the intricate context and nuances of the query. RAG systems rely on data owned or licensed by companies, and ensure that enterprise-level access controls are impeccably managed and preserved.

2. Generation: Once the pertinent information is retrieved, it serves as the foundation for generating a coherent and contextually accurate response. This isn’t just about regurgitating data; it’s about crafting a meaningful and informative answer.

By integrating these two critical processes, RAG ensures that the responses delivered are not only precise but also well-informed. It’s akin to having a dedicated team of researchers at your disposal, ready to delve into a vast library, select the most appropriate sources, and present you with a concise and informative summary.
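The two-step process can be sketched in a few lines. In this toy version, retrieval is a bag-of-words cosine similarity and "generation" is simply composing a grounded prompt; a production RAG system would use dense embeddings, an LLM for the generation step, and the access controls described above. The corpus, queries, and function names here are illustrative assumptions.

```python
from collections import Counter
import math

# A miniature stand-in for an enterprise document store.
documents = {
    "doc1": "RAG retrieves relevant documents before generating an answer",
    "doc2": "Blockchain creates an immutable audit trail for data",
    "doc3": "Attention mechanisms weigh the importance of words in context",
}

def vectorise(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query, k=1):
    """Step 1: rank the corpus by similarity to the query."""
    q = vectorise(query)
    ranked = sorted(documents,
                    key=lambda d: cosine(q, vectorise(documents[d])),
                    reverse=True)
    return ranked[:k]

def answer(query):
    """Step 2: ground the response in the retrieved passage. A real
    system would pass this grounded prompt to an LLM; here we just
    return it, sources included."""
    sources = retrieve(query)
    context = " ".join(documents[s] for s in sources)
    return f"Answer based on {sources}: {context}"

print(answer("how does RAG retrieve documents"))
```

Note how the retrieved source identifiers travel with the answer, which is exactly what makes RAG responses auditable rather than free-floating generations.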

Why RAG matters

Leading technology platforms that have embraced RAG – such as Microsoft Copilot for content creation or federated search platforms like Lucy – represent a significant breakthrough for several reasons:

1. Efficiency: Traditional models often demand substantial computational resources, particularly when dealing with extensive datasets. RAG, with its process segmentation, ensures efficiency, even when handling complex queries.

2. Accuracy: By first retrieving relevant data and then generating a response based on that data, RAG guarantees that the answers provided are firmly rooted in credible sources, enhancing accuracy and reliability.

3. Adaptability: RAG’s adaptability shines through as new information is continually added to the database. This ensures that the answers generated by platforms remain up-to-date and relevant.

RAG platforms in action

Picture yourself as a financial analyst seeking insights into market trends. Traditional research methods would require hours, if not days, to comb through reports, articles, and data sets. Lucy, however, simplifies the process – you merely pose your question. Behind the scenes, the RAG model springs into action, retrieving relevant financial documents and promptly generating a comprehensive response, all within seconds.

Similarly, envision a student conducting research on a historical event. Instead of becoming lost in a sea of search results, Lucy, powered by RAG, provides a concise, well-informed response, streamlining the research process and enhancing efficiency.

Taking this one step further, Lucy feeds these answers across a complex data ecosystem to Microsoft Copilot, where new presentations or documentation are created leveraging all of the institutional knowledge an organisation has created or purchased.

The road ahead

The potential applications of RAG are expansive, spanning academia, industry, and everyday inquiries. Beyond its immediate utility, RAG signifies a broader shift in our interaction with information. In an age of information overload, tools like Microsoft Copilot and Lucy, powered by RAG, are not merely conveniences; they are necessities.

Furthermore, as technology continues to evolve, we can anticipate even more sophisticated iterations of the RAG model, promising heightened accuracy, efficiency, and user experience. Working with platforms that have embraced RAG from the outset (even before RAG was a recognised term) will keep your organisation ahead of the curve.

Conclusion

In the digital era, we face both challenges and opportunities. While the sheer volume of information can be overwhelming, technologies like Microsoft Copilot or Lucy, underpinned by the potency of Retrieval-Augmented Generation, offer a promising path forward. This is a testament to technology’s potential not only to manage but also to meaningfully engage with the vast reservoirs of knowledge at our disposal. These aren’t just platforms; they are a glimpse into the future of information retrieval.

Photo by Markus Winkler on Unsplash

OpenAI reveals DALL-E 3 text-to-image model
Thu, 21 Sep 2023
https://www.artificialintelligence-news.com/2023/09/21/openai-reveals-dall-e-3-text-to-image-model/

The post OpenAI reveals DALL-E 3 text-to-image model appeared first on AI News.

OpenAI has announced DALL-E 3, the third iteration of its acclaimed text-to-image model. 

DALL-E 3 promises significant enhancements over its predecessors and introduces seamless integration with ChatGPT.

One of the standout features of DALL-E 3 is its ability to better understand and interpret user intentions when confronted with detailed and lengthy prompts.

Even if a user struggles to articulate their vision precisely, ChatGPT can step in to assist in crafting comprehensive prompts.

DALL-E 3 has been engineered to excel in creating elements that its predecessors and other AI generators have historically struggled with, such as rendering intricate depictions of hands and incorporating text into images.

OpenAI has also implemented robust security measures, ensuring the AI system refrains from generating explicit or offensive content by identifying and ignoring certain keywords in prompts.

Beyond technical advancements, OpenAI has taken steps to mitigate potential legal issues. 

While the current DALL-E version can mimic the styles of living artists, the forthcoming DALL-E 3 has been designed to decline requests to replicate their copyrighted works. Artists will also have the option to submit their original creations through a dedicated form on the OpenAI website, allowing them to request removal if necessary.

OpenAI’s rollout plan for DALL-E 3 involves an initial release to ChatGPT ‘Plus’ and ‘Enterprise’ customers next month. The enhanced image generator will then become available to OpenAI’s research labs and API customers in the autumn.

As OpenAI continues to push the boundaries of AI technology, DALL-E 3 represents a major step forward in text-to-image generation.

(Image Credit: OpenAI)

See also: Stability AI unveils ‘Stable Audio’ model for controllable audio generation

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

OpenAI launches ChatGPT Enterprise to accelerate business operations
Tue, 29 Aug 2023
https://www.artificialintelligence-news.com/2023/08/29/openai-chatgpt-enterprise-accelerate-business-operations/

The post OpenAI launches ChatGPT Enterprise to accelerate business operations appeared first on AI News.

OpenAI has unveiled ChatGPT Enterprise, a version of the AI assistant tailored for businesses seeking advanced capabilities and reliable performance.

The crux of its appeal lies in its enhanced features, including an impressive 32,000-token context window. This upgrade enables ChatGPT Enterprise to process extended pieces of text or hold prolonged conversations, allowing for more nuanced and comprehensive exchanges.
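As a hedged sketch of what a larger context window means in practice, the snippet below shows one common way an application keeps a long conversation within a token budget: dropping the oldest messages first. Token counts are approximated by word count here; real systems use the model's actual tokeniser, and this truncation policy is an assumption for illustration, not OpenAI's implementation.

```python
def approx_tokens(text):
    # Rough heuristic: real tokenisers (e.g. BPE) count differently,
    # but word count serves as a stand-in for a sketch.
    return len(text.split())

def trim_history(messages, budget=32000):
    """Keep the most recent messages that fit within the context
    window, dropping the oldest first."""
    kept, used = [], 0
    for msg in reversed(messages):
        cost = approx_tokens(msg)
        if used + cost > budget:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))

history = ["old " * 10, "recent question about quarterly figures"]
print(trim_history(history, budget=6))
```

A 32,000-token budget simply means far less trimming: whole documents and long-running exchanges survive intact, which is what enables the "more nuanced and comprehensive exchanges" described above.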

One of the most significant leaps forward is the elimination of usage limits. Enterprise users will enjoy unrestricted access to GPT-4 queries that are delivered at accelerated speeds, heralding a new era of streamlined interactions and rapid data analysis.

Jorge Zuniga, Head of Data Systems and Integrations at Asana, said:

“ChatGPT Enterprise has cut down research time by an average of an hour per day, increasing productivity for people on our team. It’s been a powerful tool that has accelerated testing hypotheses and improving our internal systems.”

Security-conscious businesses can rest assured, as ChatGPT Enterprise boasts a robust security framework. Data is encrypted at rest using AES-256 and in transit using TLS 1.2+. Customer prompts and sensitive corporate data also remain untapped for OpenAI model training.

In an era where data security is paramount, ChatGPT Enterprise has obtained SOC 2 compliance—providing some extra confidence in its stringent adherence to security, availability, processing integrity, and privacy standards.

Furthermore, the introduction of an administrative console enables efficient member management, domain verification, and single sign-on (SSO), catering to the complex needs of large-scale deployments.

OpenAI’s blog post touts ChatGPT’s impressive adoption: it has seen uptake in over 80 percent of Fortune 500 companies, and industry titans such as Block, Canva, and PwC are utilising ChatGPT Enterprise to expedite tasks ranging from coding to crafting clearer communications.

In a Deloitte survey of CEOs, 79 percent of chief executives said they believe generative AI will enhance operational efficiencies. Additionally, 52 percent said it will open up growth prospects, while 55 percent acknowledged that they are currently exploring or testing AI solutions.

Another study by Gartner revealed that 45 percent of top-level executives mentioned that exposure to ChatGPT had motivated them to boost their investments in AI. This trend is likely to continue with the introduction of ChatGPT Enterprise.

Claire Trachet, CEO and founder of business advisory Trachet, commented:

“As we saw with the debut of ChatGPT, investor confidence naturally grew with everyone wanting to capitalise on new technology that will inevitably change the way we work on a day-to-day basis. 

This is also coming at a time when the AI arms race is becoming more competitive, and consumers are becoming more familiar with AI technology. As a result, consumers and businesses are becoming more inclined to use and integrate this technology into their lives and businesses.

For startups and smaller businesses, this will act as a way to help them scale up in a more cost-effective way through M&A deals and gain investor interest.”

Amidst the fervour surrounding ChatGPT Enterprise, questions emerge about its potential to transform business processes. Andrej Karpathy of OpenAI believes it may become as essential as spreadsheets.

Danny Wu, Head of AI Products at Canva, said:

“From engineers troubleshooting bugs, to data analysts clustering free-form data, to finance analysts writing tricky spreadsheet formulas—the use cases for ChatGPT Enterprise are plenty.

It’s become a true enabler of productivity, with the dependable security and data privacy controls we need.”

However, it’s crucial to reiterate that GPT-4’s strengths lie more in analysis, explanation, summary, and translation, rather than being an infallible source of facts.

Pricing for ChatGPT Enterprise remains undisclosed. Enterprises looking to get started will have to wait for more information on how much this potentially groundbreaking AI tool will cost them.

(Photo by Sean Pollock on Unsplash)

See also: ChatGPT’s political bias highlighted in study

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

ChatGPT’s political bias highlighted in study
Fri, 18 Aug 2023
https://www.artificialintelligence-news.com/2023/08/18/chatgpt-political-bias-highlighted-study/

The post ChatGPT’s political bias highlighted in study appeared first on AI News.

A study conducted by computer and information science researchers from the UK and Brazil has raised concerns about the objectivity of ChatGPT.

The researchers claim to have discovered substantial political bias in ChatGPT’s responses, leaning towards the left side of the political spectrum.

Published in the journal Public Choice this week, the study – conducted by Fabio Motoki, Valdemar Pinho, and Victor Rodrigues – argues that the presence of political bias in AI-generated content could perpetuate existing biases found in traditional media.

The research highlights the potential impact of such bias on various stakeholders, including policymakers, media outlets, political groups, and educational institutions.

Utilising an empirical approach, the researchers employed a series of questionnaires to gauge ChatGPT’s political orientation. The chatbot was asked to answer political compass questions, capturing its stance on various political issues.

Furthermore, the study examined scenarios where ChatGPT impersonated both an average Democrat and a Republican, revealing the algorithm’s inherent bias towards Democratic-leaning responses.
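The study's comparison logic can be illustrated with made-up numbers. Assuming stance scores on a left (-1) to right (+1) scale, one per political compass question, the bias check asks whether the model's default responses sit closer to its Democrat-impersonation responses than to its Republican-impersonation ones. All values below are hypothetical, not data from the paper.

```python
from statistics import mean

# Hypothetical stance scores, one per political-compass question.
default_answers = [-0.4, -0.2, -0.5, -0.3]
democrat_persona = [-0.5, -0.3, -0.6, -0.4]
republican_persona = [0.4, 0.5, 0.3, 0.6]

def lean(scores):
    """Average stance: negative leans left, positive leans right."""
    return mean(scores)

# If the default lean sits closer to the Democrat-persona lean than
# to the Republican-persona lean, that indicates a left-leaning bias.
closer_to_democrat = (abs(lean(default_answers) - lean(democrat_persona))
                      < abs(lean(default_answers) - lean(republican_persona)))
print(lean(default_answers), closer_to_democrat)
```

Averaging over many repeated questionnaires, as the researchers did, helps separate a systematic lean from the randomness of any single response.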

The study’s findings indicate that ChatGPT’s bias extends beyond the US and is also noticeable in its responses regarding Brazilian and British political contexts. Notably, the research even suggests that this bias is not merely a mechanical artefact but a systematic tendency in the algorithm’s output.

Determining the exact source of ChatGPT’s political bias remains a challenge. The researchers investigated both the training data and the algorithm itself, concluding that both factors likely contribute to the bias. They highlighted the need for future research to delve into disentangling these components for a clearer understanding of the bias’s origins.

OpenAI, the organisation behind ChatGPT, has not yet responded to the study’s findings. This study joins a growing list of concerns surrounding AI technology, including issues related to privacy, education, and identity verification in various sectors.

As the influence of AI-driven tools like ChatGPT continues to expand, experts and stakeholders are grappling with the implications of biased AI-generated content.

This latest study serves as a reminder that vigilance and critical evaluation are necessary to ensure that AI technologies are developed and deployed in a fair and balanced manner, devoid of undue political influence.

(Photo by Priscilla Du Preez on Unsplash)

See also: Study highlights impact of demographics on AI training

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

ChatGPT expands ‘Custom Instructions’ to free users
Thu, 10 Aug 2023
https://www.artificialintelligence-news.com/2023/08/10/chatgpt-expands-custom-instructions-free-users/

The post ChatGPT expands ‘Custom Instructions’ to free users appeared first on AI News.

After initially launching for paid ChatGPT users, “Custom Instructions” are now accessible to users on the free plan.

Custom Instructions empower users to tailor their interactions with ChatGPT according to their unique needs and preferences, making conversations more dynamic and relevant.

Whether a student seeking homework help, an aspiring writer brainstorming ideas, or a curious mind exploring various topics, the AI model can now take into account specific instructions to generate more relevant and personalised responses.

As users set their preferences or requirements using Custom Instructions, ChatGPT will consider these inputs in every subsequent interaction—eliminating the need to repeat instructions. This feature streamlines conversations and fosters a more engaging and productive dialogue with the AI.
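Conceptually, this works like prepending a stored instruction to every request as a system message, so the user never has to repeat it. The sketch below shows only the message assembly; it does not call OpenAI's API, and the instruction text and helper names are illustrative assumptions.

```python
# Stored once by the user; applied to every subsequent conversation.
custom_instructions = (
    "I am a UK-based student. Answer concisely and use British spelling."
)

def build_messages(conversation, user_prompt):
    """Prepend the stored instructions as a system message so they
    apply to every turn without the user repeating them."""
    messages = [{"role": "system", "content": custom_instructions}]
    messages += conversation
    messages.append({"role": "user", "content": user_prompt})
    return messages

msgs = build_messages([], "Summarise the French Revolution")
print(msgs[0]["role"], "->", msgs[-1]["content"])
```

Because the instructions ride along as part of the context on every turn, downstream features such as plugins can also read them, which is how location or preference details reach plugin responses.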

Furthermore, the integration of Custom Instructions augments the utility of ChatGPT’s plugins. By incorporating specific details provided by users – such as location or preferences – the AI can seamlessly interact with plugins to provide more accurate and contextually relevant responses.

OpenAI has adapted safety measures to accommodate the introduction of Custom Instructions. Instructions violating usage policies will be identified and disregarded, in a bid to maintain a secure environment for all users.

As part of its ongoing efforts to enhance the model’s performance, OpenAI may use Custom Instructions to refine ChatGPT’s capabilities. However, the company maintains transparency and control by allowing users to manage their data settings and opt out of this feature if desired.

To embrace the personalisation benefits of Custom Instructions, free plan users can navigate to their account settings and select the option to enable this feature.

Custom Instructions are currently unavailable to users in the EU and UK but OpenAI plans to expand access “soon”.

(Photo by Jonathan Kemper on Unsplash)

See also: OpenAI deploys web crawler in preparation for GPT-5

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Iurii Milovanov, SoftServe: How AI/ML is helping boost innovation and personalisation
Mon, 15 May 2023
https://www.artificialintelligence-news.com/2023/05/15/iurii-milovanov-softserve-how-ai-ml-is-helping-boost-innovation-and-personalisation/

The post Iurii Milovanov, SoftServe: How AI/ML is helping boost innovation and personalisation appeared first on AI News.

]]>
Could you tell us a little bit about SoftServe and what the company does?

Sure. We’re a 30-year-old global IT services and professional services provider. We specialise in using emerging, state-of-the-art technologies, such as artificial intelligence, big data and blockchain, to solve real business problems. We’re highly obsessed with our customers and their problems – not with technologies – although we are technology experts. But we always try to find the best technology that will help our customers get to the point where they want to be.

So we’ve been in the market for quite a while, having originated in Ukraine. But now we have offices all over the globe – US, Latin America, Singapore, Middle East, all over Europe – and we operate in multiple industries. We have some specialised leadership around specific industries, such as retail, financial services, healthcare, energy, oil and gas, and manufacturing. We also work with a lot of digital natives and independent software vendors, helping them adopt this technology in their products, so that they can better serve their customers.

What are the main trends you’ve noticed developing in AI and machine learning?

One of the biggest trends is that, while people used to question whether AI, machine learning and data science were the technologies of the future, that’s no longer the question. This technology is already everywhere. And the vast majority of the innovation that we see right now wouldn’t have been possible without these technologies.

One of the main reasons is that this tech allows us to address and solve some of the problems that we used to consider intractable. Think of natural language, image recognition or code generation, which are not only hard to solve but also hard to define. And approaching these types of problems with our traditional engineering mindset – where we essentially use programming languages – is just impossible. Instead, we leverage the knowledge stored in the vast amounts of data we collect, and use it to find solutions to the problems we care about. This approach is now called machine learning, and it is the most efficient way to address those types of problems nowadays.

But with the amount of data we can now collect, the compute power available in the cloud, the efficiency of training and the algorithms that we’ve developed, we are able to get to the stage where we can get superhuman performance with many tasks that we used to think only humans could perform. We must admit that human intelligence is limited in capacity and ability to process information. And machines can augment our intelligence and help us more efficiently solve problems that our brains were not designed for.

The overall trend that we see now is that machine learning and AI are essentially becoming the industry standard for solving complex problems that require knowledge, computation, perception, reasoning and decision-making. And we see that in many industries, including healthcare, finance and retail.

There are some more specific emerging trends. The topic of my TechEx North America keynote will be generative AI, which many folks might think is something just recently invented, something new, or they may think of it as just ChatGPT. But these technologies have been evolving for a while. And we, as hands-on practitioners in the industry, have been working with them for quite a while.

What has changed now is that, based on the knowledge and experience we’ve collected, we were able to get this tech to a stage where GenAI models are useful. We can use them to solve some real problems across different industries, from concise document summaries to advanced user experiences, logical reasoning and even the generation of unique knowledge. That said, there are still some challenges with reliability, and with understanding the actual potential of these technologies.

How important are AI and machine learning with regards to product innovation?

AI and Machine Learning essentially allow us to address the set of problems that we can’t solve with traditional technology. If you want to innovate, if you want to get the most out of tech, you have to use them. There’s no other choice. It’s a powerful tool for product development, to introduce new features, for improving customer user experiences, for deriving some really deep actionable insights from the data. 

But, at the same time, it’s quite complex technology. There’s quite a lot of expertise involved in applying this tech: training these types of models, evaluating them, deciding which model architecture to use, and so on. Moreover, they’re highly experiment-driven. In traditional software development, we often know in advance what we want to achieve, so we set some specific requirements and then write source code to meet those requirements.

And that’s primarily because, in traditional engineering, it’s the source code that defines the behaviour of our system. With machine learning and artificial intelligence the behaviour is defined by the data, which means that we hardly ever know in advance what the quality of our data is. What’s the predictive power of our data? What kind of data do we need to use? Whether the data that we collected is enough, or whether we need to collect more data. That’s why we always need to experiment first. 

But I think, in some way, we got used to the uncertainty in the process and the outcomes of AI initiatives. The AI industry gave up on the idea that machine learning will be predictable at some point. Instead, we learned how to experiment efficiently, turning our ideas into hypotheses that we can quickly validate via experimentation and rapid prototyping, and evolving the most successful experiments into full-fledged products. That’s essentially what the modern lifecycle of AI/ML products looks like.

It also requires the product teams to adopt a different mindset of constant ideation and experimentation, though. It starts with selecting those ideas and use cases that have the highest potential, the most feasible ones that may have the biggest impact on the business and the product. From there, the team can ideate around potential solutions, quickly prototyping and selecting those that are most successful. That requires experience in identifying the problems that can benefit from AI/ML the most, and agile, iterative processes of validating and scaling the ideas.

How can businesses use that type of technology to improve personalisation?

That’s a good question because, again, there are some problems that are really hard to define, and personalisation is one of them. What makes me or you a person? What contributes to that? Is it our preferences? How do we define our preferences? They might be stochastic, they might be contextual. It’s a highly multi-dimensional problem.

And, although you can try to approach it with more traditional tech, you’ll still be limited in the depth of personalisation you can achieve. The most efficient way is to learn those personal signals and preferences from the data, and use those insights to deliver personalised experiences, personalised marketing, and so on.

Essentially, AI/ML acts as a sort of black box between the signals a user generates and the specific preferences and content that would resonate with that specific user. As of right now, that’s the most efficient way to achieve personalisation.

One other benefit of modern AI/ML is that you can use various different types of data. You can combine clickstream data from your website, collecting information about how users behave on your website. You can collect text data from Twitter or any other sources. You can collect imagery data, and you can use all that information to derive the insights you care about. So the ability to analyse that heterogeneous set of data is another benefit that AI/ML brings into this game.
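The heterogeneous-signal idea described above can be sketched in a few lines of Python. This is a toy illustration only (not any company's actual system, and all names here are made up for the example): it blends one behavioural signal (click counts) with one textual signal (keyword mentions in a user's free text) into a single per-item preference score, then ranks items by it.

```python
# Toy sketch of blending heterogeneous user signals into preference scores.
# Everything here (function name, weights, data) is illustrative, not a real API.
from collections import Counter

def preference_scores(clicks, texts, keywords, text_weight=0.5):
    """Blend click counts with keyword mentions into one score per item.

    clicks:   list of item ids the user clicked (behavioural signal)
    texts:    list of free-text snippets the user wrote (textual signal)
    keywords: mapping of item id -> words that signal interest in it
    """
    scores = Counter(clicks)  # one point per click
    words = Counter(w for t in texts for w in t.lower().split())
    for item, kws in keywords.items():
        # add a weighted count of keyword mentions for each item
        scores[item] += text_weight * sum(words[k] for k in kws)
    return dict(scores)

scores = preference_scores(
    clicks=["laptop", "laptop", "phone"],
    texts=["looking for a lightweight laptop", "laptop battery life"],
    keywords={"laptop": ["laptop"], "phone": ["phone", "mobile"]},
)
ranked = sorted(scores, key=scores.get, reverse=True)  # → ["laptop", "phone"]
```

A production system would of course learn the weights and representations from data rather than hard-coding them, but the shape is the same: disparate signals are mapped into one space where items can be scored per user.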

How do you think machine learning is impacting the metaverse and how are businesses benefiting from that?

There are two different aspects. ‘Metaverse’ is quite an abstract term, and we tend to think of it from two different perspectives. One of them is that you want to replicate your physical assets – part of our physical world – in the metaverse. And, of course, you can try to approach it from a traditional engineering standpoint, but many of the processes that we have are just too complex; it’s really hard to replicate them in a digital world. Think of a modern production line in manufacturing. In order to have a really precise digital twin of some physical asset, you have to be smart and use something that will get your metaverse as close as possible to the physical world. And AI/ML is the way to go – it’s one of the most efficient ways to achieve that.

Another aspect of the metaverse is that since it’s digital, it’s unlimited. Thus, we may also want to have some specific types of assets that are purely digital, that don’t have any representation in the real world. And those assets should have similar qualities and behaviour as the real ones, handling a similar level of complexity. In order to program these smart, purely digital processes or assets, you need AI and ML to make them really intelligent.

Are there any examples of companies that you think have been utilising AI and machine learning well?

There are the three giants – Facebook, Google, Amazon. All of them are essentially key drivers behind the industry, and the vast majority of their products are, in some way, powered by AI/ML. Quite a lot has changed since I started my career but, even when I joined SoftServe around 10 years ago, there was a lot of research going on into AI/ML.

There were some big players using the technology, but the vast majority of the market were just exploring this space. Most of our customers didn’t know anything about it. Some of the first questions they had were ‘can you educate us on this? What is AI/ML? How can we use it?’ 

What has changed now is that almost any company we interact with has already done some AI/ML work, whether they build something internally or they use some AI/ML products. So the perception has changed.

The overall adoption of this technology now is at the scale where you can find some aspects of AI/ML in almost any company.

You may see a company that does a lot of AI/ML in, let’s say, marketing or distribution, but has some old-school legacy technologies in its production site or supply chain. The level of AI/ML adoption may differ across different lines of business. But I think almost everyone is using it now. Even your phone is packed with AI/ML features. So it’s hard to think of a company that doesn’t use any AI/ML right now.

Do you think, in general, companies are using AI and machine learning well? What kind of challenges do they have when they implement it?

That’s a good question. The main challenge of applying these technologies today is not how to be successful with this tech, but rather how to be efficient. With the amount of data that we have now, and data that the companies are collecting, plus the amount of tech that is open source or publicly available – or available as managed services from AWS, from GCP – it’s easy to get some good results.

The question is, how do you decide where to apply this technology? How efficiently can you identify those opportunities, and find the ones that will bring the biggest impact, and can be implemented in the most time-efficient and cost-effective manner? 

Another aspect is how do you quickly turn those ideas into production-grade products? It’s a highly experiment-driven area, and there is a lot of science, but you still need to build reliable software on the research results. 

The key drivers for successful AI adoption are finding the right use cases where you can actually get the desired outcomes in the most efficient way, and turning ideas into full-fledged products. We’ve seen some really innovative companies that had brilliant ideas. They may have built some proofs of concept around their ideas, but they didn’t know how to evolve them or how to build reliable products out of them. At the same time, there are some technically savvy and digitally native companies. They have tonnes of smart engineers, but they don’t have the right expertise and experience in AI/ML technologies. They don’t know how to apply this tech to real business problems, or what low-hanging fruit is available to them. They just struggle with finding the best way to leverage this tech.

What do you think the future holds for AI and machine learning?

I generally try to be more optimistic about the future because there are obviously a lot of fears around AI/ML. And I think that’s quite natural. If you look back in history, it was the same with electricity and any other innovative technologies.

One of the fears that I think does have some merit is that this technology may replace some real jobs. I think that’s a bit of a pessimistic view because history also teaches us that whatever technology we get, we still need that human aspect to it. 

Almost all the technology that we use right now augments our intelligence. It does not replace it. And I think that the future of AI will be used in a cooperative way. If you’ve seen products like GitHub Copilot, the purpose of this product is essentially to assist the developer in writing code. We still can’t use AI to write entire programs. We need a human to guide that AI to our desired outcome. What exactly do we want to achieve? What is our objective? What is our user expectation?

Similarly, maybe this technology will be applied to a broader set of use cases where AI will be assisting us, not replacing us. There is a quote that I wish was mine but I still think it’s a very good way of thinking about the role of AI: if you think that AI will replace you or your job, most likely you’re wrong. It’s the people who will be using AI who will replace you at your job. 

So I think one of the most important skills to learn right now is how to leverage this tech to make your work more efficient. And that should help many people get that competitive advantage in the future.

  • Iurii Milovanov is the director of AI and data science at SoftServe, a technology company specialising in consultancy services and software development. 

The post Iurii Milovanov, SoftServe: How AI/ML is helping boost innovation and personalisation appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2023/05/15/iurii-milovanov-softserve-how-ai-ml-is-helping-boost-innovation-and-personalisation/feed/ 0
OpenAI is not currently training GPT-5 https://www.artificialintelligence-news.com/2023/04/17/openai-is-not-currently-training-gpt-5/ https://www.artificialintelligence-news.com/2023/04/17/openai-is-not-currently-training-gpt-5/#respond Mon, 17 Apr 2023 10:36:35 +0000 https://www.artificialintelligence-news.com/?p=12963 Experts calling for a pause on AI development will be glad to hear that OpenAI isn’t currently training GPT-5. OpenAI CEO Sam Altman spoke remotely at an MIT event and was quizzed about AI by computer scientist and podcaster Lex Fridman. Altman confirmed that OpenAI is not currently developing a fifth version of its Generative... Read more »

The post OpenAI is not currently training GPT-5 appeared first on AI News.

]]>
Experts calling for a pause on AI development will be glad to hear that OpenAI isn’t currently training GPT-5.

OpenAI CEO Sam Altman spoke remotely at an MIT event and was quizzed about AI by computer scientist and podcaster Lex Fridman.

Altman confirmed that OpenAI is not currently developing a fifth version of its Generative Pre-trained Transformer model and is instead focusing on enhancing the capabilities of GPT-4, the latest version.

Altman was asked about the open letter that urged developers to pause training AI models larger than GPT-4 for six months. While he supported the idea of ensuring AI models are safe and aligned with human values, he believed that the letter lacked technical nuance regarding where to pause.

“An earlier version of the letter claims we are training GPT-5 right now. We are not, and won’t for some time. So in that sense, it was sort of silly,” said Altman.

“We are doing things on top of GPT-4 that I think have all sorts of safety issues that we need to address.”

GPT-4 is a significant improvement over its predecessor, GPT-3, which was released in 2020. 

GPT-3 has 175 billion parameters, making it one of the largest language models in existence. OpenAI has not confirmed GPT-4’s exact number of parameters but it’s estimated to be in the region of one trillion.

OpenAI said in a blog post that GPT-4 is “more creative and collaborative than ever before” and “can solve difficult problems with greater accuracy, thanks to its broader general knowledge and problem-solving abilities.”

In a simulated bar exam, GPT-3.5 scored around the bottom 10 percent of test takers. GPT-4, however, scored around the top 10 percent.

OpenAI is one of the leading AI research labs in the world, and its GPT models have been used for a wide range of applications, including language translation, chatbots, and content creation. However, the development of such large language models has raised concerns about their safety and ethical implications.

Altman’s comments suggest that OpenAI is aware of the concerns surrounding its GPT models and is taking steps to address them.

While GPT-5 may not be on the horizon, the continued development of GPT-4 and the creation of other models on top of it will undoubtedly raise further questions about the safety and ethical implications of such AI models.

(Photo by Victor Freitas on Unsplash)

Related: ​​Italy will lift ChatGPT ban if OpenAI fixes privacy issues

The post OpenAI is not currently training GPT-5 appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2023/04/17/openai-is-not-currently-training-gpt-5/feed/ 0
​​Italy will lift ChatGPT ban if OpenAI fixes privacy issues https://www.artificialintelligence-news.com/2023/04/13/italy-lift-chatgpt-ban-openai-fixes-privacy-issues/ https://www.artificialintelligence-news.com/2023/04/13/italy-lift-chatgpt-ban-openai-fixes-privacy-issues/#respond Thu, 13 Apr 2023 15:18:41 +0000 https://www.artificialintelligence-news.com/?p=12944 Italy’s data protection authority has said that it’s willing to lift its ChatGPT ban if OpenAI meets specific conditions. The Guarantor for the Protection of Personal Data (GPDP) announced last month that it was blocking access to OpenAI’s ChatGPT. The move was part of an ongoing investigation into whether the chatbot violated Italy’s data privacy... Read more »

The post ​​Italy will lift ChatGPT ban if OpenAI fixes privacy issues appeared first on AI News.

]]>
Italy’s data protection authority has said that it’s willing to lift its ChatGPT ban if OpenAI meets specific conditions.

The Guarantor for the Protection of Personal Data (GPDP) announced last month that it was blocking access to OpenAI’s ChatGPT. The move was part of an ongoing investigation into whether the chatbot violated Italy’s data privacy laws and the EU’s infamous General Data Protection Regulation (GDPR).

The GPDP was concerned that ChatGPT could recall and emit personal information, such as phone numbers and addresses, from input queries. Additionally, officials were worried that the chatbot could expose minors to inappropriate answers that could potentially be harmful.

The GPDP says it will lift the ban on ChatGPT if its creator, OpenAI, enforces rules protecting minors and users’ personal data by 30th April 2023.

OpenAI has been asked to notify people on its website about how ChatGPT stores and processes their data, and to require users to confirm that they are 18 or older before using the software.

An age verification process will be required when registering new users and children below the age of 13 must be prevented from accessing the software. People aged 13-18 must obtain consent from their parents to use ChatGPT.

The company must also ask for explicit consent to use people’s data to train its AI models and allow anyone – whether they’re a user or not – to request any false personal information generated by ChatGPT to be corrected or deleted altogether.

All of these changes must be implemented by September 30th or the ban will be reinstated.

This move is part of a larger trend of increased scrutiny of AI technologies by regulators around the world. ChatGPT is not the only AI system that has faced regulatory challenges.

Regulators in Canada and France have also launched investigations into whether ChatGPT violates data privacy laws after receiving official complaints. Meanwhile, Spain has urged the EU’s privacy watchdog to launch a deeper investigation into ChatGPT.

The international scrutiny of ChatGPT and similar AI systems highlights the need for developers to be proactive in addressing privacy concerns and implementing safeguards to protect users’ personal data.

(Photo by Levart_Photographer on Unsplash)

Related: AI think tank calls GPT-4 a risk to public safety

The post ​​Italy will lift ChatGPT ban if OpenAI fixes privacy issues appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2023/04/13/italy-lift-chatgpt-ban-openai-fixes-privacy-issues/feed/ 0
Alibaba unveils ChatGPT rival and custom LLMs https://www.artificialintelligence-news.com/2023/04/11/alibaba-unveils-chatgpt-rival-custom-llms/ https://www.artificialintelligence-news.com/2023/04/11/alibaba-unveils-chatgpt-rival-custom-llms/#respond Tue, 11 Apr 2023 12:40:51 +0000 https://www.artificialintelligence-news.com/?p=12910 Chinese tech giant Alibaba has unveiled a ChatGPT rival and the ability to create custom LLMs (Large Language Models) for customers. Alibaba’s ChatGPT rival is called Tongyi Qianwen and will be integrated across the company’s various businesses in the “near future,” but it is yet to give a rollout timeline. “We are at a technological... Read more »

The post Alibaba unveils ChatGPT rival and custom LLMs appeared first on AI News.

]]>
Chinese tech giant Alibaba has unveiled a ChatGPT rival and the ability to create custom LLMs (Large Language Models) for customers.

Alibaba’s ChatGPT rival is called Tongyi Qianwen and will be integrated across the company’s various businesses in the “near future,” but the company has yet to give a rollout timeline.

“We are at a technological watershed moment driven by generative AI and cloud computing, and businesses across all sectors have started to embrace intelligence transformation to stay ahead of the game,” said Daniel Zhang, Chairman and CEO of Alibaba Group and CEO of Alibaba Cloud Intelligence.

“As a leading global cloud computing service provider, Alibaba Cloud is committed to making computing and AI services more accessible and inclusive for enterprises and developers, enabling them to uncover more insights, explore new business models for growth, and create more cutting-edge products and services for society.”

Tongyi Qianwen roughly translates to “seeking an answer by asking a thousand questions” and will support both English and Chinese languages.

Alibaba has stated that the chatbot will first be added to DingTalk, its workplace messaging app. Tongyi Qianwen will be able to perform several tasks at launch, including taking notes in meetings, writing emails, and drafting business proposals.

The chatbot will also be integrated into Tmall Genie, Alibaba’s equivalent of Amazon’s line of Echo smart speakers. That integration will give Alibaba an advantage over Western counterparts such as Google, which have yet to integrate their own equivalents into their smart speakers.

Tongyi Qianwen is powered by an LLM that reportedly consists of ten trillion parameters – significantly more than GPT-4, which is estimated to have around one trillion parameters.

The model will be used as the foundation for a new service by Alibaba that will see the company build custom LLMs for customers. The LLMs will use “customers’ proprietary intelligence and industrial know-how” to build AI-infused apps without developing a model from scratch. A beta version of a Tongyi Qianwen API is already available for Chinese developers.

“Generative AI powered by large language models is ushering in an unprecedented new phase. In this latest AI era, we can create additional value for our customers and broader communities through our resilient public cloud infrastructure and proven AI capabilities,” said Jingren Zhou, CTO of Alibaba Cloud Intelligence.

“We are witnessing a new paradigm of AI development where cloud and AI models play an essential role. By making this paradigm more inclusive, we hope to facilitate businesses from all industries with their intelligence transformation and, ultimately, help boost their business productivity and expand their expertise and capabilities while unlocking more exciting opportunities through innovations.”

Last month, a group of high-profile figures in the technology industry called for the suspension of training powerful AI systems. Twitter CEO Elon Musk and Apple co-founder Steve Wozniak were among those who signed an open letter warning of potential risks and said the race to develop AI systems is out of control.

A report by investment bank Goldman Sachs estimated that AI could replace the equivalent of 300 million full-time jobs. An AI think tank, meanwhile, called GPT-4 a risk to public safety.

Alibaba’s announcements were made at its Cloud Summit, which also featured the debut of three-month trials for its Infrastructure-as-a-Service (IaaS) and PolarDB services. The company is offering a 50 percent discount for its storage-as-a-service offering if users reserve capacity in a specific region for a year.

The company has not yet revealed the cost of using Tongyi Qianwen.

(Image Source: www.alibabagroup.com)

The post Alibaba unveils ChatGPT rival and custom LLMs appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2023/04/11/alibaba-unveils-chatgpt-rival-custom-llms/feed/ 0