Generative AI Archives - AI News

UK paper highlights AI risks ahead of global Safety Summit (26 October 2023)

The UK Government has unveiled a comprehensive paper addressing the capabilities and risks associated with frontier AI.

UK Prime Minister Rishi Sunak spoke today about the global responsibility to confront the risks highlighted in the report and to harness AI’s potential. Sunak emphasised the need for honest dialogue about the dual nature of AI: offering unprecedented opportunities while also posing significant dangers.

“AI will bring new knowledge, new opportunities for economic growth, new advances in human capability, and the chance to solve problems we once thought beyond us. But it also brings new dangers and new fears,” said Sunak.

“So, the responsible thing for me to do is to address those fears head-on, giving you the peace of mind that we will keep you safe while making sure you and your children have all the opportunities for a better future that AI can bring.

“Doing the right thing, not the easy thing, means being honest with people about the risks from these technologies.”

The report delves into the rapid advancements of frontier AI, drawing on numerous sources. It highlights the diverse perspectives within scientific, expert, and global communities regarding the risks associated with the swift evolution of AI technology. 

The publication comprises three key sections:

  1. Capabilities and risks from frontier AI: This section presents a discussion paper advocating further research into AI risk. It delineates the current state of frontier AI capabilities, potential future improvements, and associated risks, including societal harms, misuse, and loss of control.
  2. Safety and security risks of generative AI to 2025: Drawing on intelligence assessments, this report outlines the potential global benefits of generative AI while highlighting the increased safety and security risks. It underscores the enhancement of threat actor capabilities and the effectiveness of attacks due to generative AI development.
  3. Future risks of frontier AI: Prepared by the Government Office for Science, this report explores uncertainties in frontier AI development, future system risks, and potential scenarios for AI up to 2030.

The report – based on declassified information from intelligence agencies – focuses on generative AI, the technology underpinning popular chatbots and image generation software. It foresees a future where AI might be exploited by terrorists to plan biological or chemical attacks, raising serious concerns about global security.

Sjuul van der Leeuw, CEO of Deployteq, commented: “It is good to see the government take a serious approach, offering a report ahead of the Safety Summit next week, however more must be done.

“An ongoing effort to address AI risks is needed and we hope that the summit brings much-needed clarity, allowing businesses and marketers to enjoy the benefits this emerging piece of technology offers, without the worry of backlash.”

The report highlights that generative AI could be utilised to gather knowledge on physical attacks by non-state violent actors, including creating chemical, biological, and radiological weapons.

Although companies are working to implement safeguards, the report emphasises the varying effectiveness of these measures. Obstacles to obtaining the necessary knowledge, raw materials, and equipment for such attacks are decreasing, with AI potentially accelerating this process.

Additionally, the report warns of the likelihood of AI-driven cyber-attacks becoming faster-paced, more effective, and larger in scale by 2025. AI could aid hackers in mimicking official language and overcoming previous challenges faced in this area.

However, some experts have questioned the UK Government’s approach.

Rashik Parmar MBE, CEO of BCS, The Chartered Institute for IT, said: “Over 1,300 technologists and leaders signed our open letter calling AI a force for good rather than an existential threat to humanity.

“AI won’t grow up like The Terminator. If we take the proper steps, it will be a trusted co-pilot from our earliest school days to our retirement.”

The AI Safety Summit will aim to foster healthy discussion around how to address frontier AI risks, encompassing misuse by non-state actors for cyberattacks or bioweapon design and concerns related to AI systems acting autonomously contrary to human intentions. Discussions at the summit will also extend to broader societal impacts, such as election disruption, bias, crime, and online safety.

Claire Trachet, CEO of Trachet, commented: “The fast-growing nature of AI has made it difficult for governments to balance creating effective regulation which safeguards the interest of businesses and consumers without stifling investment opportunities. Even though there are some forms of risk management and different reports coming out now, none of them are true coordinated approaches.

“The UK Government’s commitment to AI safety is commendable, but the criticism surrounding the summit serves as a reminder of the importance of a balanced, constructive, and forward-thinking approach to AI regulation.”

If the UK Government’s report is anything to go by, the need for collaboration around proportionate but rigorous measures to manage the risks posed by AI is more imperative than ever.

The global AI Safety Summit is set to take place at the historic Bletchley Park on 1–2 November 2023.

(Image Credit: GOV.UK)

See also: BSI: Closing ‘AI confidence gap’ key to unlocking benefits

Enterprises struggle to address generative AI’s security implications (18 October 2023)

In a recent study, cloud-native network detection and response firm ExtraHop unveiled a concerning trend: enterprises are struggling with the security implications of employee generative AI use.

Their new research report, The Generative AI Tipping Point, sheds light on the challenges faced by organisations as generative AI technology becomes more prevalent in the workplace.

The report delves into how organisations are dealing with the use of generative AI tools, revealing a significant cognitive dissonance among IT and security leaders. Astonishingly, 73 percent of these leaders confessed that their employees frequently use generative AI tools or large language models (LLMs) at work. Despite this, a staggering majority admitted to being uncertain about how to effectively address the associated security risks.

When questioned about their concerns, IT and security leaders expressed more worry about the possibility of inaccurate or nonsensical responses (40%) than critical security issues such as exposure of customer and employee personally identifiable information (PII) (36%) or financial loss (25%).

Raja Mukerji, Co-Founder and Chief Scientist at ExtraHop, said: “By blending innovation with strong safeguards, generative AI will continue to be a force that will uplevel entire industries in the years to come.”

One of the startling revelations from the study was the ineffectiveness of generative AI bans. About 32 percent of respondents stated that their organisations had prohibited the use of these tools. However, only five percent reported that employees never used these tools—indicating that bans alone are not enough to curb their usage.

The study also highlighted a clear desire for guidance, particularly from government bodies. A significant 90 percent of respondents expressed the need for government involvement, with 60 percent advocating for mandatory regulations and 30 percent supporting government standards for businesses to adopt voluntarily.

Despite a sense of confidence in their current security infrastructure, the study revealed gaps in basic security practices.

While 82 percent felt confident in their security stack’s ability to protect against generative AI threats, less than half had invested in technology to monitor generative AI use. Alarmingly, only 46 percent had established policies governing acceptable use and merely 42 percent provided training to users on the safe use of these tools.

The findings come in the wake of the rapid adoption of technologies like ChatGPT, which have become an integral part of modern businesses. Business leaders are urged to understand their employees’ generative AI usage to identify potential security vulnerabilities.

You can find a full copy of the report here.

(Photo by Hennie Stander on Unsplash)

See also: BSI: Closing ‘AI confidence gap’ key to unlocking benefits

OpenAI reveals DALL-E 3 text-to-image model (21 September 2023)

OpenAI has announced DALL-E 3, the third iteration of its acclaimed text-to-image model. 

DALL-E 3 promises significant enhancements over its predecessors and introduces seamless integration with ChatGPT.

One of the standout features of DALL-E 3 is its ability to better understand and interpret user intentions when confronted with detailed and lengthy prompts.

Even if a user struggles to articulate their vision precisely, ChatGPT can step in to assist in crafting comprehensive prompts.

DALL-E 3 has been engineered to excel in creating elements that its predecessors and other AI generators have historically struggled with, such as rendering intricate depictions of hands and incorporating text into images.

OpenAI has also implemented robust security measures, ensuring the AI system refrains from generating explicit or offensive content by identifying and ignoring certain keywords in prompts.
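
OpenAI has not published how this screening works, but the general shape of checking prompts against a blocklist before generation can be sketched briefly. Everything in the snippet below is a hypothetical illustration, not OpenAI’s implementation; a production system would rely on trained moderation classifiers rather than literal keyword matching.

```python
# Hypothetical sketch of keyword-based prompt screening. OpenAI's actual
# safety pipeline is not public and is far more sophisticated (e.g.
# trained moderation models rather than literal keyword matching).
BLOCKED_TERMS = {"banned_term_a", "banned_term_b"}  # illustrative placeholders

def is_prompt_allowed(prompt: str) -> bool:
    """Return False if the prompt contains any blocklisted keyword."""
    words = set(prompt.lower().split())
    return not (words & BLOCKED_TERMS)

print(is_prompt_allowed("A watercolour painting of a lighthouse"))  # True
```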

Beyond technical advancements, OpenAI has taken steps to mitigate potential legal issues. 

While the current DALL-E version can mimic the styles of living artists, the forthcoming DALL-E 3 has been designed to decline requests to replicate their copyrighted works. Artists will also have the option to submit their original creations through a dedicated form on the OpenAI website, allowing them to request removal if necessary.

OpenAI’s rollout plan for DALL-E 3 involves an initial release to ChatGPT ‘Plus’ and ‘Enterprise’ customers next month. The enhanced image generator will then become available to OpenAI’s research labs and API customers in the autumn.

As OpenAI continues to push the boundaries of AI technology, DALL-E 3 represents a major step forward in text-to-image generation.

(Image Credit: OpenAI)

See also: Stability AI unveils ‘Stable Audio’ model for controllable audio generation

Stability AI unveils ‘Stable Audio’ model for controllable audio generation (14 September 2023)

Stability AI has introduced “Stable Audio,” a latent diffusion model designed to revolutionise audio generation.

This breakthrough promises to be another leap forward for generative AI and combines text metadata, audio duration, and start time conditioning to offer unprecedented control over the content and length of generated audio—even enabling the creation of complete songs.

Audio diffusion models traditionally faced a significant limitation in generating audio of fixed durations, often leading to abrupt and incomplete musical phrases. This was primarily due to the models being trained on random audio chunks cropped from longer files and then forced into predetermined lengths.

Stable Audio effectively tackles this historic challenge, enabling the generation of audio with specified lengths, up to the training window size.

One of the standout features of Stable Audio is its use of a heavily downsampled latent representation of audio, resulting in vastly accelerated inference times compared to raw audio. Through cutting-edge diffusion sampling techniques, the flagship Stable Audio model can generate 95 seconds of stereo audio at a 44.1 kHz sample rate in under a second on an NVIDIA A100 GPU.

A sound foundation

The core architecture of Stable Audio comprises a variational autoencoder (VAE), a text encoder, and a U-Net-based conditioned diffusion model.

The VAE plays a pivotal role by compressing stereo audio into a noise-resistant, lossy latent encoding that significantly expedites both generation and training processes. This approach, based on the Descript Audio Codec encoder and decoder architectures, facilitates encoding and decoding of arbitrary-length audio while ensuring high-fidelity output.

To harness the influence of text prompts, Stability AI utilises a text encoder derived from a CLAP model specially trained on their dataset. This enables the model to imbue text features with information about the relationships between words and sounds. These text features, extracted from the penultimate layer of the CLAP text encoder, are integrated into the diffusion U-Net through cross-attention layers.

During training, the model learns to incorporate two key properties from audio chunks: the starting second (“seconds_start”) and the total duration of the original audio file (“seconds_total”). These properties are transformed into discrete learned embeddings per second, which are then concatenated with the text prompt tokens. This unique conditioning allows users to specify the desired length of the generated audio during inference.
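
Stability AI has not released this exact code, but the conditioning scheme described above is simple to sketch. The module below is an illustrative PyTorch reconstruction based only on that description; the class name, embedding sizes, and maximum duration are assumptions, not the official implementation.

```python
import torch
import torch.nn as nn

class TimingConditioner(nn.Module):
    """Illustrative reconstruction of the timing conditioning described
    above: one learned embedding per discrete second, concatenated with
    the text prompt tokens. Names and sizes are assumptions."""

    def __init__(self, max_seconds: int = 120, dim: int = 768):
        super().__init__()
        self.start_embed = nn.Embedding(max_seconds, dim)  # "seconds_start"
        self.total_embed = nn.Embedding(max_seconds, dim)  # "seconds_total"

    def forward(self, seconds_start, seconds_total, text_tokens):
        # text_tokens: (batch, seq_len, dim) features from the text encoder.
        start = self.start_embed(seconds_start).unsqueeze(1)  # (batch, 1, dim)
        total = self.total_embed(seconds_total).unsqueeze(1)  # (batch, 1, dim)
        # The diffusion U-Net cross-attends over this combined sequence,
        # which is what lets users specify audio length at inference time.
        return torch.cat([text_tokens, start, total], dim=1)

# Example: condition on clips starting at 0s/10s with totals of 95s/60s.
cond = TimingConditioner()
tokens = torch.randn(2, 77, 768)  # dummy text-encoder features
out = cond(torch.tensor([0, 10]), torch.tensor([95, 60]), tokens)
print(out.shape)  # torch.Size([2, 79, 768])
```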

The diffusion model at the heart of Stable Audio boasts a staggering 907 million parameters and leverages a sophisticated blend of residual layers, self-attention layers, and cross-attention layers to denoise the input while considering text and timing embeddings. To enhance memory efficiency and scalability for longer sequence lengths, the model incorporates memory-efficient implementations of attention.

To train the flagship Stable Audio model, Stability AI curated an extensive dataset comprising over 800,000 audio files encompassing music, sound effects, and single-instrument stems. This rich dataset, furnished in partnership with AudioSparx – a prominent stock music provider – amounts to some 19,500 hours of audio.

Stable Audio represents the vanguard of audio generation research, emerging from Stability AI’s generative audio research lab, Harmonai. The team remains dedicated to advancing model architectures, refining datasets, and enhancing training procedures. Their pursuit encompasses elevating output quality, fine-tuning controllability, optimising inference speed, and expanding the range of achievable output lengths.

Stability AI has hinted at forthcoming releases from Harmonai, teasing the possibility of open-source models based on Stable Audio and accessible training code.

This latest groundbreaking announcement follows a string of noteworthy stories about Stability. Earlier this week, Stability joined seven other prominent AI companies that signed the White House’s voluntary AI safety pledge as part of its second round.

You can try Stable Audio for yourself here.

(Photo by Eric Nopanen on Unsplash)

Baidu deploys its ERNIE Bot generative AI to the public (31 August 2023)

Chinese tech giant Baidu has announced that its generative AI product ERNIE Bot is now open to the public through various app stores and its website.

ERNIE Bot can generate text, images, and videos based on natural language inputs. It is powered by ERNIE (Enhanced Representation through Knowledge Integration), a powerful deep learning model.

The first version of ERNIE was introduced and open-sourced in 2019 by researchers at Tsinghua University to demonstrate the natural language understanding capabilities of a model that combines both text and knowledge graph data.

Later that year, Baidu released ERNIE 2.0, which became the first model to score higher than 90 on the GLUE benchmark for evaluating natural language understanding systems.

In 2021, Baidu’s researchers posted a paper on ERNIE 3.0 in which they claim the model exceeds human performance on the SuperGLUE natural language benchmark. ERNIE 3.0 set a new top score on SuperGLUE and displaced efforts from Google and Microsoft.

According to Baidu’s CEO Robin Li, opening up ERNIE Bot to the public will enable the company to obtain more human feedback and improve the user experience. He said that ERNIE Bot is a showcase of the four core abilities of generative AI: understanding, generation, reasoning, and memory. He also said that ERNIE Bot can help users with various tasks such as writing, learning, entertainment, and work.

Baidu first unveiled ERNIE Bot in March this year, demonstrating its capabilities in different domains such as literature, art, and science. For example, ERNIE Bot can summarise a sci-fi novel and offer suggestions on how to continue the story in an expanded universe. It can also generate images and videos based on text inputs, such as creating a portrait of a fictional character or a scene from a movie.

Earlier this month, Baidu revealed that ERNIE Bot’s training throughput had increased three-fold since March and that it had achieved new milestones in data analysis and visualisation. ERNIE Bot can now generate results more quickly and handle image inputs as well. For instance, ERNIE Bot can analyse an image of a pie chart and generate a summary of the data in natural language.

Baidu is one of the first Chinese companies to obtain approval from authorities to release generative AI experiences to the public, according to Bloomberg. The report suggests that officials see AI as a “business and political imperative” for China and want to ensure that the technology is used in a responsible and ethical manner.

Beijing is keen on putting guardrails in place to prevent the spread of harmful or illegal content while still enabling Chinese companies to compete with overseas rivals in the field of AI.

Beijing’s AI guardrails

The “guardrails” include the rules published by the Chinese authorities in July 2023 that govern generative AI in China.

China’s rules go substantially beyond current regulations in other parts of the world and aim to ensure that generative AI is used in a responsible and ethical manner. The rules cover various aspects of generative AI, such as content, data, technology, fairness, and licensing.

One notable requirement is that operators of generative AI must ensure that their services adhere to the core values of socialism, while also avoiding content that incites subversion of state power, secession, terrorism, or any actions undermining national unity and social stability.

Generative AI services within China are also prohibited from promoting content that provokes ethnic hatred and discrimination, violence, obscenity, or false and harmful information.

Furthermore, the regulations reveal China’s interest in developing digital public goods for generative AI. The document emphasises the promotion of public training data resource platforms and the collaborative sharing of model-making hardware to enhance utilisation rates. The authorities also aim to encourage the orderly opening of public data classification and the expansion of high-quality public training data resources.

In terms of technology development, the rules stipulate that AI should be developed using secure and proven tools, including chips, software, tools, computing power, and data resources.

Intellectual property rights – an often contentious issue – must be respected when using data for model development, and the consent of individuals must be obtained before incorporating personal information. There is also a focus on improving the quality, authenticity, accuracy, objectivity, and diversity of training data.

To ensure fairness and non-discrimination, developers are required to create algorithms that do not discriminate based on factors such as ethnicity, belief, country, region, gender, age, occupation, or health. Moreover, operators of generative AI must obtain licenses for their services under most circumstances, adding a layer of regulatory oversight.

China’s rules not only have implications for domestic AI operators but also serve as a benchmark for international discussions on AI governance and ethical practices.

(Image Credit: Alpha Photo under CC BY-NC 2.0 license)

See also: OpenAI launches ChatGPT Enterprise to accelerate business operations

Azure and NVIDIA deliver next-gen GPU acceleration for AI (9 August 2023)

Microsoft Azure users are now able to harness the latest advancements in NVIDIA’s accelerated computing technology, revolutionising the training and deployment of their generative AI applications.

The integration of Azure ND H100 v5 virtual machines (VMs) with NVIDIA H100 Tensor Core GPUs and Quantum-2 InfiniBand networking promises seamless scaling of generative AI and high-performance computing applications, all at the click of a button.

This cutting-edge collaboration comes at a pivotal moment when developers and researchers are actively exploring the potential of large language models (LLMs) and accelerated computing to unlock novel consumer and business use cases.

NVIDIA’s H100 GPU achieves supercomputing-class performance through an array of architectural innovations. These include fourth-generation Tensor Cores, a new Transformer Engine for enhanced LLM acceleration, and NVLink technology that propels inter-GPU communication to unprecedented speeds of 900GB/sec.

The integration of the NVIDIA Quantum-2 CX7 InfiniBand – boasting 3,200 Gbps cross-node bandwidth – ensures flawless performance across GPUs, even at massive scales. This capability positions the technology on par with the computational capabilities of the world’s most advanced supercomputers.

The newly introduced ND H100 v5 VMs hold immense potential for training and inferring increasingly intricate LLMs and computer vision models. These neural networks power the most complex and compute-intensive generative AI applications, spanning from question answering and code generation to audio, video, image synthesis, and speech recognition.

A standout feature of the ND H100 v5 VMs is their ability to achieve up to a 2x speedup in LLM inference, notably demonstrated by the BLOOM 175B model when compared to previous generation instances. This performance boost underscores their capacity to optimise AI applications further, fueling innovation across industries.

The synergy between NVIDIA H100 Tensor Core GPUs and Microsoft Azure empowers enterprises with unparalleled AI training and inference capabilities. This partnership also streamlines the development and deployment of production AI, bolstered by the integration of the NVIDIA AI Enterprise software suite and Azure Machine Learning for MLOps.

The combined efforts have led to groundbreaking AI performance, as validated by industry-standard MLPerf benchmarks.

The integration of the NVIDIA Omniverse platform with Azure extends the reach of this collaboration further, providing users with everything they need for industrial digitalisation and AI supercomputing.

(Image Credit: Uwe Hoh from Pixabay)

See also: Gcore partners with UbiOps and Graphcore to empower AI teams

Assessing the risks of generative AI in the workplace (17 July 2023)

Amid the exponential growth of generative AI, there is a pressing need to evaluate the legal, ethical, and security implications of these solutions in the workplace.

One concern often highlighted by industry experts is the lack of transparency regarding the data on which many generative AI models are trained.

There is insufficient information about the specifics of the training data used for models like GPT-4, which powers applications such as ChatGPT. This lack of clarity extends to the storage of information obtained during interactions with individual users, raising legal and compliance risks.

The potential for leakage of sensitive company data or code through interactions with generative AI solutions is of significant concern.

“Individual employees might leak sensitive company data or code when interacting with popular generative AI solutions,” says Vaidotas Šedys, Head of Risk Management at Oxylabs.

“While there is no concrete evidence that data submitted to ChatGPT or any other generative AI system might be stored and shared with other people, the risk still exists as new and less tested software often has security gaps.” 

OpenAI, the organisation behind ChatGPT, has been cautious in providing detailed information on how user data is handled. This poses challenges for organisations seeking to mitigate the risk of confidential code fragments being leaked. Constant monitoring of employee activities and implementing alerts for the use of generative AI platforms becomes necessary, which can be burdensome for many organisations.
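
What such alerting might look like in practice can be sketched briefly. The example below scans web proxy logs for requests to known generative AI services; the domain list, log columns, and function name are hypothetical illustrations rather than any particular vendor’s product.

```python
import csv

# Illustrative domain list; a real deployment would maintain a larger,
# regularly updated inventory of generative AI services.
GENAI_DOMAINS = {"chat.openai.com", "bard.google.com", "claude.ai"}

def flag_genai_requests(log_path: str) -> list[dict]:
    """Return proxy log rows (assumed CSV columns: timestamp, user, host)
    whose destination host is a known generative AI service."""
    alerts = []
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["host"] in GENAI_DOMAINS:
                alerts.append(row)  # candidate alert for the security team
    return alerts
```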

“Further risks include using wrong or outdated information, especially in the case of junior specialists who are often unable to evaluate the quality of the AI’s output. Most generative models function on large but limited datasets that need constant updating,” adds Šedys.

These models have a limited context window and may encounter difficulties when dealing with new information. OpenAI has acknowledged that its latest framework, GPT-4, still suffers from factual inaccuracies, which can lead to the dissemination of misinformation.

The implications extend beyond individual companies. For example, Stack Overflow – a popular developer community – has temporarily banned the use of content generated with ChatGPT due to low precision rates, which can mislead users seeking coding answers.

Legal risks also come into play when utilising free generative AI solutions. GitHub’s Copilot has already faced accusations and lawsuits for incorporating copyrighted code fragments from public and open-source repositories.

“As AI-generated code can contain proprietary information or trade secrets belonging to another company or person, the company whose developers are using such code might be liable for infringement of third-party rights,” explains Šedys.

“Moreover, failure to comply with copyright laws might affect company evaluation by investors if discovered.”

While organisations cannot feasibly achieve total workplace surveillance, individual awareness and responsibility are crucial. Educating the general public about the potential risks associated with generative AI solutions is essential.

Industry leaders, organisations, and individuals must collaborate to address the data privacy, accuracy, and legal risks of generative AI in the workplace.

(Photo by Sean Pollock on Unsplash)

See also: Universities want to ensure staff and students are ‘AI-literate’

Databricks acquires LLM pioneer MosaicML for $1.3B (28 June 2023)

Databricks has announced its definitive agreement to acquire MosaicML, a pioneer in large language models (LLMs).

This strategic move aims to make generative AI accessible to organisations of all sizes, allowing them to develop, possess, and safeguard their own generative AI models using their own data. 

The acquisition, valued at ~$1.3 billion – inclusive of retention packages – showcases Databricks’ commitment to democratising AI and reinforcing the company’s Lakehouse platform as a leading environment for building generative AI and LLMs.

Naveen Rao, Co-Founder and CEO at MosaicML, said:

“At MosaicML, we believe in a world where everyone is empowered to build and train their own models, imbued with their own opinions and viewpoints — and joining forces with Databricks will help us make that belief a reality.

We started MosaicML to solve the hard engineering and research problems necessary to make large-scale training more accessible to everyone. With the recent generative AI wave, this mission has taken centre stage.

Together with Databricks, we will tip the scales in the favour of many — and we’ll do it as kindred spirits: researchers turned entrepreneurs sharing a similar mission. We look forward to continuing this journey together with the AI community.”

MosaicML has gained recognition for its cutting-edge MPT large language models, with millions of downloads for MPT-7B and the recent release of MPT-30B.

The platform has demonstrated how organisations can swiftly construct and train their own state-of-the-art models cost-effectively by utilising their own data. Esteemed customers like AI2, Generally Intelligent, Hippocratic AI, Replit, and Scatter Labs have leveraged MosaicML for a diverse range of generative AI applications.

The primary objective of this acquisition is to provide organisations with a simple and rapid method to develop, own, and secure their models. By combining the capabilities of Databricks’ Lakehouse Platform with MosaicML’s technology, customers can maintain control, security, and ownership of their valuable data without incurring exorbitant costs.

MosaicML’s automatic optimisation of model training enables 2x-7x faster training compared to standard approaches, and the near linear scaling of resources allows for the training of multi-billion-parameter models within hours. Consequently, Databricks and MosaicML aim to reduce the cost of training and utilising LLMs from millions to thousands of dollars.

The integration of Databricks’ unified Data and AI platform with MosaicML’s generative AI training capabilities will result in a robust and flexible platform capable of serving the largest organisations and addressing various AI use cases.

Upon the completion of the transaction, the entire MosaicML team – including its renowned research team – is expected to join Databricks.

MosaicML’s machine learning and neural networks experts are at the forefront of AI research, striving to enhance model training efficiency. They have contributed to popular open-source foundational models like MPT-30B, as well as the training algorithms powering MosaicML’s products.

The MosaicML platform will be progressively supported, scaled, and integrated to provide customers with a seamless unified platform where they can build, own, and secure their generative AI models. The partnership between Databricks and MosaicML empowers customers with the freedom to construct their own models, train them using their unique data, and develop differentiating intellectual property for their businesses.

The completion of the proposed acquisition is subject to customary closing conditions, including regulatory clearances.

(Photo by Glen Carrie on Unsplash)

See also: MosaicML’s latest models outperform GPT-3 with just 30B parameters

Stephen Almond, ICO: Prioritise privacy when adopting generative AI (15 June 2023)

The Information Commissioner’s Office (ICO) is urging businesses to prioritise privacy considerations when adopting generative AI technology.

According to new research, generative AI has the potential to become a £1 trillion market within the next ten years, offering significant benefits to both businesses and society. However, the ICO emphasises the need for organisations to be aware of the associated privacy risks.

Stephen Almond, the Executive Director of Regulatory Risk at the ICO, highlighted the importance of recognising the opportunities presented by generative AI while also understanding the potential risks.

“Businesses are right to see the opportunity that generative AI offers, whether to create better services for customers or to cut the costs of their services. But they must not be blind to the privacy risks,” says Almond.

“Spend time at the outset to understand how AI is using personal information, mitigate any risks you become aware of, and then roll out your AI approach with confidence that it won’t upset customers or regulators.”

Generative AI works by generating content based on extensive data collection from publicly accessible sources, including personal information. Existing laws already safeguard individuals’ rights, including privacy, and these regulations extend to emerging technologies such as generative AI.

In April, the ICO outlined eight key questions that organisations using or developing generative AI that processes personal data should be asking themselves. The regulatory body is committed to taking action against organisations that fail to comply with data protection laws.

Almond reaffirms the ICO’s stance, stating that they will assess whether businesses have effectively addressed privacy risks before implementing generative AI, and will take action if there is a potential for harm resulting from the misuse of personal data. He emphasises that businesses must not overlook the risks to individuals’ rights and freedoms during the rollout of generative AI.

“We will be checking whether businesses have tackled privacy risks before introducing generative AI – and taking action where there is a risk of harm to people through poor use of their data. There can be no excuse for ignoring risks to people’s rights and freedoms before rollout,” explains Almond.

“Businesses need to show us how they’ve addressed the risks that occur in their context – even if the underlying technology is the same. An AI-backed chat function helping customers at a cinema raises different questions compared with one for a sexual health clinic, for instance.”

The ICO is committed to supporting UK businesses in their development and adoption of new technologies that prioritise privacy.

The recently updated Guidance on AI and Data Protection serves as a comprehensive resource for developers and users of generative AI, providing a roadmap for data protection compliance. Additionally, the ICO offers a risk toolkit to assist organisations in identifying and mitigating data protection risks associated with generative AI.

For innovators facing novel data protection challenges, the ICO provides advice through its Regulatory Sandbox and Innovation Advice service. To enhance their support, the ICO is piloting a Multi-Agency Advice Service in collaboration with the Digital Regulation Cooperation Forum, aiming to provide comprehensive guidance from multiple regulatory bodies to digital innovators.

While generative AI offers tremendous opportunities for businesses, the ICO emphasises the need to address privacy risks before widespread adoption. By understanding the implications, mitigating risks, and complying with data protection laws, organisations can ensure the responsible and ethical implementation of generative AI technologies.

(Image Credit: ICO)

Related: UK will host global AI summit to address potential risks

Mark Zuckerberg: AI will be built into all of Meta’s products (9 June 2023)

Meta CEO Mark Zuckerberg unveiled the extent of the company’s AI investments during an internal company meeting.

The meeting included discussions about new products, such as chatbots for Messenger and WhatsApp that can converse with different personas. Additionally, Meta announced new features for Instagram, including the ability to modify user photos via text prompts and create emoji stickers for messaging services.

These developments come at a crucial time for Meta, as the company has faced financial struggles and an identity crisis in recent years. Investors criticised Meta for focusing too heavily on its metaverse ambitions and not paying enough attention to AI.

Meta’s decision to focus on AI tools follows in the footsteps of its competitors, including Google, Microsoft, and Snapchat, who have received significant investor attention for their generative AI products. Unlike the aforementioned rivals, Meta is yet to release any consumer-facing generative AI products.

To address this gap, Meta has been reorganising its AI divisions and investing heavily in infrastructure to support its AI product needs.

Zuckerberg expressed optimism during the company meeting, stating that advancements in generative AI have made it possible to integrate the technology into “every single one” of Meta’s products. This signifies Meta’s intention to leverage AI across its platforms, including Facebook, Instagram, and WhatsApp.

In addition to consumer-facing tools, Meta also announced a productivity assistant called Metamate for its employees. This assistant is designed to answer queries and perform tasks based on internal company information.

Meta is also exploring open-source models, allowing users to build their own AI-powered chatbots and technologies. However, critics and competitors have raised concerns about the potential misuse of these tools, as they can be utilised to spread misinformation and hate speech on a larger scale.

Zuckerberg addressed these concerns during the meeting, emphasising the value of democratising access to AI. He expressed hope that users would be able to develop AI programs independently in the future, without relying on frameworks provided by a few large technology companies.

Despite the increased focus on AI, Zuckerberg reassured employees that Meta would not be abandoning its plans for the metaverse, indicating that both AI and the metaverse would remain key areas of focus for the company.

The success of these endeavours will determine whether Meta can catch up with its competitors and solidify its position among tech leaders in the rapidly evolving landscape.

(Photo by Mariia Shalabaieva on Unsplash)

Related: Meta’s open-source speech AI models support over 1,100 languages
