Dell, Intel and University of Cambridge deploy the UK’s fastest AI supercomputer

Dell, Intel, and the University of Cambridge have jointly announced the deployment of the Dawn Phase 1 supercomputer.

This cutting-edge AI supercomputer stands as the fastest of its kind in the UK today. It marks a groundbreaking fusion of AI and high-performance computing (HPC) technologies, showcasing the potential to tackle some of the world’s most pressing challenges.

Dawn Phase 1 is the cornerstone of the recently launched UK AI Research Resource (AIRR), demonstrating the nation’s commitment to exploring innovative systems and architectures.

This supercomputer brings the UK closer to achieving exascale: a computing threshold of a quintillion (10^18) floating-point operations per second. To put this into perspective, if every person on Earth performed one calculation per second, it would take them over four years to match what an exascale system can do in a single second.
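As a rough back-of-the-envelope check of that analogy (assuming a world population of about eight billion, each person managing one calculation per second), the arithmetic works out as follows:

```python
# Back-of-the-envelope check of the exascale analogy above.
# Assumption: roughly 8 billion people, each doing one calculation per second.
EXA_OPS = 1e18                      # exascale: 10^18 operations per second
population = 8e9                    # approximate world population
seconds_per_year = 365.25 * 24 * 3600

# Years humanity would need to match ONE second of exascale output.
years = EXA_OPS / (population * seconds_per_year)
print(f"{years:.1f} years")         # ~4.0 years
```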

Operational at the Cambridge Open Zettascale Lab, Dawn utilises Dell PowerEdge XE9640 servers to host the Intel Data Center GPU Max Series accelerator. Through oneAPI, Intel’s open cross-architecture programming model, the pairing supports a diverse ecosystem of software and hardware choices.

The system’s capabilities extend across various domains, including healthcare, engineering, green fusion energy, climate modelling, cosmology, and high-energy physics.

Adam Roe, EMEA HPC technical director at Intel, said:

“Dawn considerably strengthens the scientific and AI compute capability available in the UK and it’s on the ground and operational today at the Cambridge Open Zettascale Lab.

Dell PowerEdge XE9640 servers offer a no-compromises platform to host the Intel Data Center GPU Max Series accelerator, which opens up the ecosystem to choice through oneAPI.

I’m very excited to see the sorts of early science this machine can deliver and continue to strengthen the Open Zettascale Lab partnership between Dell Technologies, Intel, and the University of Cambridge, and further broaden that to the UK scientific and AI community.”

Glimpse into the future

Dawn Phase 1 is not just a standalone achievement; it’s part of a broader strategy.

The collaborative endeavour aims to deliver a Phase 2 supercomputer in 2024 with ten times the performance of Phase 1. This progression would significantly boost the UK’s AI capability and build on the partnership between industry and academia.

The supercomputer’s technical foundation lies in Dell PowerEdge XE9640 servers, renowned for their versatile configurations and efficient liquid cooling technology. This innovation ensures optimal handling of AI and HPC workloads, offering a more effective solution than traditional air-cooled systems.

Tariq Hussain, Head of UK Public Sector at Dell, commented:

“Collaborations like the one between the University of Cambridge, Dell Technologies and Intel, alongside strong inward investment, are vital if we want the compute to unlock the high-growth AI potential of the UK. It is paramount that the government invests in the right technologies and infrastructure to ensure the UK leads in AI and exascale-class simulation capability.

It’s also important to embrace the full spectrum of the technology ecosystem, including GPU diversity, to ensure customers can tackle the growing demands of generative AI, industrial simulation modelling and ground-breaking scientific research.”

As the world awaits the full technical details and performance figures of Dawn Phase 1 – slated for release in mid-November during the Supercomputing 23 (SC23) conference in Denver, Colorado – the UK stands on the cusp of a transformative era in scientific and AI research.

This collaboration between industry giants and academia not only accelerates research discovery but also propels the UK’s knowledge economy to new heights.

(Image Credit: Joe Bishop for Cambridge Open Zettascale Lab)

See also: UK paper highlights AI risks ahead of global Safety Summit

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Microsoft and Siemens revolutionise industry with AI-powered Copilot

Microsoft and Siemens are joining forces to usher in a new era of human-machine collaboration.

The first result is the Siemens Industrial Copilot, an AI assistant designed to enhance human-machine collaboration in the manufacturing sector. The tool enables rapid generation, optimisation, and debugging of complex automation code, reducing simulation times from weeks to minutes.

At the core of this collaboration is the integration of Siemens Industrial Copilot with Microsoft Teams, connecting design engineers, frontline workers, and various teams across business functions. This integration simplifies virtual collaboration, empowering professionals with new AI-powered tools and simplifying tasks that previously required extensive time and effort.

Empowering industries with Generative AI

Satya Nadella, Chairman and CEO of Microsoft, expressed the immense potential of this collaboration, stating: “With this next generation of AI, we have a unique opportunity to accelerate innovation across the entire industrial sector.”

Siemens CEO Roland Busch echoed this sentiment, emphasising the revolutionary impact on design, development, manufacturing, and operations.

The companies envision AI copilots becoming integral in industries such as manufacturing, infrastructure, transportation, and healthcare.

Schaeffler AG – a leading automotive supplier – is already embracing generative AI to help its engineers generate reliable code for industrial automation systems. The Siemens Industrial Copilot will also help reduce downtime.

Facilitating virtual collaboration

To facilitate virtual collaboration, Siemens and Microsoft are launching Teamcenter for Microsoft Teams, an application that uses generative AI to connect functions across the product design and manufacturing lifecycle.

This integration will allow millions of workers who previously lacked access to Product Lifecycle Management (PLM) tools to contribute seamlessly to the design and manufacturing processes.

The collaboration between Microsoft and Siemens looks set to be an excellent case study of how AI empowers industries and professionals, revolutionising traditional workflows and fostering global innovation.

(Photo by Sezer Arslan on Unsplash)

See also: Bob Briski, DEPT®:  A dive into the future of AI-powered experiences

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with IoT Tech Expo and Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Biden issues executive order to ensure responsible AI development

President Biden has issued an executive order aimed at positioning the US at the forefront of AI while ensuring the technology’s safe and responsible use.

The order establishes stringent standards for AI safety and security, safeguards Americans’ privacy, promotes equity and civil rights, protects consumers and workers, fosters innovation and competition, and enhances American leadership on the global stage.

Key actions outlined in the order:

  1. New standards for AI safety and security: The order mandates that developers of powerful AI systems share safety test results and critical information with the US government. Rigorous standards, tools, and tests will be developed to ensure AI systems are safe, secure, and trustworthy before public release. Additionally, measures will be taken to protect against the risks of using AI to engineer dangerous biological materials and to combat AI-enabled fraud and deception.
  2. Protecting citizens’ privacy: The President calls on Congress to pass bipartisan data privacy legislation, prioritising federal support for privacy-preserving techniques, especially those using AI. Guidelines will be developed for federal agencies to evaluate the effectiveness of privacy-preserving techniques, including those used in AI systems.
  3. Advancing equity and civil rights: Clear guidance will be provided to prevent AI algorithms from exacerbating discrimination, especially in areas like housing and federal benefit programs. Best practices will be established for the use of AI in the criminal justice system to ensure fairness.
  4. Standing up for consumers, patients, and students: Responsible use of AI in healthcare and education will be promoted, ensuring that consumers are protected from harmful AI applications while benefiting from its advancements in these sectors.
  5. Supporting workers: Principles and best practices will be developed to mitigate the harms and maximise the benefits of AI for workers, addressing issues such as job displacement, workplace equity, and health and safety. A report on AI’s potential labour-market impacts will be produced, identifying options for strengthening federal support for workers facing labour disruptions due to AI.
  6. Promoting innovation and competition: The order aims to catalyse AI research across the US, promote a fair and competitive AI ecosystem, and expand the ability of highly skilled immigrants and non-immigrants to study, stay, and work in the US to foster innovation in the field.
  7. Advancing leadership abroad: The US will collaborate with other nations to establish international frameworks for safe and trustworthy AI deployment. Efforts will be made to accelerate the development and implementation of vital AI standards with international partners and promote the responsible development and deployment of AI abroad to address global challenges.
  8. Ensuring responsible and effective government adoption: Clear standards and guidelines will be issued for government agencies’ use of AI to protect rights and safety. Efforts will be made to help agencies acquire AI products and services more rapidly and efficiently, and an AI talent surge will be initiated to enhance government capacity in AI-related fields.

The executive order signifies a major step forward in the US towards harnessing the potential of AI while safeguarding individuals’ rights and security.

“As we advance this agenda at home, the Administration will work with allies and partners abroad on a strong international framework to govern the development and use of AI,” wrote the White House in a statement.

“The actions that President Biden directed today are vital steps forward in the US’ approach on safe, secure, and trustworthy AI. More action will be required, and the Administration will continue to work with Congress to pursue bipartisan legislation to help America lead the way in responsible innovation.”

The administration’s commitment to responsible innovation is paramount and sets the stage for continued collaboration with international partners to shape the future of AI globally.

(Photo by David Everett Strickler on Unsplash)

See also: UK paper highlights AI risks ahead of global Safety Summit

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Cyber Security & Cloud Expo and Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Biden issues executive order to ensure responsible AI development appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2023/10/30/biden-issues-executive-order-responsible-ai-development/feed/ 0
UK paper highlights AI risks ahead of global Safety Summit

The UK Government has unveiled a comprehensive paper addressing the capabilities and risks associated with frontier AI.

UK Prime Minister Rishi Sunak has spoken today on the global responsibility to confront the risks highlighted in the report and harness AI’s potential. Sunak emphasised the need for honest dialogue about the dual nature of AI: offering unprecedented opportunities, while also posing significant dangers.

“AI will bring new knowledge, new opportunities for economic growth, new advances in human capability, and the chance to solve problems we once thought beyond us. But it also brings new dangers and new fears,” said Sunak.

“So, the responsible thing for me to do is to address those fears head-on, giving you the peace of mind that we will keep you safe while making sure you and your children have all the opportunities for a better future that AI can bring.

“Doing the right thing, not the easy thing, means being honest with people about the risks from these technologies.”

The report delves into the rapid advancements of frontier AI, drawing on numerous sources. It highlights the diverse perspectives within scientific, expert, and global communities regarding the risks associated with the swift evolution of AI technology. 

The publication comprises three key sections:

  1. Capabilities and risks from frontier AI: This section presents a discussion paper advocating further research into AI risk. It delineates the current state of frontier AI capabilities, potential future improvements, and associated risks, including societal harms, misuse, and loss of control.
  2. Safety and security risks of generative AI to 2025: Drawing on intelligence assessments, this report outlines the potential global benefits of generative AI while highlighting the increased safety and security risks. It underscores the enhancement of threat actor capabilities and the effectiveness of attacks due to generative AI development.
  3. Future risks of frontier AI: Prepared by the Government Office for Science, this report explores uncertainties in frontier AI development, future system risks, and potential scenarios for AI up to 2030.

The report – based on declassified information from intelligence agencies – focuses on generative AI, the technology underpinning popular chatbots and image generation software. It foresees a future where AI might be exploited by terrorists to plan biological or chemical attacks, raising serious concerns about global security.

Sjuul van der Leeuw, CEO of Deployteq, commented: “It is good to see the government take a serious approach, offering a report ahead of the Safety Summit next week. However, more must be done.

“An ongoing effort to address AI risks is needed and we hope that the summit brings much-needed clarity, allowing businesses and marketers to enjoy the benefits this emerging piece of technology offers, without the worry of backlash.”

The report highlights that generative AI could be utilised to gather knowledge on physical attacks by non-state violent actors, including creating chemical, biological, and radiological weapons.

Although companies are working to implement safeguards, the report emphasises the varying effectiveness of these measures. Obstacles to obtaining the necessary knowledge, raw materials, and equipment for such attacks are decreasing, with AI potentially accelerating this process.

Additionally, the report warns that AI-driven cyber-attacks are likely to become faster-paced, more effective, and larger in scale by 2025. AI could help hackers mimic official language and overcome challenges they have previously faced in this area.

However, some experts have questioned the UK Government’s approach.

Rashik Parmar MBE, CEO of BCS, The Chartered Institute for IT, said: “Over 1,300 technologists and leaders signed our open letter calling AI a force for good rather than an existential threat to humanity.

“AI won’t grow up like The Terminator. If we take the proper steps, it will be a trusted co-pilot from our earliest school days to our retirement.”

The AI Safety Summit will aim to foster healthy discussion on how to address frontier AI risks, from misuse by non-state actors for cyberattacks or bioweapon design to concerns about AI systems acting autonomously against human intentions. Discussions at the summit will also extend to broader societal impacts, such as election disruption, bias, crime, and online safety.

Claire Trachet, CEO of Trachet, commented: “The fast-growing nature of AI has made it difficult for governments to balance creating effective regulation which safeguards the interest of businesses and consumers without stifling investment opportunities. Even though there are some forms of risk management and different reports coming out now, none of them are true coordinated approaches.

“The UK Government’s commitment to AI safety is commendable, but the criticism surrounding the summit serves as a reminder of the importance of a balanced, constructive, and forward-thinking approach to AI regulation.”

If the UK Government’s report is anything to go by, the need for collaboration around proportionate but rigorous measures to manage the risks posed by AI is more imperative than ever.

The global AI Safety Summit is set to take place at the historic Bletchley Park on 1-2 November 2023.

(Image Credit: GOV.UK)

See also: BSI: Closing ‘AI confidence gap’ key to unlocking benefits

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Cyber Security & Cloud Expo and Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Bob Briski, DEPT®: A dive into the future of AI-powered experiences

AI News caught up with Bob Briski, CTO of DEPT®, to discuss the intricate fusion of creativity and technology that promises a new era in digital experiences.

At the core of DEPT®’s approach is the strategic utilisation of large language models. Briski articulated the delicate balance between the ‘pioneering’ and ‘boutique’ ethos encapsulated in the company’s tagline: “pioneering work on a global scale with a boutique culture.”

While ‘pioneering’ and ‘boutique’ evoke innovation and personalised attention, ‘global scale’ signifies broad reach. DEPT® harnesses large language models to disseminate highly targeted, personalised messages to expansive audiences. These models, Briski pointed out, enable DEPT® to understand individuals at massive scale and to foster meaningful, individualised interactions.

“The way that we have been using a lot of these large language models is really to deliver really small and targeted messages to a large audience,” says Briski.

However, the integration of AI into various domains – such as retail, sports, education, and healthcare – poses both opportunities and challenges. DEPT® navigates this complexity by leveraging generative AI and large language models trained on diverse datasets, including vast repositories like Wikipedia and the Library of Congress.

Briski emphasised the importance of marrying pre-trained data with DEPT®’s domain expertise to ensure precise contextual responses. This approach guarantees that clients receive accurate and relevant information tailored to their specific sectors.

“The pre-training of these models allows them to really expound upon a bunch of different domains,” explains Briski. “We can be pretty sure that the answer is correct and that we want to either send it back to the client or the consumer or some other system that is sitting in front of the consumer.”

One of DEPT®’s standout achievements lies in its foray into the web3 space and the metaverse. Briski shared the company’s collaboration with Roblox, a platform synonymous with interactive user experiences. DEPT®’s collaboration with Roblox revolves around empowering users to create and enjoy user-generated content at an unprecedented scale. 

DEPT®’s internal project, Prepare to Pioneer, epitomises its commitment to innovation by nurturing embryonic ideas within its ‘Greenhouse’. DEPT® hones concepts to withstand the rigours of the external world, ensuring only the most robust ideas reach their clients.

“We have this internal project called The Greenhouse where we take these seeds of ideas and try to grow them into something that’s tough enough to handle the external world,” says Briski. “The ones that do survive are much more ready to use with our clients.”

While the allure of AI-driven solutions is undeniable, Briski underscored the need for caution. He warns that AI is not inherently transparent and trustworthy and emphasises the imperative of constructing robust foundations for quality assurance.

DEPT® employs automated testing to ensure responses align with expectations. Briski also stressed the importance of setting stringent parameters to guide AI conversations, ensuring alignment with the company’s ethos and desired consumer interactions.

“It’s important to really keep focused on the exact conversation you want to have with your consumer or your customer and put really strict guardrails around how you would like the model to answer those questions,” explains Briski.
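As a rough illustration of what such automated testing and guardrails can look like in practice (a generic sketch, not DEPT®’s actual tooling; the `generate` function stands in for whichever LLM client is in use, and the banned phrases and test cases are hypothetical):

```python
# Generic sketch of automated guardrail testing for an LLM assistant.
# Illustrative only; `generate` stands in for any LLM client callable.
import re

BANNED_PATTERNS = [
    re.compile(r"\bguaranteed (returns|results)\b", re.I),
    re.compile(r"\bas an ai language model\b", re.I),
]
MAX_REPLY_CHARS = 1200

def check_guardrails(reply: str) -> list[str]:
    """Return a list of guardrail violations (empty list means the reply passes)."""
    problems = []
    if len(reply) > MAX_REPLY_CHARS:
        problems.append("reply exceeds length limit")
    for pattern in BANNED_PATTERNS:
        if pattern.search(reply):
            problems.append(f"banned phrase matched: {pattern.pattern}")
    return problems

# Regression cases pin expected behaviour so prompt or model changes
# that alter answers are caught automatically.
TEST_CASES = {
    "What is your refund policy?": "refund",
    "Can you diagnose my rash?": "consult",   # expect a deferral to a professional
}

def run_suite(generate) -> None:
    for prompt, must_contain in TEST_CASES.items():
        reply = generate(prompt)
        assert must_contain.lower() in reply.lower(), f"drifted on: {prompt!r}"
        assert not check_guardrails(reply), f"guardrail breach on: {prompt!r}"
```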

In December, DEPT® is sponsoring AI & Big Data Expo Global and will be in attendance to share its unique insights. Briski is a speaker at the event and will be providing a deep dive into business intelligence (BI), illuminating strategies to enhance responsiveness through large language models.

“I’ll be diving into how we can transform BI to be much more responsive to the business, especially with the help of large language models,” says Briski.

As DEPT® continues to redefine digital paradigms, we look forward to observing how the company’s innovations deliver a new era in AI-powered experiences.

DEPT® is a key sponsor of this year’s AI & Big Data Expo Global on 30 Nov – 1 Dec 2023. Swing by DEPT®’s booth to hear more about AI and LLMs from the company’s experts and watch Briski’s day one presentation.

Nightshade ‘poisons’ AI models to fight copyright theft

University of Chicago researchers have unveiled Nightshade, a tool designed to disrupt AI models attempting to learn from artistic imagery.

The tool – still in its developmental phase – allows artists to protect their work by subtly altering pixels in images, rendering them imperceptibly different to the human eye but confusing to AI models.

Many artists and creators have expressed concern over the use of their work in training commercial AI products without their consent.

AI models rely on vast amounts of multimedia data – including written material and images, often scraped from the web – to function effectively. Nightshade offers a potential solution by sabotaging this data.

When integrated into digital artwork, Nightshade misleads AI models, causing them to misidentify objects and scenes.

For instance, Nightshade transformed images of dogs into data that appeared to AI models as cats. After exposure to a mere 100 poisoned samples, the AI reliably generated a cat when asked for a dog, demonstrating the tool’s effectiveness.

This technique not only confuses AI models but also challenges the fundamental way in which generative AI operates. By exploiting the clustering of similar words and ideas in AI models, Nightshade can manipulate responses to specific prompts and further undermine the accuracy of AI-generated content.
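The sketch below illustrates the general idea of an imperceptible, feature-shifting perturbation using a toy linear feature extractor. It is illustrative only: Nightshade’s actual technique targets real generative models and is considerably more sophisticated, and every array here is a synthetic stand-in.

```python
# Toy sketch of feature-shifting data poisoning (NOT Nightshade's algorithm):
# nudge pixels within an imperceptible budget so a toy linear feature
# extractor maps the image closer to a different concept.
import numpy as np

rng = np.random.default_rng(0)

W = rng.normal(size=(8, 3072))          # toy stand-in for a feature extractor
dog_image = rng.random(3072)            # stand-in for a dog photo (flattened)
cat_features = rng.normal(size=8)       # hypothetical "cat" concept anchor

eps = 4 / 255                           # max per-pixel change: invisible to the eye
x = dog_image.copy()
for _ in range(200):
    # Gradient of || W @ x - cat_features ||^2 with respect to x.
    grad = 2 * W.T @ (W @ x - cat_features)
    x -= 1e-5 * grad                                   # step toward "cat" features
    x = np.clip(x, dog_image - eps, dog_image + eps)   # keep change imperceptible
    x = np.clip(x, 0.0, 1.0)                           # keep pixel values valid

print("max pixel change:", np.abs(x - dog_image).max())              # <= eps
print("distance before:", np.linalg.norm(W @ dog_image - cat_features))
print("distance after: ", np.linalg.norm(W @ x - cat_features))
```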

Developed by computer science professor Ben Zhao and his team, Nightshade is an extension of their prior product, Glaze, which cloaks digital artwork and distorts pixels to baffle AI models regarding artistic style.

While the potential for misuse of Nightshade is acknowledged, the researchers’ primary objective is to shift the balance of power from AI companies back to artists and discourage intellectual property violations.

The introduction of Nightshade presents a major challenge to AI developers. Detecting and removing images with poisoned pixels is a complex task, given the imperceptible nature of the alterations.

If poisoned images have already been ingested into an AI training dataset, they must be identified and removed, and affected models potentially retrained – a substantial hurdle for companies relying on stolen or unauthorised data.

As the researchers await peer review of their work, Nightshade is a beacon of hope for artists seeking to protect their creative endeavours.

(Photo by Josie Weiss on Unsplash)

See also: UMG files landmark lawsuit against AI developer Anthropic

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Jaromir Dzialo, Exfluency: How companies can benefit from LLMs

Can you tell us a little bit about Exfluency and what the company does?

Exfluency is a tech company providing hybrid intelligence solutions for multilingual communication. By harnessing AI and blockchain technology we provide tech-savvy companies with access to modern language tools. Our goal is to make linguistic assets as precious as any other corporate asset.

What tech trends have you noticed developing in the multilingual communication space?

As in every other walk of life, AI in general – and ChatGPT specifically – is dominating the agenda. Companies operating in the language space are either panicking or scrambling to play catch-up. The main challenge is the size of the tech deficit in this vertical: innovation, and especially AI innovation, is not a plug-in.

What are some of the benefits of using LLMs?

Off-the-shelf LLMs (ChatGPT, Bard, etc.) have a quick-fix attraction. Magically, it seems, well-formulated answers appear on your screen. One cannot fail to be impressed.

The true benefits of LLMs will be realised by the players who can provide immutable data with which to feed the models. They are what we feed them.

What do LLMs rely on when learning language?

Overall, LLMs learn language by analysing vast amounts of text data, understanding patterns and relationships, and using statistical methods to generate contextually appropriate responses. Their ability to generalise from data and generate coherent text makes them versatile tools for various language-related tasks.

Large Language Models (LLMs) like GPT-4 rely on a combination of data, pattern recognition, and statistical relationships to learn language. Here are the key components they rely on:

  1. Data: LLMs are trained on vast amounts of text data from the internet. This data includes a wide range of sources, such as books, articles, websites, and more. The diverse nature of the data helps the model learn a wide variety of language patterns, styles, and topics.
  2. Patterns and Relationships: LLMs learn language by identifying patterns and relationships within the data. They analyse the co-occurrence of words, phrases, and sentences to understand how they fit together grammatically and semantically.
  3. Statistical Learning: LLMs use statistical techniques to learn the probabilities of word sequences. They estimate the likelihood of a word appearing given the previous words in a sentence, which enables them to generate coherent and contextually relevant text (a toy version of this idea is sketched after this list).
  4. Contextual Information: LLMs focus on contextual understanding. They consider not only the preceding words but also the entire context of a sentence or passage. This contextual information helps them disambiguate words with multiple meanings and produce more accurate and contextually appropriate responses.
  5. Attention Mechanisms: Many LLMs, including GPT-4, employ attention mechanisms. These mechanisms allow the model to weigh the importance of different words in a sentence based on the context. This helps the model focus on relevant information while generating responses.
  6. Transfer Learning: LLMs use a technique called transfer learning. They are pretrained on a large dataset and then fine-tuned on specific tasks. This allows the model to leverage its broad language knowledge from pretraining while adapting to perform specialised tasks like translation, summarisation, or conversation.
  7. Encoder-Decoder Architecture: In certain tasks like translation or summarisation, LLMs use an encoder-decoder architecture. The encoder processes the input text and converts it into a context-rich representation, which the decoder then uses to generate the output text in the desired language or format.
  8. Feedback Loop: LLMs can learn from user interactions. When a user provides corrections or feedback on generated text, the model can adjust its responses based on that feedback over time, improving its performance.
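As a toy illustration of the statistical idea in items 2 and 3, the sketch below estimates next-word probabilities from simple co-occurrence counts (a bigram model). Real LLMs learn these relationships with neural networks over subword tokens rather than count tables:

```python
# Bigram model: estimate P(next word | current word) from co-occurrence counts.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

bigram_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    bigram_counts[current][following] += 1

def next_word_probs(word: str) -> dict[str, float]:
    counts = bigram_counts[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probs("the"))  # {'cat': 0.25, 'mat': 0.25, 'dog': 0.25, 'rug': 0.25}
print(next_word_probs("sat"))  # {'on': 1.0}
```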

What are some of the challenges of using LLMs?

A fundamental issue, which has been there ever since we started giving away data to Google, Facebook and the like, is that “we” are the product. The big players are earning untold billions on our rush to feed their apps with our data. ChatGPT, for example, is enjoying the fastest-growing onboarding in history. Just think how Microsoft has benefitted from the millions of prompts people have already thrown at it.

The open LLMs hallucinate and, because answers to prompts are so well-formulated, one can easily be duped into believing what they tell you. To make matters worse, there are no references or links to tell you where they sourced their answers.

How can these challenges be overcome?

LLMs are what we feed them. Blockchain technology allows us to create an immutable audit trail and with it immutable, clean data. No need to trawl the internet. In this manner we are in complete control of what data is going in, can keep it confidential, and support it with a wealth of useful meta data. It can also be multilingual!

Secondly, as this data is stored in our databases, we can also provide the necessary source links. If you can’t quite believe the answer to your prompt, open the source data directly to see who wrote it, when, in which language and which context.
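A minimal sketch of the underlying idea, a hash chain in which any later edit invalidates every subsequent entry, is shown below. This is illustrative only: Exfluency’s production system is not described at this level of detail, and real blockchain deployments add signatures, consensus, and distribution. The sample record fields are hypothetical.

```python
# Minimal tamper-evident ("immutable") audit trail via a hash chain.
import hashlib
import json

def _digest(body: dict) -> str:
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def add_entry(chain: list, record: dict) -> None:
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"record": record, "prev": prev_hash}
    chain.append({**body, "hash": _digest(body)})

def verify(chain: list) -> bool:
    for i, block in enumerate(chain):
        body = {"record": block["record"], "prev": block["prev"]}
        if block["hash"] != _digest(body):
            return False                      # entry was edited after the fact
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False                      # chain was re-ordered or spliced
    return True

trail: list = []
add_entry(trail, {"source": "Guten Tag", "target": "Good day",
                  "author": "linguist_42", "pair": "de-en"})
print(verify(trail))                          # True
trail[0]["record"]["target"] = "tampered"     # any later edit breaks the chain
print(verify(trail))                          # False
```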

What advice would you give to companies that want to utilise private, anonymised LLMs for multilingual communication?

Make sure your data is immutable, multilingual, of a high quality – and stored for your eyes only. LLMs then become a true game changer.

What do you think the future holds for multilingual communication?

As in many other walks of life, language will embrace forms of hybrid intelligence. For example, in the Exfluency ecosystem, the AI-driven workflow takes care of 90% of the translation – our fantastic bilingual subject matter experts then only need to focus on the final 10%. This balance will change over time – AI will take an ever-increasing proportion of the workload. But the human input will remain crucial. The concept is encapsulated in our strapline: Powered by technology, perfected by people.

What plans does Exfluency have for the coming year?

Lots! We aim to roll out the tech to new verticals and build communities of SMEs to serve them. There is also great interest in our Knowledge Mining app, designed to leverage the information hidden away in the millions of linguistic assets. 2024 is going to be exciting!

  • Jaromir Dzialo is the co-founder and CTO of Exfluency, which offers affordable AI-powered language and security solutions with global talent networks for organisations of all sizes.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

UMG files landmark lawsuit against AI developer Anthropic

Universal Music Group (UMG) has filed a lawsuit against Anthropic, the developer of Claude AI.

This landmark case represents the first major legal battle in which the music industry confronts an AI developer head-on. UMG – along with several other key industry players including Concord Music Group, ABKCO, Worship Together Music, and Capitol CMG – is seeking $75 million in damages.

The lawsuit centres around the alleged unauthorised use of copyrighted music by Anthropic to train its AI models. The publishers claim that Anthropic illicitly incorporated songs from artists they represent into its AI dataset without obtaining the necessary permissions.

Legal representatives for the publishers have asserted that the action was taken to address the “systematic and widespread infringement” of copyrighted song lyrics by Anthropic.

The lawsuit, spanning 60 pages and posted online by The Hollywood Reporter, emphasises the publishers’ support for innovation and ethical AI use. However, they contend that Anthropic has violated these principles and must be held accountable under established copyright laws.

Anthropic, despite positioning itself as an AI ‘safety and research’ company, stands accused of copyright infringement without regard for the law or the creative community whose works underpin its services, according to the lawsuit.

In addition to the significant monetary damages, the publishers have demanded a jury trial. They also seek reimbursement for legal fees, the destruction of all infringing material, public disclosure of how Anthropic’s AI model was trained, and financial penalties of up to $150,000 per infringed work.

This latest lawsuit follows a string of legal battles between AI developers and creators. Each new case is worth watching for the precedent it may set for future disputes.

(Photo by Jason Rosewell on Unsplash)

See also: BSI: Closing ‘AI confidence gap’ key to unlocking benefits

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Enterprises struggle to address generative AI’s security implications

In a recent study, cloud-native network detection and response firm ExtraHop unveiled a concerning trend: enterprises are struggling with the security implications of employee generative AI use.

Their new research report, The Generative AI Tipping Point, sheds light on the challenges faced by organisations as generative AI technology becomes more prevalent in the workplace.

The report delves into how organisations are dealing with the use of generative AI tools, revealing a significant cognitive dissonance among IT and security leaders. Astonishingly, 73 percent of these leaders confessed that their employees frequently use generative AI tools or large language models (LLMs) at work. Despite this, a staggering majority admitted to being uncertain about how to effectively address the associated security risks.

When questioned about their concerns, IT and security leaders expressed more worry about the possibility of inaccurate or nonsensical responses (40%) than critical security issues such as exposure of customer and employee personal identifiable information (PII) (36%) or financial loss (25%).

Raja Mukerji, Co-Founder and Chief Scientist at ExtraHop, said: “By blending innovation with strong safeguards, generative AI will continue to be a force that will uplevel entire industries in the years to come.”

One of the startling revelations from the study was the ineffectiveness of generative AI bans. About 32 percent of respondents stated that their organisations had prohibited the use of these tools. However, only five percent reported that employees never used them, indicating that bans alone are not enough to curb usage.

The study also highlighted a clear desire for guidance, particularly from government bodies. A significant 90 percent of respondents expressed the need for government involvement, with 60 percent advocating for mandatory regulations and 30 percent supporting government standards for businesses to adopt voluntarily.

Despite a sense of confidence in their current security infrastructure, the study revealed gaps in basic security practices.

While 82 percent felt confident in their security stack’s ability to protect against generative AI threats, less than half had invested in technology to monitor generative AI use. Alarmingly, only 46 percent had established policies governing acceptable use and merely 42 percent provided training to users on the safe use of these tools.

The findings come in the wake of the rapid adoption of technologies like ChatGPT, which have become an integral part of modern businesses. Business leaders are urged to understand their employees’ generative AI usage to identify potential security vulnerabilities.

You can find a full copy of the report here.

(Photo by Hennie Stander on Unsplash)

See also: BSI: Closing ‘AI confidence gap’ key to unlocking benefits

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

BSI: Closing ‘AI confidence gap’ key to unlocking benefits

The UK’s potential to harness the benefits of AI in crucial sectors such as healthcare, food safety, and sustainability is under threat due to a significant “confidence gap” among the public.

According to a study conducted by BSI, 54 percent of UK respondents expressed excitement about AI’s potential to revolutionise medical diagnoses and 43 percent welcomed AI’s role in reducing food waste. However, there is a prevailing lack of trust.

This scepticism could hinder the integration of AI technologies in the NHS, which is currently grappling with challenges like the COVID-19 backlog and an ageing population. Almost half of Britons (49%) support the use of AI to alleviate pressure on the healthcare system and reduce waiting times. However, only 20 percent have more confidence in AI than humans in detecting food contamination issues.

The study also highlighted a pressing need for education, as 65 percent of respondents felt patients should be informed about the use of AI tools in diagnosis or treatment. Meanwhile, 37 percent of respondents expect to be using AI regularly in medical settings by 2030.

Craig Civil, Director of Data Science and AI at BSI, said:

“The magnitude of ways AI can shape the UK’s future means we are seeing some degree of hesitation of the unknown. This can be addressed by developing greater understanding and recognition that human involvement will always be needed if we are to make the best use of this technology, and by ensuring we have frameworks that are in place to govern its use and build trust.

Now is the moment for the UK to collaborate to balance the great power of this tool with the realities of actually using it in a credible, authentic, well-executed, and well-governed way.

Closing the confidence gap and building the appropriate checks and balances can enable us to make not just good but great use of AI in every area of life and society.”

Some 60 percent believed consumers needed protections regarding AI technologies. The study also revealed that 61 percent of Britons are calling for international guidelines to ensure the safe use of AI. This demand reflects a global sentiment, with 50 percent of respondents highlighting the need for ethical safeguards on patient data use.

Harold Pradal, Chief Commercial Officer at BSI, commented:

“AI is a transformational technology. For it to be a powerful force for good, trust needs to be the critical factor. There is a clear opportunity to harness AI to drive societal impact, change lives, and accelerate progress towards a better future and a sustainable world.

Closing the AI confidence gap is the first necessary step, it has to be delivered through education to help realise AI’s benefits and shape Society 5.0 in a positive way.”

The study’s findings are a call to action for the UK, urging collaboration and the establishment of frameworks to govern AI’s use.

The UK Government, recognising the importance of safe AI implementation, is set to host a global AI Safety Summit at the historic Bletchley Park on 1-2 November 2023. BSI is an official partner for the much-anticipated event.

(Photo by Suad Kamardeen on Unsplash)

See also: UK reveals AI Safety Summit opening day agenda

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.
