Dell, Intel and University of Cambridge deploy the UK’s fastest AI supercomputer (2 November 2023)

Dell, Intel, and the University of Cambridge have jointly announced the deployment of the Dawn Phase 1 supercomputer.

This cutting-edge AI supercomputer stands as the fastest of its kind in the UK today. It marks a groundbreaking fusion of AI and high-performance computing (HPC) technologies, showcasing the potential to tackle some of the world’s most pressing challenges.

Dawn Phase 1 is the cornerstone of the recently launched UK AI Research Resource (AIRR), demonstrating the nation’s commitment to exploring innovative systems and architectures.

This supercomputer brings the UK closer to the exascale: a computing threshold of a quintillion (10^18) floating point operations per second. To put this into perspective, if every person on Earth worked non-stop, 24 hours a day, performing one calculation per second, it would take over four years to match what an exascale system can process in a single second.
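
That comparison checks out as simple arithmetic. A back-of-the-envelope sketch, assuming roughly eight billion people each performing one calculation per second:

    # Rough check: how long would humanity take to match one exascale-second?
    EXA = 10**18                  # operations per second at exascale
    population = 8_000_000_000    # approximate world population (assumption)
    seconds = EXA / population    # seconds of whole-population effort needed
    years = seconds / (60 * 60 * 24 * 365)
    print(f"{years:.2f} years")   # ~3.96, i.e. about four years
                                  # (the exact figure depends on the population assumed)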

Operational at the Cambridge Open Zettascale Lab, Dawn is built on Dell PowerEdge XE9640 servers hosting Intel Data Center GPU Max Series accelerators, with the oneAPI open programming model opening the ecosystem to a choice of hardware and software.

The system’s capabilities extend across various domains, including healthcare, engineering, green fusion energy, climate modelling, cosmology, and high-energy physics.

Adam Roe, EMEA HPC technical director at Intel, said:

“Dawn considerably strengthens the scientific and AI compute capability available in the UK and it’s on the ground and operational today at the Cambridge Open Zettascale Lab.

Dell PowerEdge XE9640 servers offer a no-compromises platform to host the Intel Data Center GPU Max Series accelerator, which opens up the ecosystem to choice through oneAPI.

I’m very excited to see the sorts of early science this machine can deliver and continue to strengthen the Open Zettascale Lab partnership between Dell Technologies, Intel, and the University of Cambridge, and further broaden that to the UK scientific and AI community.”

Glimpse into the future

Dawn Phase 1 is not just a standalone achievement; it’s part of a broader strategy.

The collaborative endeavour aims to deliver a Phase 2 supercomputer in 2024, promising a tenfold increase in performance. This progression would significantly expand the UK’s AI capability and strengthen the partnership between industry and academia.

The supercomputer’s technical foundation lies in Dell PowerEdge XE9640 servers, which combine versatile configurations with efficient liquid cooling to handle AI and HPC workloads more effectively than traditional air-cooled systems.

Tariq Hussain, Head of UK Public Sector at Dell, commented:

“Collaborations like the one between the University of Cambridge, Dell Technologies and Intel, alongside strong inward investment, are vital if we want the compute to unlock the high-growth AI potential of the UK. It is paramount that the government invests in the right technologies and infrastructure to ensure the UK leads in AI and exascale-class simulation capability.

It’s also important to embrace the full spectrum of the technology ecosystem, including GPU diversity, to ensure customers can tackle the growing demands of generative AI, industrial simulation modelling and ground-breaking scientific research.”

As the world awaits the full technical details and performance numbers of Dawn Phase 1 – slated for release in mid-November during the Supercomputing 23 (SC23) conference in Denver, Colorado – the UK stands on the cusp of a transformative era in scientific and AI research.

This collaboration between industry giants and academia not only accelerates research discovery but also propels the UK’s knowledge economy to new heights.

(Image Credit: Joe Bishop for Cambridge Open Zettascale Lab)

Biden issues executive order to ensure responsible AI development (30 October 2023)

President Biden has issued an executive order aimed at positioning the US at the forefront of AI while ensuring the technology’s safe and responsible use.

The order establishes stringent standards for AI safety and security, safeguards Americans’ privacy, promotes equity and civil rights, protects consumers and workers, fosters innovation and competition, and enhances American leadership on the global stage.

Key actions outlined in the order:

  1. New standards for AI safety and security: The order mandates that developers of powerful AI systems share safety test results and critical information with the US government. Rigorous standards, tools, and tests will be developed to ensure AI systems are safe, secure, and trustworthy before public release. Additionally, measures will be taken to protect against the risks of using AI to engineer dangerous biological materials and to combat AI-enabled fraud and deception.
  2. Protecting citizens’ privacy: The President calls on Congress to pass bipartisan data privacy legislation, prioritising federal support for privacy-preserving techniques, especially those using AI. Guidelines will be developed for federal agencies to evaluate the effectiveness of privacy-preserving techniques, including those used in AI systems.
  3. Advancing equity and civil rights: Clear guidance will be provided to prevent AI algorithms from exacerbating discrimination, especially in areas like housing and federal benefit programs. Best practices will be established for the use of AI in the criminal justice system to ensure fairness.
  4. Standing up for consumers, patients, and students: Responsible use of AI in healthcare and education will be promoted, ensuring that consumers are protected from harmful AI applications while benefiting from its advancements in these sectors.
  5. Supporting workers: Principles and best practices will be developed to mitigate the harms and maximise the benefits of AI for workers, addressing issues such as job displacement, workplace equity, and health and safety. A report on AI’s potential labour-market impacts will be produced, identifying options for strengthening federal support for workers facing labour disruptions due to AI.
  6. Promoting innovation and competition: The order aims to catalyse AI research across the US, promote a fair and competitive AI ecosystem, and expand the ability of highly skilled immigrants and non-immigrants to study, stay, and work in the US to foster innovation in the field.
  7. Advancing leadership abroad: The US will collaborate with other nations to establish international frameworks for safe and trustworthy AI deployment. Efforts will be made to accelerate the development and implementation of vital AI standards with international partners and promote the responsible development and deployment of AI abroad to address global challenges.
  8. Ensuring responsible and effective government adoption: Clear standards and guidelines will be issued for government agencies’ use of AI to protect rights and safety. Efforts will be made to help agencies acquire AI products and services more rapidly and efficiently, and an AI talent surge will be initiated to enhance government capacity in AI-related fields.

The executive order marks a major step forward in the US effort to harness the potential of AI while safeguarding individuals’ rights and security.

“As we advance this agenda at home, the Administration will work with allies and partners abroad on a strong international framework to govern the development and use of AI,” wrote the White House in a statement.

“The actions that President Biden directed today are vital steps forward in the US’ approach on safe, secure, and trustworthy AI. More action will be required, and the Administration will continue to work with Congress to pursue bipartisan legislation to help America lead the way in responsible innovation.”

The administration’s commitment to responsible innovation is paramount and sets the stage for continued collaboration with international partners to shape the future of AI globally.

(Photo by David Everett Strickler on Unsplash)

Nightshade ‘poisons’ AI models to fight copyright theft (24 October 2023)

University of Chicago researchers have unveiled Nightshade, a tool designed to disrupt AI models attempting to learn from artistic imagery.

The tool – still in its developmental phase – allows artists to protect their work by subtly altering pixels in images, rendering them imperceptibly different to the human eye but confusing to AI models.

Many artists and creators have expressed concern over the use of their work in training commercial AI products without their consent.

AI models rely on vast amounts of multimedia data – including written material and images, often scraped from the web – to function effectively. Nightshade offers a potential solution by sabotaging this data.

When integrated into digital artwork, Nightshade misleads AI models, causing them to misidentify objects and scenes.

For instance, Nightshade transformed images of dogs into data that appeared to AI models as cats. After exposure to a mere 100 poison samples, the AI reliably generated a cat when asked for a dog – demonstrating the tool’s effectiveness.

This technique not only confuses AI models but also challenges the fundamental way in which generative AI operates. By exploiting the clustering of similar words and ideas in AI models, Nightshade can manipulate responses to specific prompts and further undermine the accuracy of AI-generated content.
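
The researchers describe Nightshade’s full method in their paper, which is under peer review. As a rough conceptual sketch only – not Nightshade’s actual algorithm – the general idea of an imperceptible perturbation that shifts how an encoder “sees” an image can be illustrated with a toy linear model (all dimensions, data, and names here are hypothetical stand-ins):

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy stand-in for a frozen image encoder: features = W @ pixels.
    DIM, FEAT = 3072, 64                  # e.g. a flattened 32x32x3 image
    W = rng.normal(size=(FEAT, DIM)) / np.sqrt(DIM)

    dog_img = rng.uniform(0.0, 1.0, DIM)  # the artwork to protect
    cat_img = rng.uniform(0.0, 1.0, DIM)  # an anchor image of the target concept
    target = W @ cat_img                  # features the poisoned image should mimic

    eps = 0.05                            # per-pixel perturbation budget
    delta = np.zeros(DIM)

    # Minimise ||W(dog + delta) - target||^2 by gradient descent,
    # clipping delta so the change stays visually negligible.
    for _ in range(500):
        residual = W @ (dog_img + delta) - target
        delta = np.clip(delta - 0.1 * (2.0 * W.T @ residual), -eps, eps)

    poisoned = np.clip(dog_img + delta, 0.0, 1.0)
    before = np.linalg.norm(W @ dog_img - target)
    after = np.linalg.norm(W @ poisoned - target)
    print(f"max pixel change: {np.abs(poisoned - dog_img).max():.3f}")  # <= eps
    print(f"feature distance to 'cat': {before:.2f} -> {after:.2f}")    # shrinks sharply

In a real attack the encoder is a deep network and the optimisation is correspondingly harder, but the principle is the same: the pixels barely move while the model’s internal representation of the image shifts towards another concept.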

Developed by computer science professor Ben Zhao and his team, Nightshade is an extension of their prior tool, Glaze, which cloaks digital artwork by distorting pixels to prevent AI models from learning an artist’s style.

While the potential for misuse of Nightshade is acknowledged, the researchers’ primary objective is to shift the balance of power from AI companies back to artists and discourage intellectual property violations.

The introduction of Nightshade presents a major challenge to AI developers. Detecting and removing images with poisoned pixels is a complex task, given the imperceptible nature of the alterations.

If poisoned images are integrated into existing AI training datasets, they must be identified and removed, and affected models potentially retrained – a substantial hurdle for companies relying on stolen or unauthorised data.

As the researchers await peer review of their work, Nightshade is a beacon of hope for artists seeking to protect their creative endeavours.

(Photo by Josie Weiss on Unsplash)

UMG files landmark lawsuit against AI developer Anthropic (19 October 2023)

Universal Music Group (UMG) has filed a lawsuit against Anthropic, the developer of Claude AI.

This landmark case represents the first major legal battle in which the music industry confronts an AI developer head-on. UMG – along with several other key industry players, including Concord Music Group, ABKCO, Worship Together Music, and Capitol CMG – is seeking $75 million in damages.

The lawsuit centres around the alleged unauthorised use of copyrighted music by Anthropic to train its AI models. The publishers claim that Anthropic illicitly incorporated songs from artists they represent into its AI dataset without obtaining the necessary permissions.

Legal representatives for the publishers have asserted that the action was taken to address the “systematic and widespread infringement” of copyrighted song lyrics by Anthropic.

The lawsuit, spanning 60 pages and posted online by The Hollywood Reporter, emphasises the publishers’ support for innovation and ethical AI use. However, they contend that Anthropic has violated these principles and must be held accountable under established copyright laws.

Anthropic, despite positioning itself as an AI ‘safety and research’ company, stands accused of copyright infringement without regard for the law or the creative community whose works underpin its services, according to the lawsuit.

In addition to the significant monetary damages, the publishers have demanded a jury trial. They also seek reimbursement for legal fees, the destruction of all infringing material, public disclosure of how Anthropic’s AI model was trained, and financial penalties of up to $150,000 per infringed work.

This latest lawsuit follows a string of legal battles between AI developers and creators. Each new case is worth watching for the precedent it may set for future disputes.

(Photo by Jason Rosewell on Unsplash)

OpenAI reveals DALL-E 3 text-to-image model (21 September 2023)

OpenAI has announced DALL-E 3, the third iteration of its acclaimed text-to-image model. 

DALL-E 3 promises significant enhancements over its predecessors and introduces seamless integration with ChatGPT.

One of the standout features of DALL-E 3 is its ability to better understand and interpret user intentions when confronted with detailed and lengthy prompts.

Even if a user struggles to articulate their vision precisely, ChatGPT can step in to assist in crafting comprehensive prompts.

DALL-E 3 has been engineered to excel in creating elements that its predecessors and other AI generators have historically struggled with, such as rendering intricate depictions of hands and incorporating text into images.

OpenAI has also implemented robust security measures, ensuring the AI system refrains from generating explicit or offensive content by identifying and ignoring certain keywords in prompts.

Beyond technical advancements, OpenAI has taken steps to mitigate potential legal issues. 

While the current DALL-E version can mimic the styles of living artists, the forthcoming DALL-E 3 has been designed to decline requests to replicate their copyrighted works. Artists will also have the option to submit their original creations through a dedicated form on the OpenAI website, allowing them to request removal if necessary.

OpenAI’s rollout plan for DALL-E 3 involves an initial release to ChatGPT ‘Plus’ and ‘Enterprise’ customers next month. The enhanced image generator will then become available to OpenAI’s research labs and API customers in the autumn.

As OpenAI continues to push the boundaries of AI technology, DALL-E 3 represents a major step forward in text-to-image generation.

(Image Credit: OpenAI)

CMA sets out principles for responsible AI development (19 September 2023)

The Competition and Markets Authority (CMA) has set out its principles to ensure the responsible development and use of foundation models (FMs).

FMs are versatile AI systems with the potential to revolutionise various sectors, from information access to healthcare. The CMA’s report, published today, outlines a set of guiding principles aimed at safeguarding consumer protection and fostering healthy competition within this burgeoning industry.

Foundation models – known for their adaptability to diverse applications – have witnessed rapid adoption across various user platforms, including familiar names like ChatGPT and Office 365 Copilot. These AI systems possess the power to drive innovation and stimulate economic growth, promising transformative changes across sectors and industries.

Sarah Cardell, CEO of the CMA, emphasised the urgency of proactive intervention in the AI sector:

“The speed at which AI is becoming part of everyday life for people and businesses is dramatic. There is real potential for this technology to turbocharge productivity and make millions of everyday tasks easier – but we can’t take a positive future for granted.

That’s why we have today proposed these new principles and launched a broad programme of engagement to help ensure the development and use of foundation models evolves in a way that promotes competition and protects consumers.

While I hope that our collaborative approach will help realise the maximum potential of this new technology, we are ready to intervene where necessary.”

Research from Earlybird reveals that Britain houses the largest number of AI startups in Europe. The CMA’s report underscores the immense benefits that can accrue if the development and use of FMs are managed effectively.

These advantages include the emergence of superior products and services, enhanced access to information, breakthroughs in science and healthcare, and even lower prices for consumers. Additionally, a vibrant FM market could open doors for a wider range of businesses to compete successfully, challenging established market leaders. This competition and innovation, in turn, could boost the overall economy, fostering increased productivity and economic growth.

Claire Trachet, tech industry expert and CEO of business advisory Trachet, said:

“With the [UK-hosted] global AI Safety Summit around the corner, the announcement of these principles shows the public and investors that the UK is committed to regulating AI safely. To continue this momentum, it’s important for the UK to strike a balance in creating effective regulation without stifling growing innovation and investment. 

Ensuring that regulation is both well-designed and effective will help attract and maintain investment in the UK by creating a stable, secure, and trustworthy business environment that appeals to domestic and international investors.” 

The CMA’s report also sounds a cautionary note. It highlights the potential risks if competition remains weak or if developers neglect consumer protection regulations. Such lapses could expose individuals and businesses to significant levels of false information and AI-driven fraud. In the long run, a handful of dominant firms might exploit FMs to consolidate market power, offering subpar products or services at exorbitant prices.

While the scope of the CMA’s initial review focused primarily on competition and consumer protection concerns, it acknowledges that other important questions related to FMs, such as copyright, intellectual property, online safety, data protection, and security, warrant further examination.

Sridhar Iyengar, Managing Director of Zoho Europe, commented:

“The safe development of AI has been a central focus of UK policy and will continue to play a significant role in the UK’s ambitions of leading the global AI race. While there is public concern over the trustworthiness of AI, we shouldn’t lose sight of the business benefits that it provides, such as forecasting and improved data analysis, and work towards a solution.

Collaboration between businesses, government, academia and industry experts is crucial to strike a balance between safe regulations and guidance that can lead to the positive development and use of innovative business AI tools.

AI is going to move forward with or without the UK, so it’s best to take the lead on research and development to ensure its safe evolution.”

The proposed guiding principles, unveiled by the CMA, aim to steer the ongoing development and use of FMs, ensuring that people, businesses, and the economy reap the full benefits of innovation and growth. Drawing inspiration from the evolution of other technology markets, these principles seek to guide FM developers and deployers in the following key areas:

  • Accountability: Developers and deployers are accountable for the outputs provided to consumers.
  • Access: Ensuring ongoing access to essential inputs without unnecessary restrictions.
  • Diversity: Encouraging a sustained diversity of business models, including both open and closed approaches.
  • Choice: Providing businesses with sufficient choices to determine how to utilise FMs effectively.
  • Flexibility: Allowing the flexibility to switch between or use multiple FMs as needed.
  • Fairness: Prohibiting anti-competitive conduct, including self-preferencing, tying, or bundling.
  • Transparency: Offering consumers and businesses information about the risks and limitations of FM-generated content to enable informed choices.

Over the next few months, the CMA plans to engage extensively with a diverse range of stakeholders both within the UK and internationally to further develop these principles. This collaborative effort aims to support the positive growth of FM markets, fostering effective competition and consumer protection.

Gareth Mills, Partner at law firm Charles Russell Speechlys, said:

“The principles themselves are clearly aimed at facilitating a dynamic sector with low entry requirements that allows smaller players to compete effectively with more established names, whilst at the same time mitigating against the potential for AI technologies to have adverse consequences for consumers.

The report itself notes that, although the CMA has established a number of core principles, there is still work to do and that stakeholder feedback – both within the UK and internationally – will be required before a formal policy and regulatory position can be definitively established.

As the utilisation of the technologies grows, the extent to which there is any inconsistency between competition objectives and government strategy will be fleshed out.”

An update on the CMA’s progress and the reception of these principles will be published in early 2024, reflecting the authority’s commitment to shaping AI markets in ways that benefit people, businesses, and the UK economy as a whole.

(Photo by JESHOOTS.COM on Unsplash)

MLPerf Inference v3.1 introduces new LLM and recommendation benchmarks (12 September 2023)

The latest release of MLPerf Inference introduces new LLM and recommendation benchmarks, marking a leap forward in the realm of AI testing.

The v3.1 iteration of the benchmark suite has seen record participation, boasting over 13,500 performance results and delivering up to a 40 percent improvement in performance. 

What sets this achievement apart is the diverse pool of 26 different submitters and over 2,000 power results, demonstrating the broad spectrum of industry players investing in AI innovation.

Among the list of submitters are tech giants like Google, Intel, and NVIDIA, as well as newcomers Connect Tech, Nutanix, Oracle, and TTA, who are participating in the MLPerf Inference benchmark for the first time.

David Kanter, Executive Director of MLCommons, highlighted the significance of this achievement:

“Submitting to MLPerf is not trivial. It’s a significant accomplishment, as this is not a simple point-and-click benchmark. It requires real engineering work and is a testament to our submitters’ commitment to AI, to their customers, and to ML.”

MLPerf Inference is a critical benchmark suite that measures the speed at which AI systems can execute models in various deployment scenarios. These scenarios span from the latest generative AI chatbots to the safety-enhancing features in vehicles, such as automatic lane-keeping and speech-to-text interfaces.

The spotlight of MLPerf Inference v3.1 shines on the introduction of two new benchmarks:

  • An LLM utilising the GPT-J reference model to summarise CNN news articles garnered submissions from 15 different participants, showcasing the rapid adoption of generative AI.
  • An updated recommender benchmark – refined to align more closely with industry practices – employs the DLRM-DCNv2 reference model and larger datasets, attracting nine submissions.

These new benchmarks are designed to push the boundaries of AI and ensure that industry-standard benchmarks remain aligned with the latest trends in AI adoption, serving as a valuable guide for customers, vendors, and researchers alike.

Mitchelle Rasquinha, co-chair of the MLPerf Inference Working Group, commented: “The submissions for MLPerf Inference v3.1 are indicative of a wide range of accelerators being developed to serve ML workloads.

“The current benchmark suite has broad coverage among ML domains, and the most recent addition of GPT-J is a welcome contribution to the generative AI space. The results should be very helpful to users when selecting the best accelerators for their respective domains.”

MLPerf Inference benchmarks primarily focus on datacenter and edge systems. The v3.1 submissions showcase various processors and accelerators across use cases in computer vision, recommender systems, and language processing.

The benchmark suite encompasses both open and closed submissions in the performance, power, and networking categories. Closed submissions employ the same reference model to ensure a level playing field across systems, while participants in the open division are permitted to submit a variety of models.

As AI continues to permeate various aspects of our lives, MLPerf’s benchmarks serve as vital tools for evaluating and shaping the future of AI technology.

The detailed results of MLPerf Inference v3.1 are available on the MLCommons website.

(Photo by Mauro Sbicego on Unsplash)

OutSystems: How AI-based development reduces backlogs (8 September 2023)

OutSystems may be best known for its low-code development platform expertise. But the company has steadily been moving to a specialism in AI-assisted software development – and the parallels between the two are evident.

In June, the company unveiled its generative AI roadmap, codenamed ‘Project Morpheus,’ with benefits including instant app generation using conversational prompts and an AI-powered app editor offering suggestions across the stack.  The mission remains clear: ‘developer productivity without trade-offs’, as founder and CEO Paulo Rosado puts it.

Project Morpheus, in the words of Nuno Carneiro, OutSystems AI product manager, is ‘the next generation of software development.’ “What we’re doing is building a completely new development experience, based on this premise that AI will give you suggestions. You do not have to code practically anything, and the AI is suggesting what to do,” says Carneiro.

“You have a What-You-See-Is-What-You-Get visual experience in terms of software development where you can change the application directly in your development environment. On top of that, AI gives you suggestions about what you might want to change so that you don’t need to code things manually.”

This means the artificial intelligence is there to tweak, rather than take over. The company’s main offering in the space to date has been the OutSystems AI Mentor System. From code, to architecture, to performance, the developer is in control, but always has an on-call expert to hand.

Scepticism is naturally there, as it was with the rise of low-code platforms. But having slayed the dragon once before, is the job easier this time? “We see the same patterns of people being sceptical of AI in software development,” explains Carneiro. “We’ve been through this process of educating and showing the value of automation in software development before. We now feel like we’re in a good spot to communicate the current transformation in the industry due to the rise of AI.”

The key factor is that the OutSystems platform guards against some of the less salubrious aspects of artificial intelligence technology. Hallucination – where an AI confidently gives an incorrect response – and creating code riddled with vulnerabilities are just two of the pitfalls which could result if given full control. This is where the parallels between low-code and AI-assisted software development are especially striking; even if the code has been generated by AI, you can visually understand what you are building.

“The solutions we see out there at the moment still don’t solve this problem,” says Carneiro. “Because if AI is just writing a bunch of code automatically, and the person in charge of seeing the code and building it doesn’t understand what’s behind it, that’s not going to be a solution for any serious organisation to use. Low-code solves this problem with its visual development experience and the AI Mentor System constantly checks for security vulnerabilities, no matter who, or what, wrote the code.”

The bottom line for businesses is that AI-based development with a low-code platform will allow them to complete projects in weeks that would otherwise take months, or even years, to develop. Carneiro gives a theoretical example of a company that wants to do a proof of concept for a new piece of software managing HR internally; a project which could take a week with OutSystems. For wider transformational projects, such as rebuilding an entire supply chain, it would take a few months at most.

There is another benefit too for larger firms. “We’ve also seen a lot of clients build Centres of Excellence around low-code software development that they then export to their organisations around the world,” says Carneiro. “Using the AI Mentor System means they can then export this and innovate quickly across their whole business.”

Improving the process of software development is only one aspect of a digital transformation journey, however, with OutSystems committed to enabling businesses to adopt AI themselves. Image recognition is one such use case, or using cognitive services that users can add to their applications to solve business problems from unstructured data. This was factored into one part of the generative AI roadmap update, with a new connector announced for Azure OpenAI, built in partnership with Microsoft, to enable the use of large language models in development. “Part of our roadmap here is to help customers build the foundations for AI adoption in their businesses, so they’re not caught off guard,” notes Carneiro.

OutSystems is participating at AI & Big Data Expo Europe, in Amsterdam on September 26-27, and AI and wider digital transformation journeys will be a major part of the agenda. “A typical digital transformation challenge is to connect different data sources, and that’s another place where we believe OutSystems comes in. We’re at the right spot to help businesses solve this,” explains Carneiro. “We naturally help you connect with different data sources, and it’s something we’ve been optimising over the years to help our customers bring in all types of databases and sources – we have tools that help customers connect to integrations and integrate different data sources.

“These challenges might not be obvious before you embark on an AI adoption journey,” Carneiro adds. “But I’m pretty sure anyone who’s tried will recognise them – and we hope they also recognise that OutSystems is a good partner for that.”

(Photo by Marc Sendra Martorell on Unsplash)

UK government outlines AI Safety Summit plans (4 September 2023)

The UK government has announced plans for the global AI Safety Summit on 1-2 November 2023.

The major event – set to be held at Bletchley Park, home of Alan Turing and other Allied codebreakers during the Second World War – aims to address the pressing challenges and opportunities presented by AI development on both national and international scales.

Secretary of State Michelle Donelan has officially launched the formal engagement process leading up to the summit. Jonathan Black and Matt Clifford – serving as the Prime Minister’s representatives for the AI Safety Summit – have also initiated discussions with various countries and frontier AI organisations.

This marks a crucial step towards fostering collaboration in the field of AI safety and follows a recent roundtable discussion hosted by Secretary Donelan, which involved representatives from a diverse range of civil society groups.

The AI Safety Summit will serve as a pivotal platform, bringing together not only influential nations but also leading technology organisations, academia, and civil society. Its primary objective is to facilitate informed discussions that can lead to sensible regulations in the AI landscape.

One of the core focuses of the summit will be on identifying and mitigating risks associated with the most powerful AI systems. These risks include the potential misuse of AI for activities such as undermining biosecurity through the proliferation of sensitive information. 

Additionally, the summit aims to explore how AI can be harnessed for the greater good, encompassing domains like life-saving medical technology and safer transportation.

The UK government claims to recognise the importance of diverse perspectives in shaping the discussions surrounding AI and says that it’s committed to working closely with global partners to ensure that it remains safe and that its benefits can be harnessed worldwide.

As part of this iterative and consultative process, the UK has shared five key objectives that will guide the discussions at the summit:

  1. Developing a shared understanding of the risks posed by AI and the necessity for immediate action.
  2. Establishing a forward process for international collaboration on AI safety, including supporting national and international frameworks.
  3. Determining appropriate measures for individual organisations to enhance AI safety.
  4. Identifying areas for potential collaboration in AI safety research, such as evaluating model capabilities and establishing new standards for governance.
  5. Demonstrating how the safe development of AI can lead to global benefits.

The growth potential of AI is staggering: investment, deployment, and capabilities are projected to drive up to $7 trillion in growth over the next decade, alongside benefits such as accelerated drug discovery. A report by Google in July suggests that, by 2030, AI could boost the UK economy alone by £400 billion – leading to an annual growth rate of 2.6 percent.

However, these opportunities come with significant risks that transcend national borders. Addressing these risks is now a matter of utmost urgency.

Earlier this month, DeepMind co-founder Mustafa Suleyman called on the US to enforce AI standards. However, Suleyman is far from the only leading industry figure who has expressed concerns and called for measures to manage the risks of AI.

In an open letter in March, over 1,000 experts infamously called for a halt on “out of control” AI development over the “profound risks to society and humanity”.

Multiple stakeholders – including individual countries, international organisations, businesses, academia, and civil society – are already engaged in AI-related work. This includes efforts at the United Nations, the Organisation for Economic Co-operation and Development (OECD), the Global Partnership on Artificial Intelligence (GPAI), the Council of Europe, G7, G20, and standard development organisations.

The AI Safety Summit will build upon these existing initiatives by formulating practical next steps to mitigate risks associated with AI. These steps will encompass discussions on implementing risk-mitigation measures at relevant organisations, identifying key areas for international collaboration, and creating a roadmap for long-term action.

If successful, the AI Safety Summit at Bletchley Park promises to be a milestone event in the global dialogue on AI safety—seeking to strike a balance between harnessing the potential of AI for the benefit of humanity and addressing the challenges it presents.

(Photo by Hal Gatewood on Unsplash)

OpenAI introduces fine-tuning for GPT-3.5 Turbo and GPT-4 (23 August 2023)

OpenAI has announced the ability to fine-tune its powerful language models, starting with GPT-3.5 Turbo; fine-tuning for GPT-4 is expected to follow later this year.

Fine-tuning allows developers to tailor the models to their specific use cases and deploy these custom models at scale. The move aims to bridge the gap between general AI capabilities and real-world applications, heralding a new era of highly specialised AI interactions.

With early tests yielding impressive results, a fine-tuned version of GPT-3.5 Turbo has demonstrated the ability to not only match but even surpass the capabilities of the base GPT-4 for certain narrow tasks.

All data sent in and out of the fine-tuning API remains the property of the customer, ensuring that sensitive information remains secure and is not used to train other models.

The deployment of fine-tuning has garnered significant interest from developers and businesses. Since the introduction of GPT-3.5 Turbo, the demand for customising models to create unique user experiences has been on the rise.

Fine-tuning opens up a realm of possibilities across various use cases, including:

  • Improved steerability: Developers can now fine-tune models to follow instructions more accurately. For instance, a business wanting consistent responses in a particular language can ensure that the model always responds in that language.
  • Reliable output formatting: Consistent formatting of AI-generated responses is crucial, especially for applications like code completion or composing API calls. Fine-tuning improves the model’s ability to generate properly formatted responses, enhancing the user experience.
  • Custom tone: Fine-tuning allows businesses to refine the tone of the model’s output to align with their brand’s voice. This ensures a consistent and on-brand communication style.

One significant advantage of fine-tuned GPT-3.5 Turbo is its extended token handling capacity. With the ability to handle 4k tokens – twice the capacity of previous fine-tuned models – developers can streamline their prompt sizes, leading to faster API calls and cost savings.

To achieve optimal results, fine-tuning can be combined with techniques such as prompt engineering, information retrieval, and function calling. OpenAI also plans to introduce support for fine-tuning with function calling and gpt-3.5-turbo-16k in the upcoming months.

The fine-tuning process involves several steps, including data preparation, file upload, creating a fine-tuning job, and using the fine-tuned model in production. OpenAI is working on a user interface to simplify the management of fine-tuning tasks.
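
As a minimal sketch of that workflow using the openai Python library of the time (v0.28-style calls; the file name, training examples, and model identifiers below are illustrative placeholders):

    # pip install openai==0.28
    import openai

    # 1. Prepare data: a JSONL file where each line is one chat-formatted example:
    #    {"messages": [{"role": "system", "content": "..."},
    #                  {"role": "user", "content": "..."},
    #                  {"role": "assistant", "content": "..."}]}

    # 2. Upload the training file.
    upload = openai.File.create(
        file=open("training_data.jsonl", "rb"),
        purpose="fine-tune",
    )

    # 3. Create a fine-tuning job on top of GPT-3.5 Turbo.
    job = openai.FineTuningJob.create(
        training_file=upload.id,
        model="gpt-3.5-turbo",
    )

    # 4. Once the job finishes, use the custom model like any other.
    #    (fine_tuned_model is only populated after the job completes,
    #    e.g. "ft:gpt-3.5-turbo:my-org::abc123".)
    model_name = openai.FineTuningJob.retrieve(job.id).fine_tuned_model
    response = openai.ChatCompletion.create(
        model=model_name,
        messages=[{"role": "user", "content": "Hello!"}],
    )
    print(response.choices[0].message.content)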

The pricing structure for fine-tuning comprises two components: the initial training cost and usage costs.

  • Training: $0.008 / 1K Tokens
  • Usage input: $0.012 / 1K Tokens
  • Usage output: $0.016 / 1K Tokens
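
At those rates, a fine-tuning job with a 100,000-token training file run for three epochs would incur an expected training cost of roughly 100 × $0.008 × 3 = $2.40, before any usage charges.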

The introduction of updated GPT-3 models – babbage-002 and davinci-002 – has also been announced, providing replacements for existing models and enabling fine-tuning for further customisation.

These latest announcements underscore OpenAI’s dedication to creating AI solutions that can be tailored to meet the unique needs of businesses and developers.

(Image Credit: Claudia from Pixabay)
