AI Machine Learning News | Latest Machine Learning News | AI News
https://www.artificialintelligence-news.com/categories/ai-machine-learning/

Biden issues executive order to ensure responsible AI development
https://www.artificialintelligence-news.com/2023/10/30/biden-issues-executive-order-responsible-ai-development/
Mon, 30 Oct 2023 10:18:14 +0000

President Biden has issued an executive order aimed at positioning the US at the forefront of AI while ensuring the technology’s safe and responsible use.

The order establishes stringent standards for AI safety and security, safeguards Americans’ privacy, promotes equity and civil rights, protects consumers and workers, fosters innovation and competition, and enhances American leadership on the global stage.

Key actions outlined in the order:

  1. New standards for AI safety and security: The order mandates that developers of powerful AI systems share safety test results and critical information with the US government. Rigorous standards, tools, and tests will be developed to ensure AI systems are safe, secure, and trustworthy before public release. Additionally, measures will be taken to protect against the risks of using AI to engineer dangerous biological materials and combat AI-enabled fraud and deception.
  2. Protecting citizens’ privacy: The President calls on Congress to pass bipartisan data privacy legislation, prioritizing federal support for privacy-preserving techniques, especially those using AI. Guidelines will be developed for federal agencies to evaluate the effectiveness of privacy-preserving techniques, including those used in AI systems.
  3. Advancing equity and civil rights: Clear guidance will be provided to prevent AI algorithms from exacerbating discrimination, especially in areas like housing and federal benefit programs. Best practices will be established for the use of AI in the criminal justice system to ensure fairness.
  4. Standing up for consumers, patients, and students: Responsible use of AI in healthcare and education will be promoted, ensuring that consumers are protected from harmful AI applications while benefiting from its advancements in these sectors.
  5. Supporting workers: Principles and best practices will be developed to mitigate the harms and maximise the benefits of AI for workers, addressing issues such as job displacement, workplace equity, and health and safety. A report on AI’s potential labour-market impacts will be produced, identifying options for strengthening federal support for workers facing labour disruptions due to AI.
  6. Promoting innovation and competition: The order aims to catalyse AI research across the US, promote a fair and competitive AI ecosystem, and expand the ability of highly skilled immigrants and non-immigrants to study, stay, and work in the US to foster innovation in the field.
  7. Advancing leadership abroad: The US will collaborate with other nations to establish international frameworks for safe and trustworthy AI deployment. Efforts will be made to accelerate the development and implementation of vital AI standards with international partners and promote the responsible development and deployment of AI abroad to address global challenges.
  8. Ensuring responsible and effective government adoption: Clear standards and guidelines will be issued for government agencies’ use of AI to protect rights and safety. Efforts will be made to help agencies acquire AI products and services more rapidly and efficiently, and an AI talent surge will be initiated to enhance government capacity in AI-related fields.

The executive order signifies a major step forward for the US in harnessing the potential of AI while safeguarding individuals’ rights and security.

“As we advance this agenda at home, the Administration will work with allies and partners abroad on a strong international framework to govern the development and use of AI,” wrote the White House in a statement.

“The actions that President Biden directed today are vital steps forward in the US’ approach on safe, secure, and trustworthy AI. More action will be required, and the Administration will continue to work with Congress to pursue bipartisan legislation to help America lead the way in responsible innovation.”

The administration’s commitment to responsible innovation is paramount and sets the stage for continued collaboration with international partners to shape the future of AI globally.

(Photo by David Everett Strickler on Unsplash)

See also: UK paper highlights AI risks ahead of global Safety Summit

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Cyber Security & Cloud Expo and Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Nightshade ‘poisons’ AI models to fight copyright theft
https://www.artificialintelligence-news.com/2023/10/24/nightshade-poisons-ai-models-fight-copyright-theft/
Tue, 24 Oct 2023 14:49:13 +0000

University of Chicago researchers have unveiled Nightshade, a tool designed to disrupt AI models attempting to learn from artistic imagery.

The tool – still in its developmental phase – allows artists to protect their work by subtly altering pixels in images, rendering them imperceptibly different to the human eye but confusing to AI models.

Many artists and creators have expressed concern over the use of their work in training commercial AI products without their consent.

AI models rely on vast amounts of multimedia data – including written material and images, often scraped from the web – to function effectively. Nightshade offers a potential solution by sabotaging this data.

When integrated into digital artwork, Nightshade misleads AI models, causing them to misidentify objects and scenes.

For instance, Nightshade transformed images of dogs into data that appeared to AI models as cats. After exposure to a mere 100 poison samples, the AI reliably generated a cat when asked for a dog – demonstrating the tool’s effectiveness.

This technique not only confuses AI models but also challenges the fundamental way in which generative AI operates. By exploiting the clustering of similar words and ideas in AI models, Nightshade can manipulate responses to specific prompts and further undermine the accuracy of AI-generated content.
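
Nightshade’s actual optimisation is not described in the article, but the underlying idea – pairing an imperceptibly perturbed image with a mismatched concept – can be sketched in a few lines of Python. Everything here (the function name, the epsilon bound, the toy pixel list) is illustrative and is not the real Nightshade algorithm:

```python
import random

def poison_image(pixels, epsilon=3, seed=0):
    """Apply a small, bounded perturbation to each greyscale pixel.

    A toy stand-in for data poisoning: each change is capped at
    +/- epsilon (out of 255), so the image looks unchanged to a
    person, while a model trained on many such samples paired with
    a wrong label can be steered towards the wrong concept.
    """
    rng = random.Random(seed)
    poisoned = []
    for p in pixels:
        delta = rng.randint(-epsilon, epsilon)
        poisoned.append(min(255, max(0, p + delta)))  # stay in valid range
    return poisoned

# A "dog" image (flattened greyscale pixels) relabelled as "cat":
image = [120, 64, 200, 3, 255, 90]
sample = {"pixels": poison_image(image), "label": "cat"}

# The perturbation stays imperceptibly small:
assert all(abs(a - b) <= 3 for a, b in zip(image, sample["pixels"]))
```

The real tool perturbs pixels adversarially, targeting the model’s feature space rather than adding random noise, but the constraint is the same: the edit must stay below the threshold of human perception.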

Developed by computer science professor Ben Zhao and his team, Nightshade is an extension of their prior product, Glaze, which cloaks digital artwork and distorts pixels to baffle AI models regarding artistic style.

While the potential for misuse of Nightshade is acknowledged, the researchers’ primary objective is to shift the balance of power from AI companies back to artists and discourage intellectual property violations.

The introduction of Nightshade presents a major challenge to AI developers. Detecting and removing images with poisoned pixels is a complex task, given the imperceptible nature of the alterations.

If poisoned images are integrated into existing AI training datasets, they must be removed and the affected models potentially retrained – a substantial hurdle for companies relying on stolen or unauthorised data.

As the researchers await peer review of their work, Nightshade is a beacon of hope for artists seeking to protect their creative endeavours.

(Photo by Josie Weiss on Unsplash)

See also: UMG files landmark lawsuit against AI developer Anthropic

UK reveals AI Safety Summit opening day agenda
https://www.artificialintelligence-news.com/2023/10/16/uk-reveals-ai-safety-summit-opening-day-agenda/
Mon, 16 Oct 2023 15:02:01 +0000

The UK Government has unveiled plans for the inaugural global AI Safety Summit, scheduled to take place at the historic Bletchley Park.

The summit will bring together digital ministers, AI companies, civil society representatives, and independent experts for crucial discussions. The primary focus is on frontier AI, the most advanced generation of AI models, which – if not developed responsibly – could pose significant risks.

The event aims to explore both the potential dangers emerging from rapid advances in AI and the transformative opportunities the technology presents, especially in education and international research collaborations.

Technology Secretary Michelle Donelan will lead the summit and articulate the government’s position that safety and security must be central to AI advancements. The first half of the day will feature parallel sessions devoted to understanding the risks posed by frontier AI.

Other topics that will be covered during the AI Safety Summit include threats to national security, potential election disruption, erosion of social trust, and exacerbation of global inequalities.

The latter part of the day will focus on roundtable discussions aimed at enhancing frontier AI safety responsibly. Delegates will explore defining risk thresholds, effective safety assessments, and robust governance mechanisms to enable the safe scaling of frontier AI by developers.

International collaboration will be a key theme, emphasising the need for policymakers, scientists, and researchers to work together in managing risks and harnessing AI’s potential for global economic and social benefits.

The summit will conclude with a panel discussion on the transformative opportunities of AI for the public good, specifically in revolutionising education. Donelan will provide closing remarks and underline the importance of global collaboration in adopting AI safely.

The event is intended as a positive step towards fostering international cooperation in the responsible development and deployment of AI technology. By convening global experts and policymakers, the UK Government wants to lead the conversation on creating a safe and positive future with AI.

(Photo by Ricardo Gomez Angel on Unsplash)

See also: UK races to agree statement on AI risks with global leaders

Dave Barnett, Cloudflare: Delivering speed and security in the AI era
https://www.artificialintelligence-news.com/2023/10/13/dave-barnett-cloudflare-delivering-speed-and-security-in-ai-era/
Fri, 13 Oct 2023 15:39:34 +0000

AI News sat down with Dave Barnett, Head of SASE at Cloudflare, during Cyber Security & Cloud Expo Europe to delve into how the firm uses its cloud-native architecture to deliver speed and security in the AI era.

According to Barnett, Cloudflare’s cloud-native approach allows the company to continually innovate in the digital space. Notably, a significant portion of their services are offered to consumers for free.

“We continuously reinvent, we’re very comfortable in the digital space. We’re very proud that the vast majority of our customers actually consume our services for free because it’s our way of giving back to society,” said Barnett.

Barnett also revealed Cloudflare’s focus on AI during their anniversary week. The company aims to enable organisations to consume AI securely and make it accessible to everyone. Barnett says that Cloudflare achieves those goals in three key ways.

“One, as I mentioned, is operating AI inference engines within Cloudflare close to consumers’ eyeballs. The second area is securing the use of AI within the workplace, because, you know, AI has some incredibly positive impacts on people … but the problem is there are some data protection requirements around that,” explains Barnett.

“Finally, is the question of, ‘Could AI be used by the bad guys against the good guys?’ and that’s an area that we’re continuing to explore.”

Just a day earlier, AI News heard from Raviv Raz, Cloud Security Manager at ING, during a session at the expo that focused on the alarming potential of AI-powered cybercrime.

Regarding security models, Barnett discussed the evolution of the zero-trust concept, emphasising its practical applications in enhancing both usability and security. Cloudflare’s own journey with zero-trust began with a focus on usability, leading to the development of its own zero-trust network access products.

“We have servers everywhere and engineers everywhere that need to reboot those servers. In 2015, that involved VPNs and two-factor authentication… so we built our own zero-trust network access product for our own use that meant the user experiences for engineers rebooting servers in far-flung places was a lot better,” says Barnett.

“After 2015, the world started to realise that this approach had great security benefits so we developed that product and launched it in 2018 as Cloudflare Access.”

Cloudflare’s innovative strides also include leveraging NVIDIA GPUs to accelerate machine learning AI tasks on an edge network. This technology enables organisations to run inference tasks – such as image recognition – close to end-users, ensuring low latency and optimal performance.

“We launched Workers AI, which means that organisations around the world – in fact, individuals as well – can run their inference tasks at a very close place to where the consumers of that inference are,” explains Barnett.

“You could ask a question, ‘Cat or not cat?’, to a trained cat detection engine very close to the people that need it. We’re doing that in a way that makes it easily accessible to organisations looking to use AI to benefit their business.”
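
The latency benefit Barnett describes comes from routing each inference request to the location nearest the user. Cloudflare’s actual network uses anycast routing rather than anything like the sketch below; the edge locations, coordinates, and function names here are purely hypothetical, chosen only to make the "run the model close to the consumer" idea concrete:

```python
import math

# Hypothetical edge locations (city, latitude, longitude) – purely
# illustrative, not Cloudflare's actual network map or API.
EDGE_LOCATIONS = [
    ("Amsterdam", 52.37, 4.90),
    ("London", 51.51, -0.13),
    ("San Francisco", 37.77, -122.42),
]

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(a))

def nearest_edge(user_lat, user_lon):
    """Pick the site where a 'cat or not cat' model should run."""
    return min(
        EDGE_LOCATIONS,
        key=lambda loc: haversine_km(user_lat, user_lon, loc[1], loc[2]),
    )

# A user in Paris is served from the closest of the three sites:
city, _, _ = nearest_edge(48.85, 2.35)
print(city)  # London is nearer to Paris than Amsterdam is
```

The shorter the network path to the inference engine, the lower the round-trip latency for interactive tasks like image recognition, which is the point of running models on an edge network in the first place.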

For developers interested in AI, Barnett outlined Cloudflare’s role in supporting the deployment of machine learning models. While machine learning training is typically conducted outside Cloudflare, the company excels in providing low-latency inference engines that are essential for real-time applications like image recognition.

Our conversation with Barnett shed light on Cloudflare’s commitment to cloud-native architecture, AI accessibility, and cybersecurity. As the industry continues to advance, Cloudflare remains at the forefront of delivering speed and security in the AI era.

(Photo by ryan baker on Unsplash)

See also: JPMorgan CEO: AI will be used for ‘every single process’

UK races to agree statement on AI risks with global leaders
https://www.artificialintelligence-news.com/2023/10/10/uk-races-agree-statement-ai-risks-global-leaders/
Tue, 10 Oct 2023 13:40:33 +0000

Downing Street officials find themselves in a race against time to finalise an agreed communique from global leaders concerning the escalating concerns surrounding artificial intelligence. 

This hurried effort comes in anticipation of the UK’s AI Safety Summit scheduled next month at the historic Bletchley Park.

The summit, designed to provide an update on White House-brokered safety guidelines – as well as facilitate a debate on how national security agencies can scrutinise the most dangerous versions of this technology – faces a potential hurdle: beyond its proposed communique, it is unlikely to generate an agreement on establishing a new international organisation to scrutinise cutting-edge AI.

The proposed AI Safety Institute, a brainchild of the UK government, aims to enable national security-related scrutiny of frontier AI models. However, this ambition might face disappointment if an international consensus is not reached.

Claire Trachet, tech industry expert and CEO of business advisory Trachet, said:

“I think that this marks a very important moment for the UK, especially in terms of recognising that there are other players across Europe also hoping to catch up with the US in the AI space. It’s therefore essential that the UK continues to balance its drive for innovation with creating effective regulation that will not stifle the country’s growth prospects.

While the UK possesses the potential to be a frontrunner in the global tech race, concerted efforts are needed to strengthen the country’s position. By investing in research, securing supply chains, promoting collaboration, and nurturing local talent, the UK can position itself as a prominent player in shaping the future of AI-driven technologies.”

Currently, the UK stands as a key player in the global tech arena, with its AI market valued at over £16.9 billion and expected to soar to £803.7 billion by 2035, according to the US International Trade Administration.

The British government’s commitment is evident through its £1 billion investment in supercomputing and AI research. Moreover, the introduction of seven new AI principles for regulation – focusing on accountability, access, diversity, choice, flexibility, fair dealing, and transparency – showcases the government’s dedication to fostering a robust AI ecosystem.

Despite these efforts, challenges loom as France emerges as a formidable competitor within Europe.

French billionaire Xavier Niel recently announced a €200 million investment in artificial intelligence, including a research lab and supercomputer, aimed at bolstering Europe’s competitiveness in the global AI race.

Niel’s initiative aligns with the commitment of President Macron, who announced €500 million in new funding at VivaTech to create new AI champions. Furthermore, France plans to attract companies through its own AI summit.

Claire Trachet acknowledges the intensifying competition between the UK and France, stating that while the rivalry adds complexity to the UK’s goals, it can also spur innovation within the industry. However, Trachet emphasises the importance of the UK striking a balance between innovation and effective regulation to sustain its growth prospects.

“In my view, if Europe wants to truly make a meaningful impact, they must leverage their collective resources, foster collaboration, and invest in nurturing a robust ecosystem,” adds Trachet.

“This means combining the strengths of the UK, France and Germany, to possibly create a compelling alternative in the next 10-15 years that disrupts the AI landscape, but again, this would require a heavily strategic vision and collaborative approach.”

(Photo by Nick Kane on Unsplash)

See also: Cyber Security & Cloud Expo: The alarming potential of AI-powered cybercrime

How information retrieval is being revolutionised with RAG technology
https://www.artificialintelligence-news.com/2023/10/02/how-information-retrieval-is-being-revolutionised-with-rag-technology/
Mon, 02 Oct 2023 13:07:10 +0000

In an era where digital data proliferates at an unprecedented pace, finding the right information amidst the digital deluge is akin to navigating a complex maze. Traditional enterprise search engines, while powerful, often inundate us with a barrage of results, making it challenging to discern the relevant from the irrelevant. However, amidst this vast expanse of digital information, a revolutionary technology has emerged, promising to transform the way we interact with data in the enterprise. Enter the power of Retrieval-Augmented Generation (RAG) to redefine our relationship with information.

New technologies like ChatGPT from OpenAI, along with other language models such as Bard, have been impressive. However, these models come with drawbacks for business users: the risk of generating inaccurate information, a lack of proper citation, potential copyright infringement, and a scarcity of reliable information in the business domain. The challenge lies not only in finding information but in finding the right information. To make generative AI effective in the business world, these concerns must be addressed – and that is the focal point of RAG.

The digital challenge: A sea of information

At the core of platforms like Microsoft Copilot and Lucy is the transformative approach of the Retrieval-Augmented Generation (RAG) model.

Understanding RAG

What precisely is RAG, and how does it work? In simple terms, RAG is a two-step process:

1. Retrieval: Before providing an answer, the system delves into an extensive database, meticulously retrieving pertinent documents or passages. This isn’t rudimentary keyword matching; it’s a cutting-edge process that comprehends the intricate context and nuances of the query. RAG systems rely on data owned or licensed by companies, and ensure that enterprise-level access controls are impeccably managed and preserved.

2. Generation: Once the pertinent information is retrieved, it serves as the foundation for generating a coherent and contextually accurate response. This isn’t just about regurgitating data; it’s about crafting a meaningful and informative answer.

By integrating these two critical processes, RAG ensures that the responses delivered are not only precise but also well-informed. It’s akin to having a dedicated team of researchers at your disposal, ready to delve into a vast library, select the most appropriate sources, and present you with a concise and informative summary.
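
The two steps above can be sketched end to end in a few lines. This is a deliberately minimal toy: it assumes a keyword-overlap retriever and a templated "generator", where real platforms use semantic embeddings and a language model, and the document corpus, IDs, and function names are invented for illustration:

```python
# A tiny corpus standing in for an enterprise document store.
DOCS = {
    "q3-report": "Revenue grew 12 percent in Q3 driven by subscriptions.",
    "hr-policy": "Employees accrue 25 days of annual leave per year.",
}

def retrieve(query, docs, k=1):
    """Step 1 - retrieval: rank documents by word overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        docs.items(),
        key=lambda item: len(q_words & set(item[1].lower().split())),
        reverse=True,
    )
    return scored[:k]

def generate(query, passages):
    """Step 2 - generation: ground the answer in the retrieved text.

    A real system would hand the passages to a language model; here a
    template makes the grounding (and the citation) explicit.
    """
    doc_id, text = passages[0]
    return f"{text} (source: {doc_id})"

query = "How much annual leave do employees get?"
answer = generate(query, retrieve(query, DOCS))
print(answer)
```

Because the generation step only works from what retrieval returned, the answer is traceable to a specific source document – the property that addresses the citation and accuracy concerns raised earlier.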

Why RAG matters

Leading technology platforms that have embraced RAG – such as Microsoft Copilot for content creation or federated search platforms like Lucy – represent a significant breakthrough for several reasons:

1. Efficiency: Traditional models often demand substantial computational resources, particularly when dealing with extensive datasets. RAG, with its process segmentation, ensures efficiency, even when handling complex queries.

2. Accuracy: By first retrieving relevant data and then generating a response based on that data, RAG guarantees that the answers provided are firmly rooted in credible sources, enhancing accuracy and reliability.

3. Adaptability: RAG’s adaptability shines through as new information is continually added to the database. This ensures that the answers generated by platforms remain up-to-date and relevant.

RAG platforms in action

Picture yourself as a financial analyst seeking insights into market trends. Traditional research methods would require hours, if not days, to comb through reports, articles, and data sets. Lucy, however, simplifies the process – you merely pose your question. Behind the scenes, the RAG model springs into action, retrieving relevant financial documents and promptly generating a comprehensive response, all within seconds.

Similarly, envision a student conducting research on a historical event. Instead of becoming lost in a sea of search results, Lucy, powered by RAG, provides a concise, well-informed response, streamlining the research process and enhancing efficiency.

Taking this one step further, Lucy can feed these answers across a complex data ecosystem into Microsoft Copilot, where new presentations or documentation are created, leveraging all of the institutional knowledge an organisation has created or purchased.

The road ahead

The potential applications of RAG are expansive, spanning academia, industry, and everyday inquiries. Beyond its immediate utility, RAG signifies a broader shift in our interaction with information. In an age of information overload, tools like Microsoft Copilot and Lucy, powered by RAG, are not merely conveniences; they are necessities.

Furthermore, as technology continues to evolve, we can anticipate even more sophisticated iterations of the RAG model, promising heightened accuracy, efficiency, and user experience. Working with platforms that embraced RAG from the outset – before it was even a recognised term – will keep your organisation ahead of the curve.

Conclusion

In the digital era, we face both challenges and opportunities. While the sheer volume of information can be overwhelming, technologies like Microsoft Copilot or Lucy, underpinned by the potency of Retrieval-Augmented Generation, offer a promising path forward. This is a testament to technology’s potential not only to manage but also to meaningfully engage with the vast reservoirs of knowledge at our disposal. These aren’t just platforms; they are a glimpse into the future of information retrieval.

Photo by Markus Winkler on Unsplash

Cyber Security & Cloud Expo: The alarming potential of AI-powered cybercrime
https://www.artificialintelligence-news.com/2023/09/27/cyber-security-cloud-expo-alarming-potential-ai-powered-cybercrime/
Wed, 27 Sep 2023 08:50:54 +0000

In a packed session at Cyber Security & Cloud Expo Europe, Raviv Raz, Cloud Security Manager at ING, turned the spotlight away from traditional security threats and delved into the world of AI-powered cybercrime.

Raz shared insights from his extensive career, including his tenure as technical director for a web application firewall company. This role exposed him to the rise of the “Cyber Dragon” and Chinese cyberattacks, inspiring him to explore the offensive side of cybersecurity. During this time, he not only developed defence tools, but also created attack tools that would later be adopted by the Anonymous hacker collective.

“The perfect cyber weapon”

One of the most intriguing aspects of Raz’s presentation was his exploration of “the perfect cyber weapon.” He proposed that this weapon would need to operate in complete silence, without any command and control infrastructure, and would have to adapt and improvise in real-time. The ultimate objective would be to disrupt critical systems, potentially even at the nation-state level, while remaining undetected.

Raz’s vision for this weapon, though controversial, underscored the power of AI in the wrong hands. He highlighted the potential consequences of such technology falling into the hands of malicious actors and urged the audience to consider the implications seriously.

Real-world proof of concept

To illustrate the feasibility of his ideas, Raz shared the story of a consortium of banks in the Netherlands that embraced his concept. They embarked on a project to build a proof of concept for an AI-driven cyber agent capable of executing complex attacks. This agent demonstrated the potential power of AI in the world of cybercrime.

The demonstration served as a stark reminder that AI is no longer exclusive to nation-states. Common criminals, with access to AI-driven tools and tactics, can now carry out sophisticated cyberattacks with relative ease. This shift in the landscape presents a pressing challenge for organisations and governments worldwide.

The rise of AI-enhanced malicious activities

Raz further showcased how AI can be harnessed for malicious purposes. He discussed techniques such as phishing attacks and impersonation, where AI-powered agents can craft highly convincing messages and even deepfake voices to deceive individuals and organisations.

Additionally, he touched on the development of polymorphic malware—malware that continuously evolves to evade detection. This alarming capability means that cybercriminals can stay one step ahead of traditional cybersecurity measures.

Stark wake-up call

Raz’s presentation served as a stark wake-up call for the cybersecurity community. It highlighted the evolving threats posed by AI-driven cybercrime and emphasised the need for organisations to bolster their defences continually.

As AI continues to advance, both in terms of its capabilities and its accessibility, the line between nation-state and common criminal cyber activities becomes increasingly blurred.

In this new age of AI-driven cyber threats, organisations must remain vigilant, adopt advanced threat detection and prevention technologies, and prioritise cybersecurity education and training for their employees.

Raz’s insights underscored the urgency of this matter, reminding us that the only way to combat the evolving threat landscape is to evolve our defences in tandem. The future of cybersecurity demands nothing less than our utmost attention and innovation.

Want to learn more about cybersecurity and the cloud from industry leaders? Check out Cyber Security & Cloud Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with AI & Big Data Expo Europe.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

UK deputy PM warns UN that AI regulation is falling behind advances https://www.artificialintelligence-news.com/2023/09/22/uk-deputy-pm-warns-un-ai-regulation-falling-behind-advances/ Fri, 22 Sep 2023 09:24:44 +0000

The post UK deputy PM warns UN that AI regulation is falling behind advances appeared first on AI News.

In a stark address to the UN, UK Deputy PM Oliver Dowden has sounded the alarm on the potentially destabilising impact of AI on the world order.

Dowden has urged governments to take immediate action to regulate AI development, warning that the rapid pace of advancement in AI technology could outstrip their ability to ensure its safe and responsible use.

Speaking at the UN General Assembly in New York, Dowden highlighted that the UK will host a global summit in November to discuss the regulation of AI. The summit aims to bring together international leaders, experts, and industry representatives to address the pressing concerns surrounding AI.

Among the primary fears surrounding unchecked AI development are widespread job displacement, the proliferation of misinformation, and the deepening of societal discrimination. Without adequate regulations in place, AI technologies could be harnessed to magnify these negative effects.

“The starting gun has been fired on a globally competitive race in which individual companies as well as countries will strive to push the boundaries as far and fast as possible,” Dowden cautioned during his address.

Dowden went on to note that the current state of global regulation lags behind the rapid advances in AI technology. Unlike the past, where regulations followed technological developments, Dowden stressed that rules must now be established in tandem with AI’s evolution.

Oseloka Obiora, CTO at RiverSafe, said: “Business leaders are jumping into bed with the latest AI trends at an alarming rate, with little or no concern for the consequences.

“With global regulatory standards falling way behind and the most basic cyber security checks being neglected, it is right for the government to call for new global standards to prevent the AI ticking timebomb from exploding.”

Dowden underscored the importance of ensuring that AI companies do not have undue influence over the regulatory process. He emphasised the need for transparency and oversight, stating that AI companies should not “mark their own homework.” Instead, governments and citizens should have confidence that risks associated with AI are being properly mitigated.

Moreover, Dowden highlighted that only coordinated action by nation-states could provide the necessary assurance to the public that significant national security concerns stemming from AI have been adequately addressed.

He also cautioned against oversimplifying the role of AI—noting that it can be both a tool for good and a tool for ill, depending on its application. During the UN General Assembly, the UK also pitched AI’s potential to accelerate development in the world’s most impoverished nations.

The UK’s initiative to host a global AI regulation summit signals a growing recognition among world leaders of the urgent need to establish a robust framework for AI governance. As AI technology continues to advance, governments are under increasing pressure to strike the right balance between innovation and safeguarding against potential risks.

Jake Moore, Global Cybersecurity Expert at ESET, comments: “The fear that AI could shape our lives in a completely new direction is not without substance, as the power behind the technology churning this wheel is potentially destructive. Not only could AI change jobs, it also has the ability to change what we know to be true and impact what we believe.   

“Regulating it would mean potentially stifling innovation. But even attempting to regulate such a powerful beast would be like trying to regulate the dark web, something that is virtually impossible. Large datasets and algorithms can be designed to do almost anything, so we need to start looking at how we can improve educating people, especially young people in schools, into understanding this new wave of risk.”

Dowden’s warning to the United Nations serves as a clarion call for nations to come together and address the challenges posed by AI head-on. The global summit in November will be a critical step in shaping the future of AI governance and ensuring that the world order remains stable in the face of unprecedented technological change.

(Image Credit: UK Government under CC BY 2.0 license)

See also: CMA sets out principles for responsible AI development 

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

CMA sets out principles for responsible AI development https://www.artificialintelligence-news.com/2023/09/19/cma-sets-principles-responsible-ai-development/ Tue, 19 Sep 2023 10:41:38 +0000

The post CMA sets out principles for responsible AI development  appeared first on AI News.

The Competition and Markets Authority (CMA) has set out its principles to ensure the responsible development and use of foundation models (FMs).

FMs are versatile AI systems with the potential to revolutionise various sectors, from information access to healthcare. The CMA’s report, published today, outlines a set of guiding principles aimed at safeguarding consumer protection and fostering healthy competition within this burgeoning industry.

Foundation models – known for their adaptability to diverse applications – have witnessed rapid adoption across various user platforms, including familiar names like ChatGPT and Office 365 Copilot. These AI systems possess the power to drive innovation and stimulate economic growth, promising transformative changes across sectors and industries.

Sarah Cardell, CEO of the CMA, emphasised the urgency of proactive intervention in the AI sector:

“The speed at which AI is becoming part of everyday life for people and businesses is dramatic. There is real potential for this technology to turbocharge productivity and make millions of everyday tasks easier – but we can’t take a positive future for granted.

That’s why we have today proposed these new principles and launched a broad programme of engagement to help ensure the development and use of foundation models evolves in a way that promotes competition and protects consumers.

While I hope that our collaborative approach will help realise the maximum potential of this new technology, we are ready to intervene where necessary.”

Research from Earlybird reveals that Britain houses the largest number of AI startups in Europe. The CMA’s report underscores the immense benefits that can accrue if the development and use of FMs are managed effectively.

These advantages include the emergence of superior products and services, enhanced access to information, breakthroughs in science and healthcare, and even lower prices for consumers. Additionally, a vibrant FM market could open doors for a wider range of businesses to compete successfully, challenging established market leaders. This competition and innovation, in turn, could boost the overall economy, fostering increased productivity and economic growth.

Claire Trachet, tech industry expert and CEO of business advisory Trachet, said:

“With the [UK-hosted] global AI Safety Summit around the corner, the announcement of these principles shows the public and investors that the UK is committed to regulating AI safely. To continue this momentum, it’s important for the UK to strike a balance in creating effective regulation without stifling growing innovation and investment. 

Ensuring that regulation is both well-designed and effective will help attract and maintain investment in the UK by creating a stable, secure, and trustworthy business environment that appeals to domestic and international investors.” 

The CMA’s report also sounds a cautionary note. It highlights the potential risks if competition remains weak or if developers neglect consumer protection regulations. Such lapses could expose individuals and businesses to significant levels of false information and AI-driven fraud. In the long run, a handful of dominant firms might exploit FMs to consolidate market power, offering subpar products or services at exorbitant prices.

While the scope of the CMA’s initial review focused primarily on competition and consumer protection concerns, it acknowledges that other important questions related to FMs, such as copyright, intellectual property, online safety, data protection, and security, warrant further examination.

Sridhar Iyengar, Managing Director of Zoho Europe, commented:

“The safe development of AI has been a central focus of UK policy and will continue to play a significant role in the UK’s ambitions of leading the global AI race. While there is public concern over the trustworthiness of AI, we shouldn’t lose sight of the business benefits that it provides, such as forecasting and improved data analysis, and work towards a solution.

Collaboration between businesses, government, academia and industry experts is crucial to strike a balance between safe regulations and guidance that can lead to the positive development and use of innovative business AI tools.

AI is going to move forward with or without the UK, so it’s best to take the lead on research and development to ensure its safe evolution.”

The proposed guiding principles, unveiled by the CMA, aim to steer the ongoing development and use of FMs, ensuring that people, businesses, and the economy reap the full benefits of innovation and growth. Drawing inspiration from the evolution of other technology markets, these principles seek to guide FM developers and deployers in the following key areas:

  • Accountability: Developers and deployers are accountable for the outputs provided to consumers.
  • Access: Ensuring ongoing access to essential inputs without unnecessary restrictions.
  • Diversity: Encouraging a sustained diversity of business models, including both open and closed approaches.
  • Choice: Providing businesses with sufficient choices to determine how to utilise FMs effectively.
  • Flexibility: Allowing the flexibility to switch between or use multiple FMs as needed.
  • Fairness: Prohibiting anti-competitive conduct, including self-preferencing, tying, or bundling.
  • Transparency: Offering consumers and businesses information about the risks and limitations of FM-generated content to enable informed choices.

Over the next few months, the CMA plans to engage extensively with a diverse range of stakeholders both within the UK and internationally to further develop these principles. This collaborative effort aims to support the positive growth of FM markets, fostering effective competition and consumer protection.

Gareth Mills, Partner at law firm Charles Russell Speechlys, said:

“The principles themselves are clearly aimed at facilitating a dynamic sector with low entry requirements that allows smaller players to compete effectively with more established names, whilst at the same time mitigating against the potential for AI technologies to have adverse consequences for consumers.

The report itself notes that, although the CMA has established a number of core principles, there is still work to do and that stakeholder feedback – both within the UK and internationally – will be required before a formal policy and regulatory position can be definitively established.

As the utilisation of the technologies grows, the extent to which there is any inconsistency between competition objectives and government strategy will be fleshed out.”

An update on the CMA’s progress and the reception of these principles will be published in early 2024, reflecting the authority’s commitment to shaping AI markets in ways that benefit people, businesses, and the UK economy as a whole.

(Photo by JESHOOTS.COM on Unsplash)

See also: UK to pitch AI’s potential for international development at UN

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

UK to pitch AI’s potential for international development at UN https://www.artificialintelligence-news.com/2023/09/18/uk-pitch-ai-potential-international-development-un/ Mon, 18 Sep 2023 09:28:39 +0000

The post UK to pitch AI’s potential for international development at UN appeared first on AI News.

The UK is pitching its vision for leveraging AI’s potential to accelerate development in the world’s most impoverished nations during the UN General Assembly (UNGA).

The vision was set out by UK Foreign Secretary James Cleverly and calls upon international partners to collaborate and coordinate their efforts in harnessing AI for development in Africa and making progress towards the UN’s Sustainable Development Goals.

As part of its efforts, the UK is launching the ‘AI for Development’ programme in partnership with Canada’s International Development Research Centre (IDRC). The initiative focuses on assisting developing countries, primarily in Africa, with building local AI capabilities and fostering innovation.

The announcement coincides with the UK’s co-convening of an event on AI during the margins of the UN General Assembly. This high-level session – chaired by US Secretary of State Antony Blinken – will assemble governments, tech companies, and non-governmental organisations (NGOs) to explore how AI can expedite progress towards the Sustainable Development Goals. These goals aim to create a healthier, fairer, and more prosperous world by 2030.

In parallel with these efforts, the UK is committing £1 million in investment towards a pioneering fund known as the Complex Risk Analytics Fund (‘CRAF’d’). This fund, in collaboration with international partners, will harness the power of AI to prevent crises before they occur. Additionally, it will provide assistance during emergencies and support countries in their recovery towards sustainable development.

Foreign Secretary James Cleverly said:

“The opportunity of AI is immense. It has already been shown to speed up drug discovery, help develop new treatments for common diseases, and predict food insecurity—to name only a few uses.

The UK, alongside our allies and partners, is making sure that the fulfilment of this enormous potential is shared globally.

As AI continues to rapidly evolve, we need a global approach that seizes the opportunities that AI can bring to solving humanity’s shared challenges. The UK-hosted AI summit this November will be key to helping us achieve this.”

Julie Delahanty, President of the International Development Research Centre (IDRC), expressed her satisfaction with the collaboration between IDRC and the UK Foreign, Commonwealth & Development Office (FCDO).

“IDRC is pleased to announce a new collaboration with FCDO, a key ally in tackling the most pressing development challenges,” said Delahanty.

“The AI for Development program will build on existing partnerships, leveraging AI’s capacity to reduce inequalities, address poverty, improve food systems, confront the challenges of climate change and make education more inclusive, while also mitigating risks.”

This announcement underscores the broader commitment of the UK to employ AI innovation to tackle global challenges, including the pursuit of the Sustainable Development Goals.

In a separate event, scheduled for 1-2 November 2023, the UK will host the world’s first major AI Safety Summit at the historic Bletchley Park in Buckinghamshire. This summit aims to garner international consensus on the urgent need for safety measures in cutting-edge AI technology.

See also: White House secures safety commitments from eight more AI companies

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.
