cloud Archives - AI News
https://www.artificialintelligence-news.com/tag/cloud/
Fri, 13 Oct 2023 15:39:35 +0000

Dave Barnett, Cloudflare: Delivering speed and security in the AI era
https://www.artificialintelligence-news.com/2023/10/13/dave-barnett-cloudflare-delivering-speed-and-security-in-ai-era/
Fri, 13 Oct 2023 15:39:34 +0000

The post Dave Barnett, Cloudflare: Delivering speed and security in the AI era appeared first on AI News.

AI News sat down with Dave Barnett, Head of SASE at Cloudflare, during Cyber Security & Cloud Expo Europe to delve into how the firm uses its cloud-native architecture to deliver speed and security in the AI era.

According to Barnett, Cloudflare’s cloud-native approach allows the company to continually innovate in the digital space. Notably, a significant portion of their services are offered to consumers for free.

“We continuously reinvent, we’re very comfortable in the digital space. We’re very proud that the vast majority of our customers actually consume our services for free because it’s our way of giving back to society,” said Barnett.

Barnett also revealed Cloudflare’s focus on AI during their anniversary week. The company aims to enable organisations to consume AI securely and make it accessible to everyone. Barnett says that Cloudflare achieves those goals in three key ways.

“One, as I mentioned, is operating AI inference engines within Cloudflare close to consumers’ eyeballs. The second area is securing the use of AI within the workplace, because, you know, AI has some incredibly positive impacts on people … but the problem is there are some data protection requirements around that,” explains Barnett.

“Finally, is the question of, ‘Could AI be used by the bad guys against the good guys?’ and that’s an area that we’re continuing to explore.”

Just a day earlier, AI News heard from Raviv Raz, Cloud Security Manager at ING, during a session at the expo that focused on the alarming potential of AI-powered cybercrime.

Regarding security models, Barnett discussed the evolution of the zero-trust concept, emphasising its practical applications in enhancing both usability and security. Cloudflare’s own journey with zero-trust began with a focus on usability, leading to the development of its own zero-trust network access products.

“We have servers everywhere and engineers everywhere that need to reboot those servers. In 2015, that involved VPNs and two-factor authentication… so we built our own zero-trust network access product for our own use that meant the user experiences for engineers rebooting servers in far-flung places was a lot better,” says Barnett.

“After 2015, the world started to realise that this approach had great security benefits so we developed that product and launched it in 2018 as Cloudflare Access.”

Cloudflare’s innovative strides also include leveraging NVIDIA GPUs to accelerate machine learning tasks on its edge network. This technology enables organisations to run inference tasks – such as image recognition – close to end-users, ensuring low latency and optimal performance.

“We launched Workers AI, which means that organisations around the world – in fact, individuals as well – can run their inference tasks at a very close place to where the consumers of that inference are,” explains Barnett.

“You could ask a question, ‘Cat or not cat?’, to a trained cat detection engine very close to the people that need it. We’re doing that in a way that makes it easily accessible to organisations looking to use AI to benefit their business.”
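Mechanically, running an inference task like “cat or not cat” through Workers AI boils down to a single HTTP request against an account-scoped model endpoint. The sketch below assembles such a request in Python; the URL shape, model name, and response format are assumptions drawn from Cloudflare’s public documentation rather than from the interview, so treat it as illustrative.

```python
from urllib import request

API_BASE = "https://api.cloudflare.com/client/v4/accounts"


def build_inference_request(account_id: str, model: str,
                            api_token: str, image_bytes: bytes):
    """Build an HTTP request for a Workers AI model endpoint.

    The endpoint shape below (accounts/{id}/ai/run/{model}) is an
    assumption based on Cloudflare's public REST docs, not something
    stated in the interview.
    """
    url = f"{API_BASE}/{account_id}/ai/run/{model}"
    return request.Request(
        url,
        data=image_bytes,  # raw image bytes for an image-classification model
        headers={"Authorization": f"Bearer {api_token}"},
        method="POST",
    )


# Hypothetical usage: send an image to an image-classification model.
# request.urlopen(req) would return JSON along the lines of
# {"result": [{"label": "tabby cat", "score": 0.92}, ...]} (shape assumed).
req = build_inference_request("my-account-id", "@cf/microsoft/resnet-50",
                              "my-token", b"...image bytes...")
```

Because the request runs on whichever Cloudflare location is nearest the caller, the round trip stays short, which is the low-latency point Barnett is making.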

For developers interested in AI, Barnett outlined Cloudflare’s role in supporting the deployment of machine learning models. While machine learning training is typically conducted outside Cloudflare, the company excels in providing low-latency inference engines that are essential for real-time applications like image recognition.

Our conversation with Barnett shed light on Cloudflare’s commitment to cloud-native architecture, AI accessibility, and cybersecurity. As the industry continues to advance, Cloudflare remains at the forefront of delivering speed and security in the AI era.

You can watch our full interview with Dave Barnett below:

(Photo by ryan baker on Unsplash)

See also: JPMorgan CEO: AI will be used for ‘every single process’

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Cyber Security & Cloud Expo, Edge Computing Expo, and Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Microsoft releases Azure OpenAI Service and will add ChatGPT ‘soon’
https://www.artificialintelligence-news.com/2023/01/17/microsoft-releases-azure-openai-service-add-chatgpt-soon/
Tue, 17 Jan 2023 11:22:58 +0000

The post Microsoft releases Azure OpenAI Service and will add ChatGPT ‘soon’ appeared first on AI News.

Microsoft has announced the general availability of the Azure OpenAI Service and plans to add ChatGPT in the near future.

Currently, Azure OpenAI Service provides access to some of the most powerful AI models in the world—including Codex and DALL-E 2.

A “fine-tuned” version of GPT-3.5 will also be available through Azure OpenAI Service soon.
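In practice, models in Azure OpenAI Service are consumed through per-resource deployments rather than OpenAI’s own endpoint. A minimal sketch of how such a request is assembled is below; the URL shape and api-version value are assumptions based on Azure’s REST reference at the time of writing, and the resource and deployment names are placeholders.

```python
import json
from urllib import request


def build_completion_request(resource: str, deployment: str,
                             api_key: str, prompt: str):
    """Build a request for an Azure OpenAI completions deployment.

    "resource" is your Azure OpenAI resource name and "deployment" is
    whatever name you gave the model when deploying it; both are
    placeholders here, as is the api-version.
    """
    url = (f"https://{resource}.openai.azure.com/openai/deployments/"
           f"{deployment}/completions?api-version=2022-12-01")
    body = json.dumps({"prompt": prompt, "max_tokens": 100}).encode()
    return request.Request(
        url,
        data=body,
        headers={"api-key": api_key, "Content-Type": "application/json"},
        method="POST",
    )


# Hypothetical usage: summarisation, one of the use cases cited later
# in this article.
req = build_completion_request("my-resource", "my-gpt35-deployment",
                               "MY-KEY", "Summarise the following: ...")
```

Note the authentication difference from OpenAI’s public API: Azure uses a resource-scoped `api-key` header rather than a bearer token, which is what lets enterprises keep access inside their own Azure tenancy.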

Azure OpenAI Service was unveiled in November 2021. However, until now the service was not generally available.

In the months since its unveiling, Microsoft and OpenAI have demonstrated more of the models’ capabilities.

In June 2021, Microsoft-owned GitHub launched ‘Copilot’—a controversial AI programmer that can help developers write and improve their code.

Copilot has continued to see regular enhancements. Just this week, GitHub Next unveiled a project called Code Brushes which uses machine learning to update code “like painting with Photoshop”.

In October 2022, Microsoft announced that the impressive text-to-image generative AI model DALL-E 2 would be integrated with the new Designer app and Bing Image Creator.

DALL-E 2, alongside others like Midjourney and Stable Diffusion, also stirred controversy and spurred protests from artists.

Beyond integrating DALL-E 2 in the Bing Image Creator, Microsoft is rumoured to be preparing to use ChatGPT to enhance Bing’s search capability and challenge Google’s dominance.

While the AI models have caused their fair share of concerns and raised important questions around everything from copyright to the wider societal impact, Microsoft and OpenAI have shown how powerful the models are.

“Azure OpenAI Service has the potential to enhance our content production in several ways, including summarization and translation, selection of topics, AI tagging, content extraction, and style guide rule application,” said Jason McCartney, Vice President of Engineering at Al Jazeera.

“We are excited to see this service go to general availability so it can help us further contextualize our reporting by conveying the opinion and the other opinion.”

By making Azure OpenAI Service generally available, the two companies are enabling more businesses to access tools that can improve their operations.

“At Moveworks, we see Azure OpenAI Service as an important component of our machine learning architecture. It enables us to solve several novel use cases, such as identifying gaps in our customer’s internal knowledge bases and automatically drafting new knowledge articles based on those gaps,” commented Vaibhav Nivargi, CTO and Founder at Moveworks.

“Given that so much of the modern enterprise relies on language to get work done, the possibilities are endless—and we look forward to continued collaboration and partnership with Azure OpenAI Service.”

You can find out more about Azure OpenAI Service here.

(Image Credit: Microsoft)

Related: OpenAI opens waitlist for paid version of ChatGPT

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Babylon Health taps Google Cloud to boost scalability and innovation
https://www.artificialintelligence-news.com/2022/03/28/babylon-health-google-cloud-boost-scalability-innovation/
Mon, 28 Mar 2022 12:01:47 +0000

The post Babylon Health taps Google Cloud to boost scalability and innovation appeared first on AI News.

AI-powered healthcare service Babylon Health has announced a partnership with Google Cloud to boost scalability and innovation.

London-based Babylon Health is a digital-first health service provider that uses AI and machine learning technology to provide access to health information to people whenever and wherever they need it.

The company has partnered with private and public organisations across the UK, North America, South-East Asia, and Rwanda with the aim of making healthcare more accessible and affordable for 24 million patients worldwide.

“Our job is to help people to stay well and we’re on a mission to provide affordable, accessible health care to everyone in the world,” explains Richard Noble, Engineering Director of Data at Babylon.

Babylon Health’s rapid growth has led it to seek a partner to help it scale.

By partnering with Google Cloud, the company claims that it’s been able to:

  • Increase event data ingestion from 1 TB per week to 190 TB daily
  • Reduce the wait time for users to access data from six months to a week
  • Integrate over 100 data sources – providing access to 80 billion data points
  • Save hundreds of hours of work by automatically transcribing 100,000 video consultations in 2021

Babylon Health needs to store and process huge amounts of sensitive data.

“We work with a lot of private patient data and we must ensure that it stays private,” explains Natalie Godec, cloud engineer at Babylon. “At the same time, we must enable our teams to innovate with that data while meeting different national regulatory standards.”

Therefore, Babylon Health required a partner it felt could handle such demands.

“We chose Google Cloud because we knew it could scale with us and support us with our data science and analysis and we could build the tools we needed with it quickly,” added Noble. “It offers the solutions that enable us to focus on our core business, access to health.”

Babylon Health says the move to Google Cloud has enabled it to better analyse its data using AI to unlock new tools and features that help clinicians and users alike. While building a new data model and giving access to users initially took six months, the company says it now takes under a week.

In London, Babylon Health offers its ‘GP at Hand’ service which – in partnership with the NHS – acts as a digital GP practice. Patients can connect to NHS clinicians remotely 24/7 and even be issued prescriptions if required. Where physical examinations are needed, patients will be directed to a suitable venue.

However, GP at Hand has been criticised as “cherry-picking” healthier patients—taking resources away from local GP practices that are often trying to care for sicker, more elderly patients.

Growing pains

While initial problems are to be expected from any relatively new service, poor advice from a healthcare service could result in unnecessary suffering, long-term complications, or even death.

In 2018, Dr David Watkins – a consultant oncologist at Royal Marsden Hospital – reached out to AI News to alert us to Babylon Health’s chatbot giving unsafe advice.

Dr Watkins provided numerous examples of clearly dangerous advice given by the chatbot.

Babylon Health called Dr Watkins a “troll” who has “targeted members of our staff, partners, clients, regulators and journalists and tweeted defamatory content about us”.

According to Babylon Health, Dr Watkins conducted 2,400 tests of the chatbot in a bid to discredit the service while raising “fewer than 100 test results which he considered concerning”.

Babylon Health claims that in just 20 cases did Dr Watkins find genuine errors while others were “misrepresentations” or “mistakes,” according to Babylon’s own “panel of senior clinicians” who remain unnamed.

Dr Watkins called Babylon’s claims “utterly nonsense” and questioned where the startup got its figures from, as “there are certainly not 2,400 completed triage assessments”. He estimates he conducted between 800 and 900 full triages, some of them repeat tests to see whether Babylon Health had fixed the issues he had previously highlighted.

That same year, Babylon Health published a paper claiming that its AI could diagnose common diseases as well as human physicians. The Royal College of General Practitioners, the British Medical Association, Fraser and Wong, and the Royal College of Physicians all issued statements disputing the paper’s claims.

Dr Watkins has acknowledged that Babylon Health’s chatbot has improved and has substantially reduced its error rate. In 2018, when Dr Watkins first reached out to us, he says this rate was “one in one”.

In 2020, Babylon Health claimed in a paper that it can now appropriately triage patients in 85 percent of cases.

Hopefully, the partnership with Google Cloud continues to improve Babylon Health’s abilities to help it achieve its potentially groundbreaking aim to deliver 24/7 access to healthcare wherever a patient is.

(Photo by Hush Naidoo Jade Photography on Unsplash)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo. The next events in the series will be held in Santa Clara on 11-12 May 2022, Amsterdam on 20-21 September 2022, and London on 1-2 December 2022.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Paravision boosts its computer vision and facial recognition capabilities
https://www.artificialintelligence-news.com/2021/09/29/paravision-boosts-its-computer-vision-and-facial-recognition-capabilities/
Wed, 29 Sep 2021 13:06:14 +0000

The post Paravision boosts its computer vision and facial recognition capabilities appeared first on AI News.

US-based Paravision has announced updates to boost its computer vision and facial recognition capabilities across mobile, on-premise, edge, and cloud deployments.

“From cloud to edge, Paravision’s goal is to help our partners develop and deploy transformative solutions around face recognition and computer vision,” said Joey Pritikin, Chief Product Officer at Paravision.

“With these sweeping updates to our product family, and with what has become possible in terms of accuracy, speed, usability and portability, we see a remarkable opportunity to unite disparate applications with a coherent sense of identity that bridges physical spaces and cyberspace.”

A new Scaled Vector Search (SVS) capability acts as a search engine to provide accurate, rapid, and stable face matching on large databases that may contain tens of millions of identities. Paravision claims the SVS engine supports hundreds of transactions per second with extremely low latencies.

Another scaling solution called Streaming Container 5 enables the processing of video at over 250 frames per second from any number of streams. The solution features advanced face tracking to ensure that identities remain accurate even in busy environments.

With more enterprises than ever looking to the latency-busting and privacy-enhancing benefits of edge computing, Paravision has partnered with Teknique to co-create a series of hardware and software reference designs that enable the rapid development of face recognition and computer vision capabilities at the edge.

Teknique is a leader in the development of hardware based on designs from California-based fabless semiconductor company Ambarella.

Paravision’s Face SDK has been enhanced for smart cameras powered by Ambarella CVflow chipsets. The update enables facial recognition on CVflow-powered cameras to achieve up to 40 frames per second full pipeline performance.

A new Liveness and Anti-spoofing SDK also adds new safeguards for Ambarella-powered facial recognition solutions. The toolkit uses Ambarella’s visible light, near-infrared, and depth-sensing capabilities to determine whether the camera is seeing a live subject or whether it’s being tricked by recorded footage or a dummy image.

On the mobile side, Paravision has released its Face SDK for Android. The SDK includes face detection, landmarks, quality assessment, template creation, and 1-to-1 or 1-to-many matching. Reference applications are included which include UI/UX recommendations and tools.
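Paravision’s SDK internals are not public, but template-based matching of this kind is conventionally done by comparing fixed-length face embeddings with a similarity metric. Below is a generic, pure-Python sketch of 1-to-1 verification and 1-to-many identification; the function names, threshold, and tiny three-dimensional “templates” are illustrative stand-ins, not Paravision’s API (real embeddings run to hundreds of dimensions).

```python
import math


def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


def verify(probe, template, threshold=0.6):
    """1-to-1 matching: does the probe match this enrolled template?"""
    return cosine_similarity(probe, template) >= threshold


def identify(probe, gallery, threshold=0.6):
    """1-to-many matching: best match across a gallery, or None."""
    best_id, best_score = None, threshold
    for identity, template in gallery.items():
        score = cosine_similarity(probe, template)
        if score >= best_score:
            best_id, best_score = identity, score
    return best_id


# Toy enrolled gallery and a probe embedding (values invented).
gallery = {"alice": [0.9, 0.1, 0.0], "bob": [0.0, 0.8, 0.6]}
probe = [0.88, 0.15, 0.02]
```

At production scale the linear scan in `identify` is replaced by an approximate nearest-neighbour index, which is essentially what Paravision’s Scaled Vector Search engine provides for galleries of tens of millions of identities.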

Last but certainly not least, Paravision has announced the availability of its first person-level computer vision SDK. The new SDK is designed to go “beyond face recognition” to detect the presence and position of individuals and unlock new use cases.

Provided examples of real-world applications for the computer vision SDK include occupancy analysis, the ability to spot tailgating, as well as custom intention or subject attributes.

“With Person Detection, users could determine whether employees are allowed access to a specific area, are wearing a mask or hard hat, or appear to be in distress,” the company explains. “It can also enable useful business insights such as metrics about queue times, customer throughput or to detect traveller bottlenecks.”

With these extensive updates, Paravision is securing its place as one of the most exciting companies in the AI space.

Paravision is ranked the US leader across several of NIST’s Face Recognition Vendor Test evaluations including 1:1 verification, 1:N identification, performance for paperless travel, and performance with face masks.

(Photo by Daniil Kuželev on Unsplash)

Find out more about Digital Transformation Week North America, taking place on 9-10 November 2021, a virtual event and conference exploring advanced DTX strategies for a ‘digital everything’ world.

Google launches fully managed cloud ML platform Vertex AI
https://www.artificialintelligence-news.com/2021/05/19/google-launches-fully-managed-cloud-ml-platform-vertex-ai/
Wed, 19 May 2021 15:33:44 +0000

The post Google launches fully managed cloud ML platform Vertex AI appeared first on AI News.

Google Cloud has launched Vertex AI, a fully managed cloud platform that simplifies the deployment and maintenance of machine learning models.

Vertex was announced during this year’s virtual I/O developer conference, somewhat breaking from Google’s tradition of using the keynote to focus on its mobile and web development tools. Announcing the platform during the keynote shows how important the company believes it to be for a wide range of developers.

Google claims that using Vertex enables models to be trained with up to 80 percent fewer lines of code when compared to competing platforms.

Bradley Shimmin, Chief Analyst for AI Platforms, Analytics, and Data Management at Omdia, said:

“Data science practitioners hoping to put AI to work across the enterprise aren’t looking to wrangle tooling. Rather, they want tooling that can tame the ML lifecycle. Unfortunately, that is no small order.

It takes a supportive infrastructure capable of unifying the user experience, plying AI itself as a supportive guide, and putting data at the very heart of the process — all while encouraging the flexible adoption of diverse technologies.”

Vertex brings together Google Cloud’s AI solutions into a single environment where models can go from experimentation all the way to production.

Andrew Moore, VP and GM of Cloud AI and Industry Solutions at Google Cloud, said:

“We had two guiding lights while building Vertex AI: get data scientists and engineers out of the orchestration weeds, and create an industry-wide shift that would make everyone get serious about moving AI out of pilot purgatory and into full-scale production.

We are very proud of what we came up with in this platform, as it enables serious deployments for a new generation of AI that will empower data scientists and engineers to do fulfilling and creative work.”

Vertex provides access to Google’s MLOps toolkit which the company uses internally for workloads involving computer vision, conversation, and language.

Other MLOps features supported by Vertex include Vizier, which increases the rate of experimentation; Feature Store to help practitioners serve, share, and reuse ML features; and Experiments to accelerate the deployment of models into production with faster model selection.

Some high-profile companies were given early access to Vertex. Among them is ModiFace, a part of L’Oréal that focuses on the use of AR and AI to revolutionise the beauty industry.

Jeff Houghton, COO at ModiFace, said:

“We provide an immersive and personalized experience for people to purchase with confidence whether it’s a virtual try-on at web check out, or helping to understand what brand product is right for each individual.

With more and more of our users looking for information at home, on their phone, or at any other touchpoint, Vertex AI allowed us to create technology that is incredibly close to actually trying the product in real life.”

ModiFace uses Vertex to train AI models for all of its new services. For example, the company’s skin diagnostic service is trained on thousands of images from L’Oréal’s Research & Innovation arm and is combined with ModiFace’s AI algorithm to create tailor-made skincare routines.

Another firm that is benefiting from Vertex’s capabilities is Essence, a media agency that is part of London-based global advertising and communications giant WPP.

With Vertex AI, Essence’s developers and data analysts are able to regularly update models to keep pace with the rapidly-changing world of human behaviours and channel content.

Those are just two examples of companies whose operations are already being greatly enhanced through the use of Vertex. Now the floodgates have been opened, we’re sure there’ll be many more stories over the coming years and we can’t wait to hear about them.

You can learn how to get started with Vertex AI here.

(Photo by John Baker on Unsplash)

Interested in hearing industry leaders discuss subjects like this? Attend the co-located 5G Expo, IoT Tech Expo, Blockchain Expo, AI & Big Data Expo, and Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London, and Amsterdam.

NVIDIA chucks its MLPerf-leading A100 GPU into Amazon’s cloud
https://www.artificialintelligence-news.com/2020/11/03/nvidia-mlperf-a100-gpu-amazon-cloud/
Tue, 03 Nov 2020 15:55:37 +0000

The post NVIDIA chucks its MLPerf-leading A100 GPU into Amazon’s cloud appeared first on AI News.

NVIDIA’s A100 set a new record in the MLPerf benchmark last month and now it’s accessible through Amazon’s cloud.

Amazon Web Services (AWS) first launched a GPU instance 10 years ago with the NVIDIA M2050. It’s rather poetic that, a decade on, NVIDIA is now providing AWS with the hardware to power the next generation of groundbreaking innovations.

The A100 outperformed CPUs in this year’s MLPerf by up to 237x in data centre inference. A single NVIDIA DGX A100 system – with eight A100 GPUs – provides the same performance as nearly 1,000 dual-socket CPU servers on some AI applications.

“We’re at a tipping point as every industry seeks better ways to apply AI to offer new services and grow their business,” said Ian Buck, Vice President of Accelerated Computing at NVIDIA, following the benchmark results.

Businesses can access the A100 in AWS’ P4d instance. NVIDIA claims the instances reduce the time to train machine learning models by up to 3x with FP16 and up to 6x with TF32 compared to the default FP32 precision.
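Those speed-ups come from trading numeric precision for throughput: FP16 stores an 11-bit significand against FP32’s 24 bits, while NVIDIA’s TF32 format keeps FP32’s 8-bit exponent range with a 10-bit significand. The rounding cost of half precision can be seen with nothing but Python’s `struct` module (TF32 itself has no stdlib encoding, so only FP16 and FP32 are shown):

```python
import struct


def round_to_fp16(x: float) -> float:
    """Round a Python float through IEEE 754 half precision ('e')."""
    return struct.unpack("e", struct.pack("e", x))[0]


def round_to_fp32(x: float) -> float:
    """Round a Python float through IEEE 754 single precision ('f')."""
    return struct.unpack("f", struct.pack("f", x))[0]


value = 0.1
print(round_to_fp32(value))  # close to 0.1 (24-bit significand)
print(round_to_fp16(value))  # visibly further from 0.1 (11-bit significand)
```

Training tolerates this rounding well in practice (gradients are noisy anyway), while the narrower format halves memory traffic and lets tensor cores run far more operations per cycle, which is where the claimed 3x and 6x speed-ups come from.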

Each P4d instance features eight NVIDIA A100 GPUs. If even more performance is required, customers are able to access over 4,000 GPUs at a time using AWS’s Elastic Fabric Adaptor (EFA).

Dave Brown, Vice President of EC2 at AWS, said:

“The pace at which our customers have used AWS services to build, train, and deploy machine learning applications has been extraordinary. At the same time, we have heard from those customers that they want an even lower-cost way to train their massive machine learning models.

Now, with EC2 UltraClusters of P4d instances powered by NVIDIA’s latest A100 GPUs and petabit-scale networking, we’re making supercomputing-class performance available to virtually everyone, while reducing the time to train machine learning models by 3x, and lowering the cost to train by up to 60% compared to previous generation instances.”

P4d supports 400Gbps networking and makes use of NVIDIA’s technologies including NVLink, NVSwitch, NCCL, and GPUDirect RDMA to further accelerate deep learning training workloads.

Some of AWS’ customers across various industries have already begun exploring how the P4d instance can help their business.

Karley Yoder, VP & GM of Artificial Intelligence at GE Healthcare, commented:

“Our medical imaging devices generate massive amounts of data that need to be processed by our data scientists. With previous GPU clusters, it would take days to train complex AI models, such as Progressive GANs, for simulations and view the results.

Using the new P4d instances reduced processing time from days to hours. We saw two- to three-times greater speed on training models with various image sizes while achieving better performance with increased batch size and higher productivity with a faster model development cycle.”

For an example from a different industry, the research arm of Toyota is exploring how P4d can improve its existing work in developing self-driving vehicles and groundbreaking new robotics.

“The previous generation P3 instances helped us reduce our time to train machine learning models from days to hours,” explained Mike Garrison, Technical Lead of Infrastructure Engineering at Toyota Research Institute.

“We are looking forward to utilizing P4d instances, as the additional GPU memory and more efficient float formats will allow our machine learning team to train with more complex models at an even faster speed.”

P4d instances are currently available in the US East (N. Virginia) and US West (Oregon) regions. AWS says further availability is planned soon.

You can find out more about P4d instances and how to get started here.

Interested in hearing industry leaders discuss subjects like this? Attend the co-located 5G Expo, IoT Tech Expo, Blockchain Expo, AI & Big Data Expo, and Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London, and Amsterdam.

NVIDIA sets another AI inference record in MLPerf
https://www.artificialintelligence-news.com/2020/10/22/nvidia-sets-another-ai-inference-record-mlperf/
Thu, 22 Oct 2020 09:16:41 +0000

The post NVIDIA sets another AI inference record in MLPerf appeared first on AI News.

]]>
NVIDIA has set yet another record for AI inference in MLPerf with its A100 Tensor Core GPUs.

MLPerf consists of five inference benchmarks which cover the main three AI applications today: image classification, object detection, and translation.

“Industry-standard MLPerf benchmarks provide relevant performance data on widely used AI networks and help make informed AI platform buying decisions,” said Rangan Majumder, VP of Search and AI at Microsoft.

Last year, NVIDIA led all five benchmarks for both server and offline data centre scenarios with its Turing GPUs. A dozen companies participated.

Twenty-three companies participated in this year's MLPerf, but NVIDIA maintained its lead with the A100 outperforming CPUs by up to 237x in data centre inference.

For perspective, NVIDIA notes that a single NVIDIA DGX A100 system – with eight A100 GPUs – provides the same performance as nearly 1,000 dual-socket CPU servers on some AI applications.

“We’re at a tipping point as every industry seeks better ways to apply AI to offer new services and grow their business,” said Ian Buck, Vice President of Accelerated Computing at NVIDIA.

“The work we’ve done to achieve these results on MLPerf gives companies a new level of AI performance to improve our everyday lives.”

The widespread availability of NVIDIA’s AI platform through every major cloud and data centre infrastructure provider is unlocking huge potential for companies across various industries to improve their operations.



]]>
https://www.artificialintelligence-news.com/2020/10/22/nvidia-sets-another-ai-inference-record-mlperf/feed/ 1
Eggplant launches AI-powered software testing in the cloud https://www.artificialintelligence-news.com/2020/10/06/eggplant-ai-powered-software-testing-cloud/ https://www.artificialintelligence-news.com/2020/10/06/eggplant-ai-powered-software-testing-cloud/#respond Tue, 06 Oct 2020 11:11:17 +0000 http://artificialintelligence-news.com/?p=9929 Automation specialists Eggplant have launched a new AI-powered software testing platform. The cloud-based solution aims to help accelerate the delivery of software in a rapidly-changing world while maintaining a high bar of quality. Gareth Smith, CTO of Eggplant, said: “The launch of our cloud platform is a significant milestone in our mission to rid the... Read more »

The post Eggplant launches AI-powered software testing in the cloud appeared first on AI News.

]]>
Automation specialists Eggplant have launched a new AI-powered software testing platform.

The cloud-based solution aims to help accelerate the delivery of software in a rapidly-changing world while maintaining a high bar of quality.

Gareth Smith, CTO of Eggplant, said:

“The launch of our cloud platform is a significant milestone in our mission to rid the world of bad software. In our new normal, delivering speed and agility at scale has never been more critical.

Every business can easily tap into Eggplant's AI-powered automation platform to accelerate the pace of delivery while ensuring a high-quality digital experience.” 

Enterprises have accelerated their shift to the cloud due to the pandemic and resulting increases in things such as home working.

Recent research from Centrify found that 51 percent of businesses which embraced a cloud-first model were able to handle the challenges presented by COVID-19 far more effectively.

Eggplant’s Digital Automation Intelligence (DAI) Platform features:

  • Cloud-based end-to-end automation: The scalable fusion engine provides frictionless and efficient continuous and parallel end-to-end testing in the cloud, for any apps and websites, and on any target platforms. 
  • Monitoring insights: The addition of advanced user experience (UX) data points and metrics enables customers to benchmark their applications' UX performance. These insights, combined with UX behaviour data, help improve SEO. 
  • Fully automated self-healing test assets: The use of AI identifies the tests needed and builds and runs them automatically, under full user control. These tests are self-healing, and automatically adapt as the system-under-test evolves.   

The solution helps to support the “citizen developer” movement—using AI to enable no-code/low-code development for people with minimal programming knowledge.

Both cloud and AI ranked highly in a recent study (PDF) by Deloitte of the most relevant technologies “to operate in the new normal”. Cloud and cybersecurity were joint first with 80 percent of respondents, followed by cognitive and AI tools (73%) and the IoT (65%).

Eggplant’s combination of AI and cloud technologies should help businesses to deal with COVID-19’s unique challenges and beyond.

(Photo by CHUTTERSNAP on Unsplash)



]]>
https://www.artificialintelligence-news.com/2020/10/06/eggplant-ai-powered-software-testing-cloud/feed/ 0
NVIDIA’s AI-focused Ampere GPUs are now available in Google Cloud https://www.artificialintelligence-news.com/2020/07/08/nvidia-ai-ampere-gpus-available-google-cloud/ https://www.artificialintelligence-news.com/2020/07/08/nvidia-ai-ampere-gpus-available-google-cloud/#respond Wed, 08 Jul 2020 10:56:12 +0000 http://artificialintelligence-news.com/?p=9734 Google Cloud users can now harness the power of NVIDIA’s Ampere GPUs for their AI workloads. The specific GPU added to Google Cloud is the NVIDIA A100 Tensor Core which was announced just last month. NVIDIA says the A100 “has come to the cloud faster than any NVIDIA GPU in history.” NVIDIA claims the A100... Read more »

The post NVIDIA’s AI-focused Ampere GPUs are now available in Google Cloud appeared first on AI News.

]]>
Google Cloud users can now harness the power of NVIDIA’s Ampere GPUs for their AI workloads.

The specific GPU added to Google Cloud is the NVIDIA A100 Tensor Core which was announced just last month. NVIDIA says the A100 “has come to the cloud faster than any NVIDIA GPU in history.”

NVIDIA claims the A100 boosts training and inference performance by up to 20x over its predecessors. Large AI models like BERT can be trained in just 37 minutes on a cluster of 1,024 A100s.

For those who enjoy their measurements in teraflops (TFLOPS), the A100 delivers around 19.5 TFLOPS in single-precision performance and 156 TFLOPS for Tensor Float 32 workloads.
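Taken at face value, those quoted figures imply that TF32 delivers roughly eight times the peak throughput of standard single precision on the same chip; a quick sanity check of the arithmetic:

```python
# Sanity-check the throughput ratio implied by the A100 figures quoted above.
fp32_tflops = 19.5   # single-precision (FP32) peak, per the article
tf32_tflops = 156.0  # Tensor Float 32 peak, per the article

ratio = tf32_tflops / fp32_tflops
print(f"TF32 is ~{ratio:.0f}x FP32 peak throughput")  # → TF32 is ~8x FP32 peak throughput
```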

Manish Sainani, Director of Product Management at Google Cloud, said:

“Google Cloud customers often look to us to provide the latest hardware and software services to help them drive innovation on AI and scientific computing workloads.

With our new A2 VM family, we are proud to be the first major cloud provider to market NVIDIA A100 GPUs, just as we were with NVIDIA T4 GPUs. We are excited to see what our customers will do with these new capabilities.”

The announcement couldn’t have arrived at a better time – with many looking to harness AI for solutions to the COVID-19 pandemic, in addition to other global challenges such as climate change.

Aside from AI training and inference, other things customers will be able to achieve with the new capabilities include data analytics, scientific computing, genomics, edge video analytics, and 5G services.

The new Ampere-based data center GPUs are now available in Alpha on Google Cloud. Users can access instances of up to 16 A100 GPUs, which provides a total of 640GB of GPU memory and 1.3TB of system memory.
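The 640GB figure follows directly from the launch A100's 40GB of HBM2 per card; as a quick check:

```python
# Aggregate GPU memory for the largest A2 instance described above.
gpus_per_instance = 16
hbm_per_a100_gb = 40  # the launch A100 shipped with 40GB of HBM2 per GPU

total_gpu_memory_gb = gpus_per_instance * hbm_per_a100_gb
print(total_gpu_memory_gb)  # → 640
```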

You can register your interest for access here.



]]>
https://www.artificialintelligence-news.com/2020/07/08/nvidia-ai-ampere-gpus-available-google-cloud/feed/ 0
Box will launch ‘Skills Kit’ for building custom AI integrations https://www.artificialintelligence-news.com/2018/08/31/box-skills-lot-building-ai-integrations/ https://www.artificialintelligence-news.com/2018/08/31/box-skills-lot-building-ai-integrations/#respond Fri, 31 Aug 2018 14:21:31 +0000 https://d3c9z94rlb3c1a.cloudfront.net/?p=3689 California-based cloud content management and file sharing service provider Box recently announced that its Skills Kit platform, which allows organisations and developers to build AI integrations for interacting with stored content on their own, will be available to all customers in December 2018. The Box Skills framework was first announced in 2017 and was developing... Read more »

The post Box will launch ‘Skills Kit’ for building custom AI integrations appeared first on AI News.

]]>
California-based cloud content management and file sharing service provider Box recently announced that its Skills Kit platform, which allows organisations and developers to build AI integrations for interacting with stored content on their own, will be available to all customers in December 2018.

The Box Skills framework was first announced in 2017, and an additional layer called ‘Box Skills Kit’ has been in development since its inception. The latter is a toolkit that allows companies to develop their own bespoke versions of these integrations. The toolkit has attracted development from the likes of IBM, Microsoft, Google, Deloitte, and AIM Consulting.

Chief product officer at Box, Jeetu Patel, said: “Artificial intelligence has the potential to unlock incredible insights, and we are building the world’s best framework, in Box Skills, for bringing that intelligence to enterprise content.”

The Skills Kit has already been used by spirits firm Remy Cointreau. This work involved taking the basic Box Skill for automatically matching metadata to uploaded images and modifying it to identify specific company products. As a result, uploaded images are sorted into specific folders without the need for human interaction or verification.

Box also revealed that its Box Skills platform, which earlier only offered pre-built AI integrations, can now host custom AI models built by third-party AI firms. This means that if an organisation prefers a specific machine learning model built with IBM Watson Studio, Google Cloud AutoML, Microsoft Azure Custom Vision, or AWS SageMaker, it can now be integrated into the Box platform to utilise the stored data.
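Conceptually, a custom Skill is a small web service: Box sends it an event when a file is uploaded, the service calls an external ML model, and the result is written back as metadata on the file. The sketch below is a simplified illustration of that flow only — the event fields, the `classify` stand-in, and the metadata-card shape are assumptions for demonstration, not the exact Box Skills API.

```python
import json

def classify(file_name: str) -> str:
    """Stand-in for a call to an external custom model; a real Skill
    would invoke the chosen provider's API here."""
    return "cognac" if "cognac" in file_name.lower() else "unknown"

def handle_skill_event(raw_event: str) -> dict:
    """Parse a (simplified) Skills-style upload event and build a
    metadata card holding the model's label for the uploaded file."""
    event = json.loads(raw_event)
    file_name = event["source"]["name"]
    label = classify(file_name)
    return {
        "file_id": event["source"]["id"],
        "card": {"type": "keyword", "entries": [{"text": label}]},
    }

# Simulated upload event, as a webhook body might deliver it.
event = json.dumps({"source": {"id": "1234", "name": "cognac-bottle.jpg"}})
print(handle_skill_event(event))
```

In the Rémy Cointreau-style use case above, the label returned by the model would decide which folder the image is routed to.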

The company also announced updates to its core automation services, which now enable customers to build their own scripts for repetitive workloads. For instance, a marketing team could automate the creation of a template at the beginning of every month and notify specific users to begin collaborating on a new pitch.

Box’s solution appears to be aimed at smaller work groups that have predictable, repetitive tasks in between periods of ad hoc collaboration. It is less suited to more complicated or unpredictable tasks.

The dashboard for creating these pre-scripted events is very simple, as every automation is based on the premise of ‘if this, then that’. This means that automated processes can be designed quickly by using the drop-down menus.
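The ‘if this, then that’ model maps naturally onto simple trigger/action rules; a minimal sketch of the idea (the event and action names here are invented for illustration, not Box's actual vocabulary):

```python
# A toy 'if this, then that' rule engine: each rule pairs a trigger
# event with the action to fire when that event occurs.
rules = [
    {"if": "month_start",   "then": "create_template"},
    {"if": "file_uploaded", "then": "notify_collaborators"},
]

def fire(event: str) -> list[str]:
    """Return the actions triggered by an event, in rule order."""
    return [rule["then"] for rule in rules if rule["if"] == event]

print(fire("month_start"))  # → ['create_template']
```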

Box Skills supports over 20 different types of input and output, and includes options for targeting metadata, specific files, or entire folders.

Are you looking forward to the release of Box Skills? Let us know in the comments.



]]>
https://www.artificialintelligence-news.com/2018/08/31/box-skills-lot-building-ai-integrations/feed/ 0