Amazon Archives - AI News
https://www.artificialintelligence-news.com/tag/amazon/

Amazon invests $4B in Anthropic to boost AI capabilities
25 September 2023
https://www.artificialintelligence-news.com/2023/09/25/amazon-invests-4b-anthropic-boost-ai-capabilities/

Amazon has announced an investment of up to $4 billion into Anthropic, an emerging AI startup renowned for its innovative Claude chatbot.

Anthropic was founded by siblings Dario and Daniela Amodei, who were previously associated with OpenAI. This latest investment signifies Amazon’s strategic move to bolster its presence in the ever-intensifying AI arena.

“We are excited to use AWS’s Trainium chips to develop future foundation models,” said Dario Amodei, co-founder and CEO of Anthropic. “Since announcing our support of Amazon Bedrock in April, Claude has seen significant organic adoption from AWS customers.

“By significantly expanding our partnership, we can unlock new possibilities for organisations of all sizes, as they deploy Anthropic’s safe, state-of-the-art AI systems together with AWS’s leading cloud technology.”

While Amazon’s investment in Anthropic may seem overshadowed by Microsoft’s reported $13 billion commitment to OpenAI, it is a clear indication of Amazon’s ambition in the rapidly-evolving AI landscape. The collaboration between Amazon and Anthropic holds the promise of reshaping the AI sector with innovative developments.

“We have tremendous respect for Anthropic’s team and foundation models, and believe we can help improve many customer experiences, short and long-term, through our deeper collaboration,” said Andy Jassy, CEO of Amazon.

“Customers are quite excited about Amazon Bedrock, AWS’ new managed service that enables companies to use various foundation models to build generative AI applications on top of, as well as AWS Trainium, AWS’ AI training chip, and our collaboration with Anthropic should help customers get even more value from these two capabilities.”

Anthropic’s flagship product, the Claude AI model, distinguishes itself by claiming a higher level of safety compared to its competitors.

Claude and its advanced iteration, Claude 2, are large language model-based chatbots similar in functionality to OpenAI’s ChatGPT and Google’s Bard. They excel in tasks like text translation, code generation, and answering a variety of questions.

What sets Claude apart is its ability to autonomously revise responses, eliminating the need for human moderation. This unique feature positions Claude as a safer and more dependable AI tool, especially in contexts where precise, unbiased information is crucial.

Claude’s capacity to handle larger prompts also makes it particularly suitable for tasks involving extensive business or legal documents, offering a valuable edge in industries reliant on meticulous data analysis.

As part of this strategic investment, Amazon will acquire a minority ownership stake in Anthropic. Amazon is set to integrate Anthropic’s cutting-edge technology into a range of its products, including the Amazon Bedrock service, designed for building AI applications. 

In return, Anthropic will leverage Amazon’s custom-designed chips for the development, training, and deployment of its future AI foundation models. The partnership also solidifies Anthropic’s commitment to Amazon Web Services (AWS) as its primary cloud provider.

In the initial phase, Amazon has committed $1.25 billion to Anthropic, with an option to increase its investment by an additional $2.75 billion. If the full $4 billion investment materialises, it will become the largest publicly-known investment linked to AWS.

Anthropic’s partnership with Amazon comes alongside its existing collaboration with Google, where Google holds approximately a 10 percent stake following a $300 million investment earlier this year. Anthropic has affirmed its intent to maintain this relationship with Google and continue offering its technology through Google Cloud, showcasing its commitment to broadening its reach across the industry.

In a rapidly-advancing landscape, Amazon’s strategic investment in Anthropic underscores its determination to remain at the forefront of AI innovation and sets the stage for exciting future developments.

(Image Credit: Anthropic)

See also: OpenAI reveals DALL-E 3 text-to-image model

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

MIT launches cross-disciplinary program to boost AI hardware innovation
31 March 2022
https://www.artificialintelligence-news.com/2022/03/31/mit-launches-cross-disciplinary-program-boost-ai-hardware-innovation/

MIT has launched a new academia and industry partnership called the AI Hardware Program that aims to boost research and development.

“A sharp focus on AI hardware manufacturing, research, and design is critical to meet the demands of the world’s evolving devices, architectures, and systems,” says Anantha Chandrakasan, dean of the MIT School of Engineering, and Vannevar Bush Professor of Electrical Engineering and Computer Science. 

“Knowledge-sharing between industry and academia is imperative to the future of high-performance computing.”

There are five inaugural members of the program:

  • Amazon
  • Analog Devices
  • ASML
  • NTT Research
  • TSMC

As the diversity of the inaugural members shows, the program is intended to be a cross-disciplinary effort.

“As AI systems become more sophisticated, new solutions are sorely needed to enable more advanced applications and deliver greater performance,” commented Daniel Huttenlocher, dean of the MIT Schwarzman College of Computing and Henry Ellis Warren Professor of Electrical Engineering and Computer Science.

“Our aim is to devise real-world technological solutions and lead the development of technologies for AI in hardware and software.”

A key goal of the program is to help create more energy-efficient systems.

“We are all in awe at the seemingly superhuman capabilities of today’s AI systems. But this comes at a rapidly increasing and unsustainable energy cost,” explained Jesús del Alamo, the Donner Professor in MIT’s Department of Electrical Engineering and Computer Science.

“Continued progress in AI will require new and vastly more energy-efficient systems. This, in turn, will demand innovations across the entire abstraction stack, from materials and devices to systems and software. The program is in a unique position to contribute to this quest.”

Other key areas of exploration include:

  • Analog neural networks
  • New CMOS designs
  • Heterogeneous integration for AI systems
  • Monolithic-3D AI systems
  • Analog nonvolatile memory devices
  • Software-hardware co-design
  • Intelligence at the edge
  • Intelligent sensors
  • Energy-efficient AI
  • Intelligent Internet of Things (IIoT)
  • Neuromorphic computing
  • AI edge security
  • Quantum AI
  • Wireless technologies
  • Hybrid-cloud computing
  • High-performance computation

It’s an extensive list and an ambitious project. However, the AI Hardware Program is off to a great start with the inaugural members bringing significant talent and expertise in their respective fields to the table.

“We live in an era where paradigm-shifting discoveries in hardware, systems communications, and computing have become mandatory to find sustainable solutions—solutions that we are proud to give to the world and generations to come,” says Aude Oliva, Senior Research Scientist in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) and Director of Strategic Industry Engagement at the MIT Schwarzman College of Computing.

The program is being co-led by Jesús del Alamo and Aude Oliva. Anantha Chandrakasan will serve as its chair.

More information about the AI Hardware Program can be found here.

(Photo by Nejc Soklič on Unsplash)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo. The next events in the series will be held in Santa Clara on 11-12 May 2022, Amsterdam on 20-21 September 2022, and London on 1-2 December 2022.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Amazon will continue to ban police from using its facial recognition AI
24 May 2021
https://www.artificialintelligence-news.com/2021/05/24/amazon-continue-ban-police-using-facial-recognition-ai/

Amazon will extend a ban it enacted last year on the use of its facial recognition technology for law enforcement purposes.

The web giant’s Rekognition service is one of the most powerful facial recognition tools available. Last year, Amazon imposed a one-year moratorium that banned its use by police departments following a string of cases where facial recognition services – from various providers – were found to be inaccurate and/or misused by law enforcement.

Amazon has now extended its ban indefinitely.

Facial recognition services have already led to wrongful arrests that disproportionately impacted marginalised communities.

Last year, the American Civil Liberties Union (ACLU) filed a complaint against the Detroit police after Robert Williams, a Black man, was arrested on his front lawn “as his wife Melissa looked on and as his daughters wept from the trauma” following a misidentification by a facial recognition system.

Williams was held overnight in a “crowded and filthy” cell without being given any reason for his arrest. He was released on a cold and rainy January night and forced to wait outside on the curb for approximately an hour while his wife scrambled to find childcare so that she could come and pick him up.

“Facial recognition is inherently dangerous and inherently oppressive. It cannot be reformed or regulated. It must be abolished,” said Evan Greer, Deputy Director of digital rights group Fight for the Future.

Clearview AI – a controversial facial recognition provider that scrapes data about people from across the web and is used by approximately 2,400 agencies across the US alone – boasted in January that police use of its system jumped 26 percent following the Capitol raid.

Last year, the UK and Australia launched a joint probe into Clearview AI’s practices. Clearview AI was also forced to suspend operations in Canada after the federal Office of the Privacy Commissioner of Canada opened an investigation into the company.

Many states, countries, and even some police departments are taking matters into their own hands and banning the use of facial recognition by law enforcement. Various rights groups continue to apply pressure and call for more to follow.

Human rights group Liberty won the first international case banning the use of facial recognition technology for policing in August last year. Liberty launched the case on behalf of Cardiff, Wales resident Ed Bridges who was scanned by the technology first on a busy high street in December 2017 and again when he was at a protest in March 2018.

Following the case, the Court of Appeal ruled that South Wales Police’s use of facial recognition technology breaches privacy rights, data protection laws, and equality laws. South Wales Police had used facial recognition technology around 70 times – with around 500,000 people estimated to have been scanned by May 2019 – but must now halt its use entirely.

Facial recognition tests in the UK so far have been nothing short of a complete failure. An initial trial at the 2016 Notting Hill Carnival failed to identify a single person. A follow-up trial the following year produced no legitimate matches but 35 false positives.

A 2019 independent report into the Met Police’s facial recognition trials concluded that it was verifiably accurate in just 19 percent of cases.

(Photo by Bermix Studio on Unsplash)

Interested in hearing industry leaders discuss subjects like this? Attend the co-located 5G Expo, IoT Tech Expo, Blockchain Expo, AI & Big Data Expo, and Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London, and Amsterdam.

AWS announces nine major updates for its ML platform SageMaker
9 December 2020
https://www.artificialintelligence-news.com/2020/12/09/aws-nine-major-updates-ml-platform-sagemaker/

Amazon Web Services (AWS) has announced nine major new updates for its cloud-based machine learning platform, SageMaker.

SageMaker aims to provide a machine learning service which can be used to build, train, and deploy ML models for virtually any use case.

During this year’s re:Invent conference, AWS made several announcements to further improve SageMaker’s capabilities.

Swami Sivasubramanian, VP of Amazon Machine Learning at AWS, said:

“Hundreds of thousands of everyday developers and data scientists have used our industry-leading machine learning service, Amazon SageMaker, to remove barriers to building, training, and deploying custom machine learning models. One of the best parts about having such a widely-adopted service like SageMaker is that we get lots of customer suggestions which fuel our next set of deliverables.

Today, we are announcing a set of tools for Amazon SageMaker that makes it much easier for developers to build end-to-end machine learning pipelines to prepare, build, train, explain, inspect, monitor, debug, and run custom machine learning models with greater visibility, explainability, and automation at scale.”

The first announcement is Data Wrangler, a feature which aims to automate the preparation of data for machine learning.

Data Wrangler enables customers to choose the data they want from their various data stores and import it with a single click. Over 300 built-in data transformers are included to help customers normalise, transform, and combine features without having to write any code.

Frank Farrall, Principal of AI Ecosystems and Platforms Leader at Deloitte, comments:

“SageMaker Data Wrangler enables us to hit the ground running to address our data preparation needs with a rich collection of transformation tools that accelerate the process of machine learning data preparation needed to take new products to market.

In turn, our clients benefit from the rate at which we scale deployments, enabling us to deliver measurable, sustainable results that meet the needs of our clients in a matter of days rather than months.”

The second announcement is Feature Store. Amazon SageMaker Feature Store provides a new repository that makes it easy to store, update, retrieve, and share machine learning features for training and inference.

Feature Store aims to overcome the problem of storing features which are mapped to multiple models. A purpose-built feature store helps developers to access and share features that make it much easier to name, organise, find, and share sets of features among teams of developers and data scientists. Because it resides in SageMaker Studio – close to where ML models are run – AWS claims it provides single-digit millisecond inference latency.

Mammad Zadeh, VP of Engineering, Data Platform at Intuit, says:

“We have worked closely with AWS in the lead up to the release of Amazon SageMaker Feature Store, and we are excited by the prospect of a fully managed feature store so that we no longer have to maintain multiple feature repositories across our organization.

Our data scientists will be able to use existing features from a central store and drive both standardisation and reuse of features across teams and models.”

Next up is SageMaker Pipelines, which AWS claims is the first purpose-built, easy-to-use continuous integration and continuous delivery (CI/CD) service for machine learning.

Developers can define each step of an end-to-end machine learning workflow including the data-load steps, transformations from Amazon SageMaker Data Wrangler, features stored in Amazon SageMaker Feature Store, training configuration and algorithm set up, debugging steps, and optimisation steps.
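
To make that concrete, below is a minimal sketch of how a pipeline with a single training step might be defined using the SageMaker Python SDK. It is illustrative only and not code from AWS’s announcement: the role ARN, bucket name, image version, and instance type are placeholder assumptions to replace with your own values.

```python
# Hypothetical example: define and launch a one-step SageMaker Pipeline.
import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput
from sagemaker.workflow.pipeline import Pipeline
from sagemaker.workflow.steps import TrainingStep

session = sagemaker.Session()
ROLE_ARN = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder
BUCKET = "example-ml-bucket"                                         # placeholder

# Estimator using SageMaker's built-in XGBoost container.
estimator = Estimator(
    image_uri=sagemaker.image_uris.retrieve(
        "xgboost", session.boto_region_name, version="1.2-1"
    ),
    role=ROLE_ARN,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path=f"s3://{BUCKET}/output",
    sagemaker_session=session,
)

# A single training step; real pipelines chain data preparation,
# training, evaluation, and deployment steps in the same way.
train_step = TrainingStep(
    name="TrainModel",
    estimator=estimator,
    inputs={"train": TrainingInput(f"s3://{BUCKET}/train/", content_type="text/csv")},
)

pipeline = Pipeline(name="ExamplePipeline", steps=[train_step])
pipeline.upsert(role_arn=ROLE_ARN)  # register (or update) the pipeline definition
pipeline.start()                    # kick off an execution
```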

SageMaker Clarify may be one of the most important features debuted by AWS this week, given the growing scrutiny of bias in AI systems.

Clarify aims to provide bias detection across the machine learning workflow, enabling developers to build greater fairness and transparency into their ML models. Rather than turn to often time-consuming open-source tools, developers can use the integrated solution to quickly detect and counter any bias in models.

Andreas Heyden, Executive VP of Digital Innovations for the DFL Group, says:

“Amazon SageMaker Clarify seamlessly integrates with the rest of the Bundesliga Match Facts digital platform and is a key part of our long-term strategy of standardising our machine learning workflows on Amazon SageMaker.

By using AWS’s innovative technologies, such as machine learning, to deliver more in-depth insights and provide fans with a better understanding of the split-second decisions made on the pitch, Bundesliga Match Facts enables viewers to gain deeper insights into the key decisions in each match.”

Deep Profiling for Amazon SageMaker automatically monitors system resource utilisation and provides alerts where required for any detected training bottlenecks. The feature works across frameworks (PyTorch, Apache MXNet, and TensorFlow) and collects system and training metrics automatically without requiring any code changes in training scripts.

Next up, we have Distributed Training on SageMaker, which AWS claims makes it possible to train large, complex deep learning models up to two times faster than current approaches.

Kristóf Szalay, CTO at Turbine, comments:

“We use machine learning to train our in silico human cell model, called Simulated Cell, based on a proprietary network architecture. By accurately predicting various interventions on the molecular level, Simulated Cell helps us to discover new cancer drugs and find combination partners for existing therapies.

Training of our simulation is something we continuously iterate on, but on a single machine each training takes days, hindering our ability to iterate on new ideas quickly.

We are very excited about Distributed Training on Amazon SageMaker, which we are expecting to decrease our training times by 90% and to help us focus on our main task: to write a best-of-the-breed codebase for the cell model training.

Amazon SageMaker ultimately allows us to become more effective in our primary mission: to identify and develop novel cancer drugs for patients.”

SageMaker’s Data Parallelism engine scales training jobs from a single GPU to hundreds or thousands by automatically splitting data across multiple GPUs, improving training time by up to 40 percent.

With edge computing advancements increasing rapidly, AWS is keeping pace with SageMaker Edge Manager.

Edge Manager helps developers to optimise, secure, monitor, and maintain ML models deployed on fleets of edge devices. In addition to helping optimise ML models and manage edge devices, Edge Manager also provides the ability to cryptographically sign models, upload prediction data from devices to SageMaker for monitoring and analysis, and view a dashboard which tracks and provides a visual report on the operation of the deployed models within the SageMaker console.

Igor Bergman, VP of Cloud and Software of PCs and Smart Devices at Lenovo, comments:

“SageMaker Edge Manager will help eliminate the manual effort required to optimise, monitor, and continuously improve the models after deployment. With it, we expect our models will run faster and consume less memory than with other comparable machine-learning platforms.

As we extend AI to new applications across the Lenovo services portfolio, we will continue to require a high-performance pipeline that is flexible and scalable both in the cloud and on millions of edge devices. That’s why we selected the Amazon SageMaker platform. With its rich edge-to-cloud and CI/CD workflow capabilities, we can effectively bring our machine learning models to any device workflow for much higher productivity.”

Finally, SageMaker JumpStart aims to make it easier for developers who have little experience with machine learning deployments to get started.

JumpStart provides developers with an easy-to-use, searchable interface to find best-in-class solutions, algorithms, and sample notebooks. Developers can select from several end-to-end machine learning templates (e.g. fraud detection, customer churn prediction, or forecasting) and deploy them directly into their SageMaker Studio environments.

AWS has been on a roll with SageMaker improvements—delivering more than 50 new capabilities over the past year. After this bumper feature drop, we probably shouldn’t expect any more until we’ve put 2020 behind us.

You can find coverage of AWS’ more cloud-focused announcements via our sister publication CloudTech here.

Interested in hearing industry leaders discuss subjects like this? Attend the co-located 5G Expo, IoT Tech Expo, Blockchain Expo, AI & Big Data Expo, and Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London, and Amsterdam.

NVIDIA chucks its MLPerf-leading A100 GPU into Amazon’s cloud
3 November 2020
https://www.artificialintelligence-news.com/2020/11/03/nvidia-mlperf-a100-gpu-amazon-cloud/

NVIDIA’s A100 set a new record in the MLPerf benchmark last month and now it’s accessible through Amazon’s cloud.

Amazon Web Services (AWS) first launched a GPU instance 10 years ago with the NVIDIA M2050. It’s rather poetic that, a decade on, NVIDIA is now providing AWS with the hardware to power the next generation of groundbreaking innovations.

The A100 outperformed CPUs in this year’s MLPerf by up to 237x in data centre inference. A single NVIDIA DGX A100 system – with eight A100 GPUs – provides the same performance as nearly 1,000 dual-socket CPU servers on some AI applications.

“We’re at a tipping point as every industry seeks better ways to apply AI to offer new services and grow their business,” said Ian Buck, Vice President of Accelerated Computing at NVIDIA, following the benchmark results.

Businesses can access the A100 in AWS’ P4d instance. NVIDIA claims the instances reduce the time to train machine learning models by up to 3x with FP16 and up to 6x with TF32 compared to the default FP32 precision.

Each P4d instance features eight NVIDIA A100 GPUs. If even more performance is required, customers are able to access over 4,000 GPUs at a time using AWS’s Elastic Fabric Adapter (EFA).

Dave Brown, Vice President of EC2 at AWS, said:

“The pace at which our customers have used AWS services to build, train, and deploy machine learning applications has been extraordinary. At the same time, we have heard from those customers that they want an even lower-cost way to train their massive machine learning models.

Now, with EC2 UltraClusters of P4d instances powered by NVIDIA’s latest A100 GPUs and petabit-scale networking, we’re making supercomputing-class performance available to virtually everyone, while reducing the time to train machine learning models by 3x, and lowering the cost to train by up to 60% compared to previous generation instances.”

P4d supports 400Gbps networking and makes use of NVIDIA’s technologies including NVLink, NVSwitch, NCCL, and GPUDirect RDMA to further accelerate deep learning training workloads.

Some of AWS’ customers across various industries have already begun exploring how the P4d instance can help their business.

Karley Yoder, VP & GM of Artificial Intelligence at GE Healthcare, commented:

“Our medical imaging devices generate massive amounts of data that need to be processed by our data scientists. With previous GPU clusters, it would take days to train complex AI models, such as Progressive GANs, for simulations and view the results.

Using the new P4d instances reduced processing time from days to hours. We saw two- to three-times greater speed on training models with various image sizes while achieving better performance with increased batch size and higher productivity with a faster model development cycle.”

For an example from a different industry, the research arm of Toyota is exploring how P4d can improve their existing work in developing self-driving vehicles and groundbreaking new robotics.

“The previous generation P3 instances helped us reduce our time to train machine learning models from days to hours,” explained Mike Garrison, Technical Lead of Infrastructure Engineering at Toyota Research Institute.

“We are looking forward to utilizing P4d instances, as the additional GPU memory and more efficient float formats will allow our machine learning team to train with more complex models at an even faster speed.”

P4d instances are currently available in the US East (N. Virginia) and US West (Oregon) regions. AWS says further availability is planned soon.

You can find out more about P4d instances and how to get started here.

Interested in hearing industry leaders discuss subjects like this? Attend the co-located 5G Expo, IoT Tech Expo, Blockchain Expo, AI & Big Data Expo, and Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London, and Amsterdam.

Amazon uses AI-powered displays to enforce social distancing in warehouses
17 June 2020
https://www.artificialintelligence-news.com/2020/06/17/amazon-ai-displays-enforce-social-distancing-warehouses/

Amazon has turned to an AI-powered solution to help maintain social distancing in its vast warehouses.

Companies around the world are having to look at new ways of safely continuing business as we adapt to the “new normal” of life with the coronavirus.

Amazon has used its AI expertise to create what it calls the Distance Assistant. Using a time-of-flight sensor, often found in modern smartphones, the AI measures the distance between employees.

The AI differentiates people from their background, and what it sees is displayed on a 50-inch screen so workers can quickly see whether they’re keeping a safe distance.

Augmented reality is used to overlay either a green or red circle underneath each employee. As you can probably guess – a green circle means that the employee is a safe distance from others, while a red circle indicates that person needs to give others some personal space.
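
Amazon has not published the Distance Assistant code, but the core decision described here, colouring each person green or red depending on whether anyone else is within a minimum distance, can be sketched in a few lines of Python. Everything below, including the two-metre threshold and the function and variable names, is an illustrative assumption rather than Amazon’s implementation.

```python
from math import dist

SAFE_DISTANCE_M = 2.0  # assumed minimum separation in metres


def classify(people):
    """Map each person id to 'green' or 'red' based on proximity to others.

    `people` is a dict of id -> (x, y) floor position in metres, as might be
    estimated from a depth (time-of-flight) sensor.
    """
    colours = {}
    for pid, pos in people.items():
        too_close = any(
            other != pid and dist(pos, other_pos) < SAFE_DISTANCE_M
            for other, other_pos in people.items()
        )
        colours[pid] = "red" if too_close else "green"
    return colours


# One example frame: two people standing close together, one further away.
print(classify({"a": (0.0, 0.0), "b": (1.2, 0.5), "c": (6.0, 4.0)}))
# {'a': 'red', 'b': 'red', 'c': 'green'}
```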

The whole solution is run locally and does not require access to the cloud to function. Amazon says it’s only deployed Distance Assistant in a handful of facilities so far but plans to roll out “hundreds” more “over the next few weeks.”

While the solution appears rather draconian, it’s a clever – and arguably necessary – way of helping to keep people safe until a vaccine for the virus is hopefully found. However, it will strengthen concerns that the coronavirus will be used to normalise increased surveillance and erode privacy.

Amazon claims it will be making Distance Assistant open-source to help other companies adapt to the coronavirus pandemic and keep their employees safe.

Interested in hearing industry leaders discuss subjects like this? Attend the co-located 5G Expo, IoT Tech Expo, Blockchain Expo, AI & Big Data Expo, and Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London, and Amsterdam.

Amazon makes three major AI announcements during re:Invent 2019
3 December 2019
https://www.artificialintelligence-news.com/2019/12/03/amazon-ai-announcements-reinvent-2019/

Amazon has kicked off its annual re:Invent conference in Las Vegas and made three major AI announcements.

During a midnight keynote, Amazon unveiled Transcribe Medical, SageMaker Operators for Kubernetes, and DeepComposer.

Transcribe Medical

The first announcement we’ll be talking about is likely to have the biggest impact on people’s lives soonest.

Transcribe Medical is designed to transcribe medical speech for primary care. The feature is aware of medical speech in addition to standard conversational diction.

Amazon says Transcribe Medical can be deployed across “thousands” of healthcare facilities to provide clinicians with secure note-taking abilities.

Transcribe Medical offers an API and can work with most microphone-equipped smart devices. The service is fully managed and sends back a stream of text in real-time.

Furthermore, and most importantly, Transcribe Medical is covered under AWS’ HIPAA eligibility and business associate addendum (BAA). This means that any customer that enters into a BAA with AWS can use Transcribe Medical to process and store personal health information legally.
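
As a rough illustration of how the service is called, the sketch below uses the batch API exposed through boto3 (start_medical_transcription_job); the real-time streaming variant described above goes through a separate streaming SDK. The job name, bucket, and file are placeholders rather than values from Amazon’s announcement.

```python
import boto3

transcribe = boto3.client("transcribe", region_name="us-east-1")

# Submit a recorded primary-care conversation for transcription.
transcribe.start_medical_transcription_job(
    MedicalTranscriptionJobName="example-consultation-1",    # placeholder
    LanguageCode="en-US",
    MediaFormat="wav",
    Media={"MediaFileUri": "s3://example-bucket/visit.wav"},  # placeholder
    OutputBucketName="example-bucket",                        # placeholder
    Specialty="PRIMARYCARE",
    Type="CONVERSATION",
)

# Poll for completion; the transcript JSON is written to the output bucket.
job = transcribe.get_medical_transcription_job(
    MedicalTranscriptionJobName="example-consultation-1"
)
print(job["MedicalTranscriptionJob"]["TranscriptionJobStatus"])
```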

SoundLines and Amgen are two partners which Amazon says are already using Transcribe Medical.

Vadim Khazan, president of technology at SoundLines, said in a statement:

“For the 3,500 health care partners relying on our care team optimisation strategies for the past 15 years, we’ve significantly decreased the time and effort required to get to insightful data.”

SageMaker Operators for Kubernetes

The next announcement is Amazon SageMaker Operators for Kubernetes.

Amazon’s SageMaker is a machine learning development platform and this new feature lets data scientists using Kubernetes train, tune, and deploy AI models.

SageMaker Operators can be installed on Kubernetes clusters and jobs can be created using Amazon’s machine learning platform through the Kubernetes API and command line tools.

In a blog post, AWS deep learning senior product manager Aditya Bindal wrote:

“Customers are now spared all the heavy lifting of integrating their Amazon SageMaker and Kubernetes workflows. Starting today, customers using Kubernetes can make a simple call to Amazon SageMaker, a modular and fully-managed service that makes it easier to build, train, and deploy machine learning (ML) models at scale.”

Amazon says that compute resources are pre-configured and optimised, only provisioned when requested, scaled as needed, and shut down automatically when jobs complete.

SageMaker Operators for Kubernetes is generally available in AWS server regions including US East (Ohio), US East (N. Virginia), US West (Oregon), and EU (Ireland).

DeepComposer

Finally, we have DeepComposer. This one is a bit more fun for those who enjoy playing with hardware toys.

Amazon calls DeepComposer the “world’s first” machine learning-enabled musical keyboard. The keyboard features 32 keys spanning two octaves and is designed for developers to experiment with pretrained or custom AI models.

In a blog post, AWS AI and machine learning evangelist Julien Simon explains how DeepComposer taps a Generative Adversarial Network (GAN) to fill in gaps in songs.

After recording a short tune, the developer selects a model for their favourite genre and configures the model’s parameters. Hyperparameters are then set along with a validation sample.

Once this process is complete, DeepComposer then generates a composition which can be played in the AWS console or even shared to SoundCloud (then it’s really just a waiting game for a call from Jay-Z).

Developers itching to get started with DeepComposer can apply for a physical keyboard for when they become available, or get started now with a virtual keyboard in the AWS console.

Interested in hearing industry leaders discuss subjects like this? Attend the co-located 5G Expo, IoT Tech Expo, Blockchain Expo, AI & Big Data Expo, and Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London, and Amsterdam.

Report: Companies like Amazon and Microsoft are ‘putting world at risk’ of killer AI
22 August 2019
https://www.artificialintelligence-news.com/2019/08/22/report-companies-amazon-microsoft-world-risk-ai/

A survey of major players within the industry concludes that leading tech companies like Amazon and Microsoft are putting the world ‘at risk’ of killer AI.

PAX, a Dutch NGO, ranked 50 firms based on three criteria:

  1. Whether the technology they’re developing could be used for killer AI.
  2. Their involvement with military projects.
  3. Whether they’ve committed to not being involved with military applications in the future.

Microsoft and Amazon are named among the world’s ‘highest risk’ tech companies, while Google leads the way among large tech companies implementing proper safeguards.

Google’s ranking among the safest tech companies may be of surprise to some given the company’s reputation for mass data collection. Mountain View was also caught up in an outcry regarding its controversial ‘Project Maven’ contract with the Pentagon.

Project Maven was a contract Google had with the Pentagon to supply AI technology for military drones. Several high-profile employees resigned over the contract, while over 4,000 Google staff signed a petition demanding their management cease the project and never again “build warfare technology.”

Following the Project Maven backlash, Google CEO Sundar Pichai promised in a blog post the company will not develop technologies or weapons that cause harm, or anything which can be used for surveillance violating “internationally accepted norms” or “widely accepted principles of international law and human rights”.

Pichai’s promise not to be involved with such contracts in the future appears to have satisfied PAX in its rankings. Google has since attempted to improve its public image around its AI developments with moves such as the creation of a dedicated ethics panel, but that backfired and the panel collapsed quickly after featuring a member of a right-wing think tank and a defence drone mogul.

“Why are companies like Microsoft and Amazon not denying that they’re currently developing these highly controversial weapons, which could decide to kill people without direct human involvement?” said Frank Slijper, lead author of the report published this week.

Microsoft, which ranks among the highest risk tech companies in PAX’s list, warned investors back in February that its AI offerings could damage the company’s reputation. 

In a quarterly report, Microsoft wrote:

“Some AI scenarios present ethical issues. If we enable or offer AI solutions that are controversial because of their impact on human rights, privacy, employment, or other social issues, we may experience brand or reputational harm.”

Some of Microsoft’s forays into the technology have already proven troublesome, such as chatbot ‘Tay’ which became a racist, sexist, generally-rather-unsavoury character after internet users took advantage of its machine-learning capabilities.

Microsoft and Amazon are both currently bidding for a $10 billion Pentagon contract to provide cloud infrastructure for the US military.

“Tech companies need to be aware that unless they take measures, their technology could contribute to the development of lethal autonomous weapons,” comments Daan Kayser, PAX project leader on autonomous weapons. “Setting up clear, publicly-available policies is an essential strategy to prevent this from happening.”

You can find PAX’s full risk assessment of the companies here (PDF).

Interested in hearing industry leaders discuss subjects like this? Attend the co-located 5G Expo, IoT Tech Expo, Blockchain Expo, AI & Big Data Expo, and Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London, and Amsterdam.

No Rekognition: Police ditch Amazon’s controversial facial recognition
19 July 2019
https://www.artificialintelligence-news.com/2019/07/19/rekognition-police-amazon-facial-recognition/

Orlando Police have decided to ditch Amazon’s controversial facial recognition system Rekognition following technical issues.

Rekognition was called out by the American Civil Liberties Union (ACLU) for erroneously labelling those with darker skin tones as criminals more often in a test using a database of mugshots.

Jacob Snow, Technology and Civil Liberties Attorney at the ACLU Foundation of Northern California, said:

“Face surveillance will be used to power discriminatory surveillance and policing that targets communities of colour, immigrants, and activists. Once unleashed, that damage can’t be undone.”

Amazon disputed the methodology used by the ACLU, claiming the default ‘confidence’ setting of 80 percent was left on when Amazon recommends at least 95 percent for law enforcement purposes.
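
For context on what that setting means in practice, the hedged sketch below shows the kind of Rekognition call involved: boto3’s search_faces_by_image accepts a FaceMatchThreshold, which can be raised to the 95 percent Amazon suggests for law enforcement rather than the 80 percent default. The collection name and image file are placeholders.

```python
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

with open("frame.jpg", "rb") as image_file:  # placeholder capture frame
    response = rekognition.search_faces_by_image(
        CollectionId="example-face-collection",  # placeholder collection
        Image={"Bytes": image_file.read()},
        FaceMatchThreshold=95,  # only return matches at 95%+ similarity
        MaxFaces=5,
    )

for match in response["FaceMatches"]:
    print(match["Face"]["FaceId"], round(match["Similarity"], 1))
```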

Orlando Police was using Rekognition to automatically detect suspected criminals in live footage taken by surveillance cameras. Despite help from Amazon, the police spent 15 months failing to get it to work properly.

“We haven’t even established a stream today,” the city’s chief information officer Rosa Akhtarkhavari told the Orlando Weekly. “We’re talking about more than a year later. We have not, today, established a reliable stream.”

Employees of Amazon recently wrote a letter to CEO Jeff Bezos expressing their concerns over the sale of facial recognition software and other services to US government bodies such as ICE (Immigration and Customs Enforcement).

In their letter, the Amazonians wrote:

“We refuse to build the platform that powers ICE, and we refuse to contribute to tools that violate human rights. As ethically concerned Amazonians, we demand a choice in what we build and a say in how it is used.”

Orlando Police has now cancelled its contract with Amazon. The news will be of some relief to those concerned about the privacy implications of such big brother-like systems.

Interested in hearing industry leaders discuss subjects like this and their use cases? Attend the co-located AI & Big Data Expo events with upcoming shows in Silicon Valley, London, and Amsterdam to learn more. Co-located with the IoT Tech Expo, Blockchain Expo, and Cyber Security & Cloud Expo.

Amazon patent envisions Alexa listening to everything 24/7
29 May 2019
https://www.artificialintelligence-news.com/2019/05/29/amazon-patent-alexa-listening-everything/

A patent filed by Amazon envisions a future where Alexa listens to users 24/7 without the need for a wakeword.

Current digital assistants listen for a wakeword such as “Ok, Google” or “Alexa,” before recording speech for processing. Especially for companies such as Google and Amazon which thrive on knowing everything about users, this helps to quell privacy concerns.

There are some drawbacks to this approach, mainly a lack of context. Future AI assistants will be able to provide more help when armed with information leading up to the request.

For example, say you were discussing booking a seat at your favourite restaurant next Tuesday. After asking, “Alexa, do I have anything on my schedule next Tuesday?” it could respond: “No, would you like me to book a seat at the restaurant you were discussing and add it to your calendar?”

Today, such a task would require three separate requests.

Amazon’s patent isn’t quite as complex just yet. The example provided in the filing envisions allowing the user to say things such as “Play ‘And Your Bird Can Sing’ Alexa, by the Beatles” (note the wakeword coming after the play command).

David Emm, Principal Security Researcher at Kaspersky Lab, said:

“Many Amazon Alexa users will likely be alarmed by today’s news that the company’s latest patent would allow the devices – commonplace in homes across the UK – to record everything a person says before even being given a command. Whilst the patent doesn’t suggest it will be installed in future Alexa-enabled devices, this still signals an alarming development in the further surrender of our personal privacy.

Given the amount of sensitive information exchanged in the comfort of people’s homes, Amazon would be able to access a huge volume of personal information – information that would be of great value to cybercriminals and threat actors. If the data isn’t secured effectively, a successful breach of Amazon’s systems could have a severe knock-on effect on the data security and privacy of huge numbers of people.

If this patent comes into effect, consumers need to be made very aware of the ramifications of this – and to be fully briefed on what data is being collected, how it is being used, and how they can opt out of this collection. Amazon may argue that analysing stored data will make their devices smarter for Alexa owners – but in today’s digital era, such information could be used nefariously, even by trusted parties. For instance, as we saw with Cambridge Analytica, public sector bodies could target election campaigns at those discussing politics.

There’s a world of difference between temporary local storage of sentences, to determine if the command word has been used, and bulk retention of data for long periods, or permanently – even if the listening process is legitimate and consumers have opted in. There have already been criticisms of Amazon for not making it clear what is being recorded and stored – and we are concerned that this latest development shows the company moving in the wrong direction – away from data visibility, privacy, and consent.”

There’s a joke about Uber: society used to tell you not to get into cars with strangers, and now you’re encouraged to order one from your phone. Lyft, meanwhile, has been able to ride in Uber’s wake relatively free of negative PR.

Getting the balance right between innovation and safety can be a difficult task. Pioneers often do things first and face the backlash before the practice becomes somewhat normal. That’s not to advocate Amazon’s possible approach, but we’ve got to be careful that outrage doesn’t halt progress while remaining vigilant about actual dangers.

Interested in hearing industry leaders discuss subjects like this and their use cases? Attend the co-located AI & Big Data Expo events with upcoming shows in Silicon Valley, London, and Amsterdam to learn more. Co-located with the IoT Tech Expo, Blockchain Expo, and Cyber Security & Cloud Expo.
