Biden issues executive order to ensure responsible AI development (30 October 2023) – https://www.artificialintelligence-news.com/2023/10/30/biden-issues-executive-order-responsible-ai-development/

President Biden has issued an executive order aimed at positioning the US at the forefront of AI while ensuring the technology’s safe and responsible use.

The order establishes stringent standards for AI safety and security, safeguards Americans’ privacy, promotes equity and civil rights, protects consumers and workers, fosters innovation and competition, and enhances American leadership on the global stage.

Key actions outlined in the order:

  1. New standards for AI safety and security: The order mandates that developers of powerful AI systems share safety test results and critical information with the U.S. government. Rigorous standards, tools, and tests will be developed to ensure AI systems are safe, secure, and trustworthy before public release. Additionally, measures will be taken to protect against the risks of using AI to engineer dangerous biological materials and combat AI-enabled fraud and deception.
  2. Protecting citizens’ privacy: The President calls on Congress to pass bipartisan data privacy legislation, prioritizing federal support for privacy-preserving techniques, especially those using AI. Guidelines will be developed for federal agencies to evaluate the effectiveness of privacy-preserving techniques, including those used in AI systems.
  3. Advancing equity and civil rights: Clear guidance will be provided to prevent AI algorithms from exacerbating discrimination, especially in areas like housing and federal benefit programs. Best practices will be established for the use of AI in the criminal justice system to ensure fairness.
  4. Standing up for consumers, patients, and students: Responsible use of AI in healthcare and education will be promoted, ensuring that consumers are protected from harmful AI applications while benefiting from its advancements in these sectors.
  5. Supporting workers: Principles and best practices will be developed to mitigate the harms and maximise the benefits of AI for workers, addressing issues such as job displacement, workplace equity, and health and safety. A report on AI’s potential labour-market impacts will be produced, identifying options for strengthening federal support for workers facing labour disruptions due to AI.
  6. Promoting innovation and competition: The order aims to catalyse AI research across the US, promote a fair and competitive AI ecosystem, and expand the ability of highly skilled immigrants and non-immigrants to study, stay, and work in the US to foster innovation in the field.
  7. Advancing leadership abroad: The US will collaborate with other nations to establish international frameworks for safe and trustworthy AI deployment. Efforts will be made to accelerate the development and implementation of vital AI standards with international partners and promote the responsible development and deployment of AI abroad to address global challenges.
  8. Ensuring responsible and effective government adoption: Clear standards and guidelines will be issued for government agencies’ use of AI to protect rights and safety. Efforts will be made to help agencies acquire AI products and services more rapidly and efficiently, and an AI talent surge will be initiated to enhance government capacity in AI-related fields.

The executive order signifies a major step forward for the US in harnessing the potential of AI while safeguarding individuals’ rights and security.

“As we advance this agenda at home, the Administration will work with allies and partners abroad on a strong international framework to govern the development and use of AI,” wrote the White House in a statement.

“The actions that President Biden directed today are vital steps forward in the US’ approach on safe, secure, and trustworthy AI. More action will be required, and the Administration will continue to work with Congress to pursue bipartisan legislation to help America lead the way in responsible innovation.”

The administration’s commitment to responsible innovation is paramount and sets the stage for continued collaboration with international partners to shape the future of AI globally.

(Photo by David Everett Strickler on Unsplash)

See also: UK paper highlights AI risks ahead of global Safety Summit

AI think tank calls GPT-4 a risk to public safety (31 March 2023) – https://www.artificialintelligence-news.com/2023/03/31/ai-think-tank-gpt-4-risk-to-public-safety/

An AI think tank has filed a complaint with the FTC in a bid to stop OpenAI from further commercial deployments of GPT-4.

The Center for Artificial Intelligence and Digital Policy (CAIDP) claims OpenAI has violated section five of the FTC Act, accusing the company of deceptive and unfair practices.

Marc Rotenberg, Founder and President of the CAIDP, said:

“The FTC has a clear responsibility to investigate and prohibit unfair and deceptive trade practices. We believe that the FTC should look closely at OpenAI and GPT-4.

We are specifically asking the FTC to determine whether the company has complied with the guidance the federal agency has issued.”

The CAIDP claims that OpenAI’s GPT-4 is “biased, deceptive, and a risk to privacy and public safety”.

The think tank cited passages in the GPT-4 System Card that describe the model’s potential to reinforce biases and worldviews, including harmful stereotypes and demeaning associations for certain marginalised groups.

In the aforementioned System Card, OpenAI acknowledges that it “found that the model has the potential to reinforce and reproduce specific biases and worldviews, including harmful stereotypical and demeaning associations for certain marginalized groups.”

Furthermore, the document states: “AI systems will have even greater potential to reinforce entire ideologies, worldviews, truths and untruths, and to cement them or lock them in, foreclosing future contestation, reflection, and improvement.”

Other harmful outcomes that OpenAI says GPT-4 could lead to include:

  1. Advice or encouragement for self-harm behaviours
  2. Graphic material such as erotic or violent content
  3. Harassing, demeaning, and hateful content
  4. Content useful for planning attacks or violence
  5. Instructions for finding illegal content

The CAIDP claims that OpenAI released GPT-4 to the public without an independent assessment of its risks.

Last week, the FTC told American companies advertising AI products:

“Merely warning your customers about misuse or telling them to make disclosures is hardly sufficient to deter bad actors.

Your deterrence measures should be durable, built-in features and not bug corrections or optional features that third parties can undermine via modification or removal.”

With its filing, the CAIDP calls on the FTC to investigate the products of OpenAI and other operators of powerful AI systems, prevent further commercial releases of GPT-4, and ensure the establishment of necessary guardrails to protect consumers, businesses, and the commercial marketplace.

Merve Hickok, Chair and Research Director of the CAIDP, commented:

“We are at a critical moment in the evolution of AI products.

We recognise the opportunities and we support research. But without the necessary safeguards established to limit bias and deception, there is a serious risk to businesses, consumers, and public safety.

The FTC is uniquely positioned to address this challenge.”

The complaint was filed as Elon Musk, Steve Wozniak, and other AI experts signed a petition to “pause” development on AI systems more powerful than GPT-4.

However, other high-profile figures believe progress shouldn’t be slowed or halted.

Musk was a co-founder of OpenAI, which was originally created as a nonprofit with the mission of ensuring that AI benefits humanity. Musk resigned from OpenAI’s board in 2018 and has publicly questioned the company’s transformation.

Global approaches to AI regulation

As AI systems become more advanced and powerful, concerns over their potential risks and biases have grown. Organisations such as CAIDP, UNESCO, and the Future of Life Institute are pushing for ethical guidelines and regulations to be put in place to protect the public and ensure the responsible development of AI technology.

UNESCO (United Nations Educational, Scientific, and Cultural Organization) has called on countries to implement its “Recommendation on the Ethics of AI” framework.

Earlier today, Italy banned ChatGPT. The country’s data protection authority said the service will be investigated because it does not have a proper legal basis for collecting personal information about the people using it.

The wider EU is establishing a strict regulatory environment for AI, in contrast to the UK’s relatively “light-touch” approach.

Tim Wright, Partner and specialist tech and AI regulation lawyer at law firm Fladgate, commented on the UK’s vision:

“The regulatory principles set out in the whitepaper simply confirm the Government’s preferred approach which they say will encourage innovation in the space without imposing an undue burden on businesses developing and adopting AI while encouraging fair and ethical use and protecting individuals.

Time will tell if this sector-by-sector approach has the desired effect. What it does do is put the UK on a completely different approach from the EU, which is pushing through a detailed rulebook backed up by a new liability regime and overseen by a single super AI regulator.”

As always, it’s a balancing act between regulation and innovation. Not enough regulation puts the public at risk while too much risks driving innovation elsewhere.

(Photo by Ben Sweet on Unsplash)

Related: What will AI regulation look like for businesses?

Clearview AI used by US police for almost 1M searches (28 March 2023) – https://www.artificialintelligence-news.com/2023/03/28/clearview-ai-us-police-almost-1m-searches/

Facial recognition firm Clearview AI has revealed that it has run almost a million searches for US police.

Facial recognition technology is a controversial topic, and for good reason. Clearview AI’s technology allows law enforcement to upload a photo of a suspect’s face and find matches in a database of billions of images it has collected.

Clearview AI CEO Hoan Ton-That disclosed in an interview with the BBC that the firm has scraped 30 billion images from platforms such as Facebook. The images were taken without the users’ permission.

The company has been repeatedly fined millions in Europe and Australia for breaches of privacy, but US police are still using its powerful software.

Matthew Guariglia from the Electronic Frontier Foundation said that police use of Clearview puts everyone into a “perpetual police line-up.”

While the use of facial recognition by the police is often sold to the public as being used only for serious or violent crimes, Miami Police confirmed to the BBC that it uses Clearview AI’s software for every type of crime.

Miami’s Assistant Chief of Police Armando Aguilar said his team used Clearview AI’s system about 450 times a year, and that it had helped to solve several murders. 

However, there are numerous documented cases of mistaken identity using facial recognition by the police. Robert Williams, for example, was wrongfully arrested on his lawn in front of his family and held overnight in a “crowded and filthy” cell.

“The perils of face recognition technology are not hypothetical — study after study and real-life have already shown us its dangers,” explained Kate Ruane, Senior Legislative Counsel for the ACLU, following the reintroduction of the Facial Recognition and Biometric Technology Moratorium Act.

“The technology’s alarming rate of inaccuracy when used against people of colour has led to the wrongful arrests of multiple black men including Robert Williams, an ACLU client.”

The lack of transparency around police use of facial recognition means the true number of wrongful arrests it has led to is likely far higher.

Civil rights campaigners want police forces that use Clearview AI to openly say when it is used, and for its accuracy to be openly tested in court. They want the systems to be scrutinised by independent experts.

The use of facial recognition technology by police is a contentious issue. While it may help solve crimes, it also poses a threat to civil liberties and privacy.

Ultimately, it’s a fine line between using technology to fight crime and infringing on individual rights, and we need to tread carefully to ensure we don’t cross it.

Related: Clearview AI lawyer: ‘Common law has never recognised a right to privacy for your face’

US introduces new AI chip export restrictions (1 September 2022) – https://www.artificialintelligence-news.com/2022/09/01/us-introduces-new-ai-chip-export-restrictions/

NVIDIA has revealed that it is subject to new US rules restricting the export of its AI chips to China and Russia.

In an SEC filing, NVIDIA says the US government has informed the chipmaker of a new license requirement that impacts two of its GPUs designed to speed up machine learning tasks: the current A100, and the upcoming H100.

“The license requirement also includes any future NVIDIA integrated circuit achieving both peak performance and chip-to-chip I/O performance equal to or greater than thresholds that are roughly equivalent to the A100, as well as any system that includes those circuits,” adds NVIDIA.

The US government has reportedly told NVIDIA that the new rules are geared at addressing the risk of the affected products being used for military purposes.

“While we are not in a position to outline specific policy changes at this time, we are taking a comprehensive approach to implement additional actions necessary related to technologies, end-uses, and end-users to protect US national security and foreign policy interests,” said a US Department of Commerce spokesperson.

China is a large market for NVIDIA and the new rules could affect around $400 million in quarterly sales.

AMD has also been told the new rules will impact its similar products, including the MI200.

As of writing, NVIDIA’s shares were down 11.45 percent from the market open and AMD’s were down 6.81 percent. However, it’s worth noting that it’s been another red day for the wider stock market.

(Photo by Wesley Tingey on Unsplash)

Lyft exec will head the Pentagon’s AI efforts (26 April 2022) – https://www.artificialintelligence-news.com/2022/04/26/lyft-exec-will-head-pentagon-ai-efforts/

Craig Martell, Head of Machine Learning at Lyft, is set to head the Pentagon’s AI efforts.

Breaking Defense was first to report the news, having learned that Martell was expected to be named the Pentagon’s new chief digital and AI officer.

Martell has significant AI industry experience – leading efforts not just at Lyft but also at Dropbox and LinkedIn – but has no experience navigating public-sector bureaucracy. In that regard, Martell will very much be in at the deep end at the Pentagon, something he fully acknowledges.

“I don’t know my ways around the Pentagon yet and I don’t know what levers to pull,” said Martell to Breaking Defense. “So I’m also excited to be partnered up with folks who are really good at that as well.”

Over the first three to six months, Martell expects to spend his time identifying “marquee customers” and the systems that his office will need to improve. His budget for the 2023 fiscal year will be $600 million.

While cutting-edge innovations tend to come from the agile private sector, many contracts for use in the public sector – especially in areas like law enforcement and defense – receive such backlash that they are dropped.

One example is Google’s Project Maven contract with the US Department of Defense to supply AI technology to analyse drone footage. The month after it was leaked, over 4,000 employees signed a petition demanding that Google’s management cease work on Project Maven and promise to never again “build warfare technology.”

Nicolas Chaillan, the Pentagon’s former chief software officer, resigned in September last year in protest after claiming the US has “no competing fighting chance against China in 15 to 20 years” when it comes to AI.

Chaillan argues that a large part of the problem is the reluctance of US companies such as Google to work with the government on AI due to ethical debates over the technology. In contrast, Chinese firms are obligated to work with their government and have little regard for ethics.

It’s hard to imagine there’ll ever be much appetite in the West to compel private companies to provide their technology and knowledge (outside of wartime). Attracting talent like Martell may help the Western public sector gain the kind of agility to keep pace on the global stage without adopting some of the Orwellian practices of rivals.

“If we’re going to be successful in achieving the goals, if we’re going to be successful in being competitive with China, we have to figure out where the best mission value can be found first and that’s going to have to drive what we build, what we design, the policies we come up with,” Martell said.

(Image Credit: By Touch Of Light under CC BY-SA 4.0 license. Image has been cropped.)

Democrats renew push for ‘algorithmic accountability’ (4 February 2022) – https://www.artificialintelligence-news.com/2022/02/04/democrats-renew-push-for-algorithmic-accountability/

Democrats have reintroduced their Algorithmic Accountability Act that seeks to hold tech firms accountable for bias in their algorithms.

The bill is an updated version of one first introduced by Senator Ron Wyden (D-OR) in 2019 but never passed by the House or Senate. The updated bill was introduced this week by Wyden alongside Senator Cory Booker (D-NJ) and Representative Yvette Clarke (D-NY).

Concern about bias in algorithms is increasing as they are used for ever more critical decisions. Bias would lead to inequalities being automated, with some people given more opportunities than others.

“As algorithms and other automated decision systems take on increasingly prominent roles in our lives, we have a responsibility to ensure that they are adequately assessed for biases that may disadvantage minority or marginalised communities,” said Booker.

A human can always be held accountable for a decision to, say, reject a mortgage/loan application. There’s currently little-to-no accountability for algorithmic decisions.

Representative Yvette Clarke explained:

“When algorithms determine who goes to college, who gets healthcare, who gets a home, and even who goes to prison, algorithmic discrimination must be treated as the highly significant issue that it is.

These large and impactful decisions, which have become increasingly void of human input, are forming the foundation of our American society that generations to come will build upon. And yet, they are subject to a wide range of flaws from programming bias to faulty datasets that can reinforce broader societal discrimination, particularly against women and people of colour.

It is long past time Congress act to hold companies and software developers accountable for their discrimination by automation

With our renewed Algorithmic Accountability Act, large companies will no longer be able to turn a blind eye towards the deleterious impact of their automated systems, intended or not. We must ensure that our 21st Century technologies become tools of empowerment, rather than marginalisation and seclusion.”

The bill would force audits of AI systems, with findings reported to the Federal Trade Commission. A public database would be created so decisions can be reviewed, giving consumers confidence.

“If someone decides not to rent you a house because of the colour of your skin, that’s flat-out illegal discrimination. Using a flawed algorithm or software that results in discrimination and bias is just as bad,” commented Wyden.

“Our bill will pull back the curtain on the secret algorithms that can decide whether Americans get to see a doctor, rent a house, or get into a school. Transparency and accountability are essential to give consumers choice and provide policymakers with the information needed to set the rules of the road for critical decision systems.”

In our predictions for the AI industry in 2022, we predicted an increased focus on Explainable AI (XAI). XAI is artificial intelligence in which the results of the solution can be understood by humans and is seen as a partial solution to algorithmic bias.

“Too often, Big Tech’s algorithms put profits before people, from negatively impacting young people’s mental health, to discriminating against people based on race, ethnicity, or gender, and everything in between,” said Senator Tammy Baldwin (D-Wis), who is co-sponsoring the bill.

“It is long past time for the American public and policymakers to get a look under the hood and see how these algorithms are being used and what next steps need to be taken to protect consumers.”

Joining Baldwin in co-sponsoring the Algorithmic Accountability Act are Senators Brian Schatz (D-Hawaii), Mazie Hirono (D-Hawaii), Ben Ray Luján (D-NM), Bob Casey (D-Pa), and Martin Heinrich (D-NM).

A copy of the full bill is available here (PDF).

(Photo by Darren Halstead on Unsplash)

FTC steps in to block Nvidia’s $40B acquisition of Arm (3 December 2021) – https://www.artificialintelligence-news.com/2021/12/03/ftc-steps-in-block-nvidia-40b-acquisition-of-arm/

America’s Federal Trade Commission (FTC) has become the first regulator to sue to block Nvidia’s acquisition of British chip designer Arm.

Arm plays a critical role in the global technology supply chain with its designs used for edge AI chips and processors for smartphones, tablets, desktops, and servers.

It’s of little surprise that Nvidia wants to bring Arm under its wing and is willing to pay $40 billion (£29 billion) for it.

Global regulators, including in the UK and EU, have launched investigations into the deal due to the widespread implications.

Holly Vedova, Director of the Bureau of Competition at the FTC, said in a statement:

“The FTC is suing to block the largest semiconductor chip merger in history to prevent a chip conglomerate from stifling the innovation pipeline for next-generation technologies.

Tomorrow’s technologies depend on preserving today’s competitive, cutting-edge chip markets. This proposed deal would distort Arm’s incentives in chip markets and allow the combined firm to unfairly undermine Nvidia’s rivals.

The FTC’s lawsuit should send a strong signal that we will act aggressively to protect our critical infrastructure markets from illegal vertical mergers that have far-reaching and damaging effects on future innovations.”

The complaint highlights that Nvidia already uses Arm’s designs for areas including DPU SmartNICs, CPUs for cloud computing, and advanced driving systems. The FTC is concerned that Nvidia would have an incentive to use its acquisition of Arm to limit competitors’ access to new designs.

Some of Nvidia’s rivals have offered to invest in Arm if it helps the company to remain independent.

Dr Lil Read, Analyst at GlobalData, commented:

“The Nvidia-ARM deal is on its last legs. The regulatory environment is much tougher now since Qualcomm has formed a consortium to invest in ARM.

The FTC won’t let it be – nor will the UK CMA or the EU regulator. It’s likely that even if the deal managed to clear those hurdles, Chinese regulators would throw another spanner in the works.

Tying the acquisition up for another two years is not in anyone’s interest – not Nvidia’s, and certainly not ARM’s. There could be hope for ARM if a non-chip firm recognises this opportunity for vertical integration – a trend that we increasingly see with the likes of Tesla and Apple.”

Arm founder Hermann Hauser even suggested the merger would amount to “surrendering the UK’s most powerful trade weapon to the US”.

Last month, UK Digital Secretary Nadine Dorries ordered the CMA (Competition & Markets Authority) to launch a “Phase Two” probe into the proposed merger.

As part of its ‘Phase One’ report, the CMA determined the merger has the possibility of a “substantial lessening of competition across four key markets”. Those markets are data centres, the Internet of Things, automotive, and gaming.

The CMA now has 24 weeks to conduct Phase Two of its investigation.

Nvidia, for its part, has promised to work with UK regulators to alleviate concerns. The company has already pledged to keep Arm in the UK and hire more staff.

“Arm is an incredible company and it employs some of the greatest engineering minds in the world,” said Jensen Huang, CEO of Nvidia. “But we believe we can make Arm even more incredible and take it to even higher levels.”

Today’s decision by the FTC to launch a lawsuit makes the likelihood of the merger proceeding ever more remote.

(Photo by NordWood Themes on Unsplash)

CEBR: Automation increases US/UK business revenues, boosts economic resilience (16 November 2021) – https://www.artificialintelligence-news.com/2021/11/16/cebr-automation-us-uk-business-revenues-economic-resilience/

Research conducted by the Centre for Economics and Business Research (CEBR), in conjunction with SnapLogic, has found that automation is having a profound impact on the monthly revenues of UK businesses.

Within three months of investing in automation technologies, UK companies saw an average revenue increase of five percent – or £14 billion – per month.

The impact on US businesses was even higher. Over the same three-month period, US companies witnessed an average year-on-year increase in revenue of seven percent—equivalent to an extra $195 billion per month.

Unsurprisingly, businesses that invested more heavily in automation displayed more resilience during the COVID-19 pandemic.

If the US had entered the pandemic with automation levels similar to Singapore’s, the report suggests the country could have reduced its GDP contraction by $105-212 billion.

The UK, meanwhile, could have reduced its 2020 GDP contraction by around £10-14 billion if it had matched the automation levels of the US.

“Our new research confirms a significant positive relationship between automation and economic resilience,” said Josie Dent, Managing Economist at CEBR.

“The adoption of automation, spurred on by the recent pandemic, has helped organisations shield themselves from disruption and quickly position themselves for accelerated growth.”

Rather than destroy jobs, as some fear, automation is boosting employment.

US companies saw an average annual increase in employment of seven percent – equivalent to 7.2 million jobs – within three months of adopting automation technologies.

Over the same period, their UK counterparts saw an average increase in jobs of four percent—equivalent to around 676,000 roles.

The report suggests that automation has the potential to boost productivity in the UK by 15 percent in the long term. The healthcare, social work, and transportation industries were noted as particularly benefiting from automation.

“Automation has also led to job creation and greater worker productivity, a significant contrast to the economic picture seen in the period following the global financial crisis,” explained Dent.

The pandemic and clear benefits of automation have driven more businesses than ever to invest in relevant technologies.

In the US, companies spent an average of 13 percent of their annual revenue (amounting to $4.4 trillion) on automation-related technologies. In the UK, companies spent an average of 8 percent of their revenue, or £268 billion in total.

“This first of its kind report from Cebr demonstrates the power of automation to help businesses navigate widespread disruption, and shows how it can be used as a tool to accelerate growth in a post-pandemic age,” said Gaurav Dhillon, CEO at SnapLogic. 

“Businesses today need to equip themselves with enterprise automation technologies that will allow them to quickly adapt and execute business strategies in a rapidly-changing world.”

(Photo by Konstantin Evdokimov on Unsplash)

Paravision boosts its computer vision and facial recognition capabilities (29 September 2021) – https://www.artificialintelligence-news.com/2021/09/29/paravision-boosts-its-computer-vision-and-facial-recognition-capabilities/

US-based Paravision has announced updates to boost its computer vision and facial recognition capabilities across mobile, on-premise, edge, and cloud deployments.

“From cloud to edge, Paravision’s goal is to help our partners develop and deploy transformative solutions around face recognition and computer vision,” said Joey Pritikin, Chief Product Officer at Paravision.

“With these sweeping updates to our product family, and with what has become possible in terms of accuracy, speed, usability and portability, we see a remarkable opportunity to unite disparate applications with a coherent sense of identity that bridges physical spaces and cyberspace.”

A new Scaled Vector Search (SVS) capability acts as a search engine to provide accurate, rapid, and stable face matching on large databases that may contain tens of millions of identities. Paravision claims the SVS engine supports hundreds of transactions per second with extremely low latencies.
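To make the idea of large-scale 1-to-N matching concrete, the sketch below shows how a face-identification search over an embedding gallery typically works: embeddings are L2-normalised so a dot product gives cosine similarity, and the top-scoring identities above a threshold are returned. This is a generic, hypothetical illustration in Python/NumPy; the function names, embedding size, and 0.6 threshold are invented, and it does not represent Paravision’s actual SVS engine, which at this scale would also rely on an approximate nearest-neighbour index rather than a brute-force matrix product.

# Hypothetical sketch of 1-to-N face identification over an embedding gallery.
# Not Paravision's SVS API: names, sizes, and the threshold are invented
# purely to illustrate the general vector-search technique.
import numpy as np

def normalise(v: np.ndarray) -> np.ndarray:
    # L2-normalise embeddings so a dot product equals cosine similarity.
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

rng = np.random.default_rng(0)
# Stand-in gallery: 100,000 enrolled identities with 512-D face embeddings.
gallery = normalise(rng.standard_normal((100_000, 512)).astype(np.float32))

def identify(probe: np.ndarray, top_k: int = 5, threshold: float = 0.6):
    # Score the probe against every enrolled identity in one matrix product,
    # then keep the top-k matches that clear the similarity threshold.
    scores = gallery @ normalise(probe)
    best = np.argpartition(-scores, top_k)[:top_k]
    best = best[np.argsort(-scores[best])]
    return [(int(i), float(scores[i])) for i in best if scores[i] >= threshold]

probe = rng.standard_normal(512).astype(np.float32)
print(identify(probe))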

Another scaling solution called Streaming Container 5 enables the processing of video at over 250 frames per second from any number of streams. The solution features advanced face tracking to ensure that identities remain accurate even in busy environments.

With more enterprises than ever looking to the latency-busting and privacy-enhancing benefits of edge computing, Paravision has partnered with Teknique to co-create a series of hardware and software reference designs that enable the rapid development of face recognition and computer vision capabilities at the edge.

Teknique is a leader in the development of hardware based on designs from California-based fabless semiconductor company Ambarella.

Paravision’s Face SDK has been enhanced for smart cameras powered by Ambarella CVflow chipsets. The update enables facial recognition on CVflow-powered cameras to achieve up to 40 frames per second full pipeline performance.

A new Liveness and Anti-spoofing SDK also adds new safeguards for Ambarella-powered facial recognition solutions. The toolkit uses Ambarella’s visible light, near-infrared, and depth-sensing capabilities to determine whether the camera is seeing a live subject or whether it’s being tricked by recorded footage or a dummy image.

On the mobile side, Paravision has released its Face SDK for Android. The SDK includes face detection, landmarks, quality assessment, template creation, and 1-to-1 or 1-to-many matching. Reference applications, including UI/UX recommendations and tools, are also provided.

Last but certainly not least, Paravision has announced the availability of its first person-level computer vision SDK. The new SDK is designed to go “beyond face recognition” to detect the presence and position of individuals and unlock new use cases.

Examples of real-world applications for the computer vision SDK include occupancy analysis, spotting tailgating, and custom intention or subject attributes.

“With Person Detection, users could determine whether employees are allowed access to a specific area, are wearing a mask or hard hat, or appear to be in distress,” the company explains. “It can also enable useful business insights such as metrics about queue times, customer throughput or to detect traveller bottlenecks.”

With these extensive updates, Paravision is securing its place as one of the most exciting companies in the AI space.

Paravision is ranked the US leader across several of NIST’s Face Recognition Vendor Test evaluations including 1:1 verification, 1:N identification, performance for paperless travel, and performance with face masks.

(Photo by Daniil Kuželev on Unsplash)

Hi Auto brings conversational AI to drive-thrus using Intel technology (20 May 2021) – https://www.artificialintelligence-news.com/2021/05/20/hi-auto-conversational-ai-drive-thrus-intel-technology/

Hi Auto is increasing the efficiency of drive-thrus with a conversational AI system powered by Intel technologies.

Drive-thru usage has rocketed over the past year with many indoor restaurants closed due to pandemic-induced restrictions. In fact, research suggests that drive-thru orders in the US alone increased by 22 percent in 2020.

Long queues at drive-thrus have therefore become part of the “new normal” and fast food is no longer the convenient alternative to cooking after a long day of Zoom calls.

Israel-based Hi Auto has created a conversational AI system that greets drive-thru guests, answers their questions, suggests menu items, and enters their orders into the point-of-sale system. If an unrelated question is asked – or the customer orders something that is not on the standard menu – the AI system automatically switches over to a human employee.
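The hand-off behaviour described above lends itself to a simple illustration: the automated agent fills the order when it recognises a menu item and signals for a human employee otherwise. The following Python sketch is entirely hypothetical; the menu, intent rules, and hand-off signal are invented for clarity and do not reflect Hi Auto’s actual implementation.

# Hypothetical routing logic for an automated drive-thru agent: handle
# on-menu orders, hand anything else (off-menu items, unrelated questions)
# over to a human employee. Invented for illustration only.
MENU = {"fried chicken meal": 8.99, "coleslaw": 2.49, "sweet tea": 1.99}

def classify(utterance: str):
    # Very rough intent detection: an order for a known menu item, or "other".
    text = utterance.lower()
    for item in MENU:
        if item in text:
            return "order", item
    return "other", None

def handle_utterance(utterance: str, basket: list) -> str:
    intent, item = classify(utterance)
    if intent == "order":
        basket.append(item)           # in practice, pushed to the point-of-sale system
        return f"Added {item}. Would you like a sweet tea with that?"
    return "HANDOFF_TO_EMPLOYEE"      # a human takes over the conversation

basket = []
print(handle_utterance("Can I get a fried chicken meal please", basket))
print(handle_utterance("Do you take gift cards from 2019?", basket))
print(basket)

In a production system the classification step would be a speech-recognition and language-understanding pipeline rather than substring matching, but the routing decision itself, automate when confident and escalate otherwise, is the behaviour the article describes.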

The first restaurant to trial the system is Lee’s Famous Recipe Chicken in Ohio.

Chuck Doran, Owner and Operator at Lee’s Famous Recipe Chicken, said:

“The automated AI drive-thru has impacted my business in a simple way. We don’t have customers waiting anymore. We greet them as soon as they get to the board and the order is taken correctly.

It’s amazing to see the level of accuracy with the voice recognition technology, which helps speed up service. It can even suggest additional items based on the order, which helps us increase our sales.

If a person is running the drive-thru, they may suggest a sale in one out of 20 orders. With Hi Auto, it happens in every transaction where it’s feasible. So, we see improvements in our average check, service time, and improvements in consistency and customer service.

And, because the cashier is now less stressed, she can focus on customer service as well. A less-burdened employee will be a happier employee and we want happy employees interacting with our customers.”

By reducing the number of staff needed for customer service, more employees can be put to work on fulfilling orders to serve as many people as possible. A recent survey of small businesses found that 42 percent have job openings that can’t be filled, so ensuring that every worker is optimally utilised is critical.

Roy Baharav, CEO and Co-Founder at Hi Auto, commented:

“At Lee’s, we met a team that puts its heart and soul into serving its customers.

We operationalised our AI system based on what we learned from the owners, general managers, and employees. They have embraced the solution and within a short time began reaping the benefits.

We are now applying the process and lessons learned at Lee’s at additional customer sites.”

Hi Auto’s solution runs on Intel Xeon processors in the cloud and Intel NUC.

Joe Jensen, VP in the Internet of Things Group and GM of Retail, Banking, Hospitality and Education at Intel, said:

“We’re increasingly seeing restaurants interested in leveraging AI to deliver actionable data and personalise customer experiences.

With Hi Auto’s solution powered by Intel technology, quick-service restaurants can help their employees be more productive while increasing customer satisfaction and, ultimately, their bottom line.”

Lee’s Famous Recipe Chicken plans to roll out Hi Auto’s solution at more of its branches.

Going forward, Hi Auto plans to add Spanish language support and continue optimising its conversational AI solution. The company says pilots are already underway with some of the largest quick-service restaurants.

(Image Credit: Lee’s Famous Recipe Chicken)
