military Archives - AI News

Palantir demos how AI can be used in the military (28 April 2023)

Palantir has demonstrated how AI can be used for national defense and other military purposes.

The use of AI in the military is highly controversial. In this context, Large Language Models (LLMs) and algorithms must be implemented as ethically as possible.

Palantir believes that’s where its AI Platform (AIP) comes in. AIP offers cutting-edge AI capabilities and claims to ensure that the use of LLMs and AI in the military context is guided by ethical principles.

AIP is able to deploy LLMs and AI across any network, from classified networks to devices on the tactical edge. AIP connects highly sensitive and classified intelligence data to create a real-time representation of the environment.

The solution’s security features let operators define what LLMs and AI can and cannot see, and what they can and cannot do, through safe AI and handoff functions. Such control and governance are crucial for mitigating the significant legal, regulatory, and ethical risks that LLMs and AI pose in sensitive and classified settings.

AIP also implements guardrails to control, govern, and increase trust. As operators and AI take action on the platform, AIP generates a secure digital record of operations. These capabilities are essential for responsible, effective, and compliant deployment of AI in the military.

In a demo showcasing AIP, a military operator responsible for monitoring activity within Eastern Europe receives an alert that military equipment has been amassed in a field 30km from friendly forces.

AIP leverages large language models to let operators quickly ask questions and issue commands such as:

  • What enemy units are in the region?
  • Task new imagery for this location at a resolution of one metre or higher
  • Generate three courses of action to target this enemy equipment
  • Analyse the battlefield, considering a Stryker vehicle and a platoon-size unit
  • How many Javelin missiles does Team Omega have?
  • Assign jammers to each of the validated high-priority communications targets
  • Summarise the operational plan

As the operator poses questions, the LLM draws on real-time information integrated from public and classified sources alike. Data is automatically tagged and protected by classification markings, and AIP enforces which parts of the organisation the LLM can access while respecting each individual’s permissions, role, and need to know.

Every response from AIP retains links back to the underlying data records, giving users the transparency to investigate any output as necessary.

AIP unleashes the power of large language models and cutting-edge AI for defense and military organisations, while aiming to provide the guardrails, ethics, and transparency that such sensitive applications demand.


(Image Credit: Palantir)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Lyft exec will head the Pentagon’s AI efforts (26 April 2022)

Craig Martell, Head of Machine Learning at Lyft, is set to head the Pentagon’s AI efforts.

Breaking Defense first broke the news after learning Martell was destined to be named as the Pentagon’s new chief digital and AI officer.

Martell has significant AI industry experience – leading efforts not just at Lyft but also at Dropbox and LinkedIn – but none navigating public-sector bureaucracy. In that regard, the Pentagon will be very much an “in at the deep end” experience for Martell, something he fully acknowledges.

“I don’t know my way around the Pentagon yet and I don’t know what levers to pull,” Martell told Breaking Defense. “So I’m also excited to be partnered up with folks who are really good at that as well.”

Over the first three to six months, Martell expects to spend his time identifying “marquee customers” and the systems that his office will need to improve. His budget for the 2023 fiscal year will be $600 million.

While cutting-edge innovations tend to come from the agile private sector, many contracts for use in the public sector – especially in areas like law enforcement and defense – receive such backlash that they are dropped.

One example is Google’s Project Maven contract with the US Department of Defense to supply AI technology to analyse drone footage. The month after it was leaked, over 4,000 employees signed a petition demanding that Google’s management cease work on Project Maven and promise to never again “build warfare technology.”

Nicolas Chaillan, the Pentagon’s former chief software officer, resigned in September last year in protest after claiming the US has “no competing fighting chance against China in 15 to 20 years” when it comes to AI.

Chaillan argues that a large part of the problem is the reluctance of US companies such as Google to work with the government on AI due to ethical debates over the technology. In contrast, Chinese firms are obligated to work with their government and have little regard for ethics.

It’s hard to imagine there’ll ever be much appetite in the West to compel private companies to hand over their technology and knowledge (outside of wartime). Attracting talent like Martell may help the Western public sector gain the agility needed to keep pace on the global stage without adopting some of the Orwellian practices of its rivals.

“If we’re going to be successful in achieving the goals, if we’re going to be successful in being competitive with China, we have to figure out where the best mission value can be found first and that’s going to have to drive what we build, what we design, the policies we come up with,” Martell said.

(Image Credit: By Touch Of Light under CC BY-SA 4.0 license. Image has been cropped.)

Unity devs aren’t too happy their work is being sold for military AI purposes (24 August 2021)

Developers from Unity are calling for more transparency after discovering their AI work is being sold to the military.

Video games have pioneered AI developments since Nim was released in 1951. In the decades since, game developers have worked to improve game AI to provide a more enjoyable experience for a growing number of people around the world.

Just imagine the horror if those developers found out their work was instead being used for real military purposes without their knowledge. That’s exactly what developers behind the popular Unity game engine discovered.

According to a Vice report, three former and current Unity employees confirmed that much of the company’s contract work involves AI programming. That would be of little surprise, and little concern, were it not conducted under the “GovTech” department with a seemingly high degree of secrecy.

“It should be very clear when people are stepping into the military initiative part of Unity,” one of Vice’s sources said, on condition of anonymity for fear of reprisal.

Vice discovered several deals with the Department of Defense, including two six-figure contracts for “modeling and simulation prototypes” with the US Air Force.

Unity bosses clearly understand that some employees may not be entirely comfortable with knowing their work could be used for war. One memo instructs managers to use the terms “government” or “defense” instead of “military.”

In an internal Slack group, Unity CEO John Riccitiello promised to have a meeting with employees.

“Whether or not I’m working directly for the government team, I’m empowering the products they’re selling,” wrote Riccitiello. “Do you want to use your tools to catch bad guys?”

That question is likely to receive some passionate responses. After all, few of us are going to forget the backlash and subsequent resignation of Googlers following revelations about the company’s since-revoked ‘Project Maven’ contract with the Pentagon.

You can find Vice’s full report here.

(Photo by Levi Meir Clancy on Unsplash)

Find out more about Digital Transformation Week North America, taking place on 9-10 November 2021, a virtual event and conference exploring advanced DTX strategies for a ‘digital everything’ world.

DARPA’s AI-powered jet fight will be held virtually due to COVID-19 (10 August 2020)

An upcoming event to display and test AI-powered jet fighters will now be held virtually due to COVID-19.

“We are still excited to see how the AI algorithms perform against each other as well as a Weapons School-trained human and hope that fighter pilots from across the Air Force, Navy, and Marine Corps, as well as military leaders and members of the AI tech community will register and watch online,” said Col. Dan Javorsek, program manager in DARPA’s Strategic Technology Office.

“It’s been amazing to see how far the teams have advanced AI for autonomous dogfighting in less than a year.”

DARPA (Defense Advanced Research Projects Agency) is using the AlphaDogfight Trial event to recruit more AI developers for its Air Combat Evolution (ACE) program.

The upcoming event is the final in a series of three and will finish with a bang as the AI-powered F-16 fighter planes virtually take on a human pilot.

“Regardless of whether the human or machine wins the final dogfight, the AlphaDogfight Trials is all about increasing trust in AI,” Javorsek added.

“If the champion AI earns the respect of an F-16 pilot, we’ll have come one step closer to achieving effective human-machine teaming in air combat, which is the goal of the ACE program.”

The first event was held in November last year with early algorithms.

A second event was held in January this year, demonstrating the vast improvements made to the algorithms over a relatively short period. The algorithms took on adversaries created by the Johns Hopkins University Applied Physics Lab.

The third and final event will be streamed live from the Applied Physics Lab (APL) from August 18th-20th.

Eight teams will fly against five APL-developed adversary AI algorithms on day one. On day two, teams will fly against each other in a round-robin tournament.

Day three is when things get most exciting, with the top four teams competing in a single-elimination tournament for the AlphaDogfight Trials Championship. The winning team’s AI will then fly against a real F-16 pilot to test the AI’s abilities against a human.

ACE envisions future air combat eventually being conducted without putting human pilots at risk. In the meantime, DARPA hopes the initiative will help improve human pilots’ trust in fighting alongside AI.

Prior registration is required to view the event. Non-US citizens must register prior to August 11th while Americans have until August 17th.

You can register for the event here.

(Image Credit: DARPA)

Interested in hearing industry leaders discuss subjects like this? Attend the co-located 5G Expo, IoT Tech Expo, Blockchain Expo, AI & Big Data Expo, and Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London, and Amsterdam.

Palantir took over Project Maven defense contract after Google backed out (12 December 2019)

Surveillance firm Palantir took up a Pentagon defense contract known as Project Maven after Google dropped out due to backlash.

Project Maven is a Pentagon initiative aiming to use AI technologies for deploying and monitoring unmanned aerial vehicles (UAVs).

Naturally, Google’s involvement with the initiative received plenty of backlash both internally and externally. At least a dozen employees quit Google while many others threatened to walk out if the firm continued building military products.

The pressure forced Google to abandon the lucrative Pentagon contract – which simply meant it was happily picked up by another company.

According to Business Insider, which broke the news, the company that stepped in to develop Project Maven was Palantir – founded by Peter Thiel, the serial entrepreneur, venture capitalist, and cofounder of PayPal.

Business Insider reporter Becky Peterson wrote that:

“Palantir is working with the Defense Department to build artificial intelligence that can analyze video feeds from aerial drones … Internally at Palantir, where names of clients are kept close to the vest, the project is referred to as ‘Tron,’ after the 1982 Steven Lisberger film.”

In June 2018, Thiel famously said that Google’s decision to pull out from Project Maven but push ahead with Project Dragonfly (a search project for China) amounts to “treason” and should be investigated as such.

Project Maven/Tron is described as being capable of extensive tracking and monitoring of UAVs without human input, but the unclassified information available indicates it will not be able to fire upon targets. This is somewhat in line with the accepted norms being established around the use of AI in the military.

Many experts accept that AI will increasingly be used in the military but are seeking to establish acceptable practices. One of the key principles is that, while an AI can track and offer advice to human operators, it should never be able to make decisions by itself which could lead to loss of life.

The rapid pace at which the Project Maven contract was picked up by another company gives credence to claims made by some tech giants that, rather than pulling out of such contracts altogether – and potentially handing them to less ethical companies – it’s better to help shape them from the inside.

Pentagon is ‘falling behind’ in military AI, claims former NSWC chief (23 October 2019)

The former head of US Naval Special Warfare Command (NSWC) has warned the Pentagon is falling behind adversaries in military AI developments.

Speaking on Tuesday, Rear Adm. Brian Losey said AI is able to provide tactical guidance as well as anticipate enemy actions and mitigate threats. Adversaries with such technology will have a significant advantage.

Losey is retired from the military but is now a partner at San Diego-based Shield AI.

Shield AI specialises in building artificial intelligence systems for the national security sector. The company’s flagship Hivemind AI enables autonomous robots to “see”, “reason”, and “search” the world. Nova is Shield AI’s first Hivemind-powered robot which autonomously searches buildings while streaming video and generating maps.

During a panel discussion at The Promise and The Risk of the AI Revolution conference, Losey said:

“We’re losing a lot of folks because of encounters with the unknown. Not knowing when we enter a house whether hostiles will be there and not really being able to adequately discern whether there are threats before we encounter them. And that’s how we incurred most of our casualties.

“The idea is: can we use autonomy, can we use edge AI, can we use AI for manoeuvre to mitigate risk to operators to reduce casualties?”

AI has clear benefits today for soldiers on the battlefield, national policing, and even areas such as firefighting. In the future, it may be vital for national defense against ever more sophisticated weapons.

Some of the US’ historic adversaries, such as Russia, have already shown off developments such as killer robots and hypersonic missiles. AI will be vital to equalising the capabilities and hopefully act as a deterrent to the use of such weaponry.

“If you’re concerned about national security in the future, then it is imperative that the United States lead AI so that we can unfold the best practices so that we’re not driven by secure AI to assume additional levels of risk when it comes to lethal actions,” Losey said.

Meanwhile, Nobel Peace Prize winner Jody Williams has warned against robots making life-and-death decisions on the battlefield. Williams said it is ‘unethical and immoral’ and can never be undone.

Williams was speaking at the UN in New York following the Project Quarterback announcement from the US military which uses AI to make decisions on what human soldiers should target and destroy.

“We need to step back and think about how artificial intelligence robotic weapons systems would affect this planet and the people living on it,” said Williams during a panel discussion.

It’s almost inevitable AI will be used for military purposes. Arguably, the best we can hope for is to quickly establish international norms for their development and usage to minimise the unthinkable potential damage.

One such norm that many researchers have backed is that AI should only make recommendations on actions to take, but a human should take accountability for any decision made.

A 2017 report by Human Rights Watch chillingly concluded that no one is currently accountable for a robot unlawfully killing someone in the heat of battle.

EU AI Expert Group: Ethical risks are ‘unimaginable’ (11 April 2019)

The EU Commission’s AI expert group has published its assessment of the rapidly-advancing technology and warned it has “unimaginable” ethical risks.

Some of the highlighted risks include lethal autonomous systems, the tracking of individuals, and the ‘scoring’ of people in society.

On the subject of lethal autonomous systems, the experts warn machines with cognitive skills could “decide whom, when and where to fight without human intervention”.

When it comes to tracking individuals, the experts foresee biometric data of people being involuntarily used such as “lie detection [or] personality assessment through micro expressions”.

Citizen scoring is on some people’s minds after being featured in an episode of the dystopian series Black Mirror. The experts note that scoring criteria must be transparent and fair, with scores being challengeable.

The guidelines have been several years in the making and have launched alongside a pilot project for testing how they work in practice.

Experts from various fields across Europe sit in the group, including academic lawyers from Birmingham and Oxford universities.

They concluded: “it is important to build AI systems that are worthy of trust, since human beings will only be able to confidently and fully reap its benefits when the technology, including the processes and people behind the technology, are trustworthy.”

The EU as a whole is looking to invest €20bn (£17bn) every year for the next decade to close the current gap between European developments and those in Asia and North America.

Interested in hearing industry leaders discuss subjects like this and their use cases? Attend the co-located AI & Big Data Expo events with upcoming shows in Silicon Valley, London, and Amsterdam to learn more. Co-located with the IoT Tech Expo, Blockchain Expo, and Cyber Security & Cloud Expo.

US defense department outlines its AI strategy (14 February 2019)

Shortly after President Trump issued his vague AI executive order, the US Defense Department outlined a more comprehensive strategy.

“The impact of artificial intelligence will extend across the entire department, spanning from operations and training to recruiting and healthcare,” DoD CIO Dana Deasy said.

A 17-page document outlines how the DoD intends to advance its AI prowess with five key steps:

  1. Delivering AI-enabled capabilities that address key missions.
  2. Scaling AI’s impact across DoD through a common foundation that enables decentralized development and experimentation.
  3. Cultivating a leading AI workforce.
  4. Engaging with commercial, academic, and international allies and partners.
  5. Leading in military ethics and AI safety.

Given the concerns about the so-called AI ‘arms race’, that final point will prompt a sigh of relief from some people – at least those who believe it.

The DoD will rapidly prototype new innovations, increase research and development, and boost training and recruitment.

Rather than AI replacing jobs, the DoD believes it will empower those currently serving: “The women and men in the US armed forces remain our enduring source of strength; we will use AI-enabled information, tools, and systems to empower, not replace, those who serve.”

Prior to his resignation as US Secretary of Defense, General James Mattis implored the president to create a national strategy for AI. With his defense background, Mattis was concerned the US is not keeping pace with the likes of China.

Here are the example areas in which the DoD believes AI can improve day-to-day operations:

  • Improving situational awareness and decision-making.
  • Increasing the safety of operating equipment.
  • Implementing predictive maintenance and supply.
  • Streamlining business processes (e.g. reducing the time spent on highly manual, repetitive, and frequent tasks).

“The present moment is pivotal: we must act to protect our security and advance our competitiveness,” the DoD document states. “But we must embrace change if we are to reap the benefits of continued security and prosperity for the future.”

Chinese university recruits ‘patriotic’ students to build AI weapons (9 November 2018)

A university in China has recruited 27 boys and four girls to become the world’s youngest AI weapons scientists.

All of the students are under 18 and were picked from a list of 5,000 candidates by the Beijing Institute of Technology (BIT).

Beyond academic prowess, the BIT sought other qualities in the candidates.

“We are looking for qualities such as creative thinking, willingness to fight, a persistence when facing challenges,” a BIT professor told the South China Morning Post.

The recruitment of students from such a young age marks a new point in the race to weaponise AI, primarily led by the US and China.

Students on the ‘Experimental Program for Intelligent Weapons Systems’ course will be mentored by two senior weapons scientists.

Following their first semester, the students will be asked to choose a speciality field in order to be assigned to a relevant defence laboratory for hands-on experience.

The course is four years long and students will be expected to progress onto a PhD at the university to lead China’s AI weapons initiatives.

Last year, Chinese President Xi Jinping emphasised his country will be putting a much greater focus on military AI research.

AI News reported back in July that China is planning for a new era of sea power with unmanned AI-powered submarines. The country hopes to have them operational by the early 2020s to patrol areas home to disputed military bases.

“The AI has no soul. It is perfect for this kind of job,” said Lin Yang, Chief Scientist on the project. “[An AI sub] can be instructed to take down a nuclear-powered submarine or other high-value targets. It can even perform a kamikaze strike.”

Of particular concern is that China’s subs are being designed not to seek human input during the course of a mission. The international norm promoted by AI researchers is that any weaponised AI system ultimately requires human input to make decisions.

If China is prepared to fully automate its submarines, it is likely willing to do the same for other weapons systems.

There’s the infamous story of Soviet Officer Stanislav Petrov, who decided not to launch the country’s nuclear warheads after a computer glitch made it appear that five Minuteman intercontinental ballistic missiles had been launched by the US towards the Soviet Union.

Human instinct averted a nuclear disaster that day.

“We are wiser than the computers,” Petrov said in a 2010 interview with the German magazine Der Spiegel. “We created them.”

Had it been an AI instead of Petrov making the decision in 1983, the outcome would likely have been very different. China’s apparent willingness to fully automate weapons should be a concern to us all.

 Interested in hearing industry leaders discuss subjects like this and their use cases? Attend the co-located AI & Big Data Expo events with upcoming shows in Silicon Valley, London, and Amsterdam to learn more. Co-located with the IoT Tech Expo, Blockchain Expo, and Cyber Security & Cloud Expo.

The post Chinese university recruits ‘patriotic’ students to build AI weapons appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2018/11/09/chinese-university-students-ai-weapons/feed/ 1
Google funding ‘good’ AI may help some forget that military fiasco https://www.artificialintelligence-news.com/2018/10/30/google-funding-good-ai-military-fiasco/ https://www.artificialintelligence-news.com/2018/10/30/google-funding-good-ai-military-fiasco/#respond Tue, 30 Oct 2018 12:49:29 +0000 https://d3c9z94rlb3c1a.cloudfront.net/?p=4139 Google has launched an initiative to fund ‘good’ AI which may help some forget about the questionable military contracts it was involved with. The new initiative, called AI for Social Good, is a joint effort between the company’s philanthropic subsidiary Google.org and its own experts. Kicking off the initiative is the ‘AI Impact Challenge’ which... Read more »

The post Google funding ‘good’ AI may help some forget that military fiasco appeared first on AI News.

]]>
Google has launched an initiative to fund ‘good’ AI which may help some forget about the questionable military contracts it was involved with.

The new initiative, called AI for Social Good, is a joint effort between the company’s philanthropic subsidiary Google.org and its own experts.

Kicking off the initiative is the ‘AI Impact Challenge’, which will award $25 million in funding to non-profits while giving them access to Google’s vast resources.

As part of the initiative, Google partnered with the Pacific Islands Fisheries Science Center of the US National Oceanic and Atmospheric Administration (NOAA) to develop algorithms to identify humpback whale calls.

The algorithms were created using 15 years worth of data and provide vital information about humpback whale presence, seasonality, daily calling behaviour, and population structure.
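The article doesn’t describe how the detection works, but the general idea behind picking whale calls out of long audio recordings can be sketched with a toy frequency-band detector. This is purely illustrative: the function name, the frame parameters, and the frequency band are assumptions, and the actual NOAA/Google system is a trained classifier operating on spectrograms, not a fixed threshold like this.

```python
import numpy as np

def band_energy_detector(signal, sr, band=(100.0, 2000.0), threshold=0.5):
    """Toy detector: flag audio frames whose energy inside `band` (Hz)
    exceeds `threshold` of the frame's total energy. Illustrative only --
    the real pipeline uses a model trained on 15 years of recordings."""
    frame_len = 1024
    freqs = np.fft.rfftfreq(frame_len, d=1.0 / sr)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    flags = []
    for start in range(0, len(signal) - frame_len + 1, frame_len // 2):
        frame = signal[start:start + frame_len] * np.hanning(frame_len)
        power = np.abs(np.fft.rfft(frame)) ** 2
        total = power.sum()
        flags.append(bool(total > 0 and power[in_band].sum() / total > threshold))
    return flags

# Usage: a 500 Hz tone sits inside the band, so every frame is flagged.
sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 500.0 * t)
print(any(band_energy_detector(tone, sr)))  # True
```

A real system would replace the fixed threshold with a classifier learned from labelled spectrogram images, which is what lets it separate whale calls from ship noise and other in-band sounds.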

While it’s great to see Google funding and lending its expertise to important AI projects, this comes against a wider backdrop of Silicon Valley tech giants’ involvement in controversial projects such as defence.

Google itself was embroiled in a backlash over its ‘Project Maven’ defence contract to supply the Pentagon with AI for analysing drone footage. The contract received both internal and external criticism.

Back in April, Google’s infamous ‘Don’t be evil’ motto was removed from its code of conduct’s preface. Now, in the final line, it says: “And remember… don’t be evil, and if you see something that you think isn’t right – speak up!”

Google’s employees spoke up. Over 4,000 signed a petition demanding their management cease the project and never again “build warfare technology.”

Following the Project Maven backlash, Google CEO Sundar Pichai promised in a blog post the company will not develop technologies or weapons that cause harm, or anything which can be used for surveillance violating “internationally accepted norms” or “widely accepted principles of international law and human rights”.

Here are what Google says are the company’s key objectives for AI development:

    1. Be socially beneficial.
    2. Avoid creating or reinforcing unfair bias.
    3. Be built and tested for safety.
    4. Be accountable to people.
    5. Incorporate privacy design principles.
    6. Uphold high standards of scientific excellence.
    7. Be made available for uses that accord with these principles.

That first objective, “be socially beneficial”, is what Google is aiming for with its latest initiative. The company says it’s not against future government contracts as long as they’re ethical.

“We’re entirely happy to work with the US government and other governments in ways that are consistent with our principles,” Google’s AI chief Jeff Dean told reporters Monday.

The post Google funding ‘good’ AI may help some forget that military fiasco appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2018/10/30/google-funding-good-ai-military-fiasco/feed/ 0