DeepMind, Elon Musk, and others pledge not to build lethal AI

  • 3 min read
  • 18 Jul 2018

Leading researchers and figures from across the tech industry have signed a pledge agreeing not to develop lethal weapons with AI. The pledge, which was published today (18 July) to coincide with the International Joint Conference on Artificial Intelligence in Sweden, asserted "the decision to take a human life should never be delegated to a machine."

The pledge was coordinated by the Future of Life Institute, a charity which 'mitigates existential risks to humanity'. The organization was previously behind an unsuccessful letter calling on the UN to ban "killer robots", which shared some, but not all, of its signatories with the current pledge.

Who signed the AI pledge?


The pledge's signatories include some of the leading names in the world of AI. DeepMind has thrown its support behind the letter, along with founders Demis Hassabis and Shane Legg. Elon Musk has also signed, taking time out from his spat with members of the Thai cave rescue mission, and Skype founder Jaan Tallinn is also lending his support. Elsewhere, the pledge has support from a significant number of academics working on AI, including Stuart Russell from UC Berkeley and Yoshua Bengio from the University of Montreal.

Specifically, the pledge focuses on weapons that use AI to remove human decision-making from lethal force. However, what this means in practice isn't straightforward, making it incredibly difficult to legislate against such weapons. As a piece in Wired argued last year, banning autonomous weapons simply may not be practical.

It's also worth noting that the pledge does not cover the use of artificial intelligence for non-lethal purposes. Speaking to The Verge, military analyst Paul Scharre was critical of the pledge: "What seems to be lacking is sustained engagement from AI researchers in explaining to policymakers why they are concerned about autonomous weapons," he is quoted as saying.

Here's how the letter ends:

We, the undersigned, call upon governments and government leaders to create a future with strong international norms, regulations and laws against lethal autonomous weapons. These currently being absent, we opt to hold ourselves to a high standard: we will neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons. We ask that technology companies and organizations, as well as leaders, policymakers, and other individuals, join us in this pledge.

While the message represents a step in the right direction from industry leaders, whether it amounts to real change is another matter. With pledges and open letters coming thick and fast over the last few years, perhaps it's time for concrete action.

Read next: 

Google Employees Protest against the use of Artificial Intelligence in Military

5 reasons government should regulate technology

The New AI Cold War Between China and the USA
