Innovation

UN opens formal discussions on AI-powered autonomous weapons, could ban 'killer robots'

Leaders at the UN in Geneva just agreed to officially discuss guidelines for the design, development, and engineering of autonomous weapons. Here's why it matters.

Image: Campaign to Stop Killer Robots

Many current fears around AI and automation center on the idea that a superintelligence could somehow "take over," turning streets around the globe into scenes from The Terminator. While there is much to be gained from discussing the safe development of AI, a more imminent danger exists: autonomous weapons.

On Friday, after three years of negotiations, the UN unanimously agreed to take action. At the Fifth Review Conference of the UN Convention on Certain Conventional Weapons, countries around the world agreed to begin formal discussions, to take place over two weeks at the 2017 UN convention in Geneva, on a possible ban of lethal autonomous weapons. Talks will begin in April or August, and 88 countries have agreed to attend. This week, the number of countries supporting a full ban on killer robots rose from 14 to 19.

"By moving to a group of governmental experts to formalize the official process, it takes it from being led by these kind of outside academics, and means that they have to find government experts to handle it," said Mary Wareham, coordinator for the Campaign to Stop Killer Robots. "It raises the expectation that they're going to do something about this," she said, although what will be done is not yet clear.

"It is great to see universal recognition of dangers coming from weaponized artificial intelligence," said Roman Yampolskiy, director of the Cybersecurity lab at the University of Louisville. "It is my hope that, in the future, general danger coming from malevolent AI or poorly designed superintelligent systems will likewise be universally understood."

In an address to the UN—which included a briefing by the Campaign to Stop Killer Robots—Toby Walsh, professor of Artificial Intelligence at University of New South Wales, highlighted the necessary steps involved in obtaining a ban on fully autonomous weapons.

Walsh referenced an initiative, announced Tuesday by the IEEE, a professional association with roughly half a million members in the tech field, that "defined ethical standards for those building autonomous systems." The IEEE report "contained a number of recommendations," including that there must be meaningful human control over individual attacks, and that the design, development, or engineering of autonomous weapons beyond meaningful human control, to be used offensively or to kill humans, is unethical.

Last year, Walsh wrote an open letter, signed by thousands of leading researchers from the AI community, voicing concerns about an AI arms race and what could happen if these lethal weapons, which can kill with superhuman speed, end up in the wrong hands.

Earlier this week, nine members of the US Congress also wrote a letter to the secretaries of state and defense, supporting a ban on autonomous weapons.

"This is a very important issue that has suddenly become urgent," said Vince Conitzer, computer science professor at Duke University. "Where it concerns AI, the border between science fiction and reality is getting blurry in places, and autonomous weapons are on the fast track to crossing over to the reality side. Now is the time to act on this."


Bonnie Docherty, who represents Human Rights Watch and Harvard Law School's International Human Rights Clinic, co-authored a report this week highlighting the dangers of fully autonomous weapons. While Docherty is disappointed that the talks will be limited to two weeks (she had hoped for four), she is still encouraged by the decision.

"This week is a key moment for international efforts to address the concerns raised by fully autonomous weapons," Docherty said. "We are pleased that the countries at this major disarmament forum have agreed to formalize discussions on lethal autonomous weapons systems, which should be an important step on the road to a ban."

Roughly a hundred countries were involved in the discussions at the UN, and most have been entirely on board. "They've been over and over saying 'yes, we need to go to the next level,'" said Wareham. Even China, she said, agreed that international law on the issue was critical.

Only one country—which got on board on Friday—"expressed skepticism, hesitation, and said it's premature," said Wareham.

The country in question? Russia.

About

Hope Reese is a Staff Writer for TechRepublic. She covers the intersection of technology and society, examining the people and ideas that transform how we live today.
