Wednesday, September 6, 2017

My view on the recent call to regulate autonomous weapons

Together with the leaders of 116 top AI firms from 26 countries, I have signed an open letter collectively urging the United Nations to ban “killer robots”, or lethal autonomous weapons.

The capabilities of AI and robots are improving very fast, often in unexpected directions, and the future of humanity depends largely on the rules we set now regarding the militarization of AI.

Although we are still on the road to general AI, narrow AI applications are already in place for military purposes. For example, South Korea has Samsung SGR-A1 sentry guns installed along its border with North Korea. These guns are able to kill autonomously; for now, they require an official’s permission to fire, but it may take only one precedent for autonomous decision-making to become common practice. We need to address this now, before it is too late.

I strongly believe that such weapons are a threat to humanity and that their deployment may lead to an AI arms race and escalate unnecessary armed conflicts. Engineers cannot yet fully guarantee the predictability and zero-failure operation of these machines, and they could also be hacked or reprogrammed. If automated weapons become available and easy for anyone to use, they could completely change the dynamics of terrorism: since terrorists would no longer be risking their own lives, we could see a tremendous wave of attacks.

It gets even worse when you think of aerial drones that are cheap and intelligent: they could be fitted with explosives, produced by the millions, and sent out autonomously to search for and destroy targets. The mathematics of autonomous machine warfare is alarming, if not terrifying. One army drone can cost roughly a hundred dollars, while an eye-popping $2.1 million on average was spent for every U.S. soldier deployed in Afghanistan. Thus, $2.1 trillion for a million soldiers versus $100 million for a million drones to execute the same operation makes the military’s choice rather obvious. One thing we cannot afford to forget, though, is that a full accounting of war’s burdens cannot be placed on a ledger: from the innocent civilians harmed, to the soldiers killed and wounded, to their mourning families and parentless children, no price can convey the human toll of war. And the scale of military action made possible by autonomous-weapon cost-cutting would be wider than anything we have witnessed, with no guarantee that civilians won’t be affected.
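
To put those numbers side by side, here is a minimal back-of-envelope sketch in Python. The unit costs are only the rough figures quoted above, not actual procurement data:

    # Back-of-envelope comparison of the rough figures quoted in this post.
    # Unit costs are estimates, not actual procurement data.
    COST_PER_DRONE = 100            # ~$100 per cheap autonomous drone (rough estimate)
    COST_PER_SOLDIER = 2_100_000    # ~$2.1M per U.S. soldier deployed in Afghanistan

    units = 1_000_000  # a million drones vs. a million soldiers
    drone_total = COST_PER_DRONE * units      # $100 million
    soldier_total = COST_PER_SOLDIER * units  # $2.1 trillion

    print(f"Drones:   ${drone_total:,}")      # Drones:   $100,000,000
    print(f"Soldiers: ${soldier_total:,}")    # Soldiers: $2,100,000,000,000
    print(f"Ratio:    {soldier_total // drone_total:,}x")  # Ratio: 21,000x

Even if the per-drone estimate were off by an order of magnitude, the cost gap would remain enormous, which is exactly what makes the economics so worrying.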

There are some fears that banning the military use of AI means sending people to war instead of robots. I want to highlight that the letter does not call for a complete ban on using robots for military purposes, only on weaponized robots that think for themselves and decide who, or what, to kill. This is definitely not a black-or-white topic, and I am very interested in an open discussion of its possible impacts before it is too late.

We should realize there will always be countries that ignore a ban, but this does not mean the abiding countries must respond in kind. The mafia uses torture and kidnapping, yet the police have to protect against them without using such methods themselves. The police have the justice system and various other resources that allow them to avoid playing the “dirty” game, and we should do the same in the military. Also, because the ban would come from the United Nations, a country that decided not to comply would risk economic sanctions from the rest of the alliance.

Military threats are not the only ones that need to be addressed with the rise of AI; AI’s value for business can also cause serious problems. To draw the world’s attention to AI safety and to explore methods for mitigating negative scenarios, we have recently announced Round 2 of the General AI Challenge. The goal of the Challenge is to tackle important milestones on the road to general AI, and in the safety round, launching in November 2017, we will ask participants to come up with practical steps for avoiding an AI race and advancing safe AI development. The proposals will then be evaluated by a scientific advisory board of prominent AI researchers, as well as by business representatives to test the business acceptance of such solutions.

There are many other initiatives addressing safety issues, such as Responsible AI under the COST scheme (which GoodAI joined) or the Partnership on AI. Legislation for AI development and deployment is being widely discussed, striving to find the right balance between preserving safety and not limiting scientific progress.

One thing is for sure: there is a lot we can do to ensure a safe future. It is important to be cautious, but fear-mongering and sensationalizing AI is not a solution to the safety problems. Let’s encourage cooperation on AI instead of just sowing hysteria with doomsday tweets. Panic will sooner or later backfire in the form of negative public perception, which can hamper AI research. And there are not many reasons for actual panic, at least not in the public discussions and statements: despite everyone highlighting the dramatic side of Putin’s recent AI speech (see here or here), he was actually saying that Russia would cooperate with other countries in the development of AI. That is good news, and something we need to build on.

Marek Rosa
CEO and Founder of Keen Software House
CEO, CTO and Founder of GoodAI

For more news:
General AI Challenge: www.general-ai-challenge.org
AI Roadmap Institute: www.roadmapinstitute.org
GoodAI: www.goodai.com
Space Engineers: www.spaceengineersgame.com
Medieval Engineers: www.medievalengineers.com


Personal bio: Marek Rosa is the CEO and CTO of GoodAI, a general artificial intelligence R&D company, and the CEO and founder of Keen Software House, an independent game development studio best known for their best-seller Space Engineers (2mil+ copies sold). Both companies are based in Prague, Czech Republic. Marek has been interested in artificial intelligence since childhood. Marek started his career as a programmer but later transitioned to a leadership role. After the success of the Keen Software House titles, Marek was able to personally fund GoodAI, his new general AI research company building human-level artificial intelligence, with $10mil. GoodAI started in January 2014 and has grown to an international team of 20 researchers.
