Thursday, November 2, 2017

AI Race Avoidance

Updated: please note that Round 2 of the General AI Challenge, AI Race Avoidance, will now be launched in February 2018. We have pushed the launch date back as we recently started working with some exciting new partners and will need more time to finish the specifications of the round. Find out more here: https://www.general-ai-challenge.org/ai-race

The race for general artificial intelligence is rapidly evolving and at GoodAI we believe it is vital that the issue of safety is brought to the top of the agenda.

I am very excited to announce the work of the AI Roadmap Institute, a partner organization of GoodAI, which has begun holding interdisciplinary workshops to help visualize different scenarios for a future with general artificial intelligence.

We recently held a workshop on AI race avoidance. You can find its outcomes in our new blog post here: Avoiding The Precipice: Race Avoidance in the Development of Artificial General Intelligence. I have also summarised the main points below.

Article Summary
  • How can we avoid general AI research becoming a race between researchers, developers and companies, where safety gets neglected in favor of faster deployment of powerful, but unsafe general AI?
  • How can we safeguard against bad use of general AI?
  • AI safety research needs to be promoted beyond the boundaries of the small AI safety community and tackled interdisciplinarily.
  • Roadmaps can be used to compare possible futures and outline mitigation strategies for negative futures.
  • There needs to be active cooperation between safety experts and industry leaders to avoid negative scenarios.

General AI Challenge

This post is the beginning of something much bigger! In November 2017 we will launch Round 2 of the General AI Challenge, where participants will search for solutions to ensure that competition among stakeholders does not lead to negligence when it comes to safety.

We will also be running another workshop in October after the AI and Society Symposium in Tokyo.
We hope that the workshop and the next round of the Challenge will open up wider discussions to lots of different questions including:
  • How can we incentivise the winner of the AGI race to obey the original agreements and/or share AGI with others?
  • We understand that cooperation is important in moving forward safely. However, what if other actors do not understand its importance, or refuse to cooperate? How can we guarantee a safe future if there are unknown non-cooperators?
  • All these points are relevant to internal team dynamics as well. We need to invent robust mechanisms for cooperation between individual team members, teams, companies, corporations and governments. Looking at the problems across different scales, the pain points are similar.
  • What level of transparency is optimal, and how do we demonstrate transparency?
  • How do we stop the first developers of AGI from becoming a target?
  • With regards to the AI weapons race, is a ban on autonomous weapons a good idea? What if our enemies don’t follow the ban?
  • And more
The Challenge will tackle these problems through citizen science and help promote the issue of AI safety to a wider audience.

So stay tuned and prepare to get involved in the new round of the General AI Challenge!

Thank you for reading!

Marek Rosa
CEO and Founder of Keen Software House
CEO, CTO of GoodAI
:-)


For more news:
General AI Challenge: www.general-ai-challenge.org
AI Roadmap Institute: www.roadmapinstitute.org
GoodAI: www.goodai.com
Space Engineers: www.spaceengineersgame.com
Medieval Engineers: www.medievalengineers.com


Personal bio: Marek Rosa is the CEO and CTO of GoodAI, a general artificial intelligence R&D company, and the CEO and founder of Keen Software House, an independent game development studio best known for their best-seller Space Engineers (2mil+ copies sold). Both companies are based in Prague, Czech Republic. Marek has been interested in artificial intelligence since childhood. Marek started his career as a programmer but later transitioned to a leadership role. After the success of the Keen Software House titles, Marek was able to personally fund GoodAI, his new general AI research company building human-level artificial intelligence, with $10mil. GoodAI started in January 2014 and has grown to an international team of 20 researchers.
