Wednesday, February 15, 2017

First round of General AI Challenge just launched: Gradual Learning – Learning like a Human



SUMMARY:
  • First 6-month warm-up round just launched
  • Focus on “gradual learning” – an architecture property that enables the gradual accumulation of skills, making learning more efficient
  • $50k in prizes in this round; $5mil in total prizes over the multi-year General AI Challenge


Today, we at GoodAI have launched our first warm-up round of the General AI Challenge.

It’s one of many stepping stones on our mission to develop general artificial intelligence - as fast as possible - to help humanity and understand the universe.

This first round focuses on “gradual learning” – the ability to gradually accumulate skills and use existing skills to learn new skills more efficiently.

The reason why we started with “gradual learning” is that we have identified it as an architecture property that will enable the efficient inclusion of additional properties. In other words, if you use your existing knowledge to learn to solve new problems, you should be more efficient than if you always have to start with zero experience.

The gradual learning round is not concerned with how good an agent is at solving a particular task (e.g. achieving the highest score in a game). Gradual learning is about how efficiently an agent learns to solve new and unseen tasks. Using less training data and fewer computational resources is among the criteria for better agents.
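
To make this concrete, here is a purely illustrative toy metric (not the Challenge's actual scoring formula; the weights and numbers are invented for this sketch) that ranks agents by how cheaply they master a new task:

    def efficiency_score(samples_needed, compute_seconds,
                         w_samples=1.0, w_compute=0.1):
        # Lower is better: fewer training samples and less compute
        # needed before the agent reliably solves an unseen task.
        return w_samples * samples_needed + w_compute * compute_seconds

    agent_a = efficiency_score(samples_needed=500, compute_seconds=120)   # 512.0
    agent_b = efficiency_score(samples_needed=5000, compute_seconds=60)   # 5006.0
    print(agent_a < agent_b)  # True: agent A learned the unseen task more efficiently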

Gradual learning requires a combination of at least these abilities: compositional learning, meta-learning / learning to learn, continuous learning, life-long learning, learning without forgetting, transfer learning and more. More info about “gradual learning” and other required properties for general AI is available in our framework document.


How this round works:
  • Today, teams start developing their AI / AGI agents
  • They can develop, test and train their agents on training tasks provided by us
  • All these tasks were designed with “graduality” in mind – each new task builds on and reuses skills acquired in previous tasks
  • After 6 months, teams will submit their pre-trained agents / models and code
  • We will start evaluating the agents on non-public evaluation tasks
  • We will test the agent’s ability to learn gradually and not to forget previously acquired skills
  • The environment that we use for this round is a version of CommAI-Env. It is based on byte inputs and outputs and has text-like properties (a rough sketch of the interaction loop follows this list).
  • The training tasks are based on the CommAI-mini set recently proposed by Baroni et al., 2017 (https://arxiv.org/abs/1701.08954).
  • It may look like the agent is learning to communicate with the environment; however, our ultimate goal is not to build agents for this kind of environment. We chose it now because this type of environment makes it easier and more intuitive for people to understand why one task builds on top of a previous one. If we had chosen more complex and noisy environments (e.g. computer games), you would have a much harder time identifying when agents build skills on top of previously acquired skills.
  • However, our plan with the General AI Challenge is to scale to this level of complexity sooner or later.
  • Another reason for this environment is that during our road-mapping process we identified “learned communication” as one of the essential skills that can speed up the acquisition of more advanced skills and therefore increase the efficiency of learning (but this is something for later stages anyway).
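
To make the byte-level setup more concrete, below is a minimal sketch of the interaction loop. The Environment methods, the EchoAgent, and the reward convention are assumptions made for this illustration; the real CommAI-Env API differs in its details.

    class EchoAgent:
        """Toy agent that just repeats the previous input byte. A real
        submission would instead learn a policy and reuse skills across tasks."""
        def __init__(self):
            self.last_byte = 0

        def act(self, observed_byte, reward):
            # reward (-1, 0 or +1) is feedback on the agent's previous output
            response = self.last_byte
            self.last_byte = observed_byte
            return response

    def run(env, agent, steps=1000):
        # env is a hypothetical byte-level environment with reset()/step()
        obs, reward = env.reset(), 0
        total_reward = 0
        for _ in range(steps):
            action = agent.act(obs, reward)   # agent emits one byte per step
            obs, reward = env.step(action)    # environment replies with one byte and a reward
            total_reward += reward
        return total_reward
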
Where are we heading with the General AI Challenge? We have identified a set of open problems that we consider to be key milestones in achieving general AI. Our teams here at GoodAI are working hard on these milestones, but we also think that “outsourcing” to the greater community of researchers, programmers, and hackers can both speed up the process and diversify the avenues of research. We also hope that we may find new talented colleagues among the participating teams.

What if no team passes the evaluation in 6 months? Well, we will probably restart this same round, perhaps with modified definitions and rules, or slightly different evaluation tasks. There is also hope that if participants get another chance, they will build on the experience gained in the first attempt, which would eventually lead to a solution for gradual learning. In principle, the teams would then be gradually learning how to gradually learn :-)

We have allocated $5mil in total prize money for this multi-year challenge. We plan to distribute this pool of money to the participants of various rounds throughout the following years. We hope that we get to general AI before the money runs out :-)

Thank you for reading!

Marek Rosa
CEO and Founder of Keen Software House
CEO and CTO of GoodAI


For more news:
General AI Challenge: www.general-ai-challenge.org
AI Roadmap Institute: www.roadmapinstitute.org
GoodAI: www.goodai.com
Space Engineers: www.spaceengineersgame.com
Medieval Engineers: www.medievalengineers.com


Personal bio: Marek Rosa is the CEO and CTO of GoodAI, a general artificial intelligence R&D company, and the CEO and founder of Keen Software House, an independent game development studio best known for its best-seller Space Engineers (2mil+ copies sold). Both companies are based in Prague, Czech Republic. Marek has been interested in artificial intelligence since childhood. He started his career as a programmer but later transitioned to a leadership role. After the success of the Keen Software House titles, Marek was able to personally fund GoodAI, his new general AI research company building human-level artificial intelligence, with $10mil. GoodAI started in January 2014 and has grown to an international team of 20 researchers.

15 comments:

  1. I can see you guys have finally started to appreciate the difficulties of AGI. There are no more forums on your websites; however, remember what I was trying to tell you before. I can see you are still trying to follow "all good methodologies".

    It is hard to say much, but judging from this first step, the basis of the AGI approach looks very problematic. Simply speaking, I don’t think you can create a gradual AGI agent together with its learning environment. This would be equivalent to the definition of an incremental intelligence, not to mention the difficulties of scaling this type of approach.

    In brief: designing "an agent that understands" could be viewed as solving AGI in itself. This would require much, much more to even begin with.

    If you guys are really serious about AGI, try to see the big picture and how all this fits together. I hope you have a lot of fun learning all this stuff and won’t be discouraged by local minima.

    It is always refreshing to read about people/companies genuinely interested in the AGI field.

    Anyway, best wishes and keep your mind open

    focus2000x

    ReplyDelete
  2. That's great, but hundreds of SE players haven't been able to play SE for the second week in a row, because one of the programmers made a buggy "drivers update" warning. But they released lens flares instead of a simple fix...

    ReplyDelete
    Replies
    1. Hi, I am a developer at KeenSWH. Please contact us at support@keenswh.com and we will solve your issue.

      Delete
  3. There might be a slight concern with this competition - namely, when a successful solution is created and submitted for the competition, the creator would send away his work without any guarantee that the company won't claim the work as its own and take credit for it, never actually rewarding the creator... Or maybe telling the public the goal hasn't been reached, therefore "repeating" the competition, while in fact the successful product will be kept and used?

    Is there any guarantee against this? It would definitely encourage more independent creators as they would be reassured their work wouldn't be stolen.

    ReplyDelete
    Replies
    1. Thanks for the heads up.
      GoodAI won't have exclusivity on the submitted solution(s). All participants have the right to share their solutions however they want, and it would be very hard for GoodAI to claim it invented a solution that the author has already published somewhere else. Also, the evaluation tasks will be published after the one-month evaluation period ends, so everyone will get reassurance about the competition's transparency.

      Delete
  4. The "General AI Challenge" is too confining for me to work within its framework. Therefore, while the Challenge is moving forward, I will be designing and coding Artificial General Intelligence on a divergent pathway.

    ReplyDelete
  5. I think efficient pattern detection is the core: music, text/syntax patterns, outlines in visual changes.
    An example of pattern detection: W-ORT / WORD (it matches (W+)(ort/place) with (wort/word) by using a different language) and can then link "place" to NOUN→"noun" for sentence construction, because word→"noun".

    After that, I would try a pathfinder approach (start word to destination word over context).
    This pathfinder uses a context and splits the work 50% into brute force and 50% into intuition (linking a context to a similar context).

    Using regex functions is nice because they are language-independent (Java, C, everything implements regex), and I hope I can expand on that in StarMade: https://starmadedock.net/threads/regai-blueprinting-regex-ais.28430/
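
    As a rough, purely illustrative Python sketch of the decomposition idea above (the dictionary, rule, and function names are invented for this example and are not part of the challenge):

      import re

      # Split "WORT" into "W" + "ORT" ("Ort" = place in German), then link the
      # known stem to a grammatical category. Invented for illustration only.
      known_stems = {"ORT": ("place", "NOUN")}

      def decompose(token):
          match = re.match(r"^(\w)(\w+)$", token)   # peel off one leading letter
          if match and match.group(2) in known_stems:
              meaning, category = known_stems[match.group(2)]
              return {"prefix": match.group(1), "stem": match.group(2),
                      "meaning": meaning, "category": category}
          return None

      print(decompose("WORT"))
      # {'prefix': 'W', 'stem': 'ORT', 'meaning': 'place', 'category': 'NOUN'}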

    ReplyDelete
  6. Could you clarify the specification of the non-public tasks? In https://mirror.general-ai-challenge.org/challenge_first_round_specifications.pdf you say that natural language processing is not necessary for solving the tasks (page 13). But then it says that the tasks in A.1.6 are examples of what could be used for the non-public tasks, and these tasks include parsing "evaluate" for calculating formulas, and other simplified language constructs, like "good if", and even some examples from "Blocks World". So in my understanding you could design any task, like playing the Tetris game with (simple) English words. This seems too difficult for the current state of AI, and I predict that nobody will manage to write a program which solves all the tasks.

    ReplyDelete
    Replies
    1. Hi Frank,
      the examples in A.1.6 use simplified language constructs that require simpler processing than full NLP. For example, one could imagine a system with language-processing capabilities on the level of an LL or LR parser handling these examples. Sure, such a system is still difficult for the current state of AI, but we intend to help the AI by guiding its learning process. We will first teach it to solve simpler parsing tasks before asking it to solve these hardest-level tasks.

      Delete
    2. Hi Martin,

      thanks, I'm new to AI programming and didn't know the "blocks world" specification. I thought it would be more complicated, like visualizing 3D blocks on a real table. So in your challenge it would be something like what is described here: http://www.cs.umd.edu/~nau/papers/gupta1992complexity.pdf And I guess only EBW, and not VBW, LBW or VLBW? That would be easier.

      But I still don't see how this could be taught with micro-tasks only, without hard-coding some fundamental features into the AI, like boolean logic, an expression parser, or even the concept of a number. Maybe it will be possible by carefully designing a lot of micro-tasks, but I have no idea what they would look like to solve the A.1.6.1.1 and A.1.6.2.1 hardest tasks, and what fundamental capabilities would need to be hard-coded because they can't be taught with micro-tasks. So it might be impossible to train concepts such as "parsing hex numbers", "adding two numbers" or "declaring a variable" with micro-tasks only. Which means it is just luck if the AI already knows the required (hard-coded) concepts to solve the non-public mini-tasks.

      Of course, the corollary might also be true: if I manage to create a set of micro-tasks and program an AI which doesn't know the concepts of "hex number parsing" or "adding" as hard-coded parts in advance, but can learn them from my sets of micro-tasks, then this AI might be able to solve other complex mini-tasks as well, like the blocks world task. But this depends highly on the set of micro-tasks and the details of the AI. I would even expect that developing a set of micro-tasks which can teach an AI to solve a given mini-task will be more difficult than developing the AI itself. Could you provide a set of micro-tasks which can teach an AI to solve the example mini-tasks in A.1.6.1.1 and A.1.6.2?

      Even some of the micro-tasks might be too difficult. I read the examples in the challenge specifications and some look too contrived. There are just too many possible rules and contradictions for an AI (or even a human) to possibly learn this from the one-bit reward output, and in the time defined by the challenge. I tried the first, simplest task from the Python example (in human-interactive mode), which uses a random 10-character alphabet, and the rule is that you have to respond with one (randomly chosen at the start) character. Shame on me, I couldn't solve it, but it was immediately clear after I saw the Python code (yes, I'm a programmer, not a mathematician). I think the most efficient way for an AI (and humans) to learn is by example, or at least by being given definitions or axioms, and by asking questions back to the teacher, or asking nature (doing physical experiments). The micro-tasks are like looking for a needle in a universe-sized haystack, especially if you consider the micro-tasks required to learn fundamental concepts like "parsing a hex number". In summary, I don't think this is the right approach to developing an AI, and it might even not work at all.
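
      For what it's worth, a brute-force search over the alphabet solves that particular micro-task trivially once you know the rule; here is a minimal Python sketch (the try_response callback stands in for sending a character and reading the one-bit reward, and is not the actual challenge interface):

        import random

        # The environment rewards exactly one (initially unknown) character
        # from a 10-symbol alphabet; just try them until one pays off.
        def find_rewarded_char(alphabet, try_response):
            candidates = list(alphabet)
            random.shuffle(candidates)
            for guess in candidates:
                if try_response(guess) > 0:   # one-bit reward signal
                    return guess              # keep answering with this character
            return None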

      It might still be fun to try, even if nobody manages to win the challenge. Do you plan to give consolation prizes? :-)

      Delete
    3. Hi Frank,
      in the blocks world example, we definitely would not want you to perform complex planning; that seems out of scope. However, parsing and answering simple questions, plus maintaining state about the environment, seems viable; after all, the micro-tasks in the public curriculum use these concepts. Notice that the micro-tasks even teach the agent some basics of boolean logic (and, or).

      We're actually hoping for the corollary you write about :-) I mean - an AI capable of solving the public curriculum or your own curriculum should be flexible enough that it can also solve the evaluation curriculum.

      I completely agree with you that developing the right curriculum is half of the problem and takes a lot of time. We currently don't have curricula that would lead to the example mini-tasks 1.6.1.1 and 1.6.1.2. If we find the time, we could try to create those too, but I don't want to make any promises now. What I can promise is that we'll do our best to make the evaluation curriculum really gradual and have its tasks advance by small steps at a time.

      We tested our curriculum on human solvers. Some did not get through the initial tasks, some did (the more advanced tasks are actually easier for humans because of our biases). But none of the tasks remained unsolved. So in principle, we believe the tasks are solvable, and we hope that they will also be solvable by AIs from the challenge. I agree with your intuition about learning by example or by feedback from the teacher - the advanced tasks in the public curriculum go in this direction. We want to see an AI that is able to discover this principle of gathering feedback from the input without its creators explicitly telling it about the format of the feedback or its location.

      About your last question: there are no consolation prizes, but next to the objective (quantitative) prize there is a subjective (qualitative) prize, which is evaluated by a jury and where the best idea (not necessarily code) wins. So don't give up hope :-)

      Delete
    4. I wrote a simple web server so that humans can try to solve the tasks without needing to install Python etc.
      http://www.frank-buss.de/ai/index.html
      Feel free to use it for whatever you like, commit it to the GitHub repository, or even better: enhance it with some logging and then install it on the challenge website, with a statistics page showing how many tasks humans can solve.

      Delete