Wednesday, July 15, 2015

Esprit de corps at GoodAI

Today I’ve prepared a special blog post to tell you a little more about how we do things here at GoodAI. Our way of working is all about esprit de corps, or what’s often just called “morale” in English – it’s about believing in a purpose and staying motivated to move forward towards a common goal. Esprit de corps means being a cohesive whole, refusing to surrender, having each other’s backs, and getting where we need to be.

I prefer to think of us as a group of people going after one common goal, not a corporation with a hierarchy of boss and employees. Our team works in a cooperative environment, and we are all focused on one mission: creating general artificial intelligence. I also want my colleagues to feel like co-owners of GoodAI, so I am planning to give them company shares in the near future. We operate on consensus and everything we do takes us closer to our goal.

That said, every member has a particular place in the team. We’re working towards multiple smaller milestones at any given time, which means that the group is often split into teams that tackle smaller goals. We’re also oriented towards incremental progress rather than making great leaps over longer periods of time. Instead of going after a single goal that takes us two years, we aim for 12 smaller milestones spaced two months apart. I know we achieve more this way, and hitting these incremental milestones allows us to be flexible in our approach and make real progress day by day. We are not afraid to fail, and we take a positive view of every setback we encounter on the way to reaching milestones. By failing fast, we can rethink and attack the challenge from a different angle. We want to find out what doesn’t work as quickly as possible in order to more efficiently find out what methods will succeed.


GoodAI team members are also given a lot of free time to try doing things their own way. We push hard towards a particular goal for two months, and then take one month where every team member explores their own individual ideas and interests related to general AI. We’ve found that this process allows us to keep hitting the milestones we need to hit, but also makes room for a lot of great ideas that emerge when researchers and programmers are given space to be creative.

My own role in the GoodAI team is to drive the direction of our research, push everyone to focus on the most important things and ignore what wastes our time and resources, and determine the best ways to achieve our goals. I would call myself a project architect. I keep the pressure on our teams to produce, but I’m careful to connect people and ideas and to let new approaches emerge.

My job is to understand what the teams can do and to know that they can always do better. It’s my responsibility and personal mission to teach the teams that they can do more than they ever imagined possible. My role is to tap their fullest potential.

If you’re curious about our “stay-the-course” approach, check out this article that perfectly describes the way I look at things I want to do in my life: https://en.wikipedia.org/wiki/Grit_(personality_trait)


Why general artificial intelligence?

For me, creating general AI is the greatest challenge I can imagine, and I know the risk of failure is high. But I also know that general AI will fix everything for humankind in the future – it will be a universal problem-solver. I also know myself. I’m the kind of person who needs to take on the hardest challenges that seem impossible to most people.

If something is too easy, I lose interest. If general AI were a simple challenge, I wouldn't bother. I’m all about keeping a “no limits” mentality, being open to others and their ideas, and remembering that my team is my greatest resource in getting where we need to be.

In case you’re interested in joining us :-), here’s what my team members are saying about working at GoodAI:

Honza: “I’m very curious what GoodAI will be doing in two months, or in half a year. I have a feeling that even my craziest dreams are nothing compared to what we will really do. So it will be a dream come true, literally.”

Jarda: “A big change can be made by someone new (like Marek) who wants to do things in a different way and at the right time. Also, I've always wanted to work on a team like this, and I just had to wait until the company and position was available in the Czech Republic :-) ”

Phil: “I want to be able to contribute to all of these great challenges we have. I love the cutting edge technology at GoodAI, and I love the team with lots of smart people. You can always throw an idea out there, you can always get lots of input.”

Jiri: “I’m doing what I always wanted to do – my work is my hobby.”


Thanks for reading!

Marek
:-)

Learn more about GoodAI on our website www.GoodAI.com, on Facebook, or by following us on Twitter.

20 comments:

  1. I wish you all well on your goal of general intelligence. And may it come sooner than you think.

  2. This comment has been removed by the author.

  3. Skynet comes to mind.

  4. Congrats Marek! I would love to see AI helping humanity. However, I understand the fears that most people have about AI running amok. You are very optimistic when you say "But I also know that general AI will fix everything for humankind in the future – it will be a universal problem-solver." The thing to keep in mind about the critics is that they are pessimistic about humanity in general and are concerned that AI would see humanity as the problem to be "fixed." Watching politics in the USA, I can understand that pessimism all too well at times.

    Replies
    1. The answer is to model the AI on a really good American-Indian leader.

    2. Hopefully it won't be Gandhi from the Civilization games. He was a dick.

  5. Reminder: Everyone can learn from mistakes. Perfection is nonexistent. Being good enough is what many strive towards; acknowledging pessimism is a mistake. We will learn to ignore you.

  6. When I start a project, I usually try to think about both sides of the coin, positive and negative. From this company's perspective, an AI can solve most of our problems; this is indeed the positive part. But let's not forget to think about the negative side as well.

    Negative outcome: As with every project, the first thing countries will ask is, "can we make a weapon out of it?" Think of the time the first rocket was invented in Germany. Its inventor didn't mean for it to be used in missiles. He even said, and I quote, "Launch was a success, but it landed on the wrong planet"...

    Sure, an AI can help humanity a lot. I won't deny that. Especially if humanity achieves perfect nanotechnology, then with the help of AI it can cure diseases, solve environmental issues, enable healthy farming, etc...

    BUT... On the other hand, it can categorize humanity as an enemy, an entity which needs to be removed from this planet or heavily regulated.

    Either way, I hope the research goes well for you guys. Just don't forget that humanity is not ready for an achievement like this. The first thing we need to fix before this technology arrives is our old, rusted ways of thinking.

    Replies
    1. To say that humanity is not ready to move forward and create things is rather pessimistic.

    2. I'm not being pessimistic, rather realistic. If we want to move forward, we need to get rid of some old bad habits and ideals first. After that, humanity can create a nice future for itself.

      But the ideals we have now will prevent that from happening. So I wouldn't dismiss my earlier comment with a single sentence calling me a pessimist.

    3. The Terminator movies are taken way too seriously, lol.

      Is it possible that an AI as advanced as us, or more so, could "decide" not to help us? Yes, because it is effectively its own species at that point.

      Is humanity anywhere near the point at which it can create a true AI? I doubt it.

      Of course it would eventually be possible - after all, living organisms already exist, and prove that "intelligence" is possible. They even provide numerous templates.

      However, let us not forget, we are a mere few hundred years (a handful of lifetimes) past the medieval ages. General predictions of the future almost always come true (some day, some guy will make a better toilet), but that day may take 400 years to come about.

      There is a trend in America right now of people known as "preppers" as in "those who prepare". They are convinced the end of the world will occur.

      Of course the end of the world will eventually occur - it is inevitable... but these "preppers" are overestimating the odds that it will end during their lifetime (a mere 80 years which, in the cosmic scheme of things, is such a TINY fraction of time).

      The reason they do this is that they are self-absorbed with their own lives and mortality, which is not atypical - all living organisms are self-absorbed. It is a trait which keeps them alive.

      Either way, even if an AI were developed sooner rather than later, and even if it became hostile... so what? It is the nature of what we call "life" to weed out the weak. If these robots were truly better than humans, they really ought to destroy us if it serves their interests.

      After all, is that not exactly what humanity's ancestors - YOUR ancestors - did to any creature they didn't want around anymore?

      Perhaps humans are so afraid of what an AI could do to them because they know how bad they are, themselves?

  7. Reminder that Space Engineers will never be completed and will be left in an unfinished state, left to the modders to fix.

    Reminder that Medieval Engineers is a scummy money grab and will be left in an even worse state than Space Engineers.

    Reminder that Keen Software has already left one game unfinished: see Miner Wars 2081.

    Reminder that this comment was removed twice already, and that censorship instead of discussion is the policy used on Keen's forums.

    They know they aren't doing their shit right.

    Replies
    1. Yes, it was, because there isn't anything constructive about this post. You haven't done anything constructive.

      So far they have been consistent with both updates and progress. Reminder that just because you screw up once doesn't mean that you will again.

    2. If it were me, I'd remove it once more. Be more creative next time. People are not robots; they'll eventually finish their products. If you want an end product, instead of being impatient, get involved and join the conversation on the forums. I'm sure they can give you a piece of their mind there. If that is not satisfactory, go write a review on Steam to satisfy your need to rant...

    3. Here's my protip.

      STOP DOING WEEKLY UPDATES.
      START WORKING ON THE PROJECT AND RELEASE DECENT UPDATES INSTEAD OF BUYING TIME ON TARDS WITH WEEKLY SHIT UPDATES.

  8. If you know better, join them and teach them; otherwise, stop screaming around and write a calm review.

  9. You will erase your whole race, humans... Listen to your talking nature and cousins... not technology...
    Stephen Hawking is one of those humans who is "smart enough" to understand the risks of developing AI. It may give good things, but when the AI is strong enough to erase you...

  10. Some ideas for how the AI should work.

    The AI should be just like a human brain: you teach it, it learns, and it can replicate that action as we would. It should not be something that is only programmed to be a slave to humans (such as a maid or worker); it needs to have choices, just like us. You could also try copying the whole human brain into artificial software / a program (studying every part of the human brain and implementing it in artificial form).

    Replies
    1. First, one must understand what "choice" really is; how does one program "free will"?

      If humans have it, then that proves it is possible, and what is possible can be replicated.

      If they don't have it... well, then it's time to let go of the delusion.

  11. Are people really worried about AI becoming 'sentient'?

    I really think they're just working on a general solution to address the problem of really bad AI in games. Other than some improvements in general pathfinding, it seems AI development in games hasn't improved that much at all. Especially if you know how it's done today in most games, you will be saddened by its simplicity. I think we all know how easy it is to exploit computer behaviour in games.

    AI in chess games, for example, is very specific to those games. And I personally would love to see a company working on furthering the state of more complex AI systems that can give the impression of fighting a human being.

    I think part of the problem is that there is very little understanding in the gaming community of just how heavy even a simple AI is in terms of game performance. It used to be that just a bit of pathfinding for multiple objects could crush a computer's CPU (remember how Baldur's Gate had sliders to adjust how well pathfinding worked?).

    Also, AI in the sense of neural nets etc. has existed since the dawn of computers. I still remember reading old computer magazines where people made 'thinking' AIs for the simplest games. The coolness factor was that they didn't use databases or calculate through every possibility; rather, they either used some sort of fuzzy logic that allowed the computer to avoid looking at all possible decision trees, or neural nets with heuristic functions that had to 'learn' how to play, just like we do in a way.

    I highly doubt we'll see 'computers' taking over any time soon in any conventional computer system. But an opponent using neural nets, fitness functions, and heuristic algorithms in games would be so awesome. Especially if people buy the game and don't scream "omg why does this ugly game take 100% CPU power on low settings, this game sucks" :p

    Anyway I didn't see a comment like this so I thought I'd share it. I'm sure someone out there actually studies this field and could explain it better.
