Tuesday, December 15, 2015

GoodAI - R&D Roadmap - Preview

Today I’m writing to offer a brief peek into the GoodAI R&D roadmap to AGI (general artificial intelligence), a document we plan to release to the public in the coming weeks. It outlines our vision and methodology for developing universal artificial intelligence. The roadmap will focus on our plans for the next few months.

To generate our roadmap, we started with a set of simple game environments where our AI agent will learn in a gradual and guided way (what we call a “School for AI”). We then analyzed which abilities the AI agent needs in order to perform successfully in these environments. This analysis led us to a very specific list of functional requirements for an AI system, which we will implement and test before moving to more advanced development stages (and further iterations of our roadmap).

We plan to implement all these requirements into one universal algorithm that will be able to successfully learn all designed and derived abilities just by interacting with the environment and with a teacher.

The roadmap document will be aimed at both the general public and AI researchers.

The purpose of this document is to:
  • Define our research and development roadmap for the next few months
  • Agree on approaches to development
  • Specify a list of very detailed abilities and requirements for a first-stage AGI system
  • Unify our design, architecture, algorithms, and terminology to make sure we understand each other within the team

The current roadmap covers a first-stage AGI system. It should be able to complete learning tasks of gradually increasing complexity, such as playing Pong/Breakout and Mastermind games, predicting movement and shape-switching (VisualPredictions), or completing a Watermaze task.
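
To make the idea of a gradual curriculum more concrete, here is a purely hypothetical sketch of how such a task sequence could be expressed in code. The task names come from the paragraph above; the ability labels and the gating logic are illustrative assumptions of mine, not part of the roadmap itself.

```python
# Hypothetical illustration only: a "School for AI" curriculum expressed as an
# ordered list of tasks. The "exercises" labels are assumptions, not roadmap content.
CURRICULUM = [
    {"task": "VisualPredictions", "exercises": ["pattern detection", "prediction"]},
    {"task": "Pong/Breakout",     "exercises": ["visual input", "simple goals"]},
    {"task": "Mastermind",        "exercises": ["memory", "hypothesis testing"]},
    {"task": "Watermaze",         "exercises": ["spatial memory", "navigation"]},
]

def next_task(has_passed):
    """Return the first task the agent has not yet mastered (guided, gradual learning)."""
    for stage in CURRICULUM:
        if not has_passed(stage["task"]):
            return stage
    return None  # curriculum complete; move on to the next roadmap iteration
```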

Example functional requirements (this is not an exhaustive list):
  • Being able to learn to detect a new/changed pattern
  • Being able to learn additively, not forgetting or erasing existing knowledge
  • Being able to predict on different time scales, and to predict only relevant events
  • One-shot learning
  • Generalization first, then specialization
  • Pattern and sequence generation (later add hierarchies)
  • And more

All remaining abilities/requirements will be addressed in future roadmaps - including higher-level abilities such as natural language, reasoning, creativity, curiosity, advanced exploration, goal alignment (safety, ethics), and so on. We also expect to adjust our assumptions and thinking as we progress in our experimentation – the roadmap is a working document, and will certainly develop with us as we move forward.

So please keep an eye out for our roadmap in the near future! We’re looking forward to feedback from fellow researchers and anyone interested in the progress of general artificial intelligence.

Marek Rosa
CEO, CTO & Founder of GoodAI & Keen Software House
:-)

---
And don’t forget to follow us on social media!
Facebook: www.facebook.com/GoodArtificialIntelligence
Twitter: @GoodAIdev
Forum: forum.goodai.com
www.GoodAI.com

Monday, November 30, 2015

GoodAI is heading to North America: NIPS 2015 Montreal & SF Bay Area

I’m gearing up for a trip to North America this week, and I’m happy to announce that 4 of my colleagues will join me!

Our traveling team of five: me, our COO Olga Afanasjeva, and three GoodAI senior researchers (Dusan, Honza, and Jarda)

Our first stop will be Montreal, Quebec for the week-long Conference on Neural Information Processing Systems (NIPS 2015) December 7-12. We’re looking forward to hearing from and connecting with top people in the field of machine learning and computational neuroscience (aka our heroes), and meeting up with a few friends we last saw at the AGI-15 conference in Berlin.

There are two main purposes for this trip. We want to:
  1. Better understand how other researchers tackle challenges in machine learning and computational neuroscience, since many of these ideas can be applied to our AGI research
  2. Grow trust in the AI community, especially among AI developers and AI safety researchers, through informal meetings.
I met with Nick Bostrom in Oxford a few weeks ago, and our conversation strengthened my commitment to building relationships in the AI community. He also gently reminded me that AGI development is not something to be taken lightly :-)

BTW, Nick's book is one of the best on the topics of AI safety, ethics, future outcomes, and more.


Nick Bostrom’s note: “Big responsibility!”
Keeping in touch with other researchers and organizations working towards AI is extremely important for us. Doing so helps us accelerate towards developing general AI as soon as possible, and allows the AI community to see that we’re not interested in developing this technology in secret. We want to be as open as possible, finding ways to cooperate and come together, so we can increase the chance that AGI will be created in our lifetime and that it will be a safe, valuable tool for humankind.

The second leg of our trip will be in the San Francisco Bay Area, where we will open more conversations with people dedicated to AGI and AI safety. Of course, we’ll also work in some time for fun, including the RoboUniverse conference in San Diego, where I’ll appear as a panelist on December 16. There are a number of connections I see between AI and robotics, so I’m really looking forward to contributing to this discussion.

Keep an eye out for upcoming blog posts about our trip, and be sure to follow me on Twitter for regular updates: https://twitter.com/marek_rosa

Thanks for reading!

Marek Rosa
CEO, CTO & Founder of GoodAI & Keen Software House
:-)

---
Follow us on social media!
Facebook: www.facebook.com/GoodArtificialIntelligence
Twitter: @GoodAIdev
www.GoodAI.com

Thursday, November 12, 2015

Planets! Because you wanted them

Today is an important day in Space Engineers history. I am very happy to present the biggest and most challenging addition we have ever made: Planets!

Space Engineers now has the largest, fully destructible, volumetric and persistent planets in the universe!

Since day one, planets occupied the top spot among the features most requested by the community, but we didn’t have them in our initial roadmap. The decision to add planets came almost a year ago, and back then I knew it was going to be a very complex and difficult task that would test the limits of how far we could go and how hard we could push.

The vision for Space Engineers was always to provide a realistic and almost infinite environment where everything you see is real, can be touched, used, shaped or recreated. Sacrificing the volumetricity of our game world wasn't an option.

We knew we were explorers and we wanted this fight at the highest volume! The extreme scale of planets not only tested our abilities but also pushed us further, to a point where no other game developer has ever ventured.

During development we had our ups and downs; we sacrificed our free time, pulled long hours, reached our programming and artistic limits, and then pushed them further. And then again, and then some more. The task wasn’t easy, but the team boldly took on the challenge and put blood, sweat and tears into accomplishing it.

We never forgot that if we didn’t give up, if we kept pushing in spite of all the difficulties and unpredictability in front of us, then eventually, one day, we would be able to give you Planets and everything that comes with them.

Today I am proud to introduce you to our Planet Creators: (in a random order) Tomas Psenicka, Tomas Rampas, Petr Minarik, Ondrej Petrzilka, Jan Nekvapil, Čestmír Houska, Marko Korhonen, Daniel Ilha, Greg Zadroga, Lukas Jandik, Adam Williams, Dusan Repík, Jan Veberšík, George Mamakos, Pavol Buday, Joel Wilcox, Marketa Jarosova, Ales Kozak, Simon Leška, Michal Zavadák, Marek Obršal, Dušan Andráš, Charles Winters, Michal Wrobel, Anton Bauer, Natiq Aghayev, Rene Roder, Jan Golmic, Nikita Suharicevs, Dušan Ragančik, Lukáš Tvrdoň, Karel Antonín, Adam Sadlon, Vaclav Novotny and myself.

These are the folks who can one day say to their grandchildren: I was there when Planets were added to Space Engineers. I am a Planet Creator!

Planets are fully destructible, volumetric and persistent. You can drill a hole from one pole to the other, but you will have to put a lot of hours into it. Planets are up to 120 km in diameter, which gives a surface area of about 45,239 km²! Even with planets this huge we had to apply smart visual tricks so they appear nice and round from afar. That’s why mountain ranges go flat once you hit a critical distance from the planet.
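
For anyone who wants to check that figure, here is a quick back-of-the-envelope calculation (a small Python sketch, not game code):

```python
import math

diameter_km = 120.0                       # largest planet diameter
radius_km = diameter_km / 2.0
area_km2 = 4.0 * math.pi * radius_km**2   # surface area of a sphere
print(f"{area_km2:,.2f} km^2")            # prints 45,238.93 km^2
```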

Every object on a planet’s surface is affected by its gravity, which is why we added a new option for station grids - the Station voxel. With this option enabled, a station will be static only while it is touching the voxel (one block built “inside of it”). When you cut some parts of the station away, it will become a dynamic object and could break apart. And since you will be building stations on a spherical surface from now on, buildings can’t be axis-aligned, but you can orient a block so that it’s aligned with the landscape surface.
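
A minimal sketch of that rule, assuming a hypothetical touches_voxel flag per block (this is an illustration of the behavior described above, not the actual game code):

```python
from dataclasses import dataclass

@dataclass
class Block:
    touches_voxel: bool  # True if the block is built "inside" the planet's voxels

def station_is_static(blocks):
    # The station grid stays static only while at least one of its blocks overlaps
    # the voxel terrain; cut that part away and the remainder becomes a dynamic grid.
    return any(block.touches_voxel for block in blocks)

print(station_is_static([Block(True), Block(False)]))   # True: anchored to the terrain
print(station_is_static([Block(False), Block(False)]))  # False: breaks free and can fall
```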

In the Planetary release you can find three planets (up to 120 km in diameter, inspired by Earth, Mars and an alien world) and three moons (up to 19 km in diameter, inspired by Earth’s Moon, Europa and Titan), each with its own unique flora and fauna.

Planets and moons can have an Earth-like atmosphere, a hostile one, or no atmosphere at all (in which case the engineer needs to keep his helmet on). And if you are looking for more, you can create your own planets with our modding tools (we are preparing a detailed planet modding guide that will be released within the next few days).

The surface area of all the planets combined spans about 140,000 km², which is more than in any other game (e.g. Skyrim). In the case of Space Engineers, this landscape is also fully volumetric and destructible.

I also want to share two fun facts: planets don’t orbit the sun (instead, the sun revolves around the world to simulate the day/night cycle), and planets and moons have natural gravity, just as in real life.

To support Planets as you know them now, we had to optimize many parts of our game engine. This includes physics (collision detection between ships and planets), rendering large-scale planetary environments with various materials, flora zones that support fast and seamless movement without any preloading, atmosphere and gravity, and more. We will keep optimizing and supporting the game and planetary features with future updates and additional content to make the experience better and better.

Lastly, to bring the best possible experience without compromises, we had to change the hardware requirements: Planets are possible only on hardware supporting DirectX 11. You can find the updated hardware requirements at http://store.steampowered.com/app/244850

Have fun building on planets!

#ThisIsPlanets!
#NowWithPlanets

Marek Rosa
CEO, CTO, Founder
Keen Software House

---

For further updates, keep reading my blog: http://blog.marekrosa.org/ or find me on Twitter: @marek_rosa

Or  follow Space Engineers and Medieval Engineers on Facebook and Twitter:
Space Engineers on Facebook: https://www.facebook.com/SpaceEngineers
Space Engineers on Twitter: https://twitter.com/SpaceEngineersG
Medieval Engineers on Facebook: https://www.facebook.com/MedievalEngineers
Medieval Engineers on Twitter: https://twitter.com/MedievalEng

Friday, October 23, 2015

Celebrating the 2nd Anniversary of Space Engineers


Dear Engineers,

Today we are celebrating the 2nd anniversary of Space Engineers. It has been two years since the release of the game co-developed by the community and shaped by its fans. To this day, more than 150,000 community creations have been shared and more than 100 weekly updates issued.

We are not celebrating just the 2nd anniversary of Space Engineers, but also the creative spirit of the community that drives us, pushes us into the unknown, makes us question what is even possible, and helps us see what we never before imagined.

And here we are after two years, standing on the verge of something big and exciting at the same time. Our biggest content update yet will introduce Planets and it will reinvent the way you experience Space Engineers.

Every single day we are blown away by community creations and we are humbled by passionate content creators helping us to shape the game.

To every content creator, streamer, YouTuber, commenter, reviewer, builder, creator, destroyer, explorer, and all the players and every single fan out there, in the name of the whole team I would like to say: you have always been a vital part of the game and we wouldn't be here without you.

Thank you for making Space Engineers possible.

Marek Rosa
CEO, CTO and founder of Keen Software House
:-)

PS. Join us today at 8PM CEST for a live stream with giveaways, live Q&A and more: http://www.twitch.tv/KeenCommunityNetwork

----

Follow Space Engineers and Medieval Engineers on Facebook and Twitter:
Space Engineers on Facebook: https://www.facebook.com/SpaceEngineers
Space Engineers on Twitter: https://twitter.com/SpaceEngineersG
Medieval Engineers on Facebook: https://www.facebook.com/MedievalEngineers
Medieval Engineers on Twitter: https://twitter.com/MedievalEng

Wednesday, September 16, 2015

Another GoodAI Milestone: Attention, Vision, and Drawing

Today I am excited to share with you that our GoodAI team has reached another milestone. These research results are part of our artificial brain architecture that we call the attention module.

Your brain’s ability to focus and pay attention to its surroundings is an important part of what allows humans to survive and live comfortably. Attention enables you to selectively focus on parts of your environment. We use attention to designate how much concentration a piece of information or a particular stimulus deserves, while at the same time ignoring input that we are aware of but know isn’t immediately relevant. Imagine how difficult it would be if, walking down the street, we focused equally on everything we saw, heard, smelled, or could feel. Attention helps us know what’s important at any given time. In the past, attention helped us hide from dangerous predators like wolves or bears, and today it enables us to drive cars and even distinguish between a friend’s cute puppy and the neighbor’s unfriendly watchdog.

It’s important to remember that these results are just a subsection of the attention module of our artificial brain, and that we are building on findings of other researchers in the areas of recurrent networks and deep learning. While this is certainly not an extraordinary breakthrough in AI research, reaching this milestone means that we’re on track in our progress towards general artificial intelligence.

Attention Method and Results

The new milestone is that our attention module is able to understand what the things it sees look like, and how those things relate to each other. One subpart of the module is able to study a simple black-and-white drawing and remember it. It can then reproduce a similar but completely original drawing after seeing just a small part of a drawing.

For example, if you train the module on faces like these:

Subset of images for training faces

And then show it just some hair (which the AI has never seen before):

Partial input
The module is able to generate a new, complete image of its own creative design. The generated image is not a copy, but is an original picture created by the AI, inspired by images it saw previously.

Generated image
You can see the whole process in this video with several examples:



And we went even further in our experimentation. Inspired by DecoBrush, we trained our module to remember ornamental letters. When we draw a normal, handwritten letter, the module adds ornamentation to it. Please note that this is just a first try, and several additional improvements, including structure or neighborhood constraints (e.g. convolutions), could be applied to improve the result.
Adding ornamentation to characters. Several characters are perfect (where the module learned what to copy), while others make sense and are novel, e.g. the “E” in the fourth row (where the module learned how to combine)

We’d also like to share our work in progress on a more complicated dataset. The module was fed pictures of mountains. Below you can see both the inputs (top) and the generated images (bottom):

The figure shows picture reconstruction for 12 examples (divided by a line). The top rows show the input. Below the input there are generated images (the network learns how to reconstruct the input image over a series of steps)

The module tries to generalize among the pictures – interestingly, it also added extra mountaineers in the top left picture pair. We are already working on further improvements and even more interesting tests and applications.

A scene for testing attention
Finally, the module can also understand the relationships between objects in a picture. This is illustrated in our next example: a room with objects. The goal here is to teach the attention module to remember the relationships between objects in the room and also remember how the individual objects look. Simply put, we want our algorithm to realize that there are chairs and a plant, that the plant is below the chair, and that the chairs are next to each other.



This video describes the attention module in action and shows the results:



Module details (for advanced technical readers)

For this milestone, we were inspired mainly by two works: Shape Boltzmann Machines, which use pixel-neighbor information to complete missing parts of a picture, and DeepMind's DRAW paper, which introduces a general recurrent architecture for unsupervised attention. Similar to that system, our model contains two Long Short-Term Memory (LSTM) layers (an encoder and a decoder) with about 60 cells each (so that it won’t overfit the data but will instead generalize across it).
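
To give a more concrete picture of what such an encoder/decoder loop looks like, here is a simplified, illustrative sketch in PyTorch. This is not GoodAI's implementation (that lives in Brain Simulator); the canvas update, layer sizes and step count are assumptions chosen only to show the general shape of a DRAW-style loop.

```python
import torch
import torch.nn as nn

class RecurrentSketcher(nn.Module):
    """Illustrative DRAW-style loop: read the image, compare it with the canvas,
    and add one 'stroke' per step until the reconstruction looks right."""

    def __init__(self, image_size=28 * 28, hidden=60):
        super().__init__()
        self.encoder = nn.LSTMCell(image_size * 2, hidden)  # sees the image plus the remaining error
        self.decoder = nn.LSTMCell(hidden, hidden)
        self.write = nn.Linear(hidden, image_size)

    def forward(self, x, steps=10):
        batch, hidden = x.size(0), self.decoder.hidden_size
        canvas = torch.zeros_like(x)
        h_enc = c_enc = x.new_zeros(batch, hidden)
        h_dec = c_dec = x.new_zeros(batch, hidden)
        for _ in range(steps):
            error = x - torch.sigmoid(canvas)                 # what is still missing from the canvas
            h_enc, c_enc = self.encoder(torch.cat([x, error], dim=1), (h_enc, c_enc))
            h_dec, c_dec = self.decoder(h_enc, (h_dec, c_dec))
            canvas = canvas + self.write(h_dec)               # paint another stroke
        return torch.sigmoid(canvas)

model = RecurrentSketcher()
drawings = torch.rand(4, 28 * 28)    # stand-in for a batch of training drawings
reconstruction = model(drawings)     # trained by minimizing a pixel-wise loss against the input
```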

Input images are separated into: (i) a training set that is used for training (such as the images of faces), and (ii) a test set containing images the module didn’t see during the training (for example, just the hair).

To generate a face, the algorithm first uses a database of images to learn what faces look like. When the module then sees a new image, or part of a new image, it generates yet another new image. This newly generated image is fed back in as the input at the next time step. Over time, the module improves by generating novel images that make sense to a viewer.

The attention architecture has a similar structure to the algorithm for generating images, but differs in the following way: the input is always just one part of the image that it sees (in contrast to the full image as before). From that limited input, the attention module generates: (i) the direction where it should look next and (ii) what will be there. This output is fed back as new input in the following step. The model learns to minimize the difference between the overall scene and what it predicted about the scene.
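
A rough sketch of that loop is shown below, with the recurrent model abstracted away as a step_fn callback; the crop size, starting location and function names are assumptions of mine, not GoodAI's actual code.

```python
import numpy as np

def crop(image, center, size=8):
    # Take a small glimpse around `center` (row, col), clipped to the image bounds.
    r = int(np.clip(center[0] - size // 2, 0, image.shape[0] - size))
    c = int(np.clip(center[1] - size // 2, 0, image.shape[1] - size))
    return image[r:r + size, c:c + size]

def attention_rollout(image, step_fn, steps=8):
    # `step_fn(glimpse, center)` stands in for the recurrent model: given what it
    # currently sees, it returns where to look next and what it expects to find there.
    center = (image.shape[0] // 2, image.shape[1] // 2)
    trace = []
    for _ in range(steps):
        glimpse = crop(image, center)
        center, prediction = step_fn(glimpse, center)
        trace.append((center, prediction))   # compared against the real scene during training
    return trace
```
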
The attention module depends on the ability to learn sequences of information. Here, we give a bit of insight into how we compared two training algorithms using a simple yet challenging example. We tested the Long Short-Term Memory (LSTM) model with Real-Time Recurrent Learning (RTRL) versus Backpropagation Through Time (BPTT). At time 1, we feed the LSTM a randomly cropped part of the picture of a handwritten digit from the MNIST database; at time 2, a random part of the following number is used as the input; and at time 3, it finally sees what it should predict (the number that follows after those two). Recurrent connections are highlighted in blue. While we found that RTRL is great in most cases, it was outperformed by BPTT in this example as unfolding the network seems to be very important here.
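
For the technically curious, the sketch below shows the BPTT side of that comparison in PyTorch: the network is unfolded over the whole three-step sequence, so the error from the final prediction reaches the weights used at the earlier steps. The crop size, layer sizes and loss are assumptions for illustration; this is not the code used in the experiment.

```python
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=14 * 14, hidden_size=60, batch_first=True)
classifier = nn.Linear(60, 10)
optimizer = torch.optim.Adam(list(lstm.parameters()) + list(classifier.parameters()))

def bptt_step(crop_t1, crop_t2, crop_t3, target_digit):
    # Unfold the LSTM over the whole three-step sequence, then backpropagate
    # through time so the error at step 3 reaches the weights used at steps 1 and 2.
    sequence = torch.stack([crop_t1, crop_t2, crop_t3], dim=1)   # (batch, 3, 14*14)
    outputs, _ = lstm(sequence)
    logits = classifier(outputs[:, -1])                          # prediction after the last step
    loss = nn.functional.cross_entropy(logits, target_digit)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```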

Conclusion

This milestone is an important step along the way to developing a truly general artificial intelligence. However, this is still just a small part of the attention module of our artificial brain, and an even smaller part of building an artificial brain as a whole. Moreover, we are already exploring several improvements to the attention module, and you can expect to see these results in an upcoming Brain Simulator update.

In the meantime, we are working hard on other modules of our AI brain – including motor commands, long-term memory, and architectures inspired by neuroscience and deep learning – and you can look forward to news about new milestones in the near future.

Thanks for reading! And for a special goodbye, here’s how our attention module imagines the Space Engineers astronaut:


And many thanks to Martin, Simon, and Honza, our attention module researchers!

Marek Rosa
CEO, CTO and founder at GoodAI
:-)

---
Note on blog comments:

Lately I’ve received a lot of spam comments on my blog, and unfortunately the blog has no settings for preventing spammers. Due to the extreme amount of spam messages, I am forced to start moderating comments – I hope readers will understand this need, and we will do our best to approve comments as fast as possible.

---

Want to know as soon as GoodAI reaches another milestone? Check us out on social media:


Tuesday, August 18, 2015

Stabilization period for Medieval Engineers and Space Engineers

Our Space Engineers and Medieval Engineers teams have been working hard to develop new features over the past few months, and recently it became clear that in order to keep the gameplay at a high level, we need to slow down the pace of adding new features and focus more on improvements. To put it another way, we’ve been pushing towards new features in a highly motivated, almost crazy way, and it’s time for a stabilization period :-) We added new features, and now we are working on improving those features and making them feel more complete, guided by the opinions of the community. This is essential for titles that are available in Early Access – we enjoy getting feedback from the community that helps us in the process of optimizing the game.

This means that starting this week, we are moving into a temporary feature freeze period. We will be focusing exclusively on finishing existing features and on bug fixing. I want to assure you that weekly updates will continue, but they’ll be focused on stabilization rather than on new features.

The stabilization period will allow us to fix issues that appeared after recent features were implemented, and improve the overall gameplay.

It’s important to state that going into a stabilization period does not mean that we’ve reached Beta – in fact, we did something similar back in October 2014 (see http://blog.marekrosa.org/2014/10/level-design-stabilization-period_15.html). Taking time to finish existing features and fix bugs is a normal part of game development, and it has been received positively by the community in the past.

The stabilization period will not affect or slow down the development of upcoming major features, which we will continue to work on in parallel with bug fixes. This includes:
  • New multiplayer for Space Engineers and Medieval Engineers
  • Planets for Space Engineers (See our teaser video and screenshots below!)


Finally, I want to say that we are so thankful to our community of players and fans who are very helpful in finding bugs and reporting in-game issues – we couldn’t do it without you.



Thanks very much for reading! As always, feel free to leave comments and questions below. I am very interested in hearing your suggestions, especially about what should get finished and fixed first, what can wait for later, etc.

Marek
:-)

---
For further updates, keep reading my blog: http://blog.marekrosa.org/ or find me on Twitter: @marek_rosa

Or  follow Space Engineers and Medieval Engineers on Facebook and Twitter:
Medieval Engineers on Facebook: https://www.facebook.com/MedievalEngineers
Medieval Engineers on Twitter: https://twitter.com/MedievalEng
Space Engineers on Facebook: https://www.facebook.com/SpaceEngineers
Space Engineers on Twitter: https://twitter.com/SpaceEngineersG

Tuesday, July 21, 2015

GoodAI is heading to the AGI-15 conference in Berlin!

GoodAI is gearing up for our first whole-team conference about general artificial intelligence in Berlin, Germany from July 22-25. We’re sending a full bus of team members, about 25 people, to represent GoodAI.

GoodAI team on the roof of our offices!

AGI-15 is organized by the Artificial General Intelligence Society together with the Association for the Advancement of Artificial Intelligence (AAAI). The conference is a yearly gathering of top researchers, academics, and business leaders dedicated to creating general AI.

When I say general artificial intelligence, I’m referring to a technology that’s very different from typical AI applications that most people have heard about – self-driving cars, chess-playing AI, or facial recognition software. These specific applications are commonly referred to as narrow AI, which means that they are built to solve very specific problems.

General AI developers such as GoodAI, however, aim for something more universal. General artificial intelligence of the future will be able not only to perform specific tasks very well, but to function with the skill and ability of a human being. Future general AI brains will perceive stimuli in the same way that a human does – by seeing, feeling, interacting, and learning – and use this data to generate behavior, perform tasks, and respond to motivations given by human mentors. General AI will be as flexible and able to learn as a human.

Conference attendees at AGI-15 will therefore all be dedicated to this major task – developing general artificial intelligence. To support efforts from others outside our company who are aligned with our vision, I decided to officially sponsor the AGI-15 conference with a $10,000 donation.

GoodAI will also deliver a tutorial presentation at AGI-15 about our first product: Brain Simulator, our visual editor for designing artificial brain architectures, which is now available to the public under an open-source, non-commercial license. The tutorial, titled “GoodAI Brain Simulator: Prototyping AI Architectures,” will offer an in-depth explanation of Brain Simulator and its uses.

Thanks for reading!
Marek
:-)

---
If you want to hear more about what GoodAI is doing at the conference or in general, be sure to follow us on social media or check out our website:
Facebook: https://www.facebook.com/GoodArtificialIntelligence
Twitter: @GoodAIdev
www.GoodAI.com

Wednesday, July 15, 2015

Esprit de corps at GoodAI

Today I’ve prepared a special blog post to tell you a little more about how we do things here at GoodAI. Our way of working is all about esprit de corps, or what’s often just called “morale” in English – it’s about believing in a purpose and staying motivated to move forward towards a common goal. Esprit de corps means being a cohesive whole, refusing to surrender, having each other’s backs, and getting where we need to be.

I prefer to think of us as a group of people going after one common goal, not a corporation with a hierarchy of boss and employees. Our team works in a cooperative environment, and we are all focused on one mission: creating general artificial intelligence. I also want my colleagues to feel like co-owners of GoodAI, so I am planning to give them company shares in the near future. We operate on consensus and everything we do takes us closer to our goal.

That said, every member has a particular place in the team. We’re working towards multiple smaller milestones at any given time, which means that the group is often split into teams that tackle smaller goals. We’re also oriented towards incremental progress rather than making great leaps over longer periods of time. Instead of going after a single goal that takes us two years, we aim for 12 smaller milestones spaced two months apart. I know we achieve more this way, and hitting these incremental milestones allows us to be flexible in our approach and make real progress day by day. We are not afraid to fail, and we take a positive view of every setback we encounter on the way to reaching milestones. By failing fast, we can rethink and attack the challenge from a different angle. We want to find out what doesn’t work as quickly as possible in order to more efficiently find out what methods will succeed.


GoodAI team members are also given a lot of free time to try doing things their own way. We push hard towards a particular goal for two months, and then take one month where every team member explores their own individual ideas and interests related to general AI. We’ve found that this process allows us to keep hitting the milestones we need to hit, but also makes room for a lot of great ideas that emerge when researchers and programmers are given space to be creative.

My own role in the GoodAI team is to drive the direction of our research, push everyone to focus on the most important things and ignore what wastes our time and resources, and determine the best ways to achieve our goals. I would call myself a project architect. I keep the pressure on our teams to produce, but I’m careful to connect people and ideas and to let new approaches emerge.

My job is to understand what the teams can do and to know that they can always do better. It’s my responsibility and personal mission to teach the teams that they can do more than they ever imagined possible. My role is to tap their fullest potential.

If you’re curious about our “stay-the-course” approach, check out this article that perfectly describes the way I look at things I want to do in my life: https://en.wikipedia.org/wiki/Grit_(personality_trait)


Why general artificial intelligence?

For me, creating general AI is the greatest challenge I can imagine, and I know the risk of failure is high. But I also know that general AI will fix everything for humankind in the future – it will be a universal problem-solver. I also know myself. I’m the kind of person who needs to take on the hardest challenges that seem impossible to most people.

If something is too easy, I lose interest. If general AI was a simple challenge, I wouldn't bother. I’m all about keeping a “no limits” mentality, being open to others and their ideas, and remembering that my team is my greatest resource in getting where we need to be.

In case you’re interested in joining us :-), here’s what my team members are saying about working at GoodAI:

Honza: “I’m very curious what GoodAI will be doing in two months, or in half a year. I have a feeling that even my craziest dreams are nothing compared to what we will really do. So it will be a dream come true, literally.”

Jarda: “A big change can be made by someone new (like Marek) who wants to do things in a different way and at the right time. Also, I've always wanted to work on a team like this, and I just had to wait until the company and position was available in the Czech Republic :-) ”

Phil: “I want to be able to contribute to all of these great challenges we have. I love the cutting edge technology at GoodAI, and I love the team with lots of smart people. You can always throw an idea out there, you can always get lots of input.”

Jiri: “I’m doing what I always wanted to do – my work is my hobby.”


Thanks for reading!

Marek
:-)

Learn more about GoodAI on our website www.GoodAI.com, on Facebook, or by following us on Twitter.

Tuesday, July 7, 2015

Announcing GoodAI: a Keen Software House sister company in general artificial intelligence

Today I’m thrilled to publicly announce GoodAI, a Keen Software House sister company developing general artificial intelligence. In this post, I will tell you more about our current stage of general artificial intelligence research at GoodAI, where we hope to go from here, and especially about our plans to improve our games by integrating them with our AI technology.


GoodAI began back in January 2014 as a research project within Keen Software House. Alongside becoming an independent game developer, I always dreamed of leading a team that builds truly general artificial intelligence. About a year and a half ago, I invested $10M USD into what is now our GoodAI company.

I am CEO of both GoodAI and Keen Software House. My role in GoodAI is to set and maintain our vision, driving both the research side and the business side of the company at the same time. I push the team to achieve our mission: develop general artificial intelligence, be helpful to humanity, and learn more about the universe.

I want to reach our end goal as fast as possible, because I really see the good that general artificial intelligence will bring to our world. Imagine an AI that is as smart, adaptable, and able to learn as a human being. Then imagine telling this AI to improve itself – to make itself even smarter, faster, and more capable of solving problems. Such an AI will be the last thing humans ever have to invent – once we have this technology, our AI could invent other technologies, further the sciences, cure diseases, take us further in space than we’ve ever been, and more.

GoodAI has reached two important milestones on the path toward developing general artificial intelligence:

  • Pong-playing AI: In the first project, the AI agent learns to play a Pong/Breakout-style game from the unstructured input of screen pixels and reward signals.
  • Maze game AI: An AI that can play a video game requiring it to complete a series of actions in order to reach a final goal. This means that our AI is capable of working with a delayed reward and that it is able to create a hierarchy of goals.

Although my companies are currently separated into games and AI, we plan to introduce general artificial intelligence technology into Space and Medieval Engineers in the next few months. As part of GoodAI’s development, our team created a visual tool called Brain Simulator where users can design their own artificial brain architectures. We released Brain Simulator to the public today for free under an open-source, non-commercial license – anyone who’s interested can access Brain Simulator and start building their own artificial brain. Please keep in mind that Brain Simulator is still in the prototype stage of development. More info: www.GoodAI.com

Once Brain Simulator is integrated into Space Engineers and Medieval Engineers, players will have the option to design their own AI brains for the games and implement them, for example, as a peasant character. Players will also be able to share these brains with each other, or take an AI brain designed by us and train it to do things they want it to do (work, obey its master, and so on). The game AIs will learn from the player who trains them (by receiving reward/punishment signals, or by imitating the player’s behavior), and will have the ability to compete with each other. The AI will also be able to learn by imitating other AIs.

This integration will make playing Space Engineers and Medieval Engineers more fun, and at the same time our AI technology will gain access to millions of new teachers and a new environment. This integration into our games will be done by GoodAI developers. We are giving AI to players, and we are bringing players to our AI researchers.

I am very happy with the overall progress of our games, and our development will not slow down when we start to integrate AI technology. Planets, new multiplayer, and Xbox porting are all progressing as planned. You can look forward to more information about these features, plus some further info about AI in games, in a future blog post.

What modders are saying about integrating Space Engineers and Medieval Engineers with Brain Simulator:

war2k: "This is really cool. I like this concept of being able to design your own AI for a game and then teaching it how to function. Very awesome work, you guys."

Shaostoul: "True AI is both an amazing and terrifying concept, but one I think is a necessity for us. I love the idea of being able to teach an AI and guide it to being productive. Better than a scripted AI for sure."

Malware: "AI is inevitable. It is only a question of time before someone cracks that code, literally. I always believed that the right way to go would be a system that can learn as we do, if that was possible. Why should we waste time figuring out what nature has already done? You're on the right track, I'm sure of it."

Thanks for reading this!
Marek

---
For more information, check out the GoodAI website: www.GoodAI.com
Like us on Facebook: http://www.facebook.com/GoodArtificialIntelligence
Follow us on Twitter: @GoodAIdev

Join us in designing AI brain architecture with Brain Simulator: https://github.com/GoodAI/BrainSimulator

Wednesday, June 24, 2015

Keen Software House is moving and expanding: new Prague and Brno offices

Today I’m happy to report that we’re taking the next step in our plans to expand the Keen Software House teams. We’re growing very rapidly and planning to grow even more – at the moment we have about 30 people working on the games team and roughly 20 on the AI team. We just signed the contracts to open a second office in Brno, and it has been up and running for about one week.

The Prague teams have moved to a new 500m2 office in the Danube House. This office is only temporary. Our old Dejvice offices were just too small, and since our new offices in the Nile House won't be ready for three months, we jumped at the chance to spend a few months in the Danube House.

Keen SWH’s new Prague home – the Danube House
Karolinská 650/1, Prague





Our permanent home will be a 1200m2 office in the Nile House in Prague, right next door to the Danube House. This office will be luxurious and spacious, and will be able to hold us even when our teams number 100+ people all together. It will have a large presentation room, relaxation spaces, a sound room, and more. It’s the most high-tech office complex in Prague, and we’re very happy to call it home.

Future permanent home of KeenSWH in Prague – the Nile House
Karolinská 654, Prague

The new Brno office is for AI researchers, SW engineers and game programmers who can’t relocate to Prague. This is our first experiment in opening a remote office.

The location of our new Brno office: Veveří 2581/102, Brno
Once we get used to the challenge of having some people work outside of Prague, we will be able to open other remote offices in the near future. We expect that our next offices will be in Bratislava, Slovakia, and then in other parts of Europe or even around the world.

Positions which can be done from/in Brno (or other remote offices):
- AI Researchers
- SW engineers for AI project
- Game Programmers
 
We’re always hiring, so if you’re interested in joining our Prague or Brno teams (or even working from another location), check out our jobs page here: http://www.keenswh.com/jobs.html

This is all a part of our long term plan to reach talent that can’t move to Prague. We have the resources to expand and we believe this is the most effective way to find the best people to work on our AI project and games.

Thanks for reading!
Marek Rosa
___

If you want to follow our AI project, please follow me on twitter http://twitter.com/#!/marek_rosa or keep checking my blog: http://blog.marekrosa.org

For the latest news on our games, follow us on Facebook or on Twitter.

Medieval Engineers on Facebook: https://www.facebook.com/MedievalEngineers
Medieval Engineers on Twitter: https://twitter.com/MedievalEng
Space Engineers on Facebook: https://www.facebook.com/SpaceEngineers
Space Engineers on Twitter: https://twitter.com/SpaceEngineersG

Thursday, June 18, 2015

Guest post by Dusan Andras - Space Engineers: Planets!

Hello Engineers! I am Dusan Andras, and for those who don’t know me, I am the main programmer working on the development of planets for Space Engineers.

So planets… This has been one of the features most requested by the community since we released the game on Steam Early Access. Players have been asking for it constantly and we promised to deliver. My colleague Ondrej Petrzilka already shared the first batch of info in his previous blog post, and a lot has been implemented since then. Planets are getting even closer to release! At this moment we are still not 100% sure when we will be able to release them, since this is one of the biggest features we have ever developed, and we hope you understand that the update needs proper testing before we add it to the game – even if planets already look complete and amazing in the screenshots. I would like to give you a sneak preview of each of the planets’ properties, along with some nice screenshots that we took during development.

Planet size
This is always a question of FUN vs REALITY. I can imagine that many players would like to have real life-sized planets, but for the sake of gameplay, time, and engine possibilities we decided to use a 30-50 km diameter for planets and an 8-10 km diameter for moons. Yes, some generated planets could have 0-3 moons accompanying them.

This is a 50 km planet that is 50 km away from you.

This is an 8 km moon from the planet surface. You can see another planet on the horizon

More planets!


Gravity
Planets and moons will have “natural” gravity that will affect ships, players and floating objects near them. The gravity will be scaled to the planet’s size and will decrease the further away you are from it.
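
The post doesn't spell out the exact formula, so here is a hedged sketch of what "scaled to the planet's size and decreasing with distance" could look like in code. Both the linear size scaling and the inverse-square falloff are my assumptions for illustration, not the actual engine behaviour.

```python
def surface_gravity(planet_radius_km, g_reference=9.81, reference_radius_km=25.0):
    # Assumption: surface gravity scales linearly with planet size, normalised so a
    # 50 km diameter planet gets roughly Earth-like gravity.
    return g_reference * planet_radius_km / reference_radius_km

def gravity_at(distance_from_center_km, planet_radius_km):
    # Assumption: full strength at (and below) the surface, then an inverse-square
    # falloff the further a ship or floating object gets from the planet.
    g_surface = surface_gravity(planet_radius_km)
    if distance_from_center_km <= planet_radius_km:
        return g_surface
    return g_surface * (planet_radius_km / distance_from_center_km) ** 2

print(gravity_at(25.0, 25.0))   # at the surface of a 50 km planet
print(gravity_at(50.0, 25.0))   # one radius above the surface: a quarter of the strength
```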

Atmosphere
There will be two types of atmosphere around planet surfaces for now: a “hostile” one, without any plants or life, and one for “organic” planets with flora. Organic planets will have an atmosphere full of oxygen that you can breathe and supply your ships with, and will have a bluish color like Earth’s. Hostile planets will be without oxygen and with differently colored atmospheres.

Vegetation (trees, bushes, grass)
We added new “organic” material types for planets. These appear only on planets with an oxygen atmosphere and existing flora. In the future you will be able to harvest this organic material – but probably not in the first planetary update. The flora (trees and bushes) has been borrowed from Medieval Engineers. The flora won't be visible from space, but will appear only when the player or a ship gets closer to the planet, and it can be configured/disabled via the world settings.

Organic planet from space

Flora at sunset

Flora during the day


Sun
To simulate the day and night cycles we decided to rotate the sun around the planets/world. The user will be able to configure the day duration from 1 minute to 24 hours, or disable the rotation to keep the current static sun.
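
As a small illustration of how a configurable day length could drive that rotation (a sketch with assumed names and units, not the actual engine code):

```python
import math

def sun_direction(elapsed_seconds, day_duration_minutes=120.0):
    # The sun completes one full revolution around the world per configured day;
    # the returned unit vector can be used as the light direction in the sky plane.
    day_seconds = day_duration_minutes * 60.0
    angle = 2.0 * math.pi * (elapsed_seconds % day_seconds) / day_seconds
    return math.cos(angle), math.sin(angle)

print(sun_direction(0))          # start-of-day position
print(sun_direction(30 * 60))    # a quarter of a 2-hour day later
```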

Different day cycles from the same planet:

Station voxel support
Because of the planetary gravity, we added a new option for station grids: Station voxel support. With this option enabled, a station will be static only while it is touching the voxel (one block built “inside it”). So when you cut some part of the station away, it will become a dynamic object and fall.

Note: Please keep in mind that everything that has been written and presented in this blog post can be changed until this feature is released.

Thank you for reading this and we hope that you liked what you’ve seen. We can’t wait to give you planets and start playing with them!

Dusan Andras

---

For the latest news on our games, follow us on Facebook or on Twitter.

Space Engineers on Facebook: https://www.facebook.com/SpaceEngineers
Space Engineers on Twitter: https://twitter.com/SpaceEngineersG
Medieval Engineers on Facebook: https://www.facebook.com/MedievalEngineers
Medieval Engineers on Twitter: https://twitter.com/MedievalEng

Wednesday, June 17, 2015

USA Road Trip: DARPA Robotics Challenge, the Future of Life Institute, MIRI and the Exponential Finance Conference

Those of you following me on Twitter probably noticed that I’ve spent the past 3+ weeks in and around NYC, Boston, and now the big cities of California. Several people suggested that I note down some of my experiences in a travel blog, so here is the first installment. This post will cover:

- the DARPA Robotics Challenge
- my meeting with the Future of Life Institute (FLI)
- my meeting with the Machine Intelligence Research Institute (MIRI)
- the Exponential Finance Conference (don’t worry – I’m not transitioning into finance)
- my visit to Tony Soprano’s house, plus other American adventures

DARPA is the research arm of the U.S. Department of Defense which invests in robotics research and development, and which holds a robotics competition every few years. The robots have to complete a variety of tasks in the shortest amount of time in order to win. The DoD aims to send robots to disaster zones which are too dangerous for humans – they will basically act as first responders and save lives where human capabilities are limited.

For the 2015 challenge, the robots faced 8 difficult tasks. After being placed in a car by their human team, the robots had to drive a car through a course. They were then required to exit the car, open and pass through a door, open a valve by rotating its circular handle 360°, use a drill to cut a large hole in drywall, cross over a great deal of rubble (this is where many bipedal robots crashed), and finally walk up a set of stairs. During this time, the robots had to carry their own source of energy with them. Human teams could only communicate wirelessly with the robots, but were allowed to help the robots get back on their feet after a fall - at the cost of having a penalty minute added to their final times.


Here’s a great clip of the winning robot’s moment of victory:


The DARPA Robotics Challenge was 10x better than I had imagined, and I was really positively surprised by the results. While many of the robots moved by rolling on wheels of some sort, most of this year’s contenders could walk on two “feet.” Even though these walking robots had some problems completing tasks, I was so impressed with the progress companies have made in the past couple of years. I also really enjoyed the expo section that was full of robots and other inventions – and I even got a few close-ups.





I have to say that it’s great that DARPA puts together this competition and supports robotics. I can’t wait for a future challenge, to which I would like to send an AI brain developed by our team, implemented in a third-party robot. I’d like to see our AI brain in a robot that isn’t just preprogrammed for specific tasks and the body it uses – I want to compete with an AI brain that can be put into any robot body, learn how to control it, and do any task we put in front of it.

A second highlight of my trip (so far) was meeting with people from the Future of Life Institute. FLI recognizes that technology might present risks to humanity in the future, and is committed to protecting human life and being optimistic about what’s to come. I think that FLI representatives were glad to learn that our AI company is serious about pursuing general AI technology safely. I’m happy to say that we share the same values, and I’m looking forward to future cooperation with this organization. 

I was also able to connect with the Machine Intelligence Research Institute and make a small donation to the organization while in the Bay Area. Their team is incredibly smart, and it’s obvious that they’ve thought for a long time about creating a positive future for humanity alongside AI. I like that their thinking is so carefully reasoned and highly structured, and I’m happy to say that we’re planning to cooperate with them in the future. MIRI believes, like me, that cooperation will always win out over competition when it comes to humans and artificial intelligence. 

If you’re interested in AI safety, keep an eye out for my upcoming blog post on the topic!

The Exponential Finance Conference in NYC, a fourth exciting moment on this trip, was all about how technology is impacting businesses today. It was amazing to see the way that the speakers look into the future and try to discover trends and patterns. I’d like to see more business leaders thinking this way in the Czech Republic and Slovakia.

Peter Diamandis, founder of Singularity University, on stage at Exponential Finance
And here’s a glimpse into some of the fun I’ve been having between conferences, meetings, and interviews:

My view of the Statue of Liberty from a helicopter
A quick stop at the Museum of Natural History in NYC
Tony Soprano’s house!


Marek

---
If you’d like to see future updates on the progress of my U.S. road trip, you can follow me on Twitter at http://twitter.com/#!/marek_rosa or keep checking my blog: http://blog.marekrosa.org


Tuesday, June 9, 2015

Guest post by Ondrej Petrzilka - Medieval Engineers: Castle siege, Survival, Clans

Hello, I am Ondrej Petrzilka (lead developer for Space and Medieval Engineers). In this blog post I would like to shed some light on our current and future plans with Medieval Engineers. Currently we’re working on three big things: castle siege (that has just been released), survival, and clans.

Before I continue, let me emphasize that everything I say in this blog post is subject to change. The process of game development at our studio goes through multiple stages (idea, concept, development, testing, feedback, more development…), and during the later stages it’s likely that decisions made in the earlier stages will change due to the feedback and experience we gain along the way.


Castle Siege


The original idea was to create a game mode where players can easily test their castle and siege weapon designs. This is the first iteration of this feature and we will be working on it during the upcoming weeks. We plan to add more weapons, armor, and a better combat system including ranged weapons. We’re also considering enabling some survival features in castle siege.

In castle siege mode, there are two teams: attackers and defenders. Defenders choose a map with a castle, and attackers choose siege weapons and one of the predefined positions where they would like to start an attack. The goal for attackers is to destroy the king statue within a pre-set time limit. The statue is hidden somewhere in the castle.

Attackers can use different approaches to destroy the statue. They can attack with siege weapons and destroy the statue directly, they can use ladders to go over walls, they can destroy walls with siege weapons and then run inside, or they can dig under the castle.

For more info watch the tutorial video:


Castle Siege development screenshots:



Survival


It was clear after our last survey that the majority of our players would like us to focus on survival mode. We have been working on several features for a long time now and for the next weeks/months survival will be our priority. You can see the results of the survey here: http://www.kwiksurveys.com/p/0q33A3vf?qid=539349

Character stats
We plan to introduce health, stamina, and food, and maybe water later on. Stamina will be used for sprinting, combat and hard labor. Stamina will regenerate automatically, but when a character is low on food, stamina regeneration will be limited. Food will provide a temporary stamina regeneration boost, and different food will have different boosts. Health will regenerate automatically very slowly, but during sleep it will regenerate fast. Some food and items (e.g. bandages) will have a health regeneration effect. Food will be automatically reduced over time, and characters will have to eat. Otherwise, they will be weak and eventually die.
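
A minimal per-tick sketch of these rules is shown below. All numbers, thresholds and names are placeholders of mine, not the actual design values, and the real game logic may be structured very differently.

```python
def update_stats(health, stamina, food, sleeping=False, working=False):
    # Food is consumed over time; hard labour also burns stamina.
    food = max(food - 0.01, 0.0)
    if working:
        stamina = max(stamina - 1.0, 0.0)

    # Stamina regenerates automatically, but only slowly while food is low.
    stamina_regen = 0.5 if food > 20.0 else 0.1
    stamina = min(stamina + stamina_regen, 100.0)

    # Health regenerates very slowly, faster during sleep; a starving character loses health.
    health_regen = 1.0 if sleeping else 0.05
    if food <= 0.0:
        health_regen = -0.5
    health = min(max(health + health_regen, 0.0), 100.0)

    return health, stamina, food

print(update_stats(health=80.0, stamina=50.0, food=30.0, working=True))
```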

We plan to add an inventory which will allow players to carry items like food, construction components (nails, wooden spikes) and other small items and components we’ll introduce in the future.

Food
Players will have to obtain food to survive, so we plan to add several food sources into the game. Players will be able to go into the woods and gather berries, mushrooms and roots; some of them will respawn in their original place, and some will respawn randomly. Another food source we plan to add is wildlife. Players will be able to hunt deer for food and later other animals including dangerous ones (like bears and wolf packs). In later stages of the game, players will be able to farm and raise cattle.

Seasons
We plan to add seasons to make survival more interesting and entertaining. The game will start in spring, and players will have three seasons to prepare for winter and obtain enough food, because during winter food sources will be scarce. We’re also thinking about introducing a new stat: heat. Players would have to get warm clothing and firewood for winter. With seasons, farming will become more interesting. It will be necessary to sow crops in spring and harvest them during late summer or autumn; crops will get destroyed during winter. During winter, pastures will be covered with snow and players will have to feed their cattle with hay, otherwise the animals will die of hunger.

Building
We plan to release the first version of the building very soon. To build any large block, players will have to obtain components first. Currently, the only components are wooden timbers, scrap wood and stone. Players will be able to build simple houses in the first version. It won’t be necessary to carry all the components from the inventory to build something, but the components will have to be placed next to the character.

After this point we’ll continuously add more blocks. Later, we’ll introduce new materials and ways of obtaining them. We’re planning to add hay, in order to make hay roofs and feed cattle in winter. We will also introduce clay for making roofs, wooden walls, dishes and basic furnaces, and iron ore, to make iron weapons, armor, nails and mechanisms. We will also add limestone to make grout, sandstone as lightweight stone, and materials for cloth.

Crafting
Players will be able to craft items in three different ways. The first way will be through a toolbar, where the player will simply put an item he wants to craft on the toolbar and then place it. Crafting will consume certain materials and the crafted item will appear in front of the player. This way, the player will be able to craft only very simple things. A second way of crafting will be through a craft table or forge; in this way the player will be able to craft furniture, barrels, chests, mechanisms, weapons and armor. The last way of crafting will be through a furnace. This will be an unattended way of crafting, where the player will put ore and firewood into the furnace and after some time will get ingots.

Water
We would like to add fully simulated water, but we’re not sure if it’s going to be possible because of performance. First we’ll add simple planar water: at ‘sea level’ there will always be water. This will allow us to make lakes, wells and also fake rivers. Even though this has limited potential, it will still be interesting for players. We plan to transport water in buckets and barrels and use it for drinking, cooking and farming. It can also be used to extinguish fires and to power mechanical blocks.

Mechanical blocks
We plan to start adding mechanical blocks once basic survival is in place. Players will use mechanical blocks to power lifts and automated hammers in forges. In the future, there will be more uses for them.


Clans


Clans will come along with the survival gameplay. The idea is that one or more players will have their clan and will try to survive. Players will be able to play as any member of the clan, and the rest of the clan members will be controlled by AI. Players will be able to switch between clan members. AI-controlled clan members will be able to do simple tasks like farming, gathering food or raw materials, manufacturing items in a workshop, helping to build structures, or patrolling the castle walls and alerting others in case of danger. We would like the player to do the interesting and entertaining things in the game, while simple and repetitive tasks can be done by AI-controlled clan members.

When a clan member dies, either of old age or by being killed, it won’t respawn – it will be lost permanently. New clan members can be born when there are men and women of reproductive age. We’re already working on new character models: a female, an old lady, an old man, and a child. Old characters won’t be as efficient at manual work, but they will have the advantage of experience, so they can be utilized in workshops or kitchens. Children won’t be able to do any work on their own, but they will follow adult clan members and help them (giving them a bonus).


Conclusion


These features are among the largest and most complex that we are planning to implement in Medieval Engineers, and their development will be a long-term process. Please feel free to post your suggestions and ideas either under this blog post or, preferably, in our Suggestions sub-forum here: http://forum.keenswh.com/forums/suggestions.413446/. Your feedback is valuable!

Thanks!
Ondrej Petrzilka
---

Thank you for reading this! For the latest news on our games, follow us on Facebook or on Twitter.

Medieval Engineers on Facebook: https://www.facebook.com/MedievalEngineers
Medieval Engineers on Twitter: https://twitter.com/MedievalEng
Space Engineers on Facebook: https://www.facebook.com/SpaceEngineers
Space Engineers on Twitter: https://twitter.com/SpaceEngineersG

Monday, June 1, 2015

Fan meetup in LA - June 16

EDIT 06/10/2015:

Dear fans,

I was really excited about meeting up with you on Tuesday, and I was happy to see that so many people were interested. Unfortunately, I underestimated how much time I would need for necessary business meetings before E3 here in the U.S., so I’m afraid I have to cancel our meetup. I’m looking forward to my next visit to LA, and I hope I’ll have a chance to meet you then.

Thank you very much for your understanding.

Marek Rosa

--------------

Dear Engineers!

I’ve been waiting for a chance to meet you in person, and I think I’ve found the right occasion. I am going to be in Los Angeles attending E3, and I thought this would be a great opportunity to organize a small fan meetup for our US fans on June 16th, where you will have the chance to meet me and ask questions about anything you wish! As you know, we are an EU-based studio, so this is one of the rare occasions when I am visiting the US, and we hope you will take advantage of it.

Before we start preparing, we need some more information about who is interested in the meetup and where it should be held. We would appreciate it if you could fill in the survey below and help us organize the meetup.

http://kwiksurveys.com/s/IUQG2acn

Thank you for answering and I am looking forward to meeting you all!

Marek Rosa 

Monday, May 18, 2015

General AI team at Keen Software House hits 2nd milestone

Today I’m excited to tell you that our general AI project has reached another important milestone.

A quick reminder of what our AI brain team has achieved so far:
  • an AI that can play Pong/Breakout (left/right movement, responding to visual input, achieving a simple goal)
  • Brain Simulator (a visual editor for designing the architecture of artificial brains)

The new milestone is that our general AI is now able to play a game that requires it to complete a series of actions in order to reach a final goal. This means that our AI is capable of working with a delayed reward and that it is able to create a hierarchy of goals.

Without any prior knowledge of the rules, the AI was motivated to move its body through a maze-like map and learn the rules of the game as it went. The agent behaves according to the principles of reinforcement learning - it seeks reward and avoids punishment. It moves to the place in the maze where it receives the highest reward and avoids places where it won’t be rewarded. We visualize this as a 2D map, but the agent actually works in an arbitrary number of dimensions; the 2D map is only our visualization. What the agent "sees" is 8 numbers (an 8-dimensional state space) which change according to its behavior, and it must learn to understand the effects of its actions on these numbers.
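
To make the setup concrete: the agent repeatedly observes a state vector (here 8 numbers), picks an action, and receives a reward. Below is a minimal sketch of that loop, with a simple epsilon-greedy tabular learner standing in for the real brain; the environment interface, action names, and function names are our assumptions, not GoodAI’s actual API.

```python
import random

ACTIONS = ["left", "right", "up", "down", "stay", "press"]

def act(state, q_table, epsilon=0.1):
    """Pick an action for the current 8-number state: usually the action
    with the highest learned value, occasionally a random one to explore."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_table.get((state, a), 0.0))

def run_episode(env, q_table):
    """The observe -> act -> reward loop. `env` is assumed to expose
    reset() and step(action), returning an 8-element state tuple,
    a scalar reward, and a done flag."""
    state = env.reset()
    done, total_reward = False, 0.0
    while not done:
        action = act(state, q_table)
        state, reward, done = env.step(action)
        total_reward += reward   # the value update itself is sketched in
    return total_reward          # the "How the algorithm works" section below
```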

Here you can see an example map of the reward areas - the red places represent the highest reward for the AI, and the blue places represent the lowest reward. The AI agent always tries to move to the reddest place on the map.

Visualization of the agent’s knowledge for a particular task, in this case changing the state of the lights. It tells the agent what to do in order to change the state of the lights from every known state of the world. The heat map corresponds to the expected utility (“usefulness”) of the best action learned in a given state, and a graphical representation of the best action is shown at each position on the map.
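
A picture like this can be produced directly from the learned values: for every position, the best action’s value gives the colour and the action itself gives the arrow. A rough sketch (assuming a tabular store of the form q_table[(state, action)]; this is our illustration, not the team’s visualization code):

```python
def knowledge_map(q_table, states, actions):
    """For each state, return (expected utility of best action, best action).
    This is exactly what the heat map and the action arrows visualize."""
    result = {}
    for s in states:
        values = {a: q_table.get((s, a), 0.0) for a in actions}
        best = max(values, key=values.get)
        result[s] = (values[best], best)
    return result
```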

The agent's current goal is to go towards the light switch and turn on the lights.

The maze we are using is one where doors can be opened and closed using one switch, and lights can be turned on or off using a different switch. When all of the doors are open, the AI agent moves easily through the maze to reach a final destination. This kind of task only requires the agent to complete one simple goal.

The agent uses its learned knowledge to reach the light switch and press the button in order to turn on the lights.

However, imagine that the agent wants to turn on the light but the doors to the light switch are closed. In order to get to the light switch, it first has to open the door by pressing a door switch. Now imagine that this door switch is located in a completely different part of the maze. Before the AI agent can reach its final destination, it has to understand that it cannot move directly to its goal location. It first has to move away from the light switch in order to press a different switch that will open the necessary door.

Our AI is able to follow a complex chain of strategies in order to complete its main goal. It can assign a hierarchical order to its various goals and plan ahead so that it can reach a larger, overarching goal.
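
In other words, the main goal expands into subgoals whose strategies are executed first. Here is a toy sketch of that chaining for the door/light example; the goal names and the prerequisite table are invented for illustration, and the real agent learns these dependencies instead of having them hard-coded:

```python
def plan(goal, world):
    """Return the chain of goals to pursue so that `goal` becomes reachable.
    E.g. turning the lights on requires the door to the light switch to be
    open, which in turn requires pressing the door switch first."""
    prerequisites = {
        "lights_on": ["door_to_lights_open"],
        "door_to_lights_open": [],
    }
    chain = []
    for sub in prerequisites.get(goal, []):
        if not world.get(sub, False):       # prerequisite not satisfied yet
            chain.extend(plan(sub, world))  # plan for the subgoal first
    chain.append(goal)
    return chain

print(plan("lights_on", {"door_to_lights_open": False}))
# -> ['door_to_lights_open', 'lights_on']: open the door, then the lights
print(plan("lights_on", {"door_to_lights_open": True}))
# -> ['lights_on']: the door is already open, go straight to the switch
```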

The agent solves a more complex task. It has to open two doors in a particular sequence in order to turn on/off the lights. Everything is learned autonomously online.

How this is different from Pong/Breakout, our first milestone with the AI

The AI is now able to perform more complex directional tasks, and (in some ways) it operates in a more complex environment. While in the Pong environment it could only move left or right, in this maze the agent is able to move left/right, up/down, stay still, or press a switch.

Also, the AI agent in Pong acts according to visual input (pixels), which is raw and unstructured information. This means that the AI learned and acted according to what it could "see." In the maze, the AI agent has full and structured information about the environment from the beginning.

Our next step is to have the AI agent get through the maze based on visual, unstructured input. This means that as it interacts with its environment, it will build a map of the environment based exclusively on the raw visual input it receives. It won’t have any structured information about the environment when it starts.


How the algorithm works

The brain we have implemented for this milestone is based on a combination of a hierarchical Q-learning algorithm and a motivation model which is able to switch between different strategies in order to reach a complex goal. More specifically, our Q-learning algorithm is known as HARM, or Hierarchical Action Reinforcement Motivation system.

In a nutshell, the Q-learning algorithm (HARM) is able to spread a reward given in a specific state (e.g. the agent reaching a position on the map) to the surrounding state space, so the brain can take proper actions by climbing the steepest gradient of the Q function. However, if the goal state is far away from the current state, it might take a long time to build a strategy that leads to that goal state. Also, the number of variables in the environment can lead to extremely long routes through the "state space", rendering the problem almost unsolvable.
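
The basic mechanism that spreads reward backwards through the state space is the standard (textbook) Q-learning update; HARM builds its hierarchy on top of something like this. A minimal sketch, not the team’s implementation:

```python
def q_update(q, state, action, reward, next_state, actions,
             alpha=0.1, gamma=0.9):
    """One Q-learning step: move Q(s, a) towards the observed reward plus
    the discounted value of the best action in the next state. Repeated over
    many steps, this spreads reward from a goal state to the states around it."""
    best_next = max(q.get((next_state, a), 0.0) for a in actions)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
```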

There are several ideas that can improve the overall performance of the algorithm. First, we made the agent reward itself for any successful change to the environment: a motivation value can be assigned to each variable change, so the agent is constantly motivated to change its surroundings.
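
A simple way to picture this self-rewarding: compare the environment variables before and after a step and grant a small internal reward for every variable the agent managed to change. The weighting scheme below is our own illustration:

```python
def intrinsic_reward(prev_vars, curr_vars, motivation):
    """Reward the agent for any successful change to the environment.
    `motivation` maps each variable name to how much the agent currently
    'cares' about changing that variable."""
    reward = 0.0
    for name, old_value in prev_vars.items():
        if curr_vars[name] != old_value:
            reward += motivation.get(name, 1.0)
    return reward
```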

Second, the brain can develop a set of abstract actions assigned to any type of change that is possible (e.g. changing the state of a door) and can build an underlying strategy for how this change can be made. With such knowledge, a whole hierarchy of Q functions can be created. Third, in order to lower the complexity of the problem, the brain can analyze its "experience buffer" from the past and eventually drop variables that are not affected by its actions or are not necessary for the current goal (i.e. the strategy to fulfill that goal).
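
The third idea, dropping irrelevant variables, can be sketched as a pass over the experience buffer that keeps only the variables whose values ever change as a result of the agent’s actions. The buffer format below is an assumption for illustration:

```python
def relevant_variables(experience_buffer):
    """`experience_buffer` is assumed to be a list of
    (vars_before, action, vars_after) tuples, each a dict of named variables.
    A variable is kept only if some action changed it; the rest can be
    dropped from that strategy's state space to shrink the problem."""
    relevant = set()
    for before, _action, after in experience_buffer:
        for name in before:
            if before[name] != after[name]:
                relevant.add(name)
    return relevant
```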

A combination of these improvements creates a hierarchical decision model that is built during the exploration phase of learning (when the agent is left to randomly explore the environment). After a sufficient amount of knowledge is gathered, we can "order" the agent to fulfill a goal by manually raising the motivation value for a selected variable. The agent will then execute the learned abstract action (strategy) by traversing the strategy tree and unrolling it into the chain of primitive actions that lie at the bottom.
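
Executing a learned abstract action then amounts to walking down the strategy tree until only primitive actions remain. A rough sketch of that unrolling; the list-based tree representation and the example strategies are our illustration, not the learned structures themselves:

```python
def unroll(strategy):
    """Recursively expand an abstract action (a list of sub-strategies) into
    the chain of primitive actions at the bottom of the hierarchy."""
    if isinstance(strategy, str):      # primitive action: nothing to expand
        return [strategy]
    plan = []
    for sub in strategy:               # abstract action: a sequence of steps
        plan.extend(unroll(sub))
    return plan

open_door     = ["left", "up", "press"]          # learned sub-strategy
turn_on_light = [open_door, "right", "press"]    # higher-level strategy
print(unroll(turn_on_light))
# -> ['left', 'up', 'press', 'right', 'press']
```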


Our motivation

As with the brain’s ability to play Pong/Breakout, this milestone doesn’t mean that our AI is useful to people or businesses at this stage. It does mean that our team is on the right track in its general AI research and development. We’re hitting the milestones we need to hit.

We never lose sight of our long-term goal, which is to build a brain that can think, learn, and interact with the world like a human would. We want to create an agent which can be flexible in a changing environment, just like human beings can. We also know that general AI will eventually bring amazing things to the world – cures for diseases, inventions that would take much longer to create without the cooperation of AI, and teaching us much more than we currently know about the universe.

---

Thanks for reading!

If you’d like to see future updates on the general AI project, you can follow me on Twitter http://twitter.com/#!/marek_rosa or keep checking my blog: http://blog.marekrosa.org