Thursday, November 12, 2015

Planets! Because you wanted them

Today is an important day in Space Engineers history. I am very happy to present the biggest and most challenging addition we have ever made: Planets!

Space Engineers now has the largest, fully destructible, volumetric and persistent planets in the universe!

Since day one, planets occupied the top spot among the features most requested by the community, but we didn't have them in our initial roadmap. The decision to add planets came almost a year ago, and even back then I knew it would be a very complex and difficult task that would test the limits of how far we could go and how hard we could push.

The vision for Space Engineers was always to provide a realistic and almost infinite environment where everything you see is real and can be touched, used, shaped or recreated. Sacrificing the volumetric nature of our game world wasn't an option.

We knew we were explorers, and we wanted to take on this fight at the largest possible scale! The extreme scale of planets not only tested our abilities but also pushed us further, to a point where no other game developer has ever ventured.

During development we had our ups and downs; we sacrificed our free time, pulled long hours, reached our programming and artistic limits, and then pushed them further. And then again, and then some more. The task wasn't easy, but the team boldly took up the challenge and put blood, sweat and tears into accomplishing it.

We never forgot that if we didn't give up, if we kept pushing in spite of all the difficulties and unpredictability in front of us, then eventually, one day, we would give you Planets and everything that comes with them.

Today I am proud to introduce you to our Planet Creators: (in a random order) Tomas Psenicka, Tomas Rampas, Petr Minarik, Ondrej Petrzilka, Jan Nekvapil, Čestmír Houska, Marko Korhonen, Daniel Ilha, Greg Zadroga, Lukas Jandik, Adam Williams, Dusan Repík, Jan Veberšík, George Mamakos, Pavol Buday, Joel Wilcox, Marketa Jarosova, Ales Kozak, Simon Leška, Michal Zavadák, Marek Obršal, Dušan Andráš, Charles Winters, Michal Wrobel, Anton Bauer, Natiq Aghayev, Rene Roder, Jan Golmic, Nikita Suharicevs, Dušan Ragančik, Lukáš Tvrdoň, Karel Antonín, Adam Sadlon, Vaclav Novotny and myself.

These are the folks who can one day say to their grandchildren: I was there when Planets were added to Space Engineers. I am a Planet Creator!

Planets are fully destructible, volumetric and persistent. You can drill a hole from one pole to the other, but you will have to put a lot of hours into it. Planets are up to 120 km in diameter, which gives a surface area of 45,238.93 km²! Even with planets this huge we had to apply smart visual tricks so they appear nice and round from afar. That's why mountain ranges go flat once you pass a critical distance from the planet.
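
For the curious, that surface figure is just the sphere-area formula applied to the maximum diameter; a quick back-of-the-envelope check in plain Python:

```python
import math

diameter_km = 120.0                             # maximum planet diameter
radius_km = diameter_km / 2.0
surface_km2 = 4.0 * math.pi * radius_km ** 2    # sphere surface area: 4 * pi * r^2

print(f"{surface_km2:,.2f} km^2")               # 45,238.93 km^2
```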

Every object on a planet's surface is affected by its gravity, which is why we added a new option for station grids - the Station voxel. With this option enabled, a station stays static only while it touches the voxel (with at least one block built “inside” it). If you cut parts of the station away, they become dynamic objects and can break apart. And since you will be building stations on a spherical surface from now on, buildings can't be axis-aligned, but you can orient each block so that it's aligned with the landscape surface.
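
To illustrate what “aligned with the landscape surface” means on a sphere: the natural “up” direction at any point is simply the unit vector from the planet's center to that point, and gravity pulls the opposite way. A minimal sketch (the function and parameter names are my own illustration, not the game's API):

```python
import math

def surface_up(block_pos, planet_center):
    """Unit vector pointing from the planet's center toward the block -
    the local 'up' direction on a spherical planet."""
    d = [b - c for b, c in zip(block_pos, planet_center)]
    length = math.sqrt(sum(x * x for x in d))
    return tuple(x / length for x in d)

# A block 60 km from the center along the x axis: 'up' is +x, gravity points toward -x.
up = surface_up((60_000.0, 0.0, 0.0), (0.0, 0.0, 0.0))
gravity_dir = tuple(-x for x in up)
```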

In the Planetary release you can find three planets (up to 120 km in diameter, inspired by Earth, Mars and an alien world) and three moons (up to 19 km in diameter, inspired by the Earth's Moon, Europa and Titan), each with its own unique flora and fauna.

Planets and moons can have an Earth-like atmosphere, a hostile one, or no atmosphere at all (in which case the engineer needs to keep his helmet on). And if you are looking for more, you can create your own planets with our modding tools (we are preparing a detailed modding guide for planets that will be released within the next few days).

The combined surface area of all the planets spans about 140,000 km², which is more than in any other game we know of (Skyrim, for example). In the case of Space Engineers, this landscape is also fully volumetric and destructible.
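
As a rough sanity check of that total, assuming all three planets and all three moons at the maximum diameters quoted above (an overestimate, since the real worlds vary in size):

```python
import math

def sphere_area_km2(diameter_km):
    """Surface area of a sphere, 4 * pi * r^2, in km^2."""
    return 4.0 * math.pi * (diameter_km / 2.0) ** 2

total_km2 = 3 * sphere_area_km2(120.0) + 3 * sphere_area_km2(19.0)
print(f"{total_km2:,.0f} km^2")   # about 139,000 km^2, in line with the ~140,000 km^2 above
```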

I also want to share two fun facts: planets don't orbit the sun - instead, the sun orbits around the solar system; and planets and moons have natural gravity, just like in real life.

To support Planets as you know them now, we had to optimize many parts of our game engine. This includes physics (collision detection between ships and planets), rendering of large-scale planetary environments with various materials, flora zones that support fast and seamless movement without any preloading, atmosphere, gravity, and more. We will keep optimizing and supporting the game and planetary features with future updates and additional content to make the experience better and better.

Lastly, to bring the best possible experience without compromises we had to change the hardware requirements; therefore, Planets are possible only on hardware supporting DirectX 11. You can find the updated hardware requirements at

Have fun building on planets!


Marek Rosa
CEO, CTO, Founder
Keen Software House


For further updates, keep reading my blog: or find me on Twitter: @marek_rosa

Or follow Space Engineers and Medieval Engineers on Facebook and Twitter:
Space Engineers on Facebook:
Space Engineers on Twitter:
Medieval Engineers on Facebook:
Medieval Engineers on Twitter:

Friday, October 23, 2015

Celebrating the 2nd Anniversary of Space Engineers

Dear Engineers,

Today we are celebrating the 2nd anniversary of Space Engineers. It has been two years since the release of the game co-developed by the community and shaped by its fans. To this day, more than 150,000 community creations have been shared and more than 100 weekly updates issued.

We are not celebrating just the 2nd anniversary of Space Engineers, but also the creative spirit of the community that drives us, pushes us into the unknown, makes us question what is even possible, and helps us see what we never before imagined.

And here we are after two years, standing on the verge of something big and exciting. Our biggest content update yet will introduce Planets, and it will reinvent the way you experience Space Engineers.

Every single day we are blown away by community creations and we are humbled by passionate content creators helping us to shape the game.

To every content creator, streamer, YouTuber, commenter, reviewer, builder, creator, destroyer, explorer, and all the players and every single fan out there, in the name of the whole team I would like to say: you have always been a vital part of the game and we wouldn't be here without you.

Thank you for making Space Engineers possible.

Marek Rosa
CEO, CTO and founder of Keen Software House

P.S. Join us today at 8 PM CEST for a live stream with giveaways, a live Q&A and more:



Wednesday, September 16, 2015

Another GoodAI Milestone: Attention, Vision, and Drawing

Today I am excited to share with you that our GoodAI team has reached another milestone. These research results are part of our artificial brain architecture that we call the attention module.

Your brain’s ability to focus and pay attention to its surroundings is an important part of what allows humans to survive and live comfortably. Attention enables you to selectively focus on parts of your environment. We use attention to decide how much concentration a piece of information or a particular stimulus deserves, while at the same time ignoring input that we are aware of but know isn’t immediately relevant. Imagine how difficult it would be if, walking down the street, we focused equally on everything we saw, heard, smelled, or felt. Attention helps us know what’s important at any given time. In the past, attention helped us hide from dangerous predators like wolves or bears, and today it enables us to drive cars and even distinguish between a friend’s cute puppy and the neighbor’s unfriendly watchdog.

It’s important to remember that these results are just a subsection of the attention module of our artificial brain, and that we are building on findings of other researchers in the areas of recurrent networks and deep learning. While this is certainly not an extraordinary breakthrough in AI research, reaching this milestone means that we’re on track in our progress towards general artificial intelligence.

Attention Method and Results

The new milestone is that our attention module is able to learn what the things it sees look like and how they relate to each other. One subpart of the module is able to study a simple black-and-white drawing and remember it. It can then reproduce a similar but completely original drawing after seeing just a small part of a drawing.

For example, if you train the module on faces like these:

Subset of images for training faces

And then show it just some hair (which the AI has never seen before):

Partial input
The module is able to generate a new, complete image of its own creative design. The generated image is not a copy, but is an original picture created by the AI, inspired by images it saw previously.

Generated image
You can see the whole process in this video with several examples:

And we went even further in our experimentation. Inspired by DecoBrush, we trained our module to remember ornamental letters. When we draw a normal, handwritten letter, the module adds ornamentation to it. Please note that this is just a first try; several additional improvements, including structure or neighborhood constraints (e.g. convolutions), could be applied to improve the result.
Adding ornamentation to characters. Several characters are perfect (where the module learned what to copy), while others make sense and are novel, e.g. the "E" in the fourth row (where the module learned how to combine)

We’d also like to share our work in progress on a more complicated dataset. The module was fed pictures of mountains. Below you can see both the inputs (top) and the generated images (bottom):

The figure shows picture reconstruction for 12 examples (divided by a line). The top rows show the input. Below the input there are generated images (the network learns how to reconstruct the input image over a series of steps)

The module tries to generalize across the pictures – interestingly, it also added extra mountaineers in the top-left picture pair. We are already working on further improvements and even more interesting tests and applications.

A scene for testing attention
Finally, the module can also understand the relationships between objects in a picture. This is illustrated in our next example: a room with objects. The goal here is to teach the attention module to remember the relationships between objects in the room and also remember how the individual objects look. Simply put, we want our algorithm to realize that there are chairs and a plant, that the plant is below a chair, and that the chairs are next to each other.

This video describes the attention module in action and shows the results:

Module details (for advanced technical readers)

For this milestone, we were inspired mainly by two works: Shape Boltzmann Machines, which use pixel-neighbor information to complete missing parts of a picture, and DeepMind's DRAW paper, which introduces a general recurrent architecture for unsupervised attention. Similar to that system, our model contains two Long Short-Term Memory (LSTM) layers (an encoder and a decoder) with about 60 cells each (small enough that it doesn't overfit the data but instead generalizes across it).
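
For readers who want something more concrete, here is a minimal sketch of such an encoder/decoder LSTM pair in the spirit of DRAW, written in PyTorch. This is not GoodAI's Brain Simulator code; the image size, latent size and number of steps are illustrative assumptions based only on the description above (two LSTM layers of roughly 60 cells, a canvas refined step by step):

```python
import torch
import torch.nn as nn

class DrawLikeSketch(nn.Module):
    """Illustrative DRAW-style loop: an encoder LSTM reads the image plus the
    current reconstruction error, and a decoder LSTM updates a canvas step by step."""
    def __init__(self, image_size=28 * 28, hidden=60, latent=20, steps=8):
        super().__init__()
        self.steps = steps
        self.encoder = nn.LSTMCell(2 * image_size + hidden, hidden)
        self.decoder = nn.LSTMCell(latent, hidden)
        self.to_latent = nn.Linear(hidden, latent)
        self.write = nn.Linear(hidden, image_size)

    def forward(self, x):                       # x: (batch, image_size), values in [0, 1]
        batch = x.shape[0]
        h_enc = c_enc = x.new_zeros(batch, self.encoder.hidden_size)
        h_dec = c_dec = x.new_zeros(batch, self.decoder.hidden_size)
        canvas = torch.zeros_like(x)
        for _ in range(self.steps):
            error = x - torch.sigmoid(canvas)            # what is still unexplained
            enc_in = torch.cat([x, error, h_dec], dim=1)
            h_enc, c_enc = self.encoder(enc_in, (h_enc, c_enc))
            z = self.to_latent(h_enc)                    # compressed description of the image
            h_dec, c_dec = self.decoder(z, (h_dec, c_dec))
            canvas = canvas + self.write(h_dec)          # additive update of the drawing
        return torch.sigmoid(canvas)
```

Training such a sketch would simply minimize a reconstruction loss (for example binary cross-entropy) between the final canvas and the original image.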

Input images are separated into (i) a training set used for training (such as the images of faces), and (ii) a test set containing images the module didn’t see during training (for example, just the hair).

To generate a face, the algorithm uses a database of images to learn the characteristics of faces. When the module sees a new image, or part of a new image, it generates yet another new image. This newly generated image is then fed back as the input in the next time step. Over time, the module improves, generating novel images that make sense to a viewer.
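
A sketch of that feedback loop, reusing the DrawLikeSketch class from the previous snippet (again purely illustrative - the real module and its inputs differ): the model's output at one step becomes its input at the next, starting from a partial image.

```python
import torch

@torch.no_grad()
def dream_from_partial(model, partial_image, rounds=5):
    """Feed the model's own output back as its input; each round yields a
    refined, fully generated image grown from the partial one."""
    current = partial_image
    for _ in range(rounds):
        current = model(current)
    return current

model = DrawLikeSketch()                 # defined in the sketch above
partial = torch.rand(1, 28 * 28)         # stand-in for "just some hair"
generated = dream_from_partial(model, partial)
```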

The attention architecture has a similar structure to the image-generating algorithm, but differs in the following way: the input is always just one part of the image (in contrast to the full image as before). From that limited input, the attention module generates (i) the direction in which it should look next and (ii) what it expects to be there. This output is fed back as new input in the following step. The model learns to minimize the difference between the overall scene and what it predicted about the scene.
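
The glimpse loop could look roughly like the sketch below (again in PyTorch, with illustrative names and sizes - the real module is not public): at every step the network receives only a small crop plus its current location, and predicts both where to look next and what it expects to see there. Training would then penalize the difference between each prediction and what the scene actually contains at that spot.

```python
import torch
import torch.nn as nn

class GlimpseSketch(nn.Module):
    """Illustrative attention loop: from one small crop of the scene, predict the
    next location to look at and the content expected there, then feed that back."""
    def __init__(self, glimpse_size=8 * 8, hidden=60):
        super().__init__()
        self.rnn = nn.LSTMCell(glimpse_size + 2, hidden)    # crop + current (x, y) location
        self.next_location = nn.Linear(hidden, 2)           # where to look next
        self.prediction = nn.Linear(hidden, glimpse_size)   # what we expect to see there

    def step(self, glimpse, location, state):
        h, c = self.rnn(torch.cat([glimpse, location], dim=1), state)
        return torch.tanh(self.next_location(h)), torch.sigmoid(self.prediction(h)), (h, c)
```
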
The attention module depends on the ability to learn sequences of information. Here, we give a bit of insight into how we compared two training algorithms using a simple yet challenging example. We tested the Long Short-Term Memory (LSTM) model with Real-Time Recurrent Learning (RTRL) versus Backpropagation Through Time (BPTT). At time 1, we feed the LSTM a randomly cropped part of a picture of a handwritten digit from the MNIST database; at time 2, a random part of the following number is used as the input; and at time 3, it finally sees what it should predict (the number that follows after those two). Recurrent connections are highlighted in blue. While we found that RTRL works well in most cases, it was outperformed by BPTT in this example, as unfolding the network seems to be very important here.
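
This is roughly what the BPTT side of that comparison looks like in code (a hedged sketch with random stand-in data in place of the cropped MNIST digits): the LSTM is unrolled over the time steps and gradients flow back through the whole sequence, which is the "unfolding" mentioned above. RTRL, by contrast, propagates sensitivity information forward at every step instead of unrolling.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

crop, hidden, n_classes, batch = 14 * 14, 60, 10, 32
cell = nn.LSTMCell(crop, hidden)
readout = nn.Linear(hidden, n_classes)
opt = torch.optim.Adam(list(cell.parameters()) + list(readout.parameters()), lr=1e-3)

# Stand-in data: two random "digit crops", then the class of the digit that
# should follow them (the real experiment uses crops of MNIST images).
crops = torch.rand(2, batch, crop)
target = torch.randint(0, n_classes, (batch,))

h = c = torch.zeros(batch, hidden)
for t in range(2):                      # unroll over the sequence - this is BPTT
    h, c = cell(crops[t], (h, c))
loss = F.cross_entropy(readout(h), target)
opt.zero_grad()
loss.backward()                         # gradients flow back through both time steps
opt.step()
```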


This milestone is an important step along the way to developing a truly general artificial intelligence. However, this is still just a small part of the attention module of our artificial brain, and is an even smaller part of building an artificial brain as a whole. Moreover, we are already exploring several improvements of the attention module and you can expect to see these results in an upcoming Brain Simulator update.

In the meantime, we are working hard on other modules of our AI brain - including motor commands, long-term memory, and architectures inspired by neuroscience and deep learning - and you can look forward to new milestone news in the near future.

Thanks for reading! And for a special goodbye, here’s how our attention module imagines the Space Engineers astronaut:

And many thanks to Martin, Simon, and Honza, our attention module researchers!

Marek Rosa
CEO, CTO and founder at GoodAI

Note on blog comments:

Lately I’ve received a lot of spam comments on my blog, and unfortunately the blog has no settings for preventing spammers. Due to the extreme amount of spam messages, I am forced to start moderating comments – I hope readers will understand this need, and we will do our best to approve comments as fast as possible.


Want to know as soon as GoodAI reaches another milestone? Check us out on social media: