Monday, August 29, 2016

Opening the Search for a CEO, Senior Producer / Game Director, and Senior Multiplayer Programmer at Keen Software House

Today I'm excited to tell you about several new things in this post:

- Keen Software House is scaling up
- My role is shifting, and I will focus primarily on research in GoodAI
- We're searching for a CEO for Keen Software House
- We're searching for a Senior Producer for Space Engineers
- We've announced a 1 million CZK (approximately $40,000 USD) internal multiplayer contest
- We're searching for a Senior Multiplayer Programmer

Several months ago, I wrote a blog post explaining that in the last 1-2 years we have transitioned from an indie studio of 5 people to one of 50+. We’re growing fast, and we’ve experienced some challenges (like any expanding business does). We’re solving this by constantly improving our organizational and management processes, and by finding strong leaders for each project.

At the time I wrote that post, we were working to find the right person to lead Medieval Engineers - to help us scale up and keep bringing you quality, awesome, fun, polished games with big features. Not long after that, we announced a new producer for Medieval Engineers: Tim Toxopeus, who took the reins for the game and brought it back to a state where, in just a few short weeks, we'll be ready to re-release it - this time with Planets, area ownership, fixed aiming, improved character animations, and a stabilized first-person camera, as well as several unannounced features that you'll learn about in our upcoming weekly developer diaries. The future of Medieval Engineers is bright, and in time it will include mechanical blocks, farming, and more.

Today I have more news for you. There is an incredible amount of potential in our teams, our VRAGE engine, and our games at Keen Software House, and it’s time to give them what they deserve: superstar leaders who can raise Keen Software House games to their full potential and go above and beyond for our community.

In order to do that, I have decided to open the search for:

CEO (Chief Executive Officer) for Keen Software House, who will:
  • Continue to fulfill my creative vision and the vision of our community
  • Be able to deliver Space Engineers and Medieval Engineers at the same or better quality than I did from the start of development until mid-2015, when I started to divide my attention between KSH and GoodAI. Hiring someone able to dedicate 100% of their time to these games will raise them to their full potential.
  • Define and sustain the Keen Software House company culture

Senior Producer / Game Director for Space Engineers, who will finish the game and continue developing the Engineers franchise
  • This will allow Petr Minarik, the current producer for Space Engineers and our most senior programmer on the team, to focus on his strengths as a programmer who can solve the greatest technical challenges in our games
  • We are very thankful to Petr, who agreed to lead the Space Engineers team until we find a dedicated producer. Now it’s time for him to focus his talents less on the business and team management side of things, and more on programming.

Senior Producers / Game Directors for new projects

Senior Multiplayer Programmer, who will bring the multiplayer experience in our games to a new level.
  • Our games have very dynamic, destructible and large environments, populated with a vast number of objects that can change position and shape very frequently and suddenly. Players can build and destroy ships, stations, planets, asteroids, and more. Each of these changes leads to long computations in game mechanics and physics sub-systems. 
  • The Senior Multiplayer Programmer will design and implement a multiplayer and parallelization system that is capable of handling this level of complexity. It’s essential that we provide a smooth gameplay experience at all times, without players observing inconsistencies, bugs, deaths caused by incorrect multiplayer synchronization, or an unresponsive game or game world. This is definitely not a simple job. No game has had to solve this before, meaning that we’re in uncharted territory.
  • However, despite the difficulty of the problem, we are taking it very seriously. In addition to searching for experienced multiplayer programmers to join our team, on Monday of last week we launched an internal competition at Keen Software House: the group or individual on our team delivering the best multiplayer within 3 months will receive a 1 million CZK (approximately $40,000 USD) bonus from me. At the moment, we have two teams competing:
    1. Petr Minarik + Jan Hlousek + Sandra Lenardova 
    2. Jan Nekvapil + Michal Zak + the Medieval Engineers team

Improving our management structure means that we can scale up in a serious way - the studio can now be fully run by the game teams, rather than by me directly.

After we find the CEO and producers, I will transition to the role of Chairman. This means I will stand at the head of the company to preserve the original vision and spirit of Keen Software House. However, most of my time will be dedicated to research and development of general AI and my role as CEO/CTO of GoodAI. 

At GoodAI, we have reached very promising research milestones. We are going to launch the AI Roadmap Institute, and we are inventing neural architectures for growing topologies. All of these things mean that for me, now is the right time to focus 100% on GoodAI. I cannot postpone this. My dedication to general AI research will no longer allow me to directly manage our games, or to give Keen Software House the amount of attention it deserves. For these reasons, I am opening the search for more leaders for Keen Software House.

While I know that handing over the management of Keen Software House to a new CEO is a significant change, I am confident that with strong leadership, our Keen Software House team will bring you more than you ever expected or even thought possible. We will continue the Engineers franchise and expand our portfolio to keep delivering to our players.

Still not sure about what we can do with the right leaders? ;-) 

Just look at what we managed in the past year alone in Space Engineers! Since releasing Planets last November, we have focused on making constant improvements to the game. This includes performance optimizations and bugfixes, as well as smaller additions like the new character animation system. In the near future we will deliver realistic sounds, block redesign, a new render engine (better looking and optimized directly for our games), as well as more improvements that are always being worked on. 

What can we offer the potential CEO or Senior Producer / Game Director that joins us?

Our new colleague will be joining a strong studio with lots of potential to grow. Keen Software House invented the Engineering genre in games, and we continue to define the 3D space environment genre as a whole. Just look at the number of recently released games that are inspired by Space Engineers!

Space Engineers August 2016

Our new team leaders will receive serious compensation, bonuses, and stock options / a share in the company based on how Keen is performing, and have the potential to earn millions. We want to focus on having a very elite and senior team that is directly involved and contributing to the success of our games, and is thus compensated very well. This means that even programmers, artists, and game designers can make big money after delivering the results our community wants and deserves.

I am looking forward to finding and welcoming our new colleagues, who will share our passion and dedication.

Thanks for reading! And many thanks to the Space Engineers and Medieval Engineers players out there - we’re grateful for your patience and especially for your feedback. We definitely can’t make these games without you.


Marek Rosa
CEO and Founder of Keen Software House
CEO, CTO, and Founder of GoodAI
:-)

www.KeenSWH.com
www.GoodAI.com

Space Engineers on Facebook: https://www.facebook.com/SpaceEngineers
Space Engineers on Twitter: https://twitter.com/SpaceEngineersG
Medieval Engineers on Facebook: https://www.facebook.com/MedievalEngineers
Medieval Engineers on Twitter: https://twitter.com/MedievalEng

Thursday, August 25, 2016

Guest Post by Jan Hlousek: VRAGE Render & Upcoming Performance Optimizations

Hello, Engineers! I’m Jan Hlousek, and I lead the VRAGE Render team at Keen Software House. Today I wanted to share some of our documentation with you, so you can take a closer look at precisely how we’re working to improve the render even further. Things are looking very promising so far (just take a peek at these screenshots!), but please keep in mind that this is a bold plan that is subject to change as we move forward and learn more.



Here is the primary structure of our documentation on render performance optimization:
  • Bottlenecks
    • How to combat draw call count
    • How to combat drawing unnecessary objects
    • How to reduce vertex processing
  • Implementation Challenges
  • Expected Performance
  • Implementation
    • Overview
      • Frame processing flow
      • GPU Data topology
    • Texture arrays for voxels
    • Texture arrays / atlases for models
    • Texture arrays for billboards
    • Lodding #1
    • New render component
      • Instancing
      • GPU culling
      • Draw Composer
      • Lodding #2
    • Static transparent geometry
    • Improved culling
    • Voxel merge
    • Armor rendering
    • Occlusion culling
    • Voxel occluders
    • Shadows
    • Foliage
    • Planet setup
  • Future improvements
    • Occlusion culling #2
    • Point cloud
    • Shader optimizations
    • GPU bubbles removal
  • Appendix A - Optimization possibilities
    • Transparent pipeline
    • Models
    • Voxels
    • Culling
    • Foliage
    • Lights
  • Appendix B - Performance analysis
  • Appendix C - Articles


Read on or take a look at the update video to learn more! The documentation is fairly technical, but the basic idea is this: we want to reduce draw calls by moving culling to the GPU and using merge instancing, and we want to reduce the number of meshes / vertices processed through better lodding and occlusion culling.

Bottlenecks

The main bottlenecks in Space Engineers are too many draw calls with too many vertices. Dispatching so many draw calls chokes the CPU on both the render thread and the parallel thread, as well as in the driver’s kernel. Dispatching unnecessary (occluded) draw calls with large vertex buffers also has a negative impact on GPU performance.

See the Performance analysis section for more.

How to combat draw call count
  • Using instancing per model will reduce draw calls per model
  • Using merge instancing (collating vertex buffers) reduces draw calls overall
    • Those are implemented to some degree, but with limited usage
  • Moving visibility detection to GPU, operating on a static list of objects without CPU involvement
How to combat drawing unnecessary objects
  • Better frustum culling
    • Currently, lots of objects are dispatched to the render even though they are not in the frustum
  • Detecting what is occluded using occlusion culling
How to reduce vertex processing
  • Proper lodding
    • Currently, lod thresholds are set up by artists, not taking into account current resolution or field of view


Implementation Challenges

It is quite clear how to combat all of the problems we currently have. Careful consideration was given to the decision of where to make the cut between CPU and GPU processing, so that the communication between them is fluent (non-blocking) and efficient. Therefore, we decided to move culling to the GPU. This will eliminate all per-frame updates of buffers on the GPU; updates of GPU data will instead be bound only to changes in the world or to camera movement. We will make sure these updates don’t choke the CPU or the bus by dispatching them to a low-priority thread.

Note: Some tasks are still under research, therefore the final design of the architecture may be slightly changed.


Expected Performance

On the GPU, the processing of culling, lodding, and instancing will add some load, while the reduced number of triangles and pixels processed will remove some. It can be expected that after all optimizations, the GPU performance gain will depend on the complexity of the world: the more complex the scene, the better the performance gain.

CPU performance will be enhanced massively: the render thread will take a fraction of the current time, and all per-frame parallel tasks will be removed. The simulation thread and all async tasks will have much more processing power at their disposal, reducing sim speed problems.

The bus will be freed from a lot of per-frame, per-draw-call data currently being dispatched to the GPU.
Overall, as we are mostly CPU bound, we expect the following gains in the tested scenarios (see the Performance analysis section):


Implementation

The roadmap is separated into multiple self-contained tasks. Tasks are designed with the iterative implementation approach in mind: each task can be finished and released separately, and each task brings performance improvements in itself. Tasks should be implemented in a specified order, though, because of dependencies.

Overview

Frame processing flow


GPU Data topology

Texture arrays for voxels
This removes conditions in shaders. Theoretically, it's possible to render all voxels in two passes (single and multi) - this will be implemented in the Draw Composer phase.

Texture arrays / atlases for models
Two draw calls per model at maximum (base and decals).
Modify the model’s vertex texcoords to correctly address the texture atlas.
Research pending: mipmapping / filtering issues for atlases, performance of updating huge arrays, performance of rendering from huge arrays.
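
To make the texcoord modification concrete, here is a minimal sketch assuming a simple regular grid atlas (the layout and the names below are illustrative, not the actual VRAGE implementation):

#include <cstdint>

struct Float2 { float x, y; };

// Remap a model's original [0,1] UVs into the sub-rectangle of a regular
// grid atlas. 'tileIndex' selects the tile; 'tilesPerRow' describes the
// assumed atlas layout. Real atlasing also has to deal with the mipmapping /
// filtering bleeding between tiles noted above.
Float2 RemapToAtlas(Float2 uv, uint32_t tileIndex, uint32_t tilesPerRow)
{
    const float tileSize = 1.0f / static_cast<float>(tilesPerRow);
    const float originU = static_cast<float>(tileIndex % tilesPerRow) * tileSize;
    const float originV = static_cast<float>(tileIndex / tilesPerRow) * tileSize;
    return { originU + uv.x * tileSize, originV + uv.y * tileSize };
}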

Texture arrays for billboards
Apply texture atlasing from models to billboards as well. Replace texture atlasing in GPU particles with the same approach.

Lodding #1
Lod thresholds for models, impostors, and voxels deduced in the algorithm based on these factors:
  • Render target resolution
  • Distance from viewport
  • Field of view
  • Density of triangles in model (will be deduced on import; for older models, on load)
  • Quality bias (will be used to generally shift to worse lods - exposed in game settings)
Lodding per viewport.
Far away grids should be discarded from rendering completely.
Making shadows work with new lodding.
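
As a rough illustration only (the formula, the 8-pixel threshold, and the quarter-per-lod assumption below are my own simplifications, not the actual VRAGE algorithm), such a threshold can be deduced by estimating how many pixels an average triangle of the model covers on screen and shifting lods until that estimate is acceptable:

#include <cmath>
#include <cstdint>

// Pick a lod index from an estimate of the on-screen size of an average
// triangle. Inputs: distance from the viewport (m), vertical field of view
// (rad), render-target height (px), triangle density of lod 0 (triangles per
// square meter of surface, deduced on import), and a quality bias exposed in
// the game settings (1 = default, >1 = shift to worse lods).
uint32_t SelectLod(float distance, float fovY, uint32_t targetHeightPx,
                   float trianglesPerSquareMeter, float qualityBias,
                   uint32_t lodCount)
{
    // Pixels covered by one meter at this distance (simple pinhole estimate).
    float pixelsPerMeter = targetHeightPx / (2.0f * distance * std::tan(fovY * 0.5f));

    // Rough average screen area of one lod 0 triangle, in pixels.
    float triangleAreaMeters = 1.0f / trianglesPerSquareMeter;
    float pixelsPerTriangle = triangleAreaMeters * pixelsPerMeter * pixelsPerMeter;

    // Assume each lod step roughly quarters the triangle count; drop a lod
    // whenever an average triangle would cover fewer than ~8 biased pixels.
    uint32_t lod = 0;
    while (lod + 1 < lodCount && pixelsPerTriangle < 8.0f * qualityBias)
    {
        pixelsPerTriangle *= 4.0f;
        ++lod;
    }
    return lod;
}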

New render component
Remove lodding, merging, culling and geometry recording.

Instancing
  • Gather instances for all models
  • Per instance transformation in grid + index to parent’s (grid) transformation
  • Rendering all instances per model at once, without any culling
GPU culling
  • Add brute force frustum culling of instances
  • Add draw indirect based on instance list generated from culling
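
The sketch below shows the core of such a culling pass, written as CPU-side C++ for readability; in the actual design this loop would live in a compute shader that writes the surviving instance indices and the arguments for an indirect draw. The structures and names are illustrative assumptions:

#include <cstdint>
#include <vector>

struct Plane  { float x, y, z, w; };             // plane equation ax + by + cz + d
struct Sphere { float x, y, z, radius; };        // instance bounding sphere

// Brute-force frustum culling of instance bounding spheres. The output is a
// compacted list of visible instance indices plus the instance count that an
// indirect draw call would consume.
void CullInstances(const Plane frustumPlanes[6],
                   const std::vector<Sphere>& instances,
                   std::vector<uint32_t>& visibleIndices,
                   uint32_t& indirectInstanceCount)
{
    visibleIndices.clear();
    for (uint32_t i = 0; i < instances.size(); ++i)
    {
        const Sphere& s = instances[i];
        bool visible = true;
        for (int p = 0; p < 6; ++p)
        {
            const Plane& pl = frustumPlanes[p];
            float dist = pl.x * s.x + pl.y * s.y + pl.z * s.z + pl.w;
            if (dist < -s.radius) { visible = false; break; }   // fully outside this plane
        }
        if (visible)
            visibleIndices.push_back(i);
    }
    indirectInstanceCount = static_cast<uint32_t>(visibleIndices.size());
}
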
Draw Composer
  • Collate models into shared vertex / index buffers, each buffer containing objects in strides based on the number of triangles (4, 16, 64, 256, 1024, ...); the rest of each stride will be filled with degenerate triangles. A mesh can be contained in multiple buckets to minimize the number of degenerate triangles. Research pending: performance of indexing vertices from custom buffers; use indexing (with sorted vertices) or not? (less memory vs coherent cache); apply triangle strips?
  • Generate multiple instance lists based on the bucket the model is in.
  • Render indirect for each bucket
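
A minimal sketch of the bucket selection just described (the helper names are hypothetical; only the stride scheme comes from the notes above):

#include <cstdint>

// Bucket strides in triangles, as listed above: 4, 16, 64, 256, 1024, ...
// A mesh with N triangles goes into the smallest bucket whose stride fits it;
// the remaining slots of its stride are padded with degenerate triangles
// (all three indices equal), which the rasterizer rejects very cheaply.
uint32_t SelectBucketStride(uint32_t triangleCount)
{
    uint32_t stride = 4;
    while (stride < triangleCount)
        stride *= 4;
    return stride;
}

uint32_t DegenerateTrianglePadding(uint32_t triangleCount)
{
    return SelectBucketStride(triangleCount) - triangleCount;
}
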
Lodding #2
  • Bring the lod algorithm on GPU, in draw composer select correct lod mesh for model
  • Output to specific bucket
Static transparent geometry
Apply this approach to static transparent geometry as well.

Improved culling
Spatial tree for frustum culling per grid. Grids themselves will either be culled brute force or they will have their own tree as well.
Research: what is the common amount of grids in the game, and do we need to optimize for that?

Voxel merge
Use new render component and mesh buckets for voxel rendering as well.
Research pending: Bus considerations when adding new voxel patches to existing buckets. Multiple bucket types? (for short / long lived meshes)

Armor rendering
Armor blocks have to be merged, removing invisible edges. Custom tessellation of planes - removing unneeded vertices.
Research pending: Tessellation of lower lods, removing grid details. Basis for physics shape construction.

Occlusion culling
Occluders (essentially meshes with few triangles) are grouped into one large occluder group per grid. The occluder group is updated any time the grid is updated.
Armor blocks will have occluder mesh generated only for outer shell.
Models will be able to contain custom occluder lod, which will be added to the occluder group.
A hierarchical z-buffer (HZB) is constructed by rendering the occluder groups for every camera view in the frame. The HZB is then used for quick per-instance culling.
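
Conceptually, the per-instance HZB test works like the sketch below; this is a CPU-side illustration with an assumed depth-sampling callback, while in practice the test runs on the GPU against the HZB mip chain:

#include <algorithm>
#include <cmath>
#include <cstdint>
#include <functional>

// Screen-space bounds of an instance's bounding volume in [0,1] UV space,
// plus the depth of its nearest point, projected into the current camera view.
struct ScreenRect { float minU, minV, maxU, maxV, nearestDepth; };

// 'sampleHzbMaxDepth(mip, x, y)' is assumed to return the farthest depth
// stored in the given HZB mip level at the given texel.
bool IsOccluded(const ScreenRect& r, uint32_t hzbWidth, uint32_t hzbHeight,
                const std::function<float(uint32_t, uint32_t, uint32_t)>& sampleHzbMaxDepth)
{
    // Choose the mip level so the rectangle covers at most ~2x2 texels,
    // keeping the test down to a handful of samples.
    float widthPx  = (r.maxU - r.minU) * hzbWidth;
    float heightPx = (r.maxV - r.minV) * hzbHeight;
    uint32_t mip = static_cast<uint32_t>(
        std::ceil(std::log2(std::max({widthPx, heightPx, 1.0f}))));

    uint32_t x0 = static_cast<uint32_t>(r.minU * hzbWidth)  >> mip;
    uint32_t y0 = static_cast<uint32_t>(r.minV * hzbHeight) >> mip;
    uint32_t x1 = static_cast<uint32_t>(r.maxU * hzbWidth)  >> mip;
    uint32_t y1 = static_cast<uint32_t>(r.maxV * hzbHeight) >> mip;

    // The instance is occluded if even its nearest point lies behind the
    // farthest occluder depth over the covered region.
    float maxOccluderDepth = 0.0f;
    for (uint32_t y = y0; y <= y1; ++y)
        for (uint32_t x = x0; x <= x1; ++x)
            maxOccluderDepth = std::max(maxOccluderDepth, sampleHzbMaxDepth(mip, x, y));

    return r.nearestDepth > maxOccluderDepth;
}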

Voxel occluders
Generate occluder group from the original grid of voxel terrain.
http://procworld.blogspot.cz/2015/08/voxel-occlusion.html
Research: one occluder group per planet / asteroid or multiple?

Shadows
Add PCF postprocessing, tweak and switch to new shadows.

Foliage
Optimize shaders (try removing per frame geometry shader). Lower density of grass with distance. Couple both density and distance for foliage in settings.

Planet setup
Tweak planet setup according to performance:
  • Density and distance of foliage
  • Density of trees / bushes
Add slider affecting densities to settings.

Future improvements

Occlusion culling #2
Occluders can occlude each other, removing whole grids from rendering. For this purpose, every occluder has to have an occludee as well. A bounding box (or multiple bounding boxes in the case of a large grid) containing all of the grid’s object AABBs will probably be enough.
Occluder groups for faraway grids won’t be rendered at all.

Point cloud
Add very distant objects to a point cloud renderer containing only position and color, with pixel size determined by distance. Render the whole buffer at once, with no culling. Objects are added to and removed from the buffer only from time to time. The whole point cloud could be disabled based on the user’s settings (by reducing the visibility distance).
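
A minimal sketch of the per-point data and the pixel-size rule described above (the names, clamp values, and formula are illustrative assumptions):

#include <algorithm>
#include <cstdint>

// One entry of the point cloud buffer: only position and a packed color.
struct CloudPoint
{
    float    x, y, z;
    uint32_t rgba;
};

// Pixel size shrinks with distance but is clamped so distant objects stay at
// least one pixel; points beyond the visibility distance are skipped entirely
// (the setting that disables / reduces the point cloud).
float PointPixelSize(float worldSize, float distance, float pixelsPerMeterAtOneMeter,
                     float visibilityDistance)
{
    if (distance > visibilityDistance)
        return 0.0f;                          // not rendered at all
    float size = worldSize * pixelsPerMeterAtOneMeter / distance;
    return std::clamp(size, 1.0f, 4.0f);      // keep distant objects visible but tiny
}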

Shader optimizations

GPU bubbles removal


Appendix A - Optimization possibilities

Transparent pipeline
  • GPU particles
    • Manage alive particles list for simulation (do not update all particles always)
    • Measure possible gains for multiple particle buckets (Lighting, Collisions, streaks)
  • Static Glass
    • Add support for instancing
  • Billboards
    • Shared texture arrays with gpu particles
      • Render all cpu particles in one pass
    • Automatic atlasing of other billboards
      • Render in one pass
Models
  • Texture arrays
    • Loading to three big texture arrays (cm, add, ng)
    • Research pending
      • Possible performance hit with big texture array locking on load 
      • Possible performance hit with accessing texture array in shader (memory throughput bottleneck)
      • Use atlasing or just arrays?
    • Prepare vertex data with uvs and index to atlas
  • Instancing
    • Create new renderable component with simple interface and clean tracking of instances
    • Eligible for static models without bones
    • List of instance data in structured buffer
  • Merge Instancing
    • Consider whether to merge clusters of objects into one mesh
Voxels
  • Texture arrays
  • Voxel merge
Culling
  • Compute shader for frustum culling
    • Passing list of indices to instances to drawInstancedIndirect
  • Occlusion culling
    • OccluderGroup
      • Contains simple occlusion per block
        • Standard armor handled separately
        • Blocks having a custom occlusion geometry in model
        • No deformations
      • Contains simple occlusion per sector of voxels
      • Essentially a triangle mesh
      • Managed per grid or per block of voxels
Foliage
  • Optimize shaders
Lights
  • Number of lights in world
    • Find out their owner and their purpose
    • Check
      • Medieval planet
      • Space planet
      • Space empty scene


Appendix B - Performance analysis

Setup: CPU i5 3.2 GHz, 16 GB RAM, NVIDIA GTX 750 Ti



Appendix C - Articles

Bottlenecks of constant buffer access
https://developer.nvidia.com/content/constant-buffers-without-constant-pain-0

Texture update costs
https://eatplayhate.me/2013/09/29/d3d11-texture-update-costs/

Direct3D11 Deferred Contexts
https://developer.nvidia.com/sites/default/files/akamai/gamedev/docs/GDC_2013_DUDASH_DeferredContexts.pdf

Direct3D11 Optimization guide
http://amd-dev.wpengine.netdna-cdn.com/wordpress/media/2013/04/DX11PerformanceReloaded.ppsx

Hierarchical Z-Buffer
http://malideveloper.arm.com/resources/sample-code/occlusion-culling-hierarchical-z/?doing_wp_cron=1470414658.9501960277557373046875
http://malideveloper.arm.com/partner-showroom/occlusion-culling-with-compute-shaders/?lang=zh-hans

Voxel occlusion
http://procworld.blogspot.cz/2015/08/voxel-occlusion.html

--

Thanks for reading!

Marek Rosa
CEO and Founder, Keen Software House
CEO, CTO and Founder, GoodAI

www.KeenSWH.com
www.GoodAI.com

Space Engineers on Facebook: https://www.facebook.com/SpaceEngineers
Space Engineers on Twitter: https://twitter.com/SpaceEngineersG
Medieval Engineers on Facebook: https://www.facebook.com/MedievalEngineers
Medieval Engineers on Twitter: https://twitter.com/MedievalEng

Monday, August 1, 2016

GoodAI One Year Later: Progress to Date and Next Steps

It’s been one whole year since we publicly announced GoodAI. I want to celebrate the anniversary by looking back at what we have achieved in the past 12 months, and tell you a bit more about what we are planning for the future.

Our progress to date and next steps include:
   - Framework
   - Roadmap
   - School for AI
   - Growing Topology Architecture
   - Arnold Simulator
   - AI Roadmap Institute
   - GoodAI Consulting


GoodAI started as a project within Keen Software House in January 2014. It was announced to the public on July 7, 2015, and has now grown to a team of 30 researchers. Together with Keen Software House, we have team members from 17 countries!

GoodAI’s mission is to develop general artificial intelligence
 – as fast as possible – 
to help humanity and understand the universe.

One year ago, our primary approach to building general AI was through Brain Simulator, our in-house collaborative platform that third-party researchers, developers, and tech companies could use to prototype and simulate artificial brain architectures, share knowledge, and exchange feedback. At that time, we were exploring various approaches to building general AI, trying to gain a better understanding of the field as a whole, and consolidating our own specific approach to the problem.

One year later, we’re working on several things that together form our focused approach to building general artificial intelligence.

We have focused mainly on our R&D roadmap. Together with our framework, it is our latest achievement. The roadmap started out almost as a side project, but the importance of a strategic overview of the AI landscape quickly became apparent. It will help us choose research directions more efficiently and reduce the complexity of development within those directions.

I feel that we have accomplished a lot during the last year.
I am very satisfied with our progress.


Framework


We view intelligence as a tool for searching for solutions to problems. The guiding principles of our AI research revolve around an agent which can accumulate skills gradually and in a self-improving manner (where each new skill can be reused and improved in the accumulation of further skills).

Each new skill works like a heuristic that helps to guide and narrow the search for problem solutions. Some heuristics even increase the efficiency of the search for additional heuristics.
These principles have inspired our framework document, which describes how we understand intelligence and provides tools for studying, measuring, and testing various skills and abilities.

The framework itself will aim to be as implementation agnostic as possible, without regard to particular learning methods or environments. It will provide an analytic, systematic, and scalable way to generate hypotheses that are possibly relevant in the search for general AI.


R&D roadmap

The research roadmap is an ordered list of skills / abilities (research milestones) which our AI will need to acquire in order to achieve human-level intelligence. Each skill or ability represents an open research problem, and these problems can be distributed among different research groups, either internally at GoodAI or among external researchers and hobbyists.

There are two parts to the roadmap: 
  • a map for the open problems
  • a map for known and proposed solutions (where each problem may have multiple or branching solutions)

The roadmap is a living document which will be updated as we work towards the milestones and evaluate them within the framework document.

The current version of the documents is early-stage and a work in progress. We anticipate that more milestones and research directions will be added to the roadmap as our understanding matures.

The first version of the roadmap and framework will be released to the public within the next couple of months. There will still be many parts missing, but we feel that it is better to engage with the community as soon as possible.


School for AI

What is the goal of the School for AI? We expect the AI to be able to learn. Of course, some intrinsic skills will be hard-coded, and the AI has to be "born" with them. Other skills will be learned. We will teach the AI these skills in a gradual and guided way in our School for AI which we are now developing.

In the School for AI, we first design an optimized set of learning tasks, or as we say, a "curriculum." The curriculum teaches the AI useful skills / heuristics, so it doesn't have to discover them on its own. Without a curriculum, the AI would waste time exploring areas that evolution and society already explored, or those that we know are not useful or perhaps dangerous.


Arnold Simulator

Arnold Simulator is a software platform designed for the rapid prototyping of AI systems with highly dynamic neural network topologies. The software will provide tools for our research and development, but it is also designed for high performance and it's transparently scalable to large computer clusters.

It is the next generation of GoodAI’s in-house prototyping software, following in the footsteps of Brain Simulator, which focused more on standard machine learning algorithms. We’re designing it for large, highly dynamic, heterogeneous, and heterarchical networks of lightweight actors, with a focus on concurrency, parallelism, and low-latency messaging. For concurrency, we're using the actor model, where independent actors communicate via messages. The simulation runs in discrete time-steps, during which the individual actors are processed in parallel. In between simulation steps, the system can interact with any virtual or real environment via sensors and actuators. The design of Arnold Simulator will allow us to effectively implement the growing general AI architectures we are focusing on.
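
To make the discrete-step actor scheme more concrete, here is a heavily simplified, single-process C++ sketch. All names, and the toy three-actor routing, are illustrative assumptions; the real Arnold Simulator is built to run actors in parallel and scale transparently across clusters:

#include <string>
#include <unordered_map>
#include <vector>

// A message sent between actors; delivery is deferred to the next time-step.
struct Message { int targetActor; std::string payload; };

struct Actor
{
    int id;
    // Process this step's inbox and emit messages for the next step.
    std::vector<Message> Step(const std::vector<Message>& inbox)
    {
        std::vector<Message> outbox;
        for (const Message& m : inbox)
            outbox.push_back({ (id + 1) % 3, "echo:" + m.payload });  // toy behaviour, assumes 3 actors
        return outbox;
    }
};

// One discrete simulation step: every actor is processed independently (and
// could run in parallel, since it only reads its own inbox), then all emitted
// messages are routed into the inboxes used by the next step. Between steps,
// sensors and actuators can inject or read messages from the environment.
void SimulationStep(std::vector<Actor>& actors,
                    std::unordered_map<int, std::vector<Message>>& inboxes)
{
    std::unordered_map<int, std::vector<Message>> nextInboxes;
    for (Actor& a : actors)
        for (const Message& m : a.Step(inboxes[a.id]))
            nextInboxes[m.targetActor].push_back(m);
    inboxes = std::move(nextInboxes);
}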

3D visualization from Arnold Simulator

Working groups

The GoodAI team is organized into four smaller working groups: the Brain group, School group, Software Engineers, and our AI Safety team.
  • Brain group is working on the implementation of solutions to research topics, mostly focusing on growing topologies, modular networks, and the reuse of skills. These are the guys who are implementing hardcoded skills.
  • School group is studying the skills that the AI needs to acquire (hardcoded or learned), and designing learning tasks for efficient education. They’re also working on the R&D roadmap and mapping various curricula. These are the guys who will train the learned skills.
  • Software engineers are building our Arnold Simulator.
  • AI Safety team is studying the safe path forward with our technology, how to mitigate threats to our team and humankind as a whole, creating an alliance of AI researchers committed to the safe development of AI and general AI, developing our futuristic roadmap, and more.


Futuristic roadmap

While our R&D roadmap covers research and technology plans, our futuristic roadmap is focused on freedom, society, ethics, the universe, people, the Earth, politics, the economy, and more. Its contents include a description of the long-term future we want to build, and a step-by-step description of how we want to get there while mitigating the risks and challenges we might face along the way.

Our AI Safety team is working on this.


AI Roadmap Institute

We’re also entertaining the idea of setting up an independent institute dedicated exclusively to the study of (general) AI roadmaps. It would focus only on the big picture, stay agnostic to implementation details, and promote the importance of big-picture thinking, long-term planning, and detailed roadmaps, perhaps shifting the attention of the AI community toward this direction.

The AI Roadmap Institute is a new initiative to collate and study various AI and general AI roadmaps proposed by those working in the field. It will map the space of AI skills and abilities (research topics, open problems, and proposed solutions). The institute will use architecture-agnostic common terminology provided by the framework to compare the roadmaps, allowing research groups with different internal terminologies to communicate effectively.

The amount of research into AI has exploded over the last few years, with many papers appearing daily. The institute's major output will be consolidating this research into an (ideally single) visual summary which outlines the similarities and differences among roadmaps, where roadmaps branch and converge, stages of roadmaps which need to be addressed by new research, and where there are examples of skills and testable milestones. This summary will be constantly updated and available for all who are interested, regardless of technical expertise.

There are currently two categories of roadmaps: research and development (how to get us to general AI), and safety / futuristic (how to keep humanity safe, and what the years after general AI is reached will look like). The institute will describe these roadmaps using the framework, in an implementation-agnostic manner. The roadmaps will show the problems, and any proposed solutions and implementations from others will be mapped out in a similar manner.

The institute is concerned with ‘big picture’ thinking, without focusing on local problems in the search for general AI. With a point of comparison among different roadmaps and with links to relevant research, the institute can highlight aspects of AI development where solutions exist or are needed. This means that other research groups can take inspiration or suggest new milestones for the roadmaps.

Finally, the institute is for the scientific community and everyone will be invited to contribute. It will phrase higher level concepts in an accessible and architecture-agnostic language, with more technical expressions made available to those who are interested.


Growing Topology Architecture

We are trying to implement the first prototypes of neural architectures that support the gradual accumulation of skills. This is the implementation side of our work, rather than the big picture / theoretical side of what we do.

The Framework – a systematic method for designing roadmaps and proposing solutions to various AI research topics – is helping us to generate useful hypotheses for which skills to implement first, research directions to take, and solutions. Essentially, if you view intelligence as a search problem, then you can see various AI skills and abilities as heuristics which increase the efficiency of the search.

For example, we have identified that the accumulation of skills is one of the first intrinsic heuristics we need to implement in order to allow AI to learn gradually and self-improve.


PR

Our PR plans include promoting the R&D roadmap, the framework, and the AI Roadmap Institute, all of which will help us find like-minded people and facilitate collaboration with academics and the general public.


GoodAI Consulting

GoodAI Consulting is an AI-focused consulting firm using state-of-the-art artificial intelligence solutions to maximize business success for companies and organizations. GoodAI Consulting started because of high demand for the cutting-edge know-how of GoodAI researchers. Our world-class research team focuses solely on general AI research, and is building technology that won't be on the market for another 5-10 years. Clients of GoodAI Consulting receive exclusive and preferential access to the GoodAI core team and research results.

GoodAI Consulting is hiring, so join us or sign up to learn more!


GoodAI web site: www.GoodAI.com

We’ve updated our website to reflect the most up-to-date description of our work, our progress, what we are doing, how we are doing it, why, and so on. If you’d like to learn more about our framework, roadmap, and other areas of focus, spend some time on our About page.

Our website is essentially a summarized knowledge base of GoodAI, but if there’s anything you can’t find there, just send us an email at info@goodai.com.


Contacts, friendship, and commitment to cooperation

We have built a substantial network of like-minded people, from academia to business, whom we reach out to when discussing and brainstorming ideas, problems, and solutions. We are very fortunate to have them.

---

As a whole, in the past year we focused much more on the big picture – architecture, strategy, and roadmapping – than on implementation or solving specific and narrow problems. In my view, the big picture for general AI is currently under-researched, and any gains we make in this direction can have a dramatic impact on lower level, more focused implementations.

Certain things at GoodAI are the same as they were one year ago, however. We’re still self-funded – I invested $10 million at the start of the project, have continuously added funding since then, and am prepared to increase my personal funding in the future to ensure that we always have $10 million+ in reserve going forward. We are also still completely devoted to our goal of developing general artificial intelligence.

We’re committed to the idea that solving a general problem will, in the end, offer better outcomes than trying to solve a set of specific problems – even if the narrower problems seem easier to tackle at first.

"There lies the inventor's paradox, that it is often significantly easier to find a general solution than a more specific one, since the general solution may naturally have a simpler algorithm and cleaner design, and typically can take less time to solve in comparison with a particular problem."

                                                                                                                                        - Bruce Tate

We aim for general AI, not narrow AI use cases. This approach allows us to restrict the search for the right solution and focus more resources on our desired long-term goal.


Thanks for reading!

Marek Rosa
CEO, CTO, and Founder of GoodAI
CEO and Founder of Keen Software House
:-)


Follow us on social media:

Facebook: https://www.facebook.com/GoodArtificialIntelligence
Twitter: @GoodAIdev
www.GoodAI.com
www.KeenSWH.com