Thursday, August 25, 2016

Guest Post by Jan Hlousek: VRAGE Render & Upcoming Performance Optimizations

Hello, Engineers! I’m Jan Hlousek, and I lead the VRAGE Render team at Keen Software House. Today I wanted to share some of our documentation with you, so you can take a closer look at precisely how we’re working to improve the render even further. Things are looking very promising so far (just take a peek at these screenshots!), but please keep in mind that this is a bold plan that is subject to change as we move forward and learn more.



Here is the primary structure of our documentation on render performance optimization:
  • Bottlenecks
    • How to combat draw call count
    • How to combat drawing unnecessary objects
    • How to reduce vertex processing
  • Implementation Challenges
  • Expected Performance
  • Implementation
    • Overview
      • Frame processing flow
      • GPU Data topology
    • Texture arrays for voxels
    • Texture arrays / atlases for models
    • Texture arrays for billboards
    • Lodding #1
    • New render component
      • Instancing
      • GPU culling
      • Draw Composer
      • Lodding #2
    • Static transparent geometry
    • Improved culling
    • Voxel merge
    • Armor rendering
    • Occlusion culling
    • Voxel occluders
    • Shadows
    • Foliage
    • Planet setup
  • Future improvements
    • Occlusion culling #2
    • Point cloud
    • Shader optimizations
    • GPU bubbles removal
  • Appendix A - Optimization possibilities
    • Transparent pipeline
    • Models
    • Voxels
    • Culling
    • Foliage
    • Lights
  • Appendix B - Performance analysis
  • Appendix C - Articles

Read on to learn more! The documentation is fairly technical, but the basic idea is this: we want to reduce the number of draw calls by moving culling to the GPU and using merged instance rendering, and we want to reduce the number of meshes / vertices processed through better lodding and occlusion culling.

Bottlenecks

The main bottlenecks in Space Engineers are too many draw calls with too many vertices. Dispatching so many draw calls chokes the CPU on both the render and parallel threads, and also in the driver’s kernel. Dispatching unnecessary (occluded) draw calls with large vertex buffers has a negative impact on GPU performance.

See the Performance analysis section for more.

How to combat draw call count
  • Using instancing per model will reduce draw calls per model
  • Using merge instancing (collating vertex buffers) reduces draw calls overall
    • Those are implemented to some degree, but with limited usage
  • Moving visibility detection to GPU, operating on a static list of objects without CPU involvement
How to combat drawing unnecessary objects
  • Better frustum culling
    • Currently, lots of stuff is dispatched to the render even though it is not in the frustum
  • Detecting what is occluded using occlusion culling
How to reduce vertex processing
  • Proper lodding
    • Currently, lod thresholds are set up by artists, without taking into account the current resolution or field of view


Implementation Challenges

It is quite clear how to combat all of the problems we currently have. A lot of consideration was given to deciding where to make the cut between CPU and GPU processing, so that communication between them is fluent (non-blocking) and efficient. Therefore, we decided to move culling to the GPU. This will eliminate all per-frame updates of buffers on the GPU. All updates of GPU data will be bound to changes in the world or to camera movement. We will make sure updates don’t choke the CPU or the bus by dispatching them to a low-priority thread.

Note: Some tasks are still under research, therefore the final design of the architecture may be slightly changed.


Expected Performance

On the GPU, the processing of culling, lodding and instancing will add some load, while the reduced number of triangles and pixels processed will reduce it. After all optimizations, the GPU performance gain will depend on the complexity of the world: the more complex the scene, the better the gain.

CPU performance will be enhanced massively: the render thread will take a fraction of its current time, and all per-frame parallel tasks will be removed. The simulation thread and all async tasks will have much more processing power at their disposal, reducing sim speed problems.

The bus will be freed from the large amount of per-frame, per-draw-call data currently being dispatched to the GPU.
Overall, as we are mostly CPU bound, we expect these gains in the tested scenarios (see the Performance analysis section):


Implementation

The roadmap is separated into multiple self-contained tasks. Tasks are designed with the iterative implementation approach in mind: each task can be finished and released separately, and each task brings performance improvements in itself. Tasks should be implemented in a specified order, though, because of dependencies.

Overview

Frame processing flow


GPU Data topology

Texture arrays for voxels
Removing conditions in shaders. Theoretically, it's possible to render all voxels in two passes (single-material and multi-material) - this will be implemented in the Draw composer phase.

Texture arrays / atlases for models
At most two draw calls per model (base and decals).
Modify the models’ vertex texcoords to correctly address the texture atlas.
Research pending: mipmapping / filtering issues for atlases, performance for updates of huge arrays, performance for rendering from huge arrays.
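As a rough illustration of the texcoord rewrite (names and layout are invented for illustration, not the engine's actual code), each vertex UV is scaled and offset into its tile's sub-rectangle of the atlas:

```python
def remap_uv_to_atlas(u, v, tile_x, tile_y, tiles_per_row):
    """Map a model-space UV in [0, 1] into the sub-rectangle of an atlas
    tile at grid position (tile_x, tile_y) in a square atlas."""
    tile_size = 1.0 / tiles_per_row            # each tile's extent in atlas space
    return (tile_x + u) * tile_size, (tile_y + v) * tile_size
```

For example, `remap_uv_to_atlas(0.5, 0.5, 1, 0, 4)` lands in the center of the second tile of the first row. Note that a real implementation also needs gutter padding between tiles, which is exactly the mipmapping / filtering research item mentioned above: without it, coarse mips bleed across tile borders.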

Texture arrays for billboards
Apply texture atlasing from models to billboards as well. Replace texture atlasing in GPU particles with the same approach.

Lodding #1
Lod thresholds for models, imposters and voxels deduced in the algorithm based on these factors:
  • Render target resolution
  • Distance from viewport
  • Field of view
  • Density of triangles in model (will be deduced on import; for older models, on load)
  • Quality bias (will be used to generally shift to worse lods - exposed in game settings)
Lodding per viewport.
Far away grids should be discarded from rendering completely.
Making shadows work with new lodding.
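The factors above can be folded into one projected-size metric. The sketch below is illustrative only (not the engine's actual algorithm): it derives an object's on-screen size in pixels from distance, vertical field of view and render-target height, then applies a quality bias that shifts selection toward coarser lods:

```python
import math

def select_lod(distance, fov_deg, screen_height_px, lod_pixel_thresholds,
               object_radius, quality_bias=0.0):
    """Pick a lod index from the object's projected size in pixels.
    lod_pixel_thresholds is sorted descending: lod i is used while the
    projected size stays at or above thresholds[i]."""
    # pixels per world unit at this distance, from the vertical FOV
    px_per_unit = screen_height_px / (
        2.0 * distance * math.tan(math.radians(fov_deg) / 2.0))
    projected_px = 2.0 * object_radius * px_per_unit
    projected_px *= 2.0 ** (-quality_bias)     # bias > 0 shifts to coarser lods
    for lod, threshold in enumerate(lod_pixel_thresholds):
        if projected_px >= threshold:
            return lod
    return len(lod_pixel_thresholds)           # below all thresholds: coarsest lod
```

The triangle-density factor mentioned above would feed into the per-model `lod_pixel_thresholds`; resolution and field of view are already handled by the projection itself, which is exactly what artist-tuned fixed distances miss.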

New render component
Remove lodding, merging, culling and geometry recording.

Instancing
  • Gather instances for all models
  • Per instance transformation in grid + index to parent’s (grid) transformation
  • Rendering all instances per model at once, without any culling
GPU culling
  • Add brute force frustum culling of instances
  • Add draw indirect based on instance list generated from culling
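As a rough CPU-side sketch of the brute-force test (the real version would run per instance in a compute shader and append visible indices to the indirect-draw instance list), a bounding-sphere-vs-frustum check might look like this; all names are illustrative:

```python
def cull_instances(instances, frustum_planes):
    """Brute-force frustum culling: keep an instance's index unless its
    bounding sphere lies fully behind some frustum plane.
    instances: list of (center_xyz, radius); frustum_planes: list of
    (normal_xyz, d) with dot(n, p) + d >= 0 for points inside."""
    visible = []
    for i, (center, radius) in enumerate(instances):
        inside = True
        for normal, d in frustum_planes:
            dist = sum(n * c for n, c in zip(normal, center)) + d
            if dist < -radius:          # sphere entirely outside this plane
                inside = False
                break
        if inside:
            visible.append(i)
    return visible
```

Spheres intersecting a plane are conservatively kept, which is fine: the depth test catches them later, and correctness only requires never culling a visible instance.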
Draw Composer
  • Collate models into shared vertex / index buffers, where each buffer contains objects in strides sized by triangle count (4, 16, 64, 256, 1024, ...); the rest of each stride is filled with degenerate triangles. A mesh can be placed in multiple buckets to minimize the number of degenerate triangles. Research pending: performance of indexing vertices from custom buffers; use indexing (with sorted vertices) or not? (less memory vs. coherent cache); apply triangle strips?
  • Generate multiple instance lists based on the bucket each model is in.
  • Render indirect for each bucket
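To make the padding trade-off concrete, here is a hypothetical sketch of bucket selection and the degenerate-triangle waste it implies (the stride sizes come from the list above; everything else is illustrative):

```python
BUCKET_STRIDES = [4, 16, 64, 256, 1024]

def pick_bucket(triangle_count):
    """Choose the smallest stride bucket a mesh fits into; meshes larger
    than the biggest stride simply span several slots of that bucket."""
    for stride in BUCKET_STRIDES:
        if triangle_count <= stride:
            return stride
    return BUCKET_STRIDES[-1]

def padding_waste(triangle_count):
    """Degenerate triangles needed to fill out the occupied slots."""
    stride = pick_bucket(triangle_count)
    slots = -(-triangle_count // stride)       # ceiling division
    return slots * stride - triangle_count
```

A 10-triangle mesh lands in the 16-stride bucket and wastes 6 degenerate triangles; splitting a mesh across multiple buckets (as the text proposes) is what keeps this waste bounded.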
Lodding #2
  • Move the lod algorithm to the GPU; in the draw composer, select the correct lod mesh for each model
  • Output to specific bucket
Static transparent geometry
Apply this approach to static transparent geometry as well.

Improved culling
A spatial tree for frustum culling per grid. Grids themselves will either be culled brute force or have their own tree as well.
Research: what is the common number of grids in the game, and do we need to optimize for that?

Voxel merge
Use new render component and mesh buckets for voxel rendering as well.
Research pending: Bus considerations when adding new voxel patches to existing buckets. Multiple bucket types? (for short / long lived meshes)

Armor rendering
Armor blocks have to be merged, removing invisible edges. Custom tessellation of planes - removing unneeded vertices.
Research pending: Tessellation of lower lods, removing grid details. Basis for physics shape construction.

Occlusion culling
Occluders (essentially meshes with few triangles) are grouped into one large occluder group per grid. The occluder group is updated whenever the grid is updated.
Armor blocks will have occluder mesh generated only for outer shell.
Models will be able to contain custom occluder lod, which will be added to the occluder group.
A hierarchical z-buffer (HZB) is constructed by rendering the occluder groups for every camera view in the frame. The HZB is then used for quick per-instance culling.
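The HZB idea can be sketched in a few lines. This toy version (illustrative, not the engine's code) builds a max-depth mip chain over a rendered occluder depth buffer, then conservatively tests an instance's screen rectangle against a single coarse mip texel per covered region:

```python
import math

def build_hzb(depth):
    """depth: square 2D list (power-of-two side) of occluder depths,
    larger = farther. Each coarser texel stores the *farthest* (max)
    depth of its 2x2 children, so the test stays conservative."""
    mips = [depth]
    while len(mips[-1]) > 1:
        prev = mips[-1]
        n = len(prev) // 2
        mips.append([[max(prev[2*y][2*x], prev[2*y][2*x + 1],
                          prev[2*y + 1][2*x], prev[2*y + 1][2*x + 1])
                      for x in range(n)] for y in range(n)])
    return mips

def is_occluded(mips, x0, y0, x1, y1, nearest_depth):
    """Test a screen-space rect at the mip level where it covers roughly
    one texel; the instance is culled only if it is behind the farthest
    occluder depth everywhere it touches."""
    level = min(len(mips) - 1,
                max(0, math.ceil(math.log2(max(x1 - x0, y1 - y0, 1)))))
    scale = 2 ** level
    for ty in range(y0 // scale, (y1 - 1) // scale + 1):
        for tx in range(x0 // scale, (x1 - 1) // scale + 1):
            if nearest_depth < mips[level][ty][tx]:
                return False        # instance may be in front of occluders here
    return True
```

Because each coarse texel holds the farthest child depth, a "occluded" answer is always safe; the price is occasional false "visible" answers, which only cost a redundant draw.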

Voxel occluders
Generate occluder group from the original grid of voxel terrain.
http://procworld.blogspot.cz/2015/08/voxel-occlusion.html
Research: one occluder group per planet / asteroid or multiple?

Shadows
Add PCF postprocessing, tweak and switch to new shadows.

Foliage
Optimize shaders (try removing per frame geometry shader). Lower density of grass with distance. Couple both density and distance for foliage in settings.

Planet setup
Tweak planet setup according to performance:
  • Density and distance of foliage
  • Density of trees / bushes
Add slider affecting densities to settings.

Future improvements

Occlusion culling #2
Occluders can occlude each other, removing whole grids from rendering. For this purpose, every occluder has to have an occludee as well; a bounding box (or multiple bounding boxes in the case of a large grid) containing all of the grid’s object AABBs will probably be enough.
Occluder groups for faraway grids won’t be rendered at all.

Point cloud
Add very far objects to a point cloud renderer, containing only position and color, with pixel size determined by distance. The whole buffer is rendered at once, with no culling; objects are added to and removed from the buffer only occasionally. The whole point cloud could be disabled based on the user’s settings (reducing the visibility distance).
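One plausible way to derive the pixel size (an assumption on my part, not the documented formula) is a simple perspective projection of the object's world-space size, clamped so distant points never vanish entirely or bloat:

```python
import math

def point_pixel_size(distance, world_size, fov_deg, screen_height_px,
                     min_px=1.0, max_px=4.0):
    """Screen-space size in pixels of a far object rendered as a point,
    clamped to [min_px, max_px]."""
    px = world_size * screen_height_px / (
        2.0 * distance * math.tan(math.radians(fov_deg) / 2.0))
    return max(min_px, min(max_px, px))
```

The lower clamp is what keeps a grid visible as at least a one-pixel dot right up to the point cloud's cutoff distance.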

Shader optimizations

GPU bubbles removal


Appendix A - Optimization possibilities

Transparent pipeline
  • GPU particles
    • Manage alive particles list for simulation (do not update all particles always)
    • Measure possible gains for multiple particle buckets (Lighting, Collisions, streaks)
  • Static Glass
    • Add support for instancing
  • Billboards
    • Shared texture arrays with gpu particles
      • Render all cpu particles in one pass
    • Automatic atlasing of other billboards
      • Render in one pass
Models
  • Texture arrays
    • Loading to three big texture arrays (cm, add, ng)
    • Research pending
      • Possible performance hit with big texture array locking on load 
      • Possible performance hit with accessing texture array in shader (memory throughput bottleneck)
      • Use atlasing or just arrays?
    • Prepare vertex data with uvs and index to atlas
  • Instancing
    • Create new renderable component with simple interface and clean tracking of instances
    • Eligible for static models without bones
    • List of instance data in structured buffer
  • Merge Instancing
    • Consider whether to merge clusters of objects into one mesh
Voxels
  • Texture arrays
  • Voxel merge
Culling
  • Compute shader for frustum culling
    • Passing list of indices to instances to drawInstancedIndirect
  • Occlusion culling
    • OccluderGroup
      • Contains simple occlusion per block
        • Standard armor handled separately
        • Blocks having a custom occlusion geometry in model
        • No deformations
      • Contains simple occlusion per sector of voxels
      • Essentially a triangle mesh
      • Managed per grid or per block of voxels
Foliage
  • Optimize shaders
Lights
  • Number of lights in world
    • Find out their owner and their purpose
    • Check
      • Medieval planet
      • Space planet
      • Space empty scene


Appendix B - Performance analysis

Setup: CPU i5 3.2 GHz, 16 GB RAM, NVIDIA GTX 750 Ti



Appendix C - Articles

Bottlenecks of constant buffer access
https://developer.nvidia.com/content/constant-buffers-without-constant-pain-0

Texture update costs
https://eatplayhate.me/2013/09/29/d3d11-texture-update-costs/

Direct3D11 Deferred Contexts
https://developer.nvidia.com/sites/default/files/akamai/gamedev/docs/GDC_2013_DUDASH_DeferredContexts.pdf

Direct3D11 Optimization guide
http://amd-dev.wpengine.netdna-cdn.com/wordpress/media/2013/04/DX11PerformanceReloaded.ppsx

Hierarchical Z-Buffer
http://malideveloper.arm.com/resources/sample-code/occlusion-culling-hierarchical-z/?doing_wp_cron=1470414658.9501960277557373046875
http://malideveloper.arm.com/partner-showroom/occlusion-culling-with-compute-shaders/?lang=zh-hans

Voxel occlusion
http://procworld.blogspot.cz/2015/08/voxel-occlusion.html

--

Thanks for reading!

Marek Rosa
CEO and Founder, Keen Software House
CEO, CTO and Founder, GoodAI

www.KeenSWH.com
www.GoodAI.com

Space Engineers on Facebook: https://www.facebook.com/SpaceEngineers
Space Engineers on Twitter: https://twitter.com/SpaceEngineersG
Medieval Engineers on Facebook: https://www.facebook.com/MedievalEngineers
Medieval Engineers on Twitter: https://twitter.com/MedievalEng

Monday, August 1, 2016

GoodAI One Year Later: Progress to Date and Next Steps

It’s been one whole year since we publicly announced GoodAI. I want to celebrate the anniversary by looking back at what we have achieved in the past 12 months, and tell you a bit more about what we are planning for the future.

Our progress to date and next steps include:
   - Framework
   - Roadmap
   - School for AI
   - Growing Topology Architecture
   - Arnold Simulator
   - AI Roadmap Institute
   - GoodAI Consulting


GoodAI started as a project within Keen Software House in January 2014. It was announced to the public on July 7, 2015, and has now grown to a team of 30 researchers. Together with Keen Software House, we have team members from 17 countries!

GoodAI’s mission is to develop general artificial intelligence
 – as fast as possible – 
to help humanity and understand the universe.

One year ago, our primary approach to building general AI was through Brain Simulator, our in-house collaborative platform that third-party researchers, developers, and tech companies could use to prototype and simulate artificial brain architectures, share knowledge, and exchange feedback. At that time, we were exploring various approaches to building general AI, trying to gain a better understanding of the field as a whole, and consolidating our own specific approach to the problem.

One year later, we’re working on several things that together form our focused approach to building general artificial intelligence.

We have focused mainly on our R&D roadmap. Together with our framework, it is our latest achievement. The roadmap started out almost as a side project, but the importance of a strategic overview of the AI landscape quickly became apparent. It will help us choose research directions more efficiently and reduce the complexity of development within those directions.

I feel that we have accomplished a lot during last year. 
I am very satisfied with our progress.


Framework


We view intelligence as a tool for searching for solutions to problems. The guiding principles of our AI research revolve around an agent which can accumulate skills gradually and in a self-improving manner (where each new skill can be reused and improved in the accumulation of further skills).

Each new skill works like a heuristic that helps to guide and narrow the search for problem solutions. Some heuristics even increase the efficiency of the search for additional heuristics.
These principles have inspired our framework document, which describes how we understand intelligence and provides tools for studying, measuring, and testing various skills and abilities.

The framework itself will aim to be as implementation agnostic as possible, without regard to particular learning methods or environments. It will provide an analytic, systematic, and scalable way to generate hypotheses that are potentially relevant in the search for general AI.


R&D roadmap

The research roadmap is an ordered list of skills / abilities (research milestones) which our AI will need to be able to acquire in order to achieve human level intelligence. Each skill or ability represents an open research problem and these problems can be distributed among different research groups, either internally at GoodAI, or among external researchers and hobbyists.

There are two parts to the roadmap: 
  • a map for the open problems
  • a map for known and proposed solutions (where each problem may have multiple or branching solutions)

The roadmap is a living document which will be updated as we work towards the milestones and evaluate them within the framework document.

The current version of the documents is early-stage and a work in progress. We anticipate that more milestones and research directions will be added to the roadmap as our understanding matures.

The first version of the roadmap and framework will be released to the public within the next couple of months. There will still be many parts missing, but we feel that it is better to engage with the community as soon as possible.


School for AI

What is the goal of the School for AI? We expect the AI to be able to learn. Of course, some intrinsic skills will be hard-coded, and the AI has to be "born" with them. Other skills will be learned. We will teach the AI these skills in a gradual and guided way in our School for AI which we are now developing.

In the School for AI, we first design an optimized set of learning tasks, or as we say, a "curriculum." The curriculum teaches the AI useful skills / heuristics, so it doesn't have to discover them on its own. Without a curriculum, the AI would waste time exploring areas that evolution and society already explored, or those that we know are not useful or perhaps dangerous.


Arnold Simulator

Arnold Simulator is a software platform designed for the rapid prototyping of AI systems with highly dynamic neural network topologies. The software will provide tools for our research and development, but it is also designed for high performance and it's transparently scalable to large computer clusters.

It is the next generation of GoodAI in-house prototyping software. It follows in the steps of GoodAI's Brain Simulator, which focused more on the standard machine learning algorithms. We’re designing it for large, highly dynamic, heterogeneous and heterarchical networks of lightweight actors, and with focus on concurrency, parallelism and low-latency messaging. For concurrency, we're using the actor model, where independent actors communicate via messages. The simulation runs in discrete time-steps, during which the individual actors are processed in parallel. In between the simulation steps, the system can interact with any virtual or real environment via sensors and actuators. The design of Arnold Simulator will allow us to effectively implement the growing general AI architectures we are focusing on.
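As a toy, single-threaded sketch of that discrete-step message-passing loop (the names and API here are invented for illustration; they are not Arnold Simulator's actual interface): actors process their inboxes during a step, and any messages they send are delivered at the start of the next step, which is what makes per-step parallel processing safe.

```python
class Actor:
    def __init__(self, name):
        self.name = name
        self.inbox = []

    def step(self, send):
        """Process this step's messages; use send(target, msg) to queue
        messages for delivery at the start of the next step."""
        raise NotImplementedError

class Simulator:
    def __init__(self, actors):
        self.actors = {a.name: a for a in actors}
        self.pending = []

    def run_step(self):
        # deliver messages queued during the previous step
        for target, msg in self.pending:
            self.actors[target].inbox.append(msg)
        self.pending = []
        # process every actor; conceptually these run in parallel,
        # which is safe because sends only take effect next step
        for actor in self.actors.values():
            actor.step(lambda t, m: self.pending.append((t, m)))
            actor.inbox.clear()

class Counter(Actor):
    """Example actor: sums incoming numbers and pings itself each step."""
    def __init__(self, name):
        super().__init__(name)
        self.total = 0

    def step(self, send):
        self.total += sum(self.inbox)
        send(self.name, 1)      # arrives at the start of the next step
```

Sensors and actuators would slot in between `run_step` calls, exactly where the text says the system interacts with its environment.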

3D visualization from Arnold Simulator

Working groups

The GoodAI team is organized into four smaller working groups: the Brain group, School group, Software Engineers, and our AI Safety team.
  • Brain group is working on the implementation of solutions to research topics, mostly focusing on growing topologies, modular networks, and the reuse of skills. These are the guys who are implementing hardcoded skills.
  • School group is studying the skills that the AI needs to acquire (hardcoded or learned), and designing learning tasks for efficient education. They’re also working on the R&D roadmap and mapping various curricula. These are the guys who will train the learned skills.
  • Software engineers are building our Arnold Simulator.
  • AI Safety team is studying the safe path forward with our technology, how to mitigate threats to our team and humankind as a whole, creating an alliance of AI researchers committed to the safe development of AI and general AI, developing our futuristic roadmap, and more.


Futuristic roadmap

While our R&D roadmap covers research and technology plans, our futuristic roadmap is focused on freedom, society, ethics, the universe,  people, the Earth, politics, the economy, and more. Its contents include a description of the long-term future we want to build, and a step by step description of how we want to get there while mitigating the risks and challenges we might face along the way.

Our AI Safety team is working on this.


AI Roadmap Institute

We’re also entertaining the idea of setting up an independent institute dedicated exclusively to the study of (general) AI roadmaps – focused only on the big picture and agnostic to implementation details, plus promoting the importance of the big picture and long-term planning, detailed roadmaps, and perhaps shifting the focus/attention of the AI community toward this big picture direction.

The AI Roadmap Institute is a new initiative to collate and study various AI and general AI roadmaps proposed by those working in the field. It will map the space of AI skills and abilities (research topics, open problems, and proposed solutions). The institute will use architecture-agnostic common terminology provided by the framework to compare the roadmaps, allowing research groups with different internal terminologies to communicate effectively.

The amount of research into AI has exploded over the last few years, with many papers appearing daily. The institute's major output will be consolidating this research into an (ideally single) visual summary which outlines the similarities and differences among roadmaps, where roadmaps branch and converge, stages of roadmaps which need to be addressed by new research, and where there are examples of skills and testable milestones. This summary will be constantly updated and available for all who are interested, regardless of technical expertise.

There are currently two categories of roadmaps: research and development (how to get us to general AI), and safety/futuristic (how to keep humanity safe, both now and in the years after general AI is reached). These roadmaps will be described by the institute using the framework in an implementation agnostic manner. The roadmaps will show the problems, and any proposed solutions and the implementations of others will be mapped out in a similar manner.

The institute is concerned with ‘big picture’ thinking, without focusing on local problems in the search for general AI. With a point of comparison among different roadmaps and with links to relevant research, the institute can highlight aspects of AI development where solutions exist or are needed. This means that other research groups can take inspiration or suggest new milestones for the roadmaps.

Finally, the institute is for the scientific community and everyone will be invited to contribute. It will phrase higher level concepts in an accessible and architecture-agnostic language, with more technical expressions made available to those who are interested.


Growing Topology Architecture

We are trying to implement the first prototypes of neural architectures that support the gradual accumulation of skills. This is the implementation side of our work, rather than the big picture / theoretical side of what we do.

The Framework – a systematic method for designing roadmaps and proposing solutions to various AI research topics – is helping us to generate useful hypotheses for which skills to implement first, research directions to take, and solutions. Essentially, if you view intelligence as a search problem, then you can see various AI skills and abilities as heuristics which increase the efficiency of the search.

For example, we have identified that the accumulation of skills is one of the first intrinsic heuristics we need to implement in order to allow AI to learn gradually and self-improve.


PR

Our PR plans include promoting the R&D roadmap, the framework, and the AI Roadmap Institute, all of which will help us find like-minded people and facilitate collaboration with academics and the general public.


GoodAI Consulting

GoodAI Consulting is an AI-focused consulting firm using state-of-the-art artificial intelligence solutions to maximize business success for companies and organizations. GoodAI Consulting started because of high demand for the cutting-edge know-how of GoodAI researchers. Our world-class research team focuses solely on general AI research, and is building technology that won't be on the market for another 5-10 years. Clients of GoodAI Consulting receive exclusive and preferential access to the GoodAI core team and research results.

GoodAI Consulting is hiring, so join us or sign up to learn more!


GoodAI web site: www.GoodAI.com

We’ve updated our website to reflect the most up-to-date description of our work, our progress, what we are doing, how we are doing it, why, and so on. If you’d like to learn more about our framework, roadmap, and other areas of focus, spend some time on our About page.

Our website is essentially a summarized knowledge base of GoodAI, but if there’s anything you can’t find there, just send us an email at info@goodai.com.


Contacts, friendship, and commitment to cooperation

We have built a substantial network of like-minded people, from academia to business, whom we reach out to when discussing and brainstorming ideas, problems, and solutions. We are very fortunate to have them.

---

As a whole, in the past year we focused much more on the big picture – architecture, strategy, and roadmapping – than on implementation or solving specific and narrow problems. In my view, the big picture for general AI is currently under-researched, and any gains we make in this direction can have a dramatic impact on lower level, more focused implementations.

Certain things in GoodAI are the same as they were one year ago, however.  We’re still self-funded – I invested $10mil at the start of the project, have continuously added funding since then, and am prepared to increase my amount of personal funding in the future to ensure that we always have $10mil+ as a reserve going forward. We are also still completely devoted to our goal of developing general artificial intelligence.

We’re committed to the idea that solving a general problem will, in the end, offer better outcomes than trying to solve a set of specific problems – even if the narrower problems seem easier to tackle at first.

"There lies the inventor's paradox, that it is often significantly easier to find a general solution than a more specific one, since the general solution may naturally have a simpler algorithm and cleaner design, and typically can take less time to solve in comparison with a particular problem."

                                                                                                                                        - Bruce Tate

We aim for general AI, not narrow AI use cases. This approach allows us to restrict the search for the right solution and focus more resources on our desired long term goal.


Thanks for reading!

Marek Rosa
CEO, CTO, and Founder of GoodAI
CEO and Founder of Keen Software House
:-)


Follow us on social media:

Facebook: https://www.facebook.com/GoodArtificialIntelligence
Twitter: @GoodAIdev
www.GoodAI.com
www.KeenSWH.com

Wednesday, June 15, 2016

Marek's reading list - Endless Frontier: Vannevar Bush, Engineer of the American Century

I try to make time to read a variety of books, articles, news, and more. I’m especially interested in learning about how others have successfully managed and completed scientific projects, and one figure always left me feeling curious: Vannevar Bush. Bush is credited as one of the people who won WWII for the U.S. by bridging the gap between civilian scientific research and the military. Bush worked as President of the Carnegie Institution for Science and later Chairman of the National Advisory Committee for Aeronautics (now NASA). He also single-handedly convinced President Roosevelt to establish the National Defense Research Committee, allowing for effective coordination between civilian scientists and military researchers, and served as Director of the Office of Scientific Research and Development. Most importantly, perhaps, he oversaw the Manhattan Project. All of his efforts during the WWII era went towards leading thousands of scientists to develop groundbreaking technology in a very short period of time at a critical moment in history.

Vannevar Bush
Together with our Head of PR, JoEllen, I looked into Endless Frontier, a book by G. Pascal Zachary about Vannevar Bush’s life. The text explores all aspects of his adulthood, especially his triumphs and failures in the pursuit of a technology that revolutionized the American approach to scientific research.


Please keep in mind that this blog is not about my opinion on nuclear technology or the military. Rather, I am interested in how Bush managed a sizable and often remote research team. In general, here at GoodAI we want to maintain our distance from military research. However, I see that Vannevar Bush led a team with a strong sense of urgency (or even danger), and with a high potential for reward if they succeeded.

I think a lot of how he became successful can be applied to how we can organize our research and teams at GoodAI and Keen Software House.

Personality as a leader

  • Vannevar Bush was a strong-handed leader. He wasn’t afraid to take risks when the chance of reward was high, and he was certainly self-assured when it came to making big decisions. Knowing he had as good a chance as anyone to solve tough problems, he thought of himself as a ship captain, requiring “loyalty, even deference, from subordinates” but being “fiercely loyal and protective” of the people who worked for him. 
  • He refused to take “no” for an answer when he felt it was important.
  • It wasn’t easy to work for him – he was known for being both tenacious and belligerent. He liked to tell people to “Justify the space you occupy.” On the other hand, he was both skilled and self-aware, so he always tried to pair himself with another leader who was more sympathetic and generous than himself.
  • Most defining of all, he wasn’t afraid to work hard. He had grit.


Personal knowledge of research

  • In addition to reserving time for his own experiments and pursuing his own ideas, Bush strongly believed that any good researcher has to reach beyond their own field – to business, the free market, finance and economics, and more. 
  • He also believed that a good scientist needs soft skills – to be able to teach others and explain high-level concepts to them, and to have a strong sense of the “needs and aspirations” of the people they serve. For Bush, a scientist can’t be isolated in a lab, removed from reality.


Approach to research

  • When designing a machine, Vannevar Bush always started with a specific end goal. He then went back to the beginning and outlined specific steps for getting there, highlighting any part that might cause difficulty. He also wasn’t afraid to go back and revise as he moved forward in the process. Versatility was critical for Bush.
  • His goal was to “dream in a rather definite way.” He knew what he wanted, and always had a plan for how to get there. 
  • He worried about over-specialization in science, and was sure to stay involved in politics and business. He even moved to Washington D.C. to be closer to the action of the day. This proved a successful strategy for him, as he earned money by working to make things that people genuinely needed. 
  • He wasn’t bound by custom – he was as likely to speak his mind to the U.S. president as to his subordinate. He even compelled the army and navy to cooperate and coordinate, positioning himself as a fixer and a middleman, willing to tackle any problem. He even set up the NACA to connect the military with civilian research.


Organizing the team


Team organization was the self-described “greatest challenge” for Bush when it came to conducting research, but he had a number of strategies to ensure things ran smoothly.
  • He believed that clear structure could overcome crises and conflicts of personality – however, this sometimes made his teams overly bureaucratic.
  • He knew his researchers and their needs – he praised them for improvements, regardless of how small they were. He saw that researchers were motivated by pride, money, and patriotism, and ensured that they received precisely those rewards. 
  • Questions he asked at meetings included: What’s new? What are our alternatives? What are the pros and cons of each? This ensured there was always a full discussion and awareness of facts and alternatives, but he was the one to make the final decision.
  • The first principle of management for Bush was “hire good people and put them in key positions.”
    • His top people always had a direct line to Bush himself.
    • His team was diverse but balanced as a whole – he had the loyal man who really worked for the organization, but also one with more enthusiasm and creativity, etc.
    • None of his researchers ever quit, despite his demands. He always supported his inner team publicly, but was critically honest with them in private.
  • Bush didn’t staff only his own employees. Instead, he contracted with research institutions, universities, and industrial labs. This meant scientists could stay in their own labs, and that he could more easily put together a national network of the very best researchers, hiring rock stars from MIT, Harvard, and AT&T.
  • Finally, he often held informal “teas” for his staff in the afternoons – he saw it as a morale booster and a way to share information informally.

Accomplishments

  • Overall, his most significant achievement was bridging the gap between science and government, compelling the government to put scientists’ knowledge to effective use.
  • He managed to maintain an agile approach to research and development, though the organizations he oversaw were very large and widely distributed. 
  • He was never afraid to tell people what he thought (even the president of the United States), and successfully ensured continued government funding for science and engineering after WWII, contributing in a major way to American post-war supremacy.

Failures

  • However, Bush was elitist – he was an enemy of participatory democracy, and so failed to build mass support for his ideas about how research should be conducted. Instead, he relied on a small number of key decision makers to push his ideas through.
  • He failed to understand how competition could inspire and drive creative innovation, and was partial to centralization. While he contributed to the U.S. rise to power during and after WWII, he also played a strong role in feeding the military-industrial complex and bureaucratic government institutions. 
  • His self-confidence often proved to be an asset, but it also misled him: he committed himself to the development of nuclear weapons while having little idea of how to keep the technology contained or in safe hands.

Our takeaways

  • We need to maintain balance in the team.
  • As a team, we need to remain committed to agile development, and not become a sluggish corporation. 
  • It will also be critical for us to stay committed to continuously refining our thinking on the safety aspects of the artificial intelligence we’re building, and how AGI will impact the future. For this, we will likely need to reach out and cooperate with others who know more about economics, sociology, government relations, and more.
  • We should remain focused on design details for research, just as Bush did when designing a new machine – setting out a roadmap where each small step is specified as much as possible. We also shouldn’t be afraid to remain agile, changing our plans as we learn more and more. 
  • Finally, we should be wary of over-specialization, as it may distract us from our broader goals.

In general, reading the story of this successful leader confirmed that we’re on the right track in many ways. There’s always room to be inspired, however, and I’m always looking for ways to improve how our team works together.

---

Thanks for reading! Let me know in the comments if you have more recommended reading, especially about science, research, engineering, and development – I’m very open to suggestions!

Marek Rosa
CEO, CTO & Founder, GoodAI
CEO & Founder, Keen Software House


Facebook: https://www.facebook.com/GoodArtificialIntelligence
Twitter: @GoodAIdev
www.GoodAI.com
www.keenswh.com