January 21
Today I will introduce how I define general-purpose, human-level intelligence. This blog post is not limited to artificial intelligence, but it does answer some basic questions that can help us progress towards artificial general intelligence (AGI).

You can also watch a longer interview with me where I discuss this topic.

Some people may consider what I write here fairly obvious. However, I believe that for others my description will offer a new perspective and help them understand how I see the big picture.

Simple description of intelligence

Intelligence is a tool that an intelligent agent uses to learn, adapt, solve problems, and achieve goals in a dynamic, complex, and uncertain environment. Intelligence achieves this by representing relevant parts of the environment in a simplified, abstracted mental model where searching for optimal solutions is faster and cheaper. Intelligence has fewer resources (atoms, computation cycles, energy, etc.) than the environment, so the intelligent brain must use resources in a smart way.

Evolution vs. intelligence

Like intelligence, evolution tries to find solutions or optimal ways to operate in a complex environment. However, instead of using a mental map or representation of the real world, evolution uses the real world – an environment that is extremely complex and has a nearly infinite number of parameters.

We can say that evolution is a “dumb” algorithm because it does not plan ahead and every solution has to be tested in the real world.
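
As a toy illustration, here is what that "dumb" algorithm looks like in code – a minimal Python sketch in which every candidate must be "tested in the real world" and nothing is planned in advance. The bit-string genomes, the fitness function, and all parameters are invented for this example:

```python
import random

# Toy model of evolution as blind trial-and-error. Each "organism" is a
# bit string; its fitness can only be measured by actually "living" it
# (here: comparing against a hidden target). No candidate is evaluated
# in a mental model first.

TARGET = [1, 0, 1, 1, 0, 1, 0, 0]

def fitness(genome):
    # "Real-world test": how many positions match the hidden target.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    # Blind variation: flip one random bit, with no foresight.
    i = random.randrange(len(genome))
    child = list(genome)
    child[i] = 1 - child[i]
    return child

random.seed(0)
population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
for generation in range(200):
    # Selection: keep the better half, refill with mutated copies.
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]

best = max(population, key=fitness)
print(fitness(best))
```

Note how much work this wastes: every variant is built and tested in full, and the process has no memory of why a variant failed – exactly the blindness described above.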

For example, we know that birds build different types of nests. Nests near the ground (where there are many predators) tend to have a protective dome structure, while nests in trees have an open cup shape. Birds do not use their intelligence to decide that nests built on the ground should have a protective dome – instead, it is believed that it took many generations of natural selection before evolution produced birds suited to building ground nests with protective domes. Other birds remain restricted to building nests in trees because they never evolved ground nest-building skills.

We can say that evolution “learns” by reusing and combining things that have already worked. However, its memory is limited – if it finds itself at a dead end, it has limited options to backtrack to a previous working solution.

Evolution is blind and has no sense of where it is, where it was, or where it is going.

Unlike evolution, intelligence searches for solutions in a simplified mental representation of the real world. The intelligent agent observes a portion of its environment and tries to create a simplified mental model of that environment. Since the agent operates with limited resources, it will model only those parts of the environment (internal, external, its own thinking, other agents, and so on) that are relevant for finding optimal solutions.

Since this representative model has fewer parameters than the real world and includes only relevant information, searching for solutions is faster and various non-evolutionary optimization strategies are made possible. These strategies include planning, forecasting, learning, abstracting, connecting things that are not close in time or space but may be related to one another, and more.
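
A minimal Python sketch of this idea (the grid "model" and the planner are invented for illustration): the agent plans a complete route inside its simplified model using breadth-first search, paying no real-world cost until the plan is ready.

```python
from collections import deque

# The "world" may have millions of parameters; the agent's model keeps
# only what matters for the task -- here, a tiny grid of free (.),
# blocked (#), start (S), and goal (G) cells. Planning happens entirely
# inside this model, so no step is "paid for" in reality until a full
# plan exists.

MODEL = [
    "S.#.",
    ".##.",
    "...G",
]

def plan(model):
    rows, cols = len(model), len(model[0])
    start = next((r, c) for r in range(rows) for c in range(cols)
                 if model[r][c] == "S")
    frontier = deque([(start, [start])])
    visited = {start}
    while frontier:
        (r, c), path = frontier.popleft()
        if model[r][c] == "G":
            return path  # a complete plan, found "mentally"
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and model[nr][nc] != "#" and (nr, nc) not in visited):
                visited.add((nr, nc))
                frontier.append(((nr, nc), path + [(nr, nc)]))
    return None

route = plan(MODEL)
print(len(route) - 1)  # 5 moves, found without taking a single real step
```

Contrast this with the evolutionary approach: here a dead end costs one cheap model expansion, not a real organism or a real action.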

There can be higher or lower levels of intelligence and adaptability – depending on the abilities of the intelligent agent. At a certain low level, an agent doesn’t even need to be adaptive or intelligent to achieve goals, but can succeed simply by following a set of rules (discovered, for example, by evolution).

Properties and abilities that enable intelligence

Patterns are critical to the functioning of intelligence. A pattern is something that happens in a regular and repeated way (not random noise). Our current human approach to creating a mental model is to see the world as a hierarchy of spatial and temporal patterns which manifest as letters, words, songs, behavior, events, physical laws, and so on.

Intelligence depends on:

  • Pattern detection – trying to find causal correlations between things. There are patterns in the universe around us. We know some of them, but many of them are still a mystery we need to discover.
  • Pattern generation – using detected patterns for something new, or applying patterns to unknown environments based on hypotheses we’ve generated. Through pattern generation, plans can be tested and goals achieved.
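
As a toy sketch of these two abilities in Python (the sequence and the period-finding rule are invented for this example): detection finds the shortest repeating pattern in the observations, and generation applies that pattern to predict what comes next.

```python
# Sketch of the two abilities above:
#  - detection: find the shortest repeating period in an observed sequence
#  - generation: use the detected pattern to produce future observations

def detect_period(seq):
    # Try shorter periods first; a period "explains" the sequence if
    # every element matches the element one period earlier.
    for p in range(1, len(seq)):
        if all(seq[i] == seq[i - p] for i in range(p, len(seq))):
            return p
    return len(seq)  # no repetition found: the whole sequence is the "pattern"

def generate_next(seq, n):
    # Apply the detected pattern to predict n future observations.
    p = detect_period(seq)
    return [seq[(len(seq) + i) % p] for i in range(n)]

observed = [3, 1, 4, 3, 1, 4, 3, 1]
print(detect_period(observed))     # 3
print(generate_next(observed, 4))  # [4, 3, 1, 4]
```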

We can learn new patterns by transferring patterns from domain to domain, essentially problem solving by analogy. Learning by analogy means acquiring new knowledge about something (an object, an action, a problem, and so on) by transferring useful knowledge we already have about something similar (a similar object, action, problem, etc.).

For example, if someone learns how to use a hammer, this basic movement can later help them use an axe.

Attention: Attention is closely connected to the intelligent use of limited resources. It is a critical part of intelligence because intelligence models only those parts of reality that are relevant to the problem the agent is trying to solve. This is how I understand all levels of attention – mental, sensory, goal-directed, and so on.

Put simply, the intelligent agent uses previous experience and knowledge about the world to focus its attention on specific parts of the real world or specific parts of the model. The mind has limited processing and memory resources and cannot do or process everything at the same time.
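
A minimal Python sketch of this idea (the relevance score, observations, and keywords are all invented for illustration): with a fixed processing budget, the agent scores incoming observations against its current goal and attends only to the top few.

```python
# Attention as relevance-weighted filtering under a resource budget.
# The agent can afford to process only a few observations per step, so
# it ranks each one against its current goal and attends to the best.

def attend(observations, goal_keywords, budget):
    # Relevance here = how many goal-related keywords an observation
    # mentions (a deliberately crude stand-in for learned relevance).
    def relevance(obs):
        return sum(word in obs for word in goal_keywords)
    ranked = sorted(observations, key=relevance, reverse=True)
    return ranked[:budget]  # everything else is ignored, not stored

observations = [
    "a red car drives past",
    "traffic light turns green",
    "a dog barks nearby",
    "pedestrian steps onto the crossing",
]
goal = ["traffic", "crossing", "light", "pedestrian"]
attended = attend(observations, goal, budget=2)
print(attended)
```

The point of the sketch is the budget: attention is not about seeing more, but about spending scarce processing only where previous knowledge says it will pay off.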

People are always searching for new patterns, because finding patterns is what optimizes the results of intelligence and helps it achieve goals faster and more cheaply. We can say that new and useful patterns are as valuable as gold. However, pattern detection and generation cannot occur all at once or without preparation. Detecting and utilizing patterns (i.e. increasing intelligence) involves both gradual and guided learning.

  • Guided learning means that there is someone (a mentor or society) who has already discovered many patterns for us, and we just need to learn the patterns from them. Without guided learning we would have to reinvent everything people before us already discovered.
  • Gradual learning means that we learn abilities one by one, where complex abilities are based on previously-learned abilities.
    • For example, before you can start programming, you first need to learn to write, read, speak, understand the environment, and so on. Without gradual learning, we would have to spot patterns in places where no lower-level pattern has been learned – making it a very difficult search problem.
    • If someone gives you a book written in Chinese, you most likely won’t be able to read or understand it. But if they give you a textbook and you study the first chapter to learn basic Chinese language patterns, you can then start the second chapter and learn more advanced patterns. Eventually, you will be able to read that book in Chinese.

Other mechanisms critical for understanding intelligence include:

  • Abstraction: allows us to see the correlation between one object and another at a high level, something evolution is unable to do
  • Uncertainty: the mental model of the world is a probabilistic model, because we don’t always know why things happen as they do in the real world. Using a probabilistic model prevents us from having to go into deep investigations on every matter, and allows us to act or move towards goals without complete certainty
  • Generalization and Specialization: the ability to move from the specific to the general or from the general to the specific

IMPORTANT: The above properties and abilities are not sufficient for human-level intelligence! This is why we have invented a framework for studying intelligence properties and abilities in a systematic manner – you can read more on this in our upcoming R&D roadmap for 2016.


Keep in mind this important principle: intelligence works in a simplified representation of the world, where it searches for patterns that help it reach optimal solutions. This model is much simpler than the real world and has a clearer structure. Intelligence can implement many optimizations that are impossible in real-world trial and error (planning, abstraction, and looking for correlations between events) and achieve efficient results. Intelligence always looks for more efficient, cheaper, and faster ways to use limited resources (mental, environmental, time, etc.).

Your feedback

Please let me know how you define or describe intelligence, if you think my description is too shallow or broad, where we can simplify it, and whether some additional details need to be added.

Many thanks!

Marek Rosa
CEO, CTO & Founder
GoodAI, Keen Software House

Facebook: https://www.facebook.com/GoodArtificialIntelligence
Twitter: @GoodAIdev
Forum: http://forum.goodai.com

  1. What are your thoughts on emotions and other such forces that actually direct the usage of intelligence (to decide which goals to seek, evaluate them, learn what is good and what is bad, etc.)?

    1. That was my first thought as well. You can set a simple goal when testing in a closed system (e.g. "reach that location"), but how do you design goals for real-world testing?

    2. I try to understand intelligence purely as a tool that other mechanisms (motivation, emotions…) use to achieve their goals.

      In other words, pure intelligence without emotions/goals/motivation won't achieve anything, as it would have no reason to act.

      This is not a big issue, because one can easily add motivation to the intelligence engine as well.

  2. Hello, I really like your definition of intelligence. It's clear and simple. But consider the following points on the role of evolution in designing it. I think there is no intelligence (even AI) without evolutionary processes. You talk about evolution as a "dumb" algorithm, but we can divide evolution into at least biological, mental, and cultural evolution. In biological evolution, agents (genes) act as you described in the example with nests. They are not able to plan, so they have to die, adapt, die, and so on. But at some point neural networks appear, and agents become able to make a model of the environment. So in some cases they can plan their actions. For me this means that they let their hypotheses about the environment die instead of themselves (thanks to D. Dennett – the book Consciousness Explained). So it is still an evolutionary process, but now on the level of the neural network – later the mind. The evolutionary pressure is still here, but now it is "choosing" better representations of the environment. After that, these agents with neural networks are able to model new realities based on models – so they create culture: buildings, cars, etc. And these things also fail or succeed. It is a new competitive environment for new agents (ideas, hypotheses, books, clothes, religions, and AIs). So for me, AI can only be created by evolutionary processes. You can give it some genetic makeup (programmed possibilities and goals), but after that it has to learn the majority of its "skills" by itself in a specific training environment. During this (possibly blind, if modelled by computer) learning process you also have to "kill" a lot of its copies, and after that you have a master AI for a specific task. It's a similar process to biological evolution, or to a learning mind making mistakes. So AI or neural networks are also blind, but their blindness is just moved to the level of models.
They look like they are planning (because they don't have to die), but they only have a better learning process, and the principle is the same – natural selection. Even the pattern learning process you mention is evolutionary – you "let die" unsuccessful "behaviour".
    Sorry for such a long comment, but I'm really interested in this topic and couldn't find a direct way to contact you.

  3. I have one definition of intelligence, which is even simpler.

    Intelligence is when a nervous system has the ability to predict.
    There are different nervous systems created by evolution. For me, prediction ability is necessary and sufficient to call a nervous system "intelligent".
    And yes, all the intelligent beings I know of have this ability, "representing relevant parts of the environment in a simplified, abstracted mental model" of patterns within patterns.

    At the end of your thoughts you walk around one very important property and part of intelligence, but you did not name it.
    This one, as valuable as gold, is called INTEREST.
    You can say it is just a part of attention. Well, yes and no. It affects attention, and it acts within attention. But it has different goals than the "simple" attention mechanism. Interest is what directs attention towards self-improvement.

    “We think only the most interesting thoughts” – I came to this maxim on my own, and I feel it is right.
    I have spent a lot of time trying to understand “what is interest”. It is as difficult as “what is snow?” when you are a snowman in Antarctica.

    ps: just sent “cuda-neurons” test to Radka.

    1. I agree with you, Alexander. Btw, we are just trying to implement an attention mechanism that learns to recognize and select relevant features in the input, and also learns to ignore irrelevant differences. This way our AI would learn how to learn better.

      Regarding abilities – there's a huge list of additional abilities that we will need our AI to learn. More on this later.

  4. I think what you call evolution, Marek, might be what is also called natural instinct. It's that hard-wired or deeply embedded coupling of stimulus and response that does not involve thought, at least in the abstract, reflective sense. Birds build nests out of instinct for protection from the elements and predators and to raise offspring. They don't 'design' their nests, but they do make decisions about which materials to use from among possible options. Those tactical decisions may still be mostly hard-wired, but there may be trade-offs made based on proximity/availability versus functionality or durability. I believe birds and other nest builders may learn through experience, but as with evolution overall, changes to instinctive behaviors are gradual. A given species won't change its basic nest-building pattern and build a nest that differs fundamentally from the programmed normal pattern.

    I agree that intelligence involves abstract, meta-level activities like reflection, planning, being able to explore alternatives and trade-offs by simulating possible sequences of events and outcomes in your mind rather than physically trying them all, and learning. Learning can vary from direct feedback loops to much more analytical and reflective types of feedback. My cat learned, at least in the sense of recognizing a pattern and selecting one pattern over another, that if she wants in, it's more effective to jump up to the window ledge and tap on the glass with her paw than to just stand at the base of the door or even to meow at the door. A wild crow couple in my garden [I call them Russell and Sheryl] learned that unlike the cat, I'm not a threat, and in fact I often fill the birdbath with fresh water. The warning caw I used to hear is now replaced by a different caw when the birdbath is dry and I'm outside. Not only have they learned patterns, they are using communications across species (as is the cat at the window), and some might argue they have successfully trained me to fill the birdbath for them.

    Obviously the success of general AI comes through generalization and re-application of patterns or rules to new use cases. That could be brute-force encoded, taking a pattern from one context, trying it in every other, and seeing where it works apart from its original context. I think AI is somewhat smarter than that today, as we can already use pattern similarities across different contexts or domains to narrow the possible cases where reapplication is likely to be most successful. It's a large-scale computational problem though, given the complexity of the world. Adaptation by the system itself is limited, too (i.e., you don't really apply the same rule; you first have to generalize it, then decide where it could be reapplied, likely with some specialization, and then you have to do that specialization). Humans, and maybe some other species, at least among primates and possibly other animals as well, are good at that critical thinking — analysis, rationalization of possible reasons why something works in one case and may or may not work in some other, and reapplication, which may even involve some new creative twist on the previous pattern (in a way that isn't necessarily accidental, as in evolution).

    If you can get your AGI to do some of that, even with fairly simplified models of the real-world, then you'll be on to something that will truly advance the state-of-the-art in AI. Keep at it, my friend!

  5. Intelligence is a set of properties of a specific neural network topology – rather simple, really. If we didn't know about, say, combustion, we might discuss the internal combustion engine by stating: it is warm, it is heavy, it is moving west, it is sometimes cold, and so on. But it is an engine – and modelling some of its properties wouldn't help. The same goes for intelligence – there is a NN topology compliant with all observable features.