Wednesday, July 19, 2023

Introducing our work on general-purpose LLM Agents

At GoodAI, we are dedicated to pushing the boundaries of artificial intelligence. Our current focus is the development of Large Language Model (LLM)-based agents with personalities that go beyond simple conversations and instead exhibit LLM-driven behaviors, interacting with humans, with other agents, and with their virtual environment. Our agents learn from feedback, store long-term memories, and express goal-oriented behaviors. We are building a cognitive architecture on top of an LLM, which serves as the reasoning engine, and adding long-term memory as the foundation for continual learning.

Unlocking the Potential of Large Language Model Agents

Since 2021, we have been applying our research to the development of AI People, our in-house video game where LLM agents come alive. In this open-ended sandbox simulation, agents interact with each other and their environment, forming relationships and displaying emotions.

How It Works

Our LLM agents are emulated personalities with goals and memories. As the designers, we describe their personalities in plain text, which serves as a blueprint for their behavior. We feed the agents’ observations and recent events into the LLM, which generates responses that reflect what the agent would do in a given situation.

These responses are then translated into possible game actions, providing our agents with the autonomy and adaptability to navigate their surroundings. It’s important to note that our agents’ behaviors are not scripted; they are dynamically generated by the LLM, resulting in unpredictable, realistic, and amusing experiences.
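The loop described above can be sketched in a few lines. This is an illustrative sketch, not GoodAI's actual implementation: the names (build_prompt, parse_action, the action vocabulary) are assumptions, and the LLM call is stubbed with a canned reply so the example runs on its own.

```python
import json

def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM call; returns a canned JSON action."""
    return json.dumps({"action": "greet", "target": "Bob"})

def build_prompt(personality: str, observations: list) -> str:
    """Combine the plain-text personality blueprint with recent observations."""
    return (
        "Personality: " + personality + "\n"
        "Recent events:\n" + "\n".join("- " + o for o in observations) + "\n"
        'What does the agent do next? Reply as JSON {"action": ..., "target": ...}.'
    )

# Hypothetical set of atomic game actions the engine understands.
VALID_ACTIONS = {"greet", "walk_to", "pick_up", "say"}

def parse_action(response: str) -> dict:
    """Translate the LLM response into a game action, rejecting anything unknown."""
    data = json.loads(response)
    if data.get("action") not in VALID_ACTIONS:
        raise ValueError("unsupported action: %r" % data.get("action"))
    return data

def step(personality: str, observations: list) -> dict:
    """One perception-action cycle: observe, prompt the LLM, act."""
    return parse_action(call_llm(build_prompt(personality, observations)))
```

The key point the sketch illustrates is the validation layer: because the LLM's output is free-form, every response is parsed and checked against the set of actions the game engine actually supports before anything happens in the world.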

Cognitive Architecture generates goal-oriented behavior

Thanks to the cognitive architecture design, goal-oriented behavior is at the core of our LLM agents. When presented with a goal, they employ a planning and execution process to achieve it. If the goal can be accomplished using atomic game actions, we generate a plan that outlines how to achieve it. For longer-term goals, we decompose the plan into simpler tasks that can be completed within shorter timeframes. This iterative approach continues until each task becomes solvable using atomic game actions. We rely on feedback to guide our agents: completed tasks lead to new goals, while unsuccessful plans prompt the agents to reassess and revise their strategies.
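The recursive decompose-until-atomic process above can be sketched as follows. Everything here is an assumption for illustration: the atomic action set, the hard-coded decompose table (standing in for an LLM call), and the helper names are not GoodAI's API.

```python
# Hypothetical atomic actions the game engine can execute directly.
ATOMIC_ACTIONS = {"walk_to", "pick_up", "use", "say"}

def is_atomic(task: str) -> bool:
    """A task is atomic if it starts with a known engine action verb."""
    return task.split()[0] in ATOMIC_ACTIONS

def decompose(task: str) -> list:
    """Stand-in for an LLM call that splits a task into simpler subtasks."""
    known = {
        "make breakfast": ["walk_to kitchen", "pick_up eggs", "use stove"],
    }
    return known.get(task, [task])

def plan(goal: str) -> list:
    """Recursively decompose a goal until every step is an atomic game action."""
    if is_atomic(goal):
        return [goal]
    steps = []
    for sub in decompose(goal):
        if sub == goal:  # could not be decomposed further; surface the failure
            raise ValueError("cannot decompose: " + goal)
        steps.extend(plan(sub))
    return steps
```

In the real system the failure branch would feed back into replanning rather than raising, matching the feedback loop described above.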

Long-term Memory enables continual learning

Our LLM agents rely on their Long-Term Memory (LTM) to store and retrieve crucial memories. Conversations, thoughts, plans, actions, observations, skills, and behaviors are all stored within a vector database. The LTM enables pre-processing and post-processing of memories, ensuring optimal retrieval. By considering factors such as context, recency, importance, and relevance, our agents can access the appropriate memories to inform their actions. LTM acts as a foundation for continual learning, enabling our LLM agents to grow and develop over time.
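A retrieval step that weighs relevance, recency, and importance, as described above, might look like the following minimal sketch. The specific weights, the exponential recency decay, and the two-dimensional toy embeddings are assumptions for the example, not GoodAI's design; a production system would query a vector database rather than scan a Python list.

```python
import math
import time

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def score(memory, query_vec, now, w_rel=1.0, w_rec=1.0, w_imp=1.0):
    """Combine relevance, recency, and importance into one retrieval score."""
    relevance = cosine(memory["embedding"], query_vec)
    recency = math.exp(-(now - memory["timestamp"]) / 3600.0)  # decays over hours
    return w_rel * relevance + w_rec * recency + w_imp * memory["importance"]

def retrieve(memories, query_vec, k=2, now=None):
    """Return the k highest-scoring memories for the current context."""
    now = time.time() if now is None else now
    ranked = sorted(memories, key=lambda m: score(m, query_vec, now), reverse=True)
    return ranked[:k]
```

The design choice worth noting is that pure embedding similarity is not enough on its own: without the recency and importance terms, an agent would keep recalling old but superficially similar events instead of what just happened to it.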

Overcoming Challenges

While our LLM-driven universal agents hold tremendous potential, they also face certain challenges. The LLM’s responses can be volatile and unreliable, as slight changes in the prompt can lead to significant variations in the output. Occasionally, the LLM may generate irrelevant or nonsensical information, necessitating ongoing improvements. Additionally, the LLM’s context window has limited capacity, which can impact our agents’ understanding of complex scenarios.

Although our agents currently focus on language understanding, we recognize the need for multi-modal comprehension. We are actively working on addressing these challenges, as well as improving long-term memory, to enhance the performance of our LLM agents.

GoodAI’s Vision for the Future

Since our inception in 2014, we have been pursuing the goal of beneficial general artificial intelligence. In 2021, we embarked on the path of LLM-driven agents, applying our findings directly in the development of the AI People game. While video games are an ideal developmental environment, we believe that the possibilities presented by collaborative LLM agents go far beyond entertainment. Some of our current collaborative agent-based work includes AI Researcher, Multi-Agent Coder, Assistant, and Stoic Mentor.

We invite you to follow our journey and get in touch with us if you would like to become part of it.

Thank you for reading this blog!


Marek Rosa
CEO, Creative Director, Founder at Keen Software House
CEO, CTO, Founder at GoodAI




Personal bio:

Marek Rosa is the founder and CEO of GoodAI, a general artificial intelligence R&D company, and Keen Software House, an independent game development studio founded in 2010 and best known for its best-seller Space Engineers (over 5 million copies sold). Space Engineers has the 4th largest Workshop on Steam, with over 500K mods, ships, stations, worlds, and more!

Marek has been interested in game development and artificial intelligence since childhood. He started his career as a programmer and later transitioned to a leadership role. After the success of Keen Software House titles, Marek was able to fund GoodAI in 2014 with a $10 million personal investment.

Both companies now have over 100 engineers, researchers, artists, and game developers.

Marek's primary focus includes Space Engineers, the VRAGE3 engine, the AI People game, long-term memory systems (LTM), an LLM-powered personal assistant with LTM named Charlie Mnemonic, and the Groundstation.

GoodAI's mission is to develop AGI - as fast as possible - to help humanity and understand the universe. One of the commercial stepping stones is the "AI People" game, which features LLM-driven AI NPCs. These NPCs are grounded in the game world, interacting dynamically with the game environment and with other NPCs, and they possess long-term memory and developing personalities. GoodAI also works on autonomous agents that can self-improve and solve any task that a human can.
