Friday, March 1, 2024

Introducing Charlie Mnemonic: The First Personal Assistant with Long-Term Memory

As part of our research efforts in continual learning, we are open-sourcing Charlie Mnemonic, the first personal assistant (LLM agent) equipped with Long-Term Memory (LTM).


At first glance, Charlie might resemble existing LLM agents like ChatGPT, Claude, and Gemini. However, its distinctive feature is the implementation of LTM, enabling it to learn from every interaction. This includes storing and integrating user messages, assistant responses, and environmental feedback into LTM for future retrieval when relevant to the task at hand.
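
To make the store-and-retrieve idea concrete, here is a minimal illustrative sketch in Python. It is not Charlie's actual implementation: the class names are hypothetical, and a real system would rank memories by embedding similarity rather than the simple word-overlap score used here to keep the example self-contained.

# Minimal sketch of a long-term memory store (illustrative, not Charlie's code).
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class MemoryEntry:
    role: str          # "user", "assistant", or "environment"
    content: str
    timestamp: datetime = field(default_factory=datetime.now)

class LongTermMemory:
    def __init__(self) -> None:
        self.entries: list[MemoryEntry] = []

    def store(self, role: str, content: str) -> None:
        # Every interaction is persisted: user messages, assistant replies,
        # and environmental feedback alike.
        self.entries.append(MemoryEntry(role, content))

    def retrieve(self, query: str, top_k: int = 3) -> list[MemoryEntry]:
        # Return the entries most relevant to the current task.
        # A real system would use embeddings; word overlap keeps this runnable.
        query_words = set(query.lower().split())
        def overlap(entry: MemoryEntry) -> int:
            return len(query_words & set(entry.content.lower().split()))
        return sorted(self.entries, key=overlap, reverse=True)[:top_k]

ltm = LongTermMemory()
ltm.store("user", "My colleague Anna prefers short, informal emails.")
ltm.store("assistant", "Noted, I will keep emails to Anna short and informal.")
relevant = ltm.retrieve("write an email to Anna")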

Charlie Mnemonic employs a combination of Long-Term Memory (LTM), Short-Term Memory (STM), and episodic memory to deliver context-aware responses. This ability to remember interactions over time significantly improves the coherence and personalization of conversations.
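
One plausible way to combine these memory types is to assemble them into a single prompt before each model call. The sketch below is an assumption about the general pattern, not Charlie's exact prompt format; the function name and section headings are made up for illustration.

# Illustrative prompt assembly from the three memory types (assumed pattern).
def build_prompt(user_message: str,
                 short_term: list[str],
                 long_term: list[str],
                 episodic: list[str]) -> str:
    # Merge retrieved facts, episode summaries, and recent turns into one context.
    sections = [
        "## Relevant long-term memories",
        *long_term,
        "## Episodic summaries of past sessions",
        *episodic,
        "## Recent conversation (short-term memory)",
        *short_term,
        "## Current message",
        user_message,
    ]
    return "\n".join(sections)

prompt = build_prompt(
    user_message="Draft an email to Anna about Friday's meeting.",
    short_term=["User: Can you help me with some emails today?",
                "Assistant: Of course, who are we writing to?"],
    long_term=["Anna prefers short, informal emails."],
    episodic=["Last week the user asked for formal emails to John."],
)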

Moreover, Charlie doesn't just memorize facts such as names, birthdays, or workplaces; it also learns instructions and skills. This means it can understand nuanced requests like writing emails differently to Anna than to John, fetching specific types of information, or managing smart home devices based on your preferences.

Envision LTM as an expandable, dynamic memory that captures and retains every detail, constantly enhancing its understanding and functionality.

What is inside:

  • The LLM powering Charlie is the OpenAI GPT-4 model, with the flexibility to switch to other LLMs in the future, including local models (a short sketch of how such swapping could look follows this list).
  • The LTM system, developed by GoodAI, stands at the core of Charlie's advanced capabilities.
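
As an illustration of that flexibility, the sketch below hides the model choice behind one small interface so a local model can stand in for GPT-4. The names (ChatModel, OpenAIChatModel, LocalChatModel, get_model) are assumptions made for this example, not Charlie Mnemonic's actual code; only the OpenAI chat-completions call reflects a real API.

# Assumed sketch of a swappable model backend (not Charlie's actual interfaces).
from typing import Protocol

class ChatModel(Protocol):
    def complete(self, prompt: str) -> str: ...

class OpenAIChatModel:
    # Forwards the prompt to the OpenAI API (e.g. GPT-4).
    def __init__(self, model_name: str = "gpt-4") -> None:
        self.model_name = model_name

    def complete(self, prompt: str) -> str:
        from openai import OpenAI  # requires the openai package and an API key
        client = OpenAI()
        response = client.chat.completions.create(
            model=self.model_name,
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

class LocalChatModel:
    # Placeholder for a locally hosted model behind the same interface.
    def complete(self, prompt: str) -> str:
        return f"[local model reply to: {prompt[:40]}...]"

def get_model(backend: str) -> ChatModel:
    return OpenAIChatModel() if backend == "openai" else LocalChatModel()

# Use "openai" to route through GPT-4; "local" runs without any API key.
reply = get_model("local").complete("Summarize my notes from yesterday.")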

For more details, continue to the GoodAI blog post.

Github: https://github.com/GoodAI/charlie-mnemonic

Discord: https://discord.gg/Pfzs7WWJwf

Authors: Antony Alloin, Karel Hovorka, Ondrej Nahalka, Vojtech Neoral, and Marek Rosa 

Thank you for reading this blog!

 

Best,
Marek Rosa
CEO, Creative Director, Founder at Keen Software House
CEO, CTO, Founder at GoodAI

 

For more news:
Space Engineers: www.SpaceEngineersGame.com
Keen Software House: www.keenswh.com
VRAGE Engine: www.keenswh.com/vrage/
GoodAI: www.GoodAI.com
Personal Blog: blog.marekrosa.org

 

Personal bio:

Marek Rosa is the founder and CEO of GoodAI, a general artificial intelligence R&D company, and Keen Software House, an independent game development studio founded in 2010 and best known for its best-seller Space Engineers (over 5 million copies sold). Space Engineers has the 4th largest Workshop on Steam, with over 500K mods, ships, stations, worlds, and more!

Marek has been interested in game development and artificial intelligence since childhood. He started his career as a programmer and later transitioned to a leadership role. After the success of Keen Software House titles, Marek was able to fund GoodAI in 2014 with a $10 million personal investment.

Both companies now have over 100 engineers, researchers, artists, and game developers.

Marek's primary focus includes Space Engineers, the VRAGE3 engine, the AI People game, long-term memory systems (LTM), an LLM-powered personal assistant with LTM named Charlie Mnemonic, and the Groundstation.

GoodAI's mission is to develop AGI - as fast as possible - to help humanity and understand the universe. One of the commercial stepping stones is the "AI People" game, which features LLM-driven AI NPCs. These NPCs are grounded in the game world, interacting dynamically with the game environment and with other NPCs, and they possess long-term memory and developing personalities. GoodAI also works on autonomous agents that can self-improve and solve any task that a human can.

Comments:

  1. I am working on a similar project for my Master's Thesis. However, I am handling the long-term memory using Obsidian, so the long-term memory is fully open to the user. The user can navigate the Assistant's long-term memory and change it if they want to, or add memories to the Assistant's knowledge base.
    Being transparent about what memories the Assistant has seems like a huge GDPR problem. That is why I chose to let the user essentially keep all their data locally and visible.

    On the other hand, I have been struggling a little to find a good way to test my system. In the last month I have been working on creating an army of LangChain agents that "pretend" to be humans, interact with my system, and then report issues with it. However, your method of testing is very interesting to me and I will analyze it further, because you have put some really great work into it and there is some great thinking in there.