As part of our research efforts in continual learning, we are open-sourcing Charlie Mnemonic, the first personal assistant (LLM agent) equipped with Long-Term Memory (LTM).
At first glance, Charlie might resemble existing LLM agents like ChatGPT, Claude, and Gemini. However, its distinctive feature is the implementation of LTM, enabling it to learn from every interaction. This includes storing and integrating user messages, assistant responses, and environmental feedback into LTM for future retrieval when relevant to the task at hand.
Charlie Mnemonic employs a combination of LTM, Short-Term Memory (STM), and episodic memory to deliver context-aware responses. This ability to remember interactions over time significantly improves the coherence and personalization of conversations.
Moreover, Charlie doesn't just memorize facts such as names, birthdays, or workplaces; it also learns instructions and skills. This means it can understand nuanced requests like writing emails differently to Anna than to John, fetching specific types of information, or managing smart home devices based on your preferences.
Envision LTM as an expandable, dynamic memory that captures and retains every detail, constantly enhancing its understanding and functionality.
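The post doesn't spell out Charlie's internals here, so the following is a minimal sketch of the store-and-retrieve idea behind an LTM: every interaction is stored, and the most relevant memories are recalled for the task at hand. For simplicity it uses bag-of-words cosine similarity in place of real embedding-based retrieval; the class and function names are illustrative, not Charlie's actual API.

```python
import math
from collections import Counter

def _vec(text: str) -> Counter:
    """Bag-of-words vector; a real system would use embeddings."""
    return Counter(text.lower().split())

def _cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class LongTermMemory:
    """Toy LTM: store every interaction, recall the most relevant ones."""

    def __init__(self):
        self.entries = []  # list of (text, vector) pairs

    def store(self, text: str) -> None:
        self.entries.append((text, _vec(text)))

    def recall(self, query: str, k: int = 2) -> list[str]:
        qv = _vec(query)
        ranked = sorted(self.entries, key=lambda e: _cosine(qv, e[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

ltm = LongTermMemory()
ltm.store("User: my sister's name is Anna")
ltm.store("User: I work at a robotics startup")
ltm.store("Assistant: noted, emails to Anna should be informal")
# The two Anna-related memories rank first for an Anna-related task.
print(ltm.recall("write an email to Anna", k=2))
```

In a production system the retrieved memories would then be injected into the LLM's prompt, which is how stored facts and learned instructions shape future responses.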
What is inside:
- The LLM powering Charlie is the OpenAI GPT-4 model, with the flexibility to switch to other LLMs in the future, including local models.
- The LTM system, developed by GoodAI, stands at the core of Charlie's advanced capabilities.
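Since the post mentions GPT-4 today with the flexibility to switch to other LLMs, including local ones, a common way to get that flexibility is a small backend interface between the agent and the model. The sketch below is only an assumption about how such a seam could look; `ChatModel`, `OpenAIBackend`, and `LocalBackend` are illustrative names, and no real API calls are made.

```python
from typing import Protocol

class ChatModel(Protocol):
    """Minimal interface any LLM backend must satisfy."""
    def complete(self, prompt: str) -> str: ...

class OpenAIBackend:
    """Placeholder for a backend that would call the OpenAI API."""
    def complete(self, prompt: str) -> str:
        return f"[gpt-4 reply to: {prompt}]"

class LocalBackend:
    """Placeholder for a backend wrapping a locally hosted model."""
    def complete(self, prompt: str) -> str:
        return f"[local model reply to: {prompt}]"

def answer(model: ChatModel, memories: list[str], question: str) -> str:
    # Retrieved LTM entries are prepended to the prompt before the model runs.
    context = "\n".join(memories)
    return model.complete(f"{context}\n\nUser: {question}")
```

With this seam in place, swapping GPT-4 for a local model is a one-line change at the call site rather than a rewrite of the agent.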
For more details, continue to the GoodAI blog post.
Github: https://github.com/GoodAI/charlie-mnemonic
Discord: https://discord.gg/Pfzs7WWJwf
Authors: Antony Alloin, Karel Hovorka, Ondrej Nahalka, Vojtech Neoral, and Marek Rosa
Thank you for reading this blog!
Best,
Marek Rosa
CEO, Creative Director, Founder at Keen Software House
CEO, CTO, Founder at GoodAI
For more news:
Space Engineers: www.SpaceEngineersGame.com
Keen Software House: www.keenswh.com
VRAGE Engine: www.keenswh.com/vrage/
GoodAI: www.GoodAI.com
Personal Blog: blog.marekrosa.org
Personal bio:
Marek Rosa is the founder and CEO of GoodAI, a general artificial intelligence R&D company, and Keen Software House, an independent game development studio founded in 2010 and best known for its best-seller Space Engineers (over 5 million copies sold). Space Engineers has the 4th-largest Workshop on Steam, with over 500K mods, ships, stations, worlds, and more!
I am working on a similar project for my Master's thesis. However, I handle the long-term memory using Obsidian, so the Assistant's long-term memory is fully open to the user: the user can navigate the Assistant's memory and change it if they want to, or add memories to the Assistant's knowledge base.
Not being transparent about what memories the Assistant has seems like a huge GDPR problem. That is why I chose to let the user essentially keep all their data locally and visible.
On the other hand, I have been struggling a little bit to find a good way to test my system. In the last month I have been working on creating an army of LangChain agents that "pretend" to be humans, interact with my system, and then report issues with it. However, your method of testing is very interesting to me, and I will analyze it further, because you have put some really great work into it and there is some great thinking in there.