In this tutorial, we build a universal long-term memory layer for AI agents using Mem0, OpenAI models, and ChromaDB. We design a system that can extract structured memories from natural conversations, store them semantically, retrieve them intelligently, and integrate them directly into personalized agent responses. We move beyond simple chat history and implement persistent, user-scoped memory with full CRUD control, semantic search, multi-user isolation, and custom configuration. Finally, we assemble a production-ready memory-augmented agent architecture that demonstrates how modern AI systems can reason with contextual continuity rather than operate statelessly.
!pip install mem0ai openai rich chromadb -q
import os
import getpass
from datetime import datetime

print("=" * 60)
print("🔐 MEM0 Advanced Tutorial — API Key Setup")
print("=" * 60)
OPENAI_API_KEY = getpass.getpass("Enter your OpenAI API key: ")
os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY
print("\n✅ API key set!\n")
from openai import OpenAI
from mem0 import Memory
from rich.console import Console
from rich.panel import Panel
from rich.table import Table
from rich.markdown import Markdown
from rich import print as rprint
import json

console = Console()
openai_client = OpenAI()

console.rule("[bold cyan]MODULE 1: Basic Memory Setup[/bold cyan]")
memory = Memory()
print(Panel(
    "[green]✓ Memory instance created with default config[/green]\n"
    "  • LLM: gpt-4.1-nano (OpenAI)\n"
    "  • Vector Store: ChromaDB (local)\n"
    "  • Embedder: text-embedding-3-small",
    title="Memory Config", border_style="cyan"
))
We install all required dependencies and securely configure our OpenAI API key. We initialize the Mem0 Memory instance along with the OpenAI client and Rich console utilities. We establish the foundation of our long-term memory system with the default configuration powered by ChromaDB and OpenAI embeddings.
console.rule("[bold cyan]MODULE 2: Adding & Retrieving Memories[/bold cyan]")
USER_ID = "alice_tutorial"
print("\n📝 Adding memories for user:", USER_ID)
conversations = [
    [
        {"role": "user", "content": "Hi! I'm Alice. I'm a software engineer who loves Python and machine learning."},
        {"role": "assistant", "content": "Nice to meet you Alice! Python and ML are great areas to be in."}
    ],
    [
        {"role": "user", "content": "I prefer dark mode in all my IDEs and I use VS Code as my main editor."},
        {"role": "assistant", "content": "Good to know! VS Code with dark mode is a popular combo."}
    ],
    [
        {"role": "user", "content": "I'm currently building a RAG pipeline for my company's internal docs. It's for a fintech startup."},
        {"role": "assistant", "content": "That's exciting! RAG pipelines are really valuable for enterprise use cases."}
    ],
    [
        {"role": "user", "content": "I have a dog named Max and I enjoy hiking on weekends."},
        {"role": "assistant", "content": "Max sounds lovely! Hiking is a great way to recharge."}
    ],
]
results = []
for i, convo in enumerate(conversations):
    result = memory.add(convo, user_id=USER_ID)
    extracted = result.get("results", [])
    for mem in extracted:
        results.append(mem)
    print(f"  Conversation {i+1}: {len(extracted)} memory(ies) extracted")
print(f"\n✅ Total memories stored: {len(results)}")
We simulate realistic multi-turn conversations and store them using Mem0's automatic memory extraction pipeline. We add structured conversational data for a specific user and let the LLM extract meaningful long-term facts. We verify how many memories are created, confirming that semantic information is successfully persisted.
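Note that the shape of `memory.add()`'s return value has varied across Mem0 releases: older versions return a bare list, while the v1.1 output format returns a dict with a `results` key. A small defensive helper, written here as an illustrative sketch rather than part of the Mem0 API, normalizes both shapes:

```python
def count_extracted(add_result):
    """Count extracted memories from a Mem0 add() result, handling
    both the bare-list shape and the {'results': [...]} dict shape."""
    if isinstance(add_result, dict):
        return len(add_result.get("results", []))
    return len(add_result or [])
```

This keeps the loop above from breaking if you upgrade Mem0 or toggle the config `version` field.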
console.rule("[bold cyan]MODULE 3: Semantic Search[/bold cyan]")
queries = [
    "What programming languages does the user prefer?",
    "What is Alice working on professionally?",
    "What are Alice's hobbies?",
    "What tools and IDE does Alice use?",
]
for query in queries:
    search_results = memory.search(query=query, user_id=USER_ID, limit=2)
    table = Table(title=f"🔍 Query: {query}", show_lines=True)
    table.add_column("Memory", style="white", max_width=60)
    table.add_column("Score", style="green", justify="center")
    for r in search_results.get("results", []):
        score = r.get("score", "N/A")
        score_str = f"{score:.4f}" if isinstance(score, float) else str(score)
        table.add_row(r["memory"], score_str)
    console.print(table)
    print()
console.rule("[bold cyan]MODULE 4: CRUD Operations[/bold cyan]")
all_memories = memory.get_all(user_id=USER_ID)
memories_list = all_memories.get("results", [])
print(f"\n📚 All memories for '{USER_ID}':")
for i, mem in enumerate(memories_list):
    print(f"  [{i+1}] ID: {mem['id'][:8]}... → {mem['memory']}")
if memories_list:
    first_id = memories_list[0]["id"]
    original_text = memories_list[0]["memory"]
    print(f"\n✏️ Updating memory: '{original_text}'")
    memory.update(memory_id=first_id, data=original_text + " (confirmed)")
    updated = memory.get(memory_id=first_id)
    print(f"  After update: '{updated['memory']}'")
We perform semantic search queries to retrieve relevant memories using natural language. We demonstrate how Mem0 ranks stored memories by similarity score and returns the most contextually aligned information. We also perform CRUD operations by listing, updating, and validating stored memory entries.
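Depending on the vector store backend, each search hit may carry a numeric similarity `score` (as the table above prints), and in agent pipelines it is common to drop low-relevance hits before injecting them into a prompt. The following is a hedged sketch, not part of the Mem0 API; the threshold value is an assumption, and hits without a numeric score are kept since some backends omit it:

```python
def filter_by_score(search_results, min_score=0.3):
    """Keep only search hits whose similarity score clears a threshold.
    Hits without a numeric score are kept, since some backends omit it."""
    kept = []
    for r in search_results.get("results", []):
        score = r.get("score")
        if not isinstance(score, (int, float)) or score >= min_score:
            kept.append(r)
    return kept
```

Tune `min_score` empirically: score scales differ between embedding models and distance metrics.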
console.rule("[bold cyan]MODULE 5: Memory-Augmented Chat[/bold cyan]")

def chat_with_memory(user_message: str, user_id: str, session_history: list) -> str:
    relevant = memory.search(query=user_message, user_id=user_id, limit=5)
    memory_context = "\n".join(
        f"- {r['memory']}" for r in relevant.get("results", [])
    ) or "No relevant memories found."
    system_prompt = f"""You are a highly personalized AI assistant.
You have access to long-term memories about this user.

RELEVANT USER MEMORIES:
{memory_context}

Use these memories to provide context-aware, personalized responses.
Be natural — don't explicitly announce that you're using memories."""
    messages = [{"role": "system", "content": system_prompt}]
    messages.extend(session_history[-6:])
    messages.append({"role": "user", "content": user_message})
    response = openai_client.chat.completions.create(
        model="gpt-4.1-nano-2025-04-14",
        messages=messages
    )
    assistant_response = response.choices[0].message.content
    exchange = [
        {"role": "user", "content": user_message},
        {"role": "assistant", "content": assistant_response}
    ]
    memory.add(exchange, user_id=user_id)
    session_history.append({"role": "user", "content": user_message})
    session_history.append({"role": "assistant", "content": assistant_response})
    return assistant_response

session = []
demo_messages = [
    "Can you recommend a good IDE setup for me?",
    "What kind of project am I currently building at work?",
    "Suggest a weekend activity I might enjoy.",
    "What's a good tech stack for my current project?",
]
print("\n🤖 Starting memory-augmented conversation with Alice...\n")
for msg in demo_messages:
    print(Panel(f"[bold yellow]User:[/bold yellow] {msg}", border_style="yellow"))
    response = chat_with_memory(msg, USER_ID, session)
    print(Panel(f"[bold green]Assistant:[/bold green] {response}", border_style="green"))
    print()
We build a fully memory-augmented chat loop that retrieves relevant memories before generating responses. We dynamically inject personalized context into the system prompt and store each new exchange back into long-term memory. We simulate a multi-turn session to demonstrate contextual continuity and personalization in action.
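The prompt-injection step in the chat loop reduces to two pure transformations: flattening search hits into a bulleted context string, and assembling the final message list with a bounded history window. Factoring them out as standalone helpers (hypothetical names, not part of Mem0 or OpenAI) makes the logic unit-testable without any API calls:

```python
def format_memory_context(search_results, max_items=5):
    """Flatten Mem0-style search results into a bulleted context string
    for prompt injection; falls back to a fixed line when empty."""
    items = search_results.get("results", [])[:max_items]
    if not items:
        return "No relevant memories found."
    return "\n".join(f"- {r['memory']}" for r in items)

def build_messages(system_prompt, session_history, user_message, max_history=6):
    """Assemble the chat message list: system prompt first, then a
    bounded window of prior turns, then the new user message."""
    msgs = [{"role": "system", "content": system_prompt}]
    msgs.extend(session_history[-max_history:])
    msgs.append({"role": "user", "content": user_message})
    return msgs
```

Capping the history window keeps the prompt size bounded as sessions grow, since long-term facts already live in Mem0 rather than in the transcript.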
console.rule("[bold cyan]MODULE 6: Multi-User Memory Isolation[/bold cyan]")
USER_BOB = "bob_tutorial"
bob_conversations = [
    [
        {"role": "user", "content": "I'm Bob, a data scientist specializing in computer vision and PyTorch."},
        {"role": "assistant", "content": "Great to meet you Bob!"}
    ],
    [
        {"role": "user", "content": "I prefer Jupyter notebooks over VS Code, and I use Vim keybindings."},
        {"role": "assistant", "content": "Classic setup for data science work!"}
    ],
]
for convo in bob_conversations:
    memory.add(convo, user_id=USER_BOB)
print("\n🔐 Testing memory isolation between Alice and Bob:\n")
test_query = "What programming tools does this user prefer?"
alice_results = memory.search(query=test_query, user_id=USER_ID, limit=3)
bob_results = memory.search(query=test_query, user_id=USER_BOB, limit=3)
print("👩 Alice's memories:")
for r in alice_results.get("results", []):
    print(f"  • {r['memory']}")
print("\n👨 Bob's memories:")
for r in bob_results.get("results", []):
    print(f"  • {r['memory']}")
We demonstrate user-level memory isolation by introducing a second user with distinct preferences. We store separate conversational data and validate that searches remain scoped to the correct user_id. We confirm that memory namespaces are isolated, ensuring safe multi-user agent deployments.
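Isolation can also be asserted programmatically rather than by eyeballing the printed lists: compare the memory IDs returned for each user scope. A small illustrative check (assuming each hit carries an `id` field, as the CRUD module above relies on):

```python
def memories_overlap(results_a, results_b):
    """Return the set of memory IDs present in both users' search results.
    An empty set indicates the two user scopes are isolated."""
    ids_a = {r["id"] for r in results_a.get("results", [])}
    ids_b = {r["id"] for r in results_b.get("results", [])}
    return ids_a & ids_b
```

In the tutorial flow you would call `memories_overlap(alice_results, bob_results)` and expect an empty set; a check like this is worth keeping as a regression test in multi-tenant deployments.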
print("\n✅ Memory isolation confirmed — users cannot see each other's data.")
console.rule("[bold cyan]MODULE 7: Custom Configuration[/bold cyan]")
custom_config = {
    "llm": {
        "provider": "openai",
        "config": {
            "model": "gpt-4.1-nano-2025-04-14",
            "temperature": 0.1,
            "max_tokens": 2000,
        }
    },
    "embedder": {
        "provider": "openai",
        "config": {
            "model": "text-embedding-3-small",
        }
    },
    "vector_store": {
        "provider": "chroma",
        "config": {
            "collection_name": "advanced_tutorial_v2",
            "path": "/tmp/chroma_advanced",
        }
    },
    "version": "v1.1"
}
custom_memory = Memory.from_config(custom_config)
print(Panel(
    "[green]✓ Custom memory instance created[/green]\n"
    "  • LLM: gpt-4.1-nano with temperature=0.1\n"
    "  • Embedder: text-embedding-3-small\n"
    "  • Vector Store: ChromaDB at /tmp/chroma_advanced\n"
    "  • Collection: advanced_tutorial_v2",
    title="Custom Config Applied", border_style="magenta"
))
custom_memory.add(
    [{"role": "user", "content": "I'm a researcher studying neural plasticity and brain-computer interfaces."}],
    user_id="researcher_01"
)
result = custom_memory.search("What field does this person work in?", user_id="researcher_01", limit=2)
print("\n🔍 Custom memory search result:")
for r in result.get("results", []):
    print(f"  • {r['memory']}")
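Because Mem0 routes everything through this provider-keyed config, swapping the vector store is a config-only change: the add/search/update calls stay identical. Below is a hedged sketch of the same configuration pointed at a local Qdrant instance; the `host`/`port`/`collection_name` fields follow Mem0's provider-config pattern but should be verified against the Mem0 docs for your installed version:

```python
# Hypothetical Qdrant variant of the custom config above.
# Assumes a Qdrant server is reachable at localhost:6333 (the default port).
qdrant_config = {
    "llm": {
        "provider": "openai",
        "config": {"model": "gpt-4.1-nano-2025-04-14", "temperature": 0.1},
    },
    "embedder": {
        "provider": "openai",
        "config": {"model": "text-embedding-3-small"},
    },
    "vector_store": {
        "provider": "qdrant",
        "config": {
            "collection_name": "advanced_tutorial_qdrant",
            "host": "localhost",
            "port": 6333,
        },
    },
    "version": "v1.1",
}
# qdrant_memory = Memory.from_config(qdrant_config)  # requires a running Qdrant
```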
console.rule("[bold cyan]MODULE 8: Memory History[/bold cyan]")
all_alice = memory.get_all(user_id=USER_ID)
alice_memories = all_alice.get("results", [])
table = Table(title=f"📋 Full Memory Profile: {USER_ID}", show_lines=True, width=90)
table.add_column("#", style="dim", width=3)
table.add_column("Memory ID", style="cyan", width=12)
table.add_column("Memory Content", style="white")
table.add_column("Created At", style="yellow", width=12)
for i, mem in enumerate(alice_memories):
    mem_id = mem["id"][:8] + "..."
    created = mem.get("created_at", "N/A")
    if created and created != "N/A":
        try:
            created = datetime.fromisoformat(created.replace("Z", "+00:00")).strftime("%m/%d %H:%M")
        except Exception:
            created = str(created)[:10]
    table.add_row(str(i+1), mem_id, mem["memory"], created)
console.print(table)

console.rule("[bold cyan]MODULE 9: Memory Deletion[/bold cyan]")
all_mems = memory.get_all(user_id=USER_ID).get("results", [])
if all_mems:
    last_mem = all_mems[-1]
    print(f"\n🗑️ Deleting memory: '{last_mem['memory']}'")
    memory.delete(memory_id=last_mem["id"])
    updated_count = len(memory.get_all(user_id=USER_ID).get("results", []))
    print(f"✅ Deleted. Remaining memories for {USER_ID}: {updated_count}")
console.rule("[bold cyan]✅ TUTORIAL COMPLETE[/bold cyan]")
summary = """
# 🎓 Mem0 Advanced Tutorial Summary
## What You Learned:
1. **Basic Setup** — Instantiate Memory with default & custom configs
2. **Add Memories** — From conversations (auto-extracted by LLM)
3. **Semantic Search** — Retrieve relevant memories by natural language query
4. **CRUD Operations** — Get, Update, Delete individual memories
5. **Memory-Augmented Chat** — Full pipeline: retrieve → respond → store
6. **Multi-User Isolation** — Separate memory namespaces per user_id
7. **Custom Configuration** — Custom LLM, embedder, and vector store
8. **Memory History** — View full memory profiles with timestamps
9. **Cleanup** — Delete specific or all memories
## Key Concepts:
- `memory.add(messages, user_id=...)`
- `memory.search(query, user_id=...)`
- `memory.get_all(user_id=...)`
- `memory.update(memory_id, data)`
- `memory.delete(memory_id)`
- `Memory.from_config(config)`
## Next Steps:
- Swap ChromaDB for Qdrant, Pinecone, or Weaviate
- Use the hosted Mem0 Platform (app.mem0.ai) for production
- Integrate with LangChain, CrewAI, or LangGraph agents
- Add `agent_id` for agent-level memory scoping
"""
console.print(Markdown(summary))
We create a fully custom Mem0 configuration with explicit parameters for the LLM, embedder, and vector store. We test the custom memory instance and explore memory history, timestamps, and structured profiling. Finally, we demonstrate deletion and cleanup operations, completing the full lifecycle management of long-term agent memory.
In conclusion, we implemented a complete memory infrastructure for AI agents using Mem0 as a universal memory abstraction layer. We demonstrated how to add, retrieve, update, delete, isolate, and customize long-term memories while integrating them into a dynamic chat loop. We showed how semantic memory retrieval transforms generic assistants into context-aware systems capable of personalization and continuity across sessions. With this foundation in place, we are now equipped to extend the architecture into multi-agent systems, enterprise-grade deployments, alternative vector databases, and advanced agent frameworks, turning memory into a core capability rather than an afterthought.
Check out the full implementation code and notebook.