In this tutorial, we take a deep dive into nanobot, the ultra-lightweight personal AI agent framework from HKUDS that packs full agent capabilities into roughly 4,000 lines of Python. Rather than simply installing and running it out of the box, we crack open the hood and manually recreate each of its core subsystems (the agent loop, tool execution, memory persistence, skills loading, session management, subagent spawning, and cron scheduling) so we understand exactly how they work. We wire everything up with OpenAI's gpt-4o-mini as our LLM provider, enter our API key securely via the terminal (never exposing it in notebook output), and progressively build from a single tool-calling loop all the way to a multi-step research pipeline that reads and writes files, stores long-term memories, and delegates tasks to concurrent background workers. By the end, we don't just know how to use nanobot; we understand how to extend it with custom tools, skills, and our own agent architectures.
import sys
import os
import subprocess

def section(title, emoji="🔹"):
    """Pretty-print a section header."""
    width = 72
    print(f"\n{'═' * width}")
    print(f" {emoji} {title}")
    print(f"{'═' * width}\n")

def info(msg):
    print(f" ℹ️ {msg}")

def success(msg):
    print(f" ✅ {msg}")

def code_block(code):
    print(" ┌─────────────────────────────────────────────────")
    for line in code.strip().split("\n"):
        print(f" │ {line}")
    print(" └─────────────────────────────────────────────────")
section("STEP 1 · Installing nanobot-ai & Dependencies", "📦")
info("Installing nanobot-ai from PyPI (latest stable)…")
subprocess.check_call([
    sys.executable, "-m", "pip", "install", "-q",
    "nanobot-ai", "openai", "rich", "httpx"
])
success("nanobot-ai installed successfully!")

import importlib.metadata
nanobot_version = importlib.metadata.version("nanobot-ai")
print(f" 📌 nanobot-ai version: {nanobot_version}")
section("STEP 2 · Secure OpenAI API Key Input", "🔑")
info("Your API key will NOT be printed or saved in notebook output.")
info("It is held only in memory for this session.\n")
try:
    from google.colab import userdata
    OPENAI_API_KEY = userdata.get("OPENAI_API_KEY")
    if not OPENAI_API_KEY:
        raise ValueError("Not set in Colab secrets")
    success("Loaded API key from Colab Secrets ('OPENAI_API_KEY').")
    info("Tip: You can set this in Colab → 🔑 Secrets panel on the left sidebar.")
except Exception:
    import getpass
    OPENAI_API_KEY = getpass.getpass("Enter your OpenAI API key: ")
    success("API key captured securely via terminal input.")

os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY

import openai
client = openai.OpenAI(api_key=OPENAI_API_KEY)
try:
    client.models.list()
    success("OpenAI API key validated — connection successful!")
except Exception as e:
    print(f" ❌ API key validation failed: {e}")
    print(" Please restart and enter a valid key.")
    sys.exit(1)
section("STEP 3 · Configuring nanobot for OpenAI", "⚙️")
import json
from pathlib import Path

NANOBOT_HOME = Path.home() / ".nanobot"
NANOBOT_HOME.mkdir(parents=True, exist_ok=True)
WORKSPACE = NANOBOT_HOME / "workspace"
WORKSPACE.mkdir(parents=True, exist_ok=True)
(WORKSPACE / "memory").mkdir(parents=True, exist_ok=True)

config = {
    "providers": {
        "openai": {
            "apiKey": OPENAI_API_KEY
        }
    },
    "agents": {
        "defaults": {
            "model": "openai/gpt-4o-mini",
            "maxTokens": 4096,
            "workspace": str(WORKSPACE)
        }
    },
    "tools": {
        "restrictToWorkspace": True
    }
}
config_path = NANOBOT_HOME / "config.json"
config_path.write_text(json.dumps(config, indent=2))
success(f"Config written to {config_path}")
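Note that the config above sets `restrictToWorkspace: true`, but the hand-rolled `execute_tool` we build later in this notebook does not actually enforce it: a path like `../secrets.txt` would escape the workspace. Below is a minimal sketch of the check that setting implies; the helper name `resolve_in_workspace` is our own illustration, not nanobot's API.

```python
import tempfile
from pathlib import Path

def resolve_in_workspace(workspace: Path, relative: str) -> Path:
    """Resolve a user-supplied relative path and refuse anything that
    escapes the workspace root (e.g. '../outside.txt')."""
    candidate = (workspace / relative).resolve()
    if not candidate.is_relative_to(workspace.resolve()):
        raise ValueError(f"Path escapes workspace: {relative}")
    return candidate

ws = Path(tempfile.mkdtemp())                        # throwaway workspace for the demo
print(resolve_in_workspace(ws, "notes/plan.txt"))    # stays inside → allowed
try:
    resolve_in_workspace(ws, "../escape.txt")        # climbs out → rejected
except ValueError as e:
    print(f"Blocked: {e}")
```

Wrapping every `read_file`/`write_file` path through a check like this is what keeps a tool-calling LLM sandboxed to its own directory.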
agents_md = WORKSPACE / "AGENTS.md"
agents_md.write_text(
    "# Agent Instructions\n\n"
    "You are nanobot 🐈, an ultra-lightweight personal AI assistant.\n"
    "You are helpful, concise, and use tools when needed.\n"
    "Always explain your reasoning step-by-step.\n"
)
soul_md = WORKSPACE / "SOUL.md"
soul_md.write_text(
    "# Personality\n\n"
    "- Friendly and approachable\n"
    "- Technically precise\n"
    "- Uses emoji sparingly for warmth\n"
)
user_md = WORKSPACE / "USER.md"
user_md.write_text(
    "# User Profile\n\n"
    "- The user is exploring the nanobot framework.\n"
    "- They are interested in AI agent architectures.\n"
)
memory_md = WORKSPACE / "memory" / "MEMORY.md"
memory_md.write_text("# Long-term Memory\n\n_No memories saved yet._\n")
success("Workspace bootstrap files created:")
for f in [agents_md, soul_md, user_md, memory_md]:
    print(f" 📄 {f.relative_to(NANOBOT_HOME)}")
section("STEP 4 · nanobot Architecture Deep Dive", "🏗️")
info("""nanobot is organized into 7 subsystems in ~4,000 lines of code:
┌──────────────────────────────────────────────────────────┐
│                    USER INTERFACES                       │
│          CLI · Telegram · WhatsApp · Discord             │
└──────────────────┬───────────────────────────────────────┘
                   │ InboundMessage / OutboundMessage
┌──────────────────▼───────────────────────────────────────┐
│                     MESSAGE BUS                          │
│        publish_inbound() / publish_outbound()            │
└──────────────────┬───────────────────────────────────────┘
                   │
┌──────────────────▼───────────────────────────────────────┐
│                AGENT LOOP (loop.py)                      │
│  ┌─────────┐   ┌──────────┐   ┌────────────────────┐     │
│  │ Context │ → │ LLM      │ → │ Tool Execution     │     │
│  │ Builder │   │ Call     │   │ (if tool_calls)    │     │
│  └─────────┘   └──────────┘   └────────┬───────────┘     │
│       ▲                                │ loop back       │
│       │ ◄──────────────────────────────┘ until done      │
│  ┌────┴────┐   ┌──────────┐   ┌────────────────────┐     │
│  │ Memory  │   │ Skills   │   │ Subagent Mgr       │     │
│  │ Store   │   │ Loader   │   │ (spawn tasks)      │     │
│  └─────────┘   └──────────┘   └────────────────────┘     │
└──────────────────────────────────────────────────────────┘
                   │
┌──────────────────▼───────────────────────────────────────┐
│                LLM PROVIDER LAYER                        │
│     OpenAI · Anthropic · OpenRouter · DeepSeek · …       │
└──────────────────────────────────────────────────────────┘

The Agent Loop iterates up to 40 times (configurable):
1. ContextBuilder assembles the system prompt + memory + skills + history
2. The LLM is called with tool definitions
3. If the response has tool_calls → execute the tools, append results, loop
4. If the response is plain text → return it as the final answer
""")
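The MESSAGE BUS layer in the diagram is the one subsystem this notebook never implements, yet it is easy to approximate with two stdlib queues. The sketch below is our own simplified illustration of the publish/inbound-outbound idea; nanobot's real bus is asynchronous and richer than this.

```python
import queue
from dataclasses import dataclass

@dataclass
class InboundMessage:
    channel: str      # e.g. "cli" or "telegram"
    chat_id: str
    content: str

@dataclass
class OutboundMessage:
    channel: str
    chat_id: str
    content: str

class SimpleMessageBus:
    """Decouples user interfaces from the agent loop with two FIFO queues."""
    def __init__(self):
        self.inbound: queue.Queue = queue.Queue()
        self.outbound: queue.Queue = queue.Queue()

    def publish_inbound(self, msg: InboundMessage):
        self.inbound.put(msg)       # a channel adapter (CLI, Telegram…) calls this

    def publish_outbound(self, msg: OutboundMessage):
        self.outbound.put(msg)      # the agent loop calls this with its reply

bus = SimpleMessageBus()
bus.publish_inbound(InboundMessage("cli", "user1", "hello"))
msg = bus.inbound.get()             # the agent loop would consume this
print(msg.channel, msg.content)
```

The payoff of this indirection is that the agent loop never needs to know which chat front-end a message came from; it only ever sees `InboundMessage` objects.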
We set up the full foundation of the tutorial by importing the required modules, defining helper functions for clean section display, and installing the nanobot dependencies inside Google Colab. We then securely load and validate the OpenAI API key so the rest of the notebook can interact with the model without exposing credentials in the notebook output. After that, we configure the nanobot workspace, create the core bootstrap files such as AGENTS.md, SOUL.md, USER.md, and MEMORY.md, and study the high-level architecture so we understand how the framework is organized before moving into implementation.
section("STEP 5 · The Agent Loop — Core Concept in Action", "🔄")
info("We'll manually recreate nanobot's agent loop pattern using OpenAI.")
info("This is exactly what loop.py does internally.\n")

import json as _json
import datetime

TOOLS = [
    {
        "type": "function",
        "function": {
            "name": "get_current_time",
            "description": "Get the current date and time.",
            "parameters": {"type": "object", "properties": {}, "required": []}
        }
    },
    {
        "type": "function",
        "function": {
            "name": "calculate",
            "description": "Evaluate a mathematical expression.",
            "parameters": {
                "type": "object",
                "properties": {
                    "expression": {
                        "type": "string",
                        "description": "Math expression to evaluate, e.g. '2**10 + 42'"
                    }
                },
                "required": ["expression"]
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "read_file",
            "description": "Read the contents of a file in the workspace.",
            "parameters": {
                "type": "object",
                "properties": {
                    "path": {
                        "type": "string",
                        "description": "Relative file path within the workspace"
                    }
                },
                "required": ["path"]
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "write_file",
            "description": "Write content to a file in the workspace.",
            "parameters": {
                "type": "object",
                "properties": {
                    "path": {"type": "string", "description": "Relative file path"},
                    "content": {"type": "string", "description": "Content to write"}
                },
                "required": ["path", "content"]
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "save_memory",
            "description": "Save a fact to the agent's long-term memory.",
            "parameters": {
                "type": "object",
                "properties": {
                    "fact": {"type": "string", "description": "The fact to remember"}
                },
                "required": ["fact"]
            }
        }
    }
]
def execute_tool(name: str, arguments: dict) -> str:
    """Execute a tool call — mirrors nanobot's ToolRegistry.execute()."""
    if name == "get_current_time":
        return datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    elif name == "calculate":
        expr = arguments.get("expression", "")
        try:
            # Evaluate with no builtins and a small whitelist of safe helpers
            result = eval(expr, {"__builtins__": {}}, {"abs": abs, "round": round, "min": min, "max": max})
            return str(result)
        except Exception as e:
            return f"Error: {e}"
    elif name == "read_file":
        fpath = WORKSPACE / arguments.get("path", "")
        if fpath.exists():
            return fpath.read_text()[:4000]
        return f"Error: File not found — {arguments.get('path')}"
    elif name == "write_file":
        fpath = WORKSPACE / arguments.get("path", "")
        fpath.parent.mkdir(parents=True, exist_ok=True)
        fpath.write_text(arguments.get("content", ""))
        return f"Successfully wrote {len(arguments.get('content', ''))} chars to {arguments.get('path')}"
    elif name == "save_memory":
        fact = arguments.get("fact", "")
        mem_file = WORKSPACE / "memory" / "MEMORY.md"
        current = mem_file.read_text()
        timestamp = datetime.datetime.now().strftime("%Y-%m-%d %H:%M")
        mem_file.write_text(current + f"\n- [{timestamp}] {fact}\n")
        return f"Memory saved: {fact}"
    return f"Unknown tool: {name}"
def agent_loop(user_message: str, max_iterations: int = 10, verbose: bool = True):
    """
    Recreates nanobot's AgentLoop._process_message() logic.
    The loop:
    1. Build context (system prompt + bootstrap files + memory)
    2. Call the LLM with tools
    3. If tool_calls → execute → append results → loop
    4. If text response → return final answer
    """
    system_parts = []
    for md_file in ["AGENTS.md", "SOUL.md", "USER.md"]:
        fpath = WORKSPACE / md_file
        if fpath.exists():
            system_parts.append(fpath.read_text())
    mem_file = WORKSPACE / "memory" / "MEMORY.md"
    if mem_file.exists():
        system_parts.append(f"\n## Your Memory\n{mem_file.read_text()}")
    system_prompt = "\n\n".join(system_parts)
    messages = [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_message}
    ]
    if verbose:
        print(f" 📨 User: {user_message}")
        print(f" 🧠 System prompt: {len(system_prompt)} chars "
              f"(from {len(system_parts)} bootstrap files)")
        print()
    for iteration in range(1, max_iterations + 1):
        if verbose:
            print(f" ── Iteration {iteration}/{max_iterations} ──")
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=messages,
            tools=TOOLS,
            tool_choice="auto",
            max_tokens=2048
        )
        choice = response.choices[0]
        message = choice.message
        if message.tool_calls:
            if verbose:
                print(f" 🔧 LLM requested {len(message.tool_calls)} tool call(s):")
            messages.append(message.model_dump())
            for tc in message.tool_calls:
                fname = tc.function.name
                args = _json.loads(tc.function.arguments) if tc.function.arguments else {}
                if verbose:
                    print(f" → {fname}({_json.dumps(args, ensure_ascii=False)[:80]})")
                result = execute_tool(fname, args)
                if verbose:
                    print(f" ← {result[:100]}{'…' if len(result) > 100 else ''}")
                messages.append({
                    "role": "tool",
                    "tool_call_id": tc.id,
                    "content": result
                })
            if verbose:
                print()
        else:
            final = message.content or ""
            if verbose:
                print(f" 💬 Agent: {final}\n")
            return final
    return "⚠️ Max iterations reached without a final response."
print("─" * 60)
print(" DEMO 1: Time-aware calculation with tool chaining")
print("─" * 60)
result1 = agent_loop(
    "What's the current time? Also, calculate 2^20 + 42 for me."
)
print("─" * 60)
print(" DEMO 2: File creation + memory storage")
print("─" * 60)
result2 = agent_loop(
    "Write a haiku about AI agents to a file called 'haiku.txt'. "
    "Then remember that I enjoy poetry about technology."
)
We manually recreate the heart of nanobot by defining the tool schemas, implementing their execution logic, and building the iterative agent loop that connects the LLM to tools. We assemble the prompt from the workspace files and memory, send the conversation to the model, detect tool calls, execute them, append the results back into the conversation, and keep looping until the model returns a final answer. We then test this mechanism with practical examples involving time lookups, calculations, file writing, and memory saving, so we can watch the loop operate exactly like the internal nanobot flow.
section("STEP 6 · Memory System — Persistent Agent Memory", "🧠")
info("""nanobot's memory system (memory.py) uses two storage mechanisms:
1. MEMORY.md — Long-term facts (always loaded into context)
2. YYYY-MM-DD.md — Daily journal entries (loaded for recent days)
Memory consolidation runs periodically to summarize and compress
old entries, keeping the context window manageable.
""")
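The info block above says daily YYYY-MM-DD.md journals are loaded only for recent days. Below is a sketch of that selection logic under our own assumptions (the `load_recent_journals` helper and its windowing are our illustration; nanobot's actual implementation may differ):

```python
import datetime
import tempfile
from pathlib import Path

def load_recent_journals(memory_dir: Path, days: int = 3) -> str:
    """Concatenate daily journal files from the last `days` days, oldest first."""
    chunks = []
    today = datetime.date.today()
    for offset in range(days - 1, -1, -1):          # oldest day in the window first
        day = today - datetime.timedelta(days=offset)
        fpath = memory_dir / f"{day.isoformat()}.md"
        if fpath.exists():
            chunks.append(fpath.read_text())
    return "\n\n".join(chunks)

# Demo with a throwaway memory directory:
mem = Path(tempfile.mkdtemp())
(mem / f"{datetime.date.today().isoformat()}.md").write_text("# Today\n- ran tutorial")
old = datetime.date.today() - datetime.timedelta(days=10)
(mem / f"{old.isoformat()}.md").write_text("# Old\n- stale entry")   # outside the window
context = load_recent_journals(mem, days=3)
print("ran tutorial" in context, "stale entry" in context)   # True False
```

Bounding the window like this is what keeps the journal from growing the system prompt without limit; consolidation then folds what matters from old days into MEMORY.md.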
mem_content = (WORKSPACE / "memory" / "MEMORY.md").read_text()
print(" 📂 Current MEMORY.md contents:")
print(" ┌─────────────────────────────────────────────")
for line in mem_content.strip().split("\n"):
    print(f" │ {line}")
print(" └─────────────────────────────────────────────\n")

today = datetime.datetime.now().strftime("%Y-%m-%d")
daily_file = WORKSPACE / "memory" / f"{today}.md"
daily_file.write_text(
    f"# Daily Log — {today}\n\n"
    "- User ran the nanobot advanced tutorial\n"
    "- Explored agent loop, tools, and memory\n"
    "- Created a haiku about AI agents\n"
)
success(f"Daily journal created: memory/{today}.md")
print("\n 📁 Workspace contents:")
for item in sorted(WORKSPACE.rglob("*")):
    if item.is_file():
        rel = item.relative_to(WORKSPACE)
        size = item.stat().st_size
        print(f" {'📄' if item.suffix == '.md' else '📝'} {rel} ({size} bytes)")
section("STEP 7 · Skills System — Extending Agent Capabilities", "🎯")
info("""nanobot's SkillsLoader (skills.py) reads Markdown files from the
skills/ directory. Each skill has:
- A name and description (for the LLM to decide when to use it)
- Instructions the LLM follows when the skill is activated
- Some skills are 'always loaded'; others are loaded on demand
Let's create a custom skill and see how the agent uses it.
""")
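The demo below simply concatenates the raw skill files into the prompt, but a real loader first parses structure out of each file so it can decide which skills are always loaded. Here is a hedged sketch of such a parser for the section layout we use in this tutorial (the `## Description` / `## Always Available` headings are this notebook's own file convention, not necessarily nanobot's):

```python
def parse_skill(markdown: str) -> dict:
    """Extract name, description, and the always-available flag from a
    skill file written in this tutorial's section layout."""
    name, sections, current = "", {}, None
    for line in markdown.splitlines():
        if line.startswith("# ") and not name:
            name = line[2:].strip()                 # top-level heading = skill name
        elif line.startswith("## "):
            current = line[3:].strip().lower()      # start a new section
            sections[current] = []
        elif current is not None:
            sections[current].append(line)
    desc = "\n".join(sections.get("description", [])).strip()
    always = "true" in "\n".join(sections.get("always available", [])).lower()
    return {"name": name, "description": desc, "always_available": always}

demo = "# Code Reviewer Skill\n## Description\nReview code.\n## Always Available\ntrue\n"
print(parse_skill(demo))
```

With parsed metadata in hand, a loader can inject always-available skills into every prompt and mention the rest only by name and description, letting the LLM request them on demand.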
skills_dir = WORKSPACE / "skills"
skills_dir.mkdir(exist_ok=True)
data_skill = skills_dir / "data_analyst.md"
data_skill.write_text("""# Data Analyst Skill
## Description
Analyze data, compute statistics, and provide insights from numbers.
## Instructions
When asked to analyze data:
1. Identify the data type and structure
2. Compute relevant statistics (mean, median, range, std dev)
3. Look for patterns and outliers
4. Present findings in a clear, structured format
5. Suggest follow-up questions
## Always Available
false
""")
review_skill = skills_dir / "code_reviewer.md"
review_skill.write_text("""# Code Reviewer Skill
## Description
Review code for bugs, security issues, and best practices.
## Instructions
When reviewing code:
1. Check for common bugs and logic errors
2. Identify security vulnerabilities
3. Suggest performance improvements
4. Evaluate code style and readability
5. Rate the code quality on a 1-10 scale
## Always Available
true
""")
success("Custom skills created:")
for f in skills_dir.iterdir():
    print(f" 🎯 {f.name}")
print("\n 🧪 Testing skill-aware agent interaction:")
print(" " + "─" * 56)
skills_context = "\n\n## Available Skills\n"
for skill_file in skills_dir.glob("*.md"):
    content = skill_file.read_text()
    skills_context += f"\n### {skill_file.stem}\n{content}\n"
result3 = agent_loop(
    "Review this Python code for issues:\n\n"
    "```python\n"
    "def get_user(id):\n"
    "    query = f'SELECT * FROM users WHERE id = {id}'\n"
    "    result = db.execute(query)\n"
    "    return result\n"
    "```"
)
We move into the persistent memory system by inspecting the long-term memory file, creating a daily journal entry, and reviewing how the workspace evolves after the earlier interactions. We then extend the agent with a skills system by creating markdown-based skill files that describe specialized behaviors such as data analysis and code review. Finally, we simulate how skill-aware prompting works by exposing these skills to the agent and asking it to review a Python function, which shows how nanobot can be guided through modular capability descriptions.
section("STEP 8 · Custom Tool Creation — Extending the Agent", "🔧")
info("""nanobot's tool system uses a ToolRegistry with a simple interface.
Each tool needs:
- A name and description
- A JSON Schema for parameters
- An execute() method
Let's create custom tools and wire them into our agent loop.
""")
import random

CUSTOM_TOOLS = [
    {
        "type": "function",
        "function": {
            "name": "roll_dice",
            "description": "Roll one or more dice with a given number of sides.",
            "parameters": {
                "type": "object",
                "properties": {
                    "num_dice": {"type": "integer", "description": "Number of dice to roll", "default": 1},
                    "sides": {"type": "integer", "description": "Number of sides per die", "default": 6}
                },
                "required": []
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "text_stats",
            "description": "Compute statistics about a text: word count, char count, sentence count, reading time.",
            "parameters": {
                "type": "object",
                "properties": {
                    "text": {"type": "string", "description": "The text to analyze"}
                },
                "required": ["text"]
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "generate_password",
            "description": "Generate a random secure password.",
            "parameters": {
                "type": "object",
                "properties": {
                    "length": {"type": "integer", "description": "Password length", "default": 16}
                },
                "required": []
            }
        }
    }
]
_original_execute = execute_tool

def execute_tool_extended(name: str, arguments: dict) -> str:
    if name == "roll_dice":
        n = arguments.get("num_dice", 1)
        s = arguments.get("sides", 6)
        rolls = [random.randint(1, s) for _ in range(n)]
        return f"Rolled {n}d{s}: {rolls} (total: {sum(rolls)})"
    elif name == "text_stats":
        text = arguments.get("text", "")
        words = len(text.split())
        chars = len(text)
        sentences = text.count('.') + text.count('!') + text.count('?')
        reading_time = round(words / 200, 1)
        return _json.dumps({
            "words": words,
            "characters": chars,
            "sentences": max(sentences, 1),
            "reading_time_minutes": reading_time
        })
    elif name == "generate_password":
        import string
        length = arguments.get("length", 16)
        chars = string.ascii_letters + string.digits + "!@#$%^&*"
        pwd = ''.join(random.choice(chars) for _ in range(length))
        return f"Generated password ({length} chars): {pwd}"
    # Fall through to the original built-in tools
    return _original_execute(name, arguments)

execute_tool = execute_tool_extended
ALL_TOOLS = TOOLS + CUSTOM_TOOLS
def agent_loop_v2(user_message: str, max_iterations: int = 10, verbose: bool = True):
    """Agent loop with extended custom tools."""
    system_parts = []
    for md_file in ["AGENTS.md", "SOUL.md", "USER.md"]:
        fpath = WORKSPACE / md_file
        if fpath.exists():
            system_parts.append(fpath.read_text())
    mem_file = WORKSPACE / "memory" / "MEMORY.md"
    if mem_file.exists():
        system_parts.append(f"\n## Your Memory\n{mem_file.read_text()}")
    system_prompt = "\n\n".join(system_parts)
    messages = [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_message}
    ]
    if verbose:
        print(f" 📨 User: {user_message}")
        print()
    for iteration in range(1, max_iterations + 1):
        if verbose:
            print(f" ── Iteration {iteration}/{max_iterations} ──")
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=messages,
            tools=ALL_TOOLS,
            tool_choice="auto",
            max_tokens=2048
        )
        choice = response.choices[0]
        message = choice.message
        if message.tool_calls:
            if verbose:
                print(f" 🔧 {len(message.tool_calls)} tool call(s):")
            messages.append(message.model_dump())
            for tc in message.tool_calls:
                fname = tc.function.name
                args = _json.loads(tc.function.arguments) if tc.function.arguments else {}
                if verbose:
                    print(f" → {fname}({_json.dumps(args, ensure_ascii=False)[:80]})")
                result = execute_tool(fname, args)
                if verbose:
                    print(f" ← {result[:120]}{'…' if len(result) > 120 else ''}")
                messages.append({
                    "role": "tool",
                    "tool_call_id": tc.id,
                    "content": result
                })
            if verbose:
                print()
        else:
            final = message.content or ""
            if verbose:
                print(f" 💬 Agent: {final}\n")
            return final
    return "⚠️ Max iterations reached."
print("─" * 60)
print(" DEMO 3: Custom tools in action")
print("─" * 60)
result4 = agent_loop_v2(
    "Roll 3 six-sided dice for me, then generate a 20-character password, "
    "and finally analyze the text stats of this sentence: "
)
section("STEP 9 · Multi-Turn Conversation — Session Management", "💬")
info("""nanobot's SessionManager (session/manager.py) maintains conversation
history per session_key (format: 'channel:chat_id'). History is stored
in JSON files and loaded into context for each new message.
Let's simulate a multi-turn conversation with persistent state.
""")
We expand the agent's capabilities by defining new custom tools such as dice rolling, text statistics, and password generation, and then wiring them into the tool execution pipeline. We update the executor, merge the built-in and custom tool definitions, and create a second version of the agent loop that can reason over this larger set of capabilities. We then run a demo task that forces the model to chain multiple tool invocations, demonstrating how easy it is to extend nanobot with our own functions while keeping the same overall interaction pattern.
class SimpleSessionManager:
    """
    Minimal recreation of nanobot's SessionManager.
    Stores conversation history and provides context continuity.
    """
    def __init__(self, workspace: Path):
        self.workspace = workspace
        self.sessions: dict[str, list[dict]] = {}

    def get_history(self, session_key: str) -> list[dict]:
        return self.sessions.get(session_key, [])

    def add_turn(self, session_key: str, role: str, content: str):
        if session_key not in self.sessions:
            self.sessions[session_key] = []
        self.sessions[session_key].append({"role": role, "content": content})

    def save(self, session_key: str):
        fpath = self.workspace / f"session_{session_key.replace(':', '_')}.json"
        fpath.write_text(_json.dumps(self.sessions.get(session_key, []), indent=2))

    def load(self, session_key: str):
        fpath = self.workspace / f"session_{session_key.replace(':', '_')}.json"
        if fpath.exists():
            self.sessions[session_key] = _json.loads(fpath.read_text())

session_mgr = SimpleSessionManager(WORKSPACE)
SESSION_KEY = "cli:tutorial_user"

def chat(user_message: str, verbose: bool = True):
    """Multi-turn chat with session persistence."""
    session_mgr.add_turn(SESSION_KEY, "user", user_message)
    system_parts = []
    for md_file in ["AGENTS.md", "SOUL.md"]:
        fpath = WORKSPACE / md_file
        if fpath.exists():
            system_parts.append(fpath.read_text())
    system_prompt = "\n\n".join(system_parts)
    history = session_mgr.get_history(SESSION_KEY)
    messages = [{"role": "system", "content": system_prompt}] + history
    if verbose:
        print(f" 👤 You: {user_message}")
        print(f" (conversation history: {len(history)} messages)")
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=messages,
        max_tokens=1024
    )
    reply = response.choices[0].message.content or ""
    session_mgr.add_turn(SESSION_KEY, "assistant", reply)
    session_mgr.save(SESSION_KEY)
    if verbose:
        print(f" 🐈 nanobot: {reply}\n")
    return reply

print("─" * 60)
print(" DEMO 4: Multi-turn conversation with memory")
print("─" * 60)
chat("Hi! My name is Alex and I'm building an AI agent.")
chat("What's my name? And what am I working on?")
chat("Can you suggest 3 features I should add to my agent?")
success("Session persisted with full conversation history!")
session_file = WORKSPACE / f"session_{SESSION_KEY.replace(':', '_')}.json"
session_data = _json.loads(session_file.read_text())
print(f" 📄 Session file: {session_file.name} ({len(session_data)} messages)")
section("STEP 10 · Subagent Spawning — Background Task Delegation", "🚀")
info("""nanobot's SubagentManager (agent/subagent.py) lets the main agent
delegate tasks to independent background workers. Each subagent:
- Gets its own tool registry (no SpawnTool, to prevent recursion)
- Runs up to 15 iterations independently
- Reports results back via the MessageBus
Let's simulate this pattern with concurrent tasks.
""")
import asyncio
import uuid

async def run_subagent(task_id: str, goal: str, verbose: bool = True):
    """
    Simulates nanobot's SubagentManager._run_subagent().
    Runs an independent LLM loop for a specific goal.
    """
    if verbose:
        print(f" 🔹 Subagent [{task_id[:8]}] started: {goal[:60]}")
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "You are a focused research assistant. "
             "Complete the assigned task concisely in 2-3 sentences."},
            {"role": "user", "content": goal}
        ],
        max_tokens=256
    )
    result = response.choices[0].message.content or ""
    if verbose:
        print(f" ✅ Subagent [{task_id[:8]}] completed: {result[:80]}…")
    return {"task_id": task_id, "goal": goal, "result": result}

async def spawn_subagents(goals: list[str]):
    """Spawn multiple subagents concurrently — mirrors SubagentManager.spawn()."""
    tasks = []
    for goal in goals:
        task_id = str(uuid.uuid4())
        tasks.append(run_subagent(task_id, goal))
    print(f"\n 🚀 Spawning {len(tasks)} subagents concurrently…\n")
    results = await asyncio.gather(*tasks)
    return results

goals = [
    "What are the 3 key components of a ReAct agent architecture?",
    "Explain the difference between tool-calling and function-calling in LLMs.",
    "What is MCP (Model Context Protocol) and why does it matter for AI agents?",
]
try:
    loop = asyncio.get_running_loop()
    import nest_asyncio
    nest_asyncio.apply()
    subagent_results = asyncio.get_event_loop().run_until_complete(spawn_subagents(goals))
except RuntimeError:
    subagent_results = asyncio.run(spawn_subagents(goals))
except ModuleNotFoundError:
    print(" ℹ️ Running subagents sequentially (install nest_asyncio for async)…\n")
    subagent_results = []
    for goal in goals:
        task_id = str(uuid.uuid4())
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system", "content": "Complete the task concisely in 2-3 sentences."},
                {"role": "user", "content": goal}
            ],
            max_tokens=256
        )
        r = response.choices[0].message.content or ""
        print(f" ✅ Subagent [{task_id[:8]}] completed: {r[:80]}…")
        subagent_results.append({"task_id": task_id, "goal": goal, "result": r})

print(f"\n 📋 All {len(subagent_results)} subagent results collected!")
for i, r in enumerate(subagent_results, 1):
    print(f"\n ── Result {i} ──")
    print(f" Goal: {r['goal'][:60]}")
    print(f" Answer: {r['result'][:200]}")
We simulate multi-turn conversation management by building a lightweight session manager that stores, retrieves, and persists conversation history across turns. We use that history to maintain continuity in the chat, allowing the agent to remember details from earlier in the interaction and respond more coherently and statefully. After that, we model subagent spawning by launching concurrent background tasks that each handle a focused objective, which shows how nanobot can delegate parallel work to independent agent workers.
section("STEP 11 · Scheduled Tasks — The Cron Pattern", "⏰")
info("""nanobot's CronService (cron/service.py) uses APScheduler to trigger
agent actions on a schedule. When a job fires, it creates an
InboundMessage and publishes it to the MessageBus.
Let's demonstrate the pattern with a simulated scheduler.
""")
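The real CronService leans on APScheduler, but the "is this job due, and what happens when it fires" logic is simple enough to sketch with the stdlib alone. The function below is our own illustration under that assumption; nanobot's API will differ.

```python
import datetime

def due_jobs(jobs: list[dict], now: datetime.datetime) -> list[dict]:
    """Return the jobs whose next_run has passed, advancing their schedule."""
    fired = []
    for job in jobs:
        if job["enabled"] and now >= job["next_run"]:
            fired.append(job)
            job["last_run"] = now
            # keep the cadence anchored to the interval, not to `now`
            job["next_run"] = job["next_run"] + datetime.timedelta(seconds=job["interval"])
    return fired

now = datetime.datetime(2025, 1, 1, 12, 0)
jobs = [
    {"name": "health_check", "enabled": True, "interval": 3600,
     "next_run": datetime.datetime(2025, 1, 1, 11, 30), "last_run": None},
    {"name": "briefing", "enabled": True, "interval": 86400,
     "next_run": datetime.datetime(2025, 1, 2, 8, 0), "last_run": None},
]
fired = due_jobs(jobs, now)
print([j["name"] for j in fired])   # ['health_check']
print(jobs[0]["next_run"])          # 2025-01-01 12:30:00
```

Each fired job's message would then be wrapped in an InboundMessage and published to the bus, so a scheduled task enters the agent loop exactly like a message typed by the user.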
from datetime import timedelta

class SimpleCronJob:
    """Mirrors nanobot's cron job structure."""
    def __init__(self, name: str, message: str, interval_seconds: int):
        self.id = str(uuid.uuid4())[:8]
        self.name = name
        self.message = message
        self.interval = interval_seconds
        self.enabled = True
        self.last_run = None
        self.next_run = datetime.datetime.now() + timedelta(seconds=interval_seconds)

jobs = [
    SimpleCronJob("morning_briefing", "Give me a brief morning status update.", 86400),
    SimpleCronJob("memory_cleanup", "Review and consolidate my memories.", 43200),
    SimpleCronJob("health_check", "Run a system health check.", 3600),
]
print(" 📋 Registered Cron Jobs:")
print(" ┌──────────┬────────────────────┬──────────┬──────────────────┐")
print(" │ ID       │ Name               │ Interval │ Next Run         │")
print(" ├──────────┼────────────────────┼──────────┼──────────────────┤")
for job in jobs:
    interval_str = f"{job.interval // 3600}h" if job.interval >= 3600 else f"{job.interval}s"
    print(f" │ {job.id} │ {job.name:<18} │ {interval_str:>8} │ {job.next_run.strftime('%Y-%m-%d %H:%M')} │")
print(" └──────────┴────────────────────┴──────────┴──────────────────┘")
print(f"\n ⏰ Simulating cron trigger for '{jobs[2].name}'…")
cron_result = agent_loop_v2(jobs[2].message, verbose=True)
section("STEP 12 · Full Agent Pipeline — End-to-End Demo", "🎬")
info("""Now let's run a complex, multi-step task that exercises the full
nanobot pipeline: context building → tool use → memory → file I/O.
""")
print("─" * 60)
print(" DEMO 5: Complex multi-step research task")
print("─" * 60)
complex_result = agent_loop_v2(
    "I need you to help me with a small project:\n"
    "1. First, check the current time\n"
    "2. Write a short project plan to 'project_plan.txt' about building "
    "a personal AI assistant (3-4 bullet points)\n"
    "3. Remember that my current project is 'building a personal AI assistant'\n"
    "4. Read back the project plan file to verify it was saved correctly\n"
    "Then summarize everything you did.",
    max_iterations=15
)
section("STEP 13 · Final Workspace Summary", "📊")
print(" 📁 Full workspace state after the tutorial:\n")
total_files = 0
total_bytes = 0
for item in sorted(WORKSPACE.rglob("*")):
    if item.is_file():
        rel = item.relative_to(WORKSPACE)
        size = item.stat().st_size
        total_files += 1
        total_bytes += size
        icon = {"md": "📄", "txt": "📝", "json": "📋"}.get(item.suffix.lstrip("."), "📎")
        print(f" {icon} {rel} ({size:,} bytes)")
print("\n ── Summary ──")
print(f" Total files: {total_files}")
print(f" Total size: {total_bytes:,} bytes")
print(f" Config: {config_path}")
print(f" Workspace: {WORKSPACE}")
print("\n 🧠 Final Memory State:")
mem_content = (WORKSPACE / "memory" / "MEMORY.md").read_text()
print(" ┌─────────────────────────────────────────────")
for line in mem_content.strip().split("\n"):
    print(f" │ {line}")
print(" └─────────────────────────────────────────────")
section("COMPLETE · What's Next?", "🎉")
print(""" You've explored the core internals of nanobot! Here's what to try next:
 🔹 Run the real CLI agent:
      nanobot onboard && nanobot agent
 🔹 Connect to Telegram:
      Add a bot token to config.json and run `nanobot gateway`
 🔹 Enable web search:
      Add a Brave Search API key under tools.web.search.apiKey
 🔹 Try MCP integration:
      nanobot supports Model Context Protocol servers for external tools
 🔹 Explore the source (~4K lines):
      https://github.com/HKUDS/nanobot
 🔹 Key files to read:
      • agent/loop.py — The agent iteration loop
      • agent/context.py — Prompt assembly pipeline
      • agent/memory.py — Persistent memory system
      • agent/tools/ — Built-in tool implementations
      • agent/subagent.py — Background task delegation
""")
We demonstrate the cron-style scheduling pattern by defining simple scheduled jobs, listing their intervals and next run times, and simulating the trigger of an automated agent task. We then run a larger end-to-end example that combines context building, tool use, memory updates, and file operations into a single multi-step workflow, so we can see the full pipeline working together on a practical task. At the end, we inspect the final workspace state, review the saved memory, and close the tutorial with clear next steps that connect this notebook implementation to the real nanobot project and its source code.
In conclusion, we walked through every major layer of nanobot's architecture, from the iterative LLM-tool loop at its core to the session manager that gives our agent conversational memory across turns. We built five built-in tools, three custom tools, two skills, a session persistence layer, a subagent spawner, and a cron simulator, all while keeping everything in a single runnable script. What stands out is how nanobot proves that a production-grade agent framework doesn't need hundreds of thousands of lines of code; the patterns we implemented here (context assembly, tool dispatch, memory consolidation, and background task delegation) are the same patterns that power far larger systems, just stripped down to their essence. We now have a working mental model of agentic AI internals and a codebase small enough to read in a single sitting, which makes nanobot an ideal choice for anyone looking to build, customize, or research AI agents from the ground up.
Michal Sutter is a data science professional with a Master of Science in Data Science from the University of Padova. With a solid foundation in statistical analysis, machine learning, and data engineering, Michal excels at transforming complex datasets into actionable insights.

