In this tutorial, we build a complete AgentScope workflow from the ground up and run everything in Colab. We begin by wiring OpenAI into AgentScope and validating a basic model call to understand how messages and responses are handled. From there, we define custom tool functions, register them in a toolkit, and inspect the auto-generated schemas to see how tools are exposed to the agent. We then move on to a ReAct-based agent that dynamically decides when to call tools, followed by a multi-agent debate setup using MsgHub to simulate structured interaction between agents. Finally, we implement structured outputs with Pydantic and execute a concurrent multi-agent pipeline in which several specialists analyze a problem in parallel and a synthesiser combines their insights.
import subprocess, sys
subprocess.check_call([
    sys.executable, "-m", "pip", "install", "-q",
    "agentscope", "openai", "pydantic", "nest_asyncio",
])
print("✅ All packages installed.\n")
import nest_asyncio
nest_asyncio.apply()
import asyncio
import json
import getpass
import math
import datetime
from typing import Any
from pydantic import BaseModel, Field
from agentscope.agent import ReActAgent
from agentscope.formatter import OpenAIChatFormatter, OpenAIMultiAgentFormatter
from agentscope.memory import InMemoryMemory
from agentscope.message import Msg, TextBlock, ToolUseBlock
from agentscope.model import OpenAIChatModel
from agentscope.pipeline import MsgHub, sequential_pipeline
from agentscope.tool import Toolkit, ToolResponse
OPENAI_API_KEY = getpass.getpass("🔑 Enter your OpenAI API key: ")
MODEL_NAME = "gpt-4o-mini"
print(f"\n✅ API key captured. Using model: {MODEL_NAME}\n")
print("=" * 72)
def make_model(stream: bool = False) -> OpenAIChatModel:
    return OpenAIChatModel(
        model_name=MODEL_NAME,
        api_key=OPENAI_API_KEY,
        stream=stream,
        generate_kwargs={"temperature": 0.7, "max_tokens": 1024},
    )
print("\n" + "═" * 72)
print(" PART 1: Basic Model Call")
print("═" * 72)
async def part1_basic_model_call():
    model = make_model()
    response = await model(
        messages=[{"role": "user", "content": "What is AgentScope in one sentence?"}],
    )
    text = response.content[0]["text"]
    print(f"\n🤖 Model says: {text}")
    print(f"📊 Tokens used: {response.usage}")
asyncio.run(part1_basic_model_call())
We install all required dependencies and patch the event loop so that asynchronous code runs smoothly in Colab. We securely capture the OpenAI API key and configure the model through a helper function for reuse. We then run a basic model call to verify the setup and inspect the response and token usage.
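Independently of the SDK, it helps to see the request and response shapes involved. The sketch below uses plain dicts and makes no API call; the payload keys mirror the standard OpenAI chat format that make_model() ultimately produces, and the content-block list mirrors how the tutorial reads `response.content[0]["text"]`.

```python
# OpenAI-style chat request shape (illustrative only, no API call is made).
request = {
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "What is AgentScope in one sentence?"}],
    "temperature": 0.7,        # passed through via generate_kwargs
    "max_tokens": 1024,
}

# Responses come back as a list of content blocks; the tutorial reads
# the "text" field of the first block.
response_content = [{"type": "text", "text": "(model reply here)"}]
print(response_content[0]["text"])
```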
print("\n" + "═" * 72)
print(" PART 2: Custom Tool Functions & Toolkit")
print("═" * 72)
async def calculate_expression(expression: str) -> ToolResponse:
    allowed = {
        "abs": abs, "round": round, "min": min, "max": max,
        "sum": sum, "pow": pow, "int": int, "float": float,
        "sqrt": math.sqrt, "pi": math.pi, "e": math.e,
        "log": math.log, "sin": math.sin, "cos": math.cos,
        "tan": math.tan, "factorial": math.factorial,
    }
    try:
        result = eval(expression, {"__builtins__": {}}, allowed)
        return ToolResponse(content=[TextBlock(type="text", text=str(result))])
    except Exception as exc:
        return ToolResponse(content=[TextBlock(type="text", text=f"Error: {exc}")])
async def get_current_datetime(timezone_offset: int = 0) -> ToolResponse:
    now = datetime.datetime.now(datetime.timezone(datetime.timedelta(hours=timezone_offset)))
    return ToolResponse(
        content=[TextBlock(type="text", text=now.strftime("%Y-%m-%d %H:%M:%S %Z"))],
    )
toolkit = Toolkit()
toolkit.register_tool_function(calculate_expression)
toolkit.register_tool_function(get_current_datetime)
schemas = toolkit.get_json_schemas()
print("\n📋 Auto-generated tool schemas:")
print(json.dumps(schemas, indent=2))
async def part2_test_tool():
    result_gen = await toolkit.call_tool_function(
        ToolUseBlock(
            type="tool_use", id="test-1",
            name="calculate_expression",
            input={"expression": "factorial(10)"},
        ),
    )
    async for resp in result_gen:
        print(f"\n🔧 Tool result for factorial(10): {resp.content[0]['text']}")
asyncio.run(part2_test_tool())
We define custom tool functions for mathematical evaluation and datetime retrieval using controlled execution. We register these tools in a toolkit and inspect their auto-generated JSON schemas to understand how AgentScope exposes them. We then simulate a direct tool call to validate that the tool-execution pipeline works correctly.
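The "controlled execution" relies on a standard restricted-eval pattern: eval() runs with builtins stripped and only a whitelist of math names in scope. The standalone sketch below isolates that pattern (no AgentScope involved) so it can be tested directly:

```python
import math

# Whitelist of names the expression may use; everything else is undefined.
ALLOWED = {
    "abs": abs, "round": round, "min": min, "max": max,
    "sqrt": math.sqrt, "factorial": math.factorial, "pi": math.pi,
}

def safe_eval(expression: str) -> str:
    # Empty __builtins__ blocks __import__, open, exec, and friends.
    try:
        return str(eval(expression, {"__builtins__": {}}, ALLOWED))
    except Exception as exc:
        return f"Error: {exc}"

print(safe_eval("factorial(10)"))     # 3628800
print(safe_eval("__import__('os')"))  # rejected: __import__ is not defined
```

This is defense in depth rather than a full sandbox; for untrusted input, a real parser (e.g. `ast`-based evaluation) would be safer still.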
print("\n" + "═" * 72)
print(" PART 3: ReAct Agent with Tools")
print("═" * 72)
async def part3_react_agent():
    agent = ReActAgent(
        name="MathBot",
        sys_prompt=(
            "You are MathBot, a helpful assistant that solves math problems. "
            "Use the calculate_expression tool for any computation. "
            "Use get_current_datetime when asked about the time."
        ),
        model=make_model(),
        memory=InMemoryMemory(),
        formatter=OpenAIChatFormatter(),
        toolkit=toolkit,
        max_iters=5,
    )
    queries = [
        "What's the current time in UTC+5?",
    ]
    for q in queries:
        print(f"\n👤 User: {q}")
        msg = Msg("user", q, "user")
        response = await agent(msg)
        print(f"🤖 MathBot: {response.get_text_content()}")
        await agent.memory.clear()
asyncio.run(part3_react_agent())
print("\n" + "═" * 72)
print(" PART 4: Multi-Agent Debate (MsgHub)")
print("═" * 72)
DEBATE_TOPIC = (
    "Should artificial general intelligence (AGI) research be open-sourced, "
    "or should it remain behind closed doors at major labs?"
)
We assemble a ReAct agent that reasons about when to use tools and executes them dynamically. We pass user queries and observe how the agent combines reasoning with tool use to produce answers. We also reset memory between queries to keep the interactions independent and clean.
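AgentScope's ReActAgent drives this loop internally. As a mental model only (none of the names below come from AgentScope), the reason-act cycle alternates between the model choosing an action and the runtime executing tools and feeding observations back, bounded by max_iters:

```python
# Toy reason-act loop with a scripted "model" — an illustration of the
# cycle ReActAgent runs internally, not AgentScope code. Each scripted
# step either requests a tool call or emits a final answer.
def toy_react(query: str, tools: dict, scripted_steps: list, max_iters: int = 5) -> str:
    history = [("user", query)]
    for step in scripted_steps[:max_iters]:
        if step["action"] == "tool":
            observation = tools[step["name"]](*step["args"])
            history.append(("tool", observation))  # fed back to the model
        else:
            return step["answer"]
    return "(no answer within max_iters)"

tools = {"calculate": lambda expr: str(eval(expr, {"__builtins__": {}}, {}))}
steps = [
    {"action": "tool", "name": "calculate", "args": ["2 ** 10"]},
    {"action": "final", "answer": "2**10 is 1024."},
]
print(toy_react("What is 2**10?", tools, steps))  # 2**10 is 1024.
```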
async def part4_debate():
    proponent = ReActAgent(
        name="Proponent",
        sys_prompt=(
            f"You are the Proponent in a debate. You argue IN FAVOR of open-sourcing AGI research. "
            f"Topic: {DEBATE_TOPIC}\n"
            "Keep each response to 2-3 concise paragraphs. Address the other side's points directly."
        ),
        model=make_model(),
        memory=InMemoryMemory(),
        formatter=OpenAIMultiAgentFormatter(),
    )
    opponent = ReActAgent(
        name="Opponent",
        sys_prompt=(
            f"You are the Opponent in a debate. You argue AGAINST open-sourcing AGI research. "
            f"Topic: {DEBATE_TOPIC}\n"
            "Keep each response to 2-3 concise paragraphs. Address the other side's points directly."
        ),
        model=make_model(),
        memory=InMemoryMemory(),
        formatter=OpenAIMultiAgentFormatter(),
    )
    num_rounds = 2
    for rnd in range(1, num_rounds + 1):
        print(f"\n{'─' * 60}")
        print(f" ROUND {rnd}")
        print(f"{'─' * 60}")
        async with MsgHub(
            participants=[proponent, opponent],
            announcement=Msg("Moderator", f"Round {rnd}: begin. Topic: {DEBATE_TOPIC}", "assistant"),
        ):
            pro_msg = await proponent(
                Msg("Moderator", "Proponent, please present your argument.", "user"),
            )
            print(f"\n✅ Proponent:\n{pro_msg.get_text_content()}")
            opp_msg = await opponent(
                Msg("Moderator", "Opponent, please respond and present your counter-argument.", "user"),
            )
            print(f"\n❌ Opponent:\n{opp_msg.get_text_content()}")
    print(f"\n{'─' * 60}")
    print(" DEBATE COMPLETE")
    print(f"{'─' * 60}")
asyncio.run(part4_debate())
print("\n" + "═" * 72)
print(" PART 5: Structured Output with Pydantic")
print("═" * 72)
class MovieReview(BaseModel):
    title: str = Field(description="The movie title.")
    year: int = Field(description="The release year.")
    genre: str = Field(description="Primary genre of the movie.")
    rating: float = Field(description="Rating from 0.0 to 10.0.")
    pros: list[str] = Field(description="List of 2-3 strengths of the movie.")
    cons: list[str] = Field(description="List of 1-2 weaknesses of the movie.")
    verdict: str = Field(description="A one-sentence final verdict.")
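Before handing a schema to an agent, it is worth exercising it directly: Pydantic both coerces compatible values and rejects malformed ones, which is exactly how inconsistent model output gets caught downstream. The sketch below uses a trimmed-down model with the same field style (title, year, rating) purely for illustration:

```python
from pydantic import BaseModel, Field, ValidationError

# Trimmed illustration of the review schema (a subset of the fields used
# in the tutorial), exercised without any agent in the loop.
class ReviewSketch(BaseModel):
    title: str = Field(description="The movie title.")
    year: int = Field(description="The release year.")
    rating: float = Field(description="Rating from 0.0 to 10.0.")

# Valid payload: the string "2010" is coerced to the int 2010.
review = ReviewSketch.model_validate({"title": "Inception", "year": "2010", "rating": 8.8})
print(review.year, review.rating)

# Invalid payload: a non-numeric year raises ValidationError.
try:
    ReviewSketch.model_validate({"title": "Inception", "year": "soon", "rating": 8.8})
except ValidationError as exc:
    print("rejected field:", exc.errors()[0]["loc"])
```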
We create two agents with opposing roles and connect them through MsgHub for a structured multi-agent debate. We simulate several rounds in which each agent responds to the other while maintaining context through shared communication. We observe how agent coordination enables a coherent exchange of arguments across turns.
async def part5_structured_output():
    agent = ReActAgent(
        name="Critic",
        sys_prompt="You are a professional movie critic. When asked to review a movie, provide a thorough analysis.",
        model=make_model(),
        memory=InMemoryMemory(),
        formatter=OpenAIChatFormatter(),
    )
    msg = Msg("user", "Review the movie 'Inception' (2010) by Christopher Nolan.", "user")
    response = await agent(msg, structured_model=MovieReview)
    print("\n🎬 Structured Movie Review:")
    print(f" Title : {response.metadata.get('title', 'N/A')}")
    print(f" Year : {response.metadata.get('year', 'N/A')}")
    print(f" Genre : {response.metadata.get('genre', 'N/A')}")
    print(f" Rating : {response.metadata.get('rating', 'N/A')}/10")
    pros = response.metadata.get('pros', [])
    cons = response.metadata.get('cons', [])
    if pros:
        print(f" Pros : {', '.join(str(p) for p in pros)}")
    if cons:
        print(f" Cons : {', '.join(str(c) for c in cons)}")
    print(f" Verdict : {response.metadata.get('verdict', 'N/A')}")
    print(f"\n📝 Full text response:\n{response.get_text_content()}")
asyncio.run(part5_structured_output())
print("\n" + "═" * 72)
print(" PART 6: Concurrent Multi-Agent Pipeline")
print("═" * 72)
async def part6_concurrent_agents():
    specialists = {
        "Economist": "You are an economist. Analyze the given topic from an economic perspective in 2-3 sentences.",
        "Ethicist": "You are an ethicist. Analyze the given topic from an ethical perspective in 2-3 sentences.",
        "Technologist": "You are a technologist. Analyze the given topic from a technology perspective in 2-3 sentences.",
    }
    agents = []
    for name, prompt in specialists.items():
        agents.append(
            ReActAgent(
                name=name,
                sys_prompt=prompt,
                model=make_model(),
                memory=InMemoryMemory(),
                formatter=OpenAIChatFormatter(),
            )
        )
    topic_msg = Msg(
        "user",
        "Analyze the impact of large language models on the global workforce.",
        "user",
    )
    print("\n⏳ Running 3 specialist agents concurrently...")
    results = await asyncio.gather(*(agent(topic_msg) for agent in agents))
    for agent, result in zip(agents, results):
        print(f"\n🧠 {agent.name}:\n{result.get_text_content()}")
    synthesiser = ReActAgent(
        name="Synthesiser",
        sys_prompt=(
            "You are a synthesiser. You receive analyses from an Economist, "
            "an Ethicist, and a Technologist. Combine their perspectives into "
            "a single coherent summary of 3-4 sentences."
        ),
        model=make_model(),
        memory=InMemoryMemory(),
        formatter=OpenAIMultiAgentFormatter(),
    )
    combined_text = "\n\n".join(
        f"[{agent.name}]: {r.get_text_content()}" for agent, r in zip(agents, results)
    )
    synthesis = await synthesiser(
        Msg("user", f"Here are the specialist analyses:\n\n{combined_text}\n\nPlease synthesise.", "user"),
    )
    print(f"\n🔗 Synthesised Summary:\n{synthesis.get_text_content()}")
asyncio.run(part6_concurrent_agents())
print("\n" + "═" * 72)
print(" 🎉 TUTORIAL COMPLETE!")
print(" You have covered:")
print(" 1. Basic model calls with OpenAIChatModel")
print(" 2. Custom tool functions & auto-generated JSON schemas")
print(" 3. ReAct agent with tool use")
print(" 4. Multi-agent debate with MsgHub")
print(" 5. Structured output with Pydantic models")
print(" 6. Concurrent multi-agent pipelines")
print("═" * 72)
We implement structured outputs using a Pydantic schema to extract consistent fields from model responses. We then build a concurrent multi-agent pipeline in which several specialist agents analyze a topic in parallel. Finally, we aggregate their outputs with a synthesiser agent to produce a unified, coherent summary.
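The fan-out/fan-in shape of that pipeline is independent of AgentScope; stripped to plain asyncio with stub coroutines standing in for agent calls, it reduces to:

```python
import asyncio

# Minimal fan-out/fan-in sketch: stub coroutines stand in for the
# specialist ReActAgent calls, and a join step stands in for the
# synthesiser agent.
async def specialist(name: str, topic: str) -> str:
    await asyncio.sleep(0.01)  # stands in for a model call
    return f"[{name}] view on {topic}"

async def pipeline(topic: str) -> str:
    names = ["Economist", "Ethicist", "Technologist"]
    # gather() runs all three coroutines concurrently and preserves order.
    analyses = await asyncio.gather(*(specialist(n, topic) for n in names))
    return " | ".join(analyses)  # stands in for the synthesis step

summary = asyncio.run(pipeline("LLMs and the workforce"))
print(summary)
```

Because gather() preserves input order, the combined text always lists the specialists in the order they were defined, regardless of which stub finishes first.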
In conclusion, we have implemented a full-stack agentic system that goes beyond simple prompting into orchestrated reasoning, tool use, and collaboration. We now understand how AgentScope manages memory, formatting, and tool execution under the hood, and how ReAct agents bridge reasoning with action. We also saw how multi-agent systems can be coordinated both sequentially and concurrently, and how structured outputs ensure reliability in downstream applications. With these building blocks, we are ready to design more advanced agent architectures, extend tool ecosystems, and deploy scalable, production-ready AI systems.

