Modern AI applications increasingly rely on intelligent agents that reason, cooperate, and execute complex workflows, while single-agent systems struggle with scalability, coordination, and long-term context. AgentScope AI addresses this by offering a modular, extensible framework for building structured multi-agent systems, giving developers and researchers role assignment, memory control, tool integration, and efficient inter-agent communication without unnecessary complexity. In this article, we provide a practical overview of its architecture, features, comparisons, and real-world use cases.
AgentScope is an open-source framework for building structured, scalable, production-ready multi-agent AI systems. Its main focus is on clear abstractions, modular design, and explicit communication between agents rather than ad-hoc prompt chaining.
AgentScope was created by researchers and engineers in the AI systems community to overcome the challenges of coordination and observability in complex agent workflows. Because it targets both research and production environments, the framework aims to be rigorous, reproducible, and extensible while remaining reliable enough for experimentation and deployment alike.
As LLM applications grow more complex, developers increasingly rely on multiple agents working together. However, many teams struggle with managing agent interactions, shared state, and long-term memory reliably.
AgentScope solves these problems by introducing explicit agent abstractions, message-passing mechanisms, and structured memory management. Its core goals include clear agent abstractions, structured communication between agents, reliable state and memory handling, and observability across complex workflows.
In short, AgentScope is designed to make the development of complex, agent-based AI systems easier. It provides modular building blocks and orchestration tools, occupying the middle ground between simple LLM utilities and full-blown multi-agent platforms.


AgentScope packages several powerful features for building multi-agent workflows. One of its principal strengths is the MsgHub abstraction, which broadcasts messages among a group of agents and lets you add or remove participants on the fly, as the snippet below shows:
from agentscope.message import Msg
from agentscope.pipeline import MsgHub, sequential_pipeline

# agent1 ... agent4 are assumed to be already-constructed agents;
# this block runs inside an async function.
async with MsgHub(
    participants=[agent1, agent2, agent3],
    announcement=Msg("Host", "Introduce yourselves.", "assistant"),
) as hub:
    await sequential_pipeline([agent1, agent2, agent3])
    # Add or remove agents on the fly
    hub.add(agent4)
    hub.delete(agent3)
    await hub.broadcast(Msg("Host", "Wrap up.", "assistant"))

Following the official quickstart, getting AgentScope up and running is straightforward. The framework requires Python 3.10 or above, and installation can be done either from PyPI or from source:
From PyPI:
Run the following command in your terminal:
pip install agentscope
This installs the most recent version of AgentScope and its dependencies. (If you are using a uv environment, run uv pip install agentscope instead, as described in the docs.)
From Source:
Step 1: Clone the GitHub repository:
git clone -b main https://github.com/agentscope-ai/agentscope.git
cd agentscope
Step 2: Install in editable mode:
pip install -e .
This installs AgentScope in your Python environment, linked to your local copy. You can also use uv pip install -e . if you are working in a uv environment.
After installation, you can import AgentScope classes directly in your Python code. The Hello AgentScope example in the repository demonstrates a basic conversation loop between a ReActAgent and a UserAgent.
AgentScope doesn’t require any extra server configuration; it is simply a Python library. Once installed, you can create agents, design pipelines, and start testing immediately.
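As an illustration, here is a minimal sketch of such a conversation loop. It follows the agent-construction pattern used later in this article rather than the exact quickstart code; the model name, API key, and "exit" convention are placeholder assumptions.

import asyncio
from agentscope.agent import ReActAgent, UserAgent
from agentscope.formatter import OpenAIChatFormatter
from agentscope.model import OpenAIChatModel

async def main():
    # Assistant agent backed by an OpenAI model (model name and key are placeholders)
    assistant = ReActAgent(
        name="Assistant",
        sys_prompt="You are a helpful assistant.",
        model=OpenAIChatModel(model_name="gpt-4.1-mini", api_key="your_openai_api_key"),
        formatter=OpenAIChatFormatter(),
    )
    # UserAgent reads input from the terminal
    user = UserAgent(name="user")
    msg = None
    while True:
        msg = await user(msg)
        if msg.get_text_content() == "exit":  # type "exit" to stop the loop
            break
        msg = await assistant(msg)

asyncio.run(main())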
Let’s build a working multi-agent system in which two AI models, Claude and GPT, take on different roles and compete with each other: Claude generates problems while GPT attempts to solve them. We will walk through each part of the code and see how AgentScope manages this interaction.
Importing Required Libraries
import os
import asyncio
from typing import List
from pydantic import BaseModel
from agentscope.agent import ReActAgent
from agentscope.formatter import OpenAIChatFormatter, AnthropicChatFormatter
from agentscope.message import Msg
from agentscope.model import OpenAIChatModel, AnthropicChatModel
from agentscope.pipeline import MsgHub
Here we import the necessary modules from AgentScope and Python’s standard library. The ReActAgent class is used to create the agents, while the formatters prepare messages in the format each provider’s API expects. Msg is AgentScope’s message object, the unit of communication between agents.
Configuring API Keys and Model Names
os.environ["OPENAI_API_KEY"] = "your_openai_api_key"
os.environ["ANTHROPIC_API_KEY"] = "your_claude_api_key"
OPENAI_API_KEY = os.environ["OPENAI_API_KEY"]
ANTHROPIC_API_KEY = os.environ["ANTHROPIC_API_KEY"]
CLAUDE_MODEL_NAME = "claude-sonnet-4-20250514"
GPT_SOLVER_MODEL_NAME = "gpt-4.1-mini"
This setup provides the API credentials for both OpenAI and Anthropic. We also define the specific model names, which are passed to the model clients to select which model each agent uses.
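As a side note (not part of the original setup), you may prefer not to hardcode secrets in the script; a small sketch like the following prompts for any key that is missing from the environment:

import os
from getpass import getpass

# Prompt only for keys that are not already set in the environment,
# so secrets never have to be written into the source file.
for key in ("OPENAI_API_KEY", "ANTHROPIC_API_KEY"):
    if not os.environ.get(key):
        os.environ[key] = getpass(f"Enter {key}: ")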
Round Log Structure:
class RoundLog(BaseModel):
    round_index: int
    creator_model: str
    solver_model: str
    problem: str
    solver_answer: str
    judge_decision: str
    solver_score: int
    creator_score: int
This data model records everything about a single round of the contest: the participating models, the generated problem, the solver’s answer, the judge’s decision, and the running scores, making it easy to review and analyze each interaction.
Global Score Structure:
class GlobalScore(BaseModel):
    total_rounds: int
    creator_model: str
    solver_model: str
    creator_score: int
    solver_score: int
    rounds: List[RoundLog]
This structure holds the overall competition results across all rounds. It preserves the final scores and the full round history, giving us a comprehensive view of each agent’s performance over the complete workflow.
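Because both structures are Pydantic models, they can also be serialized for logging or later analysis. A small illustrative example (assuming Pydantic v2, which provides model_dump_json):

sample_round = RoundLog(
    round_index=1,
    creator_model=CLAUDE_MODEL_NAME,
    solver_model=GPT_SOLVER_MODEL_NAME,
    problem="Plan a weekly grocery budget.",
    solver_answer="Track spending for a week, then set category limits.",
    judge_decision="Solver (GPT-4.1 mini) successfully solved the problem.",
    solver_score=1,
    creator_score=0,
)
summary = GlobalScore(
    total_rounds=1,
    creator_model=CLAUDE_MODEL_NAME,
    solver_model=GPT_SOLVER_MODEL_NAME,
    creator_score=0,
    solver_score=1,
    rounds=[sample_round],
)
print(summary.model_dump_json(indent=2))  # pretty-printed JSON log of the run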
Normalizing Agent Messages
def extract_text(msg) -> str:
    """Normalize an AgentScope message (or similar) into a plain string."""
    if isinstance(msg, str):
        return msg
    get_tc = getattr(msg, "get_text_content", None)
    if callable(get_tc):
        text = get_tc()
        if isinstance(text, str):
            return text
    content = getattr(msg, "content", None)
    if isinstance(content, str):
        return content
    if isinstance(content, list):
        parts = []
        for block in content:
            if isinstance(block, dict) and "text" in block:
                parts.append(block["text"])
        if parts:
            return "\n".join(parts)
    text_attr = getattr(msg, "text", None)
    if isinstance(text_attr, str):
        return text_attr
    messages_attr = getattr(msg, "messages", None)
    if isinstance(messages_attr, list) and messages_attr:
        last = messages_attr[-1]
        last_content = getattr(last, "content", None)
        if isinstance(last_content, str):
            return last_content
        last_text = getattr(last, "text", None)
        if isinstance(last_text, str):
            return last_text
    return ""
This helper function reliably extracts readable text from agent responses regardless of the message format. Different models structure their responses differently, so the function checks each possible shape in turn and converts it into a plain string we can work with.
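For instance (a quick illustrative check, not part of the original walkthrough), it passes plain strings through unchanged and pulls the text out of block-style content:

class _FakeMsg:
    # Mimics a message whose content is a list of text blocks
    content = [{"type": "text", "text": "Hello from a block list"}]

print(extract_text("already plain text"))  # -> "already plain text"
print(extract_text(_FakeMsg()))            # -> "Hello from a block list"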
Creating the Problem Creator Agent (Claude)
def create_creator_agent() -> ReActAgent:
    return ReActAgent(
        name="ClaudeCreator",
        sys_prompt=(
            "You are Claude Sonnet, acting as a problem creator. "
            "Your task: in each round, create ONE realistic everyday problem that "
            "some people might face (e.g., scheduling, budgeting, productivity, "
            "communication, personal decision making). "
            "The problem should:\n"
            "- Be clearly described in 3–6 sentences.\n"
            "- Be self-contained and solvable with reasoning and common sense.\n"
            "- NOT require private data or external tools.\n"
            "Return ONLY the problem description, no solution."
        ),
        model=AnthropicChatModel(
            model_name=CLAUDE_MODEL_NAME,
            api_key=ANTHROPIC_API_KEY,
            stream=False,
        ),
        formatter=AnthropicChatFormatter(),
    )
This factory function creates an agent that plays the role of Claude as a problem creator, inventing realistic everyday problems. The system prompt constrains the kind of problems it can generate: self-contained scenarios that require reasoning but no external tools or private information to solve.
Creating the Problem Solver Agent (GPT)
def create_solver_agent() -> ReActAgent:
    return ReActAgent(
        name="GPTSolver",
        sys_prompt=(
            "You are GPT-4.1 mini, acting as a problem solver. "
            "You will receive a realistic everyday problem. "
            "Your task:\n"
            "- Understand the problem.\n"
            "- Propose a clear, actionable solution.\n"
            "- Explain your reasoning in 3–8 sentences.\n"
            "If the problem is unclear or impossible to solve with the given "
            "information, you MUST explicitly say: "
            "\"I cannot solve this problem with the information provided.\""
        ),
        model=OpenAIChatModel(
            model_name=GPT_SOLVER_MODEL_NAME,
            api_key=OPENAI_API_KEY,
            stream=False,
        ),
        formatter=OpenAIChatFormatter(),
    )
This factory creates the second agent, powered by GPT-4.1 mini, whose job is to solve the problem. The system prompt requires it to give a clear solution with reasoning and, most importantly, to explicitly admit when a problem cannot be solved; that admission is what the scoring logic relies on.
Determining Solution Success
def solver_succeeded(solver_answer: str) -> bool:
    """Heuristic: did the solver manage to solve the problem?"""
    text = solver_answer.lower()
    failure_markers = [
        "i cannot solve this problem",
        "i can't solve this problem",
        "cannot solve with the information provided",
        "not enough information",
        "insufficient information",
    ]
    return not any(marker in text for marker in failure_markers)
This judging function is simple but effective. It checks whether the solver actually provided a solution or admitted failure by searching for phrases that signal the solver could not handle the problem, so the winner of each round can be decided automatically without human intervention.
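A quick illustrative check of the heuristic (not part of the original code):

print(solver_succeeded("Here is a step-by-step plan you can follow ..."))  # True
print(solver_succeeded("I cannot solve this problem with the information provided."))  # False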
Main Competition Loop
async def run_competition(num_rounds: int = 5) -> GlobalScore:
    creator_agent = create_creator_agent()
    solver_agent = create_solver_agent()
    creator_score = 0
    solver_score = 0
    round_logs: List[RoundLog] = []
    for i in range(1, num_rounds + 1):
        print(f"\n========== ROUND {i} ==========\n")
        # Step 1: Claude creates a problem
        creator_msg = await creator_agent(
            Msg(
                role="user",
                content="Create one realistic everyday problem now.",
                name="user",
            ),
        )
        problem_text = extract_text(creator_msg)
        print("Problem created by Claude:\n")
        print(problem_text)
        print("\n---\n")
        # Step 2: GPT-4.1 mini tries to solve it
        solver_msg = await solver_agent(
            Msg(
                role="user",
                content=(
                    "Here is the problem you must solve:\n\n"
                    f"{problem_text}\n\n"
                    "Provide your solution and reasoning."
                ),
                name="user",
            ),
        )
        solver_text = extract_text(solver_msg)
        print("GPT-4.1 mini's solution:\n")
        print(solver_text)
        print("\n---\n")
        # Step 3: Judge the result
        if solver_succeeded(solver_text):
            solver_score += 1
            judge_decision = "Solver (GPT-4.1 mini) successfully solved the problem."
        else:
            creator_score += 1
            judge_decision = (
                "Creator (Claude Sonnet) gets the point; solver failed or admitted failure."
            )
        print("Judge decision:", judge_decision)
        print(f"Current score -> Claude: {creator_score}, GPT-4.1 mini: {solver_score}")
        round_logs.append(
            RoundLog(
                round_index=i,
                creator_model=CLAUDE_MODEL_NAME,
                solver_model=GPT_SOLVER_MODEL_NAME,
                problem=problem_text,
                solver_answer=solver_text,
                judge_decision=judge_decision,
                solver_score=solver_score,
                creator_score=creator_score,
            )
        )
    global_score = GlobalScore(
        total_rounds=num_rounds,
        creator_model=CLAUDE_MODEL_NAME,
        solver_model=GPT_SOLVER_MODEL_NAME,
        creator_score=creator_score,
        solver_score=solver_score,
        rounds=round_logs,
    )
    # Final summary print
    print("\n========== FINAL RESULT ==========\n")
    print(f"Total rounds: {num_rounds}")
    print(f"Creator (Claude Sonnet) score: {creator_score}")
    print(f"Solver (GPT-4.1 mini) score: {solver_score}")
    if solver_score > creator_score:
        print("\nOverall winner: GPT-4.1 mini (solver)")
    elif creator_score > solver_score:
        print("\nOverall winner: Claude Sonnet (creator)")
    else:
        print("\nOverall result: Draw")
    return global_score
This is the core of the multi-agent workflow. In every round Claude proposes a problem, GPT attempts to solve it, the judge heuristic awards the point, the scores are updated, and everything is logged. The async/await pattern keeps the execution flow clean, and once all rounds are over, a final summary reports which model performed better overall.
global_result = await run_competition(num_rounds=5)
This single statement kicks off the entire five-round competition. Since we use await, it runs directly in Jupyter notebooks or other async-enabled environments, and the global_result variable stores all the detailed statistics and logs from the run.
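If you are running the code as a plain Python script (where top-level await is not available), a standard entry point like the following works instead:

if __name__ == "__main__":
    # asyncio.run drives the async competition loop from a regular script
    global_result = asyncio.run(run_competition(num_rounds=5))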
AgentScope is a versatile framework with practical applications across research, automation, and enterprise settings, and it can be deployed for both experimental and production purposes.

AgentScope is a strong choice when you need a multi-agent system that is scalable, maintainable, and production-ready, and for teams that value clear structure and oversight. It may feel heavier than lightweight frameworks, but that investment pays off as systems grow more complex.

To get the most out of AgentScope, developers should experiment with concrete examples like the one above, which offer insight into its design philosophy and represent typical patterns of agentic behavior.

AgentScope AI is a strong fit for building scalable multi-agent systems with clarity and control. It shines when several AI agents need to work on a task together without confused workflows or unmanaged memory. Its explicit abstractions, structured messaging, and modular memory design address many of the issues commonly associated with prompt-centric frameworks.
By following this guide, you now have a solid understanding of AgentScope’s architecture, installation, and capabilities. For teams building large-scale agentic applications, it offers a balanced combination of flexibility and engineering discipline. As multi-agent systems become a central part of AI workflows, frameworks like AgentScope are well positioned to set the standard for the next generation of intelligent systems.
A. AgentScope AI is an open-source framework for building scalable, structured, multi-agent AI systems.
A. It was created by AI researchers and engineers focused on coordination and observability.
A. To solve coordination, memory, and scalability issues in multi-agent workflows.