
Overview

The TaskToolSet lets a parent agent launch sub-agents that handle complex, multi-step tasks autonomously. Each sub-agent runs synchronously — the parent blocks until the sub-agent finishes and returns its result. Sub-agents can be resumed later using a task ID, preserving their full conversation context. This pattern is useful when:
  • Delegating specialized work to purpose-built sub-agents
  • Breaking a problem into sequential steps handled by different experts
  • Maintaining conversational context across multiple interactions with a sub-agent
  • Isolating sub-task complexity from the parent agent’s context
For parallel sub-agent execution, see Sub-Agent Delegation. TaskToolSet is designed for sequential blocking tasks.

How It Works

The agent calls the task tool with a prompt and a sub-agent type. The TaskManager creates (or resumes) a sub-agent conversation, runs it to completion, and returns the result to the parent.
Parent Agent                    TaskManager                    Sub-Agent
     │                              │                              │
     │── task(prompt, type) ───────>│                              │
     │                              │── create/resume ────────────>│
     │                              │                              │── runs autonomously
     │                              │                              │── ...
     │                              │<── result ──────────────────│
     │<── TaskObservation ─────────│                              │
     │                              │   (persists for resume)      │

Task Lifecycle

  1. Creation — A fresh sub-agent and conversation are created
  2. Running — The sub-agent processes the prompt autonomously
  3. Completion — The final response is extracted and returned
  4. Persistence — The conversation is saved to disk for potential resumption
  5. Resumption (optional) — A previously completed task continues with full context

Setting Up the TaskToolSet

1. Register Custom Sub-Agent Types (Optional)

By default, a "default" general-purpose agent is available. Register custom types for specialized behavior:
from openhands.sdk import LLM, Agent, AgentContext
from openhands.sdk.context import Skill
from openhands.tools.delegate import register_agent

def create_code_reviewer(llm: LLM) -> Agent:
    return Agent(
        llm=llm,
        tools=[],
        agent_context=AgentContext(
            skills=[
                Skill(
                    name="code_review",
                    content="You are an expert code reviewer. Analyze code for bugs, style issues, and suggest improvements.",
                    trigger=None,
                )
            ],
        ),
    )

register_agent(
    name="code_reviewer",
    factory_func=create_code_reviewer,
    description="Reviews code for bugs, style issues, and improvements.",
)
2. Add TaskToolSet to the Agent

from openhands.sdk import Agent, Tool
from openhands.tools.task import TaskToolSet

agent = Agent(
    llm=llm,
    tools=[Tool(name=TaskToolSet.name)],
)
The tool auto-registers on import — no explicit register_tool() call is needed.
3. Create a Conversation

import os

from openhands.sdk import Conversation
from openhands.tools.delegate import DelegationVisualizer

conversation = Conversation(
    agent=agent,
    workspace=os.getcwd(),
    visualizer=DelegationVisualizer(name="Orchestrator"),
)
The DelegationVisualizer is optional but recommended — it shows the multi-agent conversation flow in the terminal.

Tool Parameters

When the parent agent calls the task tool, it provides these parameters:
| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| prompt | str | Yes | The instruction for the sub-agent |
| subagent_type | str | No | Which registered agent type to use (default: "default") |
| description | str | No | Short label (3-5 words) for display and tracking |
| resume | str | No | Task ID from a previous invocation to continue |
| max_turns | int | No | Maximum agent iterations before stopping (default: 500) |
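To make the defaults concrete, here is a small hypothetical helper that assembles a task tool-call payload. The parameter names and defaults mirror the table above, but the helper itself is not part of the SDK; it only illustrates which fields the parent agent must supply and which are filled in for it.

```python
# Hypothetical validator for task tool-call arguments (not SDK code).
# Parameter names and defaults follow the documented table above.
def build_task_call(prompt: str, **kwargs) -> dict:
    allowed = {"subagent_type", "description", "resume", "max_turns"}
    unknown = set(kwargs) - allowed
    if unknown:
        raise ValueError(f"unknown parameters: {unknown}")
    call = {
        "prompt": prompt,            # required
        "subagent_type": "default",  # default agent type
        "max_turns": 500,            # default iteration cap
    }
    call.update(kwargs)
    return call

call = build_task_call(
    "Review utils.py for bugs",
    subagent_type="code_reviewer",
    description="review utils module",
)
```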

Task Observation

The tool returns a TaskObservation containing:
| Field | Description |
| --- | --- |
| task_id | Unique identifier (e.g., task_00000001); use this for resumption |
| subagent | The agent type that handled the task |
| status | Final status: succeeded, empty_success, or error |
| text | The sub-agent's response (or error message) |
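A parent-side consumer will typically branch on the status field. The sketch below assumes the observation is available as a plain dict with the fields listed above; the handler function is illustrative, not an SDK API.

```python
# Illustrative handler for a TaskObservation-shaped result (not SDK code).
# Field names follow the documented table above.
def handle_observation(obs: dict) -> str:
    if obs["status"] == "error":
        return f"task {obs['task_id']} failed: {obs['text']}"
    if obs["status"] == "empty_success":
        return f"task {obs['task_id']} finished with no output"
    return obs["text"]  # succeeded: pass the sub-agent's answer along

result = handle_observation({
    "task_id": "task_00000001",
    "subagent": "code_reviewer",
    "status": "succeeded",
    "text": "No bugs found.",
})
```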

Resuming Tasks

A key feature of TaskToolSet is the ability to resume a previously completed task. When a task finishes, its conversation is persisted to disk. Passing the resume parameter with the task ID reloads the full conversation history, allowing the sub-agent to continue where it left off.
# First call — sub-agent generates a quiz question
conversation.send_message(
    "Use the task tool with subagent_type='quiz_expert' to generate "
    "a multiple-choice question about zebras."
)
conversation.run()
# The agent receives task_id "task_00000001" in the observation

# Second call — resume the same sub-agent to verify the answer
conversation.send_message(
    "The user answered A. Use the task tool with resume='task_00000001' "
    "to ask the same sub-agent whether that answer is correct."
)
conversation.run()

TaskToolSet vs DelegateTool

|  | TaskToolSet | DelegateTool |
| --- | --- | --- |
| Execution | Sequential (blocking) | Parallel (concurrent) |
| Concurrency | One task at a time | Multiple sub-agents simultaneously |
| Resumption | Built-in via resume parameter | Persistent sub-agents by ID |
| API | Single task tool call | spawn + delegate commands |
| Best for | Expert delegation, multi-turn workflows | Fan-out / fan-in parallelism |

Ready-to-run Example

This example is available on GitHub: examples/01_standalone_sdk/40_task_tool_set.py
"""
Animal Quiz with Task Tool Set

Demonstrates the TaskToolSet with a main agent delegating to an
animal-expert sub-agent. The flow is:

1. User names an animal.
2. Main agent delegates to the "animal_expert" sub-agent to generate
   a multiple-choice question about that animal.
3. Main agent shows the question to the user.
4. User picks an answer.
5. Main agent delegates again to the same sub-agent type to check
   whether the answer is correct and explain why.
"""

import os

from pydantic import SecretStr

from openhands.sdk import LLM, Agent, AgentContext, Conversation, Tool
from openhands.sdk.context import Skill
from openhands.tools.delegate import DelegationVisualizer, register_agent
from openhands.tools.task import TaskToolSet


# ── LLM setup ────────────────────────────────────────────────────────

api_key = os.getenv("LLM_API_KEY")
assert api_key is not None, "LLM_API_KEY environment variable is not set."

llm = LLM(
    model=os.getenv("LLM_MODEL", "anthropic/claude-sonnet-4-5-20250929"),
    api_key=SecretStr(api_key),
    base_url=os.getenv("LLM_BASE_URL", None),
)

# ── Register the animal expert sub-agent ─────────────────────────────


def create_animal_expert(llm: LLM) -> Agent:
    """Factory for the animal-expert sub-agent."""
    return Agent(
        llm=llm,
        tools=[],  # no tools needed – pure knowledge
        agent_context=AgentContext(
            skills=[
                Skill(
                    name="animal_expertise",
                    content=(
                        "You are a world-class zoologist. "
                        "When asked to generate a quiz question, respond with "
                        "EXACTLY this format and nothing else:\n\n"
                        "Question: <question text>\n"
                        "A) <option>\n"
                        "B) <option>\n"
                        "C) <option>\n"
                        "D) <option>\n\n"
                        "When asked to verify an answer, state whether it is "
                        "correct or incorrect, reveal the right answer, and "
                        "give a short fun-fact explanation."
                    ),
                    trigger=None,  # always active
                )
            ],
            system_message_suffix="Keep every response concise.",
        ),
    )


register_agent(
    name="animal_expert",
    factory_func=create_animal_expert,
    description="Zoologist that creates and verifies animal quiz questions.",
)

# ── Main agent ───────────────────────────────────────────────────────

main_agent = Agent(
    llm=llm,
    tools=[Tool(name=TaskToolSet.name)],
)

conversation = Conversation(
    agent=main_agent,
    workspace=os.getcwd(),
    visualizer=DelegationVisualizer(name="QuizHost"),
)

# ── Round 1: generate the question ──────────────────────────────────

animal = input("Pick an animal: ")

conversation.send_message(
    f"The user chose the animal: {animal}. "
    "Use the task tool to delegate to the 'animal_expert' sub-agent "
    "and ask it to generate a single multiple-choice question (A-D) "
    f"about {animal}. "
    "Once you get the question back, display it to the user exactly "
    "as the sub-agent returned it and ask the user to pick A, B, C, or D."
)
conversation.run()

# ── Round 2: verify the answer ──────────────────────────────────────

answer = input("Your answer (A/B/C/D): ")

conversation.send_message(
    f"The user answered: {answer}. "
    "Use the task tool to delegate to the 'animal_expert' sub-agent again "
    f"and ask it whether '{answer}' is the correct answer to the question "
    "it generated earlier. Don't include the question; instead, use the "
    "'resume' parameter to continue the previous conversation."
)
conversation.run()

# ── Done ────────────────────────────────────────────────────────────

cost = conversation.conversation_stats.get_combined_metrics().accumulated_cost
print(f"\nEXAMPLE_COST: {cost}")
You can run the example code as-is.
The model name should follow the LiteLLM convention: provider/model_name (e.g., anthropic/claude-sonnet-4-5-20250929, openai/gpt-4o). The LLM_API_KEY should be the API key for your chosen provider.
ChatGPT Plus/Pro subscribers: You can use LLM.subscription_login() to authenticate with your ChatGPT account and access Codex models without consuming API credits. See the LLM Subscriptions guide for details.

Next Steps