ReAct Pattern: Claude SDK Implementation
Overview
The ReAct (Reasoning + Acting) pattern is the simplest AI agent architecture. The agent alternates between reasoning about what to do next, taking an action, and observing the result.
Best for:
- Learning AI agent fundamentals
- Simple workflows (3-5 steps)
- Prototypes and MVPs
- Tasks with limited tool sets
Not ideal for:
- Production systems requiring reliability
- Complex multi-step workflows
- Tasks requiring quality assurance
- Error-prone operations
See plan-execute-verify.md for a more robust production pattern.
⚠️ Important: Learning vs Production
This tutorial teaches ReAct fundamentals by building from scratch using the base Anthropic SDK. This is excellent for:
- Understanding how AI agents work internally
- Learning the ReAct pattern mechanics
- Educational purposes and experimentation
For production applications, use the Claude Agent SDK instead:
- ✅ Built-in tools (Read, Write, Edit, Bash, Glob, Grep, WebSearch, etc.)
- ✅ Automatic tool execution and session management
- ✅ Permission controls and sandboxing
- ✅ Hooks for validation and error handling
- ✅ Subagents for parallel execution
- ✅ Structured outputs and file checkpointing
- ✅ MCP (Model Context Protocol) server support
When to use each:
- Manual ReAct (this guide): Learning, prototyping, custom research
- Claude Agent SDK: Production systems, robust applications, standard agent workflows
Continue reading to understand the fundamentals, then explore the Agent SDK for production use.
Architecture
```
┌─────────────────────────────────────────────┐
│          Application Control Loop           │
│                                             │
│  while not done:                            │
│    1. Send conversation to LLM              │
│    2. Parse response (reasoning + action)   │
│    3. Execute action (tool call)            │
│    4. Observe result                        │
│    5. Send observation back to LLM          │
│    6. Update UI                             │
│    7. Repeat                                │
└─────────────────────────────────────────────┘
        │                     │
        ▼                     ▼
  ┌──────────┐       ┌────────────────┐
  │   LLM    │       │     Tools      │
  │ (Claude) │       │  - read_file   │
  │          │       │  - write_file  │
  └──────────┘       │  - list_files  │
                     └────────────────┘
```
Building the Agent Step-by-Step
Let's build a ReAct agent incrementally, understanding each component before moving to the next.
Step 1: Basic Setup
First, import dependencies and initialize the Anthropic client:
```python
import anthropic
import json
import os
from typing import List, Dict, Any

# Initialize the Claude client
client = anthropic.Anthropic(api_key=os.environ.get("ANTHROPIC_API_KEY"))
```

Step 2: Design the System Prompt
The system prompt is critical: it defines what the agent can do and how it must respond:
```python
system_prompt = """You are a legal review assistant. You work step-by-step to complete tasks.

AVAILABLE TOOLS:
- read_file(path): Read a document from disk
- list_files(folder): List all files in a folder
- write_file(path, content): Write content to a file
- mark_step_complete(step_name): Mark a step as done (for UI progress)

WORKFLOW STEPS:
1. Scan documents in folder
2. Review each legal document
3. Write LEGAL_NOTICES.md with findings
4. Write REVIEW_SUMMARY.md with evaluation

RESPONSE FORMAT:
You must respond in this exact format:

<reasoning>Your thinking about what to do next and why</reasoning>
<action>tool_name</action>
<parameters>{"param1": "value1", "param2": "value2"}</parameters>

OR when completely done:

<final_answer>Your completion message and summary</final_answer>

RULES:
- Take ONE action at a time
- Wait for observation before next action
- Think step-by-step
- Be thorough in reviews
- Provide specific, actionable recommendations"""
```

Key Design Decisions:
- Explicit format - XML tags make parsing reliable
- One action per turn - Prevents agent from getting ahead of itself
- Clear workflow - Gives agent a mental model to follow
- Structured output - Easy for code to parse
Step 3: Parse LLM Responses
We need a helper function to extract content from the XML tags:
```python
import re

def extract_between_tags(text: str, tag: str) -> str:
    """Extract content between XML-style tags."""
    pattern = f"<{tag}>(.*?)</{tag}>"
    match = re.search(pattern, text, re.DOTALL)
    if match:
        return match.group(1).strip()
    return ""
```
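A quick standalone check of the tag parser (the function is repeated so the snippet runs on its own; the sample response text is illustrative, not real model output):

```python
import re

def extract_between_tags(text: str, tag: str) -> str:
    """Extract content between XML-style tags."""
    pattern = f"<{tag}>(.*?)</{tag}>"
    match = re.search(pattern, text, re.DOTALL)
    return match.group(1).strip() if match else ""

# A sample response in the format the system prompt prescribes
sample = """<reasoning>
List the folder first to see what needs review.
</reasoning>
<action>list_files</action>
<parameters>{"folder": "/project/legal_docs"}</parameters>"""

print(extract_between_tags(sample, "action"))        # list_files
print(extract_between_tags(sample, "final_answer"))  # empty string: tag absent
```

Note that `re.DOTALL` lets the reasoning span multiple lines, and a missing tag yields an empty string rather than an exception, which keeps the control loop's parsing simple.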
```python
# Usage example:
# reasoning = extract_between_tags(response, "reasoning")
# action = extract_between_tags(response, "action")
```

Step 4: Implement Tool Execution
Note: The Claude Agent SDK provides these tools built-in (Read, Write, Bash, Glob, Grep, etc.) with proper sandboxing, permissions, and error handling. This manual implementation is for educational purposes, to understand the internals.
Each tool performs a specific operation and returns an observation:
```python
def execute_tool(action: str, parameters: dict) -> str:
    """Execute a tool and return an observation."""
    try:
        if action == "list_files":
            folder = parameters["folder"]
            files = os.listdir(folder)
            # Filter for document types only
            doc_files = [f for f in files if f.endswith(('.pdf', '.docx', '.txt', '.md'))]
            return json.dumps(doc_files)

        elif action == "read_file":
            path = parameters["path"]
            if not os.path.exists(path):
                return f"Error: File {path} not found"

            with open(path, 'r', encoding='utf-8') as f:
                content = f.read()

            # Truncate if too long to save tokens
            if len(content) > 10000:
                content = content[:10000] + "\n\n[... truncated for length ...]"

            return f"File content ({len(content)} chars):\n{content}"

        elif action == "write_file":
            path = parameters["path"]
            content = parameters["content"]

            # Ensure directory exists
            os.makedirs(os.path.dirname(path) or ".", exist_ok=True)

            with open(path, 'w', encoding='utf-8') as f:
                f.write(content)

            return f"Successfully wrote {len(content)} characters to {path}"

        elif action == "mark_step_complete":
            step_name = parameters["step_name"]
            # In a real app, this would update UI progress
            print(f"  ✓ Step completed: {step_name}")
            return f"Marked '{step_name}' as complete"

        else:
            return f"Error: Unknown action '{action}'"

    except Exception as e:
        return f"Error executing {action}: {str(e)}"
```

Tool Design Tips:
- Return descriptive observations (not just "success")
- Handle errors gracefully
- Keep tools simple and focused
- Truncate large outputs to save tokens
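To make the first tip concrete, here is a small sketch (function names are illustrative) contrasting a bare status string with an observation the model can actually act on:

```python
# A terse observation tells the model nothing it can use on the next turn:
def write_file_terse(path: str, content: str) -> str:
    with open(path, "w", encoding="utf-8") as f:
        f.write(content)
    return "success"

# A descriptive observation carries what happened, where, and how much,
# so the model can verify its own progress before deciding the next action:
def write_file_descriptive(path: str, content: str) -> str:
    with open(path, "w", encoding="utf-8") as f:
        f.write(content)
    return f"Successfully wrote {len(content)} characters to {path}"

print(write_file_descriptive("demo.txt", "hello"))
# Successfully wrote 5 characters to demo.txt
```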
Step 5: The ReAct Loop
Now we tie it all together with the main control loop:
```python
def react_agent_loop(user_request: str, folder_path: str, max_turns: int = 20):
    """Main ReAct control loop."""

    # Initialize conversation
    conversation_history = []
    conversation_history.append({
        "role": "user",
        "content": f"Please review all legal documents in: {folder_path}"
    })

    print(f"Starting legal review of {folder_path}\n")

    # Main loop: Reason -> Act -> Observe
    for turn in range(1, max_turns + 1):
        print(f"{'='*60}")
        print(f"Turn {turn}/{max_turns}")
        print(f"{'='*60}\n")

        # REASON: Ask Claude what to do next
        response = client.messages.create(
            model="claude-sonnet-4-5",
            max_tokens=4000,
            system=system_prompt,
            messages=conversation_history
        )

        assistant_message = response.content[0].text

        # Check if agent is done
        if "<final_answer>" in assistant_message:
            final_answer = extract_between_tags(assistant_message, "final_answer")
            print("COMPLETED\n")
            print(final_answer)
            return final_answer

        # Parse the response
        try:
            reasoning = extract_between_tags(assistant_message, "reasoning")
            action = extract_between_tags(assistant_message, "action")
            parameters = json.loads(extract_between_tags(assistant_message, "parameters"))
        except Exception as e:
            print(f"Failed to parse LLM response: {e}")
            return None

        # Display reasoning to user
        print("REASONING:")
        print(f"  {reasoning}\n")

        # ACT: Execute the tool
        print(f"ACTION: {action}")
        print(f"  Parameters: {json.dumps(parameters, indent=2)}\n")

        observation = execute_tool(action, parameters)

        # OBSERVE: Show result
        print("OBSERVATION:")
        print(f"  {observation}\n")

        # Add to conversation history for next turn
        conversation_history.append({
            "role": "assistant",
            "content": assistant_message
        })
        conversation_history.append({
            "role": "user",
            "content": f"<observation>{observation}</observation>"
        })

    print("Reached maximum turns without completion")
    return None
```

Loop Mechanics:
- Send messages - Claude sees full conversation history
- Parse response - Extract reasoning, action, and parameters
- Execute action - Call the appropriate tool
- Return observation - Send result back to Claude
- Repeat - Continue until task is complete
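The history bookkeeping behind these steps can be sketched on its own (the sample strings are illustrative):

```python
# Each Reason-Act-Observe cycle appends exactly two messages: the
# assistant's reasoning/action, and the tool result wrapped in
# <observation> tags so the model can distinguish tool output from
# genuine user input.
conversation_history = [
    {"role": "user", "content": "Please review all legal documents in: /project/legal_docs"}
]

assistant_message = (
    "<reasoning>I need to see what documents exist.</reasoning>\n"
    "<action>list_files</action>\n"
    '<parameters>{"folder": "/project/legal_docs"}</parameters>'
)
observation = '["contract_v1.pdf", "terms_of_service.docx"]'

conversation_history.append({"role": "assistant", "content": assistant_message})
conversation_history.append({"role": "user", "content": f"<observation>{observation}</observation>"})

# Roles must strictly alternate user/assistant for the Messages API.
print([m["role"] for m in conversation_history])
# ['user', 'assistant', 'user']
```

Because the full history is resent every turn, context grows linearly with turns, which is one reason the tools above truncate large file contents.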
Example Execution Trace
Here's what a complete run looks like for our legal review case study:
```
Starting legal review of /project/legal_docs

============================================================
Turn 1/20
============================================================

REASONING:
  I need to start by seeing what documents are in the folder.
  This will help me understand the scope of the review.

ACTION: list_files
  Parameters: {"folder": "/project/legal_docs"}

OBSERVATION:
  ["contract_v1.pdf", "terms_of_service.docx", "privacy_policy.txt"]

============================================================
Turn 2/20
============================================================

REASONING:
  Found 3 documents. I'll start reviewing them one by one.
  Let me read the first document - contract_v1.pdf.

ACTION: read_file
  Parameters: {"path": "/project/legal_docs/contract_v1.pdf"}

OBSERVATION:
  File content (2453 chars):
  CONTRACT AGREEMENT

  This Agreement is entered into on [Date]...

  Section 5: Contract Duration
  This contract shall remain in effect until...
  [Note: No termination clause specified]

  Section 3: Payment
  Payment shall be made within a reasonable timeframe...

============================================================
Turn 3/20
============================================================

REASONING:
  I've reviewed contract_v1.pdf and found several issues:
  - Missing termination clause (CRITICAL)
  - Vague payment terms (WARNING)
  Let me continue with the next document.

ACTION: read_file
  Parameters: {"path": "/project/legal_docs/terms_of_service.docx"}

OBSERVATION:
  File content (5122 chars):
  TERMS OF SERVICE
  ...

============================================================
Turn 4/20
============================================================

REASONING:
  Reviewed terms_of_service.docx, found:
  - GDPR non-compliance (CRITICAL)
  - Unclear refund policy (WARNING)
  One more document to review.

ACTION: read_file
  Parameters: {"path": "/project/legal_docs/privacy_policy.txt"}

OBSERVATION:
  File content (3891 chars):
  PRIVACY POLICY
  ...

============================================================
Turn 5/20
============================================================

REASONING:
  All documents reviewed. privacy_policy.txt has:
  - Missing DPO contact (WARNING)
  Now I'll compile findings into LEGAL_NOTICES.md

ACTION: write_file
  Parameters: {"path": "/project/legal_docs/LEGAL_NOTICES.md",
               "content": "# Legal Review Notices\n\n## contract_v1.pdf\n\n### CRITICAL..."}

OBSERVATION:
  Successfully wrote 2451 characters to /project/legal_docs/LEGAL_NOTICES.md

============================================================
Turn 6/20
============================================================

REASONING:
  LEGAL_NOTICES.md created successfully. Now I'll create the
  executive summary with overall evaluation status.

ACTION: write_file
  Parameters: {"path": "/project/legal_docs/REVIEW_SUMMARY.md",
               "content": "# Legal Review Summary\n\n**Status: REQUIRES ATTENTION**..."}

OBSERVATION:
  Successfully wrote 1122 characters to /project/legal_docs/REVIEW_SUMMARY.md

============================================================
Turn 7/20
============================================================

REASONING:
  Both documents created successfully. Let me mark the final step as complete.

ACTION: mark_step_complete
  Parameters: {"step_name": "Write summary with evaluation"}

  ✓ Step completed: Write summary with evaluation

OBSERVATION:
  Marked 'Write summary with evaluation' as complete

============================================================
Turn 8/20
============================================================

COMPLETED

Legal review completed successfully!

SUMMARY:
- Reviewed 3 documents
- Found 3 CRITICAL issues
- Found 3 WARNINGS
- Created LEGAL_NOTICES.md (detailed findings)
- Created REVIEW_SUMMARY.md (executive summary)

STATUS: REQUIRES ATTENTION

Top recommendations:
1. Add termination clause to contract_v1.pdf immediately
2. Update terms_of_service.docx for GDPR compliance
3. Specify exact payment terms in contract_v1.pdf

All findings documented in /project/legal_docs/LEGAL_NOTICES.md
```

Progress Tracking
Basic UI Progress State
```python
class ProgressTracker:
    """Track and display agent progress."""

    def __init__(self):
        self.steps = [
            {"name": "Scan documents", "status": "pending"},
            {"name": "Review documents", "status": "pending"},
            {"name": "Write legal notices", "status": "pending"},
            {"name": "Write summary", "status": "pending"}
        ]
        self.current_action = None

    def update_from_action(self, action: str, parameters: dict):
        """Update progress based on current action."""
        self.current_action = f"{action}: {parameters}"

        # Map actions to steps
        if action == "list_files":
            self.set_step_status("Scan documents", "in_progress")
        elif action == "read_file":
            self.set_step_status("Review documents", "in_progress")
        elif "LEGAL_NOTICES" in parameters.get("path", ""):
            self.set_step_status("Write legal notices", "in_progress")
        elif "REVIEW_SUMMARY" in parameters.get("path", ""):
            self.set_step_status("Write summary", "in_progress")

        self.render()

    def mark_step_complete(self, step_name: str):
        """Mark a step as complete."""
        self.set_step_status(step_name, "complete")
        self.render()

    def set_step_status(self, step_name: str, status: str):
        for step in self.steps:
            if step["name"] == step_name:
                step["status"] = status
                break

    def render(self):
        """Display progress to user."""
        print("\nPROGRESS:")
        for step in self.steps:
            status = step["status"]
            if status == "complete":
                icon = "[x]"
            elif status == "in_progress":
                icon = "[~]"
            else:
                icon = "[ ]"
            print(f"  {icon} {step['name']}")

        if self.current_action:
            print(f"\n  Current: {self.current_action}")
        print()


# Integrate into control loop
tracker = ProgressTracker()

# Before executing action
tracker.update_from_action(action, parameters)

# When step marked complete
if action == "mark_step_complete":
    tracker.mark_step_complete(parameters["step_name"])
```

Error Handling
```python
def react_agent_loop_with_error_handling(user_request: str, folder_path: str):
    """ReAct loop with basic error handling."""

    max_consecutive_errors = 3
    consecutive_errors = 0
    conversation_history = []

    for turn in range(1, 21):
        try:
            # ... (normal ReAct loop body goes here)

            # Reset error counter on success
            consecutive_errors = 0

        except Exception as e:
            consecutive_errors += 1
            print(f"Error on turn {turn}: {e}")

            if consecutive_errors >= max_consecutive_errors:
                print("Too many consecutive errors, aborting")
                return None

            # Add error to conversation for the LLM to handle
            conversation_history.append({
                "role": "user",
                "content": f"<error>Previous action failed: {str(e)}</error>"
            })
            continue
```

Advantages
✅ Simple to Implement
- Minimal code (~200 lines)
- Easy to understand
- Quick to prototype
✅ Transparent
- See every decision
- Clear reasoning chain
- Easy to debug
✅ Flexible
- Works for many tasks
- Easy to add tools
- Natural conversation flow
Limitations
❌ No Quality Checks
- Agent doesn't verify its own work
- Errors propagate to next steps
- No validation of outputs
❌ Poor Error Recovery
- Gets stuck if tool fails
- No replanning on failure
- Manual intervention often needed
❌ Inefficient
- No parallel execution
- Redundant reasoning
- Can loop unnecessarily
❌ No Structured Planning
- Ad-hoc decision making
- Hard to estimate completion time
- Difficult to track overall progress
When to Use ReAct
✅ Good Fit:
- Learning AI agents
- Simple linear workflows
- Prototyping ideas
- Internal tools with human oversight
- Tasks with < 5 steps
❌ Poor Fit:
- Production systems
- Complex multi-branch workflows
- Tasks requiring reliability
- Quality-critical applications
- Expensive operations (deploy, delete)
Complete Implementation
Here's the full working code combining all the concepts above:
```python
import anthropic
import json
import os
import re
from typing import List, Dict, Any


def extract_between_tags(text: str, tag: str) -> str:
    """Extract content between XML-style tags."""
    pattern = f"<{tag}>(.*?)</{tag}>"
    match = re.search(pattern, text, re.DOTALL)
    if match:
        return match.group(1).strip()
    return ""


def execute_tool(action: str, parameters: dict) -> str:
    """
    Execute a tool and return an observation.

    This is where you implement your actual tool logic.
    """
    try:
        if action == "list_files":
            folder = parameters["folder"]
            files = os.listdir(folder)
            # Filter for document types
            doc_files = [f for f in files if f.endswith(('.pdf', '.docx', '.txt', '.md'))]
            return json.dumps(doc_files)

        elif action == "read_file":
            path = parameters["path"]
            if not os.path.exists(path):
                return f"Error: File {path} not found"

            with open(path, 'r', encoding='utf-8') as f:
                content = f.read()

            # Truncate if too long to save tokens
            if len(content) > 10000:
                content = content[:10000] + "\n\n[... truncated for length ...]"

            return f"File content ({len(content)} chars):\n{content}"

        elif action == "write_file":
            path = parameters["path"]
            content = parameters["content"]

            # Ensure directory exists
            os.makedirs(os.path.dirname(path) or ".", exist_ok=True)

            with open(path, 'w', encoding='utf-8') as f:
                f.write(content)

            return f"Successfully wrote {len(content)} characters to {path}"

        elif action == "mark_step_complete":
            step_name = parameters["step_name"]
            # In a real app, this would update UI progress
            print(f"  ✓ Step completed: {step_name}")
            return f"Marked '{step_name}' as complete"

        else:
            return f"Error: Unknown action '{action}'"

    except Exception as e:
        return f"Error executing {action}: {str(e)}"


def react_agent_loop(user_request: str, folder_path: str, max_turns: int = 20):
    """
    Main ReAct control loop.

    Args:
        user_request: User's high-level request
        folder_path: Working directory for the agent
        max_turns: Maximum reasoning-action cycles

    Returns:
        Final result or error
    """
    client = anthropic.Anthropic(api_key=os.environ.get("ANTHROPIC_API_KEY"))
    conversation_history = []

    # System prompt defines agent capabilities and format
    system_prompt = """You are a legal review assistant. You work step-by-step to complete tasks.

AVAILABLE TOOLS:
- read_file(path): Read a document from disk
- list_files(folder): List all files in a folder
- write_file(path, content): Write content to a file
- mark_step_complete(step_name): Mark a step as done (for UI progress)

WORKFLOW STEPS:
1. Scan documents in folder
2. Review each legal document
3. Write LEGAL_NOTICES.md with findings
4. Write REVIEW_SUMMARY.md with evaluation

RESPONSE FORMAT:
You must respond in this exact format:

<reasoning>Your thinking about what to do next and why</reasoning>
<action>tool_name</action>
<parameters>{"param1": "value1", "param2": "value2"}</parameters>

OR when completely done:

<final_answer>Your completion message and summary</final_answer>

RULES:
- Take ONE action at a time
- Wait for observation before next action
- Think step-by-step
- Be thorough in reviews
- Provide specific, actionable recommendations"""

    # Initial user message
    conversation_history.append({
        "role": "user",
        "content": f"Please review all legal documents in: {folder_path}"
    })

    print(f"Starting legal review of {folder_path}\n")

    # Main ReAct loop
    for turn in range(1, max_turns + 1):
        print(f"{'='*60}")
        print(f"Turn {turn}/{max_turns}")
        print(f"{'='*60}\n")

        # REASON: Ask LLM what to do next
        response = client.messages.create(
            model="claude-sonnet-4-5",
            max_tokens=4000,
            system=system_prompt,
            messages=conversation_history
        )

        assistant_message = response.content[0].text

        # Check if agent is done
        if "<final_answer>" in assistant_message:
            final_answer = extract_between_tags(assistant_message, "final_answer")
            print("COMPLETED\n")
            print(final_answer)
            return final_answer

        # Parse the response
        try:
            reasoning = extract_between_tags(assistant_message, "reasoning")
            action = extract_between_tags(assistant_message, "action")
            parameters = json.loads(extract_between_tags(assistant_message, "parameters"))
        except Exception as e:
            print(f"Failed to parse LLM response: {e}")
            print(f"Response was: {assistant_message}")
            return None

        # Display reasoning to user
        print("REASONING:")
        print(f"  {reasoning}\n")

        # ACT: Execute the tool
        print(f"ACTION: {action}")
        print(f"  Parameters: {json.dumps(parameters, indent=2)}\n")

        observation = execute_tool(action, parameters)

        # OBSERVE: Show result
        print("OBSERVATION:")
        print(f"  {observation}\n")

        # Add to conversation history
        conversation_history.append({
            "role": "assistant",
            "content": assistant_message
        })
        conversation_history.append({
            "role": "user",
            "content": f"<observation>{observation}</observation>"
        })

    print("Reached maximum turns without completion")
    return None


# Example usage
if __name__ == "__main__":
    result = react_agent_loop(
        user_request="Review all legal documents and create summary",
        folder_path="/project/legal_docs"
    )
```

What You Get:
- ~200 lines of clear, working code
- Clear separation of concerns (parsing, tools, control loop)
- Easy to extend with new tools
- Transparent reasoning at each step
Optimizations
Section titled βOptimizationsβ1. Add Planning Phase
```python
# Before the main loop, ask the agent to plan
planning_prompt = """Before starting, create a step-by-step plan.

Format:
<plan>
1. [Step 1]
2. [Step 2]
...
</plan>"""

conversation_history.append({
    "role": "user",
    "content": user_request + "\n\n" + planning_prompt
})

# Get plan
plan_response = call_llm(...)
plan = extract_between_tags(plan_response, "plan")

print(f"Plan:\n{plan}\n")
input("Press Enter to execute...")

# Continue with normal ReAct loop
```

2. Add Simple Validation
```python
def validate_output(action: str, parameters: dict, observation: str) -> bool:
    """Basic validation of tool results."""

    if action == "write_file":
        path = parameters["path"]

        # Check file was actually created
        if not os.path.exists(path):
            print(f"Validation failed: {path} not created")
            return False

        # Check minimum content length
        if "LEGAL_NOTICES" in path:
            size = os.path.getsize(path)
            if size < 500:
                print(f"Validation failed: {path} too small ({size} bytes)")
                return False

    return True
```
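A quick self-contained exercise of this validator (the validator body is repeated so the snippet runs on its own, and paths use a temporary directory so nothing real is touched):

```python
import os
import tempfile

def validate_output(action: str, parameters: dict, observation: str) -> bool:
    """Basic validation of tool results."""
    if action == "write_file":
        path = parameters["path"]
        if not os.path.exists(path):
            return False
        if "LEGAL_NOTICES" in path and os.path.getsize(path) < 500:
            return False
    return True

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "LEGAL_NOTICES.md")
    with open(path, "w", encoding="utf-8") as f:
        f.write("too short")  # only 9 bytes, under the 500-byte minimum
    print(validate_output("write_file", {"path": path}, ""))  # False
    with open(path, "w", encoding="utf-8") as f:
        f.write("x" * 600)  # past the minimum
    print(validate_output("write_file", {"path": path}, ""))  # True
```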
```python
# In main loop, after observation
if not validate_output(action, parameters, observation):
    conversation_history.append({
        "role": "user",
        "content": "<validation_failed>Output did not meet requirements. Please retry.</validation_failed>"
    })
    continue
```

3. Upgrade to Claude Agent SDK
For production use, migrate to the Claude Agent SDK, which provides all of these capabilities built-in:
```python
from claude_agent_sdk import query, ClaudeAgentOptions
from claude_agent_sdk.types import HookMatcher


async def validate_output(input_data, tool_use_id, context):
    """Validation hook for tool outputs."""
    if input_data['tool_name'] == 'Write':
        path = input_data['tool_input'].get('file_path', '')
        if 'LEGAL_NOTICES' in path:
            # Add custom validation logic
            pass
    return {}


async def main():
    async for message in query(
        prompt="Review all legal documents in /project/legal_docs",
        options=ClaudeAgentOptions(
            allowed_tools=["Read", "Write", "Glob", "Grep"],
            permission_mode="acceptEdits",
            hooks={
                "PostToolUse": [HookMatcher(hooks=[validate_output])]
            },
            model="claude-sonnet-4-5"
        )
    ):
        print(message)
```

Benefits over manual ReAct:
- No need to implement execute_tool(): tools are built-in
- Automatic conversation management and session persistence
- Built-in error handling and retries
- Permission system with hooks for validation
- Sandbox mode for safe command execution
- Subagents for parallel task execution
- Structured outputs with JSON schema validation
See Agent SDK documentation for complete details.
Manual ReAct vs Claude Agent SDK
| Feature | Manual ReAct (This Guide) | Claude Agent SDK |
|---|---|---|
| Learning curve | Steeper (you build the internals) | Gentler (higher-level abstraction) |
| Tool implementation | Manual (~200 lines) | Built-in (Read, Write, Bash, etc.) |
| Error handling | Manual try/catch | Automatic with retries |
| Permissions | Manual validation | Built-in with hooks |
| Session management | Manual history tracking | Automatic with resumption |
| Progress tracking | Custom UI code | Built-in with streaming |
| Validation | Manual checks | Hooks (PreToolUse, PostToolUse) |
| Parallel execution | Not supported | Subagents |
| Sandboxing | Not available | Built-in command sandbox |
| Production ready | No | Yes |
| Code to write | ~400 lines | ~20 lines |
Recommendation: Use manual ReAct for learning, Claude Agent SDK for production.
Next Steps
For Learning:
- Experiment with this manual implementation
- Add custom tools for your use case
- Try different prompt engineering approaches
- Understand the ReAct loop mechanics thoroughly
For Production:
- Migrate to Claude Agent SDK - Start with the quickstart guide
- Use built-in hooks - Add validation with PreToolUse and PostToolUse hooks
- Enable permissions - Configure permission_mode for safety
- Add subagents - Use specialized agents for complex tasks
- Consider Plan-Execute-Verify - See our production pattern guide
Related Resources
- Claude Agent SDK Overview - Official production-ready agent framework
- Claude Agent SDK Quickstart - Get started in minutes
- AI Agent Case Study - Legal review use case
- Plan-Execute-Verify Pattern - Production-grade architecture
- ReAct Paper - Original research
- Anthropic Tool Use Guide
Conclusion
The ReAct pattern is perfect for learning how AI agents work. It's simple, transparent, and gets you building quickly.
This guide taught you the fundamentals by building from scratch. Now you understand:
- How agents reason and act in loops
- How to parse LLM responses and execute tools
- How to manage conversation history
- The limitations of simple agent architectures
For production systems, use the Claude Agent SDK which provides all these capabilities out-of-the-box with production-grade reliability, or consider the Plan-Execute-Verify pattern for complex workflows.
Start here, learn the fundamentals, then leverage the Agent SDK or graduate to more sophisticated architectures as your needs grow.