ReAct Pattern: Multi-Provider Implementation
Overview
This guide shows how to build ReAct agents that work seamlessly across multiple LLM providers (Claude, GPT, Gemini, local models) without rewriting your agent logic.
Two Approaches:
- LangChain (Recommended) - Production-ready framework with built-in abstractions
- Manual Abstraction (Educational) - Build your own to understand the internals
Why Provider Abstraction Matters
Without Abstraction:
```python
# Locked to Claude
import anthropic

client = anthropic.Anthropic()
response = client.messages.create(model="claude-sonnet-4-5", ...)

# Want to switch to GPT? Rewrite everything!
import openai

client = openai.OpenAI()
response = client.chat.completions.create(model="gpt-4", ...)
```

With Abstraction:
```python
# Switch providers with one line
# llm = ChatAnthropic(model="claude-sonnet-4-5")
llm = ChatOpenAI(model="gpt-4-turbo")  # Just change this!

# Agent code stays the same
response = llm.invoke("Your prompt")
```

Benefits:
- Vendor Independence - Not locked to one provider
- Cost Optimization - Use cheaper models for simple tasks
- Reliability - Automatic fallback if one provider fails (see the fallback sketch after this list)
- A/B Testing - Compare model performance easily
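As a concrete example of the reliability benefit, LangChain's `with_fallbacks` (part of the interface covered in Approach 1 below) wraps a primary model with backups that are tried in order when a call raises. A minimal sketch, assuming the usual API keys are configured:

```python
from langchain_anthropic import ChatAnthropic
from langchain_openai import ChatOpenAI

primary = ChatAnthropic(model="claude-sonnet-4-5", temperature=0)
backup = ChatOpenAI(model="gpt-4-turbo", temperature=0)

# If the Claude call raises (rate limit, outage), the same input is retried on GPT
llm = primary.with_fallbacks([backup])
response = llm.invoke("Summarize this contract clause...")
```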
Approach 1: LangChain (Recommended)
Latest: LangChain 1.2.8 (2026) with LangGraph for production agents
Installation
```bash
pip install langchain==1.2.8 langchain-anthropic langchain-openai langchain-google-genai langchain-community
```

Step 1: Understanding LangChain's Unified Interface
LangChain provides a standard interface across all providers:
```python
from langchain_anthropic import ChatAnthropic
from langchain_openai import ChatOpenAI
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain_community.llms import Ollama

# All have the SAME interface
llm_claude = ChatAnthropic(model="claude-sonnet-4-5", temperature=0)
llm_gpt = ChatOpenAI(model="gpt-4-turbo", temperature=0)
llm_gemini = ChatGoogleGenerativeAI(model="gemini-1.5-pro", temperature=0)
llm_local = Ollama(model="mistral")

# Same method works for all
response_claude = llm_claude.invoke("Hello!")
response_gpt = llm_gpt.invoke("Hello!")
response_gemini = llm_gemini.invoke("Hello!")
```

Key Insight: Write once, run anywhere.
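Because every chat model exposes the same `invoke` method, provider choice can be reduced to a single configuration value. A minimal sketch; the `make_llm` helper and `LLM_PROVIDER` variable are illustrative, not part of LangChain:

```python
import os

from langchain_anthropic import ChatAnthropic
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain_openai import ChatOpenAI

def make_llm(provider: str, temperature: float = 0):
    """Illustrative factory: same interface regardless of provider."""
    if provider == "claude":
        return ChatAnthropic(model="claude-sonnet-4-5", temperature=temperature)
    if provider == "gpt":
        return ChatOpenAI(model="gpt-4-turbo", temperature=temperature)
    if provider == "gemini":
        return ChatGoogleGenerativeAI(model="gemini-1.5-pro", temperature=temperature)
    raise ValueError(f"Unknown provider: {provider}")

# Switch providers via an environment variable, no code changes
llm = make_llm(os.getenv("LLM_PROVIDER", "claude"))
```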
Step 2: Define Tools Once
Use LangChain's @tool decorator to define tools that work with all providers:
```python
from langchain_core.tools import tool
import os

@tool
def read_file(path: str) -> str:
    """Read a file from disk.

    Args:
        path: The file path to read
    """
    try:
        with open(path, 'r') as f:
            content = f.read()
        return f"Success: {content[:1000]}..."
    except Exception as e:
        return f"Error: {str(e)}"

@tool
def write_file(path: str, content: str) -> str:
    """Write content to a file.

    Args:
        path: The file path to write
        content: The content to write
    """
    try:
        os.makedirs(os.path.dirname(path) or ".", exist_ok=True)
        with open(path, 'w') as f:
            f.write(content)
        return f"Success: Wrote to {path}"
    except Exception as e:
        return f"Error: {str(e)}"

@tool
def list_files(folder: str) -> str:
    """List files in a directory.

    Args:
        folder: The folder path
    """
    try:
        files = os.listdir(folder)
        docs = [f for f in files if f.endswith(('.pdf', '.txt', '.md'))]
        return f"Found {len(docs)} documents: {', '.join(docs)}"
    except Exception as e:
        return f"Error: {str(e)}"

# These tools work with ANY provider!
tools = [read_file, write_file, list_files]
```

Tool Binding - Attach tools to any LLM:
```python
claude_with_tools = llm_claude.bind_tools(tools)
gpt_with_tools = llm_gpt.bind_tools(tools)
gemini_with_tools = llm_gemini.bind_tools(tools)

# All work the same way!
```
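A bound model decides on its own when to call a tool, and the response carries a provider-independent `tool_calls` list (LangChain normalizes each entry to a dict with `name`, `args`, and `id` keys). A quick way to see this, with illustrative output:

```python
response = claude_with_tools.invoke("List the files in /project/legal_docs")

# Standardized across providers: name, args, id
for call in response.tool_calls:
    print(call["name"], call["args"])
# e.g. list_files {'folder': '/project/legal_docs'}
```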
Step 3: Using create_react_agent (LangChain Classic)
The traditional approach using AgentExecutor:
```python
from langchain.agents import create_react_agent, AgentExecutor
from langchain_core.prompts import PromptTemplate

# Create ReAct prompt template
react_prompt = PromptTemplate.from_template("""You are a legal review assistant. Answer the following question as best you can.

You have access to the following tools:

{tools}

Use the following format:

Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question

Begin!

Question: {input}
Thought: {agent_scratchpad}""")

# Choose your provider (just change this line!)
llm = ChatAnthropic(model="claude-sonnet-4-5", temperature=0)
# llm = ChatOpenAI(model="gpt-4-turbo", temperature=0)
# llm = ChatGoogleGenerativeAI(model="gemini-1.5-pro", temperature=0)

# Create agent
agent = create_react_agent(
    llm=llm,
    tools=tools,
    prompt=react_prompt
)

# Create executor
agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    verbose=True,
    handle_parsing_errors=True,
    max_iterations=20
)

# Run!
result = agent_executor.invoke({
    "input": "Review all legal documents in /project/legal_docs and create LEGAL_NOTICES.md"
})

print(result["output"])
```

Step 4: Custom ReAct Loop (More Control)
For fine-grained control, build your own loop:
```python
from langchain_core.messages import HumanMessage, AIMessage, SystemMessage, ToolMessage

class CustomReActAgent:
    """Custom ReAct agent with LangChain components"""

    def __init__(self, llm, tools: list):
        self.llm = llm
        self.tools = tools
        self.tool_map = {tool.name: tool for tool in tools}
        self.llm_with_tools = llm.bind_tools(tools)

    def run(self, user_request: str, max_iterations: int = 20) -> str:
        """Run the ReAct loop"""

        messages = [
            SystemMessage(content="""You are a legal review assistant.
Work step-by-step:
1. Scan documents in folder
2. Review each document
3. Create LEGAL_NOTICES.md with findings
4. Create REVIEW_SUMMARY.md with summary

When completely done, respond with your final summary (don't call more tools)."""),
            HumanMessage(content=user_request)
        ]

        for iteration in range(1, max_iterations + 1):
            print(f"\n{'='*60}")
            print(f"Iteration {iteration}/{max_iterations}")
            print(f"{'='*60}")

            # Call LLM
            response = self.llm_with_tools.invoke(messages)

            # Check if done (no more tool calls)
            if not response.tool_calls:
                print("\n✅ COMPLETED")
                return response.content

            # Add AI response to history
            messages.append(response)

            # Execute tool calls
            for tool_call in response.tool_calls:
                tool_name = tool_call["name"]
                tool_args = tool_call["args"]

                print(f"\n⚡ Action: {tool_name}")
                print(f"   Args: {tool_args}")

                # Execute
                if tool_name in self.tool_map:
                    result = self.tool_map[tool_name].invoke(tool_args)
                else:
                    result = f"Error: Unknown tool {tool_name}"

                print(f"📄 Observation: {result[:200]}...")

                # Add tool result
                messages.append(ToolMessage(
                    content=str(result),
                    tool_call_id=tool_call["id"]
                ))

        return "Failed to complete within iteration limit"

# Usage - easily switch providers!
llm = ChatAnthropic(model="claude-sonnet-4-5", temperature=0)
# llm = ChatOpenAI(model="gpt-4-turbo", temperature=0)
# llm = ChatGoogleGenerativeAI(model="gemini-1.5-pro", temperature=0)

agent = CustomReActAgent(llm, tools)
result = agent.run("Review all legal documents in /project/legal_docs")
```

Step 5: LangGraph (Production Recommended)
New in 2026: LangGraph is now the recommended framework for production agents
```python
# Modern approach with LangGraph
from langgraph.prebuilt import create_react_agent as create_agent_langgraph

# Choose provider
llm = ChatAnthropic(model="claude-sonnet-4-5")

# Create agent with LangGraph (better defaults, more robust)
agent = create_agent_langgraph(
    model=llm,
    tools=tools,
    # Built-in features:
    # - Automatic retry logic
    # - Error handling middleware
    # - State management
    # - Streaming support
)

# Run agent
result = agent.invoke({
    "messages": [HumanMessage(content="Review legal docs in /project/legal_docs")]
})

print(result["messages"][-1].content)
```

LangGraph Benefits:
- ✅ More robust error handling
- ✅ Better retry logic with exponential backoff
- ✅ Native streaming support (see the streaming sketch below)
- ✅ Modular agent design
- ✅ Compatible with MCP (Model Context Protocol)
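As a sketch of the streaming support noted above: compiled LangGraph agents expose a `stream` method, and with `stream_mode="values"` each step yields the full message state. Details vary by LangGraph version, so treat this as a sketch rather than the definitive API:

```python
# Stream intermediate steps instead of waiting for the final result
for step in agent.stream(
    {"messages": [HumanMessage(content="Review legal docs in /project/legal_docs")]},
    stream_mode="values",  # yield the full state after each step
):
    step["messages"][-1].pretty_print()
```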
Approach 2: Manual Abstraction (Educational)
Building the abstraction from scratch shows you what LangChain does internally.
Step 1: Standard Data Models
Define provider-agnostic data structures:
```python
from dataclasses import dataclass
from typing import List, Dict, Any, Optional, Literal
from enum import Enum

class MessageRole(Enum):
    USER = "user"
    ASSISTANT = "assistant"
    SYSTEM = "system"

@dataclass
class Message:
    """Standardized message format"""
    role: MessageRole
    content: str

@dataclass
class Tool:
    """Standardized tool definition"""
    name: str
    description: str
    parameters: Dict[str, Any]  # JSON Schema

@dataclass
class ToolCall:
    """Standardized tool call"""
    id: str
    name: str
    arguments: Dict[str, Any]

@dataclass
class LLMResponse:
    """Standardized LLM response"""
    content: str
    tool_calls: List[ToolCall]
    finish_reason: Literal["stop", "tool_calls", "length"]
    metadata: Dict[str, Any]  # Usage stats
```
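To make the target shape concrete, here is what an adapter would hand back after normalizing a provider response; the values are illustrative:

```python
# Illustrative instance of the standardized response
response = LLMResponse(
    content="",
    tool_calls=[ToolCall(id="call_1", name="list_files",
                         arguments={"folder": "/docs"})],
    finish_reason="tool_calls",
    metadata={"usage": {"input_tokens": 812, "output_tokens": 64}},
)

if response.finish_reason == "tool_calls":
    for call in response.tool_calls:
        print(f"Provider wants {call.name} with {call.arguments}")
```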
Step 2: Abstract Provider Interface

```python
from abc import ABC, abstractmethod

class LLMProvider(ABC):
    """Abstract base for all providers"""

    @abstractmethod
    def complete(self, messages: List[Message], tools: List[Tool]) -> LLMResponse:
        """Send request and get standardized response"""
        pass

    @abstractmethod
    def supports_native_tools(self) -> bool:
        """Does this provider support native tool calling?"""
        pass
```

Step 3: Claude Adapter (Example)
```python
import anthropic

class ClaudeProvider(LLMProvider):
    """Adapter for Anthropic's Claude"""

    def __init__(self, api_key: str):
        self.client = anthropic.Anthropic(api_key=api_key)

    def complete(self, messages: List[Message], tools: List[Tool]) -> LLMResponse:
        """Convert to Claude format and back"""

        # Convert messages
        claude_messages = [
            {"role": m.role.value, "content": m.content}
            for m in messages
            if m.role != MessageRole.SYSTEM
        ]

        # Extract system
        system = "\n\n".join([
            m.content for m in messages
            if m.role == MessageRole.SYSTEM
        ])

        # Convert tools
        claude_tools = [
            {
                "name": t.name,
                "description": t.description,
                "input_schema": t.parameters
            }
            for t in tools
        ] if tools else None

        # Call API (pass system/tools only when present, rather than None)
        kwargs = {}
        if system:
            kwargs["system"] = system
        if claude_tools:
            kwargs["tools"] = claude_tools

        response = self.client.messages.create(
            model="claude-sonnet-4-5",
            max_tokens=4000,
            messages=claude_messages,
            **kwargs
        )

        # Convert response back to standard format
        content = ""
        tool_calls = []

        for block in response.content:
            if hasattr(block, 'text'):
                content += block.text
            elif block.type == "tool_use":
                tool_calls.append(ToolCall(
                    id=block.id,
                    name=block.name,
                    arguments=block.input
                ))

        return LLMResponse(
            content=content,
            tool_calls=tool_calls,
            finish_reason="tool_calls" if tool_calls else "stop",
            metadata={"usage": {
                "input_tokens": response.usage.input_tokens,
                "output_tokens": response.usage.output_tokens
            }}
        )

    def supports_native_tools(self) -> bool:
        return True
```

Similar adapters can be built for OpenAI, Gemini, and local models (see complete implementation below).
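To show how a second adapter follows the same shape, here is a hedged sketch of an OpenAI version. It assumes the Chat Completions tool-calling format of recent `openai` SDK releases; verify the field names against your installed version:

```python
import json
import openai

class OpenAIProvider(LLMProvider):
    """Sketch of an adapter for OpenAI's Chat Completions API"""

    def __init__(self, api_key: str):
        self.client = openai.OpenAI(api_key=api_key)

    def complete(self, messages: List[Message], tools: List[Tool]) -> LLMResponse:
        # OpenAI accepts the system prompt as a regular message, so no split is needed
        oa_messages = [{"role": m.role.value, "content": m.content} for m in messages]

        # Each tool schema is wrapped in a {"type": "function", ...} envelope
        kwargs = {}
        if tools:
            kwargs["tools"] = [
                {"type": "function", "function": {
                    "name": t.name,
                    "description": t.description,
                    "parameters": t.parameters,
                }}
                for t in tools
            ]

        response = self.client.chat.completions.create(
            model="gpt-4-turbo", messages=oa_messages, **kwargs
        )

        message = response.choices[0].message
        # OpenAI returns tool arguments as a JSON string, so parse them
        tool_calls = [
            ToolCall(id=tc.id, name=tc.function.name,
                     arguments=json.loads(tc.function.arguments))
            for tc in (message.tool_calls or [])
        ]

        return LLMResponse(
            content=message.content or "",
            tool_calls=tool_calls,
            finish_reason="tool_calls" if tool_calls else "stop",
            metadata={"usage": {
                "input_tokens": response.usage.prompt_tokens,
                "output_tokens": response.usage.completion_tokens,
            }},
        )

    def supports_native_tools(self) -> bool:
        return True
```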
Step 4: Provider-Agnostic Agent
```python
class AgnosticReActAgent:
    """ReAct agent that works with any provider"""

    def __init__(self, provider: LLMProvider):
        self.provider = provider

    def run(self, user_request: str) -> Optional[str]:
        """Run agent with any provider"""

        messages = [
            Message(MessageRole.SYSTEM, "You are a legal review assistant..."),
            Message(MessageRole.USER, user_request)
        ]

        tools = self._define_tools()

        for turn in range(20):
            # Call provider (abstracted!)
            response = self.provider.complete(messages, tools)

            if "<final_answer>" in response.content:
                return self._extract_final(response.content)

            # Execute tool calls
            for tool_call in response.tool_calls:
                result = self._execute_tool(tool_call.name, tool_call.arguments)
                messages.append(Message(MessageRole.USER, f"<observation>{result}</observation>"))

        return None
```

Usage:
```python
# Switch providers easily
# provider = ClaudeProvider(os.getenv("ANTHROPIC_API_KEY"))
# provider = OpenAIProvider(os.getenv("OPENAI_API_KEY"))
provider = GeminiProvider(os.getenv("GOOGLE_API_KEY"))

agent = AgnosticReActAgent(provider)
result = agent.run("Review legal docs")
```

Provider Comparison Matrix
| Provider | Native Tools | Speed | Cost | Best For |
|---|---|---|---|---|
| Claude Sonnet 4.5 | ✅ Yes | Fast | $$ | General purpose, high quality |
| Claude Opus 4.6 | ✅ Yes | Slow | $$$$ | Complex reasoning, planning |
| Claude Haiku 4.5 | ✅ Yes | Very Fast | $ | Simple tasks, verification |
| GPT-4 Turbo | ✅ Yes | Fast | $$$ | General purpose |
| GPT-3.5 Turbo | ✅ Yes | Very Fast | $ | Simple tasks |
| Gemini 1.5 Pro | ✅ Yes | Fast | $$ | Multimodal, long context |
| Gemini 1.5 Flash | ✅ Yes | Very Fast | $ | Fast inference |
| Mistral (Local) | ⚠️ Via XML | Depends | Free | Privacy, offline |
| Llama 3 (Local) | ⚠️ Via XML | Depends | Free | Privacy, offline |
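For the local models marked "Via XML" above, tool calling is typically emulated: the system prompt instructs the model to emit a structured block, which your loop parses before executing the tool. A minimal sketch; the `<tool_call>` tag convention is illustrative, and any unambiguous format works:

```python
import json
import re

def parse_xml_tool_call(text: str):
    """Extract an emulated tool call such as:
    <tool_call>{"name": "list_files", "arguments": {"folder": "/docs"}}</tool_call>
    Returns (name, arguments), or None if the model answered in plain text."""
    match = re.search(r"<tool_call>(.*?)</tool_call>", text, re.DOTALL)
    if not match:
        return None
    payload = json.loads(match.group(1))
    return payload["name"], payload["arguments"]
```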
Complete Implementations
LangChain Implementation (Recommended)
See full working code in the sources: LangChain Agents Documentation, create_react_agent API Reference
Manual Implementation
The complete manual abstraction implementation (~500 lines) includes:
- All adapter classes (Claude, OpenAI, Gemini, Local)
- Provider-agnostic agent
- Tool execution logic
- Error handling
See the model-agnostic archive for complete code or adapt the step-by-step examples above.
Production Recommendations
When to Use What
Use LangChain/LangGraph when:
- ✅ Building production applications
- ✅ Need robust error handling
- ✅ Want to switch providers easily
- ✅ Benefit from ecosystem (tools, memory, chains)
- ✅ Need rapid development
Build Manual Abstraction when:
- 📚 Learning how agents work internally
- 🔧 Need very specific control
- ⚡ Performance is critical (minimal overhead)
- 🔒 Security requires avoiding dependencies
Testing Across Providers
```python
import pytest

def test_agent_all_providers():
    """Verify agent works with all providers"""

    providers = {
        "claude": ChatAnthropic(model="claude-sonnet-4-5"),
        "gpt": ChatOpenAI(model="gpt-4-turbo"),
        "gemini": ChatGoogleGenerativeAI(model="gemini-1.5-pro"),
    }

    for name, llm in providers.items():
        print(f"\nTesting with {name}...")

        agent = CustomReActAgent(llm, tools)
        result = agent.run("List files in /test")

        assert result is not None, f"{name} failed"
        print(f"✅ {name} passed")
```
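A more idiomatic pytest variant parametrizes over providers, so each one passes or fails as a separate test case. A sketch, assuming the relevant API keys are set:

```python
@pytest.mark.parametrize("name,llm", [
    ("claude", ChatAnthropic(model="claude-sonnet-4-5")),
    ("gpt", ChatOpenAI(model="gpt-4-turbo")),
    ("gemini", ChatGoogleGenerativeAI(model="gemini-1.5-pro")),
])
def test_agent_per_provider(name, llm):
    agent = CustomReActAgent(llm, tools)
    assert agent.run("List files in /test") is not None, f"{name} failed"
```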
{name} passed")Cost Optimization Strategy
```python
def get_llm_for_task(complexity: str):
    """Choose model based on task complexity"""

    if complexity == "low":
        # Use cheapest option
        return ChatAnthropic(model="claude-haiku-4-5")
        # or ChatOpenAI(model="gpt-3.5-turbo")

    elif complexity == "medium":
        # Balance cost and quality
        return ChatAnthropic(model="claude-sonnet-4-5")
        # or ChatOpenAI(model="gpt-4-turbo")

    else:  # high
        # Use most capable
        return ChatAnthropic(model="claude-opus-4-6")
        # or ChatOpenAI(model="gpt-4")

# Usage
llm = get_llm_for_task("medium")
agent = CustomReActAgent(llm, tools)
```

Key Takeaways
- LangChain provides abstraction for free - Use it unless you have specific reasons not to
- Provider switching is trivial - Change one line of code
- Test with multiple providers - Behavior can differ subtly
- Local models need XML fallbacks - Most don't support native tool calling
- LangGraph is the future - Use it for new production agents
Next Steps
Section titled âNext Stepsâ- Start Simple: Use LangChainâs
create_react_agentfor quick prototypes - Go Production: Migrate to LangGraphâs
create_agentfor robust applications - Learn Internals: Build manual abstraction to understand what LangChain does
- Advanced Patterns: Explore Plan-Execute-Verify for production systems