Building multi-agent orchestration patterns with LangChain

This guide provides a structured approach to implementing AI agents with tool integration, focusing on reliability, observability, and cost control. Follow these steps to build and debug agent workflows with LangChain and LangGraph.

2-3 hours · 5 steps

1. Define tool function schemas

Create explicit argument schemas for every tool your agent can call. In LangChain, define a Pydantic model and pass it as the tool's args_schema so arguments are validated before the tool runs.

tool_schemas.py
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_core.tools import StructuredTool

class WeatherInput(BaseModel):
    location: str = Field(description="City name")
    unit: str = Field(description="Temperature unit, 'C' or 'F'")

def get_weather(location: str, unit: str) -> str:
    ...  # replace with a real weather lookup

weather_tool = StructuredTool.from_function(
    func=get_weather,
    name="get_weather",
    description="Look up the current weather for a city",
    args_schema=WeatherInput,
)

⚠ Common Pitfalls

  • Using vague schema definitions, which leads to hallucinated tool calls (see the binding sketch below)
  • Forgetting to include required fields in function parameters
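
Binding the schema to the model is what actually constrains tool calls to it. A minimal sketch, assuming the weather_tool defined above and an OpenAI API key in the environment; the model name and filename are placeholders:

bind_tools_example.py
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")
llm_with_tools = llm.bind_tools([weather_tool])

# Tool calls must now match WeatherInput's fields and types
response = llm_with_tools.invoke("What's the weather in Paris, in Celsius?")
print(response.tool_calls)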

2. Implement agent state management

Create a state class that tracks tool calls, responses, and execution context. Use LangGraph's StateGraph to define workflow transitions.

agent_state.py
from typing import TypedDict

from langgraph.graph import StateGraph, START, END

class AgentState(TypedDict):
    messages: list
    tool_calls: list
    context: dict

def tool_call_node(state: AgentState) -> dict:
    # Execute pending tool calls here and merge the results into state
    return {"messages": state["messages"]}

workflow = StateGraph(AgentState)
workflow.add_node("tool_call", tool_call_node)
workflow.add_edge(START, "tool_call")
workflow.add_edge("tool_call", END)
app = workflow.compile()

⚠ Common Pitfalls

  • Not persisting state between steps, causing context loss (see the checkpointer sketch below)
  • Incorrect edge definitions leading to workflow deadlocks
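
Checkpointers address the first pitfall above. A minimal sketch using LangGraph's in-memory MemorySaver; a production system would swap in a persistent backend, and the thread_id value is a placeholder:

persistence.py
from langgraph.checkpoint.memory import MemorySaver

# Recompile with a checkpointer so state survives between steps
app = workflow.compile(checkpointer=MemorySaver())

# The thread_id keys the saved state; reusing it resumes the same run
config = {"configurable": {"thread_id": "run-1"}}
app.invoke({"messages": [], "tool_calls": [], "context": {}}, config=config)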

3. Add observability hooks

Integrate LangSmith to trace agent execution. Attach a LangChainTracer callback so every tool call and model response is logged for debugging.

observability.py
from langchain_core.tracers import LangChainTracer
from langchain_openai import ChatOpenAI

# Group all runs under one LangSmith project
tracer = LangChainTracer(project_name="agent-debugging")

# Attach the tracer to the LLM so every call is traced
llm = ChatOpenAI(callbacks=[tracer])

⚠ Common Pitfalls

  • Forgetting to set the project name, leaving traces unorganized (see the environment-variable sketch below)
  • Not capturing all tool call outputs in logs
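
The same tracing can be enabled through environment variables instead of code, which also covers calls that never receive an explicit callback list. The API key value is a placeholder:

langsmith_env.py
import os

os.environ["LANGCHAIN_TRACING_V2"] = "true"  # turn on LangSmith tracing globally
os.environ["LANGCHAIN_API_KEY"] = "<your-langsmith-api-key>"  # placeholder
os.environ["LANGCHAIN_PROJECT"] = "agent-debugging"  # names the trace project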

4. Implement cost monitoring

Track token usage and API cost with a callback handler. Keep a running counter and raise an alert when a threshold is exceeded.

cost_monitor.py
from langchain_core.callbacks import BaseCallbackHandler
from langchain_core.outputs import LLMResult

PRICE_PER_1K_TOKENS = 0.002  # placeholder rate; varies by provider and model
TOKEN_BUDGET = 100_000       # placeholder budget; tune to your workload

class CostMonitor(BaseCallbackHandler):
    def __init__(self):
        self.total_cost = 0.0
        self.token_count = 0

    def on_llm_end(self, response: LLMResult, **kwargs) -> None:
        # OpenAI-style models report usage in llm_output["token_usage"]
        usage = (response.llm_output or {}).get("token_usage", {})
        tokens = usage.get("total_tokens", 0)
        self.token_count += tokens
        self.total_cost += tokens / 1000 * PRICE_PER_1K_TOKENS
        if self.token_count > TOKEN_BUDGET:
            raise RuntimeError("Token budget exceeded")

⚠ Common Pitfalls

  • Not accounting for different pricing models across LLM providers (see the usage sketch below)
  • Ignoring cost monitoring during long-running background tasks
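
A usage sketch for the monitor above; remember that PRICE_PER_1K_TOKENS is a placeholder, so a real deployment would keep a per-model rate table:

cost_usage.py
from langchain_openai import ChatOpenAI

monitor = CostMonitor()
llm = ChatOpenAI(callbacks=[monitor])

try:
    llm.invoke("Summarize the latest tool results.")
except RuntimeError:
    ...  # alert, then pause or downgrade the workflow
print(f"{monitor.token_count} tokens, ${monitor.total_cost:.4f} so far")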

5. Add human-in-the-loop validation

Implement a step that requires human approval for critical decisions. Have the tool return a 'needs_approval' flag that routes the workflow to a review node.

approval_flow.py
def approval_node(state):
    """Flag tool responses that need a human sign-off before proceeding."""
    tool_response = state["tool_response"]
    if tool_response.get("needs_approval"):
        return {"approval_required": True, "content": tool_response["content"]}
    return {"approval_required": False}

⚠ Common Pitfalls

  • Not handling approval timeouts, which stalls the workflow (see the interrupt sketch below)
  • Skipping validation for non-critical decisions
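
LangGraph can pause the graph itself at the approval step and resume once a human signs off. A sketch assuming the AgentState, tool_call_node, and approval_node from earlier steps, with the tool response carried in state; the thread_id is a placeholder:

interrupt_flow.py
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph, START, END

workflow = StateGraph(AgentState)
workflow.add_node("tool_call", tool_call_node)
workflow.add_node("approval", approval_node)
workflow.add_edge(START, "tool_call")
workflow.add_edge("tool_call", "approval")
workflow.add_edge("approval", END)

# Pause before the approval node; state is checkpointed at the pause
app = workflow.compile(checkpointer=MemorySaver(), interrupt_before=["approval"])

config = {"configurable": {"thread_id": "run-1"}}
app.invoke({"messages": [], "tool_calls": [], "context": {}}, config=config)  # runs, then pauses

# Once a human approves, resume the same thread from the checkpoint
app.invoke(None, config=config)  # None resumes instead of starting over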

What you built

By following these steps, you've implemented a robust agent workflow with proper tool integration, observability, and safety controls. Regularly review logs in LangSmith and adjust cost thresholds based on usage patterns.