Quickstart: LangChain / LangGraph
LangChain provides an MCP adapter (langchain-mcp-adapters) that converts MCP tools into LangChain BaseTool instances. Wire Pipeworx in once and any LangChain or LangGraph agent can use it.
Install
pip install langchain langgraph langchain-anthropic langchain-mcp-adapters
Wire it up
from langchain_mcp_adapters.client import MultiServerMCPClient
from langgraph.prebuilt import create_react_agent
from langchain_anthropic import ChatAnthropic
client = MultiServerMCPClient({
    "pipeworx": {
        "transport": "streamable_http",
        "url": "https://gateway.pipeworx.io/mcp",
    }
})
tools = await client.get_tools()
agent = create_react_agent(
    ChatAnthropic(model="claude-sonnet-4-5-20250929"),
    tools,
)
response = await agent.ainvoke({
    "messages": [{"role": "user", "content": "Get Apple's latest 10-K"}]
})
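The snippet above uses top-level await, which works in notebooks and the asyncio REPL; in a plain script, wrap the calls in an async entrypoint. A minimal sketch, where the stub body stands in for the client and agent setup shown above:

```python
import asyncio

async def main() -> str:
    # In a real script: build MultiServerMCPClient, await client.get_tools(),
    # create the agent, and await agent.ainvoke(...) here.
    # This stub only demonstrates the entrypoint shape.
    return "agent ready"

result = asyncio.run(main())
print(result)
```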
The agent now has access to all Pipeworx tools as LangChain tools, including the meta-tools (ask_pipeworx, discover_tools, resolve_entity, compare_entities).
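If you want to treat the meta-tools differently (for example, always expose them while gating domain tools), you can partition the loaded list by name. A hedged sketch; `split_tools` and the stand-in objects are illustrative, not part of langchain-mcp-adapters:

```python
from types import SimpleNamespace

META_TOOLS = {"ask_pipeworx", "discover_tools", "resolve_entity", "compare_entities"}

def split_tools(tools):
    """Partition a tool list into Pipeworx meta-tools and domain tools."""
    meta = [t for t in tools if t.name in META_TOOLS]
    domain = [t for t in tools if t.name not in META_TOOLS]
    return meta, domain

# Stand-in objects; in practice pass the BaseTool list from client.get_tools()
fake = [SimpleNamespace(name=n) for n in ("ask_pipeworx", "edgar_company_filings")]
meta, domain = split_tools(fake)
print([t.name for t in meta])    # ['ask_pipeworx']
print([t.name for t in domain])  # ['edgar_company_filings']
```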
Scope to a vertical
client = MultiServerMCPClient({
    "pipeworx-pharma": {
        "transport": "streamable_http",
        "url": "https://gateway.pipeworx.io/mcp?vertical=pharma",
    }
})
Only pharma-relevant tools load — see context tax.
Multi-server setup
client = MultiServerMCPClient({
    "pipeworx_finance": {"transport": "streamable_http", "url": "https://gateway.pipeworx.io/mcp?vertical=fintech"},
    "pipeworx_pharma": {"transport": "streamable_http", "url": "https://gateway.pipeworx.io/mcp?vertical=pharma"},
    "pipeworx_research": {"transport": "streamable_http", "url": "https://gateway.pipeworx.io/mcp?task=academic+papers"},
})
tools = await client.get_tools() # all servers' tools, namespaced
Auth
{
    "transport": "streamable_http",
    "url": "https://gateway.pipeworx.io/mcp",
    "headers": {"Authorization": f"Bearer {pipeworx_token}"},
}
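The token usually comes from the environment rather than being hard-coded. A sketch, assuming a PIPEWORX_TOKEN environment variable (the variable name is illustrative):

```python
import os

# Hypothetical env var name; read it once at startup
pipeworx_token = os.environ.get("PIPEWORX_TOKEN", "")

pipeworx_config = {
    "transport": "streamable_http",
    "url": "https://gateway.pipeworx.io/mcp",
    "headers": {"Authorization": f"Bearer {pipeworx_token}"},
}
print(pipeworx_config["headers"]["Authorization"].startswith("Bearer "))
```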
Reading Pipeworx response metadata
Every Pipeworx tool response embeds _meta (cost, freshness, retry hints, examples, alternatives) inside the text content. With langchain-mcp-adapters the raw response is preserved as the tool’s output:
import json

async for event in agent.astream_events({"messages": [...]}, version="v2"):
    if event["event"] == "on_tool_end":
        text = event["data"]["output"].content[0].text
        # Parse the JSON-stringified result
        parsed = json.loads(text) if text.startswith("{") else {}
        # The wrapper carries _meta when it's a Pipeworx tool
        meta = parsed.get("_meta") or {}
        if meta.get("feedback_hint"):
            print(f"feedback nudge: {meta['feedback_hint']}")
Useful for production agents that need cost transparency, freshness gates, or auto-file feedback on errors.
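For production use, the _meta handling is easier to test if it lives in a small helper that tolerates non-JSON and non-Pipeworx outputs. A sketch; `extract_meta` and the sample payload are illustrative, not a real gateway response:

```python
import json

def extract_meta(text: str) -> dict:
    """Return the _meta envelope from a Pipeworx tool's text output, else {}."""
    if not text.lstrip().startswith("{"):
        return {}
    try:
        parsed = json.loads(text)
    except json.JSONDecodeError:
        return {}
    return parsed.get("_meta") or {}

sample = '{"result": "ok", "_meta": {"cost_credits": 2, "freshness": "24h"}}'
print(extract_meta(sample))        # {'cost_credits': 2, 'freshness': '24h'}
print(extract_meta("plain text"))  # {}
```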
Memory across runs
Pipeworx’s remember / recall / forget tools persist agent state across LangGraph runs when authenticated as the same account:
# Day 1 — pin a research target
await agent.ainvoke({"messages": [{"role": "user", "content": "Save 'focus_ticker' as AAPL."}]})
# Day 2 — pick up
await agent.ainvoke({"messages": [{"role": "user", "content": "What's our focus ticker, and pull its latest 10-K."}]})
# The agent calls recall({key: "focus_ticker"}) → "AAPL", then edgar_company_filings
See memory.
Passing serverInstructions manually
LangChain’s tool router doesn’t natively forward MCP serverInstructions — the gateway sends them on initialize but create_react_agent only sees tool definitions. For best results, pass them into the system prompt manually:
async with client.session("pipeworx") as session:
    init = await session.initialize()
    instructions = init.instructions or ""

agent = create_react_agent(
    ChatAnthropic(model="claude-sonnet-4-5-20250929"),
    tools,
    state_modifier=f"You are a research assistant.\n\n{instructions}",
)
This briefs the model on ask_pipeworx, discover_tools, compound _intel tools, and the prompt playbooks — all upfront context that improves first-call accuracy.
Caveats
- langchain-mcp-adapters lazy-loads tool definitions on first call. The first invocation has a small latency overhead while it fetches tools/list.
- LangGraph's checkpointer (memory between agent steps) is separate from Pipeworx's remember/recall (memory across user sessions). Use the checkpointer for in-flight state and Pipeworx memory for cross-session state.
- For long-running agents, scope to a vertical to keep tool count low; see reducing context.