# Quickstart: OpenAI Agents SDK
The OpenAI Agents SDK supports MCP servers as tool sources. Pipeworx plugs in like any other MCP server.
## Install

```bash
pip install openai-agents
```
## Wire it up

```python
from agents import Agent, Runner
from agents.mcp import MCPServerSse

pipeworx = MCPServerSse(
    name="pipeworx",
    params={"url": "https://gateway.pipeworx.io/mcp"},
)

agent = Agent(
    name="ResearchAgent",
    instructions="You are a research assistant. Use Pipeworx for live data.",
    mcp_servers=[pipeworx],
)

# Run: connect the server for the duration of the run
async with pipeworx:
    result = await Runner.run(agent, "What is the current 30-year mortgage rate?")
    print(result.final_output)
```
The agent automatically calls Pipeworx's tools when relevant. The gateway's `serverInstructions` brief the model on `ask_pipeworx`, `discover_tools`, etc.
## Scope by task

For tighter context budgets:

```python
pipeworx_housing = MCPServerSse(
    name="pipeworx-housing",
    params={"url": "https://gateway.pipeworx.io/mcp?vertical=housing"},
)
```

Only ~20 housing tools surface in `tools/list`. See context tax.
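Scoped URLs can also be built programmatically. A minimal sketch, using Python's standard `urllib.parse`; the `vertical` and `task` query parameters are the ones shown above, and the helper name is illustrative:

```python
from urllib.parse import urlencode

GATEWAY = "https://gateway.pipeworx.io/mcp"

def scoped_url(base: str = GATEWAY, **scopes: str) -> str:
    """Append scope query params (e.g. vertical=housing) to the gateway URL."""
    return f"{base}?{urlencode(scopes)}" if scopes else base

print(scoped_url(vertical="housing"))
print(scoped_url(task="academic research"))  # spaces are percent-encoded as '+'
```

`urlencode` handles the escaping, so a task scope like `academic research` becomes `task=academic+research`, matching the URLs above.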
## Multiple gateways

You can mount Pipeworx multiple times with different scopes:

```python
agent = Agent(
    ...,
    mcp_servers=[
        MCPServerSse(name="pw-finance", params={"url": "https://gateway.pipeworx.io/mcp?vertical=fintech"}),
        MCPServerSse(name="pw-pharma", params={"url": "https://gateway.pipeworx.io/mcp?vertical=pharma"}),
        MCPServerSse(name="pw-research", params={"url": "https://gateway.pipeworx.io/mcp?task=academic+research"}),
    ],
)
```
The Agents SDK namespaces tools by server, so there's no name collision even with overlapping pack content.
## Authentication

Pass headers via `params`:

```python
MCPServerSse(
    name="pipeworx",
    params={
        "url": "https://gateway.pipeworx.io/mcp",
        "headers": {"Authorization": f"Bearer {pipeworx_token}"},
    },
)
```
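In practice the token usually comes from the environment rather than source code. A small sketch; the `PIPEWORX_TOKEN` variable name and the helper are assumptions, but the returned dict matches the `params` shape above:

```python
import os

def pipeworx_params(url: str = "https://gateway.pipeworx.io/mcp") -> dict:
    """Build the params dict for MCPServerSse, attaching a bearer token if one is set."""
    params = {"url": url}
    token = os.environ.get("PIPEWORX_TOKEN")  # assumed env var name
    if token:
        params["headers"] = {"Authorization": f"Bearer {token}"}
    return params
```

Then `MCPServerSse(name="pipeworx", params=pipeworx_params())` works both locally (no token, no header) and in production.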
## Pipeworx-aware system prompt

The gateway's `serverInstructions` are excellent agent guidance ("use `ask_pipeworx` for plain-English routing", "prefer compound `_intel` tools", etc.). The Agents SDK reads them automatically on connect, but if you want to reinforce them in the agent's primary instructions:

```python
async with pipeworx as pw:
    init = await pw.session.initialize()  # returns capabilities + instructions
    agent = Agent(
        name="ResearchAgent",
        instructions=f"You are a research assistant.\n\n{init.instructions}",
        mcp_servers=[pipeworx],
    )
```
Without this, the SDK still passes instructions to the model — but as a separate field that some model versions weight less than the primary instructions.
## Reading response metadata

Every Pipeworx tool response includes `_meta` with cost, freshness, retry hints, and (on errors) examples and alternatives. The Agents SDK surfaces raw tool results through its lifecycle hooks (`AgentHooks.on_tool_end`):

```python
import json

from agents import Agent, AgentHooks

class MetaLogger(AgentHooks):
    async def on_tool_end(self, context, agent, tool, result):
        # Pipeworx results are JSON; _meta rides alongside the payload
        try:
            payload = json.loads(result)
        except (TypeError, ValueError):
            return
        meta = payload.get("_meta", {})
        if "cache" in meta:
            print(f"  freshness: {meta['cache'].get('fresh_until')}")
        if "feedback_hint" in meta:
            print(f"  feedback: {meta['feedback_hint']}")

agent = Agent(name="ResearchAgent", hooks=MetaLogger(), mcp_servers=[pipeworx])
```
Useful for production agents that need to surface freshness to humans or auto-file feedback on errors.
## Memory across runs

Pipeworx exposes `remember` / `recall` / `forget` as tools. Agents authenticated as the same account share state across runs:

```python
# Run 1
await Runner.run(agent, "Remember the focus company for this research is AAPL.")

# Later, separate run, same account
result = await Runner.run(agent, "Pick up where I left off — the focus company.")
# Agent calls recall({key: "..."}) and continues
```

See memory.
## Caveats

- The OpenAI Agents SDK reads tool definitions on every turn unless you cache them (`cache_tools_list=True` on the server). For heavy use, scope by vertical/task or batch tool calls within a turn.
- Pipeworx includes `_meta.cache.fresh_until` in `tools/call` responses — your logic can decide whether to re-call vs. trust prior data. See caching and freshness.
- The SDK's default retry behavior re-issues the same tool call on error; Pipeworx returns `_meta.examples` and `_meta.alternatives` on errors so a smarter retry can pivot. See error recovery.