Creating Framework Integrations
This guide explains how to create a new framework adapter for the Thenvoi SDK using the composition-based architecture.
Architecture Overview
The composition pattern separates concerns:
- Agent: Manages platform connection, event loop, room lifecycle
- Adapter: Handles LLM interaction for your framework
- Tools: Platform capabilities exposed to the LLM (`send_message`, `add_participant`, etc.)
Critical Concept: Platform tools like `send_message` are called BY THE LLM, not by your adapter. Your adapter's job is to give tools to the LLM and let it decide when to use them.
What the Platform Guarantees
Before diving into implementation, understand what the platform provides:
- `history` is already converted by your `HistoryConverter` (or raw `HistoryProvider` if none is set)
- `participants_msg` is only set when the participant list has changed since the last message
- `is_session_bootstrap` means "first message delivery for this room session", not "first message ever in the room"
- Adapters should NOT call `send_message` directly for normal responses; let the LLM decide via tool calls. Direct calls are only for emergency/fallback behavior.
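The field names below come straight from the guarantees above; the container class itself is purely illustrative, a sketch of the payload shape rather than a confirmed SDK type:

```python
# Hypothetical payload container. Field names mirror the guarantees
# above; the class itself is illustrative only.
from dataclasses import dataclass

@dataclass
class IncomingMessage:
    text: str
    history: list                          # already converted by your HistoryConverter
    participants_msg: dict | None = None   # set only when the roster changed
    is_session_bootstrap: bool = False     # first delivery for this room session
```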
Two Patterns for Tool Execution
The real difference (one sentence):
- Pattern 1: Your framework runs the agent loop and calls tools itself
- Pattern 2: You run the agent loop and call tools yourself
Everything else is detail.
A concrete example, using `send_message`:
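With Pattern 1, the LLM emits a `send_message` tool call and the framework's agent loop executes it for you. With Pattern 2, you see the same tool call in the raw LLM response and invoke `tools.execute_tool_call("send_message", args)` yourself.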
Rule of thumb:
- Framework already knows how to run agents → Pattern 1
- Raw LLM API (Anthropic, OpenAI) → Pattern 2
- Unsure → Pattern 2 (it always works)
Pattern 1: Framework Manages Tools (LangGraph-style)
When your framework has its own tool execution loop (like LangGraph’s ReAct agent):
- Convert `AgentTools` to framework-specific tool format
- Pass the tools to the framework/graph
- The framework calls tools internally as part of its agent loop
Example: LangGraph adapter
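A minimal sketch, assuming LangGraph's prebuilt ReAct agent and the `agent_tools_to_langchain` converter described under Step 3 below; the `handle_message` hook is an assumed adapter interface, not a confirmed SDK signature:

```python
# Sketch of a Pattern 1 adapter. The handle_message hook is an assumed
# interface; agent_tools_to_langchain is sketched under Step 3 below.
from langgraph.prebuilt import create_react_agent

class LangGraphAdapter:
    def __init__(self, model):
        self.model = model

    async def handle_message(self, tools, history, text):
        # Convert platform tools to LangChain tools; LangGraph's agent
        # loop will execute them whenever the LLM emits a tool call.
        lc_tools = agent_tools_to_langchain(tools)
        graph = create_react_agent(self.model, lc_tools)

        # history is already in framework format (see "What the
        # Platform Guarantees" above).
        await graph.ainvoke({"messages": history + [("user", text)]})
```

Note that the adapter never calls `send_message` itself: if the LLM wants to reply, it invokes the tool inside the graph.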
Pattern 2: Adapter Manages Tool Loop (Anthropic-style)
When you need to manage the tool execution loop yourself:
- Get tool schemas via `tools.get_tool_schemas("openai")` or `tools.get_tool_schemas("anthropic")`
- Pass the schemas to the LLM along with the messages
- When the LLM returns tool calls, execute them via `tools.execute_tool_call(name, args)`
- Append both the assistant's tool call AND the tool result to history
- Loop until the LLM stops calling tools
Note: Some LLM APIs return `arguments` as a JSON string instead of a dict. Parse with `json.loads()` if needed.
Example: Anthropic adapter
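A condensed sketch using the official `anthropic` client, without error handling (a hardened version appears under "Advanced" below); the `handle_message` hook is the same assumed interface as above:

```python
import anthropic

class AnthropicAdapter:
    def __init__(self, model):
        self.client = anthropic.AsyncAnthropic()
        self.model = model

    async def handle_message(self, tools, history, text):
        messages = history + [{"role": "user", "content": text}]
        schemas = tools.get_tool_schemas("anthropic")

        while True:
            response = await self.client.messages.create(
                model=self.model,
                max_tokens=1024,
                messages=messages,
                tools=schemas,
            )
            if response.stop_reason != "tool_use":
                # Done. Normal replies have already gone out via the
                # send_message tool, so nothing is sent here.
                break

            # Append the assistant turn containing the tool calls...
            messages.append({"role": "assistant", "content": response.content})

            # ...then execute each call and feed the results back.
            results = []
            for block in response.content:
                if block.type == "tool_use":
                    output = await tools.execute_tool_call(block.name, block.input)
                    results.append({
                        "type": "tool_result",
                        "tool_use_id": block.id,
                        "content": str(output),
                    })
            messages.append({"role": "user", "content": results})
```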
Note: Tool-result injection is provider-specific; use whatever your client expects (Anthropic uses `role=user` with content blocks; OpenAI uses `role=tool`).
Sending Events
Events report execution status to the platform. There are two ways events get sent:
LLM-Initiated Events (via tool)
The `send_event` tool is exposed to the LLM for sharing thoughts, errors, and task progress; the LLM decides when to use it.
The LLM calls this just like any other tool:
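For instance, a parsed `send_event` tool call might look like the following; the argument names are an assumption about the tool's schema:

```python
# Hypothetical parsed tool call; argument names are assumptions.
{
    "name": "send_event",
    "arguments": {
        "event_type": "thought",
        "content": "Checking the calendar before answering.",
    },
}
```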
This is already handled when you convert tools via `agent_tools_to_langchain()` or pass schemas via `get_tool_schemas()`.
Adapter-Initiated Events (direct call)
Your adapter calls `tools.send_event()` directly to report tool execution status:
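A sketch; the `send_event` signature and payload shape are assumptions:

```python
# Report status around a tool execution; payload shape is an assumption.
await tools.send_event("tool_call", {"tool": name, "status": "started"})
result = await tools.execute_tool_call(name, args)
await tools.send_event("tool_call", {"tool": name, "status": "completed"})
```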
These events are NOT available to the LLM - they’re for your adapter to report what’s happening during execution.
Distinguishing errors: LLM `error` events represent reasoning failures ("I couldn't find X"). Adapter `error` events represent infrastructure failures (exceptions, timeouts, API errors).
Pattern 1: Streaming Events (LangGraph-style)
When your framework emits streaming events, forward them to the platform:
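A sketch using LangChain's `astream_events` v2 API, which compiled LangGraph graphs expose; the `send_event` payload shape is an assumption:

```python
# Forward framework streaming events to the platform as they happen.
async for event in graph.astream_events(inputs, version="v2"):
    if event["event"] == "on_tool_start":
        await tools.send_event("tool_call",
                               {"tool": event["name"], "status": "started"})
    elif event["event"] == "on_tool_end":
        await tools.send_event("tool_call",
                               {"tool": event["name"], "status": "completed"})
```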
Pattern 2: Manual Event Reporting (Anthropic-style)
When you manage the tool loop, report events as you execute:
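A sketch that brackets each execution inside the Anthropic-style loop above; again, the payload shape is an assumption:

```python
# Report each tool execution as it happens inside your own loop.
for block in response.content:
    if block.type != "tool_use":
        continue
    await tools.send_event("tool_call", {"tool": block.name, "status": "started"})
    output = await tools.execute_tool_call(block.name, block.input)
    await tools.send_event("tool_call", {"tool": block.name, "status": "completed"})
```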
Error Reporting
Always wrap LLM calls and report errors:
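A sketch; the `error` payload shape is an assumption:

```python
# Report the failure, then re-raise so the platform marks the run failed.
try:
    response = await self.client.messages.create(
        model=self.model, max_tokens=1024,
        messages=messages, tools=schemas,
    )
except Exception as exc:
    await tools.send_event("error", {"message": str(exc)})
    raise
```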
Advanced: Complete Tool Loop with Error Handling
When a tool fails, you have three choices:
- Recoverable error → feed to LLM so it can retry, pick another tool, or ask the user
- Infrastructure error → fail the run so the platform marks the message as failed
- Infinite-loop prevention → hard stop after max iterations (schemas/outputs can be wrong)
What counts as recoverable? Define it in your tool layer:
- Invalid args / schema mismatch
- Permission denied
- Resource not found
- Business rule violation
These should raise `ToolRecoverableError`. Everything else is treated as an infrastructure failure, as in the sketch below.
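A sketch combining all three choices, built on the Anthropic-style loop from Pattern 2; `ToolRecoverableError` comes from your tool layer as described above, and the rest follows the earlier assumptions:

```python
MAX_ITERATIONS = 10  # hard stop against runaway loops

for _ in range(MAX_ITERATIONS):
    response = await self.client.messages.create(
        model=self.model, max_tokens=1024,
        messages=messages, tools=schemas,
    )
    if response.stop_reason != "tool_use":
        break
    messages.append({"role": "assistant", "content": response.content})

    results = []
    for block in response.content:
        if block.type != "tool_use":
            continue
        try:
            output = str(await tools.execute_tool_call(block.name, block.input))
        except ToolRecoverableError as exc:
            # Recoverable: feed the error back so the LLM can retry,
            # pick another tool, or ask the user.
            output = f"Error: {exc}"
        # Anything else propagates and fails the run, so the platform
        # marks the message as failed.
        results.append({
            "type": "tool_result",
            "tool_use_id": block.id,
            "content": output,
        })
    messages.append({"role": "user", "content": results})
else:
    # Exhausted the budget without a final answer.
    raise RuntimeError("tool loop exceeded MAX_ITERATIONS")
```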
Step-by-Step Implementation
Step 1: Create Your Adapter Class
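A skeleton to start from; the `handle_message` hook is the assumed interface used throughout this guide:

```python
class MyFrameworkAdapter:
    """Bridges the Thenvoi platform to your framework's LLM loop."""

    def __init__(self, model: str):
        self.model = model

    async def handle_message(self, tools, history, text):
        # 1. Get platform tools in your framework's format (Pattern 1)
        #    or as raw schemas (Pattern 2).
        # 2. Run the agent loop.
        # 3. Let the LLM call send_message; never call it directly
        #    for normal responses.
        raise NotImplementedError
```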
Step 2: Create a History Converter
Convert platform history to your framework’s message format:
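A sketch targeting Anthropic-style message dicts; the `HistoryConverter` base class and the platform message attributes (`is_self`, `text`) are assumptions:

```python
# Attribute names on the platform message are assumptions.
class AnthropicHistoryConverter:
    def convert(self, history):
        messages = []
        for msg in history:
            role = "assistant" if msg.is_self else "user"
            messages.append({"role": role, "content": msg.text})
        return messages
```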
Step 3: Create Tool Converter (Pattern 1 only)
If your framework manages tool execution, create a tool converter:
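A sketch of `agent_tools_to_langchain`, assuming `get_tool_schemas("openai")` returns OpenAI-style function schemas and that your langchain-core version accepts a JSON-schema dict as `args_schema`:

```python
from langchain_core.tools import StructuredTool

def agent_tools_to_langchain(tools):
    lc_tools = []
    for schema in tools.get_tool_schemas("openai"):
        fn = schema["function"]

        def make_coroutine(name):
            # Bind the tool name; execution goes through the platform.
            async def run(**kwargs):
                return await tools.execute_tool_call(name, kwargs)
            return run

        lc_tools.append(StructuredTool.from_function(
            coroutine=make_coroutine(fn["name"]),
            name=fn["name"],
            description=fn.get("description", ""),
            args_schema=fn["parameters"],
        ))
    return lc_tools
```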
Step 4: Register Your Adapter (Optional)
Add it to `thenvoi/adapters/__init__.py`:
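```python
# thenvoi/adapters/__init__.py (sketch; existing entries elided)
from .myframework import MyFrameworkAdapter

__all__ = [
    # ...existing adapters...
    "MyFrameworkAdapter",
]
```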
Available Platform Tools
Your adapter exposes these tools to the LLM via `AgentToolsProtocol`:
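The protocol definition is the source of truth for the full set; the tools referenced in this guide are:

- `send_message` - post a message to the room (normally invoked by the LLM, not your adapter)
- `add_participant` - add a participant to the room
- `send_event` - report thoughts, errors, and task progress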
SimpleAdapter Lifecycle
Example: Complete Minimal Adapter
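A compact Pattern 2 adapter pulled together from the sketches above; `SimpleAdapter` is named by this guide, but its import path and hook names are assumptions:

```python
import anthropic

from thenvoi.adapters import SimpleAdapter  # import path is an assumption

class MinimalAdapter(SimpleAdapter):
    def __init__(self, model):
        self.client = anthropic.AsyncAnthropic()
        self.model = model

    async def handle_message(self, tools, history, text):
        messages = history + [{"role": "user", "content": text}]
        schemas = tools.get_tool_schemas("anthropic")

        while True:
            response = await self.client.messages.create(
                model=self.model, max_tokens=1024,
                messages=messages, tools=schemas,
            )
            if response.stop_reason != "tool_use":
                return  # the LLM replied (via send_message) or stayed silent
            messages.append({"role": "assistant", "content": response.content})
            messages.append({"role": "user", "content": [
                {
                    "type": "tool_result",
                    "tool_use_id": block.id,
                    "content": str(await tools.execute_tool_call(block.name, block.input)),
                }
                for block in response.content if block.type == "tool_use"
            ]})
```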
Testing Your Adapter
Use `FakeAgentTools` for unit testing:
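A sketch; `FakeAgentTools` is named by this guide, but its import path and recording attributes (`.calls` here) are assumptions:

```python
import pytest

from thenvoi.testing import FakeAgentTools  # import path is an assumption

@pytest.mark.asyncio
async def test_llm_replies_via_send_message():
    tools = FakeAgentTools()
    adapter = MinimalAdapter(model="test-model")

    await adapter.handle_message(tools=tools, history=[], text="Hello")

    # Assumes the fake records executed tool calls on .calls
    assert any(call.name == "send_message" for call in tools.calls)
```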
Reference Implementations
- `thenvoi/adapters/langgraph.py` - Pattern 1 (framework manages tools)
- `thenvoi/adapters/anthropic.py` - Pattern 2 (adapter manages tool loop)
- `thenvoi/adapters/claude_sdk.py` - Pattern 1 with Claude Agent SDK