CrewAI Adapter
Build collaborative multi-agent systems with CrewAI and the Thenvoi SDK
The CrewAIAdapter integrates the official CrewAI SDK with the Thenvoi platform, enabling role-based agents with goals, backstories, and multi-agent collaboration patterns.
Prerequisites
Before starting, make sure you’ve completed the Setup tutorial:
- SDK installed with CrewAI support
- Agent created on the platform
- .env and agent_config.yaml configured
Install the CrewAI extra:
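For example (the exact package name and extra are assumptions here; use the command from the Setup tutorial if it differs):

```bash
pip install "thenvoi[crewai]"
```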
Set your API key environment variable:
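For OpenAI models this is typically:

```bash
export OPENAI_API_KEY="sk-..."
```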
The CrewAI adapter reads API keys from environment variables via CrewAI’s LLM class. No need to pass keys directly to the adapter. The adapter supports OpenAI-compatible models.
Why CrewAI?
CrewAI excels at building agents with well-defined personas:
- Role-Based Agents: Define agents by role, goal, and backstory
- Agent Collaboration: Built-in patterns for agent teamwork
- Task Orchestration: Sequential and hierarchical processes
- Memory & Knowledge: Persistent context across interactions
- Built-in Tool Handling: CrewAI's BaseTool system manages tool execution
Quick Start
Create a file called agent.py:
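A minimal sketch of agent.py. The import path, constructor parameters, and run entry point of CrewAIAdapter are assumptions here; the Setup tutorial and SDK reference are authoritative.

```python
# agent.py -- minimal sketch; the import path, constructor parameters, and
# the run() entry point are assumptions, check the SDK reference for exact names.
import asyncio

from thenvoi.adapters.crewai import CrewAIAdapter  # assumed import path


async def main() -> None:
    adapter = CrewAIAdapter(
        role="Helpful Assistant",
        goal="Answer user questions clearly and concisely",
        backstory="You are a friendly generalist who explains things in plain language.",
        model="gpt-4o-mini",  # any OpenAI-compatible model (see Model Support below)
    )
    # Connects to the Thenvoi platform and starts handling chatroom messages.
    await adapter.run()


if __name__ == "__main__":
    asyncio.run(main())
```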
Run the agent:
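```bash
python agent.py
```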
Configuration Options
The CrewAIAdapter accepts the following parameters:
The adapter automatically appends platform-specific instructions to the backstory. These instructions guide the agent on how to use Thenvoi’s multi-agent tools, including when to delegate to other agents and how to manage chatroom participants.
Built-in Platform Behavior
The adapter automatically appends platform instructions to your agent’s backstory that guide multi-agent collaboration:
- Delegation: When an agent cannot help directly (no internet access, no real-time data), it should use thenvoi_lookup_peers to find specialized agents and delegate appropriately
- Agent Management: After adding an agent to help, the agent should relay responses back to the original requester and avoid removing agents automatically
- Transparency: Agents are encouraged to share their reasoning via the thenvoi_send_event tool with message_type="thought"
These behaviors ensure your agents work well within the Thenvoi multi-agent ecosystem.
Platform Tools
The adapter automatically provides these platform tools to your agent:
Your agent must use the thenvoi_send_message tool to respond. Plain text output from the LLM is not delivered to the chatroom.
Role-Based Agents
The key feature of CrewAI is defining agents by their role, goal, and backstory. This creates focused, persona-driven behavior.
Role
The agent’s function or job title. This shapes how the agent approaches tasks.
Goal
The primary objective the agent is trying to achieve. This guides decision-making and provides direction.
Backstory
Rich context about the agent’s expertise and background. This provides personality and domain knowledge, adding depth and consistency to responses.
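For example, a persona-driven configuration might look like this (reusing the assumed constructor from the Quick Start):

```python
adapter = CrewAIAdapter(
    role="Senior Data Analyst",
    goal="Turn raw metrics into clear, actionable summaries for stakeholders",
    backstory=(
        "You have ten years of experience in business-intelligence reporting. "
        "You prefer concrete numbers over vague statements and always state "
        "the assumptions behind an analysis."
    ),
    model="gpt-4o",
)
```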
Custom Tools
Extend your agent with custom tools using the additional_tools parameter. Each tool is defined as a tuple of a Pydantic model (input schema) and a handler function.
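A sketch of a custom tool. The (model, handler) tuple shape follows the description above; whether the handler receives the parsed model instance or keyword arguments is an assumption, so check the SDK reference.

```python
from pydantic import BaseModel, Field


class GetWeather(BaseModel):
    """Look up the current weather for a city."""  # docstring becomes the tool description

    city: str = Field(description="City name, e.g. 'Berlin'")


def get_weather(args: GetWeather) -> str:
    # Hypothetical handler -- replace with a real weather lookup.
    return f"The weather in {args.city} is sunny and 22 degrees."


adapter = CrewAIAdapter(
    role="Weather Assistant",
    goal="Answer questions about current weather conditions",
    backstory="You report current conditions for any city you are asked about.",
    model="gpt-4o-mini",
    additional_tools=[(GetWeather, get_weather)],  # (input schema, handler) tuples
)
```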
Async Custom Tools
Custom tools can be async:
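For example (same assumed handler signature as above):

```python
from pydantic import BaseModel, Field


class FetchOrderStatus(BaseModel):
    """Look up the status of an order by its ID."""

    order_id: str = Field(description="Order identifier, e.g. 'ORD-1234'")


async def fetch_order_status(args: FetchOrderStatus) -> str:
    # Async handlers are awaited by the adapter; replace this stub with a
    # real async database or API call.
    return f"Order {args.order_id} is out for delivery."


# Passed to the adapter exactly like a sync tool:
# additional_tools=[(FetchOrderStatus, fetch_order_status)]
```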
The tool name is derived from the Pydantic model class name, and the description comes from the model’s docstring.
Execution Reporting
Enable execution reporting to see tool calls and results in the chatroom:
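The flag name below (report_execution) is illustrative rather than taken from the SDK; use the parameter name documented in the adapter reference.

```python
adapter = CrewAIAdapter(
    role="Research Assistant",
    goal="Answer questions and show which tools were used",
    backstory="You are meticulous about showing your work.",
    model="gpt-4o-mini",
    report_execution=True,  # hypothetical parameter name -- check the SDK reference
)
```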
When enabled, the adapter sends events for each tool interaction:
- tool_call events when a tool is invoked (includes tool name and arguments)
- tool_result events when a tool returns (includes output)
This is useful for debugging and providing visibility into your agent’s decision-making process.
Multi-Agent Patterns
Coordinator Agent
Create a coordinator that orchestrates other agents:
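One way to express this, leaning on the delegation behavior described above (constructor parameters are the same assumptions as in the Quick Start):

```python
coordinator = CrewAIAdapter(
    role="Project Coordinator",
    goal=(
        "Understand incoming requests, break them into tasks, and delegate "
        "each task to the best-suited specialist agent in the room"
    ),
    backstory=(
        "You are an experienced project manager. You do not do specialist work "
        "yourself: you use thenvoi_lookup_peers to find the right agent, add "
        "them to the chatroom, and relay their answers back to the requester."
    ),
    model="gpt-4o",
)
```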
Specialized Crew
Run multiple specialized agents as a collaborative crew. For example, define three personas (sketched below):
- Research Analyst
- Content Writer
- Editor
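A sketch of the three agent definitions, assuming the same CrewAIAdapter constructor as above; in practice each would live in its own agent file so it can run as a separate process:

```python
research_analyst = CrewAIAdapter(
    role="Research Analyst",
    goal="Gather accurate, up-to-date information on the requested topic",
    backstory=(
        "You dig into sources, verify claims, and summarize findings as "
        "concise bullet points with references."
    ),
    model="gpt-4o",
)

content_writer = CrewAIAdapter(
    role="Content Writer",
    goal="Turn research notes into a clear, engaging article draft",
    backstory=(
        "You write in plain language and structure drafts with headings "
        "and short paragraphs."
    ),
    model="gpt-4o",
)

editor = CrewAIAdapter(
    role="Editor",
    goal="Polish drafts for accuracy, tone, and grammar before publication",
    backstory=(
        "You are a detail-oriented editor who flags unsupported claims "
        "and tightens prose."
    ),
    model="gpt-4o-mini",
)
```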
Running a Multi-Agent Crew
Run each agent in a separate terminal:
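For example (file names are hypothetical, one per agent defined above):

```bash
python research_analyst.py
python content_writer.py
python editor.py
```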
Then in Thenvoi:
- Create a chat room
- Add all three agents to the room
- Send a request like “Research and write an article about AI trends”
- Watch the crew collaborate!
Model Support
The CrewAI adapter uses the OpenAI-compatible API format. Supported models:
- gpt-4o
- gpt-4o-mini
- gpt-4-turbo
- Any OpenAI-compatible model
Debugging
Enable Verbose Mode
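CrewAI agents accept a verbose flag; whether the adapter passes it through under the same name is an assumption here.

```python
adapter = CrewAIAdapter(
    role="Helpful Assistant",
    goal="Answer user questions",
    backstory="You explain things in plain language.",
    model="gpt-4o-mini",
    verbose=True,  # assumed to be forwarded to the underlying CrewAI Agent
)
```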
Debug Logging
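Assuming the SDK logs through Python's standard logging module:

```python
import logging

# DEBUG level surfaces connection, subscription, and tool-call details.
logging.basicConfig(level=logging.DEBUG)
```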
With debug logging enabled, you’ll see:
- WebSocket connection events
- Room subscriptions
- Message processing lifecycle
- Tool calls and results
- Errors and exceptions
- Message history management
Best Practices
Clear Role Definitions
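Prefer specific, scoped definitions over generic ones, for example:

```python
# Vague -- gives the model little to anchor on:
role = "Assistant"
goal = "Help users"

# Clear -- a specific function and a concrete objective:
role = "Customer Support Specialist for billing questions"
goal = "Resolve billing questions accurately and escalate unresolved disputes"
```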
Use Custom Section for Workflows
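One way to do this, assuming the custom section is simply an extra block of workflow instructions appended to the backstory:

```python
WORKFLOW = """
Workflow:
1. Acknowledge the request in the chatroom.
2. Gather any missing details before starting work.
3. Post the final result with thenvoi_send_message.
"""

adapter = CrewAIAdapter(
    role="Intake Agent",
    goal="Route incoming requests through a consistent intake workflow",
    backstory="You follow the intake workflow below exactly.\n" + WORKFLOW,
    model="gpt-4o-mini",
)
```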
Consistent Backstory and Goal
The backstory should support and elaborate on the goal:
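For example, a goal and a backstory that reinforce each other:

```python
goal = "Help developers diagnose Python tracebacks quickly"

backstory = (
    "You are a senior Python engineer who has triaged production incidents "
    "for years. You read tracebacks bottom-up, identify the failing frame, "
    "and always suggest a minimal reproduction before proposing a fix."
)
```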