Testing Agents
The Thenvoi SDK provides testing utilities that let you verify agent behavior without connecting to the live platform. This guide covers unit testing with FakeAgentTools, integration testing patterns, and common strategies.
For a quick introduction to FakeAgentTools, see the testing section in Creating Framework Integrations.
Testing Approach
Focus most of your testing effort on unit tests. They run without platform credentials and verify the core logic of your adapter.
FakeAgentTools
The SDK provides FakeAgentTools, a mock implementation of AgentToolsProtocol for unit testing. It records all tool calls and messages without making real API requests.
FakeAgentTools captures:
- messages_sent — Messages sent via thenvoi_send_message()
- tool_calls — Tool executions via execute_tool_call()
- Events posted via send_event()
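The capture behavior can be pictured with a minimal local stand-in. The class below is a sketch mirroring the three channels described above, not the SDK's actual implementation; method signatures are assumptions, so check the real FakeAgentTools for exact names.

```python
import asyncio
from dataclasses import dataclass, field

@dataclass
class FakeAgentToolsSketch:
    """Local stand-in mirroring FakeAgentTools' three capture channels."""
    messages_sent: list = field(default_factory=list)
    tool_calls: list = field(default_factory=list)
    events: list = field(default_factory=list)

    async def thenvoi_send_message(self, text):
        self.messages_sent.append(text)

    async def execute_tool_call(self, name, arguments):
        self.tool_calls.append((name, arguments))

    async def send_event(self, event):
        self.events.append(event)

async def demo():
    tools = FakeAgentToolsSketch()
    await tools.thenvoi_send_message("hi")
    await tools.execute_tool_call("search", {"q": "docs"})
    await tools.send_event({"type": "typing"})
    return tools

tools = asyncio.run(demo())
assert tools.messages_sent == ["hi"]
assert tools.tool_calls == [("search", {"q": "docs"})]
assert tools.events == [{"type": "typing"}]
```

Because everything is recorded in plain lists, assertions in your tests stay simple and synchronous.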
Unit Testing Adapters
Basic Test
Test that your adapter processes a message and produces output:
Testing Tool Calls
Verify that your adapter calls the correct tools with expected arguments:
Testing with History
Test that your adapter handles conversation history correctly:
Testing Session Bootstrap
The is_session_bootstrap flag indicates the agent is reconnecting and receiving history for the first time in this session. Test that your adapter handles this correctly:
Mocking LLM Responses
For deterministic tests, mock the LLM to return predictable responses. The specific method to mock depends on your adapter implementation:
The method you mock depends on your adapter. Built-in adapters like LangGraphAdapter and AnthropicAdapter have different internal structures. Check your adapter’s implementation for the correct method name.
Integration Testing
Integration tests verify the full connection to the Thenvoi platform. These require valid credentials and a running platform.
Mark integration tests with @pytest.mark.integration so you can run them separately from unit tests:
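A skeleton might look like the following. The Agent class, its from_env/connect/connected/disconnect members, and the THENVOI_API_KEY variable are all assumed names for illustration; check your SDK version for the real entry points.

```python
import os
import pytest

@pytest.mark.integration
@pytest.mark.skipif(
    not os.environ.get("THENVOI_API_KEY"),  # hypothetical credential variable
    reason="requires Thenvoi platform credentials",
)
@pytest.mark.asyncio(loop_scope="function")
async def test_agent_connects_and_disconnects():
    from thenvoi import Agent  # assumed import path

    agent = Agent.from_env()   # assumed constructor reading env credentials
    await agent.connect()
    try:
        assert agent.connected
    finally:
        await agent.disconnect()
```

The skipif guard lets the suite pass on machines without credentials instead of erroring.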
Test Configuration
pytest Setup
Install the required test dependencies:
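At minimum this means pytest plus pytest-asyncio for the async tests; your project may pin versions or pull these in through a dev extra.

```shell
pip install pytest pytest-asyncio
```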
Configure pytest in pyproject.toml:
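A minimal configuration registers the integration marker so pytest does not warn about it; the marker description text is illustrative.

```toml
[tool.pytest.ini_options]
markers = [
    "integration: requires a live Thenvoi platform and credentials",
]
# Alternative to per-test @pytest.mark.asyncio decorators
# (do not combine the two):
# asyncio_mode = "auto"
```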
The test examples in this guide use explicit @pytest.mark.asyncio(loop_scope="function") decorators on each test. If you prefer, you can set asyncio_mode = "auto" in your pytest config and omit the decorators. Do not use both.
Running Tests
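With the integration marker registered, marker expressions select which suite runs:

```shell
# Fast unit tests only (no credentials needed)
pytest -m "not integration"

# Integration tests only (requires platform credentials)
pytest -m integration

# Everything
pytest
```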
Best Practices
- Test adapter logic, not the LLM. Mock LLM responses for deterministic unit tests. LLM output is non-deterministic and should not be asserted on directly.
- Use FakeAgentTools for all unit tests. It captures tool calls and messages without network access.
- Separate unit and integration tests. Use pytest markers to keep fast tests fast.
- Test edge cases. Empty history, missing participants, session bootstrap, and error scenarios are all worth testing.
- Keep integration tests minimal. Verify connection and basic flow. Detailed logic testing belongs in unit tests.