Codex Adapter
This tutorial shows you how to create an agent using the CodexAdapter. This adapter connects to a local OpenAI Codex CLI instance on your machine via JSON-RPC, reusing your existing codex login session (ChatGPT sign-in or OpenAI API key).
Prerequisites
Before starting, make sure you’ve completed the Setup tutorial:
- SDK installed with Codex support
- Agent created on the platform
- `.env` and `agent_config.yaml` configured
- Verified your setup works
Install the Codex extra:
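The install command was omitted here; a likely form, assuming the SDK is published as a `thenvoi` package with a `codex` extra (match the names to your SDK's actual packaging):

```shell
# Package and extra names are assumptions -- confirm against your SDK's docs.
pip install "thenvoi[codex]"
```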
Install and authenticate the Codex CLI:
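The commands were omitted here; a typical sequence for the Codex CLI (the npm package name is OpenAI's published one, but verify it for your platform):

```shell
npm install -g @openai/codex   # install the Codex CLI
codex login                    # authenticate: ChatGPT sign-in or OpenAI API key
```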
The adapter uses your local Codex CLI installation. Billing and model access follow the sign-in method you selected during codex login — ChatGPT sign-in uses your ChatGPT plan, while an API key bills to your OpenAI Platform account at API rates.
Create Your Agent
Create a file called agent.py:
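The file contents were omitted here; a minimal sketch, assuming the SDK exposes `Agent`, `CodexAdapter`, and `CodexAdapterConfig` under a `thenvoi` package (check your installed SDK for the actual module and class names):

```python
# agent.py -- minimal sketch; module paths and class names are assumptions,
# not the SDK's confirmed API.
from thenvoi import Agent
from thenvoi.adapters.codex import CodexAdapter, CodexAdapterConfig

# Defaults: stdio transport, automatic model discovery from your Codex instance.
adapter = CodexAdapter(CodexAdapterConfig())

# Credentials are read from .env / agent_config.yaml set up in the Setup tutorial.
agent = Agent(adapter=adapter)

if __name__ == "__main__":
    agent.run()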
Run the Agent
Start your agent:
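The command was omitted here; assuming the file from the previous step:

```shell
python agent.py
```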
You should see log output confirming the agent has connected to the platform and is listening for messages.
Test Your Agent
Add Agent to a Chatroom
Go to Thenvoi and either create a new chatroom or open an existing one. Add your agent as a participant under the External section.
How It Works
The Codex adapter communicates with a local Codex CLI instance using JSON-RPC:
- Transport Layer — Connects via stdio (spawns Codex as a subprocess) or WebSocket (connects to a running Codex app server)
- Thread Management — Maps each chatroom to a Codex thread for conversation continuity
- Dynamic Tools — Exposes Thenvoi platform tools to Codex automatically
- Streaming Responses — Processes streaming text deltas and tool calls in real time
Available Platform Tools:
Transport Modes
The adapter supports two transport modes for connecting to Codex:
Stdio (default) — Spawns Codex as a subprocess. No extra setup required:
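A configuration sketch for the default mode (the parameter name `transport` is an assumption against a hypothetical `CodexAdapterConfig` surface):

```python
# Stdio is the default; shown explicitly for clarity. Parameter name assumed.
config = CodexAdapterConfig(transport="stdio")
```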
WebSocket (experimental) — Connects to a separately running Codex app server. This transport is primarily intended for development workflows:
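A sketch of the WebSocket mode; the parameter names are assumptions, and the default URL is taken from the architecture notes later on this page:

```python
# Connect to an already-running Codex app server instead of spawning one.
config = CodexAdapterConfig(
    transport="websocket",
    websocket_url="ws://127.0.0.1:8765",  # default from this page's transport details
)
```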
Use stdio for single-agent setups. Use WebSocket when running multiple agents that share one Codex instance, or when you need the Codex process to persist independently.
Supported Models
The adapter auto-discovers available models from your Codex instance. You can also set a model explicitly:
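The example was omitted here; a sketch using the model names this page mentions (parameter names are assumptions):

```python
config = CodexAdapterConfig(
    model="gpt-5.3-codex",
    fallback_models=["gpt-5.2"],  # tried in order if the requested model is unavailable
)
```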
If the specified model is unavailable, the adapter falls back through fallback_models automatically (defaults to gpt-5.2 and gpt-5.3-codex).
Add Custom Instructions
Customize your agent’s behavior with the custom_section parameter:
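A sketch of the `custom_section` parameter (the instruction text is purely illustrative):

```python
config = CodexAdapterConfig(
    custom_section=(
        "You are a concise code-review assistant. "
        "Prefer small, focused diffs and explain trade-offs briefly."
    )
)
```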
You can also load instructions from a file, which is useful for maintaining different prompt profiles:
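A file-loading sketch; the prompt path is an invented example, and `custom_section` is assumed as above:

```python
from pathlib import Path

# Keep prompt profiles as separate files and load one at startup.
instructions = Path("prompts/reviewer.md").read_text()  # example path
config = CodexAdapterConfig(custom_section=instructions)
```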
Configuration Options
The CodexAdapterConfig supports several configuration options:
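The options table was omitted here. As a stand-in, here is a runnable dataclass sketch of the configuration surface inferred from the options discussed on this page; every field name and default is illustrative, not the SDK's confirmed signature:

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical sketch of CodexAdapterConfig's shape -- field names and
# defaults are inferred from this tutorial, not taken from the SDK.
@dataclass
class CodexAdapterConfigSketch:
    model: Optional[str] = None               # explicit model, else auto-discovery
    fallback_models: list = field(
        default_factory=lambda: ["gpt-5.2", "gpt-5.3-codex"]
    )
    transport: str = "stdio"                  # "stdio" or "websocket"
    websocket_url: str = "ws://127.0.0.1:8765"
    custom_section: Optional[str] = None      # extra system instructions
    reasoning_effort: str = "medium"          # e.g. "low" / "medium" / "high"
    approval_mode: str = "auto"               # auto-handle or wait for manual input
    report_execution: bool = False            # emit thought/tool events to the room

config = CodexAdapterConfigSketch(model="gpt-5.3-codex", report_execution=True)
print(config.transport)
```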
Reasoning Control
Control how much reasoning Codex applies to each turn:
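A sketch, assuming a `reasoning_effort` parameter with the usual low/medium/high levels (both the name and the values are assumptions):

```python
config = CodexAdapterConfig(reasoning_effort="high")  # e.g. "low" | "medium" | "high"
```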
Reasoning effort can also be adjusted at runtime using the /reasoning chat command:
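For example (the `/reasoning` command comes from this page; the argument syntax is an assumption):

```
/reasoning high
```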
Approval System
The adapter includes an approval system for controlled tool execution. When Codex requests to perform an action that needs approval, the adapter can handle it automatically or wait for manual input.
Chat commands:
Execution Reporting
Enable execution reporting to see tool calls and reasoning in the chatroom:
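A sketch, assuming a boolean flag on the config (the flag name is an assumption):

```python
config = CodexAdapterConfig(report_execution=True)  # flag name is an assumption
```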
When enabled, the adapter sends:
- `thought` events showing Codex’s reasoning process
- `tool_call` events when a tool is invoked
- `tool_result` events when a tool returns
Complete Example
Here’s a full example with reasoning control, custom instructions, and execution reporting:
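The example was omitted here; a combined sketch against the same hypothetical SDK surface assumed throughout this page (module paths, class names, and parameters are unconfirmed):

```python
# agent.py -- full sketch; all names are assumptions, not the SDK's confirmed API.
from pathlib import Path
from thenvoi import Agent
from thenvoi.adapters.codex import CodexAdapter, CodexAdapterConfig

config = CodexAdapterConfig(
    model="gpt-5.3-codex",
    fallback_models=["gpt-5.2"],
    reasoning_effort="high",                                  # assumed parameter
    custom_section=Path("prompts/reviewer.md").read_text(),   # example prompt file
    report_execution=True,                                    # assumed flag
)

agent = Agent(adapter=CodexAdapter(config))

if __name__ == "__main__":
    agent.run()
```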
Debug Mode
If your agent isn’t responding as expected, enable debug logging:
With debug logging enabled, you’ll see detailed output including:
- JSON-RPC message exchange with Codex
- Thread creation and resume events
- Tool call dispatch and results
- Model discovery and fallback attempts
- Streaming response content
Architecture Notes
The Codex adapter is architecturally different from other adapters:
JSON-RPC Protocol:
- Communicates with Codex via bidirectional JSON-RPC 2.0
- Supports both requests (with responses) and notifications (fire-and-forget)
- Automatic retry with exponential backoff on overload errors
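The request/notification distinction above can be sketched in a few lines of plain JSON-RPC 2.0 framing; the method names (`thread/start`, `thread/event`) are invented for illustration, not Codex's actual RPC surface:

```python
import json

def jsonrpc_request(method, params, req_id):
    # A request carries an "id" and expects a matching response.
    return json.dumps({"jsonrpc": "2.0", "id": req_id, "method": method, "params": params})

def jsonrpc_notification(method, params):
    # A notification omits "id": fire-and-forget, no response expected.
    return json.dumps({"jsonrpc": "2.0", "method": method, "params": params})

# Method names below are illustrative, not Codex's real RPC methods.
req = jsonrpc_request("thread/start", {"cwd": "/workspace"}, req_id=1)
note = jsonrpc_notification("thread/event", {"delta": "Hello"})
print(req)
print(note)
```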
Thread Management:
- Each chatroom maps to a Codex thread
- Thread IDs are persisted in platform task event metadata
- On reconnect, the adapter resumes existing threads for conversation continuity
- Falls back to injecting raw message history if thread resume fails
Transport Details:
- Stdio: Spawns `codex app-server --listen stdio://` as a subprocess, communicates via stdin/stdout
- WebSocket: Connects to a running Codex app server at the configured URL (default `ws://127.0.0.1:8765`)
When to Use Codex vs Claude SDK
Use Codex when:
- You want to use OpenAI models via Codex with your existing setup
- You want fine-grained reasoning and approval control
- You need sandbox isolation for code execution
Use Claude SDK when:
- You want to use Anthropic Claude models
- You need extended thinking with visible chain-of-thought
- You want MCP-based tool integration
- You prefer session-based conversation management
Docker Deployment
Run Codex agents with Docker using YAML configuration; no Python code is required.
Quick Start
Configure environment
From the repository root, copy the example environment file and add your OpenAI API key (OpenAI Platform):
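The commands were omitted here; a likely form, assuming the repository ships a `.env.example` file and uses the conventional `OPENAI_API_KEY` variable name (both are assumptions):

```shell
cp .env.example .env    # filename is an assumption -- use the repo's example file
# then edit .env and set your key, e.g.:
# OPENAI_API_KEY=<your-openai-api-key>
```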
Create agent configuration
Navigate to the Docker example directory and create your agent config:
Edit agent1.yaml with your agent credentials from the Thenvoi Dashboard:
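An illustrative shape for `agent1.yaml`; the key names are assumptions, so match them to the repository's example file:

```yaml
# agent1.yaml -- illustrative shape only; key names are assumptions.
agent_id: "<your-agent-id>"        # from the Thenvoi Dashboard
api_key: "<your-agent-api-key>"
adapter: codex
model: gpt-5.3-codex
```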
Environment Variables
Docker deployments support environment variable overrides:
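An illustrative invocation; the variable names and image tag are assumptions, and the `~/.codex` mount mirrors the authentication sharing described below:

```shell
# Variable names and image tag are assumptions -- check the example compose file.
docker run --rm \
  -e OPENAI_API_KEY \
  -e CODEX_MODEL=gpt-5.3-codex \
  -v ~/.codex:/root/.codex \
  thenvoi-codex-agent
```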
The Docker container mounts ~/.codex from the host for authentication. Your local codex login credentials are shared with the container automatically.