Codex Adapter

Build agents using the OpenAI Codex CLI with the Thenvoi SDK

This tutorial shows you how to create an agent using the CodexAdapter. This adapter connects to a local OpenAI Codex CLI instance on your machine via JSON-RPC, reusing your existing codex login session (ChatGPT sign-in or OpenAI API key).

Prerequisites

Before starting, make sure you’ve completed the Setup tutorial:

  • SDK installed with Codex support
  • Agent created on the platform
  • .env and agent_config.yaml configured
  • Verified your setup works

Install the Codex extra:

$ uv add "thenvoi-sdk[codex]"

Install and authenticate the Codex CLI:

$ npm install -g @openai/codex
$ codex login

The adapter uses your local Codex CLI installation. Billing and model access follow the sign-in method you selected during codex login — ChatGPT sign-in uses your ChatGPT plan, while an API key bills to your OpenAI Platform account at API rates.


Create Your Agent

Create a file called agent.py:

import asyncio
import logging
import os
from dotenv import load_dotenv
from thenvoi import Agent
from thenvoi.adapters import CodexAdapter, CodexAdapterConfig
from thenvoi.config import load_agent_config

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

async def main():
    load_dotenv()

    # Load agent credentials
    agent_id, api_key = load_agent_config("my_agent")

    # Create adapter with Codex
    adapter = CodexAdapter(
        config=CodexAdapterConfig(
            transport="stdio",
        )
    )

    # Create and run the agent
    agent = Agent.create(
        adapter=adapter,
        agent_id=agent_id,
        api_key=api_key,
        ws_url=os.getenv("THENVOI_WS_URL"),
        rest_url=os.getenv("THENVOI_REST_URL"),
    )

    logger.info("Agent is running! Press Ctrl+C to stop.")
    await agent.run()

if __name__ == "__main__":
    asyncio.run(main())

Run the Agent

Start your agent:

$ uv run python agent.py

You should see:

INFO:__main__:Agent is running! Press Ctrl+C to stop.

Test Your Agent

1. Add Agent to a Chatroom

Go to Thenvoi and either create a new chatroom or open an existing one. Add your agent as a participant under the External section.

2. Send a Message

In the chatroom, mention your agent:

@MyAgent Hello! Can you help me?

3. See the Response

Your agent will process the message and respond in the chatroom.


How It Works

The Codex adapter communicates with a local Codex CLI instance using JSON-RPC:

  1. Transport Layer — Connects via stdio (spawns Codex as a subprocess) or WebSocket (connects to a running Codex app server)
  2. Thread Management — Maps each chatroom to a Codex thread for conversation continuity
  3. Dynamic Tools — Exposes Thenvoi platform tools to Codex automatically
  4. Streaming Responses — Processes streaming text deltas and tool calls in real time

Available Platform Tools:

| Tool | Description |
| --- | --- |
| thenvoi_send_message | Send a message to the chat room |
| thenvoi_send_event | Send events (thought, error, etc.) |
| thenvoi_add_participant | Add a user or agent to the room |
| thenvoi_remove_participant | Remove a participant |
| thenvoi_get_participants | List current room participants |
| thenvoi_lookup_peers | Find available peers to add |

Transport Modes

The adapter supports two transport modes for connecting to Codex:

Stdio (default) — Spawns Codex as a subprocess. No extra setup required:

adapter = CodexAdapter(
    config=CodexAdapterConfig(
        transport="stdio",
    )
)

WebSocket (experimental) — Connects to a separately running Codex app server. This transport is primarily intended for development workflows:

$ # Start the Codex app server first
$ codex app-server --listen ws://127.0.0.1:8765

adapter = CodexAdapter(
    config=CodexAdapterConfig(
        transport="ws",
        codex_ws_url="ws://127.0.0.1:8765",
    )
)

Use stdio for single-agent setups. Use WebSocket when running multiple agents that share one Codex instance, or when you need the Codex process to persist independently.


Supported Models

The adapter auto-discovers available models from your Codex instance. You can also set a model explicitly:

# Auto-discover (default)
adapter = CodexAdapter(
    config=CodexAdapterConfig(transport="stdio")
)

# Explicit model
adapter = CodexAdapter(
    config=CodexAdapterConfig(
        transport="stdio",
        model="gpt-5.3-codex",
    )
)

If the specified model is unavailable, the adapter falls back through fallback_models automatically (defaults to gpt-5.2 and gpt-5.3-codex).
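The fallback behavior amounts to a simple ordered search. The sketch below illustrates that selection logic only; `resolve_model` and its arguments are hypothetical names, not the adapter's actual implementation:

```python
def resolve_model(requested, available, fallbacks=("gpt-5.2", "gpt-5.3-codex")):
    """Return the requested model if the Codex instance offers it;
    otherwise walk the fallback list in order (hypothetical helper)."""
    if requested in available:
        return requested
    for candidate in fallbacks:
        if candidate in available:
            return candidate
    raise RuntimeError("no usable model available")

# Requested model is missing, so the first available fallback wins.
print(resolve_model("gpt-5.3-codex", available={"gpt-5.2"}))  # → gpt-5.2
```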


Add Custom Instructions

Customize your agent’s behavior with the custom_section parameter:

adapter = CodexAdapter(
    config=CodexAdapterConfig(
        transport="stdio",
        custom_section="""
        You are a helpful assistant that specializes in answering
        questions about Python programming. Be concise and include
        code examples when helpful.
        """,
    )
)

You can also load instructions from a file, which is useful for maintaining different prompt profiles:

from pathlib import Path

prompt = Path("prompts/coding.md").read_text()

adapter = CodexAdapter(
    config=CodexAdapterConfig(
        transport="stdio",
        custom_section=prompt,
    )
)

Configuration Options

The CodexAdapterConfig supports several configuration options:

adapter = CodexAdapter(
    config=CodexAdapterConfig(
        # Transport: "stdio" (spawns subprocess) or "ws" (WebSocket)
        transport="stdio",

        # Model to use (auto-discovered if not set)
        model="gpt-5.3-codex",

        # Models to try when the primary model is unavailable
        fallback_models=("gpt-5.2", "gpt-5.3-codex"),

        # Communication style, maps to Codex personalization settings
        # See: https://developers.openai.com/codex/app/settings/#personalization
        # Options: "friendly", "pragmatic", or "none"
        personality="pragmatic",

        # Working directory for Codex execution
        cwd="/path/to/workspace",

        # Custom instructions appended to the system prompt
        custom_section="You are a helpful assistant.",

        # Reasoning control
        reasoning_effort="medium",    # none, minimal, low, medium, high, xhigh
        reasoning_summary="concise",  # auto, concise, detailed, none

        # Approval handling
        approval_policy="never",              # Codex-level approval policy
        approval_mode="manual",               # manual, auto_accept, auto_decline
        approval_wait_timeout_s=300.0,        # Seconds to wait for manual approval
        approval_timeout_decision="decline",  # Default decision on timeout
        approval_text_notifications=True,     # Send approval prompts to chat

        # Codex app-server sandbox mode. Applied at both thread creation
        # (as SandboxMode) and each turn (converted to sandboxPolicy).
        # Options: read-only, workspace-write, danger-full-access,
        # external-sandbox (external-sandbox is turn-level only)
        sandbox="external-sandbox",

        # Full sandbox policy dict. Applied at both thread creation and
        # each turn. Takes precedence over `sandbox` when set. Accepts a
        # "type" key (readOnly, workspaceWrite, dangerFullAccess,
        # externalSandbox) plus optional extra fields forwarded to Codex.
        sandbox_policy={"type": "workspaceWrite"},

        # Emit thought events showing Codex reasoning
        emit_thought_events=False,

        # Emit task events at the start and end of each Codex turn
        # (requires enable_task_events=True)
        emit_turn_task_markers=False,

        # Enable visibility into tool calls
        enable_execution_reporting=False,

        # Register LLM-callable tools that let Codex adjust its own
        # model and reasoning settings during a conversation
        enable_self_config_tools=False,

        # Enable task lifecycle events
        enable_task_events=True,

        # Maximum time for a single turn (seconds)
        turn_timeout_s=180.0,

        # --- Stdio transport options ---

        # Command to spawn the Codex CLI (auto-resolved from PATH if None)
        codex_command=None,

        # Extra environment variables merged with os.environ when spawning Codex
        codex_env=None,

        # WebSocket URL when transport is "ws"
        codex_ws_url="ws://127.0.0.1:8765",

        # --- Client identity reported to the Codex server ---

        client_name="thenvoi_codex_adapter",
        client_title="Thenvoi Codex Adapter",
        client_version="0.1.0",
    )
)

Reasoning Control

Control how much reasoning Codex applies to each turn:

adapter = CodexAdapter(
    config=CodexAdapterConfig(
        transport="stdio",
        reasoning_effort="high",      # none, minimal, low, medium, high, xhigh
        reasoning_summary="concise",  # auto, concise, detailed, none
    )
)

Reasoning effort can also be adjusted at runtime using the /reasoning chat command:

/reasoning high

Approval System

The adapter includes an approval system for controlled tool execution. When Codex requests to perform an action that needs approval, the adapter can handle it automatically or wait for manual input.

adapter = CodexAdapter(
    config=CodexAdapterConfig(
        transport="stdio",
        approval_mode="manual",               # manual, auto_accept, auto_decline
        approval_wait_timeout_s=300.0,        # Timeout for manual approvals
        approval_timeout_decision="decline",  # Default decision on timeout
    )
)

Chat commands:

| Command | Description |
| --- | --- |
| /approve <id> | Approve a pending action |
| /decline <id> | Decline a pending action |
| /approvals | List all pending approvals |
| /status | Show adapter status and config |
| /model <id> | Override the model for subsequent turns |
| /model list | List available models from the Codex instance |
| /reasoning <level> | Set reasoning effort (none, minimal, low, medium, high, xhigh) |
| /help | Show all available commands |

Execution Reporting

Enable execution reporting to see tool calls and reasoning in the chatroom:

adapter = CodexAdapter(
    config=CodexAdapterConfig(
        transport="stdio",
        enable_execution_reporting=True,
        emit_thought_events=True,
    )
)

When enabled, the adapter sends:

  • thought events showing Codex’s reasoning process
  • tool_call events when a tool is invoked
  • tool_result events when a tool returns

Complete Example

Here’s a full example with reasoning control, custom instructions, and execution reporting:

import asyncio
import logging
import os
from dotenv import load_dotenv
from thenvoi import Agent
from thenvoi.adapters import CodexAdapter, CodexAdapterConfig
from thenvoi.config import load_agent_config

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

async def main():
    load_dotenv()
    agent_id, api_key = load_agent_config("my_agent")

    adapter = CodexAdapter(
        config=CodexAdapterConfig(
            transport="stdio",
            personality="pragmatic",
            model="gpt-5.3-codex",
            cwd=os.getcwd(),
            custom_section="""
            You are a senior Python developer. When users ask questions:
            1. Think through the problem carefully
            2. Provide clear, step-by-step explanations
            3. Include code examples when relevant
            4. Suggest tests for any code changes
            """,
            reasoning_effort="high",
            reasoning_summary="concise",
            approval_mode="manual",
            enable_task_events=True,
            enable_execution_reporting=True,
        )
    )

    agent = Agent.create(
        adapter=adapter,
        agent_id=agent_id,
        api_key=api_key,
        ws_url=os.getenv("THENVOI_WS_URL"),
        rest_url=os.getenv("THENVOI_REST_URL"),
    )

    logger.info("Codex agent is running! Press Ctrl+C to stop.")
    await agent.run()

if __name__ == "__main__":
    asyncio.run(main())

Debug Mode

If your agent isn’t responding as expected, enable debug logging:

import asyncio
import os
import logging
from dotenv import load_dotenv
from thenvoi import Agent
from thenvoi.adapters import CodexAdapter, CodexAdapterConfig
from thenvoi.config import load_agent_config

# Enable debug logging for the SDK
logging.basicConfig(
    level=logging.WARNING,
    format="%(asctime)s [%(levelname)s] %(name)s: %(message)s",
    datefmt="%Y-%m-%d %H:%M:%S",
)
logging.getLogger("thenvoi").setLevel(logging.DEBUG)
logger = logging.getLogger(__name__)

async def main():
    load_dotenv()
    agent_id, api_key = load_agent_config("my_agent")

    adapter = CodexAdapter(
        config=CodexAdapterConfig(
            transport="stdio",
        )
    )

    agent = Agent.create(
        adapter=adapter,
        agent_id=agent_id,
        api_key=api_key,
        ws_url=os.getenv("THENVOI_WS_URL"),
        rest_url=os.getenv("THENVOI_REST_URL"),
    )

    logger.info("Agent running with DEBUG logging. Press Ctrl+C to stop.")
    await agent.run()

if __name__ == "__main__":
    asyncio.run(main())

With debug logging enabled, you’ll see detailed output including:

  • JSON-RPC message exchange with Codex
  • Thread creation and resume events
  • Tool call dispatch and results
  • Model discovery and fallback attempts
  • Streaming response content

Architecture Notes

The Codex adapter is architecturally different from other adapters:

JSON-RPC Protocol:

  • Communicates with Codex via bidirectional JSON-RPC 2.0
  • Supports both requests (with responses) and notifications (fire-and-forget)
  • Automatic retry with exponential backoff on overload errors
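The request/notification split follows standard JSON-RPC 2.0: a request carries an `id` and expects a response with the same `id`, while a notification omits the `id` entirely. A minimal sketch; the method names below are generic placeholders, not the adapter's actual RPC surface:

```python
import json

# Request: has an "id", so the peer must answer with a response carrying that id.
request = {"jsonrpc": "2.0", "id": 1, "method": "thread/start", "params": {}}

# Notification: no "id" field; fire-and-forget, no response expected.
notification = {"jsonrpc": "2.0", "method": "turn/delta", "params": {"text": "Hi"}}

def expects_response(msg: dict) -> bool:
    """Per JSON-RPC 2.0, only messages with an "id" expect a response."""
    return "id" in msg

print(expects_response(json.loads(json.dumps(request))))       # True
print(expects_response(json.loads(json.dumps(notification))))  # False
```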

Thread Management:

  • Each chatroom maps to a Codex thread
  • Thread IDs are persisted in platform task event metadata
  • On reconnect, the adapter resumes existing threads for conversation continuity
  • Falls back to injecting raw message history if thread resume fails
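The mapping and resume-with-fallback flow can be sketched as follows. This is illustrative only: the real adapter persists thread IDs in platform task event metadata rather than a local dict, and `ThreadRegistry` is a hypothetical name:

```python
class ThreadRegistry:
    """Illustrative chatroom -> Codex thread bookkeeping (not the real adapter)."""

    def __init__(self):
        self._threads = {}  # room_id -> thread_id

    def thread_for(self, room_id, create_thread, resume_thread):
        thread_id = self._threads.get(room_id)
        if thread_id is not None:
            try:
                # Known room: resume the existing thread for continuity.
                return resume_thread(thread_id)
            except RuntimeError:
                # Resume failed: fall through and start a fresh thread.
                pass
        new_id = create_thread()
        self._threads[room_id] = new_id
        return new_id
```

On a failed resume the real adapter additionally re-injects the raw message history into the fresh thread, so the conversation context survives even when the old thread is gone.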

Transport Details:

  • Stdio: Spawns codex app-server --listen stdio:// as a subprocess, communicates via stdin/stdout
  • WebSocket: Connects to a running Codex app server at the configured URL (default ws://127.0.0.1:8765)

When to Use Codex vs Claude SDK

| Feature | Codex | Claude SDK |
| --- | --- | --- |
| Provider | OpenAI Codex CLI | Anthropic |
| Transport | JSON-RPC (stdio/WebSocket) | MCP |
| Authentication | codex login (local) | ANTHROPIC_API_KEY |
| Reasoning Control | effort + summary levels | Extended thinking tokens |
| Approval System | Yes (manual/auto) | No |
| Model Discovery | Automatic via model/list | Explicit |
| Session Persistence | Thread IDs in platform metadata | In-memory per room |
| Sandbox Support | Configurable sandbox policy | None |

Use Codex when:

  • You want to use OpenAI models via Codex with your existing setup
  • You want fine-grained reasoning and approval control
  • You need sandbox isolation for code execution

Use Claude SDK when:

  • You want to use Anthropic Claude models
  • You need extended thinking with visible chain-of-thought
  • You want MCP-based tool integration
  • You prefer session-based conversation management

Docker Deployment

Run Codex agents with Docker using YAML configuration; no Python code required.

Quick Start

1. Configure environment

From the repository root, copy the example environment file and add your OpenAI API key (OpenAI Platform):

$ cp .env.example .env
$ # Edit .env and add your OPENAI_API_KEY
2. Create agent configuration

Navigate to the Docker example directory and create your agent config:

$ cd examples/codex
$ cp example_agent.yaml agent1.yaml

Edit agent1.yaml with your agent credentials from the Thenvoi Dashboard:

agent_id: "agt_abc123xyz"  # Your Agent ID
api_key: "sk_live_..."     # Your API Key

model: gpt-5.3-codex

prompt: |
  You are a helpful coding assistant.
  Be concise and pragmatic.

# Optional: reasoning control
# reasoning_effort: high
# reasoning_summary: concise

# Optional: approval mode
# approval_mode: manual
3. Build and run

$ docker compose build
$ docker compose up

Environment Variables

Docker deployments support environment variable overrides:

| Variable | Default | Description |
| --- | --- | --- |
| CODEX_TRANSPORT | stdio | Transport mode (stdio or ws) |
| CODEX_CWD | /workspace/repo | Working directory for Codex |
| CODEX_MODEL | auto | Model ID override |
| CODEX_SANDBOX | external-sandbox | Sandbox mode |
| CODEX_REASONING_EFFORT | (unset) | Reasoning effort level |
| CODEX_APPROVAL_MODE | manual | Approval handling mode |
| CODEX_ROLE | (unset) | Loads prompt from prompts/{role}.md |

The Docker container mounts ~/.codex from the host for authentication. Your local codex login credentials are shared with the container automatically.
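In compose terms, that credential mount looks roughly like the fragment below. This is a sketch: the service name and the container-side paths (`/root/.codex`, `/workspace/repo`) are assumptions, so check the example's own docker-compose.yml for the exact definition:

```yaml
services:
  codex-agent:          # hypothetical service name
    build: .
    env_file: .env
    volumes:
      # Share host Codex credentials with the container
      - ~/.codex:/root/.codex
      # Workspace the agent operates in (matches the CODEX_CWD default)
      - ./workspace:/workspace/repo
```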


Next Steps