Codex changelog

Latest updates to Codex, OpenAI’s coding agent

February 2026

  • Codex CLI 0.104.0

    $ npm install -g @openai/codex@0.104.0

    New Features

    • Added WS_PROXY/WSS_PROXY environment support (including lowercase variants) for websocket proxying in the network proxy. (#11784)
    • App-server v2 now emits notifications when threads are archived or unarchived, enabling clients to react without polling. (#12030)
    • Protocol/core now carry distinct approval IDs for command approvals to support multiple approvals within a single shell command execution flow. (#12051)
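    The websocket proxy variables from the first item above are read from the environment before launching Codex; a minimal sketch, with an illustrative proxy endpoint:

```shell
# Illustrative proxy endpoint; per the release notes, lowercase
# ws_proxy / wss_proxy variants are honored as well.
export WS_PROXY="http://proxy.example.internal:8080"   # for ws:// traffic
export WSS_PROXY="http://proxy.example.internal:8080"  # for wss:// traffic
```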

    Bug Fixes

    • Ctrl+C/Ctrl+D now cleanly exits the cwd-change prompt during resume/fork flows instead of implicitly selecting an option. (#12040)
    • Reduced false-positive safety-check downgrade behavior by relying on the response header model (and websocket top-level events) rather than the response body model slug. (#12061)

    Documentation

    • Updated docs and schemas to cover websocket proxy configuration, new thread archive/unarchive notifications, and the command approval ID plumbing. (#11784, #12030, #12051)

    Chores

    • Made the Rust release workflow resilient to npm publish attempts for an already-published version. (#12044)
    • Standardized remote compaction test mocking and refreshed related snapshots to align with the default production-shaped behavior. (#12050)

    Changelog

    Full Changelog: rust-v0.103.0...rust-v0.104.0

    Full release on GitHub

  • Codex CLI 0.103.0

    $ npm install -g @openai/codex@0.103.0

    New Features

    • App listing responses now include richer app details (app_metadata, branding, and labels), so clients can render more complete app cards without extra requests. (#11706)
    • Commit co-author attribution now uses a Codex-managed prepare-commit-msg hook, with command_attribution override support (default label, custom label, or disable). (#11617)

    Bug Fixes

    • Removed the remote_models feature flag to prevent fallback model metadata when it was disabled, improving model selection reliability and performance. (#11699)

    Chores

    • Updated Rust dependencies (clap, env_logger, arc-swap) and refreshed Bazel lock state as routine maintenance. (#11888, #11889, #11890, #12032)
    • Reverted the Rust toolchain bump to 1.93.1 after CI breakage. (#11886, #12035)

    Changelog

    Full Changelog: rust-v0.102.0...rust-v0.103.0

    Full release on GitHub

  • Codex CLI 0.102.0

    $ npm install -g @openai/codex@0.102.0

    New Features

    • Added a more unified permissions flow, including clearer permissions history in the TUI and a slash command to grant sandbox read access when directories are blocked. (#11633, #11512, #11550, #11639)
    • Introduced structured network approval handling, with richer host/protocol context shown directly in approval prompts. (#11672, #11674)
    • Expanded app-server fuzzy file search with explicit session-complete signaling so clients can stop loading indicators reliably. (#10268, #11773)
    • Added customizable multi-agent roles via config, including migration toward the new multi-agent naming/config surface. (#11917, #11982, #11939, #11918)
    • Added a model/rerouted notification so clients can detect and render model reroute events explicitly. (#12001)

    Bug Fixes

    • Fixed remote image attachments so they persist correctly across resume/backtrack and history replay in the TUI. (#10590)
    • Fixed a TUI accessibility regression where animation gating for screen reader users was not consistently respected. (#11860)
    • Fixed app-server thread resume behavior to correctly rejoin active in-memory threads and tighten invalid resume cases. (#11756)
    • Fixed model/list output to return full model data plus visibility metadata, avoiding unintended server-side filtering. (#11793)
    • Fixed several js_repl stability issues, including reset hangs, in-flight tool-call races, and a view_image panic path. (#11932, #11922, #11800, #11796)
    • Fixed app integration edge cases in mention parsing and app list loading/filtering behavior. (#11894, #11518, #11697)

    Documentation

    • Updated contributor guidance to require snapshot coverage for user-visible TUI changes. (#10669)
    • Updated docs/help text around Codex app and MCP command usage. (#11926, #11813)

    Chores

    • Improved developer log tooling with new just log --search and just log --compact modes. (#11995, #11994)
    • Updated vendored rg and tightened Bazel/Cargo lockfile sync checks to reduce dependency drift. (#12007, #11790)

    Changelog

    Full Changelog: rust-v0.101.0...rust-v0.102.0

    Full release on GitHub

  • Codex app v260212

    New features

    • Support for GPT-5.3-Codex-Spark
    • Added conversation forking
    • Added floating pop-out window to take a conversation with you

    Bug fixes

    • Performance improvements and miscellaneous bug fixes.

    Alpha testing for the Codex app on Windows is also starting. Sign up here to be a potential alpha tester.

  • Introducing GPT-5.3-Codex-Spark

    Today, we’re releasing a research preview of GPT-5.3-Codex-Spark, a smaller version of GPT-5.3-Codex and our first model designed for real-time coding. Codex-Spark is optimized to feel near-instant, delivering more than 1000 tokens per second while remaining highly capable for real-world coding tasks.

    Codex-Spark is available in research preview for ChatGPT Pro users in the latest Codex app, CLI, and IDE extension. This release also marks the first milestone in our partnership with Cerebras.

    At launch, Codex-Spark is text-only with a 128k context window. During the research preview, usage has separate model-specific limits and doesn’t count against standard Codex limits. During high demand, access may slow down or queue while we balance reliability across users.

    To switch to GPT-5.3-Codex-Spark:

    • In the CLI, start a new thread with:
      codex --model gpt-5.3-codex-spark
      Or use /model during a session.
    • In the IDE extension, choose GPT-5.3-Codex-Spark from the model selector in the composer.
    • In the Codex app, choose GPT-5.3-Codex-Spark from the model selector in the composer.

    If you don’t see GPT-5.3-Codex-Spark yet, update the CLI, IDE extension, or Codex app to the latest version.

    GPT-5.3-Codex-Spark isn’t available in the API at launch. For API-key workflows, continue using gpt-5.2-codex.
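
    If you want Spark as your default for all sessions, the same config.toml pattern used for other model releases in this changelog should apply (assuming the slug above is accepted as a config value during the research preview):

```toml
# config.toml — make the Spark research preview the default model
model = "gpt-5.3-codex-spark"
```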

  • Codex CLI 0.101.0

    $ npm install -g @openai/codex@0.101.0

    Bug Fixes

    • Model resolution now preserves the requested model slug when selecting by prefix, so model references stay stable instead of being rewritten. (#11602)
    • Developer messages are now excluded from phase-1 memory input, reducing noisy or irrelevant content entering memory. (#11608)
    • Memory phase processing concurrency was reduced to make consolidation/staging more stable under load. (#11614)

    Chores

    • Cleaned and simplified the phase-1 memory pipeline code paths. (#11605)
    • Minor repository maintenance: formatting and test-suite hygiene updates in remote model tests. (#11619)

    Changelog

    Full Changelog: rust-v0.100.0...rust-v0.101.0

    Full release on GitHub

  • Codex CLI 0.100.0

    $ npm install -g @openai/codex@0.100.0

    New Features

    • Added an experimental, feature-gated JavaScript REPL runtime (js_repl) that can persist state across tool calls, with optional runtime path overrides. (#10674)
    • Added support for multiple simultaneous rate limits across the protocol, backend client, and TUI status surfaces. (#11260)
    • Reintroduced app-server websocket transport with a split inbound/outbound architecture, plus connection-aware thread resume subscriptions. (#11370, #11474)
    • Added memory management slash commands in the TUI (/m_update, /m_drop) and expanded memory-read/metrics plumbing. (#11569, #11459, #11593)
    • Enabled Apps SDK apps in ChatGPT connector handling. (#11486)
    • Promoted sandbox capabilities on both Linux and Windows, and introduced a new ReadOnlyAccess policy shape for configurable read access. (#11381, #11341, #11387)

    Bug Fixes

    • Fixed websocket incremental output duplication, prevented appends after response.completed, and treated response.incomplete as an error path. (#11383, #11402, #11558)
    • Improved websocket session stability by continuing ping handling when idle and suppressing noisy first-retry errors during quick reconnects. (#11413, #11548)
    • Fixed stale thread entries by dropping missing rollout files and cleaning stale DB metadata during thread listing. (#11572)
    • Fixed Windows multi-line paste reliability in terminals (especially VS Code integrated terminal) by increasing paste burst timing tolerance. (#9348)
    • Fixed incorrect inheritance of limit_name when merging partial rate-limit updates. (#11557)
    • Reduced repeated skill parse-error spam during active edits by increasing file-watcher debounce from 1s to 10s. (#11494)

    Documentation

    • Added JS REPL documentation and config/schema guidance for enabling and configuring the feature. (#10674)
    • Updated app-server websocket transport documentation in the app-server README. (#11370)

    Chores

    • Split codex-common into focused codex-utils-* crates to simplify dependency boundaries across Rust workspace components. (#11422)
    • Improved Rust release pipeline throughput and reliability for Windows and musl targets, including parallel Windows builds and musl link fixes. (#11488, #11500, #11556)
    • Prevented GitHub release asset upload collisions by excluding duplicate cargo-timing.html artifacts. (#11564)

    Changelog

    Full Changelog: rust-v0.99.0...rust-v0.100.0

    Full release on GitHub

  • Codex CLI 0.99.0

    $ npm install -g @openai/codex@0.99.0

    New Features

    • Running direct shell commands no longer interrupts an in-flight turn; commands can execute concurrently when a turn is active. (#10513)
    • Added /statusline to configure which metadata appears in the TUI footer interactively. (#10546)
    • The TUI resume picker can now toggle sort order between creation time and last-updated time with an in-picker mode indicator. (#10752)
    • App-server clients now get dedicated APIs for steering active turns, listing experimental features, resuming agents, and opting out of specific notifications. (#10721, #10821, #10903, #11319)
    • Enterprise/admin requirements can now restrict web search modes and define network constraints through requirements.toml. (#10964, #10958)
    • Image attachments now accept GIF and WebP inputs in addition to existing formats. (#11237)
    • Enabled snapshotting of the shell environment and rc files. (#11172)

    Bug Fixes

    • Fixed a Windows startup issue where buffered keypresses could cause the TUI sign-in flow to exit immediately. (#10729)
    • Required MCP servers now fail fast during start/resume flows instead of continuing in a broken state. (#10902)
    • Fixed a file-watcher bug that emitted spurious skills reload events and could generate very large log files. (#11217)
    • Improved TUI input reliability: long option labels wrap correctly, Tab submits in steer mode when idle, history recall keeps cursor placement consistent, and stashed drafts restore image placeholders correctly. (#11123, #10035, #11295, #9040)
    • Fixed model-modality edge cases by surfacing clearer view_image errors on text-only models and stripping unsupported image history during model switches. (#11336, #11349)
    • Reduced false approval mismatches for wrapped/heredoc shell commands and guarded against empty command lists in exec policy evaluation. (#10941, #11397)

    Documentation

    • Expanded app-server docs and protocol references for turn/steer, experimental-feature discovery, resume_agent, notification opt-outs, and null developer_instructions normalization. (#10721, #10821, #10903, #10983, #11319)
    • Updated TUI composer docs to reflect draft/image restoration, steer-mode Tab submit behavior, and history-navigation cursor semantics. (#9040, #10035, #11295)

    Chores

    • Reworked npm release packaging so platform-specific binaries are distributed via @openai/codex dist-tags, reducing package-size pressure while preserving platform-specific installs (including @alpha). (#11318, #11339)
    • Pulled in a security-driven dependency update for time (RUSTSEC-2026-0009). (#10876)

    Changelog

    Full Changelog: rust-v0.98.0...rust-v0.99.0

    Full release on GitHub

  • GPT-5.3-Codex in Cursor and VS Code

    Starting today, GPT-5.3-Codex is available natively in Cursor and VS Code.

    API access is starting with a small set of customers as part of a phased release.

    This is the first model treated as a high security capability under the Preparedness Framework.

    Safety controls will continue to scale, and API access will expand over the next few weeks.

  • Codex app v260205

    New features

    • Support for GPT-5.3-Codex.
    • Added mid-turn steering. Submit a message while Codex is working to direct its behavior.
    • Attach or drop any file type.

    Bug fixes

    • Fixed app flickering.

  • Introducing GPT-5.3-Codex

    Today we’re releasing GPT-5.3-Codex, the most capable agentic coding model to date for complex, real-world software engineering.

    GPT-5.3-Codex combines the frontier coding performance of GPT-5.2-Codex with stronger reasoning and professional knowledge capabilities, and runs 25% faster for Codex users. It’s also better at collaboration while the agent is working—delivering more frequent progress updates and responding to steering in real time.

    GPT-5.3-Codex is available with paid ChatGPT plans everywhere you can use Codex: the Codex app, the CLI, the IDE extension, and Codex Cloud on the web. API access for the model will come soon.

    To switch to GPT-5.3-Codex:

    • In the CLI, start a new thread with:
      codex --model gpt-5.3-codex
      Or use /model during a session.
    • In the IDE extension, make sure you are signed in with ChatGPT, then choose GPT-5.3-Codex from the model selector in the composer.
    • In the Codex app, make sure you are signed in with ChatGPT, then choose GPT-5.3-Codex from the model selector in the composer.
    • If you don’t see GPT-5.3-Codex, update the CLI, IDE extension, or Codex app to the latest version.

    For API-key workflows, continue using gpt-5.2-codex while API support rolls out.

  • Codex CLI 0.98.0

    $ npm install -g @openai/codex@0.98.0

    New Features

    • Introducing GPT-5.3-Codex. Learn More
    • Steer mode is now stable and enabled by default, so Enter sends immediately during running tasks while Tab explicitly queues follow-up input. (#10690)

    Bug Fixes

    • Fixed resumeThread() argument ordering in the TypeScript SDK so resuming with local images no longer starts an unintended new session. (#10709)
    • Fixed model-instruction handling when changing models mid-conversation or resuming with a different model, ensuring the correct developer instructions are applied. (#10651, #10719)
    • Fixed a remote compaction mismatch where token pre-estimation and compact payload generation could use different base instructions, improving trim accuracy and avoiding context overflows. (#10692)
    • Cloud requirements now reload immediately after login instead of requiring a later refresh path to take effect. (#10725)

    Chores

    • Restored the default assistant personality to Pragmatic across config and related tests/UI snapshots. (#10705)
    • Unified collaboration mode naming and metadata across prompts, tools, protocol types, and TUI labels for more consistent mode behavior and messaging. (#10666)

    Changelog

    Full Changelog: rust-v0.97.0...rust-v0.98.0

    Full release on GitHub

  • Codex CLI 0.97.0

    $ npm install -g @openai/codex@0.97.0

    New Features

    • Added a session-scoped “Allow and remember” option for MCP/App tool approvals, so repeated calls to the same tool can be auto-approved during the session. (#10584)
    • Added live skill update detection, so skill file changes are picked up without restarting. (#10478)
    • Added support for mixed text and image content in dynamic tool outputs for app-server integrations. (#10567)
    • Added a new /debug-config slash command in the TUI to inspect effective configuration. (#10642)
    • Introduced initial memory plumbing (API client + local persistence) to support thread memory summaries. (#10629, #10634)
    • Added configurable log_dir so logs can be redirected (including via -c overrides) more easily. (#10678)
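
    The new log_dir option can also be set persistently in your config.toml configuration file; a minimal sketch, with an illustrative path:

```toml
# config.toml — redirect Codex logs to a custom directory
log_dir = "/tmp/codex-logs"
```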

    Bug Fixes

    • Fixed jitter in the TUI apps/connectors picker by stabilizing description-column rendering. (#10593)
    • Restored and stabilized the TUI “working” status indicator/shimmer during preamble and early exec flows. (#10700, #10701)
    • Improved cloud requirements reliability with higher timeouts, retries, and corrected precedence over MDM settings. (#10631, #10633, #10659)
    • Persisted pending-input user events more consistently for mid-turn injected input handling. (#10656)

    Documentation

    • Documented how to opt in to the experimental app-server API. (#10667)
    • Updated docs/schema coverage for new log_dir configuration behavior. (#10678)

    Chores

    • Added a gated Bubblewrap (bwrap) Linux sandbox path to improve filesystem isolation options. (#9938)
    • Refactored model client lifecycle to be session-scoped and reduced implicit client state. (#10595, #10664)
    • Added caching for MCP actions from apps to reduce repeated load latency for users with many installed apps. (#10662)
    • Added a none personality option in protocol/config surfaces. (#10688)

    Changelog

    Full Changelog: rust-v0.96.0...rust-v0.97.0

    Full release on GitHub

  • Codex app v260204

    New features

    • Added Zed and Textmate as options to open files and folders.
    • Added PDF preview in the review panel.

    Bug fixes

    • Performance improvements.

  • Codex CLI 0.96.0

    $ npm install -g @openai/codex@0.96.0

    New Features

    • Added thread/compact to the v2 app-server API as an async trigger RPC, so clients can start compaction immediately and track completion separately. (#10445)
    • Added websocket-side rate limit signaling via a new codex.rate_limits event, with websocket parity for ETag/reasoning metadata handling. (#10324)
    • Enabled unified_exec on all non-Windows platforms. (#10641)
    • Constrained requirement values now include source provenance, enabling source-aware config debugging in UI flows like /debug-config. (#10568)

    Bug Fixes

    • Fixed Esc handling in the TUI request_user_input overlay: when notes are open, Esc now exits notes mode instead of interrupting the session. (#10569)
    • Thread listing now queries the state DB first (including archived threads) and falls back to filesystem traversal only when needed, improving listing correctness and resilience. (#10544)
    • Fixed thread path lookup to require that the resolved file actually exists, preventing invalid thread-id resolutions. (#10618)
    • Dynamic tool injection now runs in a single transaction to avoid partial state updates. (#10614)
    • Refined request_rule guidance used in approval-policy prompting to correct rule behavior. (#10379, #10598)

    Documentation

    • Updated app-server docs for thread/compact to clarify its asynchronous behavior and thread-busy lifecycle. (#10445)
    • Updated TUI docs to match the mode-specific Esc behavior in request_user_input. (#10569)

    Chores

    • Migrated state DB helpers to a versioned SQLite filename scheme and cleaned up legacy state files during runtime initialization. (#10623)
    • Expanded runtime telemetry with websocket timing metrics and simplified internal metadata flow in core client plumbing. (#10577, #10589)

    Changelog

    Full Changelog: rust-v0.95.0...rust-v0.96.0

    Full release on GitHub

  • Codex CLI 0.95.0

    $ npm install -g @openai/codex@0.95.0

    New Features

    • Added codex app <path> on macOS to launch Codex Desktop from the CLI, with automatic DMG download if it is missing. (#10418)
    • Added personal skill loading from ~/.agents/skills (with ~/.codex/skills compatibility), plus app-server APIs/events to list and download public remote skills. (#10437, #10448)
    • /plan now accepts inline prompt arguments and pasted images, and slash-command editing/highlighting in the TUI is more polished. (#10269)
    • Shell-related tools can now run in parallel, improving multi-command execution throughput. (#10505)
    • Shell executions now receive CODEX_THREAD_ID, so scripts and skills can detect the active thread/session. (#10096)
    • Added vendored Bubblewrap + FFI wiring in the Linux sandbox as groundwork for upcoming runtime integration. (#10413)
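
    The CODEX_THREAD_ID variable from the list above can be read by any script or skill that Codex executes; a minimal sketch (the fallback string is illustrative, for running the script outside Codex):

```shell
# Detect the active Codex thread/session from inside a shell execution.
thread="${CODEX_THREAD_ID:-not-running-under-codex}"
echo "thread: $thread"
```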

    Bug Fixes

    • Hardened Git command safety so destructive or write-capable invocations no longer bypass approval checks. (#10258)
    • Improved resume/thread browsing reliability by correctly showing saved thread names and fixing thread listing behavior. (#10340, #10383)
    • Fixed first-run trust-mode handling so sandbox mode is reported consistently, and made $PWD/.agents read-only like $PWD/.codex. (#10415, #10524)
    • Fixed codex exec hanging after interrupt in websocket/streaming flows; interrupted turns now shut down cleanly. (#10519)
    • Fixed review-mode approval event wiring so requestApproval IDs align with the corresponding command execution items. (#10416)
    • Improved 401 error diagnostics by including server message/body details plus cf-ray and requestId. (#10508)

    Documentation

    • Expanded TUI chat composer docs to cover slash-command arguments and attachment handling in plan/review flows. (#10269)
    • Refreshed issue templates and labeler prompts to better separate CLI/app bug reporting and feature requests. (#10411, #10453, #10548, #10552)

    Chores

    • Completed migration off the deprecated mcp-types crate to rmcp-based protocol types/adapters, then removed the legacy crate. (#10356, #10349, #10357)
    • Updated the bytes dependency for a security advisory and cleaned up resolved advisory configuration. (#10525)

    Changelog

    Full Changelog: rust-v0.94.0...rust-v0.95.0

    Full release on GitHub

  • Introducing the Codex app

    Codex app

    The Codex app for macOS is a desktop interface for running agent threads in parallel and collaborating with agents on long-running tasks. It includes a project sidebar, thread list, and review pane for tracking work across projects.

    For a limited time, ChatGPT Free and Go include Codex, and Plus, Pro, Business, Enterprise, and Edu plans get double rate limits. Those higher limits apply in the app, the CLI, your IDE, and the cloud.

    Learn more in the Introducing the Codex app blog post.

    Check out the Codex app documentation for more.

  • Codex CLI 0.94.0

    $ npm install -g @openai/codex@0.94.0

    New Features

    • Plan mode is now enabled by default with updated interaction guidance in the plan prompt. (#10313, #10308, #10329)
    • Personality configuration is now stable: default is friendly, the config key is personality, and existing settings migrate forward. (#10305, #10314, #10310, #10307)
    • Skills can be loaded from .agents/skills, with clearer relative-path instructions and nested-folder markers supported. (#10317, #10282, #10350)
    • Console output now includes runtime metrics for easier diagnostics. (#10278)

    Bug Fixes

    • Unarchiving a thread updates its timestamp so sidebar ordering refreshes. (#10280)
    • Conversation rules output is capped and prefix rules are deduped to avoid repeated rules. (#10351, #10309)
    • Override turn context no longer appends extra items. (#10354)

    Documentation

    • Fixed a broken image link in the npm README. (#10303)

    Changelog

    Full Changelog: rust-v0.93.0...rust-v0.94.0

    Full release on GitHub

December 2025

  • Agent skills in Codex

    Codex now supports agent skills: reusable bundles of instructions (plus optional scripts and resources) that help Codex reliably complete specific tasks.

    Skills are available in both the Codex CLI and IDE extensions.

    You can invoke a skill explicitly by typing $skill-name (for example, $skill-installer or the experimental $create-plan skill after installing it), or let Codex select a skill automatically based on your prompt.

    Learn more in the skills documentation.

    Folder-based standard (agentskills.io)

    Following the open agent skills specification, a skill is a folder with a required SKILL.md and optional supporting files:

    my-skill/
      SKILL.md       # Required: instructions + metadata
      scripts/       # Optional: executable code
      references/    # Optional: documentation
      assets/        # Optional: templates, resources
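
    A minimal SKILL.md might look like the sketch below; the frontmatter fields follow the open agent skills specification, and the name and description here are illustrative:

```markdown
---
name: my-skill
description: When and how to use this skill, so Codex can select it automatically.
---

# My skill

Step-by-step instructions Codex follows when the skill is invoked.
```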

    Install skills per-user or per-repo

    You can install skills for just yourself in ~/.codex/skills, or for everyone on a project by checking them into .codex/skills in the repository.

    Codex also ships with a few built-in system skills to get started, including $skill-creator and $skill-installer. The $create-plan skill is experimental and needs to be installed (for example: $skill-installer install the create-plan skill from the .experimental folder).

    Curated skills directory

    Codex ships with a small curated set of skills inspired by popular workflows at OpenAI. Install them with $skill-installer, and expect more over time.

  • Introducing GPT-5.2-Codex

    Today we are releasing GPT-5.2-Codex, the most advanced agentic coding model yet for complex, real-world software engineering.

    GPT-5.2-Codex is a version of GPT-5.2 further optimized for agentic coding in Codex, including improvements on long-horizon work through context compaction, stronger performance on large code changes like refactors and migrations, improved performance in Windows environments, and significantly stronger cybersecurity capabilities.

    Starting today, the CLI and IDE Extension will default to gpt-5.2-codex for users who are signed in with ChatGPT. API access for the model will come soon.

    If you have a model specified in your config.toml configuration file, you can instead try out gpt-5.2-codex for a new Codex CLI session using:

    codex --model gpt-5.2-codex

    You can also use the /model slash command in the CLI. In the Codex IDE Extension you can select GPT-5.2-Codex from the dropdown menu.

    If you want to switch for all sessions, you can change your default model to gpt-5.2-codex by updating your config.toml configuration file:

    model = "gpt-5.2-codex"

  • Introducing Codex for Linear

    Assign or mention @Codex in an issue to kick-off a Codex cloud task. As Codex works, it posts updates back to Linear, providing a link to the completed task so you can review, open a PR, or keep working.

    Screenshot of a successful Codex task started in Linear

    To learn more about how to connect Codex to Linear both locally through MCP and through the new integration, check out the Codex for Linear documentation.

November 2025

  • Usage and credits fixes

    Minor updates to address a few issues with Codex usage and credits:

    • Adjusted all usage dashboards to show “limits remaining” for consistency. The CLI previously displayed “limits used.”
    • Fixed an issue preventing users from buying credits if their ChatGPT subscription was purchased via iOS or Google Play.
    • Fixed an issue where the CLI could display stale usage information; it now refreshes without needing to send a message first.
    • Optimized the backend to help smooth out usage throughout the day, irrespective of overall Codex load or how traffic is routed. Before, users could get unlucky and hit a few cache misses in a row, leading to much less usage.
  • Introducing GPT-5.1-Codex-Max

    Today we are releasing GPT-5.1-Codex-Max, our new frontier agentic coding model.

    GPT-5.1-Codex-Max is built on an update to our foundational reasoning model, which is trained on agentic tasks across software engineering, math, research, and more. GPT-5.1-Codex-Max is faster, more intelligent, and more token-efficient at every stage of the development cycle, and a new step toward becoming a reliable coding partner.

    Starting today, the CLI and IDE Extension will default to gpt-5.1-codex-max for users who are signed in with ChatGPT. API access for the model will come soon.

    For non-latency-sensitive tasks, we’ve also added a new Extra High (xhigh) reasoning effort, which lets the model think for an even longer period of time for a better answer. We still recommend medium as your daily driver for most tasks.

    If you have a model specified in your config.toml configuration file, you can instead try out gpt-5.1-codex-max for a new Codex CLI session using:

    codex --model gpt-5.1-codex-max

    You can also use the /model slash command in the CLI. In the Codex IDE Extension you can select GPT-5.1-Codex-Max from the dropdown menu.

    If you want to switch for all sessions, you can change your default model to gpt-5.1-codex-max by updating your config.toml configuration file:

    model = "gpt-5.1-codex-max"

  • Introducing GPT-5.1-Codex and GPT-5.1-Codex-Mini

    Alongside the GPT-5.1 launch in the API, we are introducing new gpt-5.1-codex and gpt-5.1-codex-mini model options in Codex: versions of GPT-5.1 optimized for long-running, agentic coding tasks in Codex and Codex-like harnesses.

    Starting today, the CLI and IDE Extension will default to gpt-5.1-codex on macOS and Linux and gpt-5.1 on Windows.

    If you have a model specified in your config.toml configuration file, you can instead try out gpt-5.1-codex for a new Codex CLI session using:

    codex --model gpt-5.1-codex

    You can also use the /model slash command in the CLI. In the Codex IDE Extension you can select GPT-5.1-Codex from the dropdown menu.

    If you want to switch for all sessions, you can change your default model to gpt-5.1-codex by updating your config.toml configuration file:

    model = "gpt-5.1-codex"

  • Introducing GPT-5-Codex-Mini

    Today we are introducing a new gpt-5-codex-mini model option to Codex CLI and the IDE Extension. The model is a smaller, more cost-effective, but less capable version of gpt-5-codex that provides approximately 4x more usage as part of your ChatGPT subscription.

    Starting today, the CLI and IDE Extension will automatically suggest switching to gpt-5-codex-mini when you reach 90% of your 5-hour usage limit, to help you work longer without interruptions.

    You can try the model for a new Codex CLI session using:

    codex --model gpt-5-codex-mini

    You can also use the /model slash command in the CLI. In the Codex IDE Extension you can select GPT-5-Codex-Mini from the dropdown menu.

    Alternatively, you can change your default model to gpt-5-codex-mini by updating your config.toml configuration file:

    model = "gpt-5-codex-mini"

  • GPT-5-Codex model update

    We’ve shipped a minor update to GPT-5-Codex:

    • More reliable file edits with apply_patch.
    • Fewer destructive actions such as git reset.
    • More collaborative behavior when encountering user edits in files.
    • 3% more efficient in time and usage.

October 2025

  • Credits on ChatGPT Pro and Plus

    Codex users on ChatGPT Plus and Pro can now use on-demand credits for more Codex usage beyond what’s included in your plan. Learn more.

  • Tag @Codex on GitHub Issues and PRs

    You can now tag @codex on a teammate’s pull request to ask clarifying questions, request a follow-up, or ask Codex to make changes. GitHub Issues now also support @codex mentions, so you can kick off tasks from any issue, without leaving your workflow.

    Codex responding to a GitHub pull request and issue after an @Codex mention.

  • Codex is now GA

    Codex is now generally available with 3 new features — @Codex in Slack, Codex SDK, and new admin tools.

    @Codex in Slack

    You can now ask questions and assign tasks to Codex directly from Slack. See the Slack guide to get started.

    Codex SDK

    Integrate the same agent that powers the Codex CLI into your own tools and workflows with the Codex SDK in TypeScript. With the new Codex GitHub Action, you can easily add Codex to CI/CD workflows. See the Codex SDK guide to get started.

    import { Codex } from "@openai/codex-sdk";
    
    const agent = new Codex();
    const thread = await agent.startThread();
    
    const result = await thread.run("Explore this repo");
    console.log(result);
    
    const result2 = await thread.run("Propose changes");
    console.log(result2);

    New admin controls and analytics

    ChatGPT workspace admins can now edit or delete Codex Cloud environments. With managed config files, they can set safe defaults for CLI and IDE usage and monitor how Codex uses commands locally. New analytics dashboards help you track Codex usage and code review feedback. Learn more in the enterprise admin guide.

    Availability and pricing updates

    The Slack integration and Codex SDK are available to developers on ChatGPT Plus, Pro, Business, Edu, and Enterprise plans starting today, while the new admin features will be available to Business, Edu, and Enterprise. Beginning October 20, Codex Cloud tasks will count toward your Codex usage. Review the Codex pricing guide for plan-specific details.

September 2025

  • GPT-5-Codex in the API

    GPT-5-Codex is now available in the Responses API, and you can also use it with your API Key in the Codex CLI. We plan on regularly updating this model snapshot. It is available at the same price as GPT-5. You can learn more about pricing and rate limits for this model on our model page.

  • Introducing GPT-5-Codex

    New model: GPT-5-Codex

    GPT-5-Codex is a version of GPT-5 further optimized for agentic coding in Codex. It’s available in the IDE extension and CLI when you sign in with your ChatGPT account. It also powers the cloud agent and Code Review in GitHub.

    To learn more about GPT-5-Codex and how it performs compared to GPT-5 on software engineering tasks, see our announcement blog post.

    Image outputs

    When working in the cloud on front-end engineering tasks, GPT-5-Codex can now display screenshots of the UI in Codex web for you to review. With image output, you can iterate on the design without needing to check out the branch locally.

    New in Codex CLI

    • You can now resume sessions where you left off with codex resume.
    • Context compaction automatically summarizes the session as it approaches the context window limit.
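
    For example, a typical resume flow from the terminal (the `--last` flag is an assumption and may vary by CLI version; `codex resume` itself is the command named above):

```shell
# Pick a previous session from an interactive list:
codex resume

# Or jump straight back into the most recent session
# (flag name is an assumption, check `codex resume --help`):
codex resume --last
```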

    Learn more in the latest release notes.

August 2025

  • Late August update

    IDE extension (Compatible with VS Code, Cursor, Windsurf)

    Codex now runs in your IDE with an interactive UI for fast local iteration. Easily switch between modes and reasoning efforts.

    Sign in with ChatGPT (IDE & CLI)

    One-click authentication that removes the need for API keys and uses ChatGPT Enterprise credits.

    Move work between local ↔ cloud

    Hand off tasks to Codex web from the IDE with the ability to apply changes locally so you can delegate jobs without leaving your editor.

    Code Reviews

    Codex goes beyond static analysis. It checks a PR against its intent, reasons across the codebase and dependencies, and can run code to validate the behavior of changes.

  • Mid August update

    Image inputs

    You can now attach images to your prompts in Codex web. This is great for asking Codex to implement frontend changes or follow up on whiteboarding sessions.

    Container caching

    Codex now caches containers to start new tasks and followups 90% faster, dropping the median start time from 48 seconds to 5 seconds. You can optionally configure a maintenance script to update the environment from its cached state to prepare for new tasks. See the docs for more.
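
    For instance, a maintenance script might simply bring the cached checkout and its dependencies up to date before a new task starts (the commands below are an illustrative sketch for a Node.js project, not documented defaults):

```shell
# Hypothetical maintenance script: refresh a cached container state.
# Update the checkout to the latest commit on the current branch:
git fetch origin
git pull --ff-only

# Reinstall dependencies in case the lockfile changed since caching:
npm install
```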

    Automatic environment setup

    Now, environments without manual setup scripts automatically run the standard installation commands for common package managers like yarn, pnpm, npm, go mod, gradle, pip, poetry, uv, and cargo. This reduces test failures for new environments by 40%.

June 2025

  • Best of N

    Codex can now generate multiple responses simultaneously for a single task, helping you quickly explore possible solutions to pick the best approach.

    Fixes & improvements

    • Added some keyboard shortcuts and a page to explore them. Open it by pressing ⌘-/ on macOS and Ctrl+/ on other platforms.

    • Added a “branch” query parameter in addition to the existing “environment”, “prompt” and “tab=archived” parameters.

    • Added a loading indicator when downloading a repo during container setup.

    • Added support for cancelling tasks.

    • Fixed issues causing tasks to fail during setup.

    • Fixed issues running followups in environments where the setup script changes files that are gitignored.

    • Improved how the agent understands and reacts to network access restrictions.

    • Increased the update rate of text describing what Codex is doing.

    • Increased the limit for setup script duration to 20 minutes for Pro and Business users.

    • Polished code diffs: You can now option-click a code diff header to expand/collapse all of them.

  • June update

    Agent internet access

    Now you can give Codex access to the internet during task execution to install dependencies, upgrade packages, run tests that need external resources, and more.

    Internet access is off by default. Plus, Pro, and Business users can enable it for specific environments, with granular control of which domains and HTTP methods Codex can access. Internet access for Enterprise users is coming soon.

    Learn more about usage and risks in the docs.

    Update existing PRs

    Now you can update existing pull requests when following up on a task.

    Voice dictation

    Now you can dictate tasks to Codex.

    Fixes & improvements

    • Added a link to this changelog from the profile menu.

    • Added support for binary files: When applying patches, all file operations are supported. When using PRs, only deleting or renaming binary files is supported for now.

    • Fixed an issue on iOS where follow-up tasks were shown duplicated in the task list.

    • Fixed an issue on iOS where pull request statuses were out of date.

    • Fixed an issue with follow-ups where the environments were incorrectly started with the state from the first turn, rather than the most recent state.

    • Fixed internationalization of task events and logs.

    • Improved error messages for setup scripts.

    • Increased the limit on task diffs from 1 MB to 5 MB.

    • Increased the limit for setup script duration from 5 to 10 minutes.

    • Polished GitHub connection flow.

    • Re-enabled Live Activities on iOS after resolving an issue with missed notifications.

    • Removed the mandatory two-factor authentication requirement for users using SSO or social logins.

May 2025

  • Reworked environment page

    It’s now easier and faster to set up code execution.

    Fixes & improvements

    • Added a button to retry failed tasks.

    • Added indicators to show that the agent runs without network access after setup.

    • Added options to copy git patches after pushing a PR.

    • Added support for Unicode branch names.

    • Fixed a bug where secrets were not piped to the setup script.

    • Fixed creating branches when there’s a branch name conflict.

    • Fixed rendering diffs with multi-character emojis.

    • Improved error messages for starting tasks, running setup scripts, pushing PRs, and GitHub disconnections to be more specific and indicate how to resolve the error.

    • Improved onboarding for teams.

    • Polished how new tasks look while loading.

    • Polished the followup composer.

    • Reduced GitHub disconnects by 90%.

    • Reduced PR creation latency by 35%.

    • Reduced tool call latency by 50%.

    • Reduced task completion latency by 20%.

    • Started setting page titles to task names so Codex tabs are easier to tell apart.

    • Tweaked the system prompt so that the agent knows it’s working without network access and can suggest that the user set up dependencies.

    • Updated the docs.

  • Codex in the ChatGPT iOS app

    Start tasks, view diffs, and push PRs—while you’re away from your desk.