Today, we’re releasing a research preview of GPT-5.3-Codex-Spark, a smaller version of GPT-5.3-Codex and our first model designed for real-time coding. Codex-Spark is optimized to feel near-instant, delivering more than 1000 tokens per second while remaining highly capable for real-world coding tasks.
Codex-Spark is available in research preview for ChatGPT Pro users in the latest Codex app, CLI, and IDE extension. This release also marks the first milestone in our partnership with Cerebras.
At launch, Codex-Spark is text-only with a 128k context window. During the research preview, usage has separate model-specific limits and doesn’t count against standard Codex limits. During high demand, access may slow down or queue while we balance reliability across users.
To switch to GPT-5.3-Codex-Spark:
- In the CLI, start a new thread with:

  ```
  codex --model gpt-5.3-codex-spark
  ```

  Or use `/model` during a session.
- In the IDE extension, choose GPT-5.3-Codex-Spark from the model selector in the composer.
- In the Codex app, choose GPT-5.3-Codex-Spark from the model selector in the composer.
If you don’t see GPT-5.3-Codex-Spark yet, update the CLI, IDE extension, or Codex app to the latest version.
GPT-5.3-Codex-Spark isn’t available in the API at launch.
For API-key workflows, continue using gpt-5.2-codex.
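If you drive the CLI with an API key, you can pin the default model in your Codex configuration so every session starts on a supported model. A minimal sketch, assuming the Codex CLI reads its default model from `~/.codex/config.toml` (the exact file location and key may differ by CLI version; check your installed version's docs):

```toml
# ~/.codex/config.toml — assumed config path and key name
# Pin the default model for API-key workflows until
# gpt-5.3-codex-spark becomes available in the API.
model = "gpt-5.2-codex"
```

With a default pinned this way, `codex --model gpt-5.3-codex-spark` still lets individual ChatGPT-authenticated sessions opt in to the preview model.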