OpenAI Introduces Its First AI Model Capable of Real-Time Coding as Codex Push Continues

Codex has become a bigger focus for OpenAI in 2026. In the wake of GPT-5.3, the San Francisco-based artificial intelligence (AI) giant released its GPT-5.3-Codex model earlier this month, the first time the company put the emphasis on a coding-focused model rather than its general-purpose counterpart. Last week, OpenAI CEO Sam Altman said that Codex usage had grown 50 percent in a single week. Keeping up the momentum, the AI giant has now released GPT-5.3-Codex-Spark, its first model that can write code in real time.

OpenAI Introduces GPT-5.3-Codex-Spark

OpenAI announced and detailed the new coding model in a blog post. GPT-5.3-Codex-Spark is available in research preview to ChatGPT Pro subscribers via the Codex app, the command line interface (CLI), and the integrated development environment (IDE) extension. Since the model has not been fully rolled out, those using it will face some limitations for the time being.

The most notable feature, apart from its unusually long name, is that Codex-Spark can code in real time. The model is designed for low-latency workloads and can write and edit code almost instantly, the company said. It is claimed to process 1,000 tokens per second.
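To put that figure in perspective, here is a quick back-of-the-envelope calculation of how long an edit would take to stream at the claimed rate. The edit sizes used below are illustrative assumptions, not figures from OpenAI:

```python
# Rough perceived-latency estimate at the claimed 1,000 tokens/second.
# The 150- and 3,000-token edit sizes are illustrative assumptions.

CLAIMED_TOKENS_PER_SECOND = 1_000

def seconds_to_stream(num_tokens: int, rate: int = CLAIMED_TOKENS_PER_SECOND) -> float:
    """Time to stream `num_tokens` at a constant generation rate."""
    return num_tokens / rate

small_edit = seconds_to_stream(150)        # a small targeted edit
large_refactor = seconds_to_stream(3_000)  # a larger refactor

print(f"150-token edit:   {small_edit:.2f} s")    # 0.15 s
print(f"3,000-token edit: {large_refactor:.2f} s")  # 3.00 s
```

At that speed, even a multi-thousand-token change lands in a few seconds, which is what makes the "real-time" framing plausible.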

Notably, it is a text-only model that writes code, makes targeted edits, reshapes logic, and refines interfaces in real time. It also has a context window of 128,000 tokens, which makes it suitable for routine and moderately demanding tasks. For more complex tasks, however, users should stick with the GPT-5.3-Codex model.

Why is Codex-Spark so fast? Apart from code-level optimisations, the AI model runs on low-latency hardware. Last month, OpenAI announced a partnership with Cerebras, and as part of it, the new model runs on the company's Wafer Scale Engine 3 AI accelerator, which provides high-speed inference.

Coming to performance, the AI giant shared internal benchmark scores on SWE-Bench Pro and Terminal-Bench 2. In agentic software engineering capability, Codex-Spark beats GPT-5.1-Codex-Mini but falls slightly short of the full GPT-5.3-Codex model. Given its much faster output generation, however, this is a big leap.

OpenAI also said it is offering the model to a small group of design partners through the application programming interface (API), to help the company understand how developers plan to integrate it into their products. Notably, Codex-Spark will have its own rate limits, and its usage will not count against standard rate limits. The company said access will be expanded over the coming weeks.
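For design partners building on the API, separate rate limits mean clients should be prepared to back off and retry when throttled. Below is a minimal sketch of exponential backoff, assuming the preview API signals throttling in the standard way (an HTTP 429 mapped to an exception); `RateLimitError` and `flaky_call` are hypothetical stand-ins, not part of any OpenAI client:

```python
import time

# Sketch of client-side handling for a model with its own rate limits.
# RateLimitError and flaky_call are hypothetical stand-ins; a real client
# would map an HTTP 429 response to the exception.

class RateLimitError(Exception):
    pass

def with_backoff(call, max_retries=5, base_delay=0.5):
    """Retry `call`, doubling the delay after each rate-limit error."""
    for attempt in range(max_retries):
        try:
            return call()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))

# Demo: a call that is throttled twice, then succeeds.
attempts = {"count": 0}

def flaky_call():
    attempts["count"] += 1
    if attempts["count"] < 3:
        raise RateLimitError("throttled")
    return "ok"

result = with_backoff(flaky_call, base_delay=0.01)
print(result)  # ok
```

The doubling delay (0.5 s, 1 s, 2 s, ...) is a common default; real integrations would also honour any retry-after hints the API returns.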

