Codex Now Builds Codex
OpenAI has crossed a strange, fascinating threshold. Its newest GPT-5 Codex system helps build, test, and refine itself. Huh? Yeah, that’s right. It works on and improves itself. Straight out of science fiction, self-coding is now working its way into modern software pipelines.
This is a massive paradigm shift in code creation. It changes who, or what, does the work.
What’s Happening & Why This Matters
OpenAI confirms that the majority of its new Codex tooling comes from Codex itself. Engineers assign tasks to the AI agent the same way they assign work to human teammates. Codex writes features, fixes bugs, and reviews pull requests. Humans are still active participants, but the balance has changed.
Alexander Embiricos, Codex product lead at OpenAI, says:
“The vast majority of Codex is built by Codex. It’s almost entirely used to improve itself.”
That statement alone explains why Codex is both exciting and terrifying.
From Assistant to Agent

Earlier AI coding tools worked like autocomplete on steroids. GPT-5 Codex acts like a junior developer with greater stamina. It runs within sandboxed environments, connects to real repositories, and executes tasks in parallel. It tests its own output.
Codex also integrates directly into the tools developers already use:
- ChatGPT web
- Command-line interfaces
- Integrated Development Environments (IDEs), including VS Code, Cursor, and Windsurf
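For readers who want a feel for what delegating a task to a coding agent looks like, here is a minimal sketch using the OpenAI Python SDK. The model name "gpt-5-codex", the sample diff, and the prompt wording are assumptions for illustration, not OpenAI's internal workflow.

```python
# Minimal sketch: hand a code-review task to a Codex-class model
# via the OpenAI Python SDK (Responses API).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A toy diff to review; in practice this would come from a real pull request.
diff_text = """\
--- a/utils/parser.py
+++ b/utils/parser.py
@@ -10,7 +10,7 @@ def parse(line):
-    return line.split(",")
+    return [field.strip() for field in line.split(",")]
"""

response = client.responses.create(
    model="gpt-5-codex",  # assumed model name; use whatever your account exposes
    input=(
        "Review this diff as a senior engineer. "
        "Flag bugs, style issues, and missing tests:\n\n" + diff_text
    ),
)

print(response.output_text)  # the model's review, as plain text
```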
After OpenAI released GPT-5 Codex, usage surged: internal adoption climbed, and external usage jumped more than twentyfold.
A Recursive Engineering Precedent
This is not unprecedented. Software has designed better software for decades. Early chips enabled CAD tools. CAD tools enabled modern processors. Each generation bootstrapped the next.
Codex follows the same pattern. It writes code. That code becomes part of Codex. The next version behaves differently because of it. The loop tightens.
The difference, though, is speed.
Humans in the Loop
Despite the hype, OpenAI draws a line between “vibe coding” and “vibe engineering.” Codex handles the heavy lifting. Humans review plans, approve changes, and ship products.
OpenAI reports dramatic gains. One internal team built and shipped the Sora Android app in under a month with four engineers. Codex handled planning, implementation, and iteration.
That kind of efficiency explains why investors, competitors, and developers are watching so closely.
Productivity vs Reality
The research on AI productivity gains is mixed, however. A METR study found that some experienced developers actually slowed down on complex codebases when using AI tools. OpenAI counters that Codex performs best when humans guide it carefully rather than blindly.
The takeaway: agentic coding works best as amplification. It is not a replacement.
TF Summary: What’s Next
Codex and similar systems are here and likely to proliferate. The real disruption will not be job loss. It will be leverage. Small teams will build products that once required entire departments. Software is its own feedback loop.
MY FORECAST: Self-improving coding agents become standard for large engineering teams within two years. Codex-style agents handle routine development, testing, and maintenance. Human engineers focus on architecture, judgment, and risk.