Something quietly funny happened in the AI coding world this month. OpenAI shipped a plugin for Claude Code. The prosumer community shipped “oh-my-zsh for Codex.” Both are open-source. Both were built in the open while the press cycle was busy writing “Codex vs Claude Code” comparison posts.
The rivalry theater is officially over. You can now run Codex inside Claude Code, with official OpenAI-maintained commands, while an orchestration layer called OMX coordinates multiple agents across tmux panes using the same directory structure that oh-my-zsh pioneered for the shell.
If that last sentence sounded like three languages, that’s the point of this post. Here’s what both tools actually are, how to install them (roughly two minutes each), and why the story they tell together is more important than either one individually.
What Is Oh-My-Codex (OMX)?
Short version: OMX is to the Codex CLI what oh-my-zsh is to the zsh shell. If you’ve never installed oh-my-zsh, think of it this way — zsh is the terminal program that runs your commands. Oh-my-zsh is the bundle of themes, plugins, aliases, and hooks that made zsh pleasant for the first time ever. OMX does the same job for Codex.
Longer version: Codex CLI is OpenAI’s terminal-based coding agent — a more capable cousin of ChatGPT that lives in your terminal, runs your tests, and writes your commits. It’s capable but minimal. OMX adds everything the minimal version doesn’t ship with: a team of coordinated agents that run in parallel in tmux panes, persistent memory backed by MCP servers, 33 specialized prompts for different kinds of work, workflow skills like TDD and planning stages, hook systems for every major event, and a staged pipeline that moves from plan → PRD → execute → verify → fix.
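That staged pipeline is easiest to picture as a loop that halts when a stage fails. A minimal sketch in shell: the stage names come from OMX as described above, but `run_stage` is a hypothetical stand-in for whatever OMX actually invokes at each step.

```shell
#!/bin/sh
# Hypothetical driver for the plan -> PRD -> execute -> verify -> fix
# pipeline. run_stage stands in for the real OMX stage runner; here it
# just reports the stage it was handed.
run_stage() {
  echo "running stage: $1"
}

for stage in plan prd execute verify fix; do
  if ! run_stage "$stage"; then
    # A failed verify (or any failed stage) stops the pipeline rather
    # than letting later stages run on a broken state.
    echo "pipeline halted at: $stage" >&2
    break
  fi
done
```

The useful property is the early halt: a pipeline like this never reaches `fix` with an unverified build in hand.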
The popular fork, by a dev named Yeachan-Heo, crossed 16,000 GitHub stars in the first week of April — a trending-repos tracker flagged 1,789 stars added in a single day. It’s MIT-licensed. Swyx, one of the more prolific AI-dev writers, posted an endorsement of the creator the day after it started trending.
For the pros: it ships 14 named agents ($ultrawork, $deep-interview, $plan, $research, $team, $review, $tdd, $doctor, $hud, $trace, $autoresearch, $architect, $executor, $reviewer), tmux-based parallel workers each in isolated git worktrees (so no merge conflicts between agents), hooks at SessionStart/PreToolUse/PostToolUse/UserPromptSubmit/Stop, Discord and Telegram integrations, and launch profiles from --yolo through --madmax that trade off caution against speed.
What Is Codex-Plugin-CC?
Short version: this one’s from OpenAI itself. It’s a Claude Code plugin — meaning it extends Anthropic’s tool — that lets you delegate tasks from inside Claude Code to Codex. You stay in Claude Code. Codex does the work you hand it.
Longer version: if you’ve ever wished Claude Code could call a second opinion on a risky change, this is that. Three commands matter most:
- /codex:review runs a Codex-quality code review on your current uncommitted changes
- /codex:adversarial-review pressure-tests your design choices — specifically built for auth changes, infra scripts, and large refactors where silent assumptions kill you
- /codex:rescue hands Codex an open-ended task as a background job. Resume, wait, fresh-start, pick a specific model, and set effort level — all as flags
Three more commands manage jobs in flight: /codex:status shows what’s running, /codex:result gives you the final output with a session ID so you can resume directly in Codex, and /codex:cancel stops anything you no longer want.
Under the hood, it’s a Model Context Protocol (MCP) plugin that talks to your local Codex CLI or Codex app server. Nothing goes to OpenAI beyond what you’d send running Codex natively. Your ChatGPT subscription (Free works) or OpenAI API key is all you need, plus Node.js 18.18 or later.
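That Node floor is easy to verify before you bother installing anything. A small sketch: `version_ok` is a throwaway helper for illustration, not part of the plugin.

```shell
#!/bin/sh
# Check a Node.js version string against the 18.18 floor the plugin
# requires. version_ok is a throwaway helper, not part of the plugin.
version_ok() {
  major=${1%%.*}       # text before the first dot
  rest=${1#*.}
  minor=${rest%%.*}    # text between the first and second dots
  [ "$major" -gt 18 ] || { [ "$major" -eq 18 ] && [ "$minor" -ge 18 ]; }
}

# In practice you would feed it the live value:
#   version_ok "$(node --version | tr -d v)"
version_ok "20.11.1" && echo "20.11.1 is new enough"
version_ok "18.17.0" || echo "18.17.0 is too old"
```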
One post announcing it got 2,243 likes and 177 reposts in early April. The response was mostly disbelief that OpenAI shipped it at all. “You can now run Codex inside Claude Code” is the kind of sentence that would have been a joke six months ago.
Installing OMX in Under 2 Minutes
Open your terminal.
npm install -g @openai/codex oh-my-codex
omx setup
omx doctor
That’s the whole install. First command grabs the Codex CLI and the OMX package from npm. Second runs the interactive setup — it’ll ask you for your OpenAI API key if you don’t have one set already, and create the .omx/ state directory in your project. Third runs a diagnostic to verify tmux, your git worktree permissions, and your Codex credentials are all wired correctly.
To actually start a session:
omx --madmax --high
--madmax is the most aggressive launch profile. It spins up a team of agents in tmux panes, each in its own git worktree. If you’d rather start cautious, use --high by itself, or drop to --yolo for single-agent mode with light guardrails. There’s also --xhigh for the full deep-agentic treatment.
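If you keep flipping between profiles, a tiny wrapper saves typos. `omx_launch` and its level names are hypothetical; only the flags themselves come from OMX.

```shell
#!/bin/sh
# Hypothetical wrapper mapping a risk appetite to an OMX launch line.
# The flags (--yolo, --high, --xhigh, --madmax) are OMX's; the wrapper
# and its level names are made up for illustration. It echoes the
# command instead of running it, so you can eyeball before spending
# tokens.
omx_launch() {
  case "$1" in
    solo)     echo "omx --yolo" ;;
    careful)  echo "omx --high" ;;
    deep)     echo "omx --xhigh" ;;
    parallel) echo "omx --madmax --high" ;;
    *) echo "usage: omx_launch solo|careful|deep|parallel" >&2; return 1 ;;
  esac
}

omx_launch parallel
```

Once you trust a mapping, replace the `echo` with the real invocation.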
One thing to watch: if you’re on ChatGPT Plus ($20/month) and you run --madmax, you’ll hit your usage cap faster than you think. One dev reported his limit ran out in six minutes. If you’re doing anything serious with OMX, budget for the $100 Codex Pro tier — the 5x rate limits are what make parallel agents practical.
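The arithmetic behind that warning is worth running yourself before you launch. All three numbers below are made-up assumptions for illustration, not measured rates, but with them you land on roughly the six-minute blowout from the anecdote:

```shell
#!/bin/sh
# Back-of-envelope token math for parallel agents. Every number here
# is an assumption for illustration, not a measured rate. The point is
# that agents multiply burn linearly.
agents=3
tokens_per_agent_per_min=8000   # assumed per-agent burn rate
window_allowance=150000          # assumed per-window token allowance

burn=$((agents * tokens_per_agent_per_min))
minutes=$((window_allowance / burn))
echo "burn: $burn tokens/min"
echo "allowance lasts about $minutes minutes"
```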
Installing Codex-Plugin-CC in Under 2 Minutes
This one installs inside Claude Code, not from your shell. Open Claude Code, then run:
/plugin marketplace add openai/codex-plugin-cc
/plugin install codex@openai-codex
/reload-plugins
/codex:setup
Pick user scope if you want it available everywhere, project scope if you want it isolated to the current repo, or local scope for a single session. Setup will walk you through connecting Codex — if you don’t already have Codex CLI installed, it’ll prompt you.
After setup, try this on a real change:
/codex:review --base main
You’ll get a Codex-quality review of everything different from main. For anything touching authentication or infrastructure:
/codex:adversarial-review --background
That starts the adversarial review in the background so you can keep working. Check on it with /codex:status, pull the result with /codex:result, and if the feedback is damning, you at least got it before merge.
What the Two Together Actually Do
Here’s the workflow that keeps showing up in the wild.
Open Claude Code. Use Claude Opus 4.7 for architecture and planning. When you hit something Claude is being too cautious about — static HTML, a CSS tweak, a pure refactor it keeps hedging on — run /codex:rescue and let Codex’s gpt-5.4-mini plow through it in the background. Pull the result back, review it in Claude Code, merge.
For big refactors, run both models adversarially. Ship a branch with your change. Run /codex:adversarial-review with the flag --focus "authentication and session handling". Codex reviews against a fresh model’s perspective. Any disagreement between the two is your answer to what’s actually risky.
On your terminal side, OMX runs the same three agents in parallel: one for the API layer, one for the frontend, one for the tests. Each in its own tmux pane, each in its own git worktree. The $review agent at the end checks all three before a coordinated merge.
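You can reproduce the skeleton of that layout by hand, which also demystifies what OMX is doing. A sketch that builds a throwaway repo; in a real project you would run the worktree commands from your repo root, and the branch and session names here are illustrative.

```shell
#!/bin/sh
# Hand-rolled version of the OMX layout: one git worktree per agent,
# one tmux pane each, so parallel edits never collide. Uses a
# throwaway repo; branch and session names are illustrative.
set -e
work=$(mktemp -d)
git init -q "$work/repo"
cd "$work/repo"
git -c user.email=demo@example.com -c user.name=demo \
    commit --allow-empty -qm "init"

for agent in api frontend tests; do
  # Each agent gets its own branch and its own working directory.
  git worktree add -q -b "agent/$agent" "../wt-$agent"
done
git worktree list

# One tmux pane per worktree (skipped where tmux is unavailable).
if command -v tmux >/dev/null 2>&1; then
  tmux new-session -d -s agents -c "$work/wt-api" || true
  tmux split-window -t agents -c "$work/wt-frontend" || true
  tmux split-window -t agents -c "$work/wt-tests" || true
fi
```

Because each worktree is a separate directory on its own branch, three agents editing three layers of the app never fight over the same files until the coordinated merge at the end.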
A Korean dev summarized the pattern in one tweet: “개발은 클로드 코드에게, 리뷰는 코덱스에게” — development to Claude Code, reviews to Codex. A Japanese dev put it more simply: “The Claude and Codex collaboration is the best.” Neither of them cares which company ships which model.
The Oh-My-Zsh Moment
The reason OMX spiked so hard isn’t the features. It’s the shape of the thing. Oh-my-zsh took an ok shell and turned it into a community-owned platform. Everybody had a plugin. Everybody had a theme. The shell itself didn’t change — the scaffolding around it became the product.
That’s what’s happening to AI coding agents right now. Claude has the obra/superpowers and ui-ux-pro-max ecosystems. Codex now has OMX and the 90+ official plugins that shipped April 17. Both are evolving past the “ship a good model” era into the “ship a good model and an orchestration layer” era.
The part that’s genuinely new: with codex-plugin-cc, those orchestration layers talk to each other. You can run OMX on Codex while Claude Code delegates to it mid-session. The model wars are being resolved by the plumbing around them, not by the benchmarks.
What It Can’t Do (Honest Limits)
Rate limits will crush you on the cheap plans. Parallel agents mean parallel tokens. ChatGPT Plus’s $20 tier handles single-agent Codex use fine. It does not handle OMX in --madmax mode. Budget $100/month for Codex Pro if you want to run parallel.
OMX assumes tmux. If you’re on Windows, there’s a psmux fallback, but it’s less battle-tested. The Mac/Linux experience is where the effort has gone.
Codex-plugin-cc needs Codex CLI running locally. It’s not a cloud-to-cloud bridge. If you don’t want Codex installed on your machine, this isn’t for you.
Both still fail on coding tasks the same way agents always fail. Tests stuck in loops, merge conflicts the LLM can’t resolve, docs written confidently but slightly wrong. Orchestration makes failures faster to catch — it doesn’t make them go away.
The trending stars aren’t the same as battle-tested. OMX is weeks old. Some features (the MCP memory server, the Telegram integration) work in demos and haven’t been pressure-tested in production. If you’re risk-averse, watch it for a month before wiring it into your main workflow.
There are two “oh-my-codex” repos. Yeachan-Heo’s fork is the 16k-star one most tweets reference. The staticpayload fork is a different, leaner project. Make sure you’re installing from the right one.
Post-April-17, some people are questioning OMX’s relevance. A Chinese dev pointed out that with Codex Desktop’s new computer-use plugins, OMX’s orchestration feature might overlap with what’s shipping natively. Watch this tension over the next month. Today, OMX’s hooks and team-based workflows still go further than what the desktop app does — but the gap might narrow.
What This Means for You
If you’re a Claude Code power user: Install codex-plugin-cc this weekend. It’s a 2-minute setup and it gives you a free second opinion on every change. You don’t have to commit to using Codex — just have it available when Claude is being indecisive about something you know is simple.
If you already use Codex CLI: Install OMX if you routinely run long tasks that time out, or if you want multi-agent parallel work. Stay with vanilla Codex CLI if your work is shorter iteration cycles and you’re price-sensitive.
If you’re new to AI coding agents altogether: Skip both for now. Start with Claude Code or Codex Desktop straight. You need to feel what a single agent does wrong before orchestration layers make sense. Come back in a month.
If you’re on a team deciding which tool to standardize on: This is a non-question now. The answer is both, orchestrated. Build your decision around what orchestration stack you want (OMX vs obra/superpowers vs build your own) rather than which model to commit to.
Who Should Install This Weekend
- Install OMX now if: You’re on macOS or Linux with tmux, you have Codex Pro ($100/month), and you’ve got a multi-hour task you can leave running over Sunday.
- Install codex-plugin-cc now if: You already use Claude Code and want a 2-minute upgrade that gives you Codex-as-backup for free.
- Install both if: You’re a solo dev or indie operator and you’ve been running out of ideas for how to get more throughput in the same hours.
- Skip for now if: You’re still learning your primary tool. Orchestration layers amplify whatever you’re already good at. If you’re not yet good at anything, they amplify confusion.
The Bottom Line
OMX and codex-plugin-cc are two halves of the same story: AI coding is becoming a composition problem, not a selection problem. The era of “which model is best” is ending. The era of “which orchestration layer ties my models together” is starting.
Both tools are free. Both are under 5 minutes to install. Both tell you something about where this goes: the companies that ship models are shipping tools to interoperate with their competitors. Because that’s what developers actually want.
Go install OMX. Go install codex-plugin-cc. Use them for a week. Whatever stack you end up with probably won’t match either one exactly — and that’s fine. The point is that the stack is becoming the product.
Sources
- openai/codex-plugin-cc — GitHub
- Yeachan-Heo/oh-my-codex — GitHub (popular fork, 16k+ stars)
- staticpayload/oh-my-codex — GitHub (leaner fork)
- oh-my-codex.dev — official site
- a2a-mcp.org — What Is Oh My Codex (OMX)?
- OpenAI Developer Community — Introducing Codex Plugin for Claude Code
- npm — oh-my-codex
- MindStudio — OpenAI Codex Plugin for Claude Code: Cross-Provider Review
- Alpha Signal — You can now trigger Codex from Claude Code
- OpenAI Developers — Codex Plugins