Codex on AWS Bedrock: 30-Min Setup + Decision Gates

Codex CLI v0.124+ added first-class AWS Bedrock support. A 30-minute setup walkthrough plus three “stay on direct API” decision gates for engineering teams.

If your eng team has been holding off on adopting Codex because the procurement story for a second model-vendor relationship was painful, that calculation just changed. Codex CLI v0.124.0 added first-class Amazon Bedrock support with AWS SigV4 signing and AWS credential-based auth. (Codex changelog) Combined with the April 27 end of OpenAI–Microsoft exclusivity and the late-April announcement of OpenAI models on Bedrock (Limited Preview), AWS-anchored teams can now run Codex against GPT-5.5 entirely inside Bedrock, billed against existing AWS commits.

The Day-3 window for first-mover setup tutorials is closing: the official Codex docs landed, OpenAI’s announcement is live, and a few hands-on guides have started appearing. Here’s the 30-minute walkthrough, plus the three “stay on direct API” gates that the existing tutorials are skipping.

What Just Changed

Three things converged in the last week:

  1. Codex CLI v0.124.0 (April 2026) — first-class Amazon Bedrock support, AWS SigV4 signing, AWS credential-based auth. (Codex changelog)
  2. Codex CLI v0.128.0 (April 2026) — Bedrock endpoint and GPT-5.4 model ID metadata updates.
  3. OpenAI on Amazon Bedrock (Limited Preview) — GPT-5.5, GPT-5.4, Codex, and Managed Agents available through Bedrock APIs. Authentication uses your AWS credentials; inference processes through Bedrock; usage applies toward your AWS cloud commits. (OpenAI on AWS announcement)

The combined effect: an AWS-anchored eng team can use Codex CLI, the Codex desktop app, and the VS Code extension against GPT-5.5 — without an OpenAI direct API key, without a separate procurement workflow, and with Codex usage billed against existing AWS spend commitments.

The 30-Minute Setup

Three phases. Stopwatch on.

Phase 1: AWS Credentials and Bedrock Model Access (10 min)

Before installing or configuring Codex, verify Bedrock model access in the regions where you operate. OpenAI models on Bedrock are in Limited Preview, so model availability varies by region. Check the AWS Bedrock console under “Model access” for openai.gpt-5.5 and openai.gpt-5.4 — request access if not already granted.
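
If you prefer to verify from the terminal, the standard Bedrock CLI call below lists what your account can see. The openai.* model IDs are the ones this walkthrough assumes; Limited Preview models may not appear until access is granted, so treat the console as the source of truth.

aws bedrock list-foundation-models \
  --region us-west-2 \
  --query "modelSummaries[?contains(modelId, 'openai')].{id:modelId,status:modelLifecycle.status}" \
  --output table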

Set up AWS credentials. Codex follows the standard AWS credential chain, so any of the following works:

  • IAM role attached to the host (EC2, ECS, EKS, Lambda) — cleanest for production
  • AWS SSO profile via aws configure sso
  • Static keys via aws configure — fine for local dev, never for production
  • Environment variables via aws configure export-credentials

For local development, the simplest pattern:

aws configure sso          # Or: aws configure for static keys
eval "$(aws configure export-credentials --format env)"

That last line exports the credentials as environment variables that Codex picks up.
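
Before touching Codex, confirm the export actually worked. Both commands are stock AWS CLI:

aws sts get-caller-identity    # Should print your account ID and role/user ARN
env | grep '^AWS_'             # Should show the exported key, secret, and session token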

Phase 2: Codex CLI Configuration (10 min)

Install or upgrade Codex CLI to v0.124.0 or later:

npm install -g @openai/codex@latest
codex --version           # Should be 0.124.0+

Configure Codex to route through Bedrock. The cleanest pattern uses ~/.codex/config.toml:

[provider.bedrock]
type = "bedrock"
region = "us-west-2"           # Your Bedrock region
profile = "default"            # AWS profile (optional; falls back to credential chain)

[model]
default = "openai.gpt-5.5"     # Or openai.gpt-5.4 if you've requested access

If you omit the profile line, Codex uses the standard AWS credential chain — same as the AWS CLI does.
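
If your team uses named profiles, you can also drive the credential chain from the environment instead of hardcoding a profile in the TOML. The profile name below is a placeholder, and this assumes Codex honors the standard chain as described above:

export AWS_PROFILE=codex-bedrock    # Placeholder: a named profile from ~/.aws/config
export AWS_REGION=us-west-2         # Region fallback for the SDK's resolution chain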

Smoke test:

codex --model openai.gpt-5.5 "what's 2+2"

If you see a response, the auth chain is working. If you see a SigV4 error, your credentials aren’t being picked up; re-run the eval line above.
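
If the error persists after re-exporting, take Codex out of the loop and hit Bedrock directly. The sketch below uses the Bedrock Converse API; whether the OpenAI preview models accept Converse is an assumption here, but either way the error class tells you where the problem sits:

aws bedrock-runtime converse \
  --region us-west-2 \
  --model-id openai.gpt-5.5 \
  --messages '[{"role":"user","content":[{"text":"ping"}]}]'

An AccessDeniedException points at IAM or model access, not Codex; a clean response means the AWS side works and the issue is in ~/.codex/config.toml.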

Phase 3: VS Code Extension + Production Wiring (10 min)

Install the Codex VS Code extension from the Marketplace (search “OpenAI Codex”). In the extension settings, point it at the same Bedrock provider config you just created. The extension reads ~/.codex/config.toml by default.

For production deployment, three things to set up before the first PR lands (a wiring sketch for all three follows the list):

  1. IAM policy template. A least-privilege Bedrock invoke policy:

    {
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Action": ["bedrock:InvokeModel", "bedrock:InvokeModelWithResponseStream"],
        "Resource": [
          "arn:aws:bedrock:us-west-2::foundation-model/openai.gpt-5.5",
          "arn:aws:bedrock:us-west-2::foundation-model/openai.gpt-5.4"
        ]
      }]
    }
    
  2. CloudWatch metric filters on Bedrock invocation logs. Bedrock emits per-invocation usage metrics (invocation counts, input and output token counts) that you’ll want to watch for the first 30 days of use.

  3. Codex audit log retention. Codex writes session logs locally; for compliance shops, write them to a directory with the Codex --log-output-dir flag and sync that directory to S3.
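
A minimal wiring sketch for all three, assuming the policy JSON above is saved as codex-bedrock-policy.json; the role names, log group, account ID, bucket, and log directory are all placeholders:

# 1. Create and attach the least-privilege invoke policy
aws iam create-policy \
  --policy-name CodexBedrockInvoke \
  --policy-document file://codex-bedrock-policy.json
aws iam attach-role-policy \
  --role-name codex-runner \
  --policy-arn arn:aws:iam::123456789012:policy/CodexBedrockInvoke

# 2. Enable Bedrock invocation logging so metric filters have data to chew on
aws bedrock put-model-invocation-logging-configuration \
  --logging-config '{"cloudWatchConfig":{"logGroupName":"/bedrock/codex","roleArn":"arn:aws:iam::123456789012:role/bedrock-logging"},"textDataDeliveryEnabled":true}'

# 3. Ship local Codex session logs to S3 on a schedule (cron or CI)
aws s3 sync ~/codex-logs/ s3://your-audit-bucket/codex-sessions/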

That’s the setup. From here, your team can run Codex commands and the bills land in your AWS account, not on a separate OpenAI invoice.

The 3 “Stay on Direct API” Gates

The setup tutorials that have started landing don’t typically cover the cases where Bedrock is the wrong choice. Here are the three situations where the direct OpenAI API stays the better answer.

Gate 1: Region Availability Doesn’t Match Your Workload

OpenAI on Bedrock is in Limited Preview. As of early May 2026, model availability varies sharply by AWS region. If your workload runs in eu-west-1, ap-northeast-1, or any region where openai.gpt-5.5 isn’t yet enabled, the cross-region invocation latency adds 200-400ms per call versus direct API.

The decision: if you’re in a Bedrock-supported region and your latency budget can absorb the overhead (say, 800ms or more), switch. If you’re in a non-supported region or your latency budget is under 500ms, stay on direct API for now and revisit when GA expands the region list.
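
Before deciding, measure instead of guessing. A crude per-call latency check, reusing the Converse call from Phase 2; run it from your workload’s region, then from the Bedrock region, and the delta is your actual cross-region penalty:

# Five timed round trips to the Bedrock endpoint (model ID as assumed above)
for i in 1 2 3 4 5; do
  time aws bedrock-runtime converse \
    --region us-west-2 \
    --model-id openai.gpt-5.5 \
    --messages '[{"role":"user","content":[{"text":"ping"}]}]' \
    > /dev/null
done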

Gate 2: Existing OpenAI Org-Level Audit Trail Requirements

If your compliance posture requires OpenAI’s org-level audit trail (the OpenAI Admin API audit-log feed, the OpenAI usage dashboard, the OpenAI billing report), you lose those when you route through Bedrock. Bedrock provides AWS-side audit (CloudTrail) but doesn’t surface OpenAI-side metadata in the same form.
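
To see exactly what the AWS side gives you, pull recent Bedrock events from CloudTrail and compare the fields against your compliance checklist. Note that whether runtime InvokeModel calls appear as management events or require data-event logging depends on your CloudTrail configuration:

aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=EventSource,AttributeValue=bedrock.amazonaws.com \
  --max-results 20 \
  --query "Events[].{time:EventTime,event:EventName,user:Username}" \
  --output table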

The decision: if your security team has a documented dependency on OpenAI Admin API audit logs, stay on direct API and revisit only if AWS publishes a Bedrock-side equivalent. If your audit story is AWS-anchored already, switch.

Gate 3: Codex Features Lagging on Bedrock

Codex CLI features land on direct API first, then Bedrock. The April 2026 v0.124.0 release fixed Bedrock model support for apply_patch, meaning that before v0.124.0 the Bedrock route had a real feature gap. Future Codex CLI features will likely follow the same pattern: direct API first, Bedrock second.

The decision: if your team uses bleeding-edge Codex features the week they ship, stay on direct API as your primary and use Bedrock for production-stable workloads. If you’re comfortable with a one-to-two-week lag on new features, switch to Bedrock-primary.

What This Means for You

If you’re an AWS-anchored eng team with existing OpenAI direct API spend: Run the 30-min setup this week. The cost case is straightforward — Bedrock pass-through is essentially the same per-token pricing as direct API, but it consolidates your billing and applies to AWS commit drawdown. The gating questions are region availability and audit-trail dependencies.

If you’re an AWS-anchored eng team without OpenAI spend yet: This is the cheapest path to add Codex to your stack. No procurement workflow, no vendor relationship, no separate invoice. Run the setup as part of a sprint week and pilot on a non-critical service.

If you’re on GCP or Azure: This change doesn’t affect you. GCP eng teams should stay on direct API or wait for the equivalent Vertex AI integration (no public timeline yet). Azure eng teams have GPT-5.5 via Azure OpenAI Service, which is the equivalent path on that cloud.

If you’re an eng manager doing Q3 model-vendor budgeting: The Bedrock route changes the procurement narrative from “second vendor relationship” to “extending existing AWS commit usage.” If you have unspent Enterprise Discount Program (EDP) commits, this may consolidate them productively.

If you’re a security or compliance lead: The Gate 2 question (OpenAI org-level audit trail) is the one to answer first. CloudTrail covers the AWS side; if your compliance regime needs OpenAI-side metadata too, document the gap before allowing the switch.

What This Doesn’t Cover

Limited Preview means access isn’t guaranteed. OpenAI on Bedrock requires AWS account allowlisting for the OpenAI models. The provisioning timeline varies — some teams report same-week access, others 2-4 weeks. Plan accordingly.

Pricing parity is “essentially the same,” not perfectly identical. Bedrock pass-through pricing tracks OpenAI direct, but the Bedrock invoice adds operational detail (per-region, per-model line items) that doesn’t appear on direct API invoices. If your finance team needs exact cost-per-call reconciliation, the Bedrock invoice format is more granular but different.

Codex Managed Agents on Bedrock is a separate Limited Preview beyond the CLI / desktop / VS Code paths covered here. The Managed Agents flow is for production agentic deployments and follows a different setup path. (AWS Bedrock OpenAI announcement)

Sonnet 4.8 on Bedrock is unrelated. The Bedrock OpenAI route is for OpenAI models specifically. Anthropic’s Sonnet/Opus models on Bedrock are a separate auth and provisioning path. If your team uses both, you’ll have two separate Bedrock model-access requests.

This piece is a setup walkthrough, not an architecture review. If your existing AI architecture has Claude / GPT / DeepSeek as a portfolio, the Bedrock route shifts the operational picture. Run the architecture review separately before defaulting your team to Bedrock-primary.

The Bottom Line

The Bedrock route is now a credible production path for Codex on GPT-5.5. The setup is 30 minutes. The “stay on direct API” gates are real but narrow — region availability, audit-trail dependencies, bleeding-edge feature usage. For most AWS-anchored eng teams, this is the consolidation move that cleans up the procurement story.

For developers who want a structured walkthrough on AI-coding workflows beyond the setup question, our Developers track covers the practitioner patterns. GPT-5.4 for ChatGPT Users is the closer fit for teams whose default model is now OpenAI rather than Anthropic.

Run the setup. Move the bill to AWS. Move on.

