The short version, before we dig in: the April 27 amendment doesn’t end Microsoft’s relationship with OpenAI — it turns Microsoft from “exclusive gatekeeper” into “preferred wholesaler.” For a 500- to 2,000-person mid-market shop already running Azure OpenAI in production, this isn’t a panic-migration story. It’s a leverage story. And the leverage is best captured if you make decisions in Q3 instead of waiting for Q4.
Here’s the procurement-side read on what changed, what’s actually live now, and the four decisions that come out of it.
What actually changed (and what didn’t)
Per Microsoft’s official partnership update and OpenAI’s own announcement:
- License: non-exclusive, through 2032. Microsoft’s words: “Microsoft will continue to have a license to OpenAI IP for models and products through 2032. Microsoft’s license will now be non-exclusive.” That’s the formal end of cloud exclusivity.
- Microsoft remains primary cloud partner, but not only cloud. The joint statement: “Microsoft remains OpenAI’s primary cloud partner, and OpenAI products will ship first on Azure, unless Microsoft cannot and chooses not to support the necessary capabilities.” New launches still land on Azure first. OpenAI is now free to follow on AWS, GCP, and others.
- OpenAI free to serve on any cloud. “OpenAI can now serve all its products to customers across any cloud provider.”
- Revenue share changes. Microsoft “will no longer pay a revenue share to OpenAI.” OpenAI’s payments to Microsoft “will continue through 2030 … subject to a total cap.” Per Business Insider’s read, OpenAI’s payments are now capped and “independent of OpenAI’s technology progress.” The AGI trigger is gone from your pricing scenarios.
- Microsoft’s stake. ~27% of OpenAI Group PBC, on a roughly $135B valuation per the October 2025 restructuring — about $36B. Microsoft is locked in long-term as a strategic partner, not a gatekeeper.
For a procurement reader, here’s the budget translation: Azure is no longer a mandatory choke point for OpenAI, so OpenAI spend can increasingly be consolidated into the cloud you already treat as your financial system of record (AWS or GCP) instead of procured as a special Azure-only line item. The capped revenue share through 2030 means less mystery overhang on future pricing.
What’s actually live now
The “wait, can I use OpenAI on AWS today?” question has a real answer — partially yes.
Live: AWS Bedrock (limited preview)
“The latest OpenAI models, including GPT-5.5 and GPT-5.4, will be available in preview on Amazon Bedrock. Use OpenAI’s frontier models through the same Bedrock APIs you already rely on, with unified security, governance, and cost controls.”
What that includes:
- GPT-5.5 + GPT-5.4 in preview on Bedrock through the standard InvokeModel APIs.
- Codex on Bedrock (limited preview) — Codex CLI, desktop app, and VS Code extension all wired to authenticate via AWS credentials. Per OpenAI’s release note: “Codex usage [applies] toward your AWS cloud commitments.” That’s the procurement headline.
- Amazon Bedrock Managed Agents, powered by OpenAI (limited preview) — production-ready OpenAI-powered agents in the cloud, tuned for long-running tool-using workflows.
- Region availability: typical first wave (us-east-1, one EU Region). Limited preview means waitlists — assume selective rollout through Q3.
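For teams with preview access, calling an OpenAI model through Bedrock looks like any other Bedrock invocation. A minimal sketch with boto3, assuming a chat-completions-style request body; the model ID and the response shape are assumptions to confirm in the Bedrock console once your preview access lands:

```python
# Hedged sketch: invoking an OpenAI model via Bedrock's standard InvokeModel API.
# MODEL_ID is hypothetical -- check the Bedrock console for the real identifier.
import json

MODEL_ID = "openai.gpt-5.5"  # assumption, not a confirmed identifier

def build_request(prompt: str, max_tokens: int = 512) -> str:
    """Serialize a chat-style request body for InvokeModel."""
    return json.dumps({
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    })

def invoke(prompt: str) -> str:
    """Call the model through bedrock-runtime with your AWS credentials."""
    import boto3  # standard AWS SDK; bedrock-runtime hosts InvokeModel
    client = boto3.client("bedrock-runtime", region_name="us-east-1")
    resp = client.invoke_model(modelId=MODEL_ID, body=build_request(prompt))
    # Response path assumes an OpenAI-style chat completion payload:
    return json.loads(resp["body"].read())["choices"][0]["message"]["content"]
```

The point for procurement: the call authenticates with AWS credentials and bills through your AWS account, which is what makes the commit-consolidation math possible.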
Not live yet: Google Cloud / Vertex AI
Per the Business Channel analysis of the deal: “As of 28 April 2026, Google Cloud has not been named in the amended agreement, and no official announcement confirms OpenAI model availability on Google Cloud’s Vertex AI.”
Analyst window: 6 to 12 months for a probable GCP deal. That’s planning guidance, not a procurement plan. Don’t budget against it for Q3.
What practitioners are actually doing
The signal so far isn’t “everyone’s leaving Azure.” It’s selective workload routing.
Computerworld’s coverage quotes Thomas Randall, research director at Info-Tech Research Group: “The era of exclusive frontier model access as a strategic differentiator is coming to an end.” The implication for CIOs: multi-vendor AI is no longer a hedge. It’s the baseline.
A LinkedIn post by Azure architect Gabriel Alejo from late April puts the practitioner pattern plainly: “There is no ‘one-size-fits-all’ platform… The smartest teams today? They don’t choose one — they combine multiple.” He positions AWS Bedrock for “freedom” and avoiding lock-in, Azure for “enterprise scale,” Vertex for “data-driven AI.”
The dual-cloud production pattern that’s emerging:
- ZDR-required workloads (zero data retention compliance) → stay on Azure OpenAI or move to Anthropic on Bedrock
- Non-ZDR workloads with AWS data gravity → evaluate OpenAI Frontier on Bedrock via early-access preview
- New launches → still hit Azure first, follow to other clouds within weeks-to-months
Switching-cost themes that matter:
- Billing consolidation. Per a CIO-focused integration explainer: “Eligible customers can directly apply their Codex and GPT-5.5 usage against their existing AWS cloud computing commitments,” centralizing billing without negotiating secondary vendor agreements. For an AWS-anchored shop, staying Azure-only for OpenAI is effectively maintaining a second island of spend.
- Data gravity. Bedrock processes prompts and proprietary datasets within your existing AWS data residency and security controls. Migrating from Azure-mirrored data is the real switching cost.
- Architectural debt. Practitioners on LinkedIn are flagging “dual-cloud AI challenges” as accumulating “architectural debt every week” if you don’t cleanly separate workloads and governance when you add a second runtime.
The 4 procurement decisions this enables
Decision 1 — AWS-anchored shop? Route GPT-5.5 + Codex via Bedrock
If your main cloud commit is AWS, your dev tooling lives in AWS, and your data lake is in S3:
- Move net-new OpenAI workloads to Bedrock first — especially code-centric ones using Codex and agentic workflows. Codex billing applies against your existing AWS cloud commitments, which means no separate procurement contract to negotiate.
- Keep Azure OpenAI for workloads tightly integrated with Microsoft 365, Dynamics, or Azure-native PaaS. Don’t migrate working integrations.
- Run the comparative TCO: What incremental discount does shifting $200K-$500K/year of OpenAI spend into your AWS commit unlock vs. maintaining the Azure-only line item?
The simplest test: if you’d negotiate a 5-10% commit discount on AWS by adding OpenAI spend to your enterprise agreement, that’s almost always worth more than the migration cost on net-new workloads.
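That test is back-of-envelope arithmetic, and it helps to write it down explicitly. A sketch with illustrative placeholder figures (none of these numbers come from a real quote):

```python
# Back-of-envelope version of the Decision 1 test: does the commit discount
# unlocked by folding OpenAI spend into the AWS agreement beat the one-time
# migration cost? All figures below are illustrative placeholders.

def commit_shift_value(openai_spend: float,
                       commit_discount: float,
                       migration_cost: float,
                       years: float = 1.0) -> float:
    """Net value of routing OpenAI spend through the AWS commit."""
    annual_savings = openai_spend * commit_discount
    return annual_savings * years - migration_cost

# $300K/yr of OpenAI spend, 7% incremental commit discount,
# $15K one-time migration effort, evaluated over one year:
net = commit_shift_value(300_000, 0.07, 15_000)
print(f"Net first-year value: ${net:,.0f}")  # $21,000 savings - $15,000 cost
```

For net-new workloads the migration cost is near zero, which is why the test almost always passes there; for established workloads it's the `migration_cost` term that decides it.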
Decision 2 — GCP-anchored shop? Plan, don’t budget
For Q3:
- Don’t assume you can route OpenAI usage into your GCP commits. The deal isn’t there yet.
- Continue treating OpenAI via Azure (or direct API) as a separate vendor stack.
- But update your 2027-2028 AI sourcing roadmap to include “Vertex AI + OpenAI” as a planned scenario, if and when a deal is announced.
- Use the non-exclusive license to add OpenAI procurement requirements into internal RFPs: “We require the right to host OpenAI models on any major cloud our company uses.”
This is the longest-horizon decision in the table. It’s also the cheapest — you’re updating a planning document, not signing a contract.
Decision 3 — Formalize multi-cloud hedging as policy
The practitioner pattern is becoming the procurement default. For mid-market IT, the policy update for Q3:
- AI vendor diversification policy — written, distributed, in your IT roadmap. The exclusivity end is the explicit trigger to formalize it.
- Workload classification by compliance tier — ZDR / non-ZDR / public-data is the cleanest split. Document which tier each production workload sits in before you make routing decisions.
- Per-workload runtime selection — Azure for ZDR + Microsoft-stack-native; Bedrock for AWS-anchored + cost-optimized; direct OpenAI API for prototyping where commits don’t apply.
- Quarterly recompete review — Q4 2026 is the obvious moment to reassess the routing as Bedrock previews graduate to GA and GCP signals emerge.
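The per-workload runtime selection above can be encoded as a small lookup table, so routing policy lives in code review rather than tribal knowledge. A minimal sketch; the tier names and runtime labels are illustrative, not a standard taxonomy:

```python
# Sketch: per-workload runtime selection as data, keyed on compliance tier
# and the workload's anchor cloud. Labels are illustrative placeholders.

RUNTIME_POLICY = {
    # (compliance_tier, anchor_cloud) -> runtime
    ("zdr", "azure"): "azure-openai",
    ("zdr", "aws"): "azure-openai",    # ZDR stays on Azure until Bedrock parity
    ("non-zdr", "aws"): "bedrock",     # AWS data gravity + commit math
    ("non-zdr", "azure"): "azure-openai",
    ("public", "any"): "openai-api",   # prototyping lane, no commit math
}

def select_runtime(tier: str, anchor: str) -> str:
    """Pick a runtime for a workload; fall back to the Azure-first default."""
    return RUNTIME_POLICY.get(
        (tier, anchor),
        RUNTIME_POLICY.get((tier, "any"), "azure-openai"),
    )

print(select_runtime("non-zdr", "aws"))  # bedrock
print(select_runtime("public", "gcp"))   # openai-api
```

Keeping the table in version control also gives the quarterly recompete review a concrete artifact to diff: the Q4 reassessment becomes a pull request against `RUNTIME_POLICY`.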
Decision 4 — Stay Azure-first for new launches
The Azure-first guarantee isn’t gone. New OpenAI features land on Azure first, and Bedrock follows. For shops where time-to-launch on the latest model matters more than commit-discount math:
- Keep Azure OpenAI as the prototyping / new-feature lane — first access to GPT-5.5, GPT-6, whatever ships next.
- Rotate stable workloads to Bedrock as features hit GA there.
- Use the lag as a free QA period — Bedrock typically gets the model 4-12 weeks after Azure, which means Bedrock users skip the early-stability surprises.
This is the most conservative posture, and it probably fits 60-70% of mid-market IT shops — the ones without AWS data-gravity advantages to exploit.
The 3 “skip the change if” gates
Multi-cloud isn’t free. Three honest gates before you move:
- Already heavily committed to Azure OpenAI. If you signed a 3-year Azure enterprise agreement in 2024-2025 with OpenAI as a major spend driver, the migration cost (commit breakage + re-architecting) almost certainly exceeds Q3 savings. Wait out the contract.
- Data-residency requires an Azure-specific tenant. Some EU regulated workloads (financial, healthcare, public sector) have residency or attestation requirements that only Azure currently meets. Bedrock’s EU coverage is improving, but parity isn’t guaranteed yet.
- Vendor relationship favors a single-provider stack. Some shops have explicit “single primary AI provider” policies driven by procurement risk, support contracts, or executive preference. The exclusivity end doesn’t change those policies on its own — that’s a separate conversation with your CIO.
If any of these apply, the right move is “watch and update planning docs, don’t migrate.” That’s also a real Q3 decision.
What this means for you, by org type
If you’re a CTO at a Series B / C startup running Azure OpenAI: The exclusivity end is your leverage to renegotiate. Your Azure rep knows the alternative is real now. Even if you don’t actually move, the threat is enough to extract better commit pricing.
If you’re a VP Engineering at a Microsoft-365-anchored mid-market shop: Stay Azure-first. The integrations are too valuable. But add Bedrock as a sandboxed second runtime for code-heavy workloads where Codex billing-into-AWS-commit is a genuine win.
If you’re a procurement lead at an AWS-anchored shop with Azure as the secondary cloud: This is the inverse — make Bedrock the primary OpenAI lane and keep Azure for the integrations. Q3 should see the policy update + first workload migration.
If you’re a CIO at a regulated org (financial / healthcare / EU public sector): Slow down. Azure’s compliance attestations take time to mirror to Bedrock. Plan for 2027 multi-cloud, not Q3 multi-cloud. Don’t let the news cycle force a procurement timeline that doesn’t fit your audit posture.
If you’re a platform engineer at a 50-200 person shop: Don’t treat this as your problem. Mid-market policy lags executive decisions. Wait for the procurement memo. Focus on internal abstraction (a thin LLM client) so when the routing changes, your code doesn’t.
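What that thin abstraction looks like in practice: application code calls one method, and swapping Azure for Bedrock becomes a config change rather than a refactor. A minimal sketch with stub adapters standing in for the real SDK calls; the provider names and interface are illustrative:

```python
# Sketch of a "thin LLM client": app code depends on LLMClient.complete(),
# not on any provider SDK. Real adapters would wrap the Azure OpenAI SDK
# and boto3 bedrock-runtime; the lambdas below are stand-in stubs.
from dataclasses import dataclass
from typing import Callable, Dict, Optional

@dataclass
class LLMResponse:
    text: str
    provider: str

Provider = Callable[[str], str]  # one adapter per runtime

class LLMClient:
    def __init__(self, providers: Dict[str, Provider], default: str):
        self._providers = providers
        self._default = default

    def complete(self, prompt: str, provider: Optional[str] = None) -> LLMResponse:
        """Route the prompt to the named provider, or the configured default."""
        name = provider or self._default
        return LLMResponse(text=self._providers[name](prompt), provider=name)

# Wiring -- when the procurement memo lands, only this block changes:
client = LLMClient(
    providers={"azure": lambda p: f"[azure] {p}",
               "bedrock": lambda p: f"[bedrock] {p}"},
    default="azure",
)
print(client.complete("hello").provider)             # azure
print(client.complete("hello", "bedrock").provider)  # bedrock
```

The payoff is exactly the one named above: when procurement changes the routing, the `default` string changes and the application code doesn't.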
Honest limits
- Bedrock OpenAI is limited preview, not GA. Region rollout is selective; access is gated. Plan for waitlists.
- There’s no public migration case study yet. Cost deltas and switching-cost themes are inferred from list pricing, AWS commit math, and practitioner posts. Run your own TCO before you commit.
- The AGI trigger removal is real but slow-burn. It changes the long-tail risk profile of Microsoft and OpenAI’s commercial relationship. It doesn’t change Q3 SKU pricing.
- Capped revenue share through 2030 doesn’t mean zero revenue share. OpenAI still pays Microsoft. The cap is the news; the ongoing payments aren’t.
- GCP timing is rumor, not roadmap. Don’t budget against it. Update planning docs.
The bottom line
The exclusivity end is real procurement leverage for mid-market IT — most usefully for AWS-anchored shops who can now consolidate OpenAI spend into their AWS commits via Bedrock-routed GPT-5.5 and Codex. For Azure-heavy shops, the move is policy update + Bedrock as a sandboxed secondary runtime, not panic migration. For GCP shops, the move is “watch through Q4 and revisit when the deal lands.”
Q3 is the right moment to formalize multi-cloud AI as policy, run the comparative TCO on net-new workloads, and update your 2027 sourcing roadmap. Q4 is the right moment to reassess as Bedrock previews graduate to GA.
If you want the longer-form playbook for rolling out multi-cloud AI procurement — control-plane patterns, audit design, the workload-classification framework that doesn’t break compliance — our Enterprise AI Rollout Playbook course walks through it. Free to start, Pro for the full path.
Sources
- The next phase of the Microsoft–OpenAI partnership — OpenAI
- The next phase of the Microsoft-OpenAI partnership — Microsoft Blog
- OpenAI Models on Amazon Bedrock — AWS announcement
- Codex on Amazon Bedrock — limited preview
- Computerworld — Info-Tech Research Group analyst commentary
- OpenAI breaks free of Microsoft’s cloud — Axios
- OpenAI ends Microsoft legal peril over $50B Amazon deal — TechCrunch
- Is Microsoft’s Amended Deal With OpenAI Good for the Stock — Motley Fool
- Business Channel — analysis of Q3 procurement implications