Safe Database Migrations
Build safe database migration workflows with AI — migration script generation, locking analysis, rollback plans, expand-contract patterns, and zero-downtime deployment strategies.
🔄 Quick Recall: In the previous lesson, you optimized slow queries with AI. Now you’ll build the migration workflows that let you change your schema safely — because a migration that locks a table for 15 minutes during business hours is worse than the performance problem you were trying to fix.
Database migrations are the highest-risk routine operation in software development. A bad migration can lock tables, corrupt data, or take down production. AI helps by analyzing each migration for risks, generating safe deployment scripts, and recommending the right pattern (expand-contract, online DDL, batched updates) for your specific database and table size.
Migration Script Generation
AI prompt for migration creation:
Generate a safe database migration script. Change needed: [DESCRIBE — e.g., add a column, create a table, modify a constraint, rename a field]. Database: [PostgreSQL/MySQL VERSION/SQL Server]. Table: [NAME] with approximately [ROW COUNT] rows. Production: [YES/NO — is this a live production database?]. Generate: (1) the migration script with safety checks (IF NOT EXISTS, IF EXISTS), (2) a risk assessment — will this lock the table? How long? Read-blocking or write-blocking? (3) a rollback script that undoes the change, (4) a pre-migration verification query (check that the migration is safe to run), (5) a post-migration verification query (confirm the migration succeeded). For production databases, recommend the deployment strategy: can it run during traffic, or does it need a maintenance window?
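A hedged sketch of what a migration generated from this prompt might look like. The `users` table and `last_login` column are hypothetical, and SQLite stands in for the production engine; the structure (pre-migration check, forward migration, rollback script, post-migration verification) is the point, not the specific SQL.

```python
import sqlite3

def column_exists(conn, table, column):
    # Pre-migration safety check: inspect the live schema before altering it
    cols = [row[1] for row in conn.execute(f"PRAGMA table_info({table})")]
    return column in cols

def migrate_add_column(conn):
    # Forward migration: nullable column with no default, so no table rewrite.
    # The existence check makes the migration idempotent (safe to re-run).
    if not column_exists(conn, "users", "last_login"):
        conn.execute("ALTER TABLE users ADD COLUMN last_login TEXT")
    conn.commit()

def rollback(conn):
    # Rollback script: undoes the change (DROP COLUMN needs SQLite 3.35+)
    if column_exists(conn, "users", "last_login"):
        conn.execute("ALTER TABLE users DROP COLUMN last_login")
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
migrate_add_column(conn)
# Post-migration verification: confirm the new column is actually present
assert column_exists(conn, "users", "last_login")
```

On a real production engine the same shape applies, but the safety check and lock behavior come from that engine's own DDL semantics (e.g. `IF NOT EXISTS` clauses, `ALGORITHM=INSTANT`), which is exactly what the prompt asks the AI to assess.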
Migration risk assessment matrix:
| Operation | PostgreSQL | MySQL 8.0+ | MySQL 5.7 | Lock Duration |
|---|---|---|---|---|
| ADD COLUMN (nullable, no default) | Instant | Instant | Instant | None |
| ADD COLUMN (with default) | Instant (PG 11+) | Instant | Full rewrite | None / Long |
| DROP COLUMN | Quick metadata | Online | Full rewrite | Short / Long |
| ADD INDEX | CONCURRENTLY option | Online | Locks writes | None / Long |
| RENAME COLUMN | Instant | Instant | Instant | None (but breaks code) |
| ADD NOT NULL | Quick (if no NULLs) | Online | Full rewrite | Short / Long |
| MODIFY COLUMN TYPE | Full rewrite | Full rewrite | Full rewrite | Long |
Expand-Contract Pattern
AI prompt for expand-contract migration:
Design an expand-contract migration for this schema change: [DESCRIBE THE CHANGE — e.g., rename column, split table, change data type, merge columns]. Table: [NAME] with [ROW COUNT] rows. Generate a 3-phase migration plan: (1) EXPAND phase — add new structure alongside old (new column, new table, new format), write migration to populate new from old, set up dual-write in application code, (2) MIGRATE phase — update application to read from new structure, verify all reads use new structure, (3) CONTRACT phase — remove old structure, clean up dual-write code. For each phase: the SQL migration script, the application code changes needed, the rollback plan, and the verification that the phase completed successfully.
✅ Quick Check: You need to change a ‘price’ column from FLOAT to INTEGER (storing cents). Current data: 19.99, 29.50, etc. Target: 1999, 2950, etc. Can you do this in one migration? (Answer: No — changing the column type requires data transformation, and a direct ALTER TABLE MODIFY COLUMN would either fail or produce wrong values. Expand-contract: (1) Add ‘price_cents’ INTEGER column, (2) populate: UPDATE products SET price_cents = ROUND(price * 100), (3) verify all values are correct, (4) update code to use price_cents, (5) drop ‘price’ column. Each step is independently reversible.)
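The expand phase of that quick-check answer can be sketched as runnable code. SQLite stands in for the production database, and the `products` table mirrors the example above; the verification query before any destructive step is the part worth copying.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, price REAL)")
conn.executemany("INSERT INTO products (price) VALUES (?)",
                 [(19.99,), (29.50,), (5.00,)])

# EXPAND: add the new column alongside the old one (nullable, so no rewrite)
conn.execute("ALTER TABLE products ADD COLUMN price_cents INTEGER")

# Backfill: populate new from old with an explicit rounding rule
conn.execute(
    "UPDATE products SET price_cents = CAST(ROUND(price * 100) AS INTEGER)")
conn.commit()

# Verify before the CONTRACT phase: every row must match within rounding error
bad = conn.execute(
    "SELECT COUNT(*) FROM products WHERE ABS(price_cents - price * 100) > 0.5"
).fetchone()[0]
assert bad == 0  # only when this holds do you switch code over and drop 'price'

print(conn.execute("SELECT price_cents FROM products ORDER BY id").fetchall())
# → [(1999,), (2950,), (500,)]
```

Each phase here is independently reversible: until the old `price` column is dropped, rolling back is just pointing the application code back at it.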
Large Table Operations
AI prompt for large table migrations:
I need to perform [OPERATION] on a table with [ROW COUNT] rows in [DATABASE]. The operation involves: [DESCRIBE — e.g., backfilling a new column, updating data formats, adding an index]. Generate a safe migration plan that: (1) avoids table locks on production, (2) processes data in batches (recommend batch size based on table size), (3) includes progress tracking (how to know where you are in the migration), (4) handles errors gracefully (if a batch fails, the remaining data is still processable), (5) can be paused and resumed (important for multi-hour migrations). Provide the batch processing script and estimated total runtime.
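A minimal sketch of the batched approach that prompt asks for, again with SQLite standing in for production and a hypothetical `price_cents` backfill as the workload. The per-batch commit and the returned checkpoint id are what make it pausable and resumable.

```python
import sqlite3

BATCH_SIZE = 1000  # tune to table size; smaller batches hold locks for less time

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE products (id INTEGER PRIMARY KEY, price REAL, price_cents INTEGER)")
conn.executemany("INSERT INTO products (price) VALUES (?)",
                 [(i + 0.25,) for i in range(2500)])
conn.commit()

def backfill_in_batches(conn, last_id=0):
    """Backfill price_cents in id order. Returns the last id processed, so a
    paused or failed run can resume from that checkpoint instead of restarting."""
    while True:
        rows = conn.execute(
            "SELECT id FROM products WHERE id > ? AND price_cents IS NULL "
            "ORDER BY id LIMIT ?", (last_id, BATCH_SIZE)).fetchall()
        if not rows:
            break
        batch_ids = [r[0] for r in rows]
        placeholders = ",".join("?" * len(batch_ids))
        conn.execute(
            f"UPDATE products SET price_cents = CAST(ROUND(price * 100) AS INTEGER) "
            f"WHERE id IN ({placeholders})", batch_ids)
        conn.commit()  # commit per batch: short transactions, resumable on failure
        last_id = batch_ids[-1]
        print(f"progress: processed through id {last_id}")  # progress tracking
    return last_id

checkpoint = backfill_in_batches(conn)
```

If a batch fails, everything before the checkpoint is already committed and the remaining rows are still processable; re-running with the saved `last_id` picks up where the job stopped.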
Batch sizing guide:
| Table Size | Batch Size | Estimated Time | Strategy |
|---|---|---|---|
| < 100K rows | All at once | < 1 second | Direct migration |
| 100K - 1M rows | 10,000 | 1-5 minutes | Batched, no special tooling |
| 1M - 10M rows | 5,000-10,000 | 10-60 minutes | Batched with progress tracking |
| 10M - 100M rows | 1,000-5,000 | 1-8 hours | Batched, pausable, off-peak hours |
| 100M+ rows | 1,000 | 8+ hours | Dedicated migration job, monitoring |
Rollback Planning
AI prompt for rollback strategy:
Create a rollback plan for this migration: [DESCRIBE THE MIGRATION]. For each step of the migration: (1) the rollback SQL that reverses it, (2) whether data loss is possible during rollback (e.g., a dropped column can’t be restored), (3) the time window — how long after the migration is rollback still possible? (4) dependencies — does the application code need to be rolled back too? (5) verification — how to confirm the rollback was successful. Flag any migration steps that are irreversible and require special handling (data backups before execution).
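One way to act on that prompt's output is to make the rollback plan machine-checkable: pair every forward step with its reverse SQL and flag irreversible steps explicitly, so the plan fails loudly before execution rather than after. The plan below is a hypothetical illustration, not a real migration.

```python
# Hypothetical plan: each step carries its own rollback and an irreversibility flag
MIGRATION_PLAN = [
    {
        "forward": "ALTER TABLE users ADD COLUMN last_login TEXT",
        "rollback": "ALTER TABLE users DROP COLUMN last_login",
        "irreversible": False,
    },
    {
        "forward": "ALTER TABLE users DROP COLUMN legacy_flag",
        "rollback": None,      # dropped data can't be restored from SQL alone
        "irreversible": True,  # requires a data backup before execution
    },
]

def check_plan(plan, backups_taken=False):
    """Refuse a plan with irreversible steps unless backups exist, and refuse
    any reversible step that is missing its rollback statement."""
    problems = []
    for i, step in enumerate(plan, 1):
        if step["irreversible"] and not backups_taken:
            problems.append(f"step {i} is irreversible and no backup was taken")
        if not step["irreversible"] and step["rollback"] is None:
            problems.append(f"step {i} is missing a rollback statement")
    return problems

print(check_plan(MIGRATION_PLAN))
# → ['step 2 is irreversible and no backup was taken']
```

Running the check with `backups_taken=True` clears the flag, which mirrors the rule in the prompt: irreversible operations are allowed only once the backup exists.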
Key Takeaways
- Migration risk varies dramatically by database engine and version — the same ALTER TABLE that’s instant in PostgreSQL or MySQL 8.0 can lock a table for minutes in MySQL 5.7. AI knows these differences and recommends the right tool (online DDL, pt-online-schema-change, batched updates)
- The expand-contract pattern (add new → migrate code → remove old) makes zero-downtime deployment possible even for breaking schema changes — it’s three simple deployments instead of one risky one
- Adding NOT NULL to a column with existing NULL values fails immediately — AI prevents this by generating data-fix migrations before constraint additions, with verification steps between them
- Large table operations (millions of rows) must be batched to avoid lock contention and resource exhaustion — AI generates batch scripts with progress tracking, pause/resume capability, and error handling
- Every migration needs a rollback plan before execution — AI generates the reverse SQL for each step and identifies irreversible operations that require data backups first
Up Next
In the next lesson, you’ll build AI-powered performance monitoring — baseline establishment, anomaly detection, and the proactive alerts that catch problems before users notice.