Database migrations seem simple until they're not. A table with millions of rows, a deploy window measured in seconds, and a schema change that locks everything: suddenly, those straightforward add_column calls become high-stakes operations.
This guide covers migration patterns that work at scale with Rails 8 and MySQL, focusing on safe deployments, reversible changes, and maintainable schema evolution.
Safe Column Additions with Defaults
Adding a column with a default value in MySQL 8.0+ is fast because it uses instant DDL for most operations. However, Rails migrations can still cause issues if not structured correctly.
The problem: adding a column, setting a default, and backfilling data in one migration creates a long-running transaction that blocks other operations.
# db/migrate/20260104120000_add_status_to_orders.rb
class AddStatusToOrders < ActiveRecord::Migration[8.0]
  # AVOID: This approach in one migration can lock tables
  # def change
  #   add_column :orders, :status, :string, default: 'pending'
  #   Order.update_all(status: 'pending')
  # end

  # BETTER: Add column with default, let MySQL handle new rows
  def change
    add_column :orders, :status, :string, default: 'pending', null: false
  end
end

MySQL 8.0 handles ADD COLUMN ... DEFAULT as an instant operation for most data types. The default applies to new rows immediately, and existing rows get the default value when read (a metadata-only change). No table rebuild is required.
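When an instant operation matters, you can ask MySQL to refuse anything slower instead of silently falling back to a rebuild. A sketch using raw SQL with an explicit ALGORITHM clause (the table and column here are hypothetical; the migration fails fast if INSTANT is not possible for the change):

```ruby
# db/migrate/20260104120002_add_priority_to_orders.rb (hypothetical)
class AddPriorityToOrders < ActiveRecord::Migration[8.0]
  def up
    # ALGORITHM=INSTANT makes MySQL raise an error instead of falling
    # back to an INPLACE or COPY build when the change cannot be instant
    execute <<~SQL
      ALTER TABLE orders
        ADD COLUMN priority VARCHAR(20) NOT NULL DEFAULT 'normal',
        ALGORITHM=INSTANT
    SQL
  end

  def down
    remove_column :orders, :priority
  end
end
```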
For backfilling existing data that needs transformation beyond a simple default, separate the backfill into a data migration or background job:
# db/migrate/20260104120001_backfill_order_statuses.rb
class BackfillOrderStatuses < ActiveRecord::Migration[8.0]
  disable_ddl_transaction!

  def up
    Order.where(legacy_state: 'complete').in_batches(of: 1000) do |batch|
      batch.update_all(status: 'fulfilled')
      sleep(0.1) # Throttle to reduce replication lag on replicas
    end
  end

  def down
    # Backfills are typically not reversible
  end
end

The disable_ddl_transaction! directive prevents wrapping the migration in a single transaction, so each batch commits on its own. (MySQL never wraps migrations in a transaction anyway, since its DDL commits implicitly, but the directive documents the intent and keeps the migration correct on adapters that do.)
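The background-job alternative mentioned above can be sketched with ActiveJob. BackfillOrderStatusesJob, the queue name, and the batch size are all hypothetical; the job re-enqueues itself with an id cursor so no single job runs for long:

```ruby
# app/jobs/backfill_order_statuses_job.rb (hypothetical)
class BackfillOrderStatusesJob < ApplicationJob
  queue_as :low_priority

  BATCH_SIZE = 1000

  def perform(start_id = 0)
    ids = Order.where(legacy_state: 'complete')
               .where('id > ?', start_id)
               .order(:id)
               .limit(BATCH_SIZE)
               .pluck(:id)
    return if ids.empty?

    Order.where(id: ids).update_all(status: 'fulfilled')
    self.class.perform_later(ids.max) # Cursor to the next batch
  end
end
```

Running the backfill outside the deploy window also means a failed batch retries like any other job instead of aborting a migration mid-way.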
Zero-Downtime Index Creation
Creating an index on a large table can block writes in MySQL if the server falls back to a table-copying build. InnoDB builds secondary indexes online by default, but requesting the algorithm explicitly makes the migration fail fast instead of silently degrading. Rails exposes MySQL's online DDL through the algorithm option:
# db/migrate/20260104130000_add_index_to_orders_customer_id.rb
class AddIndexToOrdersCustomerId < ActiveRecord::Migration[8.0]
  disable_ddl_transaction!

  def change
    add_index :orders, :customer_id,
              algorithm: :inplace,
              if_not_exists: true
  end
end

The algorithm: :inplace option tells MySQL to build the index online rather than by copying the table. For InnoDB secondary indexes, an INPLACE build permits concurrent reads and writes (LOCK=NONE is the default), so DML is not blocked. This works for most index types in MySQL 8.0 with InnoDB.
Key considerations for production index creation:
- if_not_exists: true prevents failures if the migration runs twice
- Monitor replication lag during index builds on large tables
- Composite indexes should list columns in selectivity order (most selective first)
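The selectivity guideline can be checked against sampled data before writing the migration. A minimal pure-Ruby sketch (the selectivity helper and sample rows are illustrative, not part of any library):

```ruby
# Estimate per-column selectivity (distinct values / rows) from a sample
# to decide composite index column order: most selective first.
def selectivity(rows, column)
  return 0.0 if rows.empty?
  rows.map { |row| row[column] }.uniq.size.to_f / rows.size
end

sample = [
  { customer_id: 1, status: 'pending'   },
  { customer_id: 2, status: 'pending'   },
  { customer_id: 3, status: 'fulfilled' },
]

# Sort columns by descending selectivity
index_order = %i[customer_id status].sort_by { |col| -selectivity(sample, col) }
```

Here customer_id is fully distinct (3/3) while status repeats (2/3), so customer_id leads. In practice the order must also match your queries' leftmost-prefix usage, so treat selectivity as one input rather than the whole answer.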
Renaming Columns Without Downtime
Column renames seem simple but break running application instances during deployment. The standard rename_column makes the old name immediately invalid, causing errors for servers still running old code.
The expand-contract pattern solves this with three phases:
# Phase 1: Add new column, sync writes
# db/migrate/20260104140000_add_email_address_to_users.rb
class AddEmailAddressToUsers < ActiveRecord::Migration[8.0]
  disable_ddl_transaction!

  def change
    add_column :users, :email_address, :string
    add_index :users, :email_address, unique: true, algorithm: :inplace
  end
end

Update the model to write to both columns during the transition:
# app/models/user.rb
class User < ApplicationRecord
  # During transition: write to both, read from new
  before_save :sync_email_columns

  def email_address
    super || email # Fall back to old column
  end

  private

  def sync_email_columns
    self.email_address = email if email_changed? && !email_address_changed?
    self.email = email_address if email_address_changed? && !email_changed?
  end
end

After deploying the dual-write code and backfilling existing data:
# Phase 2: Backfill existing records
# db/migrate/20260104140001_backfill_email_address.rb
class BackfillEmailAddress < ActiveRecord::Migration[8.0]
  disable_ddl_transaction!

  def up
    User.where(email_address: nil).in_batches(of: 5000) do |batch|
      batch.update_all('email_address = email')
    end
  end

  def down; end
end
# Phase 3: Remove old column (after all servers use new column)
# db/migrate/20260104150000_remove_email_from_users.rb
class RemoveEmailFromUsers < ActiveRecord::Migration[8.0]
  def change
    safety_assured { remove_column :users, :email, :string }
  end
end

This three-phase approach allows zero-downtime deploys with rolling restarts. Old code reads the old column while new code reads the new one, with writes synchronized during the transition.
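Before Phase 3 runs, it is worth confirming the dual-write period left no divergent rows. A console-level sketch, relying on MySQL's NULL-safe <=> comparison operator:

```ruby
# Count rows where old and new columns disagree (NULL-safe comparison);
# any nonzero count means the backfill or dual-write missed something
mismatched = User.where('NOT (email <=> email_address)').count
raise "#{mismatched} users diverge; keep users.email for now" if mismatched.positive?
```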
Foreign Key Constraints That Don't Lock
Adding a foreign key to an existing table can lock both tables while MySQL validates the constraint against every existing row. Rails' validate: false option and validate_foreign_key helper cover this case on PostgreSQL (NOT VALID constraints), but MySQL has no equivalent. The MySQL approach is to disable foreign_key_checks for the session, which lets InnoDB add the constraint in place without scanning existing rows:

# db/migrate/20260104160000_add_foreign_key_to_orders.rb
class AddForeignKeyToOrders < ActiveRecord::Migration[8.0]
  def up
    # With foreign_key_checks off, MySQL uses an in-place ALTER and
    # skips validating existing rows
    execute 'SET foreign_key_checks = 0'
    add_foreign_key :orders, :customers
    execute 'SET foreign_key_checks = 1'
  end

  def down
    remove_foreign_key :orders, :customers
  end
end

Unlike PostgreSQL, MySQL never validates the skipped rows later, so check for orphaned rows before running the migration and clean them up; otherwise they linger as rows that violate the constraint. Schedule the migration for a low-traffic period all the same, since metadata locks still apply briefly.
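Because the constraint is added without scanning existing data, it helps to confirm there are no orphaned rows first. A sketch assuming Order declares belongs_to :customer (where.missing requires Rails 6.1+):

```ruby
# Orders whose customer_id points at a missing customer would violate
# the new constraint; find them before adding it
orphans = Order.where.missing(:customer)
puts "#{orphans.count} orphaned orders need cleanup" if orphans.exists?
```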
Strong Migrations and Safety Checks
The strong_migrations gem catches dangerous migration patterns before they reach production:
# Gemfile
gem 'strong_migrations'

# config/initializers/strong_migrations.rb
StrongMigrations.target_version = "8.0"

# Customize checks for your deployment process
StrongMigrations.auto_analyze = true
StrongMigrations.lock_timeout = 10.seconds
StrongMigrations.statement_timeout = 1.hour

With this configuration, migrations that would cause downtime raise errors during development, forcing explicit acknowledgment of risky operations with safety_assured blocks. On MySQL, the lock timeout maps to lock_wait_timeout and the statement timeout to max_execution_time, which applies only to reads.
Migration Testing Strategy
Test migrations against production-like data volumes before deploying:
# spec/migrations/add_status_to_orders_spec.rb
require 'rails_helper'

RSpec.describe AddStatusToOrders do
  it 'adds status column with correct default' do
    # The test schema already includes this migration
    order = Order.create!(customer: customers(:alice), total: 100)
    expect(order.status).to eq('pending')
  end

  it 'handles existing records' do
    # Verify backfill behavior
    legacy_order = orders(:legacy_complete)
    expect(legacy_order.reload.status).to eq('fulfilled')
  end
end

For critical migrations, consider running against a restored production backup in a staging environment to catch issues with real data volumes and distributions.
Summary
Migrations that work safely at scale follow consistent patterns: separate schema changes from data changes, use MySQL's instant DDL capabilities, and plan for zero-downtime deploys with expand-contract renames. The strong_migrations gem enforces these patterns automatically, catching risky operations before they cause production incidents.
Next steps: combine these migration patterns with the indexing strategies from the MySQL Indexing guide to build a complete database evolution workflow that scales with application growth.