Regeneration-safe schema evolution for predictable migrations
Regeneration-safe schema evolution keeps production data valid when backend code is regenerated. Learn a practical way to plan schema changes and migrations.

Why schema changes feel risky with regenerated backends
When your backend is regenerated from a visual model, a database change can feel like pulling a thread in a sweater. You update a field in the Data Designer, click regenerate, and suddenly you are not only changing a table. You are also changing the generated API, validation rules, and the queries your app uses to read and write data.
What usually goes wrong is not that the new code fails to build. Many no-code platforms (including AppMaster, which generates real Go code for backends) will happily generate a clean project every time. The real risk is that production data already exists, and it does not automatically reshape itself to match your new ideas.
The two failures people notice first are simple:
- Broken reads: the app can no longer load records because a column moved, a type changed, or a query expects something that is not there.
- Broken writes: new or updated records fail because constraints, required fields, or formats changed, and existing clients still send the old shape.
Both failures are painful because they can hide until real users hit them. A staging database might be empty or freshly seeded, so everything looks fine. Production has edge cases: nulls where you assumed values, old enum strings, or rows created before a new rule existed.
This is why regeneration-safe schema evolution matters. The goal is to make each change safe even when the backend code is fully regenerated, so old records remain valid and new records can still be created.
“Predictable migrations” just means you can answer four questions before you deploy: what will change in the database, what will happen to existing rows, what version of the app can still work during rollout, and how you will roll back if something unexpected appears.
A simple model: schema, migrations, and regenerated code
When your platform can regenerate the backend, it helps to separate three things in your head: the database schema, the migration steps that change it, and the live data already sitting in production. Confusing these is why changes feel unpredictable.
Think of regeneration as “rebuilding the application code from the latest model.” In a tool like AppMaster, that rebuild can happen many times during normal work: you tweak a field, adjust business logic, add an endpoint, regenerate, test, repeat. Regeneration is frequent. Your production database should not be.
Here’s the simple model.
- Schema: the structure of your database tables, columns, indexes, and constraints. It is what the database expects.
- Migrations: the ordered, repeatable steps that move the schema from one version to the next (and sometimes move data too). This is what you run on each environment.
- Runtime data: the real records created by users and processes. It must remain valid before, during, and after change.
Regenerated code should be treated as “the current app that talks to the current schema.” Migrations are the bridge that keeps the schema and runtime data aligned as the code changes.
Why regeneration changes the game
If you regenerate often, you will naturally make lots of small schema edits. That is normal. The risk appears when those edits imply a database change that is not backward compatible, or when your migration is not deterministic.
A practical way to manage this is to plan regeneration-safe schema evolution as a series of small, reversible steps. Instead of one big switch, you do controlled moves that keep old and new code paths working together for a short time.
For example, if you want to rename a column used by a live API, don’t rename it immediately. First add the new column, write to both, backfill existing rows, then switch reads to the new column. Only after that do you remove the old column. Each step is easy to test, and if something goes wrong, you can pause without corrupting data.
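As a sketch of those steps in PostgreSQL (the customers table and column names are placeholders), the rename becomes three small migrations shipped separately instead of one:

```sql
-- Phase 1 (expand): add the new column next to the old one
ALTER TABLE customers ADD COLUMN mobile_phone text;

-- Phase 2 (migrate): backfill rows that only have the old value;
-- during this window, application logic writes to both columns
UPDATE customers
SET mobile_phone = phone
WHERE mobile_phone IS NULL;

-- Phase 3 (contract, a later release): remove the old column only
-- after every reader and writer has switched to mobile_phone
ALTER TABLE customers DROP COLUMN phone;
```

Each phase is a deploy you can pause on, which is exactly what makes the sequence regeneration-safe.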
This mental model makes migrations predictable, even when code regeneration is happening daily.
Types of schema changes and which ones break production
When your backend is regenerated from the latest schema, the code usually assumes the database matches that schema right now. That is why regeneration-safe schema evolution is less about “Can we change the database?” and more about “Can old data and old requests survive while we roll the change out?”
Some changes are naturally safe because they do not invalidate existing rows or queries. Others change the meaning of data or remove something the running app still expects, which is where production incidents happen.
Low-risk, usually safe (additive)
Additive changes are the easiest to ship because they can coexist with old data.
- New table that nothing depends on yet.
- New nullable column with no default requirement.
- New API field that is optional end-to-end.
Example: adding a nullable middle_name column to a users table is typically safe. Existing rows remain valid, regenerated code can read it when present, and older rows simply have NULL.
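In SQL terms that is a single additive statement; a PostgreSQL sketch, with the column type as an assumption:

```sql
-- existing rows are untouched and simply read back NULL for the new column
ALTER TABLE users ADD COLUMN middle_name text;
```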
Medium-risk (meaning changes)
These changes often “work” technically but break behavior. They need careful coordination because regeneration updates validations, generated models, and business logic assumptions.
Renames are the classic trap: renaming phone to mobile_phone might regenerate code that no longer reads phone, while production still has data there. Similarly, changing units (storing price in dollars vs cents) can silently corrupt calculations if you update code before data, or data before code.
Enums are another sharp edge. Tightening an enum (removing values) can make existing rows invalid. Expanding an enum (adding values) is usually safe, but only if all code paths can handle the new value.
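If the field is backed by a native PostgreSQL enum type (an assumption; some platforms store enums as plain text and validate in code), expanding it is a one-line additive migration, while tightening has no direct equivalent:

```sql
-- additive: existing rows stay valid, the new value becomes available
-- (older PostgreSQL versions refuse to run this inside a transaction block,
-- which matters if your migration tool wraps every step in one)
ALTER TYPE ticket_status ADD VALUE 'on_hold';

-- there is no ALTER TYPE ... DROP VALUE: removing a value means migrating
-- every row that still uses it first, then rebuilding or replacing the type
```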
A practical approach is to treat meaning changes as “add new, backfill, switch, then remove later.”
High-risk (destructive)
Destructive changes are the ones that most often break production immediately, especially when the platform regenerates code that stops expecting the old shape.
Dropping a column, dropping a table, or changing a column from nullable to not-null can fail writes the moment any request tries to insert a row without that value. Even if you think “all rows already have it,” the next edge case or background job can prove otherwise.
If you must do a not-null change, do it in phases: add the column as nullable, backfill, update app logic to always set it, then enforce not-null.
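A minimal PostgreSQL sketch of those phases, assuming an orders table with a new status column and a placeholder default value:

```sql
-- Phase 1: add the column as nullable so existing rows and old code stay valid
ALTER TABLE orders ADD COLUMN status text;

-- Phase 2: backfill existing rows (batch this on large tables)
UPDATE orders SET status = 'unknown' WHERE status IS NULL;

-- Phase 3: enforce the rule only once the app always sets a value;
-- this step scans the table to verify, so schedule it with care
ALTER TABLE orders ALTER COLUMN status SET NOT NULL;
```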
Performance and safety changes (can block writes)
Indexes and constraints are not “data shape” changes, but they can still cause downtime. Creating an index on a large table or adding a unique constraint can lock writes long enough to cause timeouts. In PostgreSQL, certain operations are safer when done with online-friendly methods, but the key point is timing: do heavy operations during low traffic and measure how long they take in a staging copy.
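For example, in PostgreSQL a unique rule can be added without a long write lock by building the index concurrently first; the table and column names here are illustrative:

```sql
-- cannot run inside a transaction block, but does not hold a long write lock
CREATE UNIQUE INDEX CONCURRENTLY idx_users_email ON users (email);

-- attaching the constraint is then quick because the index already exists
ALTER TABLE users
  ADD CONSTRAINT users_email_unique UNIQUE USING INDEX idx_users_email;
```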
When changes need extra care in production, plan for:
- A two-step rollout (schema first, code second, or vice versa) that stays compatible.
- Backfills that run in batches.
- A clear rollback path (what happens if the regenerated backend goes live early).
- Verification queries that prove data matches the new rules (see the example queries after this list).
- A “remove old field later” ticket so cleanup does not happen in the same deploy.
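The verification queries can be as plain as a couple of counts you expect to be zero; a sketch with placeholder table and column names:

```sql
-- rows that would violate an upcoming NOT NULL or foreign key rule
SELECT count(*) FROM orders WHERE status_id IS NULL;

-- rows where a backfilled column disagrees with the column it replaces
SELECT count(*) FROM customers WHERE mobile_phone IS DISTINCT FROM phone;
```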
If you’re using a platform like AppMaster that regenerates backend code from the Data Designer, the safest mindset is: ship changes that old data can live with today, then tighten rules only after the system has already adapted.
Principles for regeneration-safe changes
Regenerated backends are great until a schema change lands in production and old rows do not match the new shape. The goal of regeneration-safe schema evolution is simple: keep your app working while your database and regenerated code catch up to each other in small, predictable steps.
Default to “expand, migrate, contract”
Treat every meaningful change as three moves. First, expand the schema so both the old and new code can run. Then migrate data. Only after that, contract by removing old columns, defaults, or constraints.
A practical rule: never combine “new structure” and “breaking cleanup” in the same deploy.
Support old and new shapes for a while
Assume there will be a period where:
- some records have the new fields, some do not
- some app instances run old code, some run regenerated code
- background jobs, imports, or mobile clients may lag behind
Design your database so both shapes are valid during that overlap. This matters even more when a platform regenerates your backend from the latest model (for example, in AppMaster when you update the Data Designer and regenerate the Go backend).
Make reads compatible before writes
Start by making the new code able to read old data safely. Only then switch write paths to produce the new data shape.
For example, if you split a single "status" field into "status" + "status_reason", ship code that can handle missing "status_reason" first. After that, begin writing "status_reason" for new updates.
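A read query that tolerates the missing value might look like this sketch (the tickets table and the fallback label are assumptions):

```sql
-- rows created before the split simply report a neutral placeholder
SELECT id,
       status,
       COALESCE(status_reason, 'not_recorded') AS status_reason
FROM tickets;
```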
Decide what to do with partial and unknown data
When you add enums, non-null columns, or tighter constraints, decide upfront what should happen when values are missing or unexpected:
- allow nulls temporarily, then backfill
- set a safe default that does not change meaning
- keep an “unknown” value to avoid failing reads
This prevents silent corruption (wrong defaults) and hard failures (new constraints rejecting old rows).
Have a rollback story for every step
Rollback is easiest during the expand phase. If you need to revert, the old code should still run against the expanded schema. Write down what you would roll back (code only, or code plus migration), and avoid destructive changes until you are confident you will not need to undo them.
Step by step: plan a change that survives regeneration
Regenerated backends are unforgiving: if the schema and the generated code disagree, production usually finds it first. The safest approach is to treat every change as a small, reversible rollout, even if you are building with no-code tools.
Start by writing down the intent in plain language and what your data looks like today. Pick 3 to 5 real rows from production (or a recent dump) and note the messy parts: empty values, old formats, surprising defaults. This prevents you from designing a perfect new field that real data cannot satisfy.
Here is a practical sequence that works well when your platform regenerates backend code (for example, when AppMaster regenerates Go backend services from the Data Designer model):
- Expand first, do not replace. Add new columns or tables in an additive way. Make new fields nullable at first, or give them safe defaults. If you are introducing a new relationship, allow the foreign key to be empty until you have populated it.
- Deploy the expanded schema without removing anything. Ship the database change while the old code still works. The goal is: old code can keep writing the old columns, and the database accepts it.
- Backfill in a controlled job. Populate new fields using a batch process that you can monitor and rerun. Keep it idempotent (running twice should not corrupt data). Do it gradually if the table is large, and log how many rows were updated.
- Switch reads first, with fallback. Update the regenerated logic to prefer the new fields, but fall back to old fields when the new data is missing. Only after reads are stable, switch writes to the new fields.
- Clean up last. Once you have confidence (and a rollback plan), remove old fields and tighten constraints: set NOT NULL, add unique constraints, and enforce foreign keys.
Concrete example: you want to replace a free-text status column with a status_id pointing to a statuses table. Add status_id as nullable, backfill it from existing text values, update the app to read status_id but fall back to status when null, then finally drop status and make status_id required. This keeps each regeneration safe because the database remains compatible at every stage.
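A PostgreSQL sketch of that sequence, written so the backfill is rerunnable and works in batches (the 1000-row batch size and the lookup-table layout are assumptions):

```sql
-- Step 1 (expand): lookup table plus a nullable foreign key
CREATE TABLE statuses (
  id   serial PRIMARY KEY,
  code text NOT NULL UNIQUE
);
INSERT INTO statuses (code)
SELECT DISTINCT lower(trim(status)) FROM orders WHERE status IS NOT NULL;

ALTER TABLE orders ADD COLUMN status_id integer REFERENCES statuses (id);

-- Step 2 (backfill): idempotent, touches at most 1000 unmigrated rows per run;
-- rows with a NULL status need their own decision (keep NULL for now, or map
-- them to a default value you have agreed on)
UPDATE orders o
SET status_id = s.id
FROM statuses s
WHERE o.status_id IS NULL
  AND s.code = lower(trim(o.status))
  AND o.id IN (
    SELECT id FROM orders
    WHERE status_id IS NULL AND status IS NOT NULL
    LIMIT 1000
  );

-- Step 3 (contract, a later release): enforce the rule, then drop the old column
ALTER TABLE orders ALTER COLUMN status_id SET NOT NULL;
ALTER TABLE orders DROP COLUMN status;
```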
Practical patterns you can reuse
When your backend is regenerated, small schema tweaks can ripple into APIs, validation rules, and UI forms. The goal of regeneration-safe schema evolution is to make changes in a way that old data stays valid while new code rolls out.
Pattern 1: Non-breaking rename
A direct rename is risky because old records and old code often still expect the original field. A safer approach is to treat a rename as a short migration period.
- Add the new column (for example, customer_phone) and keep the old one (phone).
- Update logic to dual-write: whenever you save, write to both fields.
- Backfill existing rows so customer_phone is filled for all current records.
- Switch reads to the new column once coverage is high.
- Drop the old column in a later release.
This works well in tools like AppMaster where regeneration will rebuild data models and endpoints from the current schema. Dual-write keeps both generations of code happy while you transition.
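If you cannot update every writer at the same time (older clients, imports, background jobs), one option is to dual-write at the database level with a trigger while the transition runs. A PostgreSQL sketch, assuming the columns live on a customers table:

```sql
CREATE OR REPLACE FUNCTION sync_phone_columns() RETURNS trigger AS $$
BEGIN
  -- whichever column the caller filled in, copy it to the other one
  IF NEW.customer_phone IS NULL THEN
    NEW.customer_phone := NEW.phone;
  ELSIF NEW.phone IS NULL THEN
    NEW.phone := NEW.customer_phone;
  END IF;
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER customers_sync_phone
BEFORE INSERT OR UPDATE ON customers
FOR EACH ROW EXECUTE FUNCTION sync_phone_columns();
```

Drop the trigger in the same cleanup release that removes the old column.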
Pattern 2: Split one field into two
Splitting full_name into first_name and last_name is similar, but the backfill is trickier. Keep full_name until you are confident the split is complete.
A practical rule: do not remove the original field until every record is either backfilled or has a clear fallback. For example, if parsing fails, store the whole string in last_name and flag the record for review.
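A backfill sketch for the split, assuming a users table and a hypothetical needs_review flag for the records you want to check by hand:

```sql
UPDATE users
SET first_name   = CASE WHEN position(' ' IN full_name) > 0
                        THEN split_part(full_name, ' ', 1) END,
    -- no space found: keep the whole string in last_name, per the fallback rule
    last_name    = CASE WHEN position(' ' IN full_name) > 0
                        THEN substr(full_name, position(' ' IN full_name) + 1)
                        ELSE full_name END,
    needs_review = (position(' ' IN full_name) = 0)
WHERE full_name IS NOT NULL
  AND first_name IS NULL
  AND last_name IS NULL;
```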
Pattern 3: Make a field required
Turning a nullable field into a required one is a classic production breaker. The safe order is: backfill first, then enforce.
Backfill can be mechanical (set a default) or business-driven (ask users to complete missing data). Only after the data is complete should you add NOT NULL and update validations. If your regenerated backend adds stricter validation automatically, this sequencing prevents surprise failures.
Pattern 4: Change an enum safely
Enum changes break when old code sends old values. During a transition, accept both. If you are replacing "pending" with "queued", keep both values valid and map them in your logic. Once you confirm no clients send the old value, remove it.
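A sketch of what that transition window can look like when the status is stored as text (the orders table and the other status values are assumptions):

```sql
-- during the transition, the database accepts both the old and the new value;
-- on a large table, add the constraint NOT VALID first and VALIDATE it later
ALTER TABLE orders ADD CONSTRAINT orders_status_allowed
  CHECK (status IN ('queued', 'pending', 'processing', 'done'));

-- normalize on read so downstream logic only ever sees the new value
SELECT id,
       CASE WHEN status = 'pending' THEN 'queued' ELSE status END AS status
FROM orders;
```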
If the change must ship in one release, reduce risk by narrowing the blast radius:
- Add new fields but keep old ones, even if unused.
- Use a database default so inserts keep working.
- Make code tolerant: read from new, fall back to old.
- Add a temporary mapping layer (old value in, new value stored).
These patterns keep migrations predictable even when regenerated code changes behavior quickly.
Common mistakes that cause surprises
The biggest surprises happen when people treat code regeneration like a magic reset button. Regenerated backends can keep your application code clean, but your production database still contains yesterday’s data, with yesterday’s shape. Regeneration-safe schema evolution means you plan for both: the new code that will be generated and the old records that will still be sitting in tables.
One common trap is assuming the platform will “take care of migrations.” For example, in AppMaster you can regenerate a Go backend from your updated Data Designer model, but the platform cannot guess how you want to transform real customer data. If you add a new required field, you still need a clear plan for how existing rows get a value.
Another surprise is dropping or renaming fields too early. A field may look unused in the main UI, but still be read by a report, a scheduled export, a webhook handler, or an admin screen someone rarely opens. The change looks safe in testing, then fails in production because one forgotten code path still expects the old column name.
Here are five mistakes that tend to cause late-night rollbacks:
- Changing the schema and regenerating code, but never writing or verifying the data migration that makes old rows valid.
- Renaming or deleting a column before every reader and writer has been updated and deployed.
- Backfilling a large table without checking how long it will run and whether it will block other writes.
- Adding a new constraint (NOT NULL, UNIQUE, foreign key) first, then discovering legacy data that breaks it.
- Forgetting background jobs, exports, and reports that still read the old fields.
A simple scenario: you rename phone to mobile_number, add a NOT NULL constraint, and regenerate. The app screens may work, but an older CSV export still selects phone, and thousands of existing records have null mobile_number. The fix is usually a phased change: add the new column, write to both for a while, backfill safely, then tighten constraints and remove the old field only after you have proof nothing depends on it.
Quick pre-deploy checklist for safer migrations
When your backend is regenerated, the code can change quickly, but your production data will not forgive surprises. Before you ship a schema change, run a short “can this fail safely?” check. It keeps regeneration-safe schema evolution boring (which is what you want).
The 5 checks that catch most problems
- Backfill size and speed: estimate how many existing rows need to be updated (or re-written) and how long it will take in production. A backfill that is fine on a small database can take hours on real data and make the app feel slow.
- Locks and downtime risk: identify whether the change may block writes. Some operations (like altering large tables or changing types) can hold locks long enough to cause timeouts. If there’s any chance of blocking, plan a safer rollout (add new column first, backfill later, switch code last).
- Old code vs new schema compatibility: assume the old backend might run for a short time against the new schema during deploy or rollback. Ask: will the previous version still read and write without crashing? If not, you need a two-step release.
- Defaults and null behavior: for new columns, decide what happens for existing records. Will they be NULL, or do you need a default? Make sure your logic treats missing values as normal, especially for flags, status fields, and timestamps.
- Monitoring signals to watch: pick the exact alarms you will stare at after deploy: error rate (API failures), database slow queries, queue/job failures, and any key user action (checkout, login, form submit). Also watch for “silent” bugs like a spike in validation errors.
A quick example
If you add a new required field like status to an orders table, don’t push it as “NOT NULL, no default” in one go. First add it as nullable with a default for new rows, deploy regenerated code that handles missing status, then backfill old rows, and only then tighten constraints.
In AppMaster, this mindset is especially useful because the backend can be regenerated often. Treat each schema change as a small release with an easy rollback, and your migrations stay predictable.
Example: evolving a live app without breaking existing records
Imagine an internal support tool where agents tag tickets with a free text field called priority (examples: "high", "urgent", "HIGH", "p1"). You want to switch to a strict enum so reports and routing rules stop guessing.
The safe approach is a two-release change that keeps old records valid while your backend is being regenerated.
Release 1: expand, write both, and backfill
Start by expanding the schema without removing anything. Add a new enum field, for example priority_enum with values like low, medium, high, urgent. Keep the original free-text field; the rest of this example calls it priority_text.
Then update logic so new and edited tickets write to both fields. In a no-code tool like AppMaster, that typically means adjusting the Data Designer model and updating the Business Process so it maps input to the enum and also stores the original text.
Next, backfill existing tickets in small batches. Map common text values to the enum ("p1" and "urgent" -> urgent, "HIGH" -> high). Anything unknown can temporarily map to medium while you review.
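A sketch of the Release 1 migration pieces; storing the enum as text with a CHECK constraint keeps the SQL self-contained (a native enum type works the same way), and the exact mapping is the part you would tune:

```sql
-- expand: new column next to the original free-text field
ALTER TABLE tickets ADD COLUMN priority_enum text
  CHECK (priority_enum IN ('low', 'medium', 'high', 'urgent'));

-- backfill: rerunnable, only touches rows not yet migrated
UPDATE tickets
SET priority_enum = CASE lower(trim(priority_text))
      WHEN 'p1'     THEN 'urgent'
      WHEN 'urgent' THEN 'urgent'
      WHEN 'high'   THEN 'high'
      WHEN 'medium' THEN 'medium'
      WHEN 'low'    THEN 'low'
      ELSE 'medium'  -- temporary bucket for unknown values, reviewed later
    END
WHERE priority_enum IS NULL;
```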
What users see: ideally nothing changes yet. The UI can still show the same priority control, but behind the scenes you are populating the new enum. Reports can start using the enum as soon as backfill is underway.
Release 2: contract and remove the old path
After you are confident, switch reads to use priority_enum only, update any filters and dashboards, and then remove priority_text in a later migration.
Before Release 2, validate with a small sample so you catch edge cases:
- Pick 20 to 50 tickets across different teams and ages.
- Compare the displayed priority with the stored enum value.
- Check counts by enum value to spot suspicious spikes (for example, too many tickets landing in medium); see the query sketch after this list.
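A count per enum value is usually enough to spot a bad mapping before Release 2:

```sql
-- a suspicious pile-up in 'medium' usually means the unknown-value
-- mapping needs another pass before you drop priority_text
SELECT priority_enum, count(*) AS tickets
FROM tickets
GROUP BY priority_enum
ORDER BY count(*) DESC;
```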
If issues appear, rollback is simple because Release 1 kept the old field: redeploy the Release 1 logic and set the UI to read from priority_text again while you fix the mapping and rerun backfill.
Next steps: make schema evolution a repeatable habit
If you want predictable migrations, treat schema changes like a small project, not a quick edit. The goal is simple: every change should be easy to explain, easy to rehearse, and hard to accidentally break.
A visual data model helps because it makes impact visible before you deploy. When you can see tables, relations, and field types in one place, you notice things that are easy to miss in a script, like a required field that has no safe default, or a relationship that will orphan old records. Do a quick “who depends on this?” pass: APIs, screens, reports, and any background jobs.
When you need to change a field that is already in use, prefer a short transition period with duplicated fields. For example, add phone_e164 while keeping phone_raw for one or two releases. Update business logic to read from the new field when present, and fall back to the old field when it is not. Write to both fields during the transition, then remove the old one only after you have verified the data is fully backfilled.
Environment discipline is what turns good intentions into safe releases. Keep dev, staging, and production aligned, but do not treat them as identical.
- Dev: prove the regenerated backend still starts cleanly and basic flows work after regeneration.
- Staging: run the full migration plan on production-like data and verify key queries, reports, and imports.
- Production: deploy when you have a rollback plan, clear monitoring, and a small set of “must pass” checks.
Make your migration plan a real document, even if it is short. Include: what changes, the order, how to backfill, how to verify, and how to roll back. Then run it end to end in a test environment before it ever touches production.
If you are using AppMaster, lean on the Data Designer to reason about the model visually, and let regeneration keep your backend code consistent with the updated schema. The habit that keeps things predictable is keeping migrations explicit: you can iterate quickly, but every change still has a planned path for existing production data.


