Generated columns vs triggers in PostgreSQL: what to use
Generated columns vs triggers in PostgreSQL: choose the right approach for totals, statuses, and normalized values with clear speed and debugging tradeoffs.

What problem are we trying to solve with derived fields?
A derived field is a value you store or expose because it can be calculated from other data. Instead of repeating the same calculation in every query and every screen, you define the rule once and reuse it.
Common examples are easy to picture:
- order_total equals the sum of line items, minus discounts, plus tax
- a status like "paid" or "overdue" based on dates and payment records
- a normalized value like a lowercased email, a trimmed phone number, or a search-friendly version of a name
Teams use derived fields because reads get simpler and more consistent. A report can select order_total directly. Support can filter by status without copying complicated logic. One shared rule also reduces small differences between services, dashboards, and background jobs.
The risks are real, though. The biggest one is stale data: the inputs change, but the derived value doesn’t. Another is hidden logic: the rule lives in a trigger, a function, or an old migration, and nobody remembers it exists. A third is duplication: you end up with “almost the same” rules in multiple places, and they drift apart over time.
That’s why the choice between generated columns and triggers in PostgreSQL matters. You’re not only choosing how to compute a value. You’re choosing where the rule lives, what it costs on writes, and how easy it is to trace a wrong number back to its cause.
The rest of this article looks at three practical angles: maintainability (can people understand and change it), query speed (reads, writes, indexes), and debugging (how you find out why a value is wrong).
Generated columns and triggers: simple definitions
When people compare generated columns and triggers in PostgreSQL, they’re really choosing where a derived value should live: inside the table definition, or inside procedural logic that runs when data changes.
Generated columns
A generated column is a real table column whose value is computed from other columns in the same row. In PostgreSQL (supported since version 12), generated columns are stored: the database saves the computed result on disk and keeps it up to date automatically when the referenced columns change.
A generated column behaves like a normal column for querying and indexing, but you don't write to it directly. If you need a computed value that is not stored, you typically reach for a view (or a query expression) rather than a generated column.
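A minimal sketch, assuming PostgreSQL 12 or later; the table and column names are illustrative:

```sql
-- A stored generated column: computed on write, never set directly.
CREATE TABLE line_items (
    id         bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    price      numeric(10,2) NOT NULL,
    quantity   integer NOT NULL,
    line_total numeric(12,2) GENERATED ALWAYS AS (price * quantity) STORED
);

INSERT INTO line_items (price, quantity) VALUES (19.99, 3);

SELECT price, quantity, line_total FROM line_items;
-- price | quantity | line_total
-- 19.99 |        3 |      59.97
```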
Triggers
A trigger is logic that runs on events like INSERT, UPDATE, or DELETE. Triggers can run BEFORE or AFTER the change, and they can run once per row or once per statement.
Because triggers run as code, they can do more than simple math. They can update other columns, write to other tables, enforce custom rules, and react to changes across multiple rows.
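As a minimal sketch of the moving parts (illustrative names, not a recommended design), here is an AFTER INSERT trigger that writes an audit row:

```sql
-- A minimal row-level trigger: audit every new payment row.
CREATE TABLE payments (
    id     bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    amount numeric(10,2) NOT NULL
);

CREATE TABLE payments_audit (
    payment_id bigint NOT NULL,
    logged_at  timestamptz NOT NULL DEFAULT now()
);

CREATE FUNCTION log_payment_insert() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
    INSERT INTO payments_audit (payment_id) VALUES (NEW.id);
    RETURN NEW;  -- the return value is ignored for AFTER triggers
END;
$$;

CREATE TRIGGER trg_payments_audit
    AFTER INSERT ON payments
    FOR EACH ROW
    EXECUTE FUNCTION log_payment_insert();
```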
A useful way to remember the difference:
- Generated columns fit predictable, row-level calculations (totals, normalized text, simple flags) that should always match the current row.
- Triggers fit rules that involve timing, side effects, or cross-row and cross-table logic (status transitions, audit logs, inventory adjustments).
One note on constraints: built-in constraints (NOT NULL, CHECK, UNIQUE, foreign keys) are clear and declarative, but limited. For example, a CHECK constraint can’t depend on other rows via a subquery. When a rule depends on more than the current row, you usually end up with triggers or a redesign.
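A small sketch of that boundary, with illustrative names:

```sql
-- Per-row rules fit declarative constraints.
CREATE TABLE order_items (
    order_id   bigint  NOT NULL,
    quantity   integer NOT NULL CHECK (quantity > 0),
    unit_price numeric(10,2) NOT NULL CHECK (unit_price >= 0)
);

-- A rule like "an order's total must not exceed the customer's credit limit"
-- reads another table, so it can't be a CHECK constraint; it needs a trigger
-- or a redesign that puts the needed inputs on the same row.
```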
If you build with a visual tool like AppMaster, this difference maps neatly to “data model formula” style rules versus “business process” rules that run when records change.
Maintainability: which one stays readable over time?
The main maintainability difference is where the rule lives.
A generated column keeps the logic next to the data definition. When someone opens the table schema, they can see the expression that produces the value.
With triggers, the rule moves into a trigger function. You also need to know which tables and events call it. Months later, “readability” often means: can someone understand the rule without hunting around the database? Generated columns usually win because the definition is visible in one place and has fewer moving parts.
Triggers can still be clean if you keep the function small and focused. The trouble starts when a trigger function becomes a dumping ground for unrelated rules. It may work, but it becomes hard to reason about and risky to change.
Changes are another pressure point. With generated columns, updates are typically a migration that changes a single expression. That’s straightforward to review and roll back. Triggers often require coordinated changes across the function body and the trigger definition, plus extra steps for backfills and safety checks.
To keep rules discoverable over time, a few habits help:
- Name columns, triggers, and functions after the business rule they enforce.
- Add short comments that explain intent, not just the math.
- Keep trigger functions small (one rule, one table).
- Keep migrations in version control and require reviews.
- Periodically list all triggers in the schema and remove the ones you no longer need.
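For that last habit, a query against the standard information_schema.triggers view is usually enough:

```sql
-- List triggers by table, with timing and event, to spot leftovers.
SELECT event_object_table AS table_name,
       trigger_name,
       action_timing,        -- BEFORE / AFTER
       event_manipulation    -- INSERT / UPDATE / DELETE
FROM information_schema.triggers
ORDER BY event_object_table, trigger_name;
```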
The same idea applies in AppMaster: prefer rules you can see and audit quickly, and keep “hidden” write-time logic to a minimum.
Query speed: what changes for reads, writes, and indexes?
The performance question is basically: do you want to pay the cost on reads, or on writes?
A generated column is computed when the row is written, then stored. Reads are fast because the value is already there. The tradeoff is that every INSERT and every UPDATE that touches the inputs must also compute the generated value.
A trigger-based approach usually stores the derived value in a normal column and keeps it updated with a trigger. Reads are also fast, but writes can be slower and less predictable. Triggers add extra work per row, and the overhead becomes obvious during bulk updates.
Indexing is where stored derived values matter most. If you frequently filter or sort by a derived field (a normalized email, a total, a status code), an index can turn a slow scan into a quick lookup. With generated columns, you can index the generated value directly. With triggers, you can also index the maintained column, but you’re relying on the trigger to keep it correct.
If you compute the value inside the query instead (for example, in a WHERE clause), you may need an expression index so those filters can use an index rather than recomputing the expression for every row in a scan.
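Both indexing styles look like this in practice (a sketch with an illustrative users table):

```sql
-- Illustrative table with a stored, normalized key.
CREATE TABLE users (
    id               bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    email            text NOT NULL,
    email_normalized text GENERATED ALWAYS AS (lower(trim(email))) STORED
);

-- Option 1: index the generated column and filter on it directly.
CREATE INDEX idx_users_email_normalized ON users (email_normalized);

-- Option 2: no stored column; an expression index serves queries that filter
-- on the same expression, e.g. WHERE lower(trim(email)) = 'a@example.com'.
CREATE INDEX idx_users_email_lower ON users (lower(trim(email)));
```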
Bulk imports and large updates are common hotspots:
- Generated columns add a consistent compute cost to each affected row.
- Triggers add compute cost plus trigger overhead, and poorly written logic can multiply that cost.
- Big updates can make trigger work the bottleneck.
A practical way to choose is to look for real hotspots. If the table is read-heavy and the derived field is used in filters, stored values (generated or trigger-maintained) plus an index usually win. If it’s write-heavy (events, logs), be careful about adding per-row work unless it’s truly needed.
Debugging: finding the source of wrong values
When a derived field is wrong, start by making the bug repeatable. Capture the exact row state that produced the bad value, then rerun the same INSERT or UPDATE in a clean transaction so you’re not chasing side effects.
A quick way to narrow it down is to ask: did the value come from a deterministic expression, or from write-time logic?
Generated columns usually fail in consistent ways. If the expression is wrong, it’s wrong every time for the same inputs. The common surprises are NULL handling (one NULL can turn a whole calculation into NULL), implicit casts (text to numeric), and edge cases like division by zero. If results differ across environments, look for differences in collation, extensions, or schema changes that altered the expression.
Triggers fail in messier ways because they depend on timing and context. A trigger might not fire when you expect (wrong event, wrong table, missing WHEN clause). It might fire multiple times via trigger chains. Bugs can also come from session settings, search_path, or reading other tables that differ between environments.
When a derived value looks wrong, this checklist is usually enough to pinpoint the cause:
- Reproduce with a minimal INSERT/UPDATE and the smallest sample row.
- Select the raw input columns next to the derived column to confirm the inputs.
- For generated columns, run the expression in a SELECT and compare.
- For triggers, temporarily add RAISE NOTICE or RAISE LOG statements, or write to a debug table.
- Compare schema and trigger definitions between environments.
Small test datasets with known outcomes reduce surprises. For example, create two orders: one with a NULL discount and one with discount 0, then confirm totals behave as expected. Do the same for status transitions, and verify they happen only on the intended updates.
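That two-order test can be sketched directly in SQL (illustrative names); it makes the NULL-handling surprise easy to see:

```sql
-- Two orders that differ only in how "no discount" is represented.
CREATE TABLE orders_check (
    id       int PRIMARY KEY,
    subtotal numeric(10,2) NOT NULL,
    discount numeric(10,2)           -- NULL means "no discount was entered"
);

INSERT INTO orders_check VALUES (1, 100.00, NULL), (2, 100.00, 0);

SELECT id,
       subtotal - discount              AS naive_total,      -- row 1: NULL
       subtotal - COALESCE(discount, 0) AS defensive_total   -- row 1: 100.00
FROM orders_check;
```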
How to choose: a decision path
The best choice usually becomes clear once you answer a few practical questions.
Step 1-3: correctness first, then workload
Work through these in order:
- Does the value need to always match other columns, with no exceptions? If yes, enforce it in the database rather than setting it in the app and hoping it stays correct.
- Is the formula deterministic and based only on columns in the same row (for example, lower(email) or price * quantity)? If yes, a generated column is usually the cleanest option.
- Are you mostly reading this value (filtering, sorting, reporting) or mostly writing it (lots of inserts/updates)? Generated columns shift cost to writes, so write-heavy tables may feel it sooner.
If the rule depends on other rows, other tables, or time-sensitive logic (for example, “set status to overdue if no payment after 7 days”), a trigger is often a better fit because it can run richer logic.
Step 4-6: indexing, testing, and keeping it simple
Now decide how the value will be used and verified:
- Will you filter or sort by it often? If yes, plan for an index and confirm your approach supports it cleanly.
- How will you test and observe changes? Generated columns are easier to reason about because the rule lives in one expression. Triggers need targeted tests and clear logging because the value changes “on the side.”
- Choose the simplest option that meets the constraints. If a generated column works, it’s usually easier to maintain. If you need cross-row rules, multi-step status changes, or side effects, accept the trigger, but keep it small and well-named.
A good gut check: if you can explain the rule in one sentence and it only uses the current row, start with a generated column. If you’re describing a workflow, you’re probably in trigger territory.
Using generated columns for totals and normalized values
Generated columns work well when the value is fully derived from other columns in the same row and the rule is stable. This is where they feel simplest: the formula lives in the table definition, and PostgreSQL keeps it consistent.
Typical examples include normalized values (like a lowercased, trimmed key used for lookups) and simple totals (like subtotal + tax - discount). For instance, an orders table might store subtotal, tax, and discount, and expose total as a generated column so every query sees the same number without relying on application code.
When you write the expression, keep it boring and defensive:
- Handle NULLs with COALESCE so totals don't unexpectedly become NULL.
- Cast intentionally to avoid mixing integers and numerics by accident.
- Round in one place, and document the rounding rule in the expression.
- Make timezone and text rules explicit (lowercasing, trimming, replacing spaces).
- Prefer a couple of helper columns over one giant formula.
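Putting those habits together, a defensive version of the orders example might look like this; the precision and rounding rule are assumptions to adapt to your own money rules:

```sql
CREATE TABLE orders (
    id       bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    subtotal numeric(12,2) NOT NULL,
    tax      numeric(12,2) NOT NULL DEFAULT 0,
    discount numeric(12,2),          -- may be NULL when no discount applies
    total    numeric(12,2) GENERATED ALWAYS AS (
                 round(subtotal + tax - COALESCE(discount, 0), 2)
             ) STORED
);
```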
Indexing helps only when you actually filter or join on the generated value. Indexing a generated total is often wasted if you never search by total. Indexing a normalized key like email_normalized is often worth it.
Schema changes matter because generated expressions depend on other columns. Dropping a referenced column or changing its type makes PostgreSQL reject the change until you deal with the expression, which is a good failure mode. You find out during migration instead of silently writing wrong data.
If the formula starts to sprawl (many CASE branches, lots of business rules), treat that as a signal. Either split parts into separate columns, or switch approaches so the rule stays readable and testable. If you’re modeling a PostgreSQL schema in AppMaster, generated columns work best when the rule is easy to see and explain in a line.
Using triggers for statuses and cross-row rules
Triggers are often the right tool when a field depends on more than the current row. Status fields are a common case: an order becomes "paid" only after at least one successful payment exists, or a ticket becomes "resolved" only when every task is done. That kind of rule crosses rows or tables, which generated columns can’t read.
A good trigger is small and boring. Treat it like a guardrail, not a second application.
Keep triggers predictable
Hidden writes are what make triggers hard to live with. A simple convention helps other developers spot what’s happening:
- One trigger for one purpose (status updates, not totals plus audit plus notifications).
- Clear names (for example, trg_orders_set_status_on_payment).
- Consistent timing: use BEFORE for fixing incoming data, AFTER for reacting to saved rows.
- Keep the logic in a single function, short enough to read in one sitting.
A realistic flow looks like this: a row in payments is updated to succeeded. An AFTER UPDATE trigger on payments updates orders.status to paid if the order has at least one succeeded payment and no open balance.
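A sketch of that flow, with illustrative table names and a simplified balance check:

```sql
-- Minimal schema for the sketch.
CREATE TABLE orders (
    id         bigint PRIMARY KEY,
    amount_due numeric(12,2) NOT NULL,
    status     text NOT NULL DEFAULT 'unpaid'
);

CREATE TABLE payments (
    id       bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    order_id bigint NOT NULL REFERENCES orders (id),
    amount   numeric(12,2) NOT NULL,
    status   text NOT NULL DEFAULT 'pending'    -- 'pending' | 'succeeded' | 'failed'
);

CREATE FUNCTION set_order_paid() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
    UPDATE orders o
    SET status = 'paid'
    WHERE o.id = NEW.order_id
      AND o.status <> 'paid'
      AND (SELECT COALESCE(sum(p.amount), 0)
           FROM payments p
           WHERE p.order_id = o.id AND p.status = 'succeeded') >= o.amount_due;
    RETURN NEW;
END;
$$;

CREATE TRIGGER trg_orders_set_status_on_payment
    AFTER INSERT OR UPDATE OF status ON payments
    FOR EACH ROW
    WHEN (NEW.status = 'succeeded')
    EXECUTE FUNCTION set_order_paid();
```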
Edge cases to plan for
Triggers behave differently under bulk changes. Before you commit, decide how you’ll handle backfills and reruns. A one-time SQL job to recompute status for old data is often clearer than firing triggers row by row. It also helps to define a safe “reprocessing” path, such as a stored procedure that recomputes status for a single order. Keep idempotency in mind so re-running the same update doesn’t flip states incorrectly.
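One way to keep that reprocessing path is a small function (standing in for a full stored procedure) that recomputes the status for a single order from scratch, which a backfill can call per order. A sketch, assuming the same orders and payments shape as the trigger example above:

```sql
-- Recompute the status for one order from scratch; safe to re-run.
CREATE FUNCTION recompute_order_status(p_order_id bigint) RETURNS void
LANGUAGE sql AS $$
    UPDATE orders o
    SET status = CASE
        WHEN (SELECT COALESCE(sum(p.amount), 0)
              FROM payments p
              WHERE p.order_id = o.id AND p.status = 'succeeded') >= o.amount_due
        THEN 'paid'
        ELSE 'unpaid'
    END
    WHERE o.id = p_order_id;
$$;

-- One-time backfill over existing orders.
SELECT recompute_order_status(id) FROM orders;
```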
Finally, check whether a constraint or application logic is a better fit. For simple allowed values, constraints are clearer. In tools like AppMaster, many workflows are also easier to keep visible in the business logic layer, while the database trigger stays as a narrow safety net.
Common mistakes and traps to avoid
A lot of pain around derived fields is self-inflicted. The biggest trap is choosing the more complex tool by default. Start by asking: can this be expressed as a pure expression on the same row? If yes, a generated column is often the calmer choice.
Another common mistake is letting triggers slowly become a second application layer. It starts with “just set the status,” then grows into pricing rules, exceptions, and special cases. Without tests, small edits can break old behavior in ways that are hard to notice.
Pitfalls that show up again and again:
- Using a trigger for a per-row value when a generated column would be clearer and self-documenting.
- Updating a stored total in one code path (checkout) but forgetting another (admin edits, imports, backfills).
- Ignoring concurrency: two transactions update the same order lines, and your trigger overwrites or double-applies a change.
- Indexing every derived field “just in case,” especially values that change often.
- Storing something you could compute at read time, like a normalized string that is rarely searched.
A small example: you store order_total_cents and also let support adjust line items. If the support tool updates lines but doesn’t touch the total, the total goes stale. If you add a trigger later, you still need to handle historical rows and edge cases like partial refunds.
If you’re building with a visual tool like AppMaster, the same rule applies: keep business rules visible in one place. Avoid scattering “derived value updates” across multiple flows.
Quick checks before you commit
Before you pick between generated columns and triggers in PostgreSQL, do a quick stress test of the rule you want to store.
First, ask what the rule depends on. If it can be computed from columns in the same row (a normalized phone number, a lowercased email, line_total = qty * price), a generated column is usually easier to live with because the logic sits next to the table definition.
If the rule depends on other rows or other tables (an order status that changes when the last payment arrives, an account flag based on recent activity), you’re in trigger territory, or you should compute it at query time.
A quick checklist:
- Can the value be derived only from the current row, with no lookups?
- Do you need to filter or sort by it often?
- Will you ever need to recompute it for historical data after changing the rule?
- Can a developer find the definition and explain it in under 2 minutes?
- Do you have a small set of sample rows that proves the rule works?
Then think about operations. Bulk updates, imports, and backfills are where triggers surprise people. Triggers fire per row unless you design carefully, and mistakes show up as slow loads, lock contention, or half-updated derived values.
A practical test is simple: load 10,000 rows into a staging table, run your usual import, and verify what gets computed. Then update a key input column and confirm the derived value stays correct.
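One way to run that test, sketched with generate_series and illustrative names (swap in your real table and import path):

```sql
CREATE TABLE staging_orders (
    id       bigint PRIMARY KEY,
    subtotal numeric(12,2) NOT NULL,
    tax      numeric(12,2) NOT NULL DEFAULT 0,
    discount numeric(12,2),
    total    numeric(12,2) GENERATED ALWAYS AS (
                 round(subtotal + tax - COALESCE(discount, 0), 2)
             ) STORED
);

-- Load 10,000 rows and watch the write cost (psql's \timing is handy here).
INSERT INTO staging_orders (id, subtotal, tax, discount)
SELECT g, (random() * 100)::numeric(12,2), 5.00, NULL
FROM generate_series(1, 10000) AS g;

-- Change an input column and confirm the derived value follows it.
UPDATE staging_orders SET discount = 10 WHERE id = 42;
SELECT subtotal, tax, discount, total FROM staging_orders WHERE id = 42;
```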
If you’re building an app with AppMaster, the same principle holds: put simple row-based rules in the database as generated columns, and keep multi-step, cross-table status changes in one place where you can test them repeatedly.
A realistic example: orders, totals, and a status field
Picture a simple store. You have an orders table with items_subtotal, tax, total, and a payment_status. The goal is that anyone can answer one question quickly: why is this order still unpaid?
Option A: generated columns for totals, status stored plainly
For money math that depends only on values in the same row, generated columns are a clean fit. You can store items_subtotal and tax as regular columns, then define total as a generated column like items_subtotal + tax. That keeps the rule visible on the table and avoids hidden write-time logic.
For payment_status, you can keep it as a normal column that your app sets when it creates a payment. That’s less automatic, but straightforward to reason about when you read the row later.
Option B: triggers for status changes driven by payments
Now add a payments table. Status is no longer only about one row in orders. It depends on related rows like successful payments, refunds, and chargebacks. A trigger on payments can update orders.payment_status whenever a payment changes.
If you choose this route, plan a backfill: a one-time script that recalculates payment_status for existing orders, and a repeatable job you can run again if a bug slips in.
When support investigates “why is this order unpaid?”, Option A usually sends them to the app and its audit trail. Option B sends them to database logic too: did the trigger fire, did it fail, did it skip because a condition wasn’t met?
After release, watch a few signals:
- slow updates on payments (triggers add work to writes)
- unexpected updates to orders (status flipping more often than expected)
- rows where total looks right but status is wrong (logic split across places)
- deadlocks or lock waits during peak payment traffic
Next steps: pick the simplest approach and keep rules visible
Write the rule in plain language before touching SQL. “Order total equals sum of line items minus discount” is clear. “Status is paid when paid_at is set and balance is zero” is clear. If you can’t explain it in one or two sentences, it probably belongs somewhere it can be reviewed and tested, not tucked into a quick database hack.
If you’re stuck, treat it like an experiment. Build a tiny copy of the table, load a small dataset that looks like real life, and try both approaches. Compare what you actually care about: read queries, write speed, index use, and how easy it is to understand later.
A compact checklist for deciding:
- Prototype both options and inspect query plans for common reads.
- Run a write-heavy test (imports, updates) to see the cost of keeping values current.
- Add a small test script that covers backfills, NULLs, rounding, and edge cases.
- Decide who owns the logic long-term (DBA, backend, product) and document that choice.
If you’re building an internal tool or portal, visibility matters as much as correctness. With AppMaster (appmaster.io), teams often keep simple, row-based rules close to the data model and put multi-step changes into a Business Process, so the logic stays readable during reviews.
One final thing that saves hours later: document where the truth lives (table, trigger, or application logic) and how to recompute it safely if you need a backfill.
FAQ
When is a derived field worth storing at all?
Use a derived field when many queries and screens need the same value and you want one shared definition. It's most helpful for values you frequently filter, sort, or display, like normalized keys, simple totals, or a consistent flag.

When should I pick a generated column?
Pick a generated column when the value is purely a function of other columns in the same row and should always match them. It keeps the rule visible in the table schema and avoids hidden write-time code paths.

When should I pick a trigger?
Use a trigger when the rule depends on other rows or other tables, or when you need side effects like updating a related record or writing an audit entry. Triggers are also a fit for workflow-style transitions where timing and context matter.

Why can't a generated column sum up related rows?
Generated columns can only reference columns from the same row, so they can't look up payments, line items, or other related records. If your "total" needs to sum child rows, you typically compute it in a query, maintain it with triggers, or redesign the schema so the needed inputs live on the same row.

How do the two approaches compare on performance?
A generated column stores the computed value at write time, so reads are fast and indexing is straightforward, but inserts and updates pay the compute cost. Triggers also shift work to writes, and can be slower and less predictable if the logic is complex or fires in chains during bulk updates.

Should I index a derived value?
Index when you frequently filter, join, or sort by that derived value and it meaningfully narrows results, such as a normalized email or status code. If you only display the value and never search by it, an index often adds write overhead without much benefit.

Which is easier to maintain over time?
Generated columns are usually easier to maintain because the logic lives in the table definition where people naturally look. Triggers can stay maintainable too, but only if each trigger has a narrow purpose, a clear name, and a small function that's easy to review.

What usually goes wrong with each approach?
For generated columns, the most common issues are NULL handling, type casting, and rounding rules that behave differently than expected. For triggers, issues often come from the trigger not firing, firing more than once, running in an unexpected order, or depending on session settings that vary across environments.

How do I debug a wrong derived value?
Start by reproducing the exact insert or update that produced the bad value, then compare the input columns next to the derived value. For a generated column, run the same expression in a SELECT to confirm it matches; for a trigger, inspect the trigger and function definitions and add minimal logging to confirm when and how it runs.

How do I decide quickly?
If you can say the rule in one sentence and it only uses the current row, a generated column is a strong default. If you're describing a workflow or referencing related records, use a trigger or compute at read time, and keep the logic in one place you can test; in AppMaster, that often means simple row rules near the data model and cross-table workflows in a Business Process.


